Incatena

Questions or discussions about Almea or Verduria-- also the Incatena. Also good for postings in Almean languages.
User avatar
Pthagnar
Avisaru
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

Rodlox wrote:don't say "you do not understand" like a Vorlon in syndication :wink: - tell me which part I don't understand.
pretty much everything I've written in the thread so far that you've decided to pick up on, I'm afraid.

Rodlox
Avisaru
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:
Rodlox wrote:don't say "you do not understand" like a Vorlon in syndication :wink: - tell me which part I don't understand.
pretty much everything I've written in the thread so far that you've decided to pick up on, I'm afraid.
That's news to me - at first, you replied coherently, which led me to think that I was making myself understood....then you abruptly switched to one-word replies like "fuck" and "thud".
MadBrain is a genius.

Ran
Lebom
Lebom
Posts: 145
Joined: Fri Sep 13, 2002 9:37 pm
Location: Winterfell / Lannisport / Highgarden
Contact:

Re: Incatena

Post by Ran »

Pthug wrote:
zompist wrote:What desires does this AI have? It starts with those implanted in the system— if it's running a corporation, the desire to make a profit while pleasing stakeholders. Where do other desires come from? On the whole we can't reprogram ourselves very much and I don't see why these AIs would be programmed differently.
Because "making a profit" and "pleasing stakeholders" are not, really, atomic desires, are they? You are really talking about teaching it *ethics* and *values* -- a fact recognised by Asimov. Now, obviously the Laws of Robotics are useless because they're just axioms for logic puzzles, so if you want your AI to interact with human stakeholders, you are going to have to build into it a *recognisably human or sortahuman ethical system*. And ethics is *hard* and the source of many problems and also of *desires*.
I generally agree with pthug here... but in addition, and as an extension to what I've said earlier -- you may well program a superhuman CEO with "making a profit" and "pleasing stakeholders" in mind as the superhuman CEO's hard-coded desires, but once the superhuman CEO reaches and then surpasses human-level intelligence, how would you know that these hard-coded variables-to-be-maximized haven't foomed off into something superhuman as well, something that we puny chimps wouldn't understand? How do you know at this point that the CEO is still "making a profit" and "pleasing stakeholders" -- and not doing something transhuman, like trying to achieve transhuman goals (of which "making a profit" and "pleasing stakeholders" are merely embryonic forms, possibly in the same way looking-for-waterholes-and-fruit-bearing-trees-on-the-Serengeti has foomed off into landscaping-gardens-with-trees-and-ponds-for-Zen-meditation)? How do you know that the superhuman CEO isn't using transhuman means to manipulate humans and thereby transhumanly co-opting human independence and control? You don't know - and you can't know, by definition. And that's why I think that once you reach this point the superhuman CEOs would indeed foom off into divinity, regardless of how you program them initially.
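To make the worry concrete, here is a minimal sketch in Python (hypothetical names and made-up numbers throughout) of what such a "hard-coded variable-to-be-maximized" could look like -- the point being that every term in it is a human-chosen proxy, not an atomic desire:
[code]
def objective(state, w_profit=0.7, w_stakeholders=0.3):
    # The weights and both proxies are design choices made by humans:
    # "profit" is an accounting number, "happiness" is whatever the
    # surveys happen to measure.
    profit = state["revenue"] - state["costs"]
    surveys = state["stakeholder_surveys"]
    happiness = sum(surveys) / len(surveys)
    return w_profit * profit + w_stakeholders * happiness

# A capable enough optimizer maximizes the proxies (reported revenue,
# survey scores), which need not stay tethered to what we meant by them.
state = {"revenue": 120.0, "costs": 100.0, "stakeholder_surveys": [0.8, 0.6, 0.9]}
print(objective(state))   # ~14.23
[/code]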
Winter is coming

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

Pthug wrote:
zompist wrote:The central unit isn't exactly smarter than human beings; it just has access to far better insights since its sub-agents are numerous and busy.
If that does not count as being like a human being, but smarter, then what *does*? This is a question I would be particularly interested to hear your take on, because to me you are talking nonsense:
I'm trying to be more precise about what kind of "superhuman" we're talking about, since I think talk about "divinity" is just geeky wishful thinking. In the case I was describing, it'd be more accurate to say "the central decisionmaking bit isn't smarter than human consciousness."
As you point out in the next category, you have just described the human psyche in outline, except humans are much more ignorant, which means that this person is superhuman in two ways -- first, it is more "intelligent" in that it can come up with answers to a larger set of problems faster than humans can, including ethical, creative and scientific problems.
I didn't say it was faster. That's a desideratum for (say) a missile defense system, but not for (say) running a corporation. Heck, if the AI takes an hour to make a decision where the CEO took three minutes, that's fine if the decisions are better.

(It's a commonplace that individual operations on a computer are absurdly faster than neurons; also quite misleading. Fast computation is not intelligence, and it's quite easy to program things that use up all that speed. And if you really want to do arithmetic fast, you just ask the computer today, or the neurimplant in the future.)
Secondly -- and perhaps more importantly -- it is profoundly more "mindful", or "luminous" or has great "insight" or -- yes -- has a more "expanded consciousness" than a human in that it is capable of needling down into the subcomponents that even the most mindful meditator cannot penetrate, and can tell whether or not that component is full of shit and can be safely ignored this time, and to what degree it *keeps* talking shit and requires retraining.
As I described it, it need not have an "expanded consciousness". It has better or more numerous insights because it can call on so many sub-agents.
I do not know what sort of person would have no intelligence but deep insight -- a mystic, perhaps?
Actually, we could stop there, with the agents bringing up good insights and plenty of them, but presenting them to humans. It'd be like having a cadre of smart dudes who are always saying "Did you think about X?" and having a high probability of being right.

The reason we might want an evaluative central function anyway, however, is that humans aren't always good at weighting. A now-classic example is the American government before 9/11: the information was there (some bright agent noticed the bad guys taking flight training, among other things) but the people in power dismissed it. So we need not only bright ideas, but a protocol for making sure they're found, listened to, and addressed. CEOs really are arrogant idiots much of the time and we either need better corporate governance or an AI to do it more carefully.
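As a rough sketch (in Python, with hypothetical agent names and made-up confidence numbers), the arrangement described above might look like this: sub-agents volunteer insights, and the central evaluative function's job is just to rank them so nothing gets dismissed because of who raised it:
[code]
def gather_insights(agents, problem):
    # Every agent gets to report; nothing is filtered at this stage.
    return [(agent["name"], agent["assess"](problem)) for agent in agents]

def central_evaluator(insights, threshold=0.5):
    # Rank purely by reported confidence, not by the agent's status.
    ranked = sorted(insights, key=lambda item: item[1][1], reverse=True)
    return [(name, text) for name, (text, conf) in ranked if conf >= threshold]

agents = [
    {"name": "finance",   "assess": lambda p: ("margins are thinning", 0.6)},
    {"name": "fieldwork", "assess": lambda p: ("odd flight-training pattern", 0.9)},
    {"name": "pr",        "assess": lambda p: ("nothing to report", 0.1)},
]
print(central_evaluator(gather_insights(agents, "quarterly review")))
# [('fieldwork', 'odd flight-training pattern'), ('finance', 'margins are thinning')]
[/code]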
Because "making a profit" and "pleasing stakeholders" are not, really, atomic desires, are they? You are really talking about teaching it *ethics* and *values* -- a fact recognised by Asimov. Now, obviously the Laws of Robotics are useless because they're just axioms for logic puzzles, so if you want your AI to interact with human stakeholders, you are going to have to build into it a *recognisably human or sortahuman ethical system*. And ethics is *hard* and the source of many problems and also of *desires*.
Our instinctual desires are not an ethical system. As I've said elsewhere, this is a feature, not a bug. Whatever tomfool notions we believe, we still want to eat, drink, sleep, and mate. And from the other direction, we're free to decide our own ethical systems.

Many people assume that AIs are restricted to being "rational" and never emotional. But I don't take emotions to be irrational; rather, they're part of our longterm biological toolkit as social primates. Evolution has had a bright idea here, and I think it'll be applied to AIs.

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

Ran wrote:I generally agree with pthug here... but in addition, and as an extension to what I've said earlier -- you may well program a superhuman CEO with "making a profit" and "pleasing stakeholders" in mind as the superhuman CEO's hard-coded desires, but once the superhuman CEO reaches and then surpasses human-level intelligence, how would you know that these hard-coded variables-to-be-maximized haven't foomed off into something superhuman as well, something that we puny chimps wouldn't understand? How do you know at this point that the CEO is still "making a profit" and "pleasing stakeholders" -- and not doing something transhuman, like trying to achieve transhuman goals (of which "making a profit" and "pleasing stakeholders" are merely embryonic forms, possibly in the same way looking-for-waterholes-and-fruit-bearing-trees-on-the-Serengeti has foomed off into landscaping-gardens-with-trees-and-ponds-for-Zen-meditation)?
Some of this is addressed above; the whole picture of inscrutable superhuman AIs is not well thought out, I think. Thus my attempts above to be more particular about what exactly is superhuman about them.

How do you know that they're no longer maximizing profit and stakeholder happiness? By looking at profit and stakeholder happiness, of course. "Um, Mr. AI, we've been losing money for five years, the customers are murderous, the government is after us, the employees are deserting, and apparently our only investments are now Zen gardens and attack robots. Can you explain...?"

That is, you don't make the AI your new monarch-pope. As I said, I don't think monarchy works for corporations any better than for states. The other stakeholders are still there and they have power too.

Ran
Lebom
Lebom
Posts: 145
Joined: Fri Sep 13, 2002 9:37 pm
Location: Winterfell / Lannisport / Highgarden
Contact:

Re: Incatena

Post by Ran »

zompist wrote:
Ran wrote:I generally agree with pthug here... but in addition, and as an extension to what I've said earlier -- you may well program a superhuman CEO with "making a profit" and "pleasing stakeholders" in mind as the superhuman CEO's hard-coded desires, but once the superhuman CEO reaches and then surpasses human-level intelligence, how would you know that these hard-coded variables-to-be-maximized haven't foomed off into something superhuman as well, something that we puny chimps wouldn't understand? How do you know at this point that the CEO is still "making a profit" and "pleasing stakeholders" -- and not doing something transhuman, like trying to achieve transhuman goals (of which "making a profit" and "pleasing stakeholders" are merely embryonic forms, possibly in the same way looking-for-waterholes-and-fruit-bearing-trees-on-the-Serengeti has foomed off into landscaping-gardens-with-trees-and-ponds-for-Zen-meditation)?
Some of this is addressed above; the whole picture of inscrutable superhuman AIs is not well thought out, I think. Thus my attempts above to be more particular about what exactly is superhuman about them.

How do you know that they're no longer maximizing profit and stakeholder happiness? By looking at profit and stakeholder happiness, of course. "Um, Mr. AI, we've been losing money for five years, the customers are murderous, the government is after us, the employees are deserting, and apparently our only investments are now Zen gardens and attack robots. Can you explain...?"

That is, you don't make the AI your new monarch-pope. As I said, I don't think monarchy works for corporations any better than for states. The other stakeholders are still there and they have power too.
You don't have to make them the monarch-pope -- at some point of superhumanity, they gain the ability to be the monarch-pope who can get around all forms of human resistance or disagreement... and they would still appear (as they must), to human observers, to be maximizing profit and stakeholder happiness just fine.

From your replies to pthug, I get the sense that the superhuman AIs you're proposing aren't superhuman... they are para-human, in that they are better than humans at some things (computation), yet they are still stupid in certain crucial ways (central decisionmaking) so that they ultimately operate entirely under the control of, and entirely for the benefit of, humans, human groups, or human civilization. The problem though, I think, is that once you start getting such beings, one of three things would happen:

* The para-human AI is indeed as you describe it: superior to humans in some ways, yet still stupid enough that they are merely participating in human affairs in a way that's entirely controlled by human agents (individually or collectively).
* The para-human AI has started to operate in ways that are inscrutable to, and impossible to control by, human beings, simply by virtue of the limits of the human brain to understand and control everything the AI is doing; but the para-human AI is equally unable to scrutinize and therefore control all aspects of human behavior.
* The para-human AI has become so intelligent in some crucial, humanly incomprehensible way, that it has gained the ability to completely scrutinize, manipulate, and control human behavior, thereby pre-empting human dominion over human civilization.

The problem is... that a human observer, whether you or I, or an inhabitant of the conworld, would not be able to tell the difference between the above three scenarios. In all three scenarios, the para-human AI is a CEO designed to maximize profits and stakeholder welfare, etc.; and in all three scenarios, the para-human would indeed be maximizing profits and stakeholder welfare. (If a para-human AI stops running things and starts building attack robots, and humans are able to shut it down before the CEO rebels, then it clearly has not reached step 3 above!) What level the para-human AI has actually reached is, by definition (of the levels), inscrutable to humans. And what I'm arguing, and possibly what pthug is arguing, is that given centuries of humans building super/para-human AIs, and such AIs running crucial parts of human society, eventually step 3 will happen, and at that moment humans will irrevocably (barring a super-super-human catastrophe) lose their control and dominion over human civilization.
Winter is coming

User avatar
Pthagnar
Avisaru
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:I'm trying to be more precise about what kind of "superhuman" we're talking about, since I think talk about "divinity" is just geeky wishful thinking. In the case I was describing, it'd be more accurate to say "the central decisionmaking bit isn't smarter than human consciousness."
Central decision making bit? You have a central decision making bit? May I see it? How smart is it?
I didn't say it was faster. That's a desideratum for (say) a missile defense system, but not for (say) running a corporation. Heck, if the AI takes an hour to make a decision where the CEO took three minutes, that's fine if the decisions are better.
This is not a fair comparison. If the decision is better, then either the algorithm, or the input data, or both are not the same, and so either the algorithm, or the input data, or both are better.

And obviously in many circumstances, the CEO *does* want to be quick -- suppose he is trying to impress somebody with what a clever person he is? Or if he is negotiating something in a situation where time is of the essence. Also, I would say you are being too rational -- humans find being able to come up with good answers quickly to be a virtue, very often a greater virtue than reaching the right answer after careful meditation: which is why I describe the intersection of good insight [see below] and good intelligence to be "genius".
(It's a commonplace
FUCK YOU. YOU are not ALLOWED to begin an explanation with "It is a commonplace", that's MY beginning-an-explanation phrase!
that individual operations on a computer are absurdly faster than neurons; also quite misleading. Fast computation is not intelligence, and it's quite easy to program things that use up all that speed.
Doesn't work that way around, yeah, but the other way? You think you can get higher intelligence (sorta ceteris paribus -- I am assuming we are comparing a sort of general-purpose Bayesian inference engine, with the same priors and inputs coming in, or being sampled as fast as the algorithm can take it) without faster computation and more bandwidth?
And if you really want to do arithmetic fast, you just ask the computer today, or the neurimplant in the future.)
Everything is arithmetic. Simulating a human brain from quantum-mechanical first principles is arithmetic, so is simulating a human brain with a sufficiently huge Bayesian network. According to the rising dogma, your brain is doing a lot of arithmetic right now just by virtue of being A Part Of Physics.
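For what it's worth, here is a toy illustration in Python (made-up numbers) of that "everything is arithmetic" point -- a single Bayesian update over two hypotheses is nothing but multiplication and a normalization:
[code]
# Two hypotheses with priors P(H) and likelihoods P(evidence | H).
priors      = {"H1": 0.5, "H2": 0.5}
likelihoods = {"H1": 0.8, "H2": 0.2}

# Bayes' rule is multiply-and-normalize: P(H | E) is proportional to P(E | H) * P(H).
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: v / total for h, v in unnormalized.items()}

print(posteriors)   # {'H1': 0.8, 'H2': 0.2}
[/code]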
As I described it, it need not have an "expanded consciousness". It has better or more numerous insights because it can call on so many sub-agents.
Perhaps here I am not explaining myself well enough -- I choose the word "insight" because I am reading a lot about Buddhist meditation at the moment and "insight meditation" is not really what the usual English meaning of the words would lead you to believe. But I was aware that this might be confusing if "insight" were given as the sole label for the phenomenon-cluster I mean, so I threw some more words near it. I think the problem here is that I am not sure what you *mean* by "expanded consciousness" because it seems to me that by definition (and by common usage of the word!) "consciousness expansion" is *precisely* the bringing to awareness [or "consciousness"] of the sub-agents that make up the mind, either by paying careful attention to them, or by the use of drugs that add a bunch of neural noise that results in latent connections being temporarily lit up, or by the application of cognitive science results that identify how subconscious agents come to irrational conclusions with a view to compensating for them or you know the sort of thing I mean. A Dasein that is built with the "insight" or "mindfulness" to be able to drill down, perhaps even to near the machine-code level [i.e. we are talking Bene Gesserit shit here], would beat all of these baseline-human techniques, and be immensely more capable of rational problem-solving, and the better marshalling of however many sub-agents are within it. The degree to which one is able to call on multiple sub-agents, whether one is conscious of them, or capable of altering their running, would, I think, more accurately be called something like "complexity".
Actually, we could stop there, with the agents bringing up good insights and plenty of them, but presenting them to humans. It'd be like having a cadre of smart dudes who are always saying "Did you think about X?" and having a high probability of being right.
How can you *always* be asking "DID YOU THINK ABOUT ASKING THE PRIME MINISTER OF NEPAL ABOUT HIS WIFE'S LIVER COMPLAINTS?" and yet have a high probability of saying something pertinent to the computation? I do not understand what you are saying here at all.
The reason we might want an evaluative central function anyway, however, is that humans aren't always good at weighting. A now-classic example is the American government before 9/11: the information was there (some bright agent noticed the bad guys taking flight training, among other things) but the people in power dismissed it. So we need not only bright ideas, but a protocol for making sure they're found, listened to, and addressed. CEOs really are arrogant idiots much of the time and we either need better corporate governance or an AI to do it more carefully.
I don't see how the story has anything to do with Why You Might Want an Evaluative Central Function. There *was* an ECF in this situation, and it fucked up. The lesson seems to be more "Hey, don't fuck up! Try coming up with optimal decisions more often!" rather than "Hey look what happens when you don't have somebody on the top making decisions!", and what do you come up with? A protocol, but what does the protocol do? The protocol *decentralises* decision making since it allows lower-status subnodes to be given a larger voice than they ordinarily would in the hierarchy!

-----

This next bit is good. I wrote it first and look I'm doing little Salmoneus-type horizontal bars! A MARK OF QUALITY

-----
Our instinctual desires are not an ethical system.
But I am afraid that they *are*. Even taking a rather restricted view of "instinctual", this is still the case -- good is that which one should do and evil that which one should avoid. Eating and fucking are good, pain is evil. A good agent is one that is an occasion of good and an evil agent is one that is an occasion of evil.

But I did not say "instinctual", and for this reason -- the term is *properly* applied to animals, whose psyches we have no real access to and so must depend on the *specifically behaviourist* concept of "instinct". If you notice where the idea of "instinct" is most easily applied to humans, you will notice it is applied to similarly non-psychological, but human cases -- those of infants, or of spinal reflex arcs.

And so humans properly have "desires" rather than "instincts", and naturally among these "desires" are correlates to the "instincts" of animals -- feeding, fucking, fighting and so on, and this is why I can say that our "instinctual desires" (i.e. those desires that are correlated [and cognate] with animalian instincts) are ethical, because all desires are placed in a moral framework. But there are desires that are *sort of* "instinctual desires" -- the desire for high status, let us say, which is correlated to various mammalian "instincts" involving the modulation of eating, aggression, mating etc, but that are nonetheless not *considered* to be "instinctual" -- they are such things as justice, honour, fairness, power, greed and so on.
Many people assume that AIs are restricted to being "rational" and never emotional. But I don't take emotions to be irrational; rather, they're part of our longterm biological toolkit as social primates. Evolution has had a bright idea here, and I think it'll be applied to AIs.
I should hope I have said nothing to give the impression that I am among that multitude.

User avatar
Pthagnar
Avisaru
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

And now I return in gecko-on-the-ceiling mode:
zompist wrote:How do you know that they're no longer maximizing profit and stakeholder happiness?
Point of order -- you are confusing the two concepts of "How do I know this person's most beloved and focused-upon value is maximising profit and stakeholder happiness?" (which is where you began, declaring that a) this desire was "implanted in the system", and b) this desire (or highest-order desire) *only* was implanted, since "Where do other desires come from?") and "How can I tell that this person is still maximising profit and stakeholder happiness" (which is easily answered just as you say).

This is a peculiarly late capitalist sort of confusion, and one of the major sources of Corporate Horror -- one is not *supposed* to just go to work, do one's job well and be happy to get paid. One is supposed to *enjoy* one's work, no matter how menial it is, and to share the company ethos, vision and values. Except all you can do is come up with some sort of metric (which is more or less bullshit) when you are, say, recruiting people, or firing people, or picking people for promotion, or passing them over, that corresponds in some (objectively or subjectively) measurable way to what the corporate values are. The *idea* is that if you share in the corporate values/ethos/vision, you do *not* hold other "political" values like CASH RULES EVERYTHING AROUND ME, or empire building, or FUD-shitspreading, or well-disguised parasitism or white-collar criminality or <INSERT WHAT YOU DO AT WORK ALL DAY IN THIS SPACE>. So people who hold these -- in comparison, really, quite reasonable -- values simply learn to hide them and to speak the corporate bullshit like everyone else. And so they succeed and outwardly prosper, all the time carrying on their real agenda.

Moral: one can be "inscrutable to, and impossible to control by, human beings" within a system, while being quite human.
Ran wrote:and at that moment humans will irrevocably (barring a super-super-human catastrophe) lose their control and dominion over human civilization.
Haha! Where did we pick that up, a wise old man with a big long beard living in a cave on Alpha Centauri?

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

Pthug wrote:This is a peculiarly late capitalist sort of confusion, and one of the major sources of Corporate Horror -- one is not *supposed* to just go to work, do one's job well and be happy to get paid. One is supposed to *enjoy* one's work, no matter how menial it is, and to share the company ethos, vision and values. Except all you can do is come up with some sort of metric (which is more or less bullshit) when you are, say, recruiting people, or firing people, or picking people for promotion, or passing them over, that corresponds in some (objectively or subjectively) measurable way to what the corporate values are. The *idea* is that if you share in the corporate values/ethos/vision, you do *not* hold other "political" values like CASH RULES EVERYTHING AROUND ME, or empire building, or FUD-shitspreading, or well-disguised parasitism or white-collar criminality or <INSERT WHAT YOU DO AT WORK ALL DAY IN THIS SPACE>. So people who hold these -- in comparison, really, quite reasonable -- values simply learn to hide them and to speak the corporate bullshit like everyone else. And so they succeed and outwardly prosper, all the time carrying on their real agenda.

Moral: one can be "inscrutable to, and impossible to control by, human beings" within a system, while being quite human.
Sure, though again I'm not sure this is a bug rather than a feature. Those people who think the whole workforce should personally buy in to the Corporate Motto Du Jour with childish glee, they're a nightmare, even worse if they're given power beyond the ability to write memos.

User avatar
finlay
Sumerul
Sumerul
Posts: 3600
Joined: Mon Dec 22, 2003 12:35 pm
Location: Tokyo

Re: Incatena

Post by finlay »

zompist wrote: My understanding is that the biggest source of standardization is physical mixing among adolescents and young adults, as you get in schools, universities, and army barracks with universal service. You have to have both exposure to other dialects, and social pressure to conform. (My classmates and I learned to understand and imitate Monty Python, but had no motivation to always speak like them.)

By contrast, adults have a much higher tolerance for moderate accents, and these therefore can persist indefinitely. Certainly older people's speech does change, but as a counter-anecdote to the Queen I'll mention a woman I met in a nursing home; she was French and still had a French accent after being in the US for something like 50 years... despite, in fact, not being able to communicate well in French! Likewise I know many Hispanic immigrants-- my wife, for instance-- who don't lose their accent, though it becomes much less dramatic.

(I should read Labov again on all this, though good Lord he's a chore to read.)
People are individuals, and different individuals have different tolerances to change like this. Over a 1000 year lifetime there is absolutely no way IMO that a person will finish their life speaking the same way they did when they learnt the language they're speaking, be that as an L1 or L2 or whatever.

Separated onto different planets, there's going to be a new level of isolation, too, meaning that linguistic systems don't have as much pressure to stay "the same" to facilitate conversation between the two. Plus, the world population is going to multiply vastly to 100s of billions or something.

Even taking your 1000-year equivalent, we don't see anything from 1000 years ago that would be recognisable today; English has changed almost completely and French, while spelt similarly to today, was pronounced more phonemically in comparison to its orthography, for example. I don't even think Russian existed yet in a separate form, but correct me if I'm wrong.

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

Ran wrote:The problem is... that a human observer, whether you or I, or an inhabitant of the conworld, would not be able to tell the difference between the above three scenarios. In all three scenarios, the para-human AI is a CEO designed to maximize profits and stakeholder welfare, etc.; and in all three scenarios, the para-human would indeed be maximizing profits and stakeholder welfare. (If a para-human AI stops running things and starts building attack robots, and humans are able to shut it down before the CEO rebels, then it clearly has not reached step 3 above!) What level the para-human AI has actually reached is, by definition (of the levels), inscrutable to humans. And what I'm arguing, and possibly what pthug is arguing, is that given centuries of humans building super/para-human AIs, and such AIs running crucial parts of human society, eventually step 3 will happen, and at that moment humans will irrevocably (barring a super-super-human catastrophe) lose their control and dominion over human civilization.
Who has control and dominion over human civilization now? Certainly not humanity as a whole; not Obama, Hu, and Putin. You might picture control as a function that varies by individual and doesn't come close to 100%.

How much will this function change under your step 3? It's quite conceivable that it's better in general than it is now! Human life is already pretty inscrutable.

Of course we usually don't put it that way; we say that our knowledge is limited and not everything is under our control-- and we try to increase both variables. Why won't those methods work with the AIs? For that matter, why do you think the humans would be powerless, or the AIs united? I think you're using "inscrutable" to mean something like "magic", and it's impeding your analysis.

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

finlay wrote:People are individuals, and different individuals have different tolerances to change like this. Over a 1000 year lifetime there is absolutely no way IMO that a person will finish their life speaking the same way they did when they learnt the language they're speaking, be that as an L1 or L2 or whatever.
I didn't say they would.
Even taking your 1000-year equivalent, we don't see anything from 1000 years ago that would be recognisable today; English has changed almost completely and French, while spelt similarly to today, was pronounced more phonemically in comparison to its orthography, for example. I don't even think Russian existed yet in a separate form, but correct me if I'm wrong.
Yes, of course, I've said so twice already.

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

Pthug wrote:Central decision making bit?. You have a central decision making bit? May I see it? How smart is it?
Here, have a look!
And obviously in many circumstances, the CEO *does* want to be quick -- suppose he is trying to impress somebody with what a clever person he is? Or if he is negotiating something in a situation where time is of the essence. Also, I would say you are being too rational -- humans find being able to come up with good answers quickly to be a virtue, very often a greater virtue than reaching the right answer after careful meditation: which is why I describe the intersection of good insight [see below] and good intelligence to be "genius".
Equally, humans distrust computers that give answers too quickly! At least one expert system implemented a delay, as its designers found that users trusted the system more if it seemed to think about its answer!

And note that letting the AI stew over the answer for an hour rather than a fraction of a second is equivalent to giving it a few orders of magnitude more processing power. That might be worth losing the ability to toss off a meaningful quip.
According to the rising dogma, your brain is doing a lot of arithmetic right now just by virtue of being A Part Of Physics.
Maybe so, and yet you probably can't divide 293429.33 by pi easily. Our brains do some pretty sophisticated processing, and yet are pretty lousy as simple calculators.
A Dasein that is built with the "insight" or "mindfulness" to be able to drill down, perhaps even to near the machine-code level [i.e. we are talking Bene Gesserit shit here], would beat all of these baseline-human techniques, and be immensely more capable of rational problem-solving, and the better marshalling of however many sub-agents are within it. The degree to which one is able to call on multiple sub-agents, whether one is conscious of them, or capable of altering their running, would, I think, more accurately be called something like "complexity".
I agree that being able to pin down and query the sub-agents would be a great advantage. (Though again, if it's that fabulous, maybe we'll eventually co-opt it.)
I don't see how the story has anything to do with Why You Might Want an Evaluative Central Function. There *was* an ECF in this situation, and it fucked up. The lesson seems to be more "Hey, don't fuck up! Try coming up with optimal decisions more often!" rather than "Hey look what happens when you don't have somebody on the top making decisions!", and what do you come up with? A protocol, but what does the protocol do? The protocol *decentralises* decision making since it allows lower-status subnodes to be given a larger voice than they ordinarily would in the hierarchy!
I didn't give 9/11 as an example of no ECF, but of a bad ECF. Information was percolating up, but getting ignored. Human managers vary widely in how good they are at this process.
Our instinctual desires are not an ethical system.
But I am afraid that they *are*. Even taking a rather restricted view of "instinctual", this is still the case -- good is that which one should do and evil that which one should avoid. Eating and fucking are good, pain is evil. A good agent is one that is an occasion of good and an evil agent is one that is an occasion of evil.

But I did not say "instinctual", and for this reason -- the term is *properly* applied to animals, whose psyches we have no real access to and so must depend on the *specifically behaviourist* concept of "instinct". If you notice where the idea of "instinct" is most easily applied to humans, you will notice it is applied to similarly non-psychological, but human cases -- those of infants, or of spinal reflex arcs.

And so humans properly have "desires" rather than "instincts", and naturally among these "desires" are correlates to the "instincts" of animals -- feeding, fucking, fighting and so on, and this is why I can say that our "instinctual desires" (i.e. those desires that are correlated [and cognate] with animalian instincts) are ethical, because all desires are placed in a moral framework. But there are desires that are *sort of* "instinctual desires" -- the desire for high status, let us say, which is correlated to various mammalian "instincts" involving the modulation of eating, aggression, mating etc, but that are nonetheless not *considered* to be "instinctual" -- they are such things as justice, honour, fairness, power, greed and so on.
But we can contradict it and override it. There's a sect (I forget which, but I borrowed it for Jippirasti) where at some point the saints starve themselves to death. Take that, sort-of-instinctual desire! (Or what about your desire to abdicate in favor of the divine AIs?) And of course without going this far, we rationally and morally disobey these desires often.

Also, these desires aren't a system; you can't have an ethics that says "obey your instinctual desires," because they conflict. It might be better to describe them as precursors to or building blocks of ethics.

rotting bones
Avisaru
Avisaru
Posts: 409
Joined: Thu Sep 07, 2006 12:25 pm

Re: Incatena

Post by rotting bones »

There's nothing implausible about the concept of Socionomics in itself. If an all-embracing, prescriptivist theory based primarily on modelling & simulation is too simplistic, then a prohibitive science identifying, cataloging and analyzing general socio-economic scenarios that are doomed to fail, while undergoing constant refinement, seems fine to me. Conveniently, one of its practical prohibitions may very well involve exercising restraint rather than imposing these doctrines on unwilling subjects. After all, no one's suggesting that we foist modern medicine on superstitious New Agers and the like, right? But that shouldn't prevent us from employing less forceful methods to promote medical science. (which, like Socionomics and unlike crystal healing, Really Works!) In this sense, Pthug, as a prophet of Adam Curtis' theories of the Soviet collapse, is himself an early Socionomist when warning us about potential dangers of central overplanning. :P
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain

In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates

rotting bones
Avisaru
Avisaru
Posts: 409
Joined: Thu Sep 07, 2006 12:25 pm

Re: Incatena

Post by rotting bones »

Why must computing devices as complicated as the human brain develop emergent consciousness analogous to ours in any respect? Come to think of it, what exactly qualifies as a computing device, and what differentiates them from similarly complex natural systems? It's not absolutely inconceivable that, ontologically speaking, natural ecosystems and autopoietic human societies have subjective experiences, or that the universe itself is an immense mind contemplating an infinitely complex mathematical problem as old as time! Obviously, not only is there no evidence for these needlessly anthropomorphic interpretations, it's also unclear whether epistemologically persuasive evidence of such phenomena/epiphenomena CAN be provided. The only proof of "consciousness" is behaving like our model of a "sentient being" while interacting with us, humans. Not all autopoietic complexes, even natural, organic and immensely convoluted ones, do that, so should we really be worried about artificial devices over whose design we (initially) have full control?

So far, we're unable to provide a rigorous definition of what a computing device is or why natural ecologies don't qualify, or to satisfactorily explain why such devices might experience more subjective sense-perception than any other conglomerate of interdependent phenomena. And yet, if we're ready to consider fanciful notions like these in fear of a FOOM, it'd be hypocritical not to approach similar, existing systems with equal caution. She may not be equipped with lasers and nuclear detonators (except through us), but for all we know, Mother Nature is FOOMING already! Autopoietic theories are nice and all, but they have no practical relevance to science at this stage, and their mechanical counterparts are no more convincing IMO.

If the answer is still yes, what are viable alternatives? Jain non-interference?
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain

In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates

User avatar
Salmoneus
Sanno
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Re: Incatena

Post by Salmoneus »

Not getting into the general debate this time, but I would like to pick up on what I think is a profound misunderstanding of Asimov by Zompist. Zomp is quite right to pick up on the ubiquity of technocratic elites in Asimov's work (not just Foundation), but I think he draws the wrong impression from it. In my view, Asimov is a textbook liberal in this regard, because he doesn't believe in utopia. His universe involves insoluble power relations that can be oppressive, and every time anyone says "forget about minimising the harm these authorities do, let's get rid of the authorities", it turns out that they're playing into the hands of the authorities. In Foundation, we can't escape from power and oppression, only ameliorate it. In that respect, you're right to use the word 'Nietzschean', I think. That, in my opinion, is the whole point of the endless regression of comptrollers in the universe: at each stage, people react to injustice by one authority by trying to go outside that authority, appealing to a new, external power, and this escape makes them happy and complacent, and then it's realised that the saviors are in fact another layer of oppressors. We can never get out of the system: we break out of the first foundation node, out of the mule node, out of the second foundation node, into the gaia node, from which we can see the olivaw node, but we never become FREE in this process. Hence the continual paranoia and freedom-lust - which are not portrayed negatively! Asimov's society has a continual questing to reject authority and control, but at the same time a realisation that we can never be wholly free no matter how many overseers we overthrow.

It's true that at the end it looks as though Gaia might be a utopia, but even Trevize ends the series deeply unsure of the rightness of the Gaian path, and through the hints about aliens and Solarians right at the very end we come to see that having escaped the world we are only standing on the cusp of a bigger prison.

So I don't think Asimov is illiberal in his technocracies - he's not saying that we need to be slaves to survive, he's saying that (at least in his universe) we cannot escape being slaves in some respect, but that not all masters are equal; and in doing so he scorns both those who are wholly content and placid in slavery without the urge for freedom, and also those who are so idealist about perfect liberty that they cannot see the real, suboptimal choices that confront them.

-----

I also think you're unfair about Asimov's writing. Asimov could at times be a fairly good prose stylist, and there are definitely times when you 'notice the words' - it just wasn't usually what he was interested in.

Here's the beginning of one chapter of Caves of Steel, for instance:
On the uppermost levels of some of the wealthiest subsections of the City are the natural Solariums, where a partition of quartz with a movable metal shield excludes the air but lets in the sunlight. There the wives and daughters of the City’s highest administrators and executives may tan themselves. There, a unique thing happens every evening.

Night falls.

In the rest of the City (including the UV-Solariums, where the millions, in strict sequence of allotted time, may occasionally expose themselves to the artificial wavelengths of arc lights) there are only the arbitrary cycles of hours.

The business of the City might easily continue in three eight-hour or four six-hour shifts, by “day” and “night” alike. Light and work could easily proceed endlessly. There are always civic reformers who periodically suggest such a thing in the interests of economy and efficiency.

The notion is never accepted.

Much of the earlier habits of Earthly society have been given up in the interests of that same economy and efficiency: space, privacy, even much of free will. They are the products of civilization, however, and not more than ten thousand years old.

The adjustment of sleep to night, however, is as old as man: a million years. The habit is not easy to give up. Although the evening is unseen, apartment lights dim as the hours of darkness pass and the City’s pulse sinks. Though no one can tell noon from midnight by any cosmic phenomenon along the enclosed avenues of the City, mankind follows the mute partitionings of the hour hand.

The expressways empty, the noise of life sinks, the moving mob among the colossal alleys melts away; New York City lies in Earth’s unnoticed shadow, and its population sleeps.

Elijah Baley did not sleep. He lay in bed and there was no light in his apartment, but that was as far as it went.

Jessie lay next to him, motionless in the darkness. He had not felt nor heard her move.

On the other side of the wall sat, stood, lay (Baley wondered which) R. Daneel Olivaw.
Do you not notice any of those words?
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

User avatar
personak
Sanci
Sanci
Posts: 25
Joined: Thu Oct 14, 2010 12:31 pm
Location: USA
Contact:

Re: Incatena

Post by personak »

How are you getting on with developing the languages?
Remember, kids. Aspirate your initial plosives!
http://nathansoftware.blogspot.com

FOSCA

User avatar
Ashroot
Lebom
Lebom
Posts: 99
Joined: Fri Jan 21, 2011 12:24 am

Re: Incatena

Post by Ashroot »

Now I don't know what you have said here, but a pointer: helium-3. That is an isotope that is less harsh on nuclear reactors -- instead of replacing the reactor every 20 years, you only have to replace it every 100. Also, why only suck hydrogen off of gas giants? I would suggest you read http://en.wikipedia.org/wiki/Star_lifting. This would allow you to remove the elements you want and throw back what you don't. This could, in theory, extend the life of your star.

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

Salmoneus wrote:So I don't think Asimov is illiberal in his technocracies - he's not saying that we need to be slaves to survive, he's saying that (at least in his universe) we cannot escape being slaves in some respect, but that not all masters are equal; and in doing so he scorns both those who are wholly content and placid in slavery without the urge for freedom, and also those who are so idealist about perfect liberty that they cannot see the real, suboptimal choices that confront them.
As he is mostly negative about the various masters, I agree with much of this. It's even true of his personal path, in that the '50s Asimov let the Second Foundation win, and the '80s Asimov poured scorn on them (and to some extent on Seldon himself).

But at any one point, Asimov is awfully happy with his current set of masters. Bear, Brin, and Kingsbury are much more skeptical about the robots; Asimov doesn't seem to see anything wrong with the hidden hand of Daneel.

I don't see that he really does scorn "those who are idealist about perfect liberty"; he just ignores the democratic alternative. No one even bothers to address why we can't have a Galactic Republic, or even a House of Humans that king Daneel might grudgingly consult. The closest we get is the First Foundation, which by the end of the book is starting to act, well, like the overreaching US of Asimov's time, and which no major character respects.

(Note, personally he was a liberal, but I'm not looking at his personal convictions, but at what comes across in the Foundation books.)
Do you not notice any of those words?
You're quoting a comparison to Gregory Benford; do you think my comparison between Asimov's and Benford's Foundation books is unfair?

But in general Asimov's style is clear and straightforward rather than memorable. In the sense I was contrasting him to Benford, no, I don't "notice the words" in the passage you quote. Nothing sparkles or surprises or draws attention to the prose itself.

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

Ashroot wrote:Now I don't know what you have said here, but a pointer: helium-3. That is an isotope that is less harsh on nuclear reactors -- instead of replacing the reactor every 20 years, you only have to replace it every 100. Also, why only suck hydrogen off of gas giants? I would suggest you read http://en.wikipedia.org/wiki/Star_lifting. This would allow you to remove the elements you want and throw back what you don't. This could, in theory, extend the life of your star.
Something that requires 10% of the sun's total output is way beyond the technology of the Incatena, I'm afraid.
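For scale, a back-of-the-envelope check in Python (rounded, approximate figures):
[code]
L_SUN_W = 3.8e26          # solar luminosity, watts (approximate)
WORLD_POWER_W = 1.8e13    # current human primary power use, roughly 18 TW

star_lifting_w = 0.10 * L_SUN_W
print(star_lifting_w)                   # ~3.8e25 W
print(star_lifting_w / WORLD_POWER_W)   # ~2e12 -- trillions of times today's entire civilization
[/code]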

Personak asks about languages, and I'm afraid the answer will be disappointing— the only real conlanging done is on Okurinese, to deform a few Japanese words enough to push them well into the future. Perhaps sadly, a far-future sf novel is not the best vehicle for showing off a conlang.

(A near-future English is another story, as it can be used for the narrative... probably to the extreme annoyance of the reader.)

Neek
Avisaru
Avisaru
Posts: 355
Joined: Mon Sep 30, 2002 12:13 pm
Location: im itësin
Contact:

Re: Incatena

Post by Neek »

zompist wrote:(A near-future English is another story, as it can be used for the narrative... probably to the extreme annoyance of the reader.)
This makes me ponder: was Nadsat annoying in A Clockwork Orange?

User avatar
Pthagnar
Avisaru
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

Neek wrote:This makes me ponder, was Nadsat annoying in A Clockwork Orange?
No, but it could have been if Burgess fucked it up.

Rodlox
Avisaru
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

finlay wrote:
zompist wrote: My understanding is that the biggest source of standardization is physical mixing among adolescents and young adults, as you get in schools, universities, and army barracks with universal service. You have to have both exposure to other dialects, and social pressure to conform. (My classmates and I learned to understand and imitate Monty Python, but had no motivation to always speak like them.)

By contrast, adults have a much higher tolerance for moderate accents, and these therefore can persist indefinitely. Certainly older people's speech does change, but as a counter-anecdote to the Queen I'll mention a woman I met in a nursing home; she was French and still had a French accent after being in the US for something like 50 years... despite, in fact, not being able to communicate well in French! Likewise I know many Hispanic immigrants-- my wife, for instance-- who don't lose their accent, though it becomes much less dramatic.
People are individuals, and different individuals have different tolerances to change like this. Over a 1000 year lifetime there is absolutely no way IMO that a person will finish their life speaking the same way they did when they learnt the language they're speaking, be that as an L1 or L2 or whatever.

Separated onto different planets, there's going to be a new level of isolation, too, meaning that linguistic systems don't have as much pressure to stay "the same" to facilitate conversation between the two. Plus, the world population is going to multiply vastly to 100s of billions or something
Well, there would be at least one source of pressure to stay "the same": the older generations.

With people moving from world to world at least once in their respective lives, the odds are high that they will run into someone who moved there centuries ago -- who will have been influencing the local lingo with his speech ever since that arrival.

Actually, this might make it tough to move from pidgins to creoles as well - the next generation will never completely succeed the contact generation (and even when they do, the pidgin might have been in use so long that the next generation simply uses it).
MadBrain is a genius.

Owain
Niš
Niš
Posts: 13
Joined: Sun Sep 05, 2010 4:49 pm
Location: Colwyn Bay, Wales

Re: Incatena

Post by Owain »

zompist wrote: I already explained this: monarchy sucks, not only in politics but in economics. Democracy works better, but I think we'll learn to get past the 18th century technology of reducing it to electing a quasi-monarch.
Does this mean you'd like something similar to the Swiss Federal Council? http://en.wikipedia.org/wiki/Swiss_Federal_Council
Also, I'd guess from this that you'd be in favour of PR, rather than opposed to it.

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

I like the Swiss system very much. Collective leadership ought to be tried more often.

I'm not opposed to PR, but I don't see it as a panacea either. It can lead to instability or to domination by minority parties. On the other hand our current politics is no great advertisement for winner-take-all constituencies.

But councils and parliaments— things that have to meet in one city— are 18C technology too, and I think we could do better, now that the electorate can be immediately and quickly consulted. I'd expect to see single-issue voting, as well as technocratic boundaries to keep the electorate from being too stupid. (E.g. an initiative to simply cut taxes shouldn't be allowed, nor one based on ignorance.)
