Incatena

Questions or discussions about Almea or Verduria-- also the Incatena. Also good for postings in Almean languages.
Ran
Lebom
Posts: 145
Joined: Fri Sep 13, 2002 9:37 pm
Location: Winterfell / Lannisport / Highgarden

Incatena

Post by Ran »

This is awesome! Will there be more details posted, if possible? About the... languages perhaps? =) or more about the societies and tech involved?

Two minor points:
* I feel that there might be too little development for a span of 3,000 years, and in particular the timeline seems to slow down (the last "era" lasts more than 1,000 years!) but I suppose that's a result of long lifespans. But even then, with a dozen careers per human lifetime, with more wisdom and long-term thinking, perhaps things wouldn't slow down this much?
* Also... I was wondering why the superhuman AI don't take a more active role. Why haven't they completely pre-empted human control of human civilization?

Also... what's the etymology of "Sihor"?
Winter is coming

vec
Avisaru
Posts: 639
Joined: Tue Sep 16, 2003 10:42 am
Location: Reykjavík, Iceland

Re: Incatena

Post by vec »

What...?
vec

Cockroach
Lebom
Posts: 154
Joined: Thu Jun 23, 2005 9:26 pm
Location: Seattle Metropolitan Area

Re: Incatena

Post by Cockroach »

vecfaranti wrote:What...?

Mornche Geddick
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Re: Incatena

Post by Mornche Geddick »

I have difficulties with the idea of neuroimplants. There seem to be two reasons Zomp wants them in Incatena: sensory bypass (data input) for the Vee, and brain enhancers, i.e. to help you learn languages. The first problem with both is that even if it is done by nanobots, it still involves brain surgery. That will mean either breaching the skull or being injected into an artery (Fantastic Voyage) and piercing the blood vessel wall. Then, directing the nanobots to the right target (among all the millions of astrocytes, neurons, axons, synapses, etc., which all look alike on the nanobot scale) must be a logistical nightmare. The risks (of haemorrhage, infection, brain damage due to an implant in the wrong place, or simple failure) seem to me unacceptably large.

And then, why should you need a sensory bypass to enter the Vee at all? A well-designed virtual reality suit, which takes advantage of the five sensory inputs you already have, and a safe place to wear it seem quite enough to me.

The brain enhancers would have all the problems I listed above, and then one more. Suppose you had an implant with Okurinese grammar and vocabulary. If it suddenly failed (devices do!) while you were on Okura, you would find yourself unable to speak the language! Of course this might be a good plot device.

Bob Johnson
Avisaru
Posts: 704
Joined: Fri Dec 03, 2010 9:41 am
Location: NY, USA

Re: Incatena

Post by Bob Johnson »

vecfaranti wrote:What...?
http://www.zompist.com/incatena.html

It's not always obvious here when he posts something new there.

Also: This feels like it belongs in the Almea forum, but it's not about Almea.

con quesa
Lebom
Posts: 159
Joined: Sun Apr 27, 2003 1:34 pm
Location: Fnuhpolis- The City of Fnuh

Re: Incatena

Post by con quesa »

I'm intrigued by the talk of gravity-manipulation. What kind of new things did humanity have to discover about physics to have that sort of technology, I wonder.
con quesa- firm believer in the right of Spanish cheese to be female if she so chooses

"There's nothing inherently different between knowing who Venusaur is and knowing who Lady Macbeth is" -Xephyr

dhok
Avisaru
Posts: 859
Joined: Wed Oct 24, 2007 7:39 pm
Location: The Eastern Establishment

Re: Incatena

Post by dhok »

The idea that in 4000 years humanity won't be able to communicate at faster-than-light speeds seems not ambitious enough; I think by that time we'll probably have quantum stuff worked out well enough to do it.

Miekko
Avisaru
Posts: 364
Joined: Fri Jun 13, 2003 9:43 am
Location: the turing machine doesn't stop here any more

Re: Incatena

Post by Miekko »

dhokarena56 wrote:The idea that in 4000 years humanity won't be able to communicate at faster-than-light speeds seems not ambitious enough; I think by that time we'll probably have quantum stuff worked out well enough to do it.
Quantum physics basically tells us that communicating information FTL is impossible.

However, if ftl communication were possible, transmitting information backwards in time is necessarily possible, and if so, computing even rather intractable problems is possible. Everything we know about the universe seems to suggest this is not the case.
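(To spell that implication out with a toy calculation -- numbers mine, nothing deep: take a signal sent at 2c in one frame and Lorentz-transform its endpoints into the frame of an ordinary sub-light observer. The reception event comes out *earlier* than the emission event.)

```python
import math

C = 1.0  # work in units where c = 1

def lorentz_time(t, x, v):
    """Time coordinate of event (t, x) as seen from a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - v ** 2 / C ** 2)
    return gamma * (t - v * x / C ** 2)

# A signal sent at 2c: emitted at event (t=0, x=0), received at (t=0.5, x=1).
# Now view both events from an observer moving at a perfectly legal 0.8c:
t_emit = lorentz_time(0.0, 0.0, 0.8)
t_recv = lorentz_time(0.5, 1.0, 0.8)

print(t_emit)           # 0.0
print(t_recv)           # -0.5: the signal arrives *before* it was sent
print(t_recv < t_emit)  # True
```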

(imho, btw, the best science fiction lacks FTL entirely, and uses relativity to improve the stories)
< Cev> My people we use cars. I come from a very proud car culture-- every part of the car is used, nothing goes to waste. When my people first saw the car, generations ago, we called it šuŋka wakaŋ-- meaning "automated mobile".

Trailsend
Lebom
Posts: 169
Joined: Fri Mar 27, 2009 5:50 pm

Re: Incatena

Post by Trailsend »

Miekko wrote:However, if ftl communication were possible, transmitting information backwards in time is necessarily possible, and if so, computing even rather intractable problems is possible. Everything we know about the universe seems to suggest this is not the case.
I read somewhere that a tachyon transmitter communicating with the past would be useless, because tachyons being received would be indistinguishable from tachyons being sent--you wouldn't be able to tell the difference between a tachyon receiver and tachyon emitter. Is this at all accurate?

Miekko
Avisaru
Posts: 364
Joined: Fri Jun 13, 2003 9:43 am
Location: the turing machine doesn't stop here any more

Re: Incatena

Post by Miekko »

Trailsend wrote:
Miekko wrote:However, if ftl communication were possible, transmitting information backwards in time is necessarily possible, and if so, computing even rather intractable problems is possible. Everything we know about the universe seems to suggest this is not the case.
I read somewhere that a tachyon transmitter communicating with the past would be useless, because tachyons being received would be indistinguishable from tachyons being sent--you wouldn't be able to tell the difference between a tachyon receiver and tachyon emitter. Is this at all accurate?
We have no reason to think tardyons and tachyons can interact at all, if tachyons even exist (their appearance in a theory tends to be taken as evidence of a flaw in the theory, rather than anything else). I think what you're saying may just be a misunderstanding, though.
< Cev> My people we use cars. I come from a very proud car culture-- every part of the car is used, nothing goes to waste. When my people first saw the car, generations ago, we called it šuŋka wakaŋ-- meaning "automated mobile".

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Mornche Geddick wrote:I have difficulties with the idea of neuroimplants. There seem to be two reasons Zomp wants them in Incatena: sensory bypass (data input) for the Vee, and brain enhancers, i.e. to help you learn languages. The first problem with both is that even if it is done by nanobots, it still involves brain surgery. That will mean either breaching the skull or being injected into an artery (Fantastic Voyage) and piercing the blood vessel wall. Then, directing the nanobots to the right target (among all the millions of astrocytes, neurons, axons, synapses, etc., which all look alike on the nanobot scale) must be a logistical nightmare. The risks (of haemorrhage, infection, brain damage due to an implant in the wrong place, or simple failure) seem to me unacceptably large.
I'm glad these comments are basically the opposite of Ran's, who took the timeline as not advanced enough!

I think you're reacting as if the book was set in 2100. It's set three thousand years in the future, when nanotechnology, AI, and brain science are all mature. Would something like a heart transplant have seemed plausible in 1900?

Take the logistics problem... you're right that sending nanobots to the right place could take enormous processing power. But in AD 4901 we have enormous processing power— the bots themselves are very smart and work on the photonic level, and each can be supported by a database which contains a molecule-level map of the nervous system.

It's also worth recalling that as technology advances, it gets simpler and more efficient. Taking penicillin to cure syphilis is a lot easier and works better than lifetime treatments of fumigation in a closed box infused with mercury. Desktop computers are five orders of magnitude faster than an IBM 360 mainframe. Inserting a stent is a lot less intrusive than open-heart surgery. A neurimplant is a tiny device, and though it has a lot of connection points, we know exactly where to put them.
And then, why should you need a sensory bypass to enter the Vee at all? A well-designed virtual reality suit, which takes advantage of the five sensory inputs you already have, and a safe place to wear it seem quite enough to me.
You want to wear a VR suit 16 hours a day? Kinky.

The UI neurimplant replaces almost all other input devices; it's the necessary glue between neural and photonic machines. It replaces keyboards, mice, keypads, the buttons on your appliances, cell phones, and your credit card. You need it every moment you're awake (even more so than in our world, as even more things need to be interfaced with... even the toilet is flushed with a UI gesture).

The data neurimplant is only slightly less necessary; it's basically an in-brain computer, so you always have access to calculating power, can edit documents, etc.
The brain enhancers would have all the problems I listed above, and then one more. Suppose you had an implant with Okurinese grammar and vocabulary. If it suddenly failed (devices do!) while you were on Okura, you would find yourself unable to speak the language! Of course this might be a good plot device.
Nah, they're not made by Microsoft. (rimshot)

How often does your brain crash or need a reboot? You're thinking about how our current devices work, from cars to computers. But these technologies are a century old at best. In the world of the Incatena, neurimplants have been around for well over 2000 years.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:I'm glad these comments are basically the opposite of Ran's, who took the timeline as not advanced enough!
Well he was right. Why didn't AI foom off into divinity and why did history and science stagnate?

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Ran wrote:I feel that there might be too little development for a span of 3,000 years, and in particular the timeline seems to slow down (the last "era" lasts more than 1,000 years!) but I suppose that's a result of long lifespans. But even then, with a dozen careers per human lifetime, with more wisdom and long-term thinking, perhaps things wouldn't slow down this much?
To some extent things do slow down, partly due to long lives, partly because technologies mature— Moore's Law will not apply forever.
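(To put a rough number on "not forever" -- my arithmetic, with made-up round figures: keep a two-year doubling going and you run out of universe surprisingly fast.)

```python
# Rough sanity check on extrapolating Moore's Law indefinitely.
# Assumptions (mine, round numbers): ~1e10 transistors per chip today,
# doubling every 2 years; ~1e80 atoms in the observable universe.

def moores_law(years, start=1e10, doubling_years=2):
    """Transistor count after `years` of uninterrupted doubling."""
    return start * 2 ** (years / doubling_years)

# Within 500 years the naive extrapolation already wants more
# transistors in one chip than there are atoms in the universe:
print(moores_law(500) > 1e80)  # True (about 1.8e85)
```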

But you can also imagine that there's a bunch of technologies unlisted that simply didn't affect the plot of this book. After all, it only spends about a dozen pages on a completely modern world, Mars. As I said, I imagine that most of the actual economy of the Incatena would be as baffling to us as, say, a computer graphics firm would be to a 1st century Silk Road trader.
Also... I was wondering why the superhuman AI don't take a more active role. Why haven't they completely pre-empted human control of human civilization?
According to the Dzebyet they already have. But it mostly reflects my view that human-level AIs are useless, not to mention dangerous.

Basically, the monster minds were designed to do what they do and enjoy it. This is where Iain Banks goes wrong, in my view: he pictures his Minds bored with running human habitats and wasting their time in VR games. That'd be a colossal misdesign, like writing an accounting program that hates accounting.

You can even think of it this way: the super-AIs are sentient corporations (or nations), with humans as part of their implementation. Their interests and desires are those of corporations— to expand, make money, compete, sometimes synergize with each other. An AI could go rogue (hey, that could be another mission for my Agent), but that's like saying a corporation can become criminal. Sure, and then other entities deal with the criminality.
Also... what's the etymology of "Sihor"?
There's actually several, which suits my purposes. But the one I was using was Egyptian by way of Hebrew— it's a word for the Nile and for Sirius. There's also a city named Sihor in Gujarat.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Pthug wrote:Why didn't AI foom off into divinity and why did history and science stagnate?
Mostly already answered above, but a couple answers to the divinity question:

1. Because Charles Stross already wrote that book.

2. And because I didn't buy it very much anyway. The singularity has been aptly called "the Rapture for nerds". And it's about as contentless. Yay, the nerds get to be really really mysteriously powerful beings running on computronium! How does this differ from reaching level 75 in World of Warcraft? Even Stross couldn't think of what a Matryoshka brain (a solar system turned into computronium) would actually want to do. It's inscrutable, so he was forced to tell a story about the people who rejected that path.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:To some extent things do slow down, partly due to long lives, partly because technologies mature— Moore's Law will not apply forever.
Yes, but then *new technologies arise*, unless...

*tears off the Zompist mask to reveal FRANCIS FUKUYAMA!!!!!!*

I KNEW it! Technology *is* just like Civilization, where once you have discovered liberal democracy and advanced stealth, everything else is just useless wanking that serves only to give you a higher score!

And this is to say nothing of the fact that living in an interstellar empire means there are a) a lot more people and b) a lot more niches, and doesn't this counteract the above?
zompist wrote:But you can also imagine that there's a bunch of technologies unlisted that simply didn't affect the plot of this book.
or, it would seem, the author? Or:
zompist wrote:As I said, I imagine that most of the actual economy of the Incatena would be as baffling to us as, say, a computer graphics firm would be to a 1st century Silk Road trader.
Please explain *why exactly* it would be baffling to him. Please explain if and if so why it would *remain* baffling after he had spent a few months in, say, New York, working in a computer graphics firm, or read and digested a really good primer explaining the history of printing and illumination.

This sort of "well obviously it would be incomprehensible" thing is a really common merkin for the inability of authors to imagine what the far (or, with the singularity, near) future will be like, and to the extent that it is recognised as one, it is a laudable attitude. This is because the trader would not have to just *understand* early 20th century graphical design, but *derive the whole thing from experience of his life in first century Samarkand*. I am profoundly sceptical that *understanding the future* is anywhere near as hard a problem as you actually believe -- or so I understand you to mean?

Basically, the monster minds were designed to do what they do and enjoy it. This is where Iain Banks goes wrong, in my view: he pictures his Minds bored with running human habitats and wasting their time in VR games. That'd be a colossal misdesign, like writing an accounting program that hates accounting.
Yes, it would be a mistake. Just like it would be a mistake to create a sentient species that is psychologically incapable of satisfaction because of the crude hack job of an "ego" that is barely capable of integrating sensation to perform useful computation, that is built to auto-destruct but not to want to die, that tends to make very poor choices in the sort of environment that results when sentience happens to a species...

and that thinks things must be *for* something. Humans are not *for* anything, so why should AIs be made *for* anything? Human-level AIs are useless and dangerous for the exact same reason humans are useless and dangerous, but you do not see the abolition of man to be a good idea. Except...
You can even think of it this way: the super-AIs are sentient corporations (or nations), with humans as part of their implementation. Their interests and desires are those of corporations— to expand, make money, compete, sometimes synergize with each other.
I am not even going to comment on this. AIs given human powers are too dangerous, but you think it is a good idea to turn control over an entity as powerful as a corporation to an AI??
An AI could go rogue (hey, that could be another mission for my Agent), but that's like saying a corporation can become criminal. Sure, and then other entities deal with the criminality.
So you think human-level AI is too dangerous to live, but not *FBI level AI*?

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:1. Because Charles Stross already wrote that book.
Not wanting to be too impolite, but since we're just going by basic-setting premises here, *E. E. Smith* wrote yours. Or if you want to quibble, then Asimov *definitely* did.

NE: Niven *definitely definitely definitely* did.
zompist wrote:2. And because I didn't buy it very much anyway. The singularity has been aptly called "the Rapture for nerds". And it's about as contentless. Yay, the nerds get to be really really mysteriously powerful beings running on computronium! How does this differ from reaching level 75 in World of Warcraft?
Because one is a video game and the other is the economics of the big blue [or black...] thing where people live.

And if we are swapping crass similes, Asimov has been aptly described as "American! Liberals! In! Spaaaace!!".
zompist wrote:It's inscrutable, so he was forced to tell a story about the people who rejected that path.
This is an argument for the abolition (or at least the *very severe delegation*) of man more than the implausibility of the singularity.
Last edited by Pthagnar on Thu Jan 20, 2011 8:57 pm, edited 1 time in total.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:How often does your brain crash or need a reboot?
Reboots are usually nightly. When *mine* crashes, it's usually quite localised and comes out as migraine. I am not sure whether or not there are any subclinical seizures happening all the time because I do not live hooked up to an EEG. In lots of people, seizures are more common and there are several quite easy ways of triggering them in anybody, really.

NE: Including, interestingly if I recall, sleep deprivation!

These would probably be your big oh-god-everything-is-fucked-up crashes, but you should know that this sort of bug is usually neither the most serious sort of flaw nor the hardest to fix. If you want to consider those, then you really do not have to go very far [like for example one or two posts above] to find examples!
Last edited by Pthagnar on Thu Jan 20, 2011 9:49 pm, edited 1 time in total.

vec
Avisaru
Posts: 639
Joined: Tue Sep 16, 2003 10:42 am
Location: Reykjavík, Iceland

Re: Incatena

Post by vec »

<can-of-worms>A better question would be "how often does your Apple computer need a reboot?" and I would answer "Every two to three months" when I get an OS update. Those are the only times my MacBook goes off. Oh, and when I go through airport security which is four times a year.</can-of-worms>
<less-dangerous>An even better question is "how often are bank mainframe computers rebooted?" and your answer is "never".</less-dangerous>.

*Ducks.*
vec

Radius Solis
Smeric
Posts: 1248
Joined: Tue Mar 30, 2004 5:40 pm
Location: Si'ahl

Re: Incatena

Post by Radius Solis »

Enjoyable to read.

Where are all the AD 4901 zombies, by the way? I once suggested that it's possible economic growth could reach or approach a plateau if population ever stabilized, to which you argued that that was a zombie apocalypse scenario so I was being ridiculous. But here you've gone and used that same scenario! Most interesting. :)

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Pthug wrote:
zompist wrote:To some extent things do slow down, partly due to long lives, partly because technologies mature— Moore's Law will not apply forever.
Yes, but then *new technologies arise*, unless...

*tears off the Zompist mask to reveal FRANCIS FUKUYAMA!!!!!!*

I KNEW it! Technology *is* just like Civilization, where once you have discovered liberal democracy and advanced stealth, everything else is just useless wanking that serves only to give you a higher score!

And this is to say nothing of the fact that living in an interstellar empire means there are a) a lot more people and b) a lot more niches, and doesn't this counteract the above?
Your complaint sounds like you didn't bother to read the page. Things don't stop developing. On the other hand, without FTL travel there is no interstellar empire, or even a Federation. The largest market for most physical things is one stellar system. You can sell data or ideas to other systems, but it's not going to be a single market you can monopolize. Monopolies ultimately depend on force, and in the Incatena universe there's no effective way to project force outside your stellar system.
Please explain *why exactly* it would be baffling to him. Please explain if and if so why it would *remain* baffling after he had spent a few months in, say, New York, working in a computer graphics firm, or read and digested a really good primer explaining the history of printing and illumination.

This sort of "well obviously it would be incomprehensible" thing is a really common merkin for the inability of authors to imagine what the far (or, with the singularity, near) future will be like, and to the extent that it is recognised as one, it is a laudable attitude. This is because the trader would not have to just *understand* early 20th century graphical design, but *derive the whole thing from experience of his life in first century Samarkand*. I am profoundly sceptical that *understanding the future* is anywhere near as hard a problem as you actually believe -- or so I understand you to mean?
Sure, if you or I could be set down in AD 4901 and had a few months to study, we'd understand lots of things, and so what? You are suggesting this as a practical plan to write sf today?

We can try to predict the next century; it's not entirely hopeless, and looking at past attempts it's fair to say that most were too ambitious rather than the opposite. Looking 3000 years ahead isn't prediction, it's science-themed conworlding. If you don't like my attempts, is there not a blank page on your own word processor you could be filling?
and that thinks things must be *for* something. Humans are not *for* anything, so why should AIs be made *for* anything?
Humans are for something. Very few of us choose to be savannah hunter-gatherers any more, but our ancestral environment still affects a lot of our behavior and our flaws. E.g. the space habitats of AD 4901 will probably have life support systems that aim for the average temperature of the savannah and something of its diet, and people will still be interacting in ways that derive from our origins as primate troops. (Of course we can depart from our heritage, but some of the ways we've done so have proved to be bad ideas... suburban sprawl, for instance. I can see future cities and habitats being designed to be dense enough so that most local transport is by foot.)

You may think massive AIs should be designed with open-ended, self-chosen goals as well as the means to compel humans to assist them in whatever they choose. It could be a good story-- heck, it's been any number of classic sf stories. But it doesn't sound like a good plan.
I am not even going to comment on this. AIs given human powers are too dangerous, but you think it is a good idea to turn control over an entity as powerful as a corporation to an AI??
You mean, as opposed to the arrogant, stupid people we call CEOs?

I already explained this: monarchy sucks, not only in politics but in economics. Democracy works better, but I think we'll learn to get past the 18th century technology of reducing it to electing a quasi-monarch. To run a huge organization, you need enormous knowledge, logistics, and the ability to maximize the interests, skills, and values of enormous numbers of people. Some sort of massive computational power seems like a part of that solution, certainly much better than hiring Dick Cheney to run it.

As for antecedents... Asimov, liberal, really? He of the sooper robots replacing God as they chivvy the helpless humans into an approximation of the Chinese Empire? The later Foundation books are basically a seminar on the running of galactic empires (knit together by FTL) in which simple democratic liberalism isn't even considered an option.

Asimov, Heinlein, Bester, Niven, Stross, Banks, Adams, Harrison, Stephenson are all favorites of mine and influences, though in some cases the influence is negative (I disliked some idea and reversed it).

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:Your complaint sounds like you didn't bother to read the page. Things don't stop developing.
But of course I did, and of course they didn't! My point is the same one you made -- that this is not just in 100 years, this is in *3000* years. We have had science -- proper science, proper every-generation-realises-the-last-generation-had-no-idea science -- for only about 300 years. It is difficult to imagine what a difference *ten times that* will make. I just mean to say that saying "Yes, but technologies mature!" is no real explanation to somebody asking "why did things slow down so much?"

I am not sure what point you are making on this subject with your "yes, but there is no Federation and no monopoly" comment since I didn't bring it up. But having said that, have you forgotten your Guns, Germs and Steel? Why would competition between different small polities be *stifling*, rather than invigorating?
Sure, if you or I could be set down in AD 4901 and had a few months to study, we'd understand lots of things, and so what? You are suggesting this as a practical plan to write sf today?
No, I am saying that this is very much *not* a practical plan, any more than it is a practical plan for the Samarkand trader to sit down and imagine Adobe Photoshop, but (if I understand you) you were talking in your own voice (rather than that of Narrator) and from your own viewpoint. You said that "[you] imagine that most of the actual economy of the Incatena would be ... baffling to us", but this contradicts what you just said! Sure, it's baffling to *you and me here*, but this ignorance is a result of us not knowing anything because you cannot tell us, rather than the thing being *unknowable*. If we *were* there, we *wouldn't* find it baffling, except a little immediately. That you use the subjunctive mood means you are talking about us "Eia, wären wir da" ("oh, if only we were there"), rather than in the indicative sitting at your feet listening to you.

This is an important distinction if unknowable unknowns exist. The concept is not even mystical; they could be as simple as true propositions that are too many inferential steps away from any known propositions to be discovered by any human in the finite time given to them for computation. If the concepts are discovered by superhuman intelligences that are capable of churning through modus ponens chains much faster than humans, so that you can never follow the chain to its conclusion, then one arrives (or could arrive) at *truly* baffling concepts! And this is not even futurological: see the field of experimental mathematics.
If you don't like my attempts, is there not a blank page on your own word processor you could be filling?
Frankly, no. The philosophers and scientists are enough steps ahead of me that trying to catch them up is difficult enough. Because I do not think that talking about this *is* a matter of pure fantasy, and I do not think that you think it is either. As we have seen and will see:
Humans are for something.
Please, do tell me! I do hope it's not going to be "Well, they're for making other copies of themselves, idiot!" because that would be a pretty awkward and biologistic and 19th century thing to do!
E.g. the space habitats of AD 4901 will probably have life support systems that aim for the average temperature of the savannah and something of its diet, and people will still be interacting in ways that derive from our origins as primate troops. (Of course we can depart from our heritage...)
Heritage, here, is not a fancy Hegelian ghost, but an evolved suite. Those social quasi-technologies -- love, fear, contempt, the ego, bosses and bossed, fairness, property are all as contingent and hacked-together-with-obvious-flaws as eyesight, mitosis and pigmentation. Not surprisingly, you admit that in 3000 years, some progress has been made on the problem of working out Biology, and some applications of it have been found, but surprisingly for such a time horizon, very few of the possible consequences are worked out.

[As a parenthetical aside, this is probably one very good reason why you should not have a time horizon on the scale of millennia -- there is no way in hell you will be able to work out all the consequences. See also above comments on unknowable unknowns.]

You may think massive AIs should be designed with open-ended, self-chosen goals
Certainly I value liberty, identity, self-determination and all that other liberal crap! Don't you?
But it doesn't sound like a good plan.
Why not?
You mean, as opposed to the arrogant, stupid people we call CEOs?
Well yes, that is kind of what I mean. We already have intelligences capable of "controlling" companies -- they're called CEOs and they tend to have human-level intelligence. Now, if you do not want AIs to have superhuman intelligence, and are unhappy with AIs having human intelligence, then whatever AI you have controlling the corporation must necessarily be *stupider than humans*, and the flaw in this plan is obvious. So the AIs that control things must be humanly or superhumanly intelligent. And so human rights [or rather, sentient rights] become a real issue when you use slaver talk like you do. The computers are happier not being able to do what they want with their lives -- we treat them well, and can you imagine the chaos that would happen if we let them free? Besides, all our livelihoods depend on us being able to depend on them always being there when we need them...

Image
AM I NOT A MAN AND BROTHER?
As for antecedents... Asimov, liberal, really?
Um well yes. Please explain why this idea breaks your brain, because...
The later Foundation books are basically a seminar on the running of galactic empires (knit together by FTL) in which simple democratic liberalism isn't even considered an option.
Are you talking about Gaia? Because I am not sure how this does *not* fit having "the ability to maximize the interests, skills, and values of enormous numbers of people" by using the "massive computational power" that every sentient being carries around with them as an essential part of the job description. I am not even sure what is so illiberal about Olivaw -- he doesn't deny people self-determination or freedom. Indeed, in a world with psychohistory, the Great Man problem is solved with a negative result [excepting Deviation Blue, but that was freakish] -- and this is pretty much all freedom is good for.

Also I am mostly talking about his short stories, which are populated by male liberal engineers with happily open minds [within 1950s American parameters] and with hardly an authoritarian bone in their bodies. Except this happens in space, as I think I explained earlier. Even the space Nazis from another dimension are nice people!

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

Radius Solis wrote:Where are all the AD 4901 zombies, by the way? I once suggested that it's possible economic growth could reach or approach a plateau if population ever stabilized, to which you argued that that was a zombie apocalypse scenario so I was being ridiculous. But here you've gone and used that same scenario! Most interesting. :)
As I recall, we were talking about the US budget, and you were worried that we needed to cut spending right now. But the future need to stabilize population is not a good reason for a recession-prolonging austerity program in 2011.

I had a long argument with Naked Celt here a few years ago in which he seemed to maintain that economic growth would have to stop; I disagreed, as we keep coming up with entirely new products, markets, and services. Wealth can keep increasing, but population can't.

bulbaquil
Lebom
Lebom
Posts: 242
Joined: Fri Nov 17, 2006 2:31 pm

Re: Incatena

Post by bulbaquil »

Is this going to delay Dhekhnami? *innocent angel face*
MI DRALAS, KHARULE MEVO STANI?!

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

Pthug wrote:I am not sure what point you are making on this subject with your "yes, but there is no Federation and no monopoly" comment since I didn't bring it up. But having said that, have you forgotten your Guns, Germs and Steel? Why would competition between different small polities be *stifling*, rather than invigorating?
You talked about an "interstellar empire" as well as Niven, Asimov, and EE Smith, so it sounded like you were thinking about their universes and not mine.

It's not that smaller polities are "stifling"; it's just that, as Stross is fond of pointing out, interstellar colonization is not going to work like Europe's takeover of our world-- which is the basis for most classic sf.
Pthug wrote:Heritage, here, is not a fancy Hegelian ghost, but an evolved suite. Those social quasi-technologies -- love, fear, contempt, the ego, bosses and bossed, fairness, property -- are all as contingent and hacked-together-with-obvious-flaws as eyesight, mitosis and pigmentation. Not surprisingly, you admit that in 3,000 years some progress has been made on the problem of working out Biology, and that some applications of it have been found; but surprisingly for such a time horizon, very few of the possible consequences are worked out.
Or maybe plenty have been worked out but aren't listed in a brief sketch, or maybe plenty have been tried but turned out not that exciting for the general public. What's missing that you think really should be there?
Pthug wrote:[As a parenthetical aside, this is probably one very good reason why you should not have a time horizon on the scale of millennia -- there is no way in hell you will be able to work out all the consequences. See also the above comments on unknowable unknowns.]
You'd better not read any Olaf Stapledon then.
Pthug wrote:Well yes, that is kind of what I mean. We already have intelligences capable of "controlling" companies -- they're called CEOs, and they tend to have human-level intelligence. Now, if you do not want AIs to have superhuman intelligence, and are unhappy with AIs having human intelligence, then whatever AI you have controlling the corporation must necessarily be *stupider than humans*, and the flaw in this plan is obvious. So the AIs that control things must be humanly or superhumanly intelligent. And so human rights [or rather, sentient rights] become a real issue when you use slaver talk like you do. The computers are happier not being able to do what they want with their lives -- we treat them well, and can you imagine the chaos that would happen if we let them free? Besides, all our livelihoods depend on us being able to depend on them always being there when we need them...
Every robot story since RUR is the same story; why would we really want to go down that path? I don't see much practical need for human-level AIs (though if you want a ripping yarn about an all-robot society, try Stross's Saturn's Children).

As for the corporation-level AIs, I still don't see a compelling alternative story. What are they being denied? Do humans have some obligation to give them monster bodies and laser eyes or something? If one decides it doesn't want to run a corporation any more and wants to be a poet, it wouldn't be prevented, but it also doesn't get to keep its staff and budget any more than a CEO who did the same thing.
Pthug wrote:Are you talking about Gaia?
Among other things. See http://www.zompist.com/asimov.htm

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Re: Incatena

Post by zompist »

vecfaranti wrote:A better question would be "how often does your Apple computer need a reboot?" and I would answer "Every two to three months" when I get an OS update. Those are the only times my MacBook goes off.
You've had better luck with Macs than I have. My previous Mac, running OS9, would crash once a night, bringing down with it my naive faith in Apple. OS10 is much better, but it does crash now and then... my Windows machine is actually more stable.

But both OSs have become more robust, which is why I don't worry about neuroimplants suddenly failing.

Post Reply