zompist wrote: Your complaint sounds like you didn't bother to read the page. Things don't stop developing.
But of course I did, and of course they didn't! My point is the same one you made -- that this is not just 100 years out, it is *3000* years out. We have had science -- proper science, proper every-generation-realises-the-last-generation-had-no-idea science -- for only about 300 years. It is difficult to imagine what a difference *ten times that* will make. I just mean that saying "Yes, but technologies mature!" is no real explanation to somebody asking "why did things slow down so much?"
I am not sure what point you are making on this subject with your "yes, but there is no Federation and no monopoly" comment, since I didn't bring it up. But having said that, have you forgotten your *Guns, Germs, and Steel*? Why would competition between different small polities be *stifling* rather than invigorating?
zompist wrote: Sure, if you or I could be set down in AD 4901 and had a few months to study, we'd understand lots of things, and so what? You are suggesting this as a practical plan to write sf today?
No, I am saying that this is very much *not* a practical plan, any more than it is a practical plan for the Samarkand trainer to sit down and imagine Adobe Photoshop. But (if I understand you) you were talking in your own voice (rather than the Narrator's) and from your own viewpoint. You said that "[you] imagine that most of the actual economy of the Incatena would be ... baffling to us", but this contradicts what you just said! Sure, it's baffling to *you and me here*, but that ignorance comes from our not knowing anything, because you cannot tell us, rather than from the thing being *unknowable*. If we *were* there, we *wouldn't* find it baffling, except briefly at first. That you use the subjunctive mood means you are talking about us "Eia, wären wir da" ("oh, if only we were there"), rather than in the indicative, sitting at your feet listening to you.
This is an important distinction if unknowable unknowns exist. The concept is not even mystical; they could be as simple as true propositions that are too many inferential steps away from any known propositions to be discovered by any human in the finite time given to them for computation. If such truths are discovered by superhuman intelligences capable of churning through modus ponens chains much faster than humans, so that you can never follow a chain to its conclusion, then one arrives (or could arrive) at *truly* baffling concepts! And this is not even futurological: see the field of experimental mathematics.
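To make that concrete, here is a toy sketch (my own illustration, nothing from your page; `forward_chain`, `rules`, and the q-propositions are all made up): a sound reasoner that applies modus ponens under a step budget never reaches a truth that sits far enough down the chain.

```python
# Toy model (hypothetical, for illustration only): propositions are
# strings, and each rule maps a premise p to a conclusion, encoding the
# implication "p -> q". Deriving q from p and "p -> q" is one
# application of modus ponens.

def forward_chain(facts, rules, budget):
    """Close `facts` under modus ponens, spending at most `budget` rounds."""
    known = set(facts)
    for _ in range(budget):
        new = {rules[p] for p in known if p in rules} - known
        if not new:
            return known        # full closure reached within budget
        known |= new
    return known                # budget exhausted; deeper truths undiscovered

N = 100_000                                      # chain length
rules = {f"q{i}": f"q{i+1}" for i in range(N)}   # q0 -> q1 -> ... -> qN

reachable = forward_chain({"q0"}, rules, budget=1_000)
print(f"q{N}" in reachable)  # False: qN is true, provable, and out of reach
```

A mind with a bigger budget sees qN effortlessly; we never do. That, and nothing spookier, is all "unknowable unknown" needs to mean.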
zompist wrote: If you don't like my attempts, is there not a blank page on your own word processor you could be filling?
Frankly, no. The philosophers and scientists are enough steps ahead of me that catching up with them is difficult enough -- and catching up matters, because I do not think that talking about this *is* a matter of pure fantasy, and I do not think that you think it is either. As we have seen and will see:
zompist wrote: Humans are for something.
Please, do tell me! I do hope it's not going to be "Well, they're for making more copies of themselves, idiot!", because that would be a pretty awkward, biologistic, 19th-century thing to say!
zompist wrote: E.g. the space habitats of AD 4901 will probably have life support systems that aim for the average temperature of the savannah and something of its diet, and people will still be interacting in ways that derive from our origins as primate troops. (Of course we can depart from our heritage...)
Heritage, here, is not a fancy Hegelian ghost, but an evolved suite. Those social quasi-technologies -- love, fear, contempt, the ego, bosses and bossed, fairness, property -- are all as contingent and hacked-together-with-obvious-flaws as eyesight, mitosis, and pigmentation. Not surprisingly, you admit that in 3000 years some progress has been made on the problem of working out biology, and some applications of it have been found; but, surprisingly for such a time horizon, very few of the possible consequences are worked out.
[As a parenthetical aside, this is probably one very good reason why you should not have a time horizon on the scale of millennia -- there is no way in hell you will be able to work out all the consequences. See also the comments on unknowable unknowns above.]
zompist wrote: You may think massive AIs should be designed with open-ended, self-chosen goals
Certainly I value liberty, identity, self-determination and all that other liberal crap! Don't you?
zompist wrote: But it doesn't sound like a good plan.
Why not?
zompist wrote: You mean, as opposed to the arrogant, stupid people we call CEOs?
Well yes, that is kind of what I mean. We already have intelligences capable of "controlling" companies -- they're called CEOs, and they tend to have human-level intelligence. Now, if you do not want AIs to have superhuman intelligence, and are unhappy with AIs having human intelligence, then whatever AI you put in control of the corporation must necessarily be *stupider than humans*, and the flaw in that plan is obvious. So the AIs that control things must be humanly or superhumanly intelligent. And then human rights [or rather, sentient rights] become a real issue when you use slaver talk like you do: "The computers are happier not being able to do what they want with their lives -- we treat them well, and can you imagine the chaos if we set them free? Besides, all our livelihoods depend on us being able to depend on them always being there when we need them..."
AM I NOT A MAN AND A BROTHER?
zompist wrote: As for antecedents... Asimov, liberal, really?
Um, well, yes. Please explain why this idea breaks your brain, because...
zompist wrote: The later Foundation books are basically a seminar on the running of galactic empires (knit together by FTL) in which simple democratic liberalism isn't even considered an option.
Are you talking about Gaia? Because I am not sure how that does *not* fit having "the ability to maximize the interests, skills, and values of enormous numbers of people" by using the "massive computational power" that every sentient being carries around with them as an essential part of the job description. I am not even sure what is so illiberal about Olivaw -- he doesn't deny people self-determination or freedom. Indeed, in a world with psychohistory the Great Man problem is solved with a negative result [excepting Deviation Blue, but that was freakish], and that is pretty much all freedom is good for.
Also, I am mostly talking about his short stories, which are populated by liberal male engineers with happily open minds [within 1950s American parameters] and hardly an authoritarian bone in their bodies. Except that this happens in space, as I think I explained earlier. Even the space Nazis from another dimension are nice people!