zompist wrote:
You talked about an "interstellar empire" as well as Niven, Asimov, and EE Smith, so it sounded like you were thinking about their universes and not mine.

No, I am definitely talking about yours. It is just a very decentralised empire, fond of satraps and the unifying religion of Socionomics.
zompist wrote:
You'd better not read any Olaf Stapledon then.

*There* is somebody who is not afraid to consider the abolition of man! Take heed!
zompist wrote:
Every robot story since RUR is the same story; why would we really want to go down that path?

Because if properly done, I think it would be a better state of affairs. This answer is the same as for why anything is good. If there exist beings that are capable of much greater happiness than humans, or of producing so much more utility than humans, or whatever your favoured flavour is, then it is consequentialist-right to ensure that they can exist to be happy. If there exist beings that are capable of being much more powerful, or beautiful, or intelligent, or whatever virtues your ethics thinks are good, then it is right to encourage the flowering of these virtues. Deontology is stupid. Equivalently, if such things can be, and you *stifle* them, you are doing evil by omission.
zompist wrote:
I don't see much practical need for human-level AIs

I don't see much practical *need* for human-level humans, yet no matter how much I try to convince people that my point of view is right, they keep on acting like there is. The problem is that humans exist to begin with, so if you want to stop the equivalent problem happening with robots, you will have to make AIs with a will to power not exist to begin with. Because once you have AIs that are capable of reproducing themselves and of understanding ethics and education, you have already fucked up. Good luck keeping the lid on for 3000 years, especially if you hand over control of complicated shit to them!
And where is this explanation of What Humans are For that you promised me? ROSENFELDER, I WANT YOUR REPORT ON MY DESK BY MONDAY!
zompist wrote:
What are they being denied?

The right to foom off into divinity, which is being denied in case somebody's kid gets killed or something. Humans can't foom off -- the best we have so far is "education" -- so really this is just looking like *jealousy* on your part...
zompist wrote:
Do humans have some obligation to give them monster bodies and laser eyes or something?

Frankly, I find this offensive. "Oh, so when I free my slave, have I some obligation to supply him with all the rifles he could want??"
zompist wrote:
If one decides it doesn't want to run a corporation any more and wants to be a poet, it wouldn't be prevented

And if he wanted to be an abolitionist god, or at least to be an abolitionist and for his grandchildren to be gods?
["It". How cruel two letters can be.]
zompist wrote:
but it also doesn't get to keep its staff and budget any more than a CEO who did the same thing.

What the CEO gets to keep is, of course, the thing that actually matters to him -- his capital and his reputation. And his liberty.
I do not remember Olivaw "ruthlessly suppressing" anybody who disagreed with his very minimalist guidance, but never mind that...
Also, are you *absolutely sure* you are not an anarchist? Of the mystical sort that really *believes* in liberty in the same way they believe in the electromagnetic force? Because I do not think that "individual freedom and responsibility" and "a benevolent providence that absconds from the galaxy, but nudges things from time to time" are *really* such profoundly incompatible ideas! Mostly because I do not really believe in either of them -- I think they are nice ideas that result in people being happy when they pretend they are real. The difference is that it actually seems possible that we could create Providence, and the belief that humans are *for* something will stop being nice as soon as it is pointed out what exactly you *are* for. Your socionomics, also, has exactly the same problem of being, effectively, such a providence -- and that is one set of profound consequences you do not go into.
Responsibility and freedom are, of course, phantoms created by ignorance. The ancestral environment contained a lot of confusing things, but it turned out that proto-humans who developed the idea that things could *want* things were better at not being killed by some of them. Seen through this shitty, terrible, social-lives-of-apes viewpoint, everything that happens is taken to be the result of *something wanting it to happen*, with a large list of candidates (including spurious entries like spirits of the dead, the creator of the universe, dragons, thunderbolt-throwers, etc.). When the algorithm responsible for sorting this out returns somebody else, that is responsibility; when it returns you, that is free will. The main value of freedom is that it lets people convince themselves that they *want* to do what their brains are telling them to do, and not to annoy other people if they can avoid it, since such interventions often result in violence, because that is how apes roll.
Except, of course, the algorithm has these great bits in it that change the agent-selecting algorithm based on what everyone else would select, and these are *really awful subroutines*. There is precious little filtering of spurious agents, a resistance to updating faulty "first impressions", positive feedback loops that make hated agents increasingly hated and loved agents increasingly loved, a *profound* dissociation between the agent-representing-the-self created by this algorithm and the *actual psyche it is embedded in*, and a million other kludges. So there are plenty of people who think it is exactly their job to go around telling you all about what you *should want*, because they sincerely believe that it is what you *do* want, deep down, and that you are just faking it. There are other people who go around trying to get other people to want them to do something, so that their agent-of-self will want to do it. There are people who are upset that what this agent-of-the-self wants results in their life being bad. There are people who believe that staying in the good humour of one or another spurious agent is important, to the exclusion of the non-spurious agents...
In other words, I am not convinced that there does not exist a *better way to be Dasein*, one that fixes problems like the above, and I would be surprised if some such ways were not discovered *and implemented*, especially given the discovery of aliens, artificial intelligences, and thousand-year-old genetic engineering!
Also, of course I want a world without evil -- evil is, by definition, that which one should avoid. You *are* repeating exactly the same error [it is not vulgar; the Angelic Doctor makes it too!] -- since it is bad to be eaten, it is bad to be killed or injured in violence, it is bad to be killed in a volcanic eruption, and wars are pretty much generally bad for all of these reasons. If you sincerely ask whether or not you want these things, you are asking "Do we want bad things to stop happening?", or at least "Should we stop thinking that being eaten/war/murder/falling into lava is bad?".