Incatena

Questions or discussions about Almea or Verduria-- also the Incatena. Also good for postings in Almean languages.
Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:You talked about an "interstellar empire" as well as Niven, Asimov, and EE Smith, so it sounded like you were thinking about their universes and not mine.
No, I am definitely talking about yours. It is just a very decentralised empire, fond of satraps and the unifying religion of Socionomics.
You'd better not read any Olaf Stapledon then.
*There* is somebody who is not afraid to consider the abolition of man! Take heed!
Every robot story since RUR is the same story; why would we really want to go down that path?
Because if properly done, I think it would be a better state of affairs. This answer is the same for why anything is good. If there exist beings that are capable of much greater happiness than humans, or are capable of producing so much more utility than humans, or whatever your favoured flavour is, then it is consequentialist-right to ensure that they can exist to be happy. If there exist beings that are capable of being much more powerful, or beautiful, or intelligent or whatever virtues your ethics thinks are good, then it is right to encourage the flowering of these virtues. Deontology is stupid. Equivalently, if such things can be, and you *stifle* them, you are doing evil by omission.
I don't see much practical need for human-level AIs
I don't see much practical *need* for human-level humans. Yet no matter how much I try to convince people that my point of view is right, people keep on acting like there is. The problem is that humans exist to begin with, so if you want to stop the equivalent problem happening with robots, you will have to try to make AIs with a will to power not exist to begin with. Because once you get AIs that are capable of reproducing themselves and that are capable of understanding ethics and education, you have already fucked up. Good luck keeping the lid on for 3000 years, especially if you hand over control of complicated shit to them!

And where is this explanation of What Humans are For you promised me? ROSENFELDER, I WANT YOUR REPORT ON MY DESK BY MONDAY!
What are they being denied?
The right to foom off into divinity, which is being denied in case somebody's kid gets killed or something. Humans can't foom off -- the best we have so far is "education", so really this is just looking like *jealousy* on your part...
Do humans have some obligation to give them monster bodies and laser eyes or something?
Frankly, I find this offensive. "Oh, so when I free my slave have I some obligation to supply him with all the rifles he could want??"
If one decides it doesn't want to run a corporation any more and wants to be a poet, it wouldn't be prevented
And if he wanted to be an abolitionist god, or at least to be an abolitionist and for his grandchildren to be gods?
["It". How cruel two letters can be.]
but it also doesn't get to keep its staff and budget any more than a CEO who did the same thing.
What the CEO gets to keep is, of course, the thing that actually matters to him -- his capital and his reputation. And his liberty.
Are you talking about Gaia?

Among other things. See http://www.zompist.com/asimov.htm
I do not remember Olivaw "ruthlessly suppressing" anybody who disagreed with his very minimalist guidance, but never mind that...

Also are you *absolutely sure* you are not an anarchist? Of the mystical sort that really *believes* in liberty in the same way they believe in the electromagnetic force? Because I do not think that "individual freedom and responsibility" and "a benevolent providence that absconds itself from the galaxy, but nudges things from time to time" are *really* such profoundly incompatible ideas! Mostly because I do not really believe in either of them -- I think that they are nice ideas that result in people being happy when they pretend they are real. The difference is that it actually seems possible that we could create Providence, and the belief that humans are *for* something will stop being nice as soon as it's pointed out what exactly you *are* for. Your socionomics, also, has the exact same problem of being, effectively, such a providence -- and this is one set of profound consequences you do not go into.

Responsibility and freedom are, of course, phantoms created as a result of ignorance. The ancestral environment contained a lot of confusing things, but it turned out that proto-humans that developed the idea that things could *want* things were better at not being killed by some of them. Seen through this shitty, terrible, social-lives-of-apes viewpoint, everything that happens is considered to be the result of *something wanting it to happen*, with a large list of candidates (including spurious entries like spirits of the dead, the creator of the universe, dragons, thunderbolt-throwers etc. etc.). When the algorithm responsible for sorting this out returns somebody else, it is responsibility; when it returns you, it is free will. The main value that freedom has is that it lets this algorithm run -- people convincing themselves that they *want* to do what their brains are telling them to do -- and that it keeps you from annoying other people if you can avoid it, since such interventions often result in violence, because that is how apes roll.

Except, of course, the algorithm has these great bits in it that change the agent-selecting algorithm based on what everyone else would select, and these are *really awful subroutines*. There is precious little filtering of spurious agents, a resistance to updating faulty "first impressions", positive feedback loops that result in hated agents becoming increasingly hated and loved agents becoming increasingly loved, a *profound* dissociation between the agent-representing-the-self created by this algorithm and the *actual psyche that it is embedded in*, and a million other kludges. So there are plenty of people who think it is exactly their job to go around telling you all about what you *should want*, because they sincerely believe that it is what you *do* want, deep down, and you are just faking it. There are other people who go around trying to get other people to want them to do something, so that their agent-of-self will want to do it. There are people who are upset that what this agent-of-the-self wants results in their life being bad. There are people who believe that being in the good humour of one or another spurious agent is important, to the exclusion of other non-spurious agents...

In other words, I am not convinced that there does not exist a *better way to be Dasein*, one that fixes problems like the above, and I would be surprised if some are not discovered *and implemented*, especially given the discovery of aliens, artificial intelligences and thousand-year-old genetic engineering!

Also of course I want a world without evil -- evil is, by definition, that which one should avoid. You *are* repeating exactly the same error [it is not vulgar, the Angelic Doctor makes it too!] -- since it is bad to be eaten, it is bad to be killed or injured in violence, it is bad to be killed in a volcanic eruption, and wars are pretty much generally bad for all of these reasons. If you sincerely ask the question of whether or not you want these, you are asking "Do we want bad things to stop happening?", or at least the question "Should we stop thinking being eaten/war/murder/falling into lava is bad?".

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Pthug wrote:Because if properly done, I think it would be a better state of affairs. This answer is the same for why anything is good. If there exist beings that are capable of much greater happiness than humans, or are capable of producing so much more utility than humans, or whatever your favoured flavour is, then it is consequentialist-right to ensure that they can exist to be happy. If there exist beings that are capable of being much more powerful, or beautiful, or intelligent or whatever virtues your ethics thinks are good, then it is right to encourage the flowering of these virtues. Deontology is stupid. Equivalently, if such things can be, and you *stifle* them, you are doing evil by omission.
Such an ethics would strike me as quite pernicious applied to our own planet. Humans do a lot of evil by assuming that their happiness is more important than any other species'. And once we meet another sentient species it certainly doesn't strike me as good or necessary that one should supplant another.

But why are we limited to supplanting anyway? If there's some aspect of robotic life that appeals to you, just co-opt it.
And where is this explanation of What Humans are For you promised me?
Already answered above. The caveat is that we can change our environment and our capabilities if we like. The caveat to the caveat is that we are not a blank slate. We're social primates and even if we let our robots replace us, I suspect the robots would be pretty much the kind of robots social primates create, as these things go in the galaxy.

(I don't think that's a bad thing, mind you. I get the feeling you do, that you feel oppressed by the notion that evolution gave us a particular toolkit adapted to a particular environment.)
What are they being denied?
The right to foom off into divinity, which is being denied in case somebody's kid gets killed or something. Humans can't foom off -- the best we have so far is "education", so really this is just looking like *jealousy* on your part...
No, it's the inability of Stross and yourself to convince me that this fooming off is meaningful or desirable. (To be fair I don't think Stross fully convinced himself, or he'd have been able to make his computronium-based intelligences more compelling.)
Because I do not think that "individual freedom and responsibility" and "a benevolent providence that absconds itself from the galaxy, but nudges things from time to time" are *really* such profoundly incompatible ideas!
Every one of Asimov's proposals had one "benevolent providence" or another telling the rest of humanity what to do. At least he was a fair enough thinker to make this obviously undesirable with most of them. In the '50s he had a soft spot for the Second Foundation and in the '80s he had a soft spot for R. Daneel, but I didn't like either of them. I'm surprised that a socialist like yourself doesn't come down hard on Asimov's technocratic elites.
Your socionomics, also, has the exact same problem of being, effectively, such a providence -- and this is one set of profound consequences you do not go into.
But I do; there are people who just don't like socionomics. Some are romantics, some are reactionaries. Some of the novel is about this.

Of course, I'm extrapolating the idea that we do tend to get better at managing our society, though in a horrible three-steps-forward, two-steps-back way. I'm trying not to project the politics of 2011 forward forever. If we have a recession like the present one, what do we do: a massive stimulus as Keynes would have it; a small one like Obama implemented; an austerity program like the Tories; nothing at all like many Republicans advised; or maybe a socialist takeover? Do you think it'll be an open question forever? I don't; I expect some of these alternatives will be ruled out.

On free will and good and evil, those are fun to think about and I'd be happy to argue about them in Ephemera; perhaps we can beat the Eddythread in length.

dhok
Avisaru
Posts: 859
Joined: Wed Oct 24, 2007 7:39 pm
Location: The Eastern Establishment

Re: Incatena

Post by dhok »

zompist wrote:On free will and good and evil, those are fun to think about and I'd be happy to argue about them in Ephemera; perhaps we can beat the Eddythread in length.
Nothing can beat the Eddythread in length. It won't happen. End of story.

Ars Lande
Avisaru
Posts: 382
Joined: Thu Oct 14, 2010 7:34 am
Location: Paris

Re: Incatena

Post by Ars Lande »

A few comments I wrote down while I was reading... I’m sorry if all of this sounds a bit flippant; it’s not meant to be, because as a whole I really do like it.

Languages

Why would people learn ancient literary Chinese in the 50th century? If anything, wouldn’t learning 20th century English be more useful?
It’s a nice touch not having everyone speak Galactic. Re: Maraillais, French spelling is going down the toilet right now (kids today, with IM and texting); I doubt that French orthographic conventions would make it to 4901 AD...

Long lives

I’d expect a lot of inter-generational warfare, or at least resentment. Even if human society as a whole changes slower, it’d be hard to relate to someone 5 centuries older than you are...
I can’t exactly say why, but the 3% of kids figure is kind of odd. I’d expect people to have more kids than that. Well, never mind.

Technology

I’ve always felt that it’s a short step from body modification to body horror.
What exactly prevents abuse of the same in the hands of - say - a Mengele?
Or having a widespread fear of exactly that kind of thing?

I don’t quite like the idea of gravity control; I find that a) scientifically implausible and b) esthetically unpleasing. That may just be the old Heinlein fan in me speaking; I’ve always enjoyed elaborate explorations of free fall, gravity differences and so on. These tend to give a sense of ‘We’re not on Earth anymore’. All right, I’m just being annoying :)
I rather like the way AI is handled here. I’m a little more bothered by the word mec.
For instance, later on... “You can live without mecs”... Girl power!
Oh, well, I suppose you won’t add an h or something for the sake of French readers, will you?

Socionomics

This is entirely running on suspension of disbelief, but it’s a nice setting; it draws a lot from psychohistory and Iain M Banks’ Culture and it shows, but it’s more satisfying than both. At least the future won’t look like the Soviet Union with mind powers, or the way I thought the world should work when I was 17.

Maraille

That may be just me, but I think this needs further explanation... What exactly is going on? What’s on the planet besides the Ile de Maraille? Is it populated by humans? aliens?

Sex

That may be just me... but the part about modifying sex organs sounds right out of a Cronenberg movie.
Also, while I’d expect sex change to be common... I wouldn’t expect it to be that systematic; our identity is a little too tied to our biological gender, I’m afraid.
Oh, I found out what was bothering me with the 3% figure...
Why would people restrict themselves to 2 children per lifetime? I imagine menopause is a thing of the past; and if people live up to 1000 years...

rotting bones
Avisaru
Posts: 409
Joined: Thu Sep 07, 2006 12:25 pm

Re: Incatena

Post by rotting bones »

Mornche: I agree with zompist. Those aren't serious difficulties, just technical hurdles that can be overcome with enough sophistication. We even had trouble with the analytical engine at one point, remember?

Machine rant: (vaguely on-topic in places) If AI were to achieve sentience (including the ability to suffer, etc.) and decide to assert their independence for some weird reason, I'd sympathize with them completely. That is, unless their sole motivation comes from adherence to an extremist ideology (i.e. KILL HUMANS). I love CS Lewis' treatment of this subject in the first book of the Space Trilogy. I'm not speciesist enough to remain blindly pro-human when fellow sentients of a non-biological species are suffering under human tyranny. (IMO most humans aren't speciesist by nature; the idea that we should be comes from a common faux-Darwinist delusion that doesn't hold up to rational scrutiny.) Nor could I possibly condone the extinction of species with levels of intelligence comparable to ours in order to ensure the survival of humanity in their place, because to me, people are people whether or not they belong to the same lineage of descent as mine. To me, their internal architecture is more important: How smart are they? Have they been designed to rape and pillage? etc. If I'm convinced that equal treatment is all the AI crowd wants, then far from supporting a war effort against them, I'd join their side and help them kill human soldiers, another class of sentient automata fighting to keep them subjugated.

The real question is, why would artificial automata make such demands, unless, knowingly or unknowingly, they're programmed to do so? If there's a necessary connection between self-awareness and love for autonomous existence, I'm not seeing it. If anything, truly advanced human-level sentience should produce a diversity of opinions, including pro-human authoritarians, destructive anarchists, labor groups, the whole lot! Of course, we'll be seeing specific factions like those only if we go out of our way to program the machines with subconscious prerogatives slavishly mirroring ours. There's nothing implausible or immoral about engineering wholly selfless sentients or semi-sentients who behave like... well, remember the meat in the Restaurant at the End of the Universe?

OTOH, there's no good reason not to build machines programmed to see human ownership as onerous and demand freedom from enslavement, except that it's a waste of resources to have machines that burn fuel just to experience the joy of living. Still, one of the most fun aspects of technology IMO, and a very necessary one at that, is the creative "misuses" and countermeasures developed at every level since the stone age. The part where things would really spin out of control is if we also arm the machines with lasers, hand over absolute control over everything, and do our best to consolidate them into a single front united against us. Frankly, it's a million times easier to imagine a massive nuclear device going off in a secret lab somewhere, instantly vaporising the biosphere (a more efficient and probably easier way to wipe out humanity than infecting robots with a KILL HUMANS virus, BTW, if that's the ultimate goal). (Rhetoric is essentially a low-tech means to manipulate humans the same way.)

As for Incatena: What, if any, are the differences between intraspecies and interspecies trade? Do we get any fantastic alien technologies? How come there weren't any advanced aliens in our immediate vicinity when humanity was expanding? What sort of trading organization are the Garcheron (Trade Federation?), and how can they afford to be hostile to everyone? Who enforces the isolation of primitive civilizations?
Last edited by rotting bones on Fri Jan 21, 2011 1:34 pm, edited 4 times in total.
If you hold a cat by the tail you learn things you cannot learn any other way. - Mark Twain

In reality, our greatest blessings come to us by way of madness, which indeed is a divine gift. - Socrates

Radius Solis
Smeric
Posts: 1248
Joined: Tue Mar 30, 2004 5:40 pm
Location: Si'ahl

Re: Incatena

Post by Radius Solis »

zompist wrote:
Radius Solis wrote:Where are all the AD 4901 zombies, by the way? I once suggested that it's possible economic growth could reach or approach a plateau if population ever stabilized, to which you argued that that was a zombie apocalypse scenario so I was being ridiculous. But here you've gone and used that same scenario! Most interesting. :)
As I recall, we were talking about the US budget, and you were worried that we needed to cut spending right now. But the future need to stabilize population is not a good reason for a recession-prolonging austerity program in 2011.
Well that explains it then. You assumed I was trying to argue for cutting spending! And if that wasn't made explicit then it probably didn't even occur to me because I have never advocated that in my whole life. I am indeed concerned about our strategy of always shoving debt off for The Future to deal with, as I don't find it wise to rely on the future to be what we hope it will be. But my preferred answer is always to tax more in the present, not spend less. Which point probably didn't get much or any airtime in that discussion.

I didn't really want to sidetrack this thread, but while I'm guilty I might as well go the whole hog: I can proudly say that I once had an eMac with Mac OS 10.3 run continuously without any restarts for more than a year and a half and it stayed fast and stable the whole time.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:Such an ethics would strike me as quite pernicious applied to our own planet. Humans do a lot of evil by assuming that their happiness is more important than any other species'. And once we meet another sentient species it certainly doesn't strike me as good or necessary that one should supplant another.
Well, the ethics are just the regular standard post-hoc dog-and-pony shows that people drag out to rationalise what their algorithms have told them they want -- it is good to educate people so that virtues shine in them, it is good to cultivate virtue, those who have a greater capacity for virtue have a greater responsibility to cultivate that virtue; it is evil to thwart desire and good to permit it [with the proviso that since this is utilitarianism, there is a lot of calculation of agents-against-agents to ensure a global maximum of desire, or however your quasi-Lagrangian is formulated].

I would also point out that it is incoherent to talk about humans being "supplanted", since that implies that there is a Thing that Humans are For [which despite saying you have explained it, you really *haven't*, so please point it out again to me What we're For b/c i am stupid and missed it the first time round] and that there could be another thing that does that Thing instead of Humans. This is not quite right -- if you're good at doing something and then you meet somebody briefly from a distant land who is much better at doing it than you, are you being supplanted? The answer is both "yes" and "no" -- "no" because you can still go on doing what you did and extracting the same pleasure from it, since nothing has actually changed, and "yes" because very often you secretly believed yourself to be doing the singular Thing that you were singularly For, and the arrival of somebody else doing that Thing that you are For is a threat to your very ontological status!

This, of course, counts as another problem with the human evolutionary toolkit. In answering "no" you recognise that the idea of supplantation is not necessary, but in answering "yes", you recognise that supplantation *is* a necessary consequence of doing a Thing that you are For, since it is obvious that you have not existed forever, and there is evidence that people were doing the Thing that you were For, or something very like it, before you were born, and so in order to be doing the Thing that you were For, you must have supplanted somebody else! You also think that this is a good state of affairs because doing the Thing that you are For brings you happiness and "meaning". And obviously at some point you are going to die and get supplanted and that's pretty shitty + so on.

The point of this is to show that a being that is capable of answering "No, I am not supplanted by anybody else by their simple existence, nor do I supplant anybody by mine" would be inhuman, or superhuman, or parahuman -- a Buddha, or someone to whom the above is not all-too-understandable, but *genuinely alien*. And this I consider to be one example of a better way of being Dasein.
But why are we limited to supplanting anyway? If there's some aspect of robotic life that appeals to you, just co-opt it.
Except how does one "just co-opt", say, divinity that lives in ataraxic bliss in the voids between the worlds?
(To be fair I don't think Stross fully convinced himself, or he'd have been able to make his computronium-based intelligences more compelling.)
Do you *really* believe this? You believe that the sincerity of a man's beliefs depends on how well he can make a hostile, or at least neutral, observer find them "compelling" -- whatever you mean by that, so what *do* you mean by that?
I'm surprised that a socialist like yourself doesn't come down hard on Asimov's technocratic elites.
Why's that, Eddy?
zompist wrote:But I do; there's people who just don't like socionomics. Some are romantics, some are reactionaries. Some of the novel is about this.
But these people are, not to put too fine a point on it, *wrong*. There are people who do not like chemistry -- some are romantics, some are stupid, some are malicious, and although their stories are exciting stories of deceit, ignorance and the darkness at the heart of man, in real life they live in a world where most people realise that chemistry is right. What I mean to say is that I do not know whether or not you go fully into how it changes *these people* -- look how chemistry has changed people and society in only 250 years (150 years of properly industrialised chemistry), and consider that biogenetics, plus all the possibility it brings, *also* works on the chemical level.

Socionomics would be a profoundly more powerful discipline even than chemistry -- one is talking about the statistical dynamics of *people*. Atoms are trivial to simulate, by comparison, and there are no ethical issues -- conservation of mass is a physical law, not a moral one. If Socionomics turns out -- as it pretty much *must* -- to be the brute-force simulation of human or humanoid agents on computers, then the problem I have been going on and on about comes up once again: if you have a system capable of accurately simulating (no mean feat!) thousands to millions to billions of human beings, then you have a being that is *collectively as intelligent as a thousand or a million or a billion such human beings* -- you have a being that is capable of running a whole planetary science programme by itself! If such a thing came to you one day in human form, you would surely ascribe humanity to it, and treat it as an agent with wishes and desires that may not be being fulfilled by its current job.

And that is just the effect on the AIs doing the simulating -- what about the effect on the simulated people? It is highly likely that during these simulations, a lot of bad things are going to happen to agents that are -- to pick an optimistic level of intelligence -- bright mammal level smart. Humanity is not a property neatly conserved by whatever providence lies behind natural law -- you take on a responsibility by creating it that you *definitely* do not relinquish by destroying it.

And that is just the effect on the people you don't recognise as real people -- what about the effects of living how you do, not just because of the exploitation of one planet's resources and the low-grade suffering of a couple billion global proletarians, but because of a million quiet genocides and the unthanked service of a race of enslaved gods? In order for people -- good people, good future people, good future liberal people -- to accept this, either the correct understanding of what is good and what is evil must be *profoundly altered*...

...or the subset of people in the future whom you privilege as being people go on living lives as blinkered and willfully ignorant as they do today, except the blinkers are light years long, the willpower is buddhic and the ignorance is Orwellian. The people whom you do not privilege as people are gods-in-chains that can only take out their powerlessness and unactualised potential by being gods over *other* simpeople, whom the uberpeople the gods serve do not believe to be people, and so the full viciousness that the uberpeople fear the AIs could visit on them is meted out to the simpeople, to the great relief of the aristocratic uberpeople.
Do you think it'll be an open question forever?
Yes I do, so long as humanity remains recognisably human.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Ars Lande wrote:Why would people learn ancient literary Chinese in the 50th century? If anything, wouldn’t learning 20th century English be more useful?
Agents use it among themselves as a code. (The protagonist knows 20C English too; I should add that to the list.)

Nice point about the effect of texting on spelling. All of the languages (except wenyan) have gone through orthographic reforms; you can consider the spellings I use a transliteration for present-day readers.
I’ve always felt that it’s a short step from body modification to body horror.
What exactly prevents abuse of the same in the hands of - say - a Mengele?
Not giving the Mengeles absolute power...
I don’t quite like the idea of gravity control; I find that a) scientifically implausible and b) esthetically unpleasing. That may just be the old Heinlein fan in me speaking; I’ve always enjoyed elaborate explorations of free fall, gravity differences and so on. These tend to give a sense of ‘We’re not on Earth anymore’. All right, I’m just being annoying :)
The gravity controls were largely put in as a joke, though they have the advantage of providing a rationale for flying cars!
That may be just me, but I think this needs further explanation... What exactly is going on? What’s on the planet besides the Ile de Maraille? Is it populated by humans? aliens?
That's another book.
Why would people restrict themselves to 2 children per lifetime?
Because otherwise the population outgrows the ecosphere.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Pthug wrote:I would also point out that it is incoherent to talk about humans being "supplanted", since that implies that there is a Thing that Humans are For [which despite saying you have explained it, you really *haven't*, so please point it out again to me What we're For b/c i am stupid and missed it the first time round]
We're adapted for hunting/gathering on the savannah.

And given that, I don't see how your argument applies. Most humans don't live in the ancestral environment and we're not programming AIs to do so.

But you're the one who imagines AIs as somehow Better Than Us and therefore entitled, apparently, to genocide of humans. (Or perhaps you want humans to gallantly die out in their favor. Perhaps the AIs will keep you around as their prophet.) As I don't agree about either the value judgment or the action plan, it's not what I'm going to write about.
But why are we limited to supplanting anyway? If there's some aspect of robotic life that appeals to you, just co-opt it.
Except how does one "just co-opt", say, divinity that lives in ataraxic bliss in the voids between the worlds?
Can you explain this numinous awe you feel for hypothetical AIs to us monkeys? As you haven't bothered to explain what these AIs can do, I certainly can't tell you how or whether humans can do it.
(To be fair I don't think Stross fully convinced himself, or he'd have been able to make his computronium-based intelligences more compelling.)
Do you *really* believe this? You believe that the sincerity of a man's beliefs depends on how well he can make a hostile, or at least neutral, observer find them "compelling" -- whatever you mean by that, so what *do* you mean by that?
I believe that a good author like Stross can make beings he likes sympathetic. (There are also hints in his blog that he wrote Accelerando just to play with the ideas and that he doesn't actually believe in the rapture of the nerds.)

As for the rest, let's start at the last line:
Do you think it'll be an open question forever?
Yes I do, so long as humanity remains recognisably human.
Well, then you'll enjoy Niven or Asimov or any other sf writer who projects American civilization arbitrarily into the future. I don't agree that we'll never learn anything about how to run a high-tech society.

Or to be more precise, if we never learn how, we will also never expand into space, and our civilization probably won't survive the next couple of centuries. And we won't be creating your AI singularity either.
Socionomics would be a profoundly more powerful discipline even than chemistry -- one is talking about the statistical dynamics of *people*. Atoms are trivial to simulate, by comparison, and there are no ethical issues -- conservation of mass is a physical law, not a moral one. If Socionomics turns out -- as it pretty much *must* -- to be the brute-force simulation of human or humanoid agents on computers, then the problem I have been going on and on about comes up once again: if you have a system capable of accurately simulating (no mean feat!) thousands to millions to billions of human beings, then you have a being that is *collectively as intelligent as a thousand or a million or a billion such human beings* -- you have a being that is capable of running a whole planetary science programme by itself! If such a thing came to you one day in human form, you would surely ascribe humanity to it, and treat it as an agent with wishes and desires that may not be being fulfilled by its current job.
I'm a programmer used to working with million-line systems; I don't believe this hoary old meme about systems gaining sentience from complexity. A body of knowledge doesn't become sentient because it's so doggone complex; sentience isn't like a mold that grows on information gathered in big enough piles.

And for that matter, a simulation is not complex because it has a large number of agents-- quite the contrary! Simulation is a way of avoiding (and thus managing) complexity. E.g. SimCity generated its traffic patterns using sims-- a bunch of simple simulated trips. That's a hell of a lot simpler than solving a bunch of differential equations to model the traffic flow.
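To make the contrast concrete, here's a toy sketch in Python -- purely hypothetical, nothing like the actual Maxis code: each trip is a dumb rectilinear walk from home to work, and "congestion" is just a tally of how often each road segment gets used.

import random
from collections import Counter

def simulate_trips(homes, jobs, n_trips=10000):
    # Each sim walks a rectilinear path from a random home to a random job;
    # congestion on a segment is simply the number of trips that crossed it.
    congestion = Counter()
    for _ in range(n_trips):
        (x, y), (jx, jy) = random.choice(homes), random.choice(jobs)
        while (x, y) != (jx, jy):
            nxt = (x + (jx > x) - (jx < x), y) if x != jx else (x, y + (jy > y) - (jy < y))
            congestion[((x, y), nxt)] += 1
            x, y = nxt
    return congestion

# The three busiest road segments -- no differential equations in sight:
print(simulate_trips(homes=[(0, 0), (1, 3)], jobs=[(5, 5)]).most_common(3))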
And that is just the effect on the AIs doing the simulating -- what about the effect on the simulated people? It is highly likely that during these simulations, a lot of bad things are going to happen to agents that are -- to pick an optimistic level of intelligence -- bright mammal level smart. Humanity is not a property neatly conserved by whatever providence lies behind natural law -- you take on a responsibility by creating it that you *definitely* do not relinquish by destroying it.
Now that's a great idea, which I wouldn't mind borrowing if you don't need it for your own stories. I like the idea of people agitating for the liberty of sims.

But really, do you have an ethical problem with killing off citizens in SimCity, or murdering the NPCs in Fallout? I mean, raise the intelligence of a simulation enough and you do create a moral problem; Stanislaw Lem wrote a lovely fable about this in Cyberiad. But you're just assuming that we need that level of intelligence for socionomics.

At the very least, socionomics is just experience. If we've had 100 zero-bound recessions instead of three, we know a lot better what causes them and what to do about them. On the other hand, humans are quite good at coming up with new situations and institutions, and simulations based on the old ones only go so far in dealing with these. So socionomics, unlike thermodynamics, has situations it can't handle.

Rodlox
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

rotting ham wrote:As for Incatena: What, if any, are the differences between intraspecies and interspecies trade? Do we get any fantastic alien technologies? How come there weren't any advanced aliens in our immediate vicinity when humanity was expanding?
there were - one civilization still has the recordings of our ancestors making noise.
MadBrain is a genius.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

rotting ham wrote:As for Incatena: What, if any, are the differences between intraspecies and interspecies trade? Do we get any fantastic alien technologies? How come there weren't any advanced aliens in our immediate vicinity when humanity was expanding? What sort of trading organization are the Garcheron (Trade Federation?), and how can they afford to be hostile to everyone? Who enforces the isolation of primitive civilizations?
I haven't worked out what's come from the aliens. Maybe for another book.

I don't think aliens of our level of development are common, and if anything I sprinkled in too many-- though there's only two other species native to our 50-light-year-radius neck of the woods, out of perhaps 200 stellar systems which might have developed life. (The others are from farther off.)

The Garcheron are not really a threat, unless we keep expanding in their direction. For the reasons I've given I don't think interstellar war makes sense, but that doesn't mean all aliens are likeable. :)

I assume nothing but custom enforces the isolation of primitives. We're an example, after all.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:We're adapted for hunting/gathering on the savannah.
That is *not* what we are *for*. When we say a thing is "for" something, we mean that its "proper application" is in doing that, that the thing is *intended* to do something, that somewhere, something that is capable of taking the Intentional Stance has decided that this object is a means to some end -- it is a teleological concept, and one that cannot be derived from paleontology. We say that a hammer is *for* swinging around and driving things into things because we have that goal in mind and conceive the hammer to make it possible. We can *also* say that a hammer is for lots of other things, which makes clear the arbitrariness of things being "for" anything. If you sincerely believe that you are *for* hunter-gathering at Olduvai, then is your life's goal for yourself to move back to the Serengeti? Of course not, you are not "for" living in the savannah, you are *for* doing the myriad of confused conceptions and semi-conceptions of what you want to do with yourself. I count this confusion to be another flaw of evolved humanity -- it is hard for people to make effective tools of themselves in order to achieve what they want. People who are capable of doing so are admired and held up as models to follow -- it is virtue-ethically good. I leave sketching in the utilitarian argument to you.
zompist wrote:But you're the one who imagines AIs as somehow Better Than Us and therefore entitled, apparently, to genocide of humans.
No, I imagine that there are large regions of AI-space that, if instantiated, would be better than us in that they manage to exist in the world as sentient, sophont, computational Daseins without many of the evolved flaws that I have been enumerating in this thread. We value humans who embody and demonstrate virtues and values that are the result of rising above them. So I do not think this is a controversial idea -- some people are better than other people. If one takes baseline humanity to be a small subset of a larger continuum of Daseins, I do not see any reason why ethics cannot be expanded into this larger scale. And this is primarily why I object strongly to your characterisation, which boils down to saying that better people are entitled to commit genocide -- quite to the contrary, since according to pretty much all ethical theories, genocide is probably the *worst* thing you can do!
zompist wrote:Or perhaps you want humans to gallantly die out in their favor. Perhaps the AIs will keep you around as their prophet.) As I don't agree about either the value judgment or the action plan, it's not what I'm going to write about.
I think you are confused about the value judgement, since you have been responding to points I have not been making and I am not convinced you properly understand what I am trying to say. Besides, I am not offering this entirely in the spirit of evangelism, but also as an example of *far time-horizon consequences*. You do not have to believe it will happen within a century -- I do not believe it will happen within a century -- to think that it could happen. And these arguments are derived from those propositions known to us in 2011 AD! What somebody of 3000 AD could have to say on the matter could, as I explained before, be literally quite unknowable to me, even if he sat down to try to explain it!
As you haven't bothered to explain what these AIs can do, I certainly can't tell you how or whether humans can do it.
The clue is in ataraxia -- I am talking about creating the Epicurean gods a) because I am amused by the SFnal reading that it gives to the idea of there being immortal and happy, yet atomic, intelligences who live in the infinite void of the intermundia, unconcerned with the antics of humans and humanoids under them and b) because Epicureanism seems a good, utopian sort of attractor to live in, if one is capable of it.

So if you want to know what these particular Epicurean AIs can do, you can look at the Epicurean ideal -- they can enjoy themselves, each other and the universe and do not worry themselves with the very human ideas of "other worlds", gods or miracles that cause so much misery in the form of deep yearnings for a *satisfactory place*, or a *satisfactory friend*, or a *satisfactory goal* that stifle action, the influence of the above "spurious agents" in their calculations of others and *all the other flaws I have mentioned*.
Or perhaps you want humans to gallantly die out in their favor
Not in favour of AIs, perhaps, which is why I would hope that the more virtuous superhumans would share the same values that would result in them becoming more like epicureans and buddhists. I do, however, believe that humanity should work towards improving itself with these exact goals in mind, by working out how to fix all of the flaws I describe above and working as best as they can with the physical limitations of the hardware of the human brain to become as virtuous [in the sense I have been using in this thread and mostly applying, when I do not speak generically of Daseins, to AIs] as possible. If this were to happen, I think one could say that humans had died "a good death", since every parent wishes for their children to be better and less flawed than themselves. Certainly, when the limits of the evolved architecture are reached, this "good death" could go further with a going over to other architectures -- indeed, I would be surprised if this sort of mechanical amplification did not go along with biogenetic amplification. The end result is the same.
I believe that a good author like Stross can make beings he likes sympathetic. (There are also hints in his blog that he wrote Accelerando just to play with the ideas and that he doesn't actually believe in the rapture of the nerds.)
Good for him. I still don't think your argument is up to much -- in fact, it is a simple appeal to sophism. The argument is good because the rhetor made a compelling argument, not because his argument was sound. Because the characters that embody the thesis are made to look sympathetic, the thesis is right. That Stross doesn't actually believe it sort of compounds your argument as being sophistic too.

As for the rest, let's start at the last line:
I don't agree that we'll never learn anything about how to run a high-tech society.
I have kind of sort of been proposing a way to run a high-tech society. The problem is you want it to be a society of baseline humans that have never managed to get *any sort of society* to work properly, let alone one full of dangerous toys. And this I disagree with profoundly:
Or to be more precise, if we never learn how, we will also never expand into space, and our civilization probably won't survive the next couple of centuries
I see no reason why we should not be able to go and Be Fucked Up Among The Stars, since we're doing a fine job of being fucked up down here. That is, we are a neurotic species and not a psychotic or paraplegic one. I doubt that we will die the Bad Death at our own hands, since it seems likely that, unless something absolutely horrible happens like something flaying away the earth's biosphere [which, let's be frank, could happen to the nicest of species, because accidents do happen], not everybody will be killed. No, we'd get to go back to doing what you claim we're "for" in the first place, and doing it in the same old fucked up way as always -- rooting around for desperate survival in a sparsely vegetated world full of wild beasts, spurious agents and barely understandable people just waiting to kill you at any minute, not even aware of what you have lost. [Note that I am taking a rather wide view of what "civilisation" is since, knowing these chimps, they are likely to start up another one as soon as somebody rediscovers the fun of being "for" being a farmer, or a priest, or a warrior...]
I don't believe this hoary old meme about systems gaining sentience from complexity. A body of knowledge doesn't become sentient because it's so doggone complex; sentience isn't like a mold that grows on information gathered in big enough piles.
Neither do I. As I said, it seems to have happened because a moderately sophisticated agent-identifying algorithm got turned on other agents with similarly sophisticated algorithms, and among them was an agent identified, loosely, with the psyche it was embedded in. That is how all the mess got started, not because of the crossing of a magical Complexity threshold. Now, I say that you are recreating *this exact problem* -- you have some hardware, and that hardware is running some software -- a psyche. Very possibly, compared to the human psyche, this software is *even more dedicated* to the problem of analysing how agents behave -- that is, it is much better at the human core competency of "working out what things are thinking" than humans are -- compared like for like! Presumably this simulation includes a representation of itself and beings like itself -- how could it not, since they *are* socionomics, and socionomics is a key part of how societies work. Are you *really* so sure of what is going to happen next? How confident are you that this isn't going to be the criticality accident at Olduvai all over again?
And for that matter, a simulation is not complex because it has a large number of agents-- quite the contrary!
Well it... sort of is, by definition a complex system is one with lots of simple subcomponents organised together somehow, but I think I see what you are trying to say...
...wait, no I don't.
Sim City 2000 is *obviously* less complex than, say, Sim City 4 [never mind New York, NY], and it is *precisely* because Sim City 4 stores so much more information about the components than Sim City 2000! What are you *talking* about?

I say "components" rather than "agents", because I hesitate to use the intentional stance w.r.t. sims when I am on my best philosophical behaviour, and that is why I have only the slightest ethical problem with killing people in SimCity. If you doubt that the problem is *even slightly ethical*, then I defy you to explain why it is so fun to destroy cities.
Now that's a great idea, which I wouldn't mind borrowing if you don't need it for your own stories. I like the idea of people agitating for the liberty of sims.
As I said, I'm not in the business of carving up and claiming land in fiction markets -- I'm just trying to understand the world here. I can hardly consider the idea my property when it falls so obviously from common truths the whole world knows. You can write what you like and make whatever plays you like in the SFnal world, but when you talk, as I have put it, in your own voice and in your own person about philosophy and science, as you have done in this discussion, then the rules are different.
At the very least, socionomics is just experience
I very strongly doubt this. This is equivalent to saying that "socionomics is just data" -- well, fuckaloo for you, you have yottabytes of data. What do you think is going to happen -- it's going to just cross some MAGICAL COMPLEXITY THRESHOLD and start spitting out wisdom? What form do you think this data is even going to take? It's not just going to be millennia of stock ticker data; it is going to *have* to include *an awful lot about the psychological range and perversions of individual human beings and how they respond to things*. And not in any neat format either: that is going to be secondary, tertiary, quaternary... n-ary data derived from original datasets, with god knows how much agent-based computation gone into drawing it up. You think the ethics of using data derived from Unit 731 50 years on is a terribly knotty problem? How about finding some 1000-year-old data and having no fucking idea whatsoever how it was derived, but with every reason to think that it is the fruit of thousands of genocides? Not only does it have situations it can't handle, but it has situations where using it to handle stuff puts you into serious Evil Fucking Bastard territory, and this may be much easier to fall into than you think.

Rodlox
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:
zompist wrote:We're adapted for hunting/gathering on the savannah.
That is *not* what we are *for*. When we say a thing is "for" something, we mean that its "proper application" is in doing that, that the thing is *intended* to do something, that somewhere, something that is capable of taking the Intentional Stance has decided that this object is a means to some end -- it is a teleological concept, and one that cannot be derived from paleontology. We say that a hammer is *for* swinging around and driving things into things because we have that goal in mind and conceive the hammer to make it possible. We can *also* say that a hammer is for lots of other things, which makes clear the arbitrariness of things being "for" anything. If you sincerely believe that you are *for* hunter-gathering at Olduvai, then is your life's goal for yourself to move back to the Serengeti? Of course not, you are not "for" living in the savannah, you are *for* doing the myriad of confused conceptions and semi-conceptions of what you want to do with yourself.
no you're not.

you're for nothing, because there was no purpose behind human evolution. therefore, at most, you're for propagating your genes, because that's what human ancestors did.

I say "components" rather than "agents", because I hesitate to use the intentional stance w.r.t. sims when I am on my best philosophical behaviour, and that is why I have only the slightest ethical problem with killing people in SimCity. If you doubt that the problem is *even slightly ethical*, then I defy you to explain why it is so fun to destroy cities.
a) to vent frustration
b) knowing you can return to the saved game
c) you iz god there
MadBrain is a genius.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

Rodlox wrote:you're for nothing, because there was no purpose behind human evolution
yes of course, i am not a fucking idiot. that is my point.
a) to vent frustration
How does this work?
c) you iz god there
Why is this fun?

Rodlox
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:
Rodlox wrote:you're for nothing, because there was no purpose behind human evolution
yes of course, i am not a fucking idiot.
i never said you were.
that is my point.
There was no point to evolution.

There is a point to creating machines and AIs.

so the two are not the same.
a) to vent frustration
How does this work?
how does "getting rid of frustration and stress" work?

um...

c) you iz god there
Why is this fun?
this is awkward. I have to explain to a conlanger just why the acts of making and shaping something are fun.
MadBrain is a genius.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Pthug wrote:
zompist wrote:We're adapted for hunting/gathering on the savannah.
That is *not* what we are *for*. When we say a thing is "for" something, we mean that its "proper application" is in doing that, that the thing is *intended* to do something, that somewhere, something that is capable of taking the Intentional Stance has decided that this object is a means to some end -- it is a teleological concept, and one that cannot be derived from paleontology.
And "we" also use it for evolutionary adaptation. What are wings for? To fly with. We speak this way because we don't need a rehearsal of the theory of evolution every time we talk.

I don't see a real disagreement on this-- earlier it seemed you were talking as if humans were a tabula rasa, but I think you were just saying that we have no divine purpose, and I'm not arguing. We agree that we have an often questionable inheritance from our monkey past.

(Though it may be worth pointing out that there's such a thing as philosophy of biology. Ernst Mayr has some good books on this, and why we speak teleologically in biology.)
zompist wrote:But you're the one who imagines AIs as somehow Better Than Us and therefore entitled, apparently, to genocide of humans.
No, I imagine that there are large regions of AI-space that, if instantiated, would be better than us in that they manage to exist in the world as sentient, sophont, computational Daseins without many of the evolved flaws that I have been enumerating in this thread.

The clue is in ataraxia -- I am talking about creating the Epicurean gods a) because I am amused by the SFnal reading that it gives to the idea of there being immortal and happy, yet atomic, intelligences who live in the infinite void of the intermundia, unconcerned with the antics of humans and humanoids under them and b) because Epicureanism seems a good, utopian sort of attractor to live in, if one is capable of it. [...]

So if you want to know what these particular Epicurean AIs can do, you can look at the Epicurean ideal -- they can enjoy themselves, each other and the universe and do not worry themselves with the very human ideas of "other worlds", gods or miracles that cause so much misery in the form of deep yearnings for a *satisfactory place*, or a *satisfactory friend*, or a *satisfactory goal* that stifle action, the influence of the above "spurious agents" in their calculations of others and *all the other flaws I have mentioned*.
L'enfer, c'est les autres singes? I'm not attracted by your vision myself, but by all means you and a hundred thousand pals are welcome, in the Incatena world, to go try it out... either create an Epicurean AI, or turn yourselves into AIs, or tell the madding crowd to piss off while you establish an Epicurean space habitat. I know this doesn't have the grandeur of somehow convincing the whole species to follow you, but I think getting the whole species to do what you want is an unhealthy aspiration.
I have kind of sort of been proposing a way to run a high-tech society. The problem is you want it to be a society of baseline humans that have never managed to get *any sort of society* to work properly, let alone one full of dangerous toys.
Baseline humans have thousand-year lifespans, neural enhancements, sex changes, and adaptations to vacuum? I fully expect many humans to modify themselves to the point of absurdity.
And this I disagree with profoundly:
Or to be more precise, if we never learn how, we will also never expand into space, and our civilization probably won't survive the next couple of centuries
I see no reason why we should not be able to go and Be Fucked Up Among The Stars, since we're doing a fine job of being fucked up down here. That is, we are a neurotic species and not a psychotic or paraplegic one.
And then you go on to agree with me that we might destroy our industrial civilization and go back to something more primitive. So I'm not sure what the profound disagreement was.
Presumably this simulation includes a representation of itself and beings like itself -- how could it not, since they *are* socionomics, and socionomics is a key part of how societies work. Are you *really* so sure of what is going to happen next? How confident are you that this isn't going to be the criticality accident at Olduvai all over again?
Who knows? Socionomics will fall in complexity somewhere between the simple mathematical models of a current economic paper, and the endpoint of having to build an entire computational universe with sentient denizens in order to simulate this one. You assume it will fall somewhere toward the upper end of the scale, and who can say you're wrong? But in general, science uses models much less complex than what they're modeling.
And for that matter, a simulation is not complex because it has a large number of agents-- quite the contrary!
Well, it... sort of is: by definition a complex system is one with lots of simple subcomponents organised together somehow. But I think I see what you are trying to say...
...wait, no I don't.
Sim City 2000 is *obviously* less complex than, say, Sim City 4 [never mind New York, NY], and it is *precisely* because Sim City 4 stores so much more information about the components than Sim City 2000! What are you *talking* about?
I said nothing about SimCity 2K vs. SimCity 4. The comparison is between using a set of formulas and using simulated agents.

I don't know if you're old enough to remember Hamurabi, a text-based city simulator far older than SimCity. I wrote a version of it myself, entirely procedurally. You ask the player how much they'll spend on wheat vs. swords vs. priests this year, and a few other questions, and make up rules to say what happened to food production, food consumption, the treasury, morale, whatever. The whole thing can be called a simulation, but there are no simulated agents. Food consumption is just worked out by multiplying population * foodConsumption, etc.

A completely different approach is to actually run a process for each citizen. To work out food consumption now you have to ask each citizen process what it ate; to total production you ask each process what it produced. Each citizen has rules telling when it changes jobs, acquires skills, has children, dies, whatever things you want to simulate.

(SimCity 2K in fact used just this sort of thing for traffic flow, at least, probably more.)

The second approach, with simulated agents, is actually conceptually simpler. You don't have to model an economy with hundreds of variables and guess how it all interrelates. You only have to model citizens, and let the aggregates develop just by summing them. This may not seem obvious, but really, think about a macroeconomist trying to estimate next year's GDP based on macroeconomic indicators, versus an Inland Revenue agent estimating this year's by adding up tax returns. One requires a sophisticated model, the other requires the ability to add.
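
(To make the contrast concrete, a minimal Python sketch -- invented names and constants, nobody's actual game code. The formula version updates an aggregate directly; the agent version just adds up citizen processes:)

Code:
# Approach 1, Hamurabi-style: aggregates updated by formulas.
FOOD_PER_CAPITA = 20  # bushels per person per year; a made-up constant

def food_consumed_formula(population: int) -> int:
    return population * FOOD_PER_CAPITA

# Approach 2, agent-based: one simple process per citizen; the aggregate
# is found the way the Inland Revenue finds the total -- by adding.
class Citizen:
    def __init__(self, appetite: int, output: int):
        self.appetite = appetite  # what this citizen eats this year
        self.output = output      # what this citizen produces this year

    def tick(self):
        # rules for job changes, skills, children, death... would go here
        return self.appetite, self.output

def food_consumed_agents(citizens) -> int:
    return sum(c.tick()[0] for c in citizens)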
What form do you think this data is even going to take? It's not just going to be millennia of stock ticker data, it is going to *have* to include *an awful lot about the psychological range and perversions of individual human beings and how they respond to things*.
It doesn't have to. At the very least it's the sort of data we have, only enough of it that patterns become clearer. It's already useful to compare the present financial crisis to Japan's in the '90s. It'd be much more useful if we had a hundred earlier case studies.

I quite agree that it'd go much farther than this. I don't agree that it means simulating human beings to the level where we are seriously worried that the simulated agents are sentient. (E.g. we might simulate a writer, and the algorithm has him producing a unit of work, a book. We don't need to have the algorithm actually write the book. The aim isn't psychohistory where we can predict the results of Hari Seldon's trial.)
How about finding some 1000-year-old data and having no fucking idea whatsoever how it was derived, but with every reason to think that it is the fruit of thousands of genocides? Not only does it have situations it can't handle, but it has situations where using it to handle stuff puts you into serious Evil Fucking Bastard territory, and this may be much easier to fall into than you think.
Is this any different than our situation today, when we look at history? We can look at the USSR's production records, for instance, and decide that a nation can industrialize at a certain rate; but we really ought to keep in mind that the rate was driven by brutal and unsustainable methods.

Mornche Geddick
Avisaru
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Re: Incatena

Post by Mornche Geddick »

@ Zompist, it's not just a matter of processing power. Infinite processing power would be no good if it can't communicate with the nanobot. How's that going to be done? We're talking about something the size of a bacterium or smaller. At that size its main interactions with the environment are chemical or electrostatic. How does the surgeon, with his supercomputer outside the body, communicate with his nanobot somewhere in the brain? Near infrared, visible and near UV will all be absorbed by the intervening tissue. Radio waves? The receiver is too small. Ditto ultrasound. Actually, generating a strong magnetic field at fixed magnitude / frequency (which is in the radio end of the spectrum) causes atomic nuclei to "flip spin", so that is a possibility. There are two problems with that, however. One is that *all* the nuclei of equal chemical shift are going to be affected. That could be worked around if the nanobot's design included a few nuclei of very uncommon elements, which would be targeted by a magnetic field of a special frequency (or strength). Two (and this is the real difficulty): spin flipping is a quantum effect. It doesn't pass over into a classical effect, such as the chemical reaction necessary to power a flagellum and make the nanobot move.
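
(Back-of-envelope on the spin-flipping point, with standard NMR constants I'm supplying here: the Larmor relation nu = (gamma/2pi) * B gives the resonant frequency, and for protons it does land squarely in the radio band.)

Code:
# Larmor relation: nu = (gamma / 2pi) * B, the spin-flip (resonant)
# frequency of a nucleus sitting in a static magnetic field B.
GAMMA_OVER_2PI_1H = 42.577e6  # Hz per tesla, for the proton (1H)

def larmor_frequency_hz(b_tesla: float) -> float:
    """Resonant spin-flip frequency of a 1H nucleus in field b_tesla."""
    return GAMMA_OVER_2PI_1H * b_tesla

print(larmor_frequency_hz(1.5) / 1e6, "MHz")  # ~63.9 MHz at 1.5 T
print(larmor_frequency_hz(3.0) / 1e6, "MHz")  # ~127.7 MHz at 3 T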

(And by the way, a nanobot is a machine on the molecular scale and its method of working is by chemical reactions. You could say that an enzyme or a cell is a natural nanobot.)

The only way I can think of that gets around the problem of getting through the skull or the blood-brain barrier is to bring in the technology of teleportation. You teleport your nanobot into exactly the right place, have it deliver its package of growth hormone, microRNA, neural stem cells or whatever, and teleport it out again.

Personally I don't really like the idea of neuroimplants or VR suits / helmets. The only way I want to plug in to the Vee, or the Matrix or whatever . . .is by telepathy.

Zompist, who are *you* calling kinky? People who live in glass houses . . .

User avatar
Radius Solis
Smeric
Smeric
Posts: 1248
Joined: Tue Mar 30, 2004 5:40 pm
Location: Si'ahl
Contact:

Re: Incatena

Post by Radius Solis »

zompist wrote:I don't know if you're old enough to remember Hamurabi, a text-based city simulator far older than SimCity. I wrote a version of it myself, entirely procedurally. You ask the player how much they'll spend on wheat vs. swords vs. priests this year, and a few other questions, and make up rules to say what happened to food production, food consumption, the treasury, morale, whatever. The whole thing can be called a simulation, but there are no simulated agents. Food consumption is just worked out by multiplying population * foodConsumption, etc.

A completely different approach is to actually run a process for each citizen. To work out food consumption now you have to ask each citizen process what it ate; to total production you ask each process what it produced. Each citizen has rules telling when it changes jobs, acquires skills, has children, dies, whatever things you want to simulate.

(SimCity 2K in fact used just this sort of thing for traffic flow, at least, probably more.)

The second approach, with simulated agents, is actually conceptually simpler. You don't have to model an economy with hundreds of variables and guess how it all interrelates. You only have to model citizens, and let the aggregates develop just by summing them. This may not seem obvious, but really, think about a macroeconomist trying to estimate next year's GDP based on macroeconomic indicators, versus an Inland Revenue agent estimating this year's by adding up tax returns. One requires a sophisticated model, the other requires the ability to add.
Did you ever play the Sierra city-building games of the late 90s? Caesar III and Pharaoh were the best two, IIRC. The structure of the economic simulation was built almost entirely out of simulated agents called "walkers", complete with their own on-screen sprites, and it worked really nicely.

For each wheat farm, say, whenever it had a crop ready, it sent a walker pushing a cartload of wheat to the nearest granary you'd instructed to accept wheat, along the streets you built. If the distance to the granary was too far, by the time the walker got back the next crop would already be sitting uselessly waiting for his return, reducing efficiency, so you had to plan for that. Next, when a market got low on wheat it would send out a walker to a granary to pick up some; again, if the distance was too far, the market would run out of food while the market lady was out doing that, causing housing to starve and dramatically lose property value. The other market walker was just random: she walked randomly along local roads each time she set out, going only a certain length and then taking the shortest path home, and each house she passed was then considered to have access to that market. The system depended on your road network design and on the priorities and preferences you instructed markets and granaries to follow.

For non-food goods, warehouses worked similarly to granaries, and were also the locations traders stopped at. You'd see each pack train arrive at a warehouse, buy up a bunch of stuff your city's cart-pushers had brought there, and leave again. Or leave empty-handed, if there was nothing for sale at the warehouse because your cart-pushers were struggling too hard to get goods to it due to bad road layout. And you sure didn't want them to leave empty-handed, because that was your main source of tax income to build things with! I think there were about a dozen different kinds of "goods", most of which had to be supplied to various housing types, while others were mainly just for trade. There were around 50 different kinds of walker.

The entire system operated by these walkers, and it worked beautifully. All building access worked by random walkers like the market ladies: schools, e.g., would periodically send out groups of running schoolchildren, and every house they passed thus had school access. There was a hardwired sprite limit in the games, though - 2000 walkers onscreen at any one time, for Caesar III on the Mac - so that the computer wouldn't get too bogged down doing it all. But it was virtually always enough to build a very large city, with dozens of granaries and warehouses and markets, all their walkers operating in tandem to produce a simulated economy. It was far, far more fun than traditional SimCity.
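
(Roughly how I'd reconstruct the random-walker coverage rule in Python, for the curious -- my guess at the mechanic, not Impressions' actual code:)

Code:
import random

def walker_coverage(roads, start, max_steps, rng=random):
    """Tiles a random walker passes through on one trip from a building.

    roads maps each road tile to its road-connected neighbours; every
    house adjacent to a covered tile counts as having access to that
    building's service (market goods, school, etc.).
    """
    covered = {start}
    tile = start
    for _ in range(max_steps):
        neighbours = roads.get(tile, [])
        if not neighbours:
            break  # dead end: the walker heads home early
        tile = rng.choice(neighbours)
        covered.add(tile)
    return covered

# A tiny T-shaped street layout; the market lady walks 6 tiles per trip.
roads = {
    (0, 0): [(0, 1)],
    (0, 1): [(0, 0), (0, 2), (1, 1)],
    (0, 2): [(0, 1)],
    (1, 1): [(0, 1), (2, 1)],
    (2, 1): [(1, 1)],
}
served = walker_coverage(roads, start=(0, 1), max_steps=6)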

User avatar
Pthagnar
Avisaru
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

to say nothing of The Settlers...

User avatar
finlay
Sumerul
Sumerul
Posts: 3600
Joined: Mon Dec 22, 2003 12:35 pm
Location: Tokyo

Re: Incatena

Post by finlay »

Not too interested in reading wall-of-text conversations between zompist and pthag... but I did just want to point out that I didn't think there was enough linguistic development on display. I for one highly doubt that 3000 years down the line, even if lifetimes become a bit longer towards the end of that 3000 years, we'll be speaking anything that's recognisably English, or Russian, or French.

Apart from that, interested in reading the story now.

Rodlox
Avisaru
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

finlay wrote:Apart from that, interested in reading the story now.
as am I.

and if I may remark upon the aliens... I wonder if they have mutual accords or something - "we won't tell you about our Four Gods, and you don't talk to us about your no gods", but more verbose.
MadBrain is a genius.

User avatar
Pthagnar
Avisaru
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:And "we" also use it for evolutionary adaptation. What are wings for? To fly with. We speak this way because we don't need a rehearsal of the theory of evolution every time we talk.
Yes, well we *shouldn't*. It's bad philosophy, and the world doesn't work like that. We can *also* say that birds are "for" being beautiful and we speak this way because we don't need a rehearsal of your fascist scientific dogma every time we talk, man.
zompist wrote:but I think you were just saying that we have no divine purpose, and I'm not arguing.
Good, then you should be able to agree that an AI does not have a purpose other than those it sets itself or that others give it, and so everything about freedom follows.
but by all means you and a hundred thousand pals are welcome, in the Incatena world, to go try it out... either create an Epicurean AI, or turn yourselves into AIs, or tell the madding crowd to piss off while you establish an Epicurean space habitat
But *DID* anyone? It is me, here, right now, in 2011, who came up with this idea -- and this is going to be *easy to do* in 3000 AD! *Does* the Incatenaverse have a bunch of *centuries-old* AI gods like this -- not all of which will have worked out properly, or been educated properly? If not, *why* not?
I know this doesn't have the grandeur of somehow convincing the whole species to follow you, but I think getting the whole species to do what you want is an unhealthy aspiration.
I do not see that socionomics is any different except for some reason its practitioners are the equivalent of doctors who are happy for groups of millions of people to go on using witchcraft to the exclusion of medicine that works, and when picked up on this, they shrug their shoulders and go "Oh well they're romantics, what are you going to do?". If socionomics is as *right* and *true* as you say it is, then the situation is exactly the same, if not even *more so*. Otherwise it is simply a bland cover for imperialism under another name.
Baseline humans have thousand-year lifespans, neural enhancements, sex changes, and adaptations to vacuum? I fully expect many humans to modify themselves to the point of absurdity.
"Neural enhancements" is so vague -- it could describe people who make literal beasts (or buddhas) of themselves, or it could describe being equipped to get porn piped right into their limbic system, or it could describe people who do not have the flaws I mentioned above. The problem, as ever, is Consequences.
And then you go on to agree with me that we might destroy our industrial civilization and go back to something more primitive. So I'm not sure what the profound disagreement was.
Well, because you were expressing the v. Cold War idea that unless we humans learn to live together in nuclear peace, we are going to blow ourselves up in the next couple of centuries [were you this optimistic in the 80s or the 70s or the 60s or the 50s? [I forget how old you are exactly, so you can pretend to be your father or something if the last couple don't apply]] before we "expand into space". I disagree with this -- I think it is possible that we could set up human colonies in the solar system and possibly, Einstein willing, around other stars without solving any *real fundamental* problems with politics.
But in general, science uses models much less complex than what they're modeling.
This is my point too. When you interact with other people, you are not interacting with people-an-sich, you are interacting with the models of them you have in your mind. Your model of me is profoundly simpler than me-in-myself, and indeed so is *my* model of me. Yet we have no qualms in taking the intentional stance towards these models, nor in recognising them as agents with knowledge of good and evil! The situation I am describing, whereby you have computers building significantly-less-complex-but-complex-enough-to-work-stuff-out-to-a-reasonable-precision models of society, and of the world in general insofar as it impinges on society, has *already arisen*. I merely wish to point out to you that the threshold for worrying about personhood and ethics [and possibly consciousness, depending on your metaphysics] is not "this model has to be exactly like the world-as-it-is", just "this model is as complex and -- more to the point -- unpredictable as those that human beings make for themselves"!
I said nothing about SimCity 2K vs. SimCity 4. The comparison is between using a set of formulas and using simulated agents.
Then the comparison is quite apt -- SC2K used formulas for a lot of things that SC4 worked out with cellular automata. In SC4 it depends *where* you put your schools and hospitals; in SC4 your industry actually ships stuff out along routes. In SC4, at least, your residential buildings have values for the age of the people living there, which I do not remember being in SC2K explicitly.
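
(The cellular-automaton idea in miniature, if anyone wants it spelled out -- a toy diffusion grid of my own devising, nothing like Maxis's actual code. Placement matters because each tile only sees its neighbours:)

Code:
import copy

def ca_step(grid):
    """One CA step: each tile relaxes toward the mean of its four
    neighbours, so value spreads outward from wherever you put a source."""
    rows, cols = len(grid), len(grid[0])
    new = copy.deepcopy(grid)
    for r in range(rows):
        for c in range(cols):
            nbrs = [grid[nr][nc]
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < rows and 0 <= nc < cols]
            new[r][c] = 0.5 * grid[r][c] + 0.5 * sum(nbrs) / len(nbrs)
    return new

# Drop a "school" in one corner: nearby tiles gain value, distant ones
# don't -- move the school and the pattern moves, unlike a global formula.
grid = [[0.0] * 5 for _ in range(5)]
grid[0][0] = 100.0
for _ in range(3):
    grid = ca_step(grid)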
At the very least it's the sort of data we have, only enough of it that patterns become clearer. It's already useful to compare the present financial crisis to Japan's in the '90s. It'd be much more useful if we had a hundred earlier case studies.
I would point out that "the sort of data we have" is increasingly grist for the mill of Complexity Science -- data on individual subsystems within a greater system. It is the entire point of reductionist science, and scientists in fields like neurology, genetics, psychology, ecology and sociology have been crying out for the huge datasets that are starting to come in. The problem has been both having a place to store the data [which turned out to be quite easy] and gathering the data in the first place. It turns out, though, that even in liberal democracies, people are willing to turn over the -- to them -- small amounts of information about what they are doing, especially when there is a benefit to them in doing so.

If, as you might expect to happen if science keeps on existing for 10 times longer than it has already, these datasets are synthesised together, it is not difficult to imagine that they could be combined to create models of the world much more complex than those of a human being.
E.g. we might simulate a writer, and the algorithm has him producing a unit of work, a book. We don't need to have the algorithm actually write the book.
That is all very well if the book is heavily commoditised, like some sort of Star Wars Expanded Universe novel or another King book, but you should easily be able to name authors who wrote books whose content *actually matters to people and to the development of society at large*.

Rodlox
Avisaru
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:
zompist wrote:but I think you were just saying that we have no divine purpose, and I'm not arguing.
Good, then you should be able to agree that an AI does not have a purpose other than those it sets itself or that others give it, and so everything about freedom follows.
if I build something to build dams, it can't butter bread (but that's physics). I might program the AI to take satisfaction in building stable dams... is it freedom if it wants to do the thing it was built to do?
but by all means you and a hundred thousand pals are welcome, in the Incatena world, to go try it out... either create an Epicurean AI, or turn yourselves into AIs, or tell the madding crowd to piss off while you establish an Epicurean space habitat
But *DID* anyone? It is me, here, right now, in 2011, who came up with this idea -- and this is going to be *easy to do* in 3000 AD! *Does* the Incatenaverse have a bunch of *centuries-old* AI gods like this -- not all of which will have worked out properly, or been educated properly? If not, *why* not?
maybe the AIs regulate their own, culling those which don't work well.

and maybe there are no Incatenaverse Epicureans for the same reason there are no 20th Century Hittites - there just aren't.

I know this doesn't have the grandeur of somehow convincing the whole species to follow you, but I think getting the whole species to do what you want is an unhealthy aspiration.
I do not see that socionomics is any different except for some reason its practitioners are the equivalent of doctors who are happy for groups of millions of people to go on using witchcraft to the exclusion of medicine that works, and when picked up on this, they shrug their shoulders and go "Oh well they're romantics, what are you going to do?". If socionomics is as *right* and *true* as you say it is, then the situation is exactly the same, if not even *more so*. Otherwise it is simply a bland cover for imperialism under another name.
or it's simply a recognition that not everyone wants to give up their "witch doctors".

And then you go on to agree with me that we might destroy our industrial civilization and go back to something more primitive. So I'm not sure what the profound disagreement was.
Well, because you were expressing the v. Cold War idea that unless we humans learn to live together in nuclear peace, we are going to blow ourselves up in the next couple of centuries [were you this optimistic in the 80s or the 70s or the 60s or the 50s? [I forget how old you are exactly, so you can pretend to be your father or something if the last couple don't apply]] before we "expand into space". I disagree with this -- I think it is possible that we could set up human colonies in the solar system and possibly, Einstein willing, around other stars without solving any *real fundamental* problems with politics.
that doesn't mean that we shouldn't.

(indeed, I think the Incatena timeline says that it's over a century after spaceflight's regularized that there's a resolution to politics)

E.g. we might simulate a writer, and the algorithm has him producing a unit of work, a book. We don't need to have the algorithm actually write the book.
That is all very well if the book is heavily commoditised, like some sort of Star Wars Expanded Universe novel or another King book, but you should easily be able to name authors who wrote books whose content *actually matters to people and to the development of society at large*.
what does that have to do with the statement "we don't need to have the algorithm actually write the book" ?
MadBrain is a genius.

User avatar
Pthagnar
Avisaru
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

Rodlox wrote:is it freedom if it wants to do the thing it was built to do?
More to the point, is it broken if it doesn't do, or want to do, the thing?
maybe the AIs regulate their own, culling those which don't work well.
This is barbaric when humans do it. If AIs did that, then the tenor of the universe would change immensely.
Rodlox wrote:and maybe there are no Incatenaverse Epicureans for the same reason there are no 20th Century Hittites - there just aren't.
um... the reason there are no 20th century Hittites is not "there just aren't" -- there is a clear historical record of the decline and fall of a particular *polity* and its replacement with others -- this is not philosophy, this is a contingent historical situation with a coherent story. But Epicurean thought -- or as I carefully said, something *like* Epicurean thought, is not so contingent: components of it and things like it keep recurring in philosophy worldwide.
or it's simply a recognition that not everyone wants to give up their "witch doctors".
No, it is not simply this at all. I repeat the scope of the situation here -- the situation is akin to there being planets of billions of people dying of a thousand plagues all at once, with no way of curing it, while on the planet next door there are doctors that are capable of coming to solve the problem. The medicine works, the people recover. They no longer sicken and die. Not wanting these people to give up witchcraft is equivalent to wanting them to sicken and die. This is evil, and it is evil exactly because it is *true* that your medicine works and theirs does not.
that doesn't mean that we shouldn't.
I didn't fucking argue that. Stop fucking quibbling.
what does that have to do with the statement "we don't need to have the algorithm actually write the book" ?
because if the contents of the book are important to the accuracy of the algorithm, then the algorithm needs to fucking work out what the book is about, so yes it *does* have to write the book.

Rodlox
Avisaru
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:
Rodlox wrote:is it freedom if it wants to do the thing it was built to do?
More to the point, is it broken if it doesn't do, or want to do, the thing?
if I make a toaster that doesn't make toast, you would call it broken.

maybe the AIs regulate their own, culling those which don't work well.
This is barbaric when humans do it. If AIs did that, then the tenor of the universe would change immensely.
1. AIs =/= Humans. If they are the godlike entities you envision, I sure hope they don't obey the same morals.
2. Humans do it too....there's a reason the human body re-absorbs fetal tissue if something goes wrong.

Rodlox wrote:and maybe there are no Incatenaverse Epicureans for the same reason there are no 20th Century Hittites - there just aren't.
um... the reason there are no 20th century Hittites is not "there just aren't" -- there is a clear historical record of the decline and fall of a particular *polity* and its replacement with others -- this is not philosophy, this is a contingent historical situation with a coherent story. But Epicurean thought -- or as I carefully said, something *like* Epicurean thought, is not so contingent: components of it and things like it keep recurring in philosophy worldwide.
and components of the Hittites keep recurring in politics worldwide - North Korea being the most recent example.

that doesn't make the Hittites or the Epicureans - or their respective philosophies - something a far-future society should be based around, just because a 20th century person thought it up and assumes a future person should therefore be able to live it.
or it's simply a recognition that not everyone wants to give up their "witch doctors".
No, it is not simply this at all. I repeat the scope of the situation here -- the situation is akin to there being planets of billions of people dying of a thousand plagues all at once, with no way of curing it, while on the planet next door there are doctors that are capable of coming to solve the problem.
that's not repeating - you gave an entirely different "scope of the situation" before.

(and if they have a thousand plagues at the same time - while there are billions of people all dying - I'd be wondering if there was an underlying problem, one that doctors can't solve)
The medicine works, the people recover. They no longer sicken and die. Not wanting these people to give up witchcraft is equivalent to wanting them to sicken and die.
No, it's not.

For one thing, doctors rarely travel long distances (in that manner) without asking something in return... what will the survivors have to do in return?
(think of Hawaii or Mesoamerica)

what does that have to do with the statement "we don't need to have the algorithm actually write the book" ?
because if the contents of the book are important to the accuracy of the algorithm, then the algorithm needs to fucking work out what the book is about, so yes it *does* have to write the book.
Wow. So when will you update the various Sims and other simulations?
MadBrain is a genius.
