Incatena

Questions or discussions about Almea or Verduria-- also the Incatena. Also good for postings in Almean languages.
Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

please, rodlox, if you are not going to try to understand what i am trying to say, fuck off.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

no, that was uncourteous. here, in words of one syllable or less:
Rodlox wrote:if I make a toaster that doesn't make toast, you would call it broken.
not the same thing at all, christ, these are gods, not goods, they are just like you but more so. they are not like a grill at all, they have no point to be just as you do not.
1. AIs =/= Humans. if they are the godlike entities you envision, I sure hope they don't obey the same morals.
me too, as i said
there's a reason the human body re-absorbs fetal tissue if something goes wrong
cells do not have a mind, men do, gods do. a mind is a bad thing to waste.
and components of the Hittites keep recurring in politics worldwide - North Korea being the most recent example.
what the fuck do you mean
that doesn't make Hittites or Epicureans - or their respective philosophies - something that a far-future society should be based around because a 20th century person thought it up and therefore a future person should be able to.
men die, cows die, you, yes, you will die. but i know one thing that will not die: truth that dies not.
that's not repeating - you gave an entirely different "scope of the situation" before.
not true. you read it wrong the first time.
and if they have a thousand plagues at the same time - while there are billions of people all dying - I'd be wondering if there was an underlying problem, one that doctors can't solve)
no i said they *could*. you are wrong -- mark told us that it *works* and it is *true*.
No, it's not [equivalent to wanting them to sicken and die]
yes, it is since if you say a thing is good to do, then you want what comes next, since it is a good thing that comes next but if the things that come next are bad -- and worse than the things that come next if you do not do the thing, then it is bad to do the thing so you should not want to do the thing since bad is that which you should not do and good is that which you should.
for one thing, doctors rarely travel long distances (in that manner) without asking something in return....what will the survivors have to do in return?
mark tells us that folk are nice in space and do not use cash for base needs like drugs, health and so on -- read the page one more time.
wow. so when will you update the various SIMs and other simulations?
one last time, what the fuck do you mean?

Rodlox
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:
Rodlox wrote:if I make a toaster that doesn't make toast, you would call it broken.
not the same thing at all, christ, these are gods, not goods, they are just like you but more so. they are not like a grill at all, they have no point to be just as you do not.
okay, let's say we have a god....and it appears to be a rock. to all senses (of humans, machines, aliens, etc), it registers as a rock.

now, is it a god? and if it is, what is the god doing that a rock can't?

there's a reason the human body re-absorbs fetal tissue if something goes wrong
cells do not have a mind, men do, gods do. a mind is a bad thing to waste.
in purely biological terms, not really.....otherwise many parasites wouldn't exist. (as they evolved from brainier, more complex ancestors)

in social terms, yes, a mind is a bad thing to waste. but bear in mind that every single society on Earth has rules for who can do what - including who can be in charge.

(I wouldn't want myself - or some of the people I went to school with - to be in charge of the CDC or a nuclear facility....we simply lacked the brainpower for it)

now, if the AIs are as powerful as the two of you suggest, wouldn't it be a good thing if the AIs policed their fellows' abilities and intactness?
that doesn't make Hittites or Epicureans - or their respective philosophies - something that a far-future society should be based around because a 20th century person thought it up and therefore a future person should be able to.
men die, cows die, you, yes, you will die. but i know one thing that will not die: truth that dies not.
truth and bad ideas both are tough to kill.

and if they have a thousand plagues at the same time - while there are billions of people all dying - I'd be wondering if there was an underlying problem, one that doctors can't solve)
no i said they *could*. you are wrong -- mark told us that it *works* and it is *true*.
remember what Churchill said about democracy.

it may be that it works better than prior societal structures and is more true than its predecessors, but that does not mean it is the summit of all things - just the summit of known things.

No, it's not [equivalent to wanting them to sicken and die]
yes, it is since if you say a thing is good to do, then you want what comes next, since it is a good thing that comes next but if the things that come next are bad -- and worse than the things that come next if you do not do the thing, then it is bad to do the thing so you should not want to do the thing since bad is that which you should not do and good is that which you should.
it was a good thing that the Conquistadors came and ended human sacrifice in Mesoamerica? yes.

it was a good thing that the Mesoamericans died from plagues and STDs brought by the Conquistadors? no.
for one thing, doctors rarely travel long distances (in that manner) without asking something in return....what will the survivors have to do in return?
mark tells us that folk are nice in space and do not use cash for base needs like drugs, health and so on -- read the page one more time.
okay....so they're uber-altruists? why do they need AIs then? :D

my point remains: what will they ask in return for the assistance?

wow. so when will you update the various SIMs and other simulations?
one last time, what the fuck do you mean?
if "the more detail going in, the better & more accurate the result" applies to all simulations, then you need to update SimLife, SimCity, etc so there is greater level of detail.
MadBrain is a genius.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

Rodlox wrote:okay, let's say we have a god....and it appears to be a rock. to all senses (of humans, machines, aliens, etc), it registers as a rock.
now, is it a god? and if it is, what is the god doing that a rock can't?
um
Rodlox wrote:in social terms, yes, a mind is a bad thing to waste. but bear in mind that every single society on Earth has rules for who can do what - including who can be in charge.
so
Rodlox wrote:I wouldn't want myself - or some of the people I went to school with - to be in charge of the CDC or a nuclear facility....we simply lacked the brainpower for it)
good
Rodlox wrote:now, if the AIs are as powerful as the two of you suggest, wouldn't it be a good thing if the AIs policed their fellows' abilities and intactness?
uh
truth and bad ideas both are tough to kill.
deep

remember what Churchill said about democracy.
fuck
it was a good thing that the Conquistadors came and ended human sacrifice in Mesoamerica? yes.
gawrsh
it was a good thing that the Mesoamericans died from plagues and STDs brought by the Conquistadors? no.
:x
okay....so they're uber-altruists? why do they need AIs then? :D
thud
my point remains: what will they ask in return for the assistance?
love
if "the more detail going in, the better & more accurate the result" applies to all simulations, then you need to update SimLife, SimCity, etc so there is greater level of detail.
what

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

finlay wrote:Not too interested in reading walloftext conversations between zompist and pthag... but I did just want to point out that I didn't think there was enough linguistic development displayed. I for one highly doubt that 3000 years down the line, even if lifetimes become a bit longer towards the end of that 3000 years, we'll be speaking anything that's recognisably English, or Russian, or French.
Definitely; as I said, take the names I use as transliterations for contemporary speakers, or in some cases archaisms. (E.g. Novorossiya was settled just four centuries from now, so it's not unreasonable that its name is close to present-day Russian.)

But language change is going to slow down drastically due to the long lives. If I did the math right, there's about 44 generations between now and AD 4901. With normal historical lifespans that'd be just 1100 years’ worth of change.
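(For anyone checking the arithmetic, here is the back-of-the-envelope version; the 25-year historical generation and the 2011 baseline are just rough assumptions, not anything from the Incatena pages.)

Code:
span_years = 4901 - 2011                 # ~2890 years of future history, taking 2011 as "now"
generations = 44                         # the figure quoted above
years_per_generation = span_years / generations       # ~66 years per generation with long lives
historical_generation = 25               # assumed pre-longevity average
equivalent_years = generations * historical_generation  # ~1100 "normal" years' worth of change
print(years_per_generation, equivalent_years)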

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:But language change is going to slow down drastically due to the long lives. If I did the math right, there's about 44 generations between now and AD 4901. With normal historical lifespans that'd be just 1100 years’ worth of change.
How sure are you that this is not as ridiculous as saying that language change is going to slow down drastically due to the existence of audio recording technology? And how sure are you that current linguistics is going to be *enough*, when you take into account the existence of mind-machine interfaces and neural modifications? And the fact that people are used to significantly changing their "lifestyles" several times over in a life.

Without departing from the strictly acoustic: want to be able to *really* distinguish between more than two or three formants in vowels? Should be possible! *Really* fine pitch and length distinctions? Those too! And then you have multiple vocal tracts, or larynges capable of producing different waveforms etc. etc. It's the same problem as ethics -- 3000 years is a lot of time to develop distinctions and concepts of "linguistic change" that just do not apply at the moment!

NE: especially bearing in mind that most of the economy appears to be quaternary-and-higher sector wanking. Since that is all about masturbating with the media, research and "culture", if the energy of an interstellar em...imp...feder...polity that would otherwise be put into making things is put into it, I would expect [from here at my position in this the two-thousand and eleventh year of grace] fripperies like this to be huge markets!
Last edited by Pthagnar on Sat Jan 22, 2011 6:56 pm, edited 1 time in total.

Rodlox
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:
Rodlox wrote:okay, let's say we have a god....and it appears to be a rock. to all senses (of humans, machines, aliens, etc), it registers as a rock.
now, is it a god? and if it is, what is the god doing that a rock can't?
um
exactly.

Rodlox wrote:in social terms, yes, a mind is a bad thing to waste. but bear in mind that every single society on Earth has rules for who can do what - including who can be in charge.
so
I assume that's a question.

well, if we have rules for who can be in charge, why should AIs be exempt?
truth and bad ideas both are tough to kill.
deep
anti-Semitic actions and arguments have been around for over 1900 years. now, I doubt you would say that the longevity of anti-Semitism makes it a good idea. so why is Epicureanism a good idea just because it's been around a long time?

remember what Churchill said about democracy.
fuck
is that your way of conceding defeat? because if that's not what you meant, you're not my type.
it was a good thing that the Conquistadors came and ended human sacrifice in Mesoamerica? yes.
gawrsh
it was a good thing that the Mesoamericans died from plagues and STDs brought by the Conquistadors? no.
:x
well, your argument seemed to be "anyone coming to stop a bad thing, is automatically good - full stop."

okay....so they're uber-altruists? why do they need AIs then? :D
thud
another concession of defeat?

my point remains: what will they ask in return for the assistance?
love
they can't get love where they came from? starting to wonder which world had it better.
MadBrain is a genius.

Rodlox
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:
zompist wrote:But language change is going to slow down drastically due to the long lives. If I did the math right, there's about 44 generations between now and AD 4901. With normal historical lifespans that'd be just 1100 years’ worth of change.
How sure are you that this is not as ridiculous as saying that language change is going to slow down drastically due to the existence of audio recording technology? And how sure are you that current linguistics is going to be *enough*, when you take into account the existence of mind-machine interfaces and neural modifications? And the fact that people are used to significantly changing their "lifestyles" several times over in a life.

Without departing from the strictly acoustic: want to be able to *really* distinguish between more than two or three formants in vowels? Should be possible! *Really* fine pitch and length distinctions? Those too! And then you have multiple vocal tracts, or larynges capable of producing different waveforms etc. etc. It's the same problem as ethics -- 3000 years is a lot of time to develop distinctions and concepts of "linguistic change" that just do not apply at the moment!
remind me: do you want to read the Incatenaverse's stories? or are you just throwing darts?
MadBrain is a genius.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Mornche— well, all of that would be news to the researchers working on nano radio right now.
Mornche Geddick wrote: The only way I can think of that gets around the problem of getting through the skull or the blood-brain barrier, is to bring in the technology of teleportation. You teleport your nanobot into exactly the right place, have it deliver its package of growth hormone, microRNA, neural stem cell or whatever, and teleport it out again.

Personally I don't really like the idea of neuroimplants or VR suits / helmets. The only way I want to plug in to the Vee, or the Matrix or whatever . . .is by telepathy.
So you'd like to think at your devices to make them work? Congrats, you've reinvented the neurimplant. Your telepathy has to have some physical basis, as your neural impulses are physical, not spiritual.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Pthug wrote:
zompist wrote:But language change is going to slow down drastically due to the long lives. If I did the math right, there's about 44 generations between now and AD 4901. With normal historical lifespans that'd be just 1100 years’ worth of change.
How sure are you that this is not as ridiculous as saying that language change is going to slow down drastically due to the existence of audio recording technology?
Fairly sure, as I'm not mistaking the main source of language transmission: we adapt our speech to our peers in adolescence, not to what we hear on the radio. You only get one adolescence.

(With a caveat; I've entertained the notion that, in order to recapture some neoteny, people might choose to loosen up their brains, so to speak. Bored with life, disgusted with the new music, baffled by the latest devices? Replasticize your neural connections! Then you do get another adolescence, and that could increase the rate of language change.)

Now, I don't know that we have the last word on the effect of sound recording, once we discount the error that people learn their language from the radio. Mass media spread familiarity with different dialects... when I was a lad, I and my friends could quote Monty Python in an approximation of British dialects. It's an open question how long people will have a passive knowledge, at least, of 20C norms. As long as people still want to watch our media; but that only begs the question, as wanting to watch it surely correlates with being able to understand it. But it won't have a huge impact on their contemporary speech.
Without departing from the strictly acoustic: want to be able to *really* distinguish between more than two or three formants in vowels? Should be possible! *Really* fine pitch and length distinctions? Those too! And then you have multiple vocal tracts, or larynges capable of producing different waveforms etc. etc. It's the same problem as ethics -- 3000 years is a lot of time to develop distinctions and concepts of "linguistic change" that just do not apply at the moment!
Just like we've all adopted Speedtalk, Interlingua, and Basic English? The features you name are lovely conlanging ideas, but I don't see the obstacles to interlangs disappearing.

And sure, in 49C WoW maybe people talk Orkish, if they can pick it up with a neurimplant. Conlanging might well be a chunky economic sector... heck, it fits the main criteria for interstellar transport, as it's nothing but data. But it'd be just one niche competing with a thousand others, and people will still need lingua francas to communicate with whoever doesn't share their particular slice of geekdom.

Ran
Lebom
Posts: 145
Joined: Fri Sep 13, 2002 9:37 pm
Location: Winterfell / Lannisport / Highgarden

Re: Incatena

Post by Ran »

Um.... perhaps at some point we should just suspend disbelief (or doubt) and just enjoy the story. I mean, a conworld can be fun without being 100% logically worked out -- and this applies even more to a conworld that's made mainly to serve as the setting of a story.

Regarding the AI's fooming off into divinities thing, I am still not convinced that superhuman AI's wouldn't foom off into divinity simply by virtue of being superhumanly intelligent. I think pthug has it right on the "what [insert being here] is For" line of argument (or at least, what I understood of it, as I don't really have as much familiarity with philosophy, nor with sci-fi explorations thereof, as pthug) -- simply designing an accounting program, or a superhuman CEO, to be "happy" with what it is "for," and perhaps letting it be a poet instead if it agrees to give up its staff and money, isn't really going to cut it. This superhuman CEO being superhuman, why wouldn't it have its own intentions and desires (I use these terms of course with the caveat that, as pthug puts it, "desires" and "intentions" are the only models through which my meager Olduvai brain can possibly conceptualize and analyze the actions of such a hypothetically superhuman CEO), and why should such intentions and desires be so amenable to negotiation with or control by beings less intelligent than it? Why wouldn't it completely, and quite easily, co-opt all of the purposes, motives, knowledge, and freedom of humanity, for its own designs, in the same way humans are easily able to do the same for any animal species today? (Again, I'm using certain terms that pthug would call artificial, but being the Olduvai ape that I am, what can I do? Certainly there are parallel or superior concepts to such things as "purposes, motives and designs" that only a parahuman or superhuman mind could understand. But since it is rather futile to speculate on things that we are physically unable to understand, I'm just going to end this caveat right here.)

Of course, none of this has to affect how the book operates: since a superhuman AI is going to operate on concepts that humans don't and can't understand, clearly humans in such a future will live without the faintest clue as to what exactly the superhuman AI's are doing (beyond human-level approximations, of course). And so, a book written by a human, for other humans, is not going to (and is not able to) expound on such concepts either, any more than one dog could understand fully, and explain comprehensibly to other dogs, what it is that humans do for and with dogs and why humans do those things. In short, the book, and the conworld, need not mention at all anything about superhuman AI's, beyond the fact that such superhuman AI's exist, and we don't know completely what they are doing, but since we've not had a recession since we started using them, they are *clearly* 1) allied with us and 2) controlled by us for the purposes we use them for ["allied" and "controlled" being, again, human-level concepts, of course], so why should we worry our spongy brains about whether the superhuman CEOs are also super-manipulating humans in order to achieve their galactic super-designs?

[And a quick thing about the co-opting... sure, some of the humans could try to match the AI's ascent to divinity by upgrading their own cerebral hardware. But then we have simply shifted the problem, as it would then be the enhanced humans that are fooming off into gods. And if the characters of the book are to be such humans, that would render the book quite unwriteable.]

... finally, regarding the question of whether the Socionomics simulator will actually contain sprites so bright that we "shouldn't" kill them off, I also liked pthug's book example... after all, to properly simulate the effect of books, you cannot treat them all as fungible objects. You're going to have to account for the effect of books on society... but to do that, you need to simulate how other sprites in SimCity 90000 are going to react to the book. And how do you simulate that, except by actually getting one sprite to write the book, and the other sprites to read it? And of course this book example applies across all elements of SimCity 90000 -- from natural parks to healthcare systems -- so that eventually you DO have a system that behaves just like an actual universe, with sprites in it that interact just like real humans. Why then aren't the sprites sentient in the same way humans are sentient?

Like I said at the beginning, I'm fine with suspension of disbelief/doubt and just reading a good story. After all, as is pointed out in PCK there're plenty of fantasy novels where WMD-like magical powers have bafflingly not been industrialized and weaponized, yet I've enjoyed those, and clearly Incatena is much superior to those books. So really, we don't have to work these puzzles out, though I agree with pthug that these puzzles exist.

[On a completely unrelated note, and in the hopes that this hasn't been answered already, I have a question about the trade of physical goods. Why do DNA, seeds etc. need to be traded? Isn't it much, much cheaper to trade the specs and have the recipient reconstruct the thing themselves?]
Winter is coming

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:With a caveat; I've entertained the notion that, in order to recapture some neoteny, people might choose to loosen up their brains, so to speak. Bored with life, disgusted with the new music, baffled by the latest devices? Replasticize your neural connections! Then you do get another adolescence, and that could increase the rate of language change.
I don't think there's really any "might" about it, unless you are expecting that people will decide to live centuries in the senescent, hidebound, cranky stage that is associated with the elderly, and to a certain extent, "adulthood" in the present day. I should think that the preferred and practically *default* state would be a sort of "mature late adolescence" -- "If I were 19, knowing what I did now...".
zompist wrote:As long as people still want to watch our media; but that only begs the question, as wanting to watch it surely correlates with being able to understand it.
What sort of a freak wants to watch flat video with no neural overlay giving it full glass-bead-context? The sort of hippy freaks who believe in woowoo like laissez-faire stock markets and haruspicy, *that's* who.
Just like we've all adopted Speedtalk, Interlingua, and Basic English? The features you name are lovely conlanging ideas, but I don't see the obstacles to interlangs disappearing.
Obviously *we* haven't, because *we* don't live in a magical future land of consciousness-expansion where coming up with and consuming wacky/fun/business-critical ways to communicate ever more subtle and complex concepts is big business -- we're stuck with unglamorous occupations like growing corn, coding bespoke databases and flogging sugarwater.
zompist wrote:And sure, in 49C WoW maybe people talk Orkish, if they can pick it up with a neurimplant. Conlanging might well be a chunky economic sector... heck, it fits the main criteria for interstellar transport, as it's nothing but data. But it'd be just one niche competing with a thousand others, and people will still need lingua francas to communicate with whoever doesn't share their particular slice of geekdom.
You are not churning through modus ponens enough! This need *not* be a geeky niche -- this will be a standard way of dealing with media of the time! My god, *everyone* has all this multiplexed bandwidth wired right into every functional neural cluster in their brains, with the only people dissenting being barbarians, and this state of affairs has been in existence for centuries, and all the money in the empire goes into making this as fun and complex and profitable as possible, and you think the "lingua franca" is going to be still basically as restricted as present-day languages except with a millennium's worth of sound- and grammatical change?

SPECIAL NINJA EDIT FOR RAN: No, those terms are no more artificial than any other. I see no problem with saying that they can have desires, thoughts, "purposes, motives and designs," etc. -- that's *definitely* what it means to be an agent and probably to be Dasein too!

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Trying to reduce a bit of the wall-of-textness...
but by all means you and a hundred thousand pals are welcome, in the Incatena world, to go try it out... either create an Epicurean AI, or turn yourselves into AIs, or tell the madding crowd to piss off while you establish an Epicurean space habitat
But *DID* anyone? This is me here, right now, in 2011 that came up with this idea -- this is going to be *easy to do* in 3000 AD! *Does* the Incatenaverse have a bunch of *centuries old* AI gods like this -- not all of which will have worked out properly, or been educated properly? If not, *why* not?
The Incatena is a big place, so sure, why not? The Pthag Rest Habitat for Epicurean AIs is hereby launched. (Give me your real name and I'll even put it in the documentation somewhere.)
I know this doesn't have the grandeur of somehow convincing the whole species to follow you, but I think getting the whole species to do what you want is an unhealthy aspiration.
I do not see that socionomics is any different except for some reason its practitioners are the equivalent of doctors who are happy for groups of millions of people to go on using witchcraft to the exclusion of medicine that works, and when picked up on this, they shrug their shoulders and go "Oh well they're romantics, what are you going to do?". If socionomics is as *right* and *true* as you say it is, then the situation is exactly the same, if not even *more so*. Otherwise it is simply a bland cover for imperialism under another name.
It's an ethical dilemma! But not, I think, a very hard one. If a bunch of fundies— or Epicureans— want to run off to the void on their own terms, the consensus is that it's a win-win situation.

If a whole planet gets taken over by crazy people who want to make their citizens miserable... well, then we have a situation, and that's what I've written a novel about.
Well because you were expressing the v. Cold War idea that unless we humans learn to live together in nuclear peace, we are going to blow ourselves up in the next couple of centuries [were you this optimistic in the 80s or the 70s or the 60s or the 50s? I forget how old you are exactly, so you can pretend to be your father or something if the last couple don't apply] before we "expand into space". I disagree with this -- I think it is possible that we could set up human colonies in the solar system and possibly, Einstein willing, around other stars without solving any *real fundamental* problems with politics.
Of course we can tool around the solar system— though colonizing anything beyond Luna in the next few centuries is unlikely. Alpha Centauri, no. The problem isn't optimism; it's the damn distance. You either need relativistic ships whose energy needs require megaprojects (i.e. things we can't be assumed to have in the next few centuries), or you need generation ships where, yes, you have to solve the problem of keeping an ecosphere and politics stable for hundreds of years.
E.g. we might simulate a writer, and the algorithm has him producing a unit of work, a book. We don't need to have the algorithm actually write the book.
That is all very well if the book is heavily commoditised, like some sort of Star Wars expanded universe novel or another King book, but you should easily be able to name authors who wrote books whose content *actually matters to people and to the development of society at large*.
As an aspiring novelist, I'm charmed that you think so highly of novelists. But thinking as a tech writing simulations, it can all be estimated by throwing in a non-Gaussian distribution for number of sales. Shakespeare is just a black swan of the book market.
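Roughly the sort of thing I mean -- a toy sketch only, with made-up Pareto parameters standing in for whatever distribution a real socionomics model would use:

Code:
import random

# Toy model: book sales drawn from a heavy-tailed (Pareto) distribution.
# alpha and the scale are invented for illustration; the point is only that
# almost all books sell modestly while a handful are black swans.
random.seed(0)
alpha, scale = 1.2, 500          # shape and minimum-sales scale (assumed)
sales = [scale * random.paretovariate(alpha) for _ in range(100_000)]
sales.sort(reverse=True)
top_share = sum(sales[:1000]) / sum(sales)   # share of sales held by the top 1%
print(f"median sales: {sales[len(sales)//2]:.0f}")
print(f"top 1% of books take {top_share:.0%} of all sales")

Most books cluster near the median while a tiny fraction takes most of the sales, which is about all the simulator needs to know about Shakespeare.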

A much more interesting problem is how socionomics models progress. (And of course progress does come from books; e.g. maybe our writer is Karl Marx.) One answer is to just allow some of our sims to boost productivity, again with a non-Gaussian distribution. So Sim X23423 suddenly starts producing his goods faster, and we model how this knowledge spreads.
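Again a toy sketch only -- the grid size, boost and adoption rate are invented for illustration, not part of the setting:

Code:
import random

# Toy diffusion model: one sim gets a productivity boost; each tick, every
# boosted sim passes the technique to each un-boosted neighbour with a
# fixed probability, and we watch how fast the knowledge spreads.
random.seed(1)
N = 30                                    # 30x30 grid of sims
adopt_p = 0.25                            # chance of copying a boosted neighbour per tick
boosted = {(N // 2, N // 2)}              # "Sim X23423" starts the innovation

def neighbours(x, y):
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < N and 0 <= y + dy < N]

for tick in range(40):
    new = set()
    for x, y in boosted:
        for nb in neighbours(x, y):
            if nb not in boosted and random.random() < adopt_p:
                new.add(nb)
    boosted |= new
    if tick % 10 == 0:
        print(f"tick {tick}: {len(boosted)} of {N*N} sims have the boost")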

What about the real game changers, like Hari Seldon? We don't model them; those are the holes in socionomics. It's not thermodynamics where there's never a really wonderful electron whose properties far exceed the normal range.

finlay
Sumerul
Posts: 3600
Joined: Mon Dec 22, 2003 12:35 pm
Location: Tokyo

Re: Incatena

Post by finlay »

What about that gene thing? I'm puzzled as to how you think genes work. Hint: It's not really like this.

However, I can totally buy sex reassignment surgery becoming more viable in the future (particularly in the genital area, which still has a long way to go in the →male type and currently leaves trans people infertile. A transwoman able to give birth would be interesting too.), and I can imagine people surgically drafting fur or tails onto themselves, like plastic surgery or something. I'm not totally convinced that SRS will become as widespread as you imply, although with the whole restarting every century or so with a new life, I suppose it makes some grain of sense if someone has a later-in-life crisis and decides they need to try the other side for a while or something. But you'll definitely get people who stay with one sex their entire life, and those who (as you get today) have fluid gender roles and don't really give a shit about genitals.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:The Incatena is a big place, so sure, why not? The Pthag Rest Habitat for Epicurean AIs is hereby launched. (Give me your real name and I'll even put it in the documentation somewhere.)
My real name is Επίκουρος ο Σάμιος. I invented the idea of gods living in the Mετακόσμια -- probably a name better suited to their dignity. If you want to cite me properly, then unfortunately most of my work has got lost on the way to the publisher. I can recommend D Laertius (-230) Βίοι καὶ γνῶμαι τῶν ἐν φιλοσοφίᾳ εὐδοκιμησάντων, βιβλος ιʹ - Ἐπίκουρος -- it is mostly in there as clearly as I have presented it. There are several copies available at Alexandria if you cannot find anyone with it locally.
It's an ethical dilemma! But not, I think, a very hard one. If a bunch of fundies— or Epicureans— want to run off to the void on their own terms, the consensus is that it's a win-win situation.
The Epicurean gods, yes, provided they do not grow deranged, but rogue polities in general? *Fuck* no! You think it's a win-win situation to have gods, or apotheotics, or would-be-godmakers with the ability to generate post-scarcity quantities of energy from any nearby stars or even zero-point energy from the vacuum just toddle off into the fog of where, where you have no idea what they are getting up to? Hegemonising swarms would only be the *beginning* of your troubles!
If a whole planet gets taken over by crazy people who want to make their citizens miserable... well, then we have a situation, and that's what I've written a novel about.
Super!
or you need generation ships where, yes, you have to solve the problem of keeping an ecosphere and politics stable for hundreds of years.
Okay, I grant you that you require some degree of stability for generation ships to work. But they could survive many kinds of limited war, especially if you send lots and lots out. And optimising microcosmoi like generation ships seems like an absolutely *perfect* problem for highly detailed AI simulations, provided you don't mind taking on responsibility for all the fuckups!

Why is everybody living on planets instead of in space, anyway?
As an aspiring novelist, I'm charmed that you think so highly of novelists.
Hey, I have the greatest respect for all craftsmen! It's not *me* who created the commodity system, no, that's that rascal Karl Marx's fault!
One answer is to just allow some of our sims to boost productivity, again with a non-Gaussian distribution. So Sim X23423 suddenly starts producing his goods faster, and we model how this knowledge spreads.
And this boost we call "Marxism". Are you sure this isn't better suited for modelling Who Moved My Cheese?
We don't model them; those are the holes in socionomics
Then I think your idea of socionomics is too small -- a chemistry-like truth for how societies work over large time- and space-scales would be able to work out such things, and not fail to reproduce as simple and as small-scale a phenomenon as the industrial revolution. Or World War 2 for that matter.
Last edited by Pthagnar on Sat Jan 22, 2011 8:56 pm, edited 1 time in total.

finlay
Sumerul
Posts: 3600
Joined: Mon Dec 22, 2003 12:35 pm
Location: Tokyo

Re: Incatena

Post by finlay »

Is socionomics like psychohistory?

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

finlay wrote:Is socionomics like psychohistory?
The real question is "How is socionomics *not* like psychohistory?". Psychohistory has anomalies too -- its pioneers are just Hari Seldons who are aware of the existence of mules, who exist in larger numbers. Also, it requires supercomputers, rather than a pocket calculator.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Ran wrote:Regarding the AI's fooming off into divinities thing, I am still not convinced that superhuman AI's wouldn't foom off into divinity simply by the virtue of them being superhumanly intelligent. I think pthug has it right on the "what [insert being here] is For" line of argument (or at least, what I understood of it, as I don't really have as much familiarity with philosophy, nor with sci-fi explorations thereof, as pthug), -- simply designing an accounting program, or a superhuman CEO, to be "happy" with what it is "for," and perhaps letting it be a poet instead if it agrees to give up its staff and money, isn't really going to cut it. This superhuman CEO being superhuman, why wouldn't it have its own intentions and desires [...] and why should such intentions and desires be so amenable to negotiation with or control by beings less intelligent than it? Why wouldn't it completely, and quite easily, co-opt all of the purposes, motives, knowledge, and freedom of humanity, for its own designs, in the same way humans are easily able to do the same for any animal species today?
The problem I have with all this fooming, even when an author as good as Stross does it, is that it's so undefined, and the activities the writers suggest are so lame. Admittedly it's a narrative problem: what do gods want to do? But it's really no answer to say "Uh... whatever they want! Stuff!"

So, let's try to be more specific: in what way are the Incatena AIs superhuman? Given their purpose, I think the answer is clear: they have more bandwidth. They can contemplate more data— they can form a bigger picture, make better connections, with a more detailed model than we can.

And as the Hamurabi examples (or Radius's walkers) demonstrate, this need not be because they have a greatly expanded consciousness. Much of the processing might be distributed to not-very-smart sub-units. To the central decisionmaking unit, much of this might filter up as insights or connections whose origin it can't explain (unless it drills down and asks). The central unit isn't exactly smarter than human beings; it just has access to far better insights since its sub-agents are numerous and busy.

Our own brains follow this distributed model much more than that of a "supercomputer". If we have what we call an "idea", we don't really know where it came from or what its internal structure is! Some subconscious agent created it for us.
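To make that concrete, a minimal sketch of the sub-agent picture -- the names, numbers and scoring heuristic are all invented for illustration, not a description of how an Incatena AI is actually built:

Code:
import random

# Illustrative sub-agent model: many dumb workers each watch one slice
# of the data and report scored "insights"; the central unit ranks them
# without knowing how they were produced (unless it drills down).
random.seed(2)

class SubAgent:
    def __init__(self, name, slice_of_data):
        self.name, self.data = name, slice_of_data
    def insight(self):
        # A not-very-smart heuristic: flag the biggest deviation it sees.
        score = max(self.data) - sum(self.data) / len(self.data)
        return {"summary": f"{self.name}: something is spiking", "score": score,
                "derivation": self.data}     # kept around for drill-down only

agents = [SubAgent(f"unit-{i}", [random.gauss(100, 10) for _ in range(50)])
          for i in range(200)]

# Central unit: take the top few insights by score; it never looks at
# "derivation" unless it chooses to ask.
insights = sorted((a.insight() for a in agents), key=lambda d: d["score"], reverse=True)
for d in insights[:3]:
    print(d["summary"], round(d["score"], 1))

The central unit only ever sees the summary and the score; the derivation sits there unread unless it chooses to drill down, which is roughly what I mean by insights whose origin it can't explain.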

What desires does this AI have? It starts with those implanted in the system— if it's running a corporation, the desire to make a profit while pleasing stakeholders. Where do other desires come from? On the whole we can't reprogram ourselves very much and I don't see why these AIs would be programmed differently.

(Actually the discussion is making me question even further the necessity for AI at all. It may be done for a lark, or to populate the Pthag Habitat, or to investigate the nature of our own minds, but those corporate minds are more akin to glorified data mining applications.)

(As for the sim stuff, I've answered that above.)
[On a completely unrelated note, and in the hopes that this hasn't been answered already, I have a question about the trade of physical goods. Why do DNA, seeds etc. need to be traded? Isn't it much, much cheaper to trade the specs and have the recipient reconstruct the thing themselves?]
Suppose you're investigating the native ecosphere of Okura and you'd like to send some sample cells to Sihor. Are you sure that your data representation contains everything Sihor wants to know about this alien cell, especially as if you get it wrong it'll take 28 years before you can send a correction? It's much safer to send a small biological sample.

I do posit nanoduplication as a 49C technology, but I think it can't reliably create a living organism. (E.g. you're nanoduping a cow and you're halfway through. Half a cow is usually referred to as a corpse. I'm not buying nanoduplication as fast as a Star Trek transporter.)

Radius Solis
Smeric
Posts: 1248
Joined: Tue Mar 30, 2004 5:40 pm
Location: Si'ahl

Re: Incatena

Post by Radius Solis »

zompist wrote:
Pthug wrote:
zompist wrote:But language change is going to slow down drastically due to the long lives. If I did the math right, there's about 44 generations between now and AD 4901. With normal historical lifespans that'd be just 1100 years’ worth of change.
How sure are you that this is not as ridiculous as saying that language change is going to slow down drastically due to the existence of audio recording technology?
Fairly sure, as I'm not mistaking the main source of language transmission: we adapt our speech to our peers in adolescence, not to what we hear on the radio. You only get one adolescence.
Well, this would seem right if we model sound change as something like a transmission error between generations. But people's speech does not just freeze when they mature, as far as I'm aware. The only two pieces of specific evidence I know of are, first and least, that even the Queen - whose speeches over the many years of her reign have been analyzed for it - is known to still be acquiring new sound changes from society. And second and greater, I recall there being an extensive sociolinguistic study of how sound changes start and spread: a key point was that at least in America, they tend to start among middle-aged females and spread among already-mature speakers even before reaching the currently adolescent generation, whereas sound changes starting among under-20s are paradoxically less likely to stick, perhaps because most of them do eventually try to conform to the adult world when it comes time for things like careers and marriage.

If I am indeed recalling these points correctly, which it's possible I'm not, then it seems to me like there should be little reason to expect sound change rate to be tied to the rate of generational replacement.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

zompist wrote:The problem I have with all this fooming, even when an author as good as Stross does it, is that it's so undefined, and the activities the writers suggest are so lame. Admittedly it's a narrative problem: what do gods want to do? But it's really no answer to say "Uh... whatever they want! Stuff!"
This is where fiction falls behind philosophy -- it doesn't say anywhere that the actual universe has to be fun. [In fact, it mostly isn't.] People love to quote the old Haldane line about the universe being queerer than we *can* think, but when actual examples are pointed out, the idea loses its charm. There is an awful lot of philosophy written about what gods might want to do, but here is another irony -- those people [i am talking here of singularitarians, mostly, rather than people who believe the Rapture is imminent, but it holds a little even then] who *do* hold onto the charm of Haldane's idea are too fast to rubbish theology, so we get people who do not believe that gods are atomic *real* beings like themselves devouring what the most intelligent human beings have had to say over the past millennia about what superhuman beings might be like, whereas those who believe that they will be in the real presence of those gods within the next few decades don't give a shit. It is most amusing.
zompist wrote:The central unit isn't exactly smarter than human beings; it just has access to far better insights since its sub-agents are numerous and busy.
If that does not count as being like a human being, but smarter, then what *does*? This is a question I would be particularly interested to hear your take on, because to me you are talking nonsense:

As you point out in the next category, you have just described the human psyche in outline, except humans are much more ignorant, which means that this person is superhuman in two ways -- first, it is more "intelligent" in that it can come up with answers to a larger set of problems faster than humans can, including ethical, creative and scientific problems. Secondly -- and perhaps more importantly -- it is profoundly more "mindful", or "luminous" or has great "insight" or -- yes -- has a more "expanded consciousness" than a human in that it is capable of needling down into the subcomponents that even the most mindful meditator cannot penetrate, and can tell whether or not that component is full of shit and can be safely ignored this time, and to what degree it *keeps* talking shit and requires retraining.

These are sort of two different things -- somebody with little computational power and little insight is somebody who is profoundly retarded. Somebody with strong powers in both is a genius. The other two cases are interesting -- the extreme case of somebody with intelligence but no insight is an idiot savant [or your original conception of your AIs, unsurprisingly]. I do not know what sort of person would have no intelligence but deep insight -- a mystic, perhaps?
zompist wrote:What desires does this AI have? It starts with those implanted in the system— if it's running a corporation, the desire to make a profit while pleasing stakeholders. Where do other desires come from? On the whole we can't reprogram ourselves very much and I don't see why these AIs would be programmed differently.
Because "making a profit" and "pleasing stakeholders" are not, really, atomic desires, are they? You are really talking about teaching it *ethics* and *values* -- a fact recognised by Asimov. Now, obviously the Laws of Robotics are useless because they're just axioms for logic puzzles, so if you want your AI to interact with human stakeholders, you are going to have to build into it a *recognisably human or sortahuman ethical system*. And ethics is *hard* and the source of many problems and also of *desires*.
zompist wrote:(Actually the discussion is making me question even further the necessity for AI at all. It may be done for a lark, or to populate the Pthag Habitat, or to investigate the nature of our own minds, but those corporate minds are more akin to glorified data mining applications.)
What do you think *you* are, that you can be so snotty about "glorified data mining applications", a shard of the pleroma fallen into Saclas' black iron prison?

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Re: Incatena

Post by zompist »

Radius Solis wrote:Well, this would seem right if we model sound change as something like a transmission error between generations. But people's speech does not just freeze when they mature, as far as I'm aware. The only two pieces of specific evidence I know of are, first and least, that even the Queen - whose speeches over the many years of her reign have been analyzed for it - is known to still be acquiring new sound changes from society. And second and greater, I recall there being an extensive sociolinguistic study of how sound changes start and spread: a key point was that at least in America, they tend to start among middle-aged females and spread among already-mature speakers even before reaching the currently adolescent generation, whereas sound changes starting among under-20s are paradoxically less likely to stick, perhaps because most of them do eventually try to conform to the adult world when it comes time for things like careers and marriage.
My understanding is that the biggest source of standardization is physical mixing among adolescents and young adults, as you get in schools, universities, and army barracks with universal service. You have to have both exposure to other dialects, and social pressure to conform. (My classmates and I learned to understand and imitate Monty Python, but had no motivation to always speak like them.)

By contrast, adults have a much higher tolerance for moderate accents, and these therefore can persist indefinitely. Certainly older people's speech does change, but as a counter-anecdote to the Queen I'll mention a woman I met in a nursing home; she was French and still had a French accent after being in the US for something like 50 years... despite, in fact, not being able to communicate well in French! Likewise I know many Hispanic immigrants-- my wife, for instance-- who don't lose their accent, though it becomes much less dramatic.

(I should read Labov again on all this, though good Lord he's a chore to read.)

Rodlox
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:
zompist wrote:
It's an ethical dilemma! But not, I think, a very hard one. If a bunch of fundies— or Epicureans— want to run off to the void on their own terms, the consensus is that it's a win-win situation.
The Epicurean gods, yes, provided they do not grow deranged, but rogue polities in general? *Fuck* no! You think it's a win-win situation to have gods, or apotheotics, or would-be-godmakers with the ability to generate post-scarcity quantities of energy from any nearby stars or even zero-point energy from the vacuum just toddle off into the fog of where, where you have no idea what they are getting up to? Hegemonising swarms would only be the *beginning* of your troubles!
let me see if I understand this....

you want the AIs to be unregulated (they can be any level of sanity, completeness, development)

you also warn that AIs can't just wander off into the back of beyond (where they can be as Epicurean or whatnot as they like)

:?:
MadBrain is a genius.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Re: Incatena

Post by Pthagnar »

You do not understand.

Rodlox
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:
zompist wrote:The central unit isn't exactly smarter than human beings; it just has access to far better insights since its sub-agents are numerous and busy.
If that does not count as being like a human being, but smarter, then what *does*? This is a question I would be particularly interested to hear your take on, because to me you are talking nonsense:
a modern computer can process far more data than any human can, can search for and locate images/data better than a human can, can perform many tasks at the same time - and can do all that far faster and better than a human can. is a computer more intelligent than a human?
MadBrain is a genius.

Rodlox
Avisaru
Posts: 281
Joined: Tue Jul 12, 2005 11:02 am

Re: Incatena

Post by Rodlox »

Pthug wrote:You do not understand.
which part?

don't say "you do not understand" like a Vorlon in syndication :wink: - tell me which part I don't understand.
MadBrain is a genius.
