zompist wrote:I'm trying to be more precise about what kind of "superhuman" we're talking about, since I think talk about "divinity" is just geeky wishful thinking. In the case I was describing, it'd be more accurate to say "the central decisionmaking bit isn't smarter than human consciousness."
Central decision-making bit? You have a central decision-making bit? May I see it? How smart is it?
I didn't say it was faster. That's a desideratum for (say) a missile defense system, but not for (say) running a corporation. Heck, if the AI takes an hour to make a decision where the CEO took three minutes, that's fine if the decisions are better.
This is not a fair comparison. If the decision is better, then either the algorithm, or the input data, or both are not the same, and so either the algorithm, or the input data, or both are better.
And obviously in many circumstances, the CEO *does* want to be quick -- suppose he is trying to impress somebody with what a clever person he is? Or suppose he is negotiating something in a situation where time is of the essence. Also, I would say you are being too rational -- humans find being able to come up with good answers quickly to be a virtue, very often a greater virtue than reaching the right answer after careful meditation: which is why I describe the intersection of good insight [see below] and good intelligence as "genius".
(It's a commonplace
FUCK YOU. YOU are not ALLOWED to begin an explanation with "It is a commonplace", that's MY beginning-an-explanation phrase!
that individual operations on a computer are absurdly faster than neurons; also quite misleading. Fast computation is not intelligence, and it's quite easy to program things that use up all that speed.
Doesn't work that way around, yeah, but the other way? You think you can get higher intelligence (sorta ceteris paribus -- I am assuming we are comparing a sort of general-purpose Bayesian inference engine, with the same priors and inputs coming in, or being sampled as fast as the algorithm can take them) without faster computation and more bandwidth?
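Here is a toy Python sketch of the ceteris-paribus claim (everything in it is invented for illustration -- the Bernoulli stream, the uniform prior, the 10x throughput ratio): two identical conjugate Beta-Bernoulli engines get the same wall-clock budget and differ only in how many observations they can ingest per tick. The faster one ends up with the tighter posterior, which is all that "more computation and bandwidth buys better inference" amounts to.

[code]
import random

random.seed(0)

TRUE_P = 0.7  # hidden Bernoulli parameter both engines are estimating

def run_engine(samples_per_tick, ticks):
    """Conjugate Beta-Bernoulli updating: just count successes and failures."""
    alpha, beta = 1.0, 1.0  # Beta(1,1) = uniform prior, identical for both
    for _ in range(ticks * samples_per_tick):
        x = 1 if random.random() < TRUE_P else 0
        alpha += x
        beta += 1 - x
    mean = alpha / (alpha + beta)
    var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, var

# Same prior, same kind of input, same wall-clock budget of 10 ticks;
# the only difference is throughput.
slow = run_engine(samples_per_tick=10, ticks=10)   # 100 observations
fast = run_engine(samples_per_tick=100, ticks=10)  # 1000 observations
print("slow engine: mean=%.3f, var=%.2e" % slow)
print("fast engine: mean=%.3f, var=%.2e" % fast)
[/code]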
And if you really want to do arithmetic fast, you just ask the computer today, or the neurimplant in the future.)
Everything is arithmetic. Simulating a human brain from quantum-mechanical first principles is arithmetic, so is simulating a human brain with a sufficiently huge Bayesian network. According to the rising dogma, your brain is doing a lot of arithmetic right now just by virtue of being A Part Of Physics.
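And in case "arithmetic" sounds like hyperbole, here is exact inference in the smallest Bayesian network I can write down (Rain -> WetGrass; all three probabilities invented): marginalisation and Bayes' rule are literally a few multiplications, additions and one division. The dogma just asks you to scale that up by twenty-odd orders of magnitude.

[code]
# Rain -> WetGrass, the smallest possible Bayesian network.
# All three numbers are invented for illustration.
P_rain = 0.2
P_wet_given_rain = 0.9
P_wet_given_dry = 0.1

# Marginalisation, then Bayes' rule, spelled out as bare arithmetic:
P_wet = P_wet_given_rain * P_rain + P_wet_given_dry * (1 - P_rain)  # 0.26
P_rain_given_wet = P_wet_given_rain * P_rain / P_wet                # ~0.692

print("P(wet)        = %.3f" % P_wet)
print("P(rain | wet) = %.3f" % P_rain_given_wet)
[/code]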
As I described it, it need not have an "expanded consciousness". It has better or more numerous insights because it can call on so many sub-agents.
Perhaps here I am not explaining myself well enough -- I chose the word "insight" because I am reading a lot about Buddhist meditation at the moment, and "insight meditation" is not really what the usual English meaning of the words would lead you to believe. But I was aware that this might be confusing if "insight" were given as the sole label for the phenomenon-cluster I mean, so I threw some more words near it. I think the problem here is that I am not sure what you *mean* by "expanded consciousness", because it seems to me that by definition (and by common usage of the word!) "consciousness expansion" is *precisely* the bringing to awareness [or "consciousness"] of the sub-agents that make up the mind: either by paying careful attention to them, or by the use of drugs that add a bunch of neural noise that results in latent connections being temporarily lit up, or by the application of cognitive-science results that identify how subconscious agents come to irrational conclusions, with a view to compensating for them -- or, you know, the sort of thing I mean. A Dasein built with the "insight" or "mindfulness" to be able to drill down, perhaps even to near the machine-code level [i.e. we are talking Bene Gesserit shit here], would beat all of these baseline-human techniques, and be immensely more capable of rational problem-solving and of better marshalling however many sub-agents are within it. The degree to which one is able to call on multiple sub-agents, whether or not one is conscious of them or capable of altering their running, would, I think, more accurately be called something like "complexity".
Actually, we could stop there, with the agents bringing up good insights and plenty of them, but presenting them to humans. It'd be like having a cadre of smart dudes who are always saying "Did you think about X?" and having a high probability of being right.
How can you *always* be asking "DID YOU THINK ABOUT ASKING THE PRIME MINISTER OF NEPAL ABOUT HIS WIFE'S LIVER COMPLAINTS?" and yet have a high probability of saying something pertinent to the computation? I do not understand what you are saying here at all.
The reason we might want an evaluative central function anyway, however, is that humans aren't always good at weighting. A now-classic example is the American government before 9/11: the information was there (some bright agent noticed the bad guys taking flight training, among other things) but the people in power dismissed it. So we need not only bright ideas, but a protocol for making sure they're found, listened to, and addressed. CEOs really are arrogant idiots much of the time and we either need better corporate governance or an AI to do it more carefully.
I don't see how the story has anything to do with Why You Might Want an Evaluative Central Function. There *was* an ECF in this situation, and it fucked up. The lesson seems to be more "Hey, don't fuck up! Try coming up with optimal decisions more often!" rather than "Hey, look what happens when you don't have somebody on the top making decisions!" And what do you come up with? A protocol -- but what does the protocol do? The protocol *decentralises* decision making, since it allows lower-status subnodes to be given a larger voice than they would ordinarily get in the hierarchy!
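To be concrete about what such a protocol even looks like, here is a sketch I made up (not a claim about how any real agency or corporation works): rank each sub-agent's warning by its own confidence times the agent's historical calibration, with status appearing nowhere in the formula. The junior field agent's flight-training tip floats to the top, and no Evaluative Central Function was required to put it there.

[code]
from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str           # who raised it
    claim: str
    confidence: float    # the agent's own probability that it matters
    track_record: float  # fraction of this agent's past calls that held up

def priority(p: Proposal) -> float:
    # Status is deliberately absent: a well-calibrated junior beats a
    # badly calibrated senior.
    return p.confidence * p.track_record

inbox = [
    Proposal("senior_analyst", "nothing unusual this month", 0.8, 0.55),
    Proposal("field_agent", "suspicious flight-training pattern", 0.6, 0.90),
]

for p in sorted(inbox, key=priority, reverse=True):
    print("%.2f  %s: %s" % (priority(p), p.agent, p.claim))
[/code]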
-----
This next bit is good. I wrote it first and look I'm doing little Salmoneus-type horizontal bars! A MARK OF QUALITY
-----
Our instinctual desires are not an ethical system.
But I am afraid that they *are*. Even taking a rather restricted view of "instinctual", this is still the case -- good is that which one should do and evil that which one should avoid. Eating and fucking are good, pain is evil. A good agent is one that is an occasion of good and an evil agent is one that is an occasion of evil.
But I did not say "instinctual", and for this reason -- the term is *properly* applied to animals, to whose psyches we have no real access, and so we must depend on the *specifically behaviourist* concept of "instinct". If you look at where the idea of "instinct" is most easily applied to humans, you will notice it is applied to similarly non-psychological, but human, cases -- those of infants, or of spinal reflex arcs.
And so humans properly have "desires" rather than "instincts", and naturally among these "desires" are correlates to the "instincts" of animals -- feeding, fucking, fighting and so on. This is why I can say that our "instinctual desires" (i.e. those desires that are correlated [and cognate] with animalian instincts) are ethical: because all desires are placed in a moral framework. But there are also desires that are only *sort of* "instinctual desires" -- the desire for high status, let us say, which is correlated with various mammalian "instincts" involving the modulation of eating, aggression, mating etc. -- but that are nonetheless not *considered* to be "instinctual": such things as justice, honour, fairness, power, greed and so on.
Many people assume that AIs are restricted to being "rational" and never emotional. But I don't take emotions to be irrational; rather, they're part of our long-term biological toolkit as social primates. Evolution has had a bright idea here, and I think it'll be applied to AIs.
I should hope I have said nothing to give the impression that I am among that multitude.