rotting bones wrote:Salmoneus wrote:Promoting dictatorship on the grounds that, after much terrible bloodshed, it might be possible to reinstate democracy one day (and presumably then reinstitute dictatorship again to solve new problems as they arise, ad infinitum) is still promoting dictatorship. Particularly when it's dictatorship by a literal superhuman.
What? I meant create a democracy immediately after revolution with institutions balanced to handle climate change.
This just makes no practical sense. You think our failure to correctly value long-term costs can be solved by... slightly rejigging our constitutions!? In a way that only a computer could calculate?
How is this utopia implemented? By asking existing governments to reform themselves the way you prefer, instead of the way their citizens prefer? That's not going to happen. By forcing them to reform? Well, they've got the guns and the nukes and you have a big calculator, so I don't see what your practical route to reform is here.
But more than that, I'm just perplexed by this weird combination of wild utopianism (there is some slightly different arrangement of democratic structures - a change in term limits, perhaps, or a reassessment of adjudication procedures for state-federal disagreements - that will magically solve these fundamental problems of human psychology) and fatalist pessimism (no human being could think of these reforms, and no progress at all is possible without exactly these reforms). This paradoxical and improbable set of beliefs doesn't seem to have any discernible foundation.
I mean, I can at least understand the reasoning behind "humans can't solve these problems, so an AI will have to force us to do so". But "humans can't solve these problems, but if an AI forces us to adjust some of our democratic institutions a bit, suddenly all the problems will go away"...? I don't get it.
Here are the six fundamental problems of climate change:
- humans weight the values of immediate costs and benefits more highly than those of temporally remote costs and benefits (i.e. it's irrationally difficult to accept a cost today even for a benefit tomorrow)
- humans weight losses much higher than gains, and weight gains much higher than the deprivation of gains or the amelioration of loss (i.e. people are irrationally opposed to declines in standards of living, but also irrationally uninterested in reducing the scale of a decline in standards of living; this means that any proposal along the lines of "accept a concrete loss now in order to reduce the scale of loss tomorrow" runs directly against the human grain, even allowing for the rate of future discounting)
- humans weight the value of costs and benefits that are personally close to them more highly than those affecting people remote from them (i.e. it's hard to get them to accept costs to themselves and their friends and family in order to produce benefits for strangers they will never meet, in faraway countries about which they know little)
- humans assess probability irrationally, including by overestimating both moderately large and extremely small probabilities. For instance, in the 2016 election people overestimated the odds of both Clinton (large but not near-certain) and Stein (virtually zero), while underestimating the odds of Trump (small but significant). Similarly, people overestimate the wild improbabilities on which climate change would just solve itself, overestimate major threats that are not certain (that we won't be able to do anything about it), and underestimate small but significant probabilities (that we can fix the problem). [They also overestimate the catastrophic worst-case scenarios.]
- humans weight risks asymmetrically. They tend to be suboptimally (on average) risk averse. That is, it takes a lot of potential gain to outweigh a known loss, and a lot of potential loss to outweigh a known gain. Any definite policy to combat climate change is known, but its benefits are only potential.
- the logic of single-play and short-run games often encourages defection. Climate change is one such example - specifically, a many-player public goods game with free-rider incentives (see the sketch after this list). Nobody wants to be the one bearing the burden (particularly because one country sacrificing probably won't have any effect anyway) and everybody wants to be the free rider (because if everyone else sacrifices, you get the benefit even if you don't sacrifice). (i.e. it's extremely hard to organise different people to act for the collective good through co-ordinated action, even when everybody agrees on the desired actions)
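To make the free-rider logic in that last item concrete, here is a minimal sketch of it as a ten-player public goods game. Every number in it (the player count, the endowment, the multiplier) is an illustrative assumption of mine, not anything from the argument itself; only the structure of the game matters.

[code]
# Public goods game (Python): each player either keeps their endowment or
# contributes it; contributions are multiplied and shared equally by ALL.
N = 10          # players (think: countries)
ENDOWMENT = 10  # each player's resources
MULTIPLIER = 3  # the pot grows, but note MULTIPLIER / N < 1

def payoff(my_contribution, others_contributions):
    """What I keep, plus my equal share of the multiplied pot."""
    pot = (my_contribution + sum(others_contributions)) * MULTIPLIER
    return ENDOWMENT - my_contribution + pot / N

others_cooperate = [ENDOWMENT] * (N - 1)
others_defect = [0] * (N - 1)

# Whatever the others do, contributing nothing pays better individually:
print(payoff(0, others_cooperate))          # 37.0 - free-ride on the rest
print(payoff(ENDOWMENT, others_cooperate))  # 30.0 - cooperate with them
print(payoff(0, others_defect))             # 10.0 - mutual defection
print(payoff(ENDOWMENT, others_defect))     #  3.0 - the lone sacrificer
[/code]

Because each unit contributed returns only MULTIPLIER / N = 0.3 to its contributor, defection dominates for every player individually, even though universal cooperation (30.0 each) beats universal defection (10.0 each). That is exactly the co-ordination problem the last point describes.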
There are, of course, other more superficial problems (like lack of awareness of the issue), but these six are fundamental. Four of them are fundamental human irrationalities, one of them is a human idiosyncrasy that results in, on average, suboptimal outcomes*, and one of them is a logical result in game theory. None of these problems will ever go away under any democratic, or indeed any human-controlled, decision-making regime. This could be an argument for a robot dictator. But it's no argument at all for a robot lawgiver who then lets humans make the actual decisions. Because the root causes of bad decisions will still remain.
*our risk weighting is on average suboptimal, but it can't be called irrational, because the value of risk is incommensurable with the value of utility, and so any attempt to assess the value of a given risk/utility portfolio requires arbitrary price setting for risk. For instance, a conservative portfolio (lower average reward but lower risk of catastrophe) cannot objectively be said to be rationally inferior to an aggressive portfolio (higher average reward but higher risk of catastrophe), or vice versa.
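To put toy numbers on that footnote (the portfolios and prices below are hypothetical, mine rather than the poster's): ranking a conservative portfolio against an aggressive one requires first choosing a price for catastrophe risk, and nothing in the portfolios themselves fixes that price.

[code]
# Two hypothetical portfolios: higher average reward vs. lower catastrophe risk.
portfolios = {
    "conservative": {"expected_reward": 5.0, "p_catastrophe": 0.01},
    "aggressive":   {"expected_reward": 8.0, "p_catastrophe": 0.10},
}

def score(p, risk_price):
    """Net value once catastrophe risk is charged at an (arbitrary) price."""
    return p["expected_reward"] - risk_price * p["p_catastrophe"]

for risk_price in (10, 50, 100):
    winner = max(portfolios, key=lambda name: score(portfolios[name], risk_price))
    print(f"risk priced at {risk_price:>3}: the {winner} portfolio ranks higher")

# risk priced at  10: the aggressive portfolio ranks higher
# risk priced at  50: the conservative portfolio ranks higher
# risk priced at 100: the conservative portfolio ranks higher
[/code]

The crossover sits at a risk price of 3/0.09 ≈ 33.3, a number supplied by the evaluator rather than by the portfolios. Neither ranking is a mistake; each follows from a different, equally arbitrary price for risk, which is the incommensurability the footnote points at.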
Salmoneus wrote:No, we don't. We normally give our reasons, rather than our decision procedure. If someone says "why did you drop that bit of metal?" and you say "because it was really hot", that's a reason, but it's not a decision procedure.
Yes, we do. The inference is that the heat was burning your fingers, and it is known that humans instinctively avoid that.
...no, we don't. You *may* infer that the heat was burning my fingers, and that I avoid burning my fingers, and that I assessed that letting go of the hot object was the best way to avoid the heat, and that that's why I dropped the hot object. You may infer that. But *I did not* give that as a reason. I may not even know that, for instance, humans instinctively avoid burning. So that instinct may be an *explanation*, but it need not be my *reason* (it is, of course, *a* reason for dropping a hot object. But it is not necessarily *my* reason. For instance, if I'm immune to pain, I may consciously be aware that I should drop the hot object to avoid injury, but not have any instinctive impulse to do so). And even when it is my reason, it is not necessarily the reason *I give*. But in any case, what you suggest is not a decision procedure. At most, you could say that a reason was a combination of a given decision procedure *and* a given data input, although even that would be controversial. [if I have the procedure "if touching something hot, let go", that's not a reason to let go of anything unless we also have the data input 'this is something hot'. So the maxim itself is not inherently a reason unless it is *categorical*. Cf. Hume and Kant.]
Salmoneus wrote:No, it's not. I mean, you could have an algorithmic decision procedure, but I don't think many people would advocate that even in theory, and certainly we don't do that in practice.
Yes, it is. That is indeed what everyone does all the time, with the proviso I explain below.
The fact that you can model something *as if* it followed some algorithmic procedure does not mean that it *does*.
Salmoneus wrote:No, they're not. This is a category error. Even if you believe decisions are made algorithmically, which they're not, reasons are the input, not the process. You can't "run" a reason - perhaps you're confusing 'a reason' with 'reasoning'?
Yes, they are. There is no category error. Decisions are made through an analog, distributed, synthetic-biology-type system, but that is irrelevant. I only care about the level of representation. It is because you do not understand this that you think there is a category error where there is none. The problem is that you think I'm attacking this question as an analytic philosopher and trying to articulate what things are in ways that can be challenged only by phrasing things in a convoluted manner. But I'm attacking the problem as a computer scientist, and in our discipline, we don't care what happens beyond the level of representation. What things are is an interesting question, but one we abstract away in our solutions. This always produces category errors when taken literally, but that doesn't matter even a little bit at the level of algorithmic analysis. It is common practice among the sciences to carve out their niches of mutual irrelevance in this way.

It is an interesting question how this is possible in computer science in particular, especially if you believe in philosophical materialism, but analytic philosophers don't have an answer to this question. We just know that it works by induction, for some reason. We don't even know how it is possible to characterize the "space of algorithms" per se, but again that's a different question. Even speaking as an analytic philosopher, not all analytic philosophers agree with the position you have adopted to attack mine.
It "matters" because if you use words in an inconsistent way, the sentences you form are incoherent.
You may well know how to program a computer. But you are advancing theories in psychology, philosophy, and political theory, and you clearly do not know what you are talking about; being able to program a computer does not give you omniscience.
Salmoneus wrote:Quick question: are you God?
This whole train of reasoning is irrelevant, not because I'm putting you down, but because the algorithm my AI is running is based on fulfilling people's own wishes. Therefore, I'm not imposing my particular overarching vision on others. I'm only seeking to raise the baseline of wish fulfillment. I think I can get all non-monsters to agree with this aim. So to answer your question: No, I'm not God, but I know Satan when I see him.
Well, I've never been "Satan" before, but I'll take that as a compliment. However, what you are doing here is your usual goalpost shifting - you say one thing, and then back away as soon as challenged, pretending to have said something else entirely.
You say "I'm not imposing my particular overarching vision", but you also say "I prefer democracy for a reason. What if I could have an AI which ran
that reason as its algorithm?" and "how could that AI be totalitarianism? Non-totalitarianism is part of the algorithm
I use to pick democracy." These things are obviously incompatible. You cannot impose rule by an AI that is programmed to follow
your priorities and overarching vision, and then say you're not imposing your overarching vision.
More particularly, whether your AI is popular or not is *irrelevant* to the purely logical point I made. Not a political point, but a logical point, which you have ignored. Your argument is logically flawed. It is fallacious.
Let's recap: you say "How could democracy possibly be superior to an AI whose raison d'etre is the criterion by which democracy is superior to other forms of government?". Well, the answer is, "because you defined the AI as operating by the criterion that *led you to think* that democracy was superior, which, unless you're God, may not be the reason that democracy *is actually* superior." I'm sure you can understand this fallacy:
- I think democracy is good because it maximises X
- I build my AI to maximise X
- therefore my AI cannot be worse than democracy
This ONLY WORKS if you assume that "the reason I think democracy is good" and "the reason democracy is good" are identical. But, obviously, unless you are God, they are not necessarily the same. A big part of why people support democracy is precisely the recognition that what I perceive as good personally may not be what is actually good.
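For what it's worth, the hidden premise can be made explicit in one line of doxastic logic (my formalisation, not anything from the thread). Writing B_me(p) for "I believe that p", with p = "democracy is good because it maximises X", the three-step argument quietly needs:

[code]
% The suppressed premise, in doxastic-logic notation:
%   B_me(p) reads "I believe that p".
\[
  B_{\mathrm{me}}(p) \;\vdash\; p
\]
% Invalid in general: belief does not entail truth,
% unless B_me is infallible, i.e. you are God.
[/code]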
This is a strictly logical point about the validity of your reasoning. Complaining about me being Satan, or shifting the discussion to whether your AI 'fulfills people's wishes', does not address this glaring fallacy.
Salmoneus wrote:It should also be pointed out that there seems to be a confusion in your basic idea. If an overman, using your terminology, "ran" your reason for democracy, then the result would by definition just be "democracy". It wouldn't be "lower the voting age to 16" or "raise taxes on whiskey". An Overman that is expected to make real moral decisions must have an entire moral framework, not just "democracy is good", or even "democracy is good because X".
This is not true because I don't support democracy just because democracy is democratic. Nor, I claim, do most democrats. For example, many people are of the opinion that democracy is the least bad form of government. This would make no sense if they wanted democracy for the sake of democracy. It follows that people want democracy for some other reason. My AI is intended to optimize for that reason.
No, obviously that does not follow. What people imply by saying "democracy is the least bad system" is that democracy has vices; that does not mean that it does not also have virtues. Most people would agree that self-government and freedom from tyranny are two of the goods that democracy seeks to maximise. People disagree, of course, as to how those goods should be weighed against other goods, like freedom from terrorist atrocities, but most people would agree that these goods - which are intrinsic, not instrumental - are goods.
In particular, people say they want to live in democracies because that is the best way to fulfill their dreams. My AI only seeks to raise the baseline of wish fulfillment among humans.
Maybe people say that where you live. To me, that seems an extraordinarily right-wing sentiment to have. Many people support democracy *even though* they personally might benefit from, say, a military dictatorship. The military, for example.
But of course, the very fact that we are having this discussion shows that your views on what is valuable in a political system are not universal, so any attempt to impose them on everybody else, with or without a computer program, is oppressive.
Salmoneus wrote:By coincidence, I have a non-totalitarian invisible pink unicorn here. It really exists! I know, because existence is part of the algorithm I used to choose which invisible pink unicorn to have. So this one definitely totally exists.
This is totally not condescending in any way! I thank you muchly for the most respectful conversation I've had in years!
Dude, you're relying on the ontological argument. Catch up to the last 600 years of reasoning and people may stop treating you like a child. Your argument was *utterly ridiculous*. And I demonstrated precisely why - but, as always, you're more interested in claiming offence than in actually addressing your irrational thinking. This is why most of us generally don't engage with your posts anymore.
Salmoneus wrote:Certainly any dictator who could, for example, impose energy use rationing on every family on earth would be totalitarian!
Any democratic government that seeks to redistribute resources is totalitarian!
No, it isn't. However, any democratic government, or in this case non-democratic government, that considers itself to be fully sovereign and has the wherewithal to exercise that sovereignty - that is, a government that is not subject to constitutional limitations - is by definition totalitarian in constitution, even if it is restrained in practice.
We should totally return to traditional religions like Catholicism and Islam that never impose forms of social organization that seek to regulate our lives in any way!
As I'm both non-religious and politically secular, I find this strawman bizarre, other than as a demonstration of your own continued strange obsession.
As I keep telling you every time, the AI will not seek to regulate people who do not want to be regulated by it except in cases of dire need.
While at the same time bringing about a global revolution and rationing how much I may use my lightbulbs. This is disingenuous.
I think I'll leave this here. As always, it is clear no headway can be made, because you are not willing to settle down to seriously discuss any point in clear terms.