Epistemic Status: likely, except where otherwise noted
This post was originally called “Propositional Uncertainty as an Epistemically Useful Type of Logical Uncertainty”, but then I realized how much that sounded like snarXiv-esque word salad with a decision-theoretic bent. I’m better than that, I hope.
In the Friendly AI literature, there’s this concept known as an ‘ontological crisis’. Simply put, it’s a situation where your model of reality blows up in your face, and since your value system is probably intricately hooked into your model (you aren’t a wirehead, are you?), things don’t look pretty. You’re faced with the task of reconstructing a utility function now that your old one has been thrown out with the bathwater.
The canonical example of this is the loss of faith that accompanies realizing God doesn’t exist.
What people don’t know is that ontological micro-crises (ontologic stress?) are far more ubiquitous than the flashy loss of faith many educated people eventually face. I think we can build a far more interesting model by investigating some phenomena superficially distinct from (but deeply related to) the ontological crisis.
And inward we go.
I mentioned it earlier, so let’s start with that example of God’s existence, albeit with a different objective in mind. I’ll assume that since you’re reading this, the overwhelming majority of you will agree that God probably doesn’t exist. But like good Bayesians, we acknowledge the nonzero possibility.
But, of course, like good Bayesians, we should be able to give a ballpark estimate of the subjective probability of God’s existence, and it cannot be 0 or 1.
I myself would bet most atheists would pick some uproariously small number, perhaps as a consequence of the contradiction with science or logic, or perhaps as a signal of tribe membership.
Two posts by mtraven are relevant here. The excerpt of interest in the first is:
Actually, art and magic are pretty much synonymous. …The central art of enchantment is weaving a web of words around somebody… When that enchantment is the creation of gods and the creation of mythology, or the kind in the practice of magic, what I believe one is essentially doing is creating metafictions. It’s creating fictions that are so complex and so self-referential that for all practical intents and purposes they almost seem to be alive. That would be one of my definitions of what a god might be. …It is a concept that has become so complex, sophisticated, and so self-referential that it appears to be aware of itself….If gods and entities are conceptual creatures, which I believe they are self-evidently, then the concept of a god is a god.
[emphasis present in the article, mtraven may or may not have added it himself]
The excerpt of interest in the second is:
I believe that “God” is a coherent idea (or meme if you will), as it seems to be, since both theists and atheists seem to have a rough agreement about what they are talking about, and just disagree on its ontological status. It’s not coherent in the philosophical sense (as Carroll shows), but coherent in the sense that its a stable idea, a mind-virus that thrives in the environment of human culture. Is it an idea like “Harry Potter”, that is, purely fictional and arbitrary? Or is it more like a mathematical idea, like pi or the Pythagorean theorem, immaterial objects that seem to have a real existence outside of human culture and invention? Harry Potter is likely to be forgotten in a thousand years (well, maybe not…) but God is likely to stay, despite the best efforts of people like Dawkins.
My intuition, which I can’t yet articulate, is that there is something about the concept of God that can almost be captured in a formal mathematical way, something that makes it a necessary concept of minds that are conscious and have agency. God’s ontological status is somewhere between Harry Potter (wholly fictional and arbitrary) and pi (an apparently inevitable aspect of some deep structure of reality). Or so my flailing intuition tells me.
I’d also throw that story about Atheist Shoes into the mix. If you haven’t heard, the scoop is that packages labelled ‘atheist’ are something like 10 times more likely to be mishandled or damaged. The most common explanation attributes the effect to an aggregation of unconscious behavior across everyone who handles the package.
That last sentence was likely obscure, so here’s a better explanation. Remember priming? This is just priming, on a greater, distributed scale. No one needs to be blamed for the 10-times-higher rate of mishandling, since no one person did all of it; it’s just an aggregation of the unconscious behaviors of many people.
Especially with the controversy around the replication crisis, there is no certainty that the effect is real, but we don’t need certainty for our purposes. By combining it with the sentiments of mtraven and the Atheist Shoes case, we can outline a plausible apologetic for the existence of God. Which is:
When instantiated in the form of churches and cross necklaces and social rituals, God literally and figuratively blesses people with marginally more positive affect and actions, and can statistically smite heathens via compounding enough unconscious primes.
Here comes the fun part. Think about the atheist who answers ‘one in a quadrillion’ or ‘1/3^^^3’ to the probability of God existing. All it took was some basic experimental psychology and extrapolation to argue for something they considered literally inconceivable.
Sometimes — actually, most of the time — with insights, someone gets there before you do. That someone is the ever-impressive Scott Alexander, who coined the term ‘epistemic learned helplessness’ for a very similar phenomenon. My ontologic stress is when two sources of knowledge conflict. His epistemic helplessness is the logical endpoint of continuous ontologic stress.
Being an avid reader, I am plagued by realizations like the one from section II, and at this point I’ve transitioned away from assigning probability solely on a constative right/wrong dichotomy. I now assign chances to a proposition being in three distinct situations: it’s right enough; it’s wrong enough; it’s not even right enough to be wrong.
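A minimal sketch of what this bookkeeping might look like, in Python. The status names and the numbers are mine, purely illustrative; this is not a real formalism, just the trichotomy made concrete:

```python
# Trichotomic credence: instead of a single probability P(A),
# keep a distribution over three statuses of the proposition.

def trichotomic(right, wrong, not_even_wrong):
    """Normalize three non-negative masses into a credence triple."""
    total = right + wrong + not_even_wrong
    return {
        "right enough": right / total,
        "wrong enough": wrong / total,
        "not even wrong": not_even_wrong / total,
    }

# 'Does God exist?' as the dichotomist atheist scores it:
dichotomist = trichotomic(right=1, wrong=10**15, not_even_wrong=0)

# The same question after the mtraven-style argument: much of the
# mass migrates to the 'not even wrong' escape clause.
trichotomist = trichotomic(right=1, wrong=70, not_even_wrong=29)
```

The point of the third bucket is exactly the escape clause discussed later: mass that neither assertion deserves has somewhere to live.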
This neatly makes you antifragile against outside-the-box arguments like the one above, as well as inoculating your probability distribution against most forms of ontological stress.
In Antifragile, Taleb describes how theories are fragile and phenomenology is robust. This is important. Intensional definitions are based on theory, and commit the existence of the defined object to the details of the definition. If you define fire as ‘the release of phlogiston’, you have now ontologically committed fire to the correctness of phlogiston theory (correctness which, incidentally, it does not have).
Intensionality is fragile, extensionality is robust. Necessary-and-sufficient criteria are fragile, machine learning is robust.
(I have to wonder if there is a completion to this. Is there an equivalent to either of the two pairs above which is antifragile?)
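To make the contrast concrete, here is a toy sketch (the feature names and the resemblance threshold are invented for illustration). The intensional classifier ties its verdict to phlogiston theory; the extensional one only consults known examples, in the crude spirit of a nearest-neighbour learner:

```python
# Intensional: membership is committed to a theory's details.
# If phlogiston theory falls, so does the classifier.
def is_fire_intensional(event):
    return event.get("releases_phlogiston", False)

# Extensional: membership by resemblance to known examples.
KNOWN_FIRES = [{"hot": True, "emits_light": True, "oxidizing": True}]

def is_fire_extensional(event, threshold=2):
    best = max(sum(event.get(k) == v for k, v in ex.items())
               for ex in KNOWN_FIRES)
    return best >= threshold

campfire = {"hot": True, "emits_light": True, "oxidizing": True}
print(is_fire_intensional(campfire))  # False: the theory failed, not the fire
print(is_fire_extensional(campfire))  # True: resemblance survives the theory
```

The intensional version gives the wrong answer for an actual fire, because its verdict was hostage to a false theory; the extensional version degrades gracefully.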
There’s a more unsettling point associated with this notion as well. Preference solipsism is robust. Every single metaethical theory, from deontology to consequentialism to contractualism, has problems. Deontology’s rules couldn’t stand up to a determined and evil rules lawyer. Consequentialism has a laundry list of problems, such as the repugnant conclusion, utility monsters, and Pascal’s Muggings, and that’s not even getting into the informal criticisms. Contractualism deserves a post of its own, but it only works when you have agents of similar levels of power capable of playing a non-zero-sum game.
The key point here is the one Scott labored over in his Consequentialism FAQ: that morality must live in the world. And, unfortunately, that’s not quite accurate. Morality must live in the model, which is well known to dissolve upon contact with the mind-shattering dimensions of real things.
To tie this back into my thesis about ontological stress: this is the reason it’s such a significant phenomenon. As I alluded to earlier, when your morality is built into the architecture of your perceived reality, it’s quite difficult to avoid throwing out the baby with the bathwater, and perhaps harder still to keep the baby while saving zir metaphorical toys as well.
I was, am, and will always be a Devil’s advocate, a kind of intellectual hipster. It’s said that arguments are negative-sum games, in some cases a pain auction. For whatever reason, I’m weird: I like arguing, with or without bombastic shouts and people getting red in the face. It’s far less fun on forums, with asynchronous messaging. But in real life or IM? Sign me up.
But for most, it’s negative sum: every sarcastic barb and vicious remark deals an emotional blow to your interlocutor. But you’d better believe she’s getting you back for every single one of them. A more common term for this is a lose-lose situation. You can’t possibly win.
(This does throw up the question of how someone can be a ‘skilled arguer’, which is oxymoronic if the game is negative sum. Take it from a skilled arguer: it’s not negative sum for us; we enjoy it.)
The funniest thing about me, though, is that I can effectively argue for any position without actually being some crazy post-modernist or sophist. Take me to a theist, and I’ll give them the usual atheist dialogue, perhaps a bit on the sophisticated side. Take me to an atheist, and I’ll give them the same spiel I gave you, or maybe just a Righteous Mind-inspired agnostic defense. Add similar considerations for left/right partisans, etc.
They say if you can effectively argue for anything, you have no knowledge. This is a bit more nuanced. As in the title, and the alternative title, propositional uncertainty isn’t dichotomic, it’s trichotomic at least. “Right”, “wrong”, “not even wrong”.
It’s hard to articulate, but I find hard-wiring an escape clause into your model, a little cubby hole labelled ‘something happened’, endlessly amusing.
I think most things are ‘not even wrong’. If you give me a fragile, propositional statement of belief, a larger portion of my probability mass will be located in the ‘not even wrong’ camp than you might guess, given that the notion probably looks like a worst-case error-correction mechanism at first blush.
I think a great many of us were that petulant child who, when asked to do something, interpreted their orders in the most convenient possible light (asked to “pick up your sock”, they pick it up, then put it back down again). This is an echo of model theory: every diagonalizable mathematical system has an infinity of models, delineated by its undecidable theorems. English is one such system, where even the most unflinchingly precise statement can be interpreted in an unfavorable light.
This is a bit like the hidden complexity of wishes, where any wish is equivalent to the entire human morality. But more general. Propositions that seem to have overwhelming evidence in their favor can be false because a subset of the propositions reductively equivalent to the negation happens to be true. My canonical example, as I’ve said so many times in this post, is my argument in favor of God’s ‘existence’.
Confusion means something doesn’t add up. So what is it here? I’d wager it’s this: the propositions on display are shibboleths, litmus tests. A is the class of models vaguely asserting A. ~A is the class of models vaguely asserting ~A. But every assertion implies a decrease in the number of applicable models, because not every statement is well-formed in the language of certain models. For instance, in our model of reality ‘gravity downed’ is not well-formed; it doesn’t even make sense.
Saying ‘the evidence favors God’s nonexistence’ is saying that the least complex hypothesis in the A class is wrong, while being ambivalent about which model in the ~A class is right. My argument above does not contradict this; it simply implies that a model which sits ambiguously within both A & ~A is right.
Which plays back into my thesis. Propositions aren’t discrete statements; they are vague clusters in modelspace, where several models varying greatly in accuracy and complexity are situated. A is a cluster, ~A is a cluster. These clusters can intersect, and points in the space can be ambiguously within both of them or within none of them. It is interesting that dissolving ontological uncertainty gives us something like catuṣkoṭi logic.
And, as is said on the wiki page, De Morgan’s Laws mean the ‘both A & ~A’ and ‘neither A nor ~A’ corners are classically equivalent: either it’s right, it’s wrong, or it’s not even wrong.
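The modelspace picture above can be sketched in a few lines. The model names here are invented for illustration; the point is only that treating a proposition and its negation as sets of models yields exactly three verdicts, with the ‘both’ and ‘neither’ corners of the catuṣkoṭi collapsing into ‘not even wrong’:

```python
# Toy modelspace: propositions as clusters (sets) of models.

def classify(model, A, not_A):
    """Place a model into one of the three verdicts."""
    in_A, in_not_A = model in A, model in not_A
    if in_A and not in_not_A:
        return "right"
    if in_not_A and not in_A:
        return "wrong"
    # The 'both' and 'neither' corners collapse into one class:
    return "not even wrong"

A = {"classical theism", "memetic god"}      # models vaguely asserting A
not_A = {"naive atheism", "memetic god"}     # models vaguely asserting ~A

print(classify("classical theism", A, not_A))  # right
print(classify("naive atheism", A, not_A))     # wrong
print(classify("memetic god", A, not_A))       # not even wrong (both)
print(classify("gravity downed", A, not_A))    # not even wrong (neither)
```

The ‘memetic god’ model, sitting in both clusters, is precisely the kind of point my apologetic in section II occupies; ‘gravity downed’ sits in neither, being ill-formed.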