...suppose, for a moment, that we're playing billiards. I hit the cue ball, which hits the 11, which sinks the 6.
Which ball caused the 6 to sink? The cue ball, or the 11?
In morality, the answer is that the 11 ball caused the 6 ball to sink. Why? Didn't the cue ball cause the 11 to cause the 6 to sink?
Well, that depends. Do you think a woman is responsible for being raped if she flirted with her rapist, who happens to be an individual who rapes anybody who flirts with him? Is an anti-slavery activist responsible for his own lynching if he knows that he'll be lynched for saying the things that he's saying?
If you treat behavior as deterministic, and you treat moral responsibility as fully transitive, weird shit starts falling out of your moral systems.
In my response to Scott Alexander, and his response to me, we have a disagreement over moral behavior; he argues that free will is meaningless, and therefore we're all responsible for making the universe a better place, because we all share responsibility for every bad thing that happens in it (loosely speaking). If you accept that behavior is deterministic, and that moral responsibility is fully transitive, he is of course fully correct. But his argument becomes completely absurd in such a case, because then I have no moral responsibility for disagreeing with him; my environment made me disagree with him. Indeed, nobody is in any sense responsible for actually changing anything to make the universe a better place; moral culpability for everyone's actions lies in the formation of the universe. But he's correct even while he's being absurd.
When a moral theory starts suggesting absurd things, when it starts suggesting things at odds with what we understand to be moral, it's probably not correct, by which I mean it is an inaccurate description of human morality. The object of moral theory, after all, is not to invent morality (we already have it) but to define it, in something like the manner Newton defined, but did not invent, gravity.
What do I mean we already have morality?
I'd have to refer the reader to a -host- of material on the subject. The Less Wrong community refers to the idea as Egan's Law: it all adds up to normality. There's disagreement, of course. But consider this: There's nothing wrong with the following moral axiom: "Planet Earth should be destroyed and humanity eradicated." It doesn't contradict itself. It's kind of... pointless, for a moral theory, and rather short. You can form a moral theory based on this axiom alone. Is that theory correct? What basis can you use to say it isn't, except that it contradicts your notions of right and wrong? Those notions aren't valid in such a moral theory; they're not part of its logic. You don't get to import your own moral theory to prove this one wrong; that's not how logic works.
Unless, of course, morality isn't about inventing right and wrong, but discovering preexisting principles. Where did those principles come from? Evolution or God or social upbringing (which is just another form of evolution) or any number of other ideas. It doesn't actually matter. They're there. They're the reason a moral philosophy like the one above is repulsive to all but a handful of human beings.
And what's this "excluded middle" thing I reference in the title?
It's a possible solution for the problem. Either we're morally culpable, or we aren't. There's no transitivity; no partial culpability shared with all other causal agents. (In a sense we can be partially culpable, but only as a result of considering complex outcomes, where blame is assigned across multiple units of culpability; for each atomic unit of decision-making within those outcomes, we're either fully culpable or we aren't. An actor is responsible for pulling a trigger without checking the gun; somebody else is responsible for replacing the stage blanks with real bullets; and so on.)
There's another solution which salvages Scott Alexander's moral system, at the cost of his argument: Morality is -partially- transitive. But in order for morality to be partially transitive, we have to have free will (we have to be capable of making decisions for which we can be morally culpable), which pretty much costs him the point he was trying to make.
(Ultimately his point was silly from the get-go for other reasons, but this is a more comprehensive rejection than "Your argument only makes sense if everybody else is a P-Zombie.")