Martin Heidegger was a Nazi. This can never be forgotten, any more than it can be forgotten that Jefferson owned and sired slaves, or that Nietzsche vilified women, or that Lindbergh was an anti-Semite. We need to redefine greatness: there is no such thing as a “great man” or “great woman,” only people—morally ambiguous beings who occasionally do great things. The bad with the good, in philosophy as well as in philosophers: they were all mongers of ousia, but their mongering was often fascinating and now constitutes much of your intellectual spinal cord. The special problem with Heidegger is that his Nazism was, he claimed, a philosophical position—he himself joined his evil to his greatness. The joint is disturbing: what good is philosophy if it doesn’t teach you that Nazism is evil?
The connection is also suspect. Heidegger’s thought, which he himself clearly did not understand very well, is not unified enough for all aspects of it to be automatically classed as Nazi. Consider, in particular, Heidegger’s notion of the “totality-of-significance” (Bewandtnisganzheit), the context which enables the use of a tool. If the tool is a pen, its totality-of-significance includes physical things like paper and light source, social things like the technology of eventual publication or private delivery, cultural things such as the genre in which one writes, and personal things such as what one thinks one has to say. Totalities-of-significance concern practices of tool-using, and his emphasis on them joins Heidegger, not to Nazism, but to pragmatism.
It is testimony to the hold atemporality has on philosophical minds that so many readers of Heidegger think that a totality of significance must be all there at once, arrayed (if “non-thematically”) around the tool at the outset of its use. In fact, a moment’s reflection shows that even the most stable physical components of such a totality have to be encountered in a certain order. I need to secure paper and a light source before I pick up the pen, but I don’t yet have to know what I am going to say (and usually don’t, in any detail). Totalities of significance unfold step by step, which is why I call them “scripts.”
One important function of such scripts is to direct awareness: it is the script I am engaged in, and in executing which I have arrived at a certain point or “line,” that selects for me, from all the sensory messages that I am receiving at a given moment, the ones I attend to. This selection has the feel of something surging up before me, of its unwilled movement to the center of my awareness; and since it is governed by the script, it is a surging-up-as. The same thing (in physical terms) can surge up differently if I am engaged in a different script.
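To see what this comes to, here is a toy sketch of my own, in Python; nothing in Heidegger, and nothing in the AI literature discussed below, commits anyone to these particular names or structures. It models a script as an ordered sequence of steps, each of which selects which incoming messages are salient at the point one has reached, so that the same messages surge up differently under different scripts.

```python
# Illustrative sketch only: a "script" as an ordered sequence of steps,
# each of which selects, from the incoming sensory messages, the ones
# that matter at that point. Same input, different script, different
# "surging-up-as". All names here are invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    # Predicate deciding whether a given sensory message is salient
    # at this point in the script.
    salient: Callable[[str], bool]

@dataclass
class Script:
    name: str
    steps: list[Step]
    position: int = 0  # the "line" I have reached in the script

    def attend(self, sensory_messages: list[str]) -> list[str]:
        """Return only the messages that surge up at the current step."""
        step = self.steps[self.position]
        return [m for m in sensory_messages if step.salient(m)]

# The same (physical) messages surge up differently under two scripts.
writing = Script("writing a letter", [
    Step("secure paper", lambda m: "paper" in m),
    Step("secure light", lambda m: "lamp" in m or "window" in m),
    Step("pick up pen", lambda m: "pen" in m),
])

tidying = Script("tidying the desk", [
    Step("clear clutter", lambda m: "pen" in m or "paper" in m),
])

messages = ["pen on the desk", "lamp is off", "stack of paper", "phone buzzing"]
print(writing.attend(messages))   # ['stack of paper']
print(tidying.attend(messages))   # ['pen on the desk', 'stack of paper']
```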
Scripts were for a while major concerns of artificial intelligence (see Schank and Abelson, Scripts, Plans, Goals, and Understanding, 1977). They dropped out of centrality because of the field’s obsession with getting computers to perform them. Heideggerian scripts are, more realistically, riddled with holes and ambiguities, which give them the flexibility to be performed on differing occasions. Computers, at least the ones available to Schank and Abelson, are not good with holes and ambiguities, and attempts to program even simple behaviors into computers recurrently collapsed under the weight of exceptions and spur-of-the-moment modifications. So much the worse for programming.
In human beings, the holes and ambiguities in scripts are mastered through the scripts’ goal-directed character, and this helps explain why computers can’t run them. The person executing a script does so to achieve some goal, and any part of the script needs to be executed only well and completely enough to realize that goal. But deciding when a phase has been executed well enough is a judgment call which computers are, so far, unable to make. Their tendency is to execute each phase of a script perfectly, and this requires that it be written perfectly—something humans, so far, are unable to do. Hubert Dreyfus thus finds himself reversed[1]: in this case, getting computers to mimic human intelligence fails because of what humans, not computers, can’t do.
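The contrast can also be sketched in code, again only as my own illustration and not as anyone’s actual program; the good_enough judgment below is a placeholder for precisely the call that, on this argument, no one knows how to program.

```python
# Illustrative contrast, not an actual AI system: a rigid executor runs
# every phase of the script to completion and breaks on any unforeseen
# exception; a goal-directed executor runs each phase only "well and
# completely enough" to reach the goal, tolerating holes and ambiguities.
# The good_enough() judgment is the part a human supplies.

from typing import Callable

Phase = Callable[[], None]

def rigid_execute(phases: list[Phase]) -> None:
    """Run every phase fully; any gap in the script is fatal."""
    for phase in phases:
        phase()  # an unhandled exception here collapses the whole script

def goal_directed_execute(
    phases: list[Phase],
    goal_reached: Callable[[], bool],
    good_enough: Callable[[Phase], bool],
) -> None:
    """Run phases only as far as the goal demands."""
    for phase in phases:
        if goal_reached():
            return  # stop early: the rest of the script need not run
        try:
            phase()
        except Exception:
            # A hole in the script: improvise or skip, so long as the
            # phase has been done well enough for the goal at hand.
            if not good_enough(phase):
                raise
```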
Because scripts guide behavior, philosophers have mostly ignored them; they have historically viewed the mind as a device for looking at things (“representational mind”), not for acting. But pragmatism is not the only exception to this: Heidegger’s account of the worldhood of the world, which is built on the notion of scripts, is basically a Teutonification of Aristotle’s pragmatically oriented Nicomachean Ethics I.1. And what are Kant’s categories but guides for encountering any object whatever?
Kant’s treatment reveals the limits of traditional philosophy, however, for he insisted that the validity of the categories’ operation extended to all experiences. This meant that the categories could not be acquired from experience, because then they would not apply to the experience of their acquisition. So they had to be a priori, independent of all experience, and that meant independent of the intuitive form of all experience, time—landing Kant back in traditional philosophy. Heidegger, by contrast, talks about relative a prioris: contexts of significance acquired independently of my current experience, which they help to shape, but not independently of experience altogether: they are learned, i.e. they come from previous experiences.
[1] Hubert Dreyfus, What Computers Can’t Do (New York: Harper & Row, 1972).