1. The Fundamental Nature and Ontology of the Mind

Let’s start off with something you’ll have heard many times before: that your experience of the world is self-created. You can know that you are experiencing sensations and interpreting them as a tree or a house or a chicken, but you have no way of knowing what the things you’re seeing and hearing and touching actually are, or indeed whether the sights or sounds or feelings correspond to anything outside of you at all. The experience of a chicken is an interpretation imposed on a field of sense data.

Introductory cognitive science classes like to demonstrate this fact by drawing attention to the ways that interpretations can change or break. Simple optical illusions like the Necker Cube provide very mild examples of the way a set of sensations can be multiply interpretable. The examination of processing disorders like aphasia can tell us interesting things about how our conceptual constructions are composed by showing us what happens when different parts of a person’s ability to perceive or interpret sensations break down. You may have encountered a TED talk at some point by Jill Bolte Taylor, a neuroscientist who suffered a stroke and described the experience of losing her ability to interpret her sensations. For Taylor, this was something of a religious experience. This makes sense—many of the forms of enlightenment that meditation traditions pursue include (or just are) the realization that the interpretations are imposed on the sensations, rather than the world actually, objectively being the way the person interprets it. “Maya”—the Hindu concept of Illusion—is the interpretation mistaken for reality. The basic ideas of the constructed nature of experience and the unknowability of what lies beyond it are ancient. Any good philosopher takes as their base epistemic position the fundamental inability to know that anything exists outside of their experience.

The framework I use in my practice makes distinctions between a few fundamental aspects of mind that provide me with a basis for modeling what is constructed in experience and how. Let’s run through them:

The awareness is just experience. You could theoretically have awareness with no sensations, which might be similar to what people imagine the experience of nothing to be—though this would not actually be the experience of nothing, because the awareness would experience itself, and the awareness itself is not nothing. You can’t actually experience nothing. Awareness is not constructed by the mind; it is there whether the mind is constructing interpretations or not.

The sensations are experienced by the awareness, but it’s important to make a distinction between the sensations and the interpretations of the sensations—for example, “hand” is not a set of sensations, it’s an interpretation imposed upon some colors and temperatures and proprioceptive data. The colors and temperatures and proprioceptive data are the sensations. You could imagine an awareness that experienced those colors and temperatures and proprioceptive data but did not interpret them as a hand, or as anything. In some sense, the experience of completely uninterpreted sensations might also be similar to some idea of the experience of nothing—you might think of it as “pure noise,” except that even “noise” is a concept, and so the evaluation of the uninterpreted sensations as uninterpreted would not be part of that experience.

  The question of whether the sensations are constructions of the mind is the same question as “does any of this exist or am I hallucinating all of it?” This is an impossible question to answer with certainty. I happen to believe that sensations do in fact correspond to realities that are (at least partially) independent of the observer. I think it may be the case that all sensations correspond to some reality that is partially independent of the observer, even the imaginations—imaginations may just be very heavy conceptualizations of relatively minimal sets of sensations. That is, the difference between the visual perception of an elephant and the imagination of an elephant may just be that the concept “elephant” is being used to interpret light hitting your eyes after reflecting off of an elephant in one case, and being imposed on a very minimal set of random visual and/or proprioceptive sensations in the other case. In the first case, the sensations being interpreted as “elephant” correspond (potentially) to an actual elephant, and in the second case they are merely part of the default sensation landscape of the body, and it is the body’s state that they correspond to.

The concepts are that with which you interpret the sensations. In this document I’m going to talk constantly about concepts, interpretations, models, beliefs, and constructs. These are, roughly speaking, all the same stuff. A concept is like “hand” or “tree” or “house” or “government”; a model is a complex, dynamic, structured interpretation of a dynamic data set, e.g. a model of economics, or a model of your mother. These will be contextually interchangeable, but sloppily speaking my typical usage will be something like—a concept is analogous to a word, a belief is analogous to a sentence, and a model is analogous to a book. An interpretation is the use of a particular concept or set of concepts to parse sensations—just as we can theoretically think about sensations separately from concepts, so too can we think about concepts separately from sensations.

  Actually, as a pedantic aside, I’m not sure we can literally do that—I work with the assumption that the use conditions for a concept are its application to a set of sensations, and so in order to think about “a concept distinct from sensations,” you must be applying that concept to some set of sensations. The sensations can be extremely minimal—some people who think the thought “a concept distinct from sensations” might generate a visual representation of a conceptual framework in empty space, while others might merely refer to points in proprioceptive space, likely without even noticing they’re doing so. It can certainly seem that nothing is happening except the summoning of the concept, but I currently believe that, even in the least visual, least auditory, least verbal thoughts, there are sensations involved in summoning and manipulating the concepts.

  While it may not be possible to actually think “concept” without applying the concept of a concept to some set of sensations, it is possible to apply the concept “concept absent sensations” to some set of sensations! More importantly, I do believe there is a sense in which we have the concepts even when we’re not applying them to sensations. Which is to say, you’re not reinventing the concept of a rabbit every time you encounter the set of sensations that are best interpreted as “rabbit.” It was there, in some form, ready to be applied.

The conceptual content is the construction of the mind, and the way that sensations under-determine their interpretation is what things like the Necker Cube illusion are meant to demonstrate. Most people vastly underestimate the space of possible interpretations of the world around them. Let’s try something: pick a nearby object, like a tree or a pillow. Try imagining that it’s a rabbit. Don’t squint your eyes or imagine that it looks different; just apply the concept “rabbit” to it. Now, try interpreting it as your mom. Now, interpret it as trying to kill you. Now, interpret it as secretly dancing. It seems like you shouldn’t be able to do this. Yet you can. Normally, we would think that a person who believed a pillow was their mother or a tree was trying to kill them was insane. I believe that most of us are this insane—that we are misinterpreting the world around us to a comparable degree on a constant basis. But we are only likely to notice our misinterpretations when they diverge wildly from the interpretations of those around us, or when we are so surprised, or make an error so obvious, that we are forced to re-examine our models.

  Let’s call the application of conceptual content to sensations the engagement of that content. To engage the concept of a rabbit—that is, to think “rabbit”—one applies the concept “rabbit” to a set of sensations (perhaps a visual image, perhaps light bouncing off a rabbit and hitting your eyes, perhaps the slight physical memory of something warm and fuzzy, or perhaps to a toothbrush if we’re having some trouble that day or happen to be tripping balls).
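
  If a mechanical analogy helps, here is a minimal sketch in code of engagement as the application of a function to raw data, and of the way the same data admits many such functions. It is purely my own illustrative toy; every name in it (Concept, engage, the sample sensations) is invented for the purpose, and there is no claim that minds implement anything like this.

```python
# Toy model: a "concept" as a function over raw sensations, and "engagement"
# as the act of applying it. Illustrative only.

from dataclasses import dataclass
from typing import Callable, Dict

Sensations = Dict[str, float]  # raw data: colors, temperatures, proprioception...

@dataclass
class Concept:
    name: str
    fit: Callable[[Sensations], float]  # how well this concept "fits" the field

def engage(concept: Concept, sensations: Sensations) -> str:
    """Apply a concept to a field of sensations, yielding an interpretation."""
    return f"{concept.name} (fit={concept.fit(sensations):.2f})"

# The same field of sensations is multiply interpretable -- the Necker-cube point.
field = {"warmth": 0.7, "fuzziness": 0.9, "wavy_outline": 0.2}

rabbit = Concept("rabbit", lambda s: s["fuzziness"] * s["warmth"])
snake = Concept("snake", lambda s: s["wavy_outline"])

print(engage(rabbit, field))  # rabbit (fit=0.63)
print(engage(snake, field))   # snake (fit=0.20)
# Nothing in `field` forces either reading; the concept imposes it.
```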

  If one does this consciously, this is attention. One does not have to notice that one is applying concepts for the concepts to be applied. You might drive from home to work, deeply absorbed in some line of thought about a presentation you’re going to give that afternoon, and arrive at your destination with detailed anticipations about how your presentation will go but no memory of the drive. Nevertheless, you were engaging many complex models outside of your attention—your model of the route, of traffic laws and driving hazards, and of how to operate the car, not to mention your models of how to coordinate your body—not only how to sit and move the gear shift and the steering wheel but how to breathe and digest and regulate your body temperature and heartbeat. Which models of how to regulate your body you were engaging probably shifted as you thought about standing up in front of everyone and talking. The models you’re engaging outside of your attention are still models, many of them in formats far from verbal representation, but, I believe, fundamentally no different from your models of the content of your presentation.

  Attention is a critical idea because attention is what allows you to intentionally change a model. Concepts are formed, shaped and changed by their application. Learning is the formation and changing of concepts. Thus, if you want to improve a concept or a model, you have to obtain its application conditions—that is, you have to use it. If it is to change, it has to be given the opportunity to take new inputs. And if you want to direct the change, you have to figure out a way to get the model in attention. A model’s application conditions are not always easy to obtain; figuring out how to get something into attention is a big part of my practice. We’ll come back to this idea in more detail later.
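
  Since the claim that concepts change through attended application is doing real work here, a second toy sketch may help. Again, everything in it (Prototype, the update rule, the learning rate) is my own invention for illustration, assuming one crude way a concept could absorb new inputs only when in attention.

```python
# Toy model: a concept as a running prototype that is reshaped only when it
# is applied *in attention*. The update rule is an arbitrary choice made for
# illustration, not a claim about actual learning machinery.

class Prototype:
    def __init__(self, features, lr=0.2):
        self.features = dict(features)
        self.lr = lr  # how strongly each attended application reshapes the concept

    def apply(self, sensations, attended=False):
        # Engagement happens either way; the concept only changes
        # when the application occurs in attention.
        if attended:
            for k, v in sensations.items():
                old = self.features.get(k, 0.0)
                self.features[k] = old + self.lr * (v - old)
        return self.features

rabbit = Prototype({"fuzziness": 0.9, "size": 0.1})

# Out-of-attention application (the drive to work): the model runs, unchanged.
rabbit.apply({"fuzziness": 0.4, "size": 0.3})
print(rabbit.features)  # {'fuzziness': 0.9, 'size': 0.1}

# Attended application: the concept takes the new input and shifts.
rabbit.apply({"fuzziness": 0.4, "size": 0.3}, attended=True)
print(rabbit.features)  # approximately {'fuzziness': 0.8, 'size': 0.14}
```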

In addition to the conceptual structures themselves, it seems likely that there are constraints on or features of concept formation that aren’t properly categorized as concepts—meaning, they guide or are part of the conceptual formation machinery, but are not produced by and cannot be edited by that machinery. Time and space are likely “built in” in this way, rather than being the products of parsed sensations. It’s possible that there are other “built in” features of the conceiver—things that would be necessary features of all concepts, or features of the structure within which concepts are built. I don’t believe that I know what all of these are, but I do believe that one of them—one that is absolutely critical to understand—is Purpose.

Purpose/Goal/Telos/Good

Concepts are formed for a purpose. This is something that I think modern cognitive science understands only locally and shallowly, and that many of the meditation traditions that track the constructed nature of experience miss altogether. Cog sci will take a concept like “snake,” note that it’s constructed somehow from visual primitives that pick out a wavy line in a visual field that takes straight lines as default, and make a guess that it’s important to prioritize the recognition of snakes for evolutionary reasons. This is at least a functional explanation—i.e., the idea is that the concept is formed for the purpose of survival—and so to my mind it has a leg up on a bunch of the nirvana-seeking or moksha-seeking traditions, which will say something closer to “the snake is an illusion; let go of your attachment to the snake.” This is bullshit; don’t be Buddhist. The snake is an illusion—that is, you are constructing your experience of it—but you have conceptual machinery for a reason.

But the reason is not fundamentally survival. If you have to explain all human behavior, survival by itself is pretty unsatisfactory as a motivation. You can sort of do it, at a stretch, and people often do: explaining altruistic behavior as motivated by a species-level survival drive, homosexuality as a glitch or as an obscure form of pro-sociality that somehow contributes to species survival, suicide the same, the purchase of expensive fashionable clothes as an expression of the drive to propagate one’s own genes forward in time. You’d have to do something a little fancier, I think, to explain behaviors like those of terrorists who want to wipe out the human species altogether as expressions of a survival drive, but people have definitely tried.

You could take other things besides survival to be the fundamental driver. For many years, I believed that the sole fundamental motivation of mind was to connect to other minds. If you do this, you can perform the same sort of antics one does when trying to cash everything out in survival. I used to be especially focused on explaining violent and manipulative behavior as contorted connection drive—sometimes correctly and sometimes less correctly, as it turned out. That’s its own story, but suffice it to say I’ve updated away from that position.

Whatever you choose as a theoretical base motivation, you’re going to have to jump through some hoops explaining the entire space of thought and behavior and preference according to that motivation, because the space is in fact complex and convoluted and full of apparent contradiction. The alternatives are to posit a fundamentally fragmentary set of drives, or arbitrary motivation. We can discard arbitrary motivation, because clearly there exists some pattern to thought and behavior and preference. I hope it will be self-evident why fully environmentally determined motivation and fully imitative motivation are incoherent ideas. As for the option of fundamentally fragmentary drives—for example, the idea that there is a sex drive and a death drive and both exist as incommensurable primitives in the mind, always in competition—all I can say is that I have never found two apparently separate behaviors or preferences to in fact be based in fundamentally separate or incommensurable drives. If you take someone who on the one hand wants to live and on the other hand wants to die, I predict—and I expect to be right—that the “part” of them that wants to live is focused on one set of information, and the “part” of them that wants to die is focused on a different set of information. Those sets of information are compartmentalized, which results in functionally separate and conflicting drives, but they can be de-compartmentalized—and once that happens, some more basic motivation will flow through the combined information landscape and output a coherent and non-conflicting preference.
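
If it helps to see the shape of that claim, here is a deliberately crude sketch: one fixed scoring function standing in for the more basic motivation, evaluated over two compartmentalized information sets and then over their union. The function, the facts, and the weights are all invented for illustration; nothing here is meant as a real model of anyone’s mind.

```python
# Crude sketch: one motivation, different information sets. Conflicting
# "drives" fall out of compartmentalization, not out of separate primitives.
# All names, facts, and weights are invented for illustration.

def preference(information):
    """A single fixed motivation scoring two options, given what is in view."""
    score = {"live": 0.0, "die": 0.0}
    for fact, (option, weight) in information.items():
        score[option] += weight
    return max(score, key=score.get)

# Two compartments attending to different facts.
compartment_a = {"people I love": ("live", 3.0), "work I care about": ("live", 2.0)}
compartment_b = {"chronic pain": ("die", 4.0)}

print(preference(compartment_a))  # live -- the "part" that wants to live
print(preference(compartment_b))  # die  -- the "part" that wants to die

# De-compartmentalized: the same motivation over the combined landscape
# outputs one coherent preference.
print(preference({**compartment_a, **compartment_b}))  # live (5.0 vs 4.0)
```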

What is the more basic motivation? I don’t know. It is the goal—it is the telos, the will, the fundamental drive; it is value; it is Goodness. Good is only ever coherently definable in terms of the desire of an agent; of the Good, all I can say is that there is that which the agent fundamentally desires, and it is in essence a unified whole. And, since we are discussing not a model of an individual mind but a model of the invariant features of mind in general—a model that is meant to stand independent of individuation or embodiment—the claim is not merely that the individual human has a fundamentally unified or unifiable will, but that at the most fundamental layer, the Goal or the Good is invariant across minds. This is an extremely strong claim. It is not, however, a particularly unusual one—many spiritual traditions make similar claims, usually articulated more fuzzily, along the lines of “we are all one.” But “we are all one” is an extremely strong claim if taken seriously, and can lead people to do really, really stupid and dangerous things. An astounding amount of complexity and divergence can arise from a single algorithm that processes varied inputs, and we will explore that complexity and divergence in depth as we go. For now, just know that the claim of the universality of telos is not an invitation to treat—or to value—everyone the same.

So, we have a model of mind—awareness containing sensations, sensations interpreted by concepts, concepts composed within a structured framework and for a Goal. I believe that the ability to make and think through the distinctions laid out above is extremely important, for reasons that will hopefully become clear later in this document, and so we have the separate terms, and we’ve run through the idea of awareness without sensation, sensation without concept, and concept without sensation. In reality, I don’t know which, if any, of these separations ever occur. The idea of sensations or concepts as mental phenomena without awareness is incoherent (in the way that the idea of a thought without a mind is incoherent), so we can rule that one out. Some meditators speak of achieving states of “pure awareness,” where they are not interpreting their sensations and/or not experiencing sensations, and have no will or desire but “just are.” I believe it’s possible to turn one’s attention away from sensations, and I believe it’s possible to dissolve and/or inhibit the application of many, many of one’s conceptual constructs. I don’t believe, however, that it’s actually possible to turn off the will, and I also believe that while all of what people are typically cognizant of as their conceptual content is probably dissolvable, the generator of conceptual content is not. So my actual model is something more like this: there exists the “fabric of mind,” which is the awareness-will, which irrevocably contains the parser/conceptualizer/creative apparatus, and through which sensations pass and are responded to by that apparatus in accordance with the will.

  If the model as stated is true, then your sole purpose in this world is to navigate your way toward what you want—sole purpose not in a moral sense, but in a determinist one: if the model above is correct, any idea of morality on which you might base that mission, or any other, was generated by the function structuring your model of the world in order to pursue your goal, and thus is superseded by that function.

  It is not possible to do other than pursue what you want, but there is a truth about what you actually want, and a truth about how to get it, and it’s quite possible to be ignorant or wrong on both counts.

  We’ll come back to that. Onward.
