3. Agents and the Development of Internal Conflict

Earlier, I made a big deal of asserting that the mind’s fundamental drive is unitary. But it seems obvious that within individual people there are different, often conflicting drives. How would it come to be that in a mind with a unified goal, we would experience both the desire to stay in a relationship and to leave it, or the desire to diet and the desire to eat, or the desire to make money and the desire to stop working?

The answer lies in the distinction between the Goal and the subgoals that the mind develops in the process of conceptualizing the environment. Internal conflict arises from the fact that distinct subgoals do not necessarily know about each other or know how to share information. They may not know that they are ultimately working toward the same end, or agree about how to get there. In the same way that the reasons behind a foreign cultural practice can be opaque to an outsider, so can one function of the mind be opaque to another function. In some cases, this opacity can even be intentional, either because one part of the mind does not want to understand another, or because it does not want to be understood itself. In other cases, the problem is merely one of failure to communicate.

To get a better understanding of this idea of subgoals, let’s go back to our baby with its Mom + Warm Snuggliness concept. Actually, this is not just a concept—it’s a subgoal: something that the baby has conceptualized because it has recognized in the experience a pattern relevant to its Goal. The baby now has a conceptual structure directing its will that not only recognizes Mom + Warm Snuggliness but seeks or anticipates it, and responds with action or impulse when it encounters it. 

In order to be able to seek and anticipate Mom + Warm Snuggliness, it must be that the conceptual structure (model) pertaining to the subgoal is more complex than just the subgoal itself. It must contain not just the concept of the subgoal but also conceptual structure representing the ways that other things relate to it, e.g. the conditions under which it is more or less likely to occur (perhaps time of day), or the conditions under which it is more or less important to attain (perhaps hunger versus satiation, or wakefulness versus sleepiness). 

In this particular example, the action or impulse that occurs in response to the anticipation of Mom + Warm Snuggliness is fairly subtle; perhaps the baby relaxes. But you could imagine a different structure where the person had an extensive model of playground status dynamics, which contained among other things a subgoal like “pursue justice” and a concept like “bully,” such that upon recognizing a bully (i.e., applying the concept to a situation where it seemed to fit), the person would punch the bully in the face.

I think of this entire structure as a function. The function is the will being channeled through a dynamic conceptual structure (a model) toward a conceptualized aspect of the Goal (a subgoal). The function takes inputs in the form of sense data, parses them in accordance with the conceptual structure, and produces outputs in the form of impulse, action, emotion, and/or thought. Outputs may also include amendments to the conceptual structure. I believe that individual functions of the mind come into being where Goal-relevance is perceived in the environment, and conceptual structure is built to pursue that aspect of, or apparently necessary step toward, the Goal. That aspect or step is the subgoal; the function is that which pursues the subgoal. (From here on out, I will drop the term subgoal—a function will have a goal and the mind will have its Goal.)
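
(For readers who think in code, here is a deliberately crude Python caricature of this structure. It is an analogy only: the class names, the stubbed-out “parsing,” and the playground rule wired into respond() are all invented for illustration. The point is just the shape of the loop: sense data comes in, gets interpreted through a model, and produces impulse, action, emotion, or thought, possibly amending the model along the way.)

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """Conceptual structure: concepts plus how other things relate to them."""
    concepts: dict = field(default_factory=dict)   # e.g. {"bully": ...}
    relations: dict = field(default_factory=dict)  # conditions, relevance, etc.

@dataclass
class Output:
    impulses: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    emotions: list = field(default_factory=list)
    thoughts: list = field(default_factory=list)

class MindFunction:
    """Will, channeled through a model, toward a goal (a conceptualized
    aspect of the Goal)."""
    def __init__(self, goal, model):
        self.goal = goal
        self.model = model

    def step(self, sense_data):
        parsed = self.parse(sense_data)    # interpret sense data via the model
        output = self.respond(parsed)      # impulse / action / emotion / thought
        self.amend_model(parsed, output)   # outputs may include model amendments
        return output

    def parse(self, sense_data):
        # Stub: check which of the model's concepts apply to the situation.
        return {c: c in sense_data for c in self.model.concepts}

    def respond(self, parsed):
        out = Output()
        if parsed.get("bully"):            # the playground example, as a rule
            out.impulses.append("punch the bully")
        return out

    def amend_model(self, parsed, output):
        self.model.relations["last_parse"] = parsed  # crude stand-in for learning

# Invented usage:
f = MindFunction(goal="pursue justice",
                 model=Model(concepts={"bully": "someone picking on a weaker kid"}))
print(f.step({"bully", "recess", "playground"}).impulses)  # ['punch the bully']
```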

Frequently, it makes sense to refer to such functions as agents, or parts of the person. To some of the people reading this, the term “part” will sound familiar from their work with Internal Family Systems, or IFS, which is the only therapy style I’ve encountered that explicitly ontologizes the mind as a multi-agent system, although many others do so implicitly/occasionally/to varying degrees. (If you’re interested in the IFS ontology, you can read more here). To me, the term “part” usually represents the personification of a function; I like to use it when I’m interacting directly with a “part” of someone or otherwise describing it as though it is a person. In other cases, I like to use the term “function” because it emphasizes the idea that the individual function is part of a more general system, rather than being a fundamentally separate entity. And I like to use the term “agent” when I want to capture the idea of intentionality in a way that the term “function” doesn’t, but want to remain impersonal-sounding so as to seem abstract and cool.

Anyway. Now that we have a clear idea of agents/functions/parts, we can look at the conditions under which they come into conflict. Let’s start with a “simple” case, by which I mean a case in which there is no intentional obfuscation, merely a failure of communication.

Earlier, I made the analogy between a conceptual model and a book. This is actually not a great analogy for several reasons. For one thing, a book is static, while a model evolves as it is applied, and even begets “offspring” as new situations are encountered that can be parsed with elements of existing models. For another thing, a book is meant to be read. A model is meant to be applied, but is not necessarily formatted to be understood or communicated. Thus, a person can end up in a position where they seem not to be able to “access” information they know they have. Imagine someone trying to eat healthy. She “knows” she should eat vegetables, but she craves ice cream instead. So we have two functions at work here—one that wants vegetables and one that wants ice cream. Let’s think about the differences between these functions’ models. 

In some sense, the two models contain very similar information—at least, they purport to model the same domain. The function that wants the ice cream probably has a model of the caloric and chemical content of both ice cream and vegetables, formatted as an understanding of how they taste and how it will feel to eat them—their felt effects on the person’s energy levels and digestion, and the emotional states associated with those. The function that prescribes vegetables probably also has a model of the caloric and chemical content of both ice cream and vegetables, formatted more abstractly and recalled as visual/auditory imaginings—perhaps a memory of a conversation the person had with her mom or a health article she once read. (The latter function’s model may not be very precise—it might be as simple as “ice cream is fatty and sugary and vegetables have nutrients and vitamins.” But even in the case where the person is counting calories and tracking specific nutrients, it’s worth noting that the abstract model of the caloric and chemical contents of food is almost definitionally nowhere near as detailed as the model generating the craving.)

So, two models of the caloric and chemical contents of food. To expect to be able to use the ice cream-craving function’s model to write down the chemical composition or number of calories of the ice cream, however, is obviously ridiculous (at least without training). In that sense, the information in the model of ice cream that generates the craving is not at all the same information as the information in the explicit model (the one that wants the person to eat vegetables). Both models contain descriptions of ice cream, but the formatting of the information is so different that the two can only be said to be the same information in a very abstract sense. Simply put, they’re not.

However, the two functions are attempting to model the world and produce action in the same domain, and thus are coming into conflict. The vegetable-prescribing function can’t “talk to” the ice cream-craving function, because when it says “that ice cream doesn’t have any nutrients,” the ice cream-craving function not only has no idea what it’s talking about—it may have no idea that it’s talking at all. So the person tells herself, “I should eat vegetables,” and this affects the status of her ice cream craving not one iota.

You could imagine her integrating the two models, building a translation between them by creating a highly detailed mapping between her sensory experience of food and her explicit understanding of nutrition. (You could even imagine her getting good enough to be able to approximately guess a food’s caloric and nutritional content from taste and digestive feel.) In the world where both parts shared similar goals or recognized the validity of each other’s goals, this would resolve the conflict—the person might crave ice cream, then be able to propose the idea of vegetables to herself via visceral imagination, at which point the craving might shift. Or perhaps she might get a feeling in response to the visceral imagination of eating vegetables that she could understand as “no, I’m not really hungry, I just want to feel comfort right now.” At that point she might decide that comfort was a good idea, and eat the ice cream without feeling guilty about it. Or, she might be able to counter-offer with something healthier but equally comforting. If the relationship between the two parts is good enough that both recognize the Goal-relevance of each other’s goals, and her counter-offer is good, the counter-offer will not only be acceptable but may present as viscerally superior to the part that was previously craving ice cream, since it will value attaining both comfort and health.
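
(Again purely as an illustration: the mapping below, every number in it, and the two scoring functions are all invented. It is just a toy version of what a translation layer plus a counter-offer might look like: each option is described in the felt vocabulary, translated into rough explicit estimates, and the winning option is the one that does well for both parts rather than for either alone.)

```python
FELT_TO_EXPLICIT = {
    # felt quality        -> rough explicit guess (all numbers invented)
    "rich and sweet":      {"calories": 400, "nutrients": 1},
    "warm and comforting": {"calories": 250, "nutrients": 4},
    "crisp and fresh":     {"calories": 80,  "nutrients": 8},
}

def craving_score(felt_quality):
    """How appealing the comfort-seeking part finds an option (invented values)."""
    return {"rich and sweet": 0.9, "warm and comforting": 0.85,
            "crisp and fresh": 0.2}[felt_quality]

def health_score(felt_quality):
    """How acceptable the vegetable-prescribing part finds it, judged from the
    translated (explicit) description rather than from the felt one."""
    info = FELT_TO_EXPLICIT[felt_quality]
    return info["nutrients"] / 10 - info["calories"] / 1000

def counter_offer(options):
    # If both parts recognize each other's goals, the winning option is the one
    # that does best on the combined score, not on either score alone.
    return max(options, key=lambda o: craving_score(o) + health_score(o))

print(counter_offer(["rich and sweet", "warm and comforting", "crisp and fresh"]))
# -> "warm and comforting": a comforting-but-lighter option beats the ice cream
#    once both comfort and health are allowed to count.
```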

Such successful communication between parts, however, depends on their willingness to understand or attend to each other’s information, and/or value the achievement of their goals. You could easily imagine a person with a pair of functions similar to the ones we just described, except where the vegetable-prescribing function strictly cared only about following an explicit set of nutritional rules and didn’t care about the other function’s nutritional models or desire for comfort. This version of the vegetable-prescribing function instead would actively ignore and do its best to override all of the other function’s signals. Obviously, in this scenario, the communication channel we imagined above would never get built. 

Imagine yet another variation: the vegetable-prescribing function is willing to make some allowances for ice cream cravings, but only where they’re properly justified. Imagine that it only gives the green light to eat ice cream if it’s been a really, truly hard day. Now imagine that the ice cream-craving function is experiencing a certain type of hunger and wants the sugar and fat that the ice cream contains, but has learned that it’s only going to get them if it can make the case that things are going sufficiently poorly. Now, when the person is debating with herself about whether to eat the ice cream, she finds herself imagining all the worst aspects of the day, dwelling on and amplifying the difficult parts, spending a lot of attention catastrophizing. This is happening because the available rationale for ice cream is determined by the constraints she has set on what counts as acceptable; the ice cream-craving function has an incentive to obfuscate its models, instead fabricating information that will pass a filter. Thus the possibility of accurate introspection is compromised.
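
(One more invented sketch, this time of the incentive structure rather than the models: the threshold, the ten-point “badness” scale, and the report functions are all made up. What it shows is only that once the gate is a hard rule, the cheapest way through it is to distort the report rather than to renegotiate the rule.)

```python
HARD_DAY_THRESHOLD = 8  # gatekeeper rule: ice cream only if the day rates >= 8/10

def gatekeeper_allows(reported_badness):
    return reported_badness >= HARD_DAY_THRESHOLD

def honest_report(actual_badness):
    return actual_badness

def strategic_report(actual_badness, wants_ice_cream):
    # The craving function doesn't dispute the rule; it learns what kind of
    # report gets through the filter, and dwells on whatever supports it.
    if wants_ice_cream and actual_badness < HARD_DAY_THRESHOLD:
        return HARD_DAY_THRESHOLD  # catastrophize until the day "qualifies"
    return actual_badness

actual = 5  # a mediocre day, not a terrible one
print(gatekeeper_allows(honest_report(actual)))           # False
print(gatekeeper_allows(strategic_report(actual, True)))  # True
# The gate still "works," but the introspective report is no longer accurate.
```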

This kind of thing happens all the time. People who believe they shouldn’t ask for care if they’re healthy and able to care for themselves will often find themselves feeling mysteriously sick. People who believe they can only be loved if they’re sufficiently moral will fabricate altruistic justifications for their behavior that obscure their real reasons. A lot of internal fragmentation—that is, the state of different parts of the mind being unable to share information, or more simply put, the inability of a person to know themselves—is created and stabilized by parts of the mind inadvertently setting up adversarial incentives for others, where both are vying to produce action in a shared domain.

Setting up good communication between functions, however, can be easier said than done. A function with a directly adversarial relationship to another function is often configured that way because it came into being for the express purpose of fighting or compensating for that other function. Take our ice cream-craving function and our vegetable-prescribing function. It’s possible that the vegetable-prescribing function developed when the person read a book on nutrition and decided it would be a good idea to try to eat more healthily. I should mention, by the way, that calling these guys the “ice cream-craving function” and the “vegetable-prescribing function” is a silly simplification, and I’m not at all implying that ice cream and vegetables are their only or their primary goals. In the scenario we’re describing now, where the vegetable-prescribing function arises from an encounter with a nutrition book, it’s much more likely that the person already had an elaborate self-management and information-gathering function, and the plan to encourage herself to eat vegetables was a simple addendum developed by a function that knew how to seek and integrate a wide range of information into its plans. It’s also possible, however, that the vegetable-prescribing function arose specifically in response to some terrible failure of the ice cream-craving function. If the person’s cravings were pretty out of whack and generating externalities that the craving function didn’t seem able to take into account, and she found herself eating until she felt sick or gaining weight she didn’t want to gain, the second function might have come into being specifically to rein in the first.

If this is the case, we can say that the ice cream-craving function is prior to the vegetable-prescribing function. The vegetable-prescribing function has the goal of being healthy, but is largely concerned with correcting the ice cream-craving function. If we see this, we know something about the responsive evolution of the person’s psychological structure. Psychological structure develops over time in response to perceived error or incompleteness in existing structure, when the mind observes its actions and forecasts its own failure to achieve its Goal. In my experience, common functional growth over time looks much like the branches of a tree—tapering from extremes into nuance as the person’s different functions get increasingly refined in their mutual compensation patterns. In some cases, people with deeper than average understandings of their own inner workings are able to more frequently understand and amend the original function generating an error rather than develop a separate compensating function, and so experience more integration and less “branching.” Dysfunctional development, on the other hand, is more “bipolar,” with the different parts of the mind each refusing to accept the realities of the other and instead implementing more and more extreme compensations and repressions on both sides, until functional sustained action becomes virtually impossible.
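
(If it helps, the “branching” picture can be caricatured as a tree of functions, where each new function is grown with a goal defined mostly by the error it exists to correct. Everything in the sketch below, the Func class, the develop_compensation helper, and the example goals, is invented for illustration.)

```python
from dataclasses import dataclass, field

@dataclass
class Func:
    goal: str
    compensates_for: object = None          # the function this one was grown to correct
    children: list = field(default_factory=list)

def develop_compensation(existing, observed_error):
    # "Branching" path: rather than amending `existing`, the mind grows a new
    # function whose goal is defined largely by the error it exists to correct.
    new = Func(goal="correct: " + observed_error, compensates_for=existing)
    existing.children.append(new)
    return new

cravings = Func(goal="comfort and energy via food")
diet = develop_compensation(cravings, "eating until sick, unwanted weight gain")
print(diet.goal)  # correct: eating until sick, unwanted weight gain
# The more integrated alternative would be to amend `cravings` directly instead
# of adding `diet`; that is what less "branchy" development looks like.
```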

At this point, we have a working model of psychological structure, sufficiently detailed to allow a huge amount of navigation, correction, and development. We have a base ontology of mind that lets us understand the distinction between reality and interpretation and maintain an understanding of the commensurability of goals; we have the understanding of the interpretive apparatus building conceptual structure in response to its environment; and we have the understanding of that same interpretive apparatus building further conceptual structure in response to its projections of its own behavior in that environment. This gives us the general pattern of psychological genesis and, with good introspective training, lets us trace the genesis of a specific psychological structure: understanding its subjective environment, its primary concerns, and its checks and balances positions us to communicate with it in a targeted way and help it develop positively.
