Insight Into Injunction-like Cognitive Tech Often Undermines The Tech, But You Should Use It Anyway
There’s a broad category of cognitive technologies1 that seem to serve a similar function to deontological imperatives, e.g., “thou shalt not steal.” That function, I claim, is something like stabilizing preferences or policies over time as different urges get activated by the environment. Having such stabilization can be extremely practically valuable for agents such as ourselves, who have to deal with all sorts of urges coming up that are counter to our long-term projects. Interestingly, for many—perhaps for most—of these injunction-like cog-techs, having clear insight into their function or purpose makes it much harder to continue to employ them effectively. Insight into their workings undermines their working. This suggests a puzzle: can you effectively employ injunctions in your own life without having to avoid looking clearly at how those injunctions function? I claim that you can, but that it requires a deep appreciation of your own stupidity coupled with something like stubbornness.2
In this post I will give a few examples of injunction-like cog-techs, explain how insight into those cog-techs undermines their ability to function, and say something about how you might rescue some of their usefulness while retaining clear insight into how they function. I will also provide a generalized just-so story for how we ended up with these injunction-like cog-techs.
I felt I needed to invent this clunky “injunction-like cog-tech” phrase because I wanted to make sure that you did not understand me to be talking only about normative imperatives of the sort that philosophers most often talk about. I mean to include any cognitive technology that helps you stabilize your policies or preferences over time. That said, I do think that standard moral imperatives make a decent paradigm example of injunction-like cog-techs.
An interesting aspect of moral-speak as most often practiced by everyday people is that the imperative mood is actually quite rare. It is rare for folks to say “you shall not steal” or “you must not steal,” and much more common for folks to say “stealing is wrong.” The fact that much moral-speak happens in the indicative mood is something a lot of philosophers have commented on and appropriately treated as a puzzle. On a super-naive semantics of a phrase like “stealing is wrong,” the meaning of the phrase is something like “there are some events that count as instances of stealing, and those events have the property of being wrong,” where “being wrong” is some weird property that makes events inherently to-be-avoided regardless of what your preferences or beliefs are. It has been noticed by many that this is a very weird property to try to make sense of on any broadly natural account of the world.
I won’t go into the various ways one might try to give an alternative account of the semantics of normative indicatives here. I only bring this up to point out that it is a central property of the way that people do moral talk that they express their moral judgments as statements, and that the truth or falsity of these statements does not depend on the attitudes of any agent, including the one who is assessing that judgment. Moral relativism and moral antirealism are popular even among everyday folks these days, and I don’t want to claim that this is the only way that people use moral-speak, but it is a prototypical use of moral-speak that all native speakers are fluent in.
Why then would we end up with this weird feature of moral-speak? What purpose would this serve? One purpose that it does seem to serve quite often is to stabilize our policies or preferences across time. Consider someone who believes that it is morally wrong to steal—that is, they believe that stealing is to be avoided no matter what their preferences are, and perhaps also that if you do something that is morally wrong, that makes you a bad person. It seems like such a person would have an easier time not stealing, even if they were very tempted to in that moment. Compare this to someone who merely thinks that stealing is a bad idea most of the time, what with the people who get mad at you and maybe punish you if you get caught. Most of the time getting a free fish isn’t worth the risk. But of course sometimes the risk is low or the benefit is quite high, and in those cases, there is no reason not to steal.
So, sure, if you believe it is morally wrong to steal, you will have an easier time not stealing even when the risk is very low and the benefit is very great. But how could taking something like that on possibly be practically beneficial? If the risk is low and the benefit is great, that means you should do it! Duh! That’s like rule number one of being an agent.
To answer that question, let’s turn to a fairly different example of an injunction-like cog-tech: belief in a God who is everywhere and can see everything. Actually, any being or beings who are invisible and could be watching at any time would work about as well—spirits, ancestors, the animals, etc. How does such a belief help you stabilize your preferences or policies across time?
Well, people are much more likely to fuck over other people when they think they can get away with it (especially if the only reason that they have to not fuck over other people is a practical fear of punishment). Often (especially in the ancestral environment) much of someone’s judgment of whether they can get away with something depends on whether they think someone is watching them. So the extremely practical cognitive technology of believing that important judgmental beings with great powers are watching you at all times helps you not believe that you can get away with things that no other person will ever find out about. Again you might ask, how the hell is this practically beneficial to the agent with this belief? It makes sense why other people in the tribe might want to trick him into thinking that the ancestors are always watching, but how in the hell does that help him?
There are a few good answers here. One is that people discount future costs too much compared to immediate benefits.3 Imagine that you are a caveman who has the opportunity to fuck one of the alpha’s women. It seems like nobody is around. Well, the benefit of getting to fuck this very attractive woman is going to happen right now. The possibility that you get caught and punished is going to happen later.
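To make the discounting point concrete, here is a minimal worked example with made-up numbers (the specific values are illustrative assumptions, not anything from the literature), using the standard one-parameter hyperbolic discount function:

$$V(A, D) = \frac{A}{1 + kD}$$

where $A$ is the undiscounted value of a payoff, $D$ is the delay until it arrives, and $k$ sets how steeply you discount. Suppose the illicit benefit is worth 10 right now and the expected punishment costs 30 at delay $D = 10$, with $k = 1$. At the moment of temptation the benefit weighs $10/(1+0) = 10$ while the punishment weighs only $30/(1+10) \approx 2.7$, so you transgress. But evaluated 20 time steps in advance, the benefit weighs $10/(1+20) \approx 0.5$ and the punishment $30/(1+30) \approx 1.0$, so you endorse abstaining. The same agent flips preferences as the moment approaches; under exponential discounting, $V = Ae^{-rD}$, the ranking would never flip. An injunction-like cog-tech pins your policy to the cool-headed, in-advance judgment.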
A related reason is that parts of your environment trigger urges and general cognitive modes—e.g., horniness, hunger, etc.—which make certain kinds of costs and benefits more or less salient. When you see the attractive alpha female, and you’re really turned on by her, getting punished by the alpha later doesn’t weigh that much on your mind. You’re preoccupied by the hot babe being all seductive in front of you. I mean, there’s nobody around, and I’m pretty strong, how sure am I really that I couldn’t win in a fight with the alpha? I have some friends; I’m sure they’d back me up.
So the story here then is that through a mix of hyperbolic discounting, rationalizing, and stimuli-activated dispositions, we tend to underweight the chances and/or cost of being punished for a norm transgression, especially when we are in the moment of deciding whether to transgress. This is why injunction-like cog-techs are so mixed up with norms and norm enforcement: they are more practical in worlds where you have to deal with others establishing and especially enforcing norms.
The story goes further than this: these cog-techs also help you practically by helping you signal to others that you will follow a norm even when there is little chance of you being caught. Consider another very popular injunction-like cog-tech: feeling bad about things nobody will ever find out about.
This works quite well on our story. You do not need to worry about estimating the chances of getting caught or how bad it would be if you did get caught, or the fact that you might get caught much later than you would get the benefits. You already know that you will punish yourself quite harshly even if nobody else does. Heck, you are probably already punishing yourself now for even considering it.
This is also the sort of thing that other people can detect. If you know that someone is a guilty, scrupulous person, this makes them a far better candidate for guarding the food pile at night than someone who is merely smart and charismatic, especially if they seem kind of generally… loose, let’s say. It might sometimes work to merely lie about believing that it is wrong to steal food from the food pile, but it’s even better if you believe it yourself. Otherwise, when you are interviewed for the position, the interviewer might well be able to detect that you are lying, especially if they are an experienced interviewer.
So that then is our just-so story. Injunction-like cog-techs help us stabilize our preferences and policies over time. This is particularly helpful for beings that are bad at estimating risk, have steep discount rates, have different cognitive modes that get triggered by stimuli, and have to deal with norm enforcement from other beings in their environment.
The problem with this story is that applying it with clear eyes to the injunction-like cog-techs that we are currently implementing, and then investigating what those very cog-techs are good for, or how they got there, or how they work, tends to undermine our ability to continue using them. Let us go through a few examples.
This is easy enough to see in the case of believing in invisible beings who are watching you at all times. Once you see that the point of the belief is to make sure you never feel unwatched, so that the norm enforcers never even have to bother catching you, continuing to believe it requires psychologically unrealistic amounts of compartmentalization. How did you actually end up believing the ancestors were always watching? Oh yeah, it was those norm enforcers who told you about it. They sure seemed real dead-set on making sure you ended up with that belief; that’s curious. Thinking about this clearly, it seems that the wool has been pulled over your eyes.
In the case of using the indicative to talk and reason about moral judgments, using our just-so story to analyze that cog-tech also seems to undermine it. Firstly, it is pretty mysterious what an utterance like “stealing is wrong” could possibly mean on any natural understanding of the world while not being either wildly improbable or devoid of most of its moral oomph. If it means “events that count as stealing are inherently to be avoided by all beings regardless of their attitudes,” that’s very difficult to make sense of in a naturalistic picture of the world. If it just means “stealing is punished around here” or “boo stealing” or “if you steal, we will fight you,” then it loses all of the practical import that I argued believing something like that has for an agent. So it seems like if you want to retain the practical value of believing that stealing is wrong, you are going to have to go with something like the first interpretation. But again, if you see clearly that the belief that stealing is wrong is some nearly nonsensical kludge of a belief that mostly serves you by making sure that you don’t steal when you shouldn’t because you are bad at estimating the costs and benefits of stealing, it sure seems hard to continue believing it in that case. And how was it again that you ended up with this belief? Oh yeah, it was those norm enforcers who were trying to get you to believe it. And oh, fuck, right, they sure did seem very motivated to make sure that you believed this, and they really didn’t like it if you asked them why this was true.
The case of punishing yourself internally turns out to be fairly similar. If the point of feeling guilty is to make sure that you punish yourself ahead of time so you don’t risk misestimating the utility calculus on some norm violation, surely you would be better off just not punishing yourself internally and up-weighting your estimate of the risk of any given norm transgression you consider. If the point of it is to convince others that you will not violate a norm even when no one is looking, or that you won’t do it again, surely you could do this by committing harder, or by getting better at lying. Seems like that would be practically better. This particular injunction-like cog-tech, like all of them really, has a particular stickiness, but a clear view of it is still significantly undermining. Those of you who are unusually introspective might have at one point wondered whether you were feeling so unusually bad about some norm transgression because you were caught and wanted to signal remorse to those who were deciding your punishment. If you are even more introspective, you might have figured out that that was what you were doing, and decided to stop, perhaps out of a sense of duty or out of a sense of virtue.
So that’s that then. Injunction-like cog-techs don’t survive being seen clearly, and that which can be destroyed by the truth should be. Let’s just stop using them. I will follow the norms when they make sense and not when they do not. If I have to get better at estimating risk and benefit, so be it. That’s the way—not continuing to use twisty, nonsensical cognitive habits that were put there by forces actively trying to control me.
Well, as may be obvious by now, I think the problem with that is that these cog-techs serve an important purpose: stabilizing your preferences and policies across time. In the previous section, all of the examples of how an injunction-like cog-tech could be useful had something to do with avoiding transgressing a norm when it was practically unwise to. Those norms mostly served the practical aims of others (e.g., not getting stolen from, not having your harem sleep with other cavemen). Here let me give some rapid-fire examples of when injunction-like cog-techs are useful for projects that serve only or mostly the agent who has implemented them:
You want to work out because you want to be hotter or healthier. This requires going to the gym regularly, so you start feeling bad or doing negative self-talk when you miss a day at the gym. Alternatively, maybe you develop a belief that it is good to go to the gym. Or maybe you imagine that a hero is watching you. What would Arnold Schwarzenegger say about you skipping leg day?
Perhaps you want to quit smoking cigarettes. In order to do that you reconceptualize every individual decision to either smoke or not smoke a particular cigarette as the decision to either be a smoker or not be a smoker.
You are in a happy monogamous marriage and you want to make sure that your marriage does not fall apart. When you find yourself in a tempting situation, you remind yourself of how much you care about your partner and think about what they would think of the situation you are in.
You have founded a startup and you want it to succeed. In order for it to succeed you have to work 10+ hours every day. In order to achieve this you build your identity as someone who ships, or someone who cares a lot about their career, enough that spending time away from the office makes you feel guilty.
You want to stop eating animals because you care about animals but you really love meat. In order to do this you cultivate a disgust response to meat in yourself. You keep watching videos of animals being slaughtered in gruesome ways and try to associate this with eating meat internally.
You want to write a blog post every day for an extended period…
It seems to me like a lot of the coolest things that humans have ever done could not have been accomplished by the humans involved unless they had some kind of injunction-like cognitive tech. Human decision making is too fickle on its own to accomplish that. You might think that a clear understanding of the injunction-like cog-techs mentioned above would be less undermining. Maybe somewhat, but not really enough. I’ll leave it as an exercise to the reader to think about how applying similar reasoning to our just-so story might undermine the adoption of the cog-techs above (in part because I sort of think you shouldn’t be exposed to that without explicit consent).
Worse than that: plausibly, a lot of the coolest things that humans have ever done have been done using injunction-like cog-techs that really were of the particularly insane-seeming kind, e.g., believing that you are always being watched by a very powerful invisible guy. It would be sad to completely lose the ability to stabilize our preferences across time and scenarios that these techs afforded us.
So, what do?
I have two prescriptions: one a hack, and the other, the real deal.
First the hack.
You might have noticed that a lot of these injunction-like cog-techs function like commitment devices. They tie you to the raft, as it were. In particular, they mostly are cognitive commitment devices, and most of their practical import (at least to the agents that adopt them, rather than to the agents that install them) comes from their function as commitment devices.
But not all commitment devices are cognitive! You can, for instance, give a trusted friend access to your bank account and ask them to take money from you if you do not lose a certain amount of weight by some date. One of my partners and I once agreed that if we got into a fight during some set period, we would both have to donate to the Communist Party of the US. (This did end up working.) External commitment devices have the advantage of not requiring you to mess with your cognition in order to implement them. They do, however, have the disadvantage of requiring some overhead relative to cognitive commitment devices. That said, they are relatively underutilized in my opinion.4
Now for the real deal.
The answer is to adopt injunction-like cog-techs on purpose, because you correctly think you are kind of stupid and corrupted. Actually, even that’s not quite right, because it doesn’t take seriously enough how difficult and extraordinary it would be to just have stable preferences across time without using injunction-like cog-techs. So perhaps you should also have an appreciation for that: you should know both that you are very stupid and that it is ridiculously difficult to have stable preferences across time. To deal with this challenge, you deliberately design your own injunction-like cog-techs and adopt them.
Have you noticed that you are more likely to believe something when it is convenient? Perhaps you should believe things less when it is convenient to believe them—but believing the convenient thing will of course be more tempting when the moment actually comes than it is now. It’s easy to think that you should believe things less when it would be convenient if they were true. It’s extremely difficult, when you come across something that seems definitely true, to notice that it would also be convenient if it were true and then believe it less. So what should you do?
Perhaps you could take on an identity as someone who doesn’t do that sort of thing. Or you could publicly vow to not do that. Or you could develop a sense of disgust for certain kinds of cognitive moves that help you believe convenient things. There are a lot of options.
We might reasonably ask why a clear vision of how this cog-tech works wouldn’t undermine it in the same way that the other cog-techs are undermined by insight. The answer is that I identified a feature of the stability of my policy across time that I did not like, and I decided to implement this tech for the sake of my own aims. It involves no false or nonsensical beliefs, and the reason for it being there flows from my own judgments about what policy practically benefits me most. As such I do not need to worry that there is some other way to better achieve my aims. It may be a sort of kludge, but it is, according to me, the best option available.
There is a genuine cost to this strategy. Injunction-like cog-techs are harder to disavow than beliefs. There is a genuine reduction in your flexibility here. But that is the whole point. Regardless, the injunction-like cog-techs I design and adopt to serve my own aims are much easier to disavow when the time comes than the injunction-like cog-techs that I was taught to adopt as a child by parents and busybodies.
I mean to use “cognitive technology” in the broadest sense possible: a thing people learn to do with their minds in order to get something done.
I would be remiss not to mention that many of the insights in this post came from conversations with Eric Campbell. He has a paper on similar things which I would link here if I could; I haven’t been able to find it on ResearchGate, and I lost the copy he gave me long ago. Here is the JSTOR link.
I am sure a bunch of my discussion here was also influenced by Kevin Simler and Robin Hanson’s The Elephant in the Brain and the first four chapters of Jonathan Haidt’s The Righteous Mind.

