AI Detox: Reclaiming Thought in an Efficiency Era

I joined a recent meeting about how we can use AI rigorously and efficiently. It was my first time hearing the terms super prompting and skill files. We all agreed on one point: LLMs improve our capacity to work. Tasks that used to take hours can now be done in far less time. However, the time we gain is often spent asking LLMs to do even more tasks. Efficiency creates new demand for efficiency: the more we rely on it, the less often we practice thinking without immediate assistance. What looks like a productivity gain may also become cognitive outsourcing.

I propose a practice for breaking this cycle, which I call AI detox.

What AI detox means

AI detox is not a rejection of AI. It is a deliberate practice of occasionally refusing immediate solutions. It means setting aside a block of time in which I do not ask an LLM to generate the answer. During that time, I gather information from primary sources, reason, sketch possibilities, predict, create, and iterate. In this time, I tolerate inefficiency and allow myself to be wrong.

The idea has roots in health. Comfort, sensation, and desire are not always worth pursuing. A hot bath is soothing; a cold bath can be restorative, although it is less comfortable. In climbing, even when there is a known beta, I still like to explore alternative solutions or try a route in a style that does not come naturally to me. The goal is not merely to finish the climb. It is to expand the space of possible action.

The overall objective is to preserve the capacity for independent problem solving and, more importantly, the ability to build frameworks. Please read the previous post (universal law and mind), which distinguishes perceived reality from objective reality and proposes that to live thoughtfully is to keep building frameworks. By frameworks, I mean the conceptual lenses through which we interpret perceived reality. To think is not only to reach answers but also to construct the structure by which answers become meaningful.

The widespread use of a small number of LLMs impacts framework building. If many people rely on a few dominant models to propose solutions, then our reasoning may converge toward a limited set of frameworks. This convergence would reduce epistemic diversity.

In principle, we could build world models and train diverse LLM agents. But this is unlikely to happen at scale for two practical reasons: 1) industry incentives prioritize capability and speed over diversity, and few humans want to create a creature or product that threatens their own supremacy; and 2) the scripts used to build world models are likely themselves written by LLMs, so bias can be inherited early in the pipeline.

For programmers, the call is more pressing. LLMs grab programmers’ attention because they provide solutions faster than unassisted coding, pumping dopamine into the brain.

An example: heart-evoked potential reasoning

I am evaluating whether the cognitive effects of transcutaneous auricular vagus nerve stimulation (taVNS) are mediated by the heart-evoked potential (HEP). I am trying to separate brain activity coupled with the heartbeat from volume conduction of the electrical field generated by the myocardium.

Without an LLM, my first instinct was an extreme thought experiment: if we could measure EEG when the brain is dead but cardiac activity remains, we might isolate what purely cardiac contamination looks like. This occurs only when the brain dies while the heart continues to beat under mechanical ventilation and medication. In a living human, brain activity and the electrical field from myocardial contraction are intrinsically coupled.

HEP is operationally calculated as the average electrical potential time-locked to heartbeats. This averaging cancels out signals that have no temporal relationship to the heartbeat. So, if we apply the same averaging to the ECG, EEG periods with consistent ECG morphology are more likely to reflect volume conduction than other periods.
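To make the averaging concrete, here is a minimal sketch with simulated data. The sampling rate, R-peak jitter, and deflection shape are all assumptions for illustration, not my actual pipeline; the point is only that a component time-locked to the heartbeat survives averaging while random activity cancels.

```python
import numpy as np

fs = 250                                  # assumed sampling rate in Hz
n_samples = fs * 60                       # one minute of simulated "EEG"
rng = np.random.default_rng(0)

# Hypothetical R-peak sample indices: ~1 Hz heart rate with a little jitter
base = np.arange(fs, n_samples - fs, fs)
r_peaks = base + rng.integers(-10, 10, size=base.size)

# Simulated single-channel EEG: noise plus a small deflection 100 ms after
# each heartbeat (a stand-in for cardiac volume conduction)
eeg = rng.normal(0.0, 1.0, n_samples)
deflection = np.exp(-0.5 * ((np.arange(fs) - int(0.1 * fs)) / 5.0) ** 2)
for r in r_peaks:
    eeg[r:r + fs] += deflection

# Heartbeat-locked average: cut a 1 s epoch at each R peak and average.
# Averaged noise shrinks roughly as 1/sqrt(number of beats), while the
# heartbeat-locked deflection remains at full amplitude.
epochs = np.stack([eeg[r:r + fs] for r in r_peaks])
hep = epochs.mean(axis=0)
print(int(np.argmax(hep)))                # peak lands near 0.1 * fs = 25
```

With about 60 heartbeats, the unit-variance noise averages down to roughly 0.13, so the heartbeat-locked deflection stands out clearly in the averaged trace.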

(I found drawing the trace helpful for proceeding.)

EEG components that do not align with the expected “HEP” from the ECG are candidates for a genuine neural response to the heart, reflecting interoception.
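One way to operationalize that separation, sketched below with toy data: treat the heartbeat-locked ECG average as a template for volume conduction, fit its amplitude to the EEG HEP by least squares, and inspect the residual as the candidate neural component. The function name and the single-scalar-gain model are my simplifying assumptions; real pipelines typically fit per channel or use source-separation methods.

```python
import numpy as np

def residual_hep(hep_eeg, ecg_template):
    """Subtract a least-squares-scaled copy of the ECG template from the HEP.

    Assumes volume conduction acts as one scalar gain on the template,
    which is a simplification for illustration.
    """
    g = np.dot(ecg_template, hep_eeg) / np.dot(ecg_template, ecg_template)
    return hep_eeg - g * ecg_template, g

# Toy check: an EEG HEP built from a scaled cardiac template plus a later,
# non-overlapping "neural" bump
t = np.arange(60)
template = np.exp(-0.5 * ((t - 10) / 3.0) ** 2)      # cardiac artifact shape
neural = 0.3 * np.exp(-0.5 * ((t - 40) / 3.0) ** 2)  # hypothetical response
residual, gain = residual_hep(0.5 * template + neural, template)
```

Because the two bumps barely overlap in this toy example, the fitted gain recovers the 0.5 scaling and the residual isolates the later bump; with overlapping components the single-gain fit would be biased, which is exactly the ambiguity the thought experiment above tries to probe.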

What mattered in this process was not that I immediately reached the correct answer. By building the reasoning path myself (drawing the traces, imagining edge cases, and forming an imperfect model of the problem), I generated insights and diversified my thinking patterns.

A short, intentional period of independent reasoning can be inefficient in the moment, but it is essential for understanding, predicting, and modeling perceived reality. If we never pause to reason without automation, we risk outsourcing framework formation. Consequently, we become skilled at using tools while becoming less certain of why things are the case.

My agenda: neurotech for human cognition

AI detox connects to my broader interest in neurotechnology for human cognition and metacognition. Human biology evolved for survival under conditions very different from modern life. It did not evolve for information overload, industrial schedules, or systems optimized to capture attention. Modern life is comfortable in many ways, but it is also cognitively invasive. It hijacks the human sensory system unconsciously (or we do not bother to reflect on it) and leaves little room to reflect on what one is living for.

My view (see the posts about universal law) is that to live thoughtfully is, in part, to continue building frameworks rather than inheriting them. That is why I am more interested in AI for neurotechnology than in neurotechnology for AI. Much of modern computational neuroscience has helped build better artificial systems by extracting principles from human cognition. Attention inspired transformers; visual neuroscience inspired convolutional networks. This line of research draws heavy investment. But I am drawn to the inverse direction: using technology to help humans understand and regulate our minds with agency.

The long-term promise of neurotechnology, as I see it, is not simply cognitive enhancement in the narrow sense of better memory, faster processing, or higher productivity. It is giving people tools to sense, interpret, and shape the internal processes through which they construct perceived reality. The knowledge base of an LLM is probably larger than what neural activity can encode at a given time. I would like to envision a future where neurotechnologies enable humans to hold such a knowledge base in working memory. That is the practice of imagining how an LLM sees the world: a network of tokens.

This ambition is not free from contradiction. Neurotechnology also depends on funding and incentives. Like any field, its goals can drift toward what is marketable rather than what the original agenda was. Even if that happens, the final product should still allow humans to ask what the original aim was, what values are being traded away, and whether the tool still serves human thought.

No doubt technology has made life easier, but it also defines a landscape for living; for example, companies now require the skill of using technology. We accept being born, being educated, forming relationships, and finding a job as the norm for living. Conceiving new frameworks for these norms has not been evolutionarily advantageous. Humans are being domesticated by technology in some sense. But notice that obedient domestic dogs survive while wolves are hunted toward endangerment. Technology will continue to evolve quickly. In ten years, systems may be far better at capturing attention, shaping desire, and converting sensation into profitable behavior. My hope is modest: that humans will still retain the capacity to step back, notice the structure of influence, and think for themselves.

Give it a try; maybe you will think differently.