The Journal of Advanced Esoteric Interdisciplinary Research (JAEIR)

Future Primitive II: Can We Build A Machine That Nobody Is Good At?

Abstract

This article interrogates the possibility of constructing systems or technologies that are categorically inaccessible to human intuitive skill. It proceeds from the recognition that most tools are developed within an anthropocentric feedback loop, wherein designs unconsciously accommodate innate human perceptual, cognitive, and motor traits. By analyzing this recursive design logic and exploring marginal counterexamples such as deliberately obscure programming languages, systems based on pure randomness, and interfaces divorced from metaphor, this piece considers whether human-designed systems can ever be truly alien to human affordances. The inquiry complements the prior installment on innate aptitude in synthetic domains, while remaining confined to present capacities rather than speculative future or non-human evolution.

I. Introduction

The previous entry in this series considered the paradox of humans demonstrating natural aptitude in domains, such as software engineering or aviation, that lack any plausible evolutionary precedent. Here, we extend that inquiry in a distinct direction: if humans can be "naturally good" at recent inventions, can the inverse be true? Is it possible to design a system so thoroughly misaligned with human capabilities that no one is naturally adept at it? Or do all technologies, by virtue of being conceived and iterated by humans, inevitably remain within reach of some subset of the population?

II. The Anthropocentric Loop

Virtually all contemporary tools and interfaces emerge within a recursive and self-selecting design environment. Designers are human, beta testers are human, and success is measured by human uptake. This produces an anthropocentric loop: if a tool or interface proves illegible to its intended users, it is reworked or discarded. Legibility, whether semantic (programming syntax), visual (icons and metaphors), or ergonomic (control layouts), is not a byproduct but a prerequisite for persistence.

This feedback cycle ensures that technologies that survive the design process tend to be intuitively graspable by at least a minority of users. Even seemingly arbitrary conventions become embedded: from QWERTY keyboards to the graphic metaphor of "desktop folders," design decisions evolve in tandem with human interpretive limits.

III. Deliberate Obfuscation and Anti-Usability

To subvert this legibility requires intentional resistance. Consider the phenomenon of esoteric programming languages ("esolangs"), such as Brainfuck or INTERCAL. These systems are built to be confusing, inelegant, and unusable by conventional standards. Their syntax is opaque, their structure non-intuitive, and their output unpredictable to the uninitiated.

Yet even these anti-tools attract small communities of enthusiasts. Mastery is difficult but not impossible. This suggests that total unintelligibility—even when actively pursued—is difficult to achieve. There remains a minimum substrate of human-parsable logic beneath even the most alienating design.
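The point about a "minimum substrate of human-parsable logic" can be made concrete: the entirety of Brainfuck reduces to a tape, a pointer, and eight single-character commands, and the sketch below implements such an interpreter in a few dozen lines of Python. (This is an illustrative implementation written for this article, not code from any particular Brainfuck distribution.)

```python
# A minimal Brainfuck interpreter. Beneath the deliberately opaque surface
# syntax lies a small, entirely human-parsable machine: a tape of byte
# cells, a data pointer, and eight commands.
def brainfuck(program: str, tape_size: int = 30000) -> str:
    tape = [0] * tape_size
    ptr = 0  # data pointer
    out = []

    # Pre-match brackets so loop jumps are O(1).
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    pc = 0  # program counter
    while pc < len(program):
        c = program[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]  # skip the loop body
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]  # repeat the loop body
        pc += 1
    return "".join(out)

# "++++++++[>++++++++<-]>+." computes 8*8 + 1 = 65 and prints "A".
print(brainfuck("++++++++[>++++++++<-]>+."))
```

That a working interpreter fits in one screen of conventional code is itself evidence for the section's claim: the alienation is skin-deep, and enthusiasts who internalize the eight commands are parsing ordinary, if inconvenient, logic.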

IV. Randomness and the Threshold of Skill

Another route toward non-navigability lies in systems governed by chance rather than deterministic logic. Games of pure luck, such as roulette, lotteries, and certain slot machines, defy strategic mastery. Still, humans routinely perceive patterns where none exist, constructing "lucky streaks," rituals, or numerological schemes to manage the perceived risk.

Importantly, these domains often generate a false sense of skill. Even when outcomes are entirely probabilistic, humans impose narrative and causality. While no one may be innately good at guessing a random number, some may be unusually resistant to cognitive biases such as the gambler's fallacy, which is arguably a kind of meta-skill within randomness itself.
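The futility of pattern-seeking against a memoryless process can be demonstrated directly. The short simulation below, a sketch written for this article, pits a hypothetical "streak" bettor (who always bets that the last outcome will repeat) against a bettor who guesses at random; on a fair coin, neither gains an edge.

```python
# Against a genuinely random, memoryless process, no pattern-based
# strategy outperforms blind guessing. The "streak" bettor here stands in
# for the lucky-streak reasoning described in the text.
import random

def simulate(trials: int = 100_000, seed: int = 0) -> tuple[float, float]:
    rng = random.Random(seed)  # seeded for reproducibility
    streak_hits = naive_hits = 0
    last = rng.randint(0, 1)
    for _ in range(trials):
        outcome = rng.randint(0, 1)                   # a fair coin flip
        streak_hits += (last == outcome)              # bet the streak continues
        naive_hits += (rng.randint(0, 1) == outcome)  # bet at random
        last = outcome
    return streak_hits / trials, naive_hits / trials

streak_rate, naive_rate = simulate()
print(f"streak bettor: {streak_rate:.3f}, random bettor: {naive_rate:.3f}")
```

Both hit rates hover around 0.5. The "meta-skill" mentioned above is precisely the ability to accept this result rather than reinterpret a run of wins as evidence of mastery.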

V. The Limits of Alienation

If total opacity is difficult to sustain even when engineered, then perhaps the threshold lies not in design but in perceptual non-correspondence. Interfaces that fail to map onto visual, tactile, or conceptual metaphors may resist intuitive use. A system operating entirely on infrasound, imperceptible light frequencies, or hyperdimensional topology might challenge not just cognition but sensation itself. Still, as long as the mechanism remains physically accessible, some form of pattern recognition may emerge.

In this view, human cognition acts less like a brittle codebase and more like a probabilistic engine: given sufficient exposure and feedback, it generates strategies—even in domains without precedent.

Conclusion

It remains an open question whether total un-usability is a viable design outcome. So far, even our most obscure tools remain legible to a fringe. Whether this is a product of neural plasticity, cultural overfitting, or statistical variation among cognitive profiles, it points to a striking conclusion: humans are hard to lock out.

Future entries in this series will consider whether this adaptability persists across time or across species—exploring what happens when the human subject is either displaced by non-human cognition or transplanted into a technological ecosystem not of its own making. For now, the second paradox stands: while we can be naturally good at synthetic skills, it may be nearly impossible to construct synthetic tools that no one can be naturally good at.

* * *