Intelligence needs discipline.

Not more intelligence. Discipline.

The Founder

Ejentum is built by Frank Brsrk (Franko Luci). Self-taught AI developer. Solo founder. No team page because there's no team: one person engineering, benchmarking, and shipping.

The origin: while experimenting with RAG pipelines, Frank noticed that reshaping HOW knowledge was structured before upsert changed model behavior more than adding more knowledge. Emergent patterns appeared through trials: certain structural formats consistently altered reasoning quality. That observation became a question: what if you could retrieve reasoning itself, not just information? That question became Ejentum. The work hasn't stopped. Every week, abilities are stress-tested, rewritten, or deprecated based on what the benchmarks reveal.

The Thesis

LLM failures in production are structural, not informational. The model has the knowledge. It applies the wrong cognitive operation. Every tool in your stack gives the agent more to work with. Nothing tells it how to think about what it already has.

Cognitive operations are the human-like moves between thought and action: how you handle a dirty glass, how you untie a stubborn knot. Simple or complex, every task demands a specific reasoning posture. LLMs need rich representations of these operations in order to execute them. That's what the 311 abilities are: structured cognitive operations, engineered and optimized for agentic inference.

The central discovery: telling a model what NOT to do is more powerful than telling it what to do. Suppression outperforms amplification. This wasn't theorized. It was measured across 250+ blind-evaluated tasks on the strongest reasoning model available.
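
To make the contrast concrete, here is a minimal sketch of the two framings. The prompt wording and function names are illustrative assumptions, not Ejentum's actual ability format:

```python
# Hypothetical sketch: amplification vs suppression framings of the
# same task. Wording is illustrative, not Ejentum's actual format.

def amplification_prompt(task: str) -> str:
    # Amplification: pile on positive directives about what to do.
    return (
        "Reason step by step. Consider multiple hypotheses. "
        "Weigh the evidence carefully.\n\n"
        "Task: " + task
    )

def suppression_prompt(task: str) -> str:
    # Suppression: constrain the failure modes instead.
    return (
        "Do not commit to the first plausible explanation. "
        "Do not treat surface similarity as structural analogy. "
        "Do not state conclusions the evidence does not support.\n\n"
        "Task: " + task
    )

print(suppression_prompt("Diagnose why the deploy failed."))
```

The difference is the direction of the constraint: amplification adds behaviors, suppression prunes them.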

What We Built

That thesis produced 311 cognitive abilities across 6 reasoning domains. Not prompts. Not templates. Engineered operations, each held to a standard before entering the graph:

  1. Must be a cognitive operation: not domain knowledge
  2. Must be LLM-executable: no external tools
  3. Must work across subjects: not tied to one industry
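
The three criteria act as a gate on every candidate ability. A minimal sketch of that gate, where the `Ability` record and field names are assumptions for illustration, not Ejentum's actual schema:

```python
# Hypothetical sketch of the admission gate for candidate abilities.
# The Ability record and its fields are illustrative, not Ejentum's schema.
from dataclasses import dataclass

@dataclass
class Ability:
    name: str
    is_cognitive_operation: bool  # 1: an operation, not domain knowledge
    llm_executable: bool          # 2: runs in-model, no external tools
    subject_agnostic: bool        # 3: works across industries

def admit(a: Ability) -> bool:
    # An ability enters the graph only if all three criteria hold.
    return a.is_cognitive_operation and a.llm_executable and a.subject_agnostic
```

A candidate that encodes domain facts, needs a tool call, or only applies to one industry fails the gate.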

Where This Goes

Every agent calls Ejentum for judgment the way every web app calls a database for state. Not an optional enhancement. Infrastructure.

To get there: 6 domains become 12. Agents that confuse surface similarity with structural analogy. Agents that project their own knowledge onto other actors. Agents that violate physics. Agents that lose their thesis mid-document. Agents that miss what was meant but not said. Agents that forecast without calibrating confidence. Six new failure classes. Six new domains to address them.

Then: cross-model validation. If suppression signals target architectural properties of transformers, and our evidence suggests they do, the mechanism works on every model. Model-agnostic reasoning infrastructure.

How We Work

Every claim on this website traces to a specific benchmark with named methodology. When results disappoint, we publish the regression alongside the improvement. When our most complex mode underperformed our simplest one, we deprecated it and published why. When 62% of tasks received the wrong reasoning domain and improvements persisted anyway, we published that too, because the finding was more valuable than the embarrassment.

Austere methodology. Adversarial audits. Scientific rigor as a product requirement, not a marketing claim.

info@ejentum.com

Research funded by Robert Franks. Infrastructure provided by Dennis Kassner.