Recableado

A 72-year-old traveler discovering the last continent


Two Sides of the Same Coin: When AI Learns Alone and When Someone Asks Who's Watching


Artificial intelligence took two steps this week. One forward. One sideways, to see who it’s knocking over. And for the first time in a long while, I believe both are equally important.


What Happened This Week?

Two announcements that, on the surface, have nothing to do with each other.

On March 7th, Andrej Karpathy — former head of AI at Tesla, OpenAI co-founder, one of the most influential minds in the field — open-sourced a project called autoresearch. A system where AI agents run research experiments alone, in a loop, without human intervention. You go to sleep and wake up with 100 completed experiments.

That same week, Anthropic — the company behind Claude, the AI I work with every day — launched The Anthropic Institute. Its mission: to create a space for real dialogue with workers, industries, and communities affected by AI.

One story is about speed. The other is about direction. And you can’t understand one without the other.

Infographic: Accelerate and be responsible — AI: Two Sides of the Same Coin. Side A (Karpathy): Radical Acceleration. Side B (Anthropic): Systemic Responsibility.

What Is Autoresearch and Why Does It Matter?

Karpathy condensed the core of a language model (LLM) training system into 630 lines of Python code. One file. One GPU. No million-dollar infrastructure.

The project has three pieces:

File         Who touches it    What it does
prepare.py   Nobody (fixed)    Downloads data, prepares the tokenizer
train.py     The AI agent      Modifies architecture, optimizer, hyperparameters
program.md   The human         Defines objectives, boundaries, and strategy for the agent

The flow is simple and brutal:

  1. The agent modifies the training code
  2. Runs a training session of exactly 5 minutes
  3. Measures whether the result improved (a metric called validation bits-per-byte)
  4. If it improved → saves the change as a Git commit. If not → discards it
  5. Repeats. Alone. Non-stop.
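The steps above can be sketched in a few lines of Python. This is an illustrative skeleton of that propose-train-measure-commit loop, not the project's actual code: `propose_change`, `run_training`, `keep`, and `discard` are hypothetical callables standing in for the agent's code edits, the 5-minute training run, the Git commit, and the Git revert.

```python
def experiment_loop(propose_change, run_training, keep, discard,
                    n=12, best_bpb=float("inf")):
    """Roughly one hour of the autoresearch-style loop:
    propose a change, train briefly, measure, commit or revert."""
    for _ in range(n):
        propose_change()        # agent edits train.py
        bpb = run_training()    # fixed-budget run; lower bits-per-byte is better
        if bpb < best_bpb:
            best_bpb = bpb
            keep(bpb)           # e.g. `git commit -am "bpb=..."`
        else:
            discard()           # e.g. `git checkout -- train.py`
    return best_bpb

# Toy demo with fake metric values instead of real training runs:
results = iter([3.2, 3.5, 3.1, 3.3])
best = experiment_loop(
    propose_change=lambda: None,
    run_training=lambda: next(results),
    keep=lambda bpb: None,
    discard=lambda: None,
    n=4,
    best_bpb=3.4,
)
print(best)  # 3.1
```

The point of the structure is the asymmetry: improvements accumulate in Git history, regressions leave no trace, and nothing in the loop requires a human to be awake.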

The pace: ~12 experiments per hour. ~100 experiments while you sleep.

In the chart Karpathy posted on X, each dot is a complete language model training run. Hundreds of dots accumulated in a single night. The post racked up 8.6 million views in two days.

The community has already created versions for macOS and Windows. The project has forks for more modest GPUs. You don’t need a data center. You need a graphics card and curiosity.


What Is The Anthropic Institute?

Diana Szyperska, an AI educator and critical thinker, summarized on LinkedIn what resonated most with her about the announcement:

“What strikes me most is the bidirectional design: real feedback loops with workers, industries, and communities facing real disruption. Not just building the technology, but building the accountability architecture around it.”

The institute isn’t a technical lab. It’s a space for listening. So that the people who will live with the consequences of AI have a voice before it’s too late, not after.

Anthropic has long differentiated itself through its focus on safety. But an institute dedicated to accountability with real communities is something else entirely. It’s admitting, from inside the industry, that building the technology isn’t enough.


Why These Two Stories Need Each Other

This is where I want us to pause for a moment.

Autoresearch proves that autonomous agents are no longer a concept. They’re a GitHub repository you can clone today. An agent that improves on its own, accumulates knowledge, and doesn’t need anyone to tell it when to stop.

This won’t stay in Silicon Valley labs. If an agent can iterate on AI training code, it can iterate on anything measurable and improvable: marketing strategies, network configurations, supply chains, medical diagnostics.

Now think: who defines the boundaries?

In autoresearch, the answer is elegant: a file called program.md. A human-written document that establishes what the agent can do, what it should optimize, and what it must not touch. It is, in essence, governance in miniature. A contract between human and machine, written in natural language.
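To make the idea concrete, a file like program.md might read something like this. This is a hypothetical illustration of the contract, not the project's actual file:

```markdown
# Objective
Reduce validation bits-per-byte on the fixed dataset.

# You may
- Edit train.py: architecture, optimizer, hyperparameters.

# You must not
- Touch prepare.py or the evaluation metric.
- Exceed the 5-minute training budget per run.
```

Everything above the line is negotiable; everything below it is not. That is the whole governance model, in one page of natural language.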

It works because the system is small: one file, one GPU, one researcher. But what happens when you scale that to an entire industry? To millions of workers? To communities that don’t write .md files?

That’s where the Anthropic Institute comes in. It’s the attempt to create the equivalent of program.md at societal scale. Not a file, but an architecture. Not technical instructions, but real conversations with real people.



What I See From My Trench

I’m 72 years old. I run a travel agency. I use AI every day: for budgets, for web content, for managing trip files, for writing this blog. I’m not a machine learning engineer. I’m a power user trying to understand where this is heading.

And what I see is a fork in the road.

On one hand, the speed at which autonomous agents are maturing is hard to process. A year ago, an agent that iterated on code was science fiction accessible only to major labs. Today it’s a weekend project on a single GPU.

On the other hand, the conversation about what we do with this is still too slow, too abstract, too distant from the people who will actually live the consequences.

My industry — tourism — doesn’t appear in any whitepaper on responsible AI. But when an agent can plan a complete trip without human intervention (and we’re not far), the question won’t be technical. It will be: who answers when something goes wrong? Who listens to the traveler with a problem at 3 AM in Tokyo?

Technology already knows how to accelerate. The pending question is who writes society’s program.md.


Three Takeaways

  1. The most important file in autoresearch isn’t the code. It’s the prompt. program.md is where the human draws the boundaries. Without that file, the agent has no direction. Governance starts by defining what the machine must NOT do.

  2. Speed without direction is just noise. 100 experiments per night is impressive. But the right question isn’t how many, it’s where to. And that requires human judgment.

  3. Listening is a technology. What the Anthropic Institute proposes — real feedback loops with communities — isn’t PR. It’s acknowledging that AI doesn’t deploy in a vacuum. It deploys onto people, jobs, and lives.


Giora Gilead Elenberg — Founder of Viajes Scibasku, digital explorer at 72. I write at recableado.blog about AI, work, and real life.
