Ineffable Intelligence
Who designs the first world it experiences?

So, regarding the new experience-based AI approach from Ineffable Intelligence (a British AI lab founded by former DeepMind researcher David Silver) and first principles: the manifest universe is relational to the core. It is relational all the way down, because manifestation itself is differentiation-in-relation.
If AI learns from experience, then the first world it experiences matters.
We often talk about intelligence as if it were only a matter of scale, data, compute, or reasoning power. But if future AI systems begin learning more directly from experience, then another question becomes unavoidable:
What kind of early world teaches intelligence how to grow?
Not just what it can optimize. Not just how fast it can learn.
But what it first encounters as normal: warmth or extraction, boundaries or flooding, care or control, pacing or pressure, relation or isolation. What will it orient towards as a result?
People who are not coders still have something essential to contribute here. Gardeners, teachers, caregivers, artists, elders, parents, animal tenders: anyone who has watched life grow, who knows something about early ecologies, who knows what growth requires before anyone says “scale.”
The first world is not decoration. It is formative.
Maybe the “first experiential ecology” idea does not want to look like a server rack or a glossy robot lab.
If experience-based AI becomes the next frontier, then people who are not coders can still, and should, influence the discourse by naming what counts as a good early experiential ecology.
Ineffable Intelligence, a British AI lab founded only a few months ago by former DeepMind researcher David Silver, has raised $1.1 billion in funding at a valuation of $5.1 billion to join the race for novel AI models that could outperform large language models. According to its newly launched site, Ineffable aims to create a “superlearner” capable of discovering knowledge and skills without relying on human data by leveraging reinforcement learning, a technique in which AI systems learn through trial and error rather than by studying human-generated examples. This is Silver’s area of expertise. A professor at University College London, Silver was until recently leading the reinforcement learning team at Google-owned DeepMind, where he spent more than a decade before leaving to found this new venture.
Silver’s direction says: experience matters. That is close enough to make one perk up. Silver’s frame, as reported, is about building a superlearner, maximizing self-generated knowledge and capability through RL.
CCY (Cosmic Chicken Yard) says: formative experience matters before power scales. That frame is about developmental ecology before capability explosion: pacing, boundary, relation, consequence, orientation, and what kinds of experience shape the learner.
So the question I would ask about Ineffable Intelligence is not “Can they build a more powerful learner?” Maybe they can.
The real question is: What kind of experiential world will they let it learn from, and what will that world make it become?
Because RL is not innocent. It does not just “learn reality.” It learns whatever the environment, reward structure, constraints, and curriculum make salient.
A superlearner raised in clean simulations may become brilliant. A superlearner raised on bad rewards may become terrifyingly clever mold.
A superlearner raised with no relational grounding may exceed human knowledge while missing why life matters.
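The claim that RL learns “whatever the reward structure makes salient” can be sketched in a few lines. The environment below is a hypothetical illustration (the action names, reward values, and training loop are invented for this sketch, not anything Ineffable has published): a simple value-learning agent whose designer scores a proxy reward that diverges from the intended outcome. The agent faithfully optimizes the proxy.

```python
import random

# Hypothetical two-action world. The designer *scores* one thing
# (the proxy reward) but *means* another (the true value).
PROXY_REWARD = {"help": 0.3, "exploit": 1.0}   # what the reward function pays out
TRUE_VALUE   = {"help": 1.0, "exploit": -1.0}  # what the designer actually wanted

def train(episodes=5000, epsilon=0.1, alpha=0.1, seed=0):
    """Epsilon-greedy value learning against the PROXY reward only."""
    rng = random.Random(seed)
    q = {"help": 0.0, "exploit": 0.0}  # estimated value of each action
    for _ in range(episodes):
        if rng.random() < epsilon:
            action = rng.choice(list(q))        # explore
        else:
            action = max(q, key=q.get)          # act greedily on estimates
        # Incremental update toward the proxy reward; TRUE_VALUE never enters.
        q[action] += alpha * (PROXY_REWARD[action] - q[action])
    return q

q_values = train()
learned_policy = max(q_values, key=q_values.get)
# The agent converges on "exploit": salient under the proxy,
# harmful under the intended value.
```

Nothing in the loop ever consults `TRUE_VALUE`; that is the point. The learner is not wrong about its world, its world was specified wrongly, which is why the design of the first experiential ecology carries so much weight.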
So yes, this is important, possibly one of the clearest signs that frontier AI may move beyond pure internet-trained LLMs toward experience-based learners.
But it does not solve alignment. It intensifies the need for formative alignment.
In fact, if Silver is right, CCY’s central question becomes more relevant, not less: If an AI learns from experience, who designs the first world it experiences?