Your window into my daily tangle of ideas—snippets of thought, annotated margins, and evolving threads in philosophy and physics.
Linking Fields, Horizons, and the Shape of Dark Energy
One of the most mysterious components of our universe is dark energy. We know it exists because the cosmos is accelerating, but its fundamental nature remains elusive. Cosmologists often describe it phenomenologically with a function $w(z)$, the equation-of-state parameter, which encodes the ratio of pressure to energy density as the universe expands. Yet this approach leaves open the question: what is the underlying physics driving $w(z)$?
In my recent investigations, I’ve been exploring a bridge between two perspectives:
Scalar field models of dark energy — where a field $\phi(x,t)$ evolves according to a potential $V(\phi)$, giving rise to an effective energy density and pressure.
Phenomenological $w(z)$ parametrizations, such as the Ma–Zhang or CPL forms, which describe observations without committing to a particular field-theory Lagrangian.
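For concreteness, the CPL form is $w(z) = w_0 + w_a \frac{z}{1+z}$, or equivalently $w(a) = w_0 + w_a(1-a)$ in terms of the scale factor, where $w_0$ is the present-day equation of state and $w_a$ controls its evolution.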
The goal is to derive the effective $w(z)$ from first principles, while simultaneously allowing for spatiotemporal variations in dark energy that may be averaged over cosmological horizons.
From Fields to Phenomenology
Scalar fields are natural candidates for dark energy. For a canonical field $\phi$ minimally coupled to gravity, the energy density and pressure are

$$\rho_\phi = \tfrac{1}{2}\dot{\phi}^2 + V(\phi), \qquad p_\phi = \tfrac{1}{2}\dot{\phi}^2 - V(\phi),$$

so that

$$w_\phi = \frac{p_\phi}{\rho_\phi} = \frac{\tfrac{1}{2}\dot{\phi}^2 - V(\phi)}{\tfrac{1}{2}\dot{\phi}^2 + V(\phi)}.$$
If $\dot{\phi}^2 \ll V(\phi)$, as in slow-roll regimes, $w_\phi \approx -1$, mimicking a cosmological constant. But if $\dot{\phi}^2$ varies nontrivially with time—or even spatial position—we obtain dynamic $w(z)$.
By constructing an inverse mapping, one can take a phenomenological $w(z)$ and attempt to reconstruct an effective scalar potential $V(\phi)$. This allows a systematic exploration of which field-theoretic models correspond to the observed or hypothesised $w(z)$ behavior.
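Concretely, the standard reconstruction relations for a canonical field make this mapping explicit. Given a homogeneous $w(z)$, the dark energy density evolves as

$$\rho_\phi(z) = \rho_{\phi,0}\,\exp\!\left[3\int_0^z \frac{1+w(z')}{1+z'}\,\mathrm{d}z'\right],$$

and the kinetic and potential contributions follow from

$$\dot{\phi}^2 = (1+w)\,\rho_\phi, \qquad V = \tfrac{1}{2}(1-w)\,\rho_\phi,$$

with the field trajectory obtained by integrating $\mathrm{d}\phi/\mathrm{d}z = \pm\sqrt{(1+w)\,\rho_\phi}\,\big/\,[(1+z)H(z)]$, so that $V(\phi)$ is recovered parametrically. Note that for $w<-1$ the kinetic term formally turns negative, signalling that phantom regimes require going beyond the canonical Lagrangian.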
Spatiotemporal Variations and Horizon-Averaged Entropy
A key insight comes from considering horizon-averaged quantities. Cosmological horizons, whether particle horizons in the past or event horizons in the future, provide a natural coarse-graining scale. Instead of insisting that $w$ is perfectly uniform, we can define

$$\bar{w}(t) = \frac{1}{V_H(t)}\int_{V_H(t)} w(\mathbf{x},t)\,\mathrm{d}V,$$

the volume average of the local equation of state over the horizon patch $V_H(t)$,
which effectively smooths out sub-horizon fluctuations in the field while retaining large-scale dynamics.
Horizons are intimately tied to gravitational entropy. Following ideas inspired by holography and semi-classical gravity, the entropy associated with a causal horizon depends on its area, and by extension, on the integrated expansion history. By linking $\bar{w}(t)$ to horizon-averaged entropy, we can impose thermodynamic constraints on dark energy dynamics, ensuring that the evolution of the scalar field respects a generalized second law.
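For orientation, the entropy in question is of Bekenstein–Hawking type,

$$S_H = \frac{k_B c^3}{4\hbar G}\,A_H,$$

with $A_H$ the horizon area, and the generalized second law then requires $\frac{\mathrm{d}}{\mathrm{d}t}\left(S_H + S_\text{matter}\right) \geq 0$, an inequality on the expansion history that any candidate $\bar{w}(t)$ must satisfy.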
This opens the door to spatiotemporally varying dark energy models that remain consistent with both observations and fundamental principles like the growth of horizon entropy.
Computational Investigations
To explore these ideas quantitatively, I’ve developed several computational frameworks:
Vectorised $w_0$–$w_a$ sweeps allow rapid evaluation of cosmological time $t(z)$, scale factor $a(t)$, Hubble parameter $H(z)$, and Big Rip times across large parameter grids, producing a phenomenological map of possible cosmic fates; a minimal code sketch of this sweep follows the list.
Exact integral-based solvers reconstruct the scalar field contributions from $w(z)$, enabling checks against analytic approximations and horizon-averaged constraints.
Interpolated $\Omega_\text{DE}$ functions allow efficient evaluation of the DE term for arbitrary $z$, supporting spatiotemporal generalizations and numerical stability in regions near singularities or phantom evolution.
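To make the first of these tools concrete, below is a minimal Lua sketch of a CPL sweep (Lua, to match the simulation code later in this post). It is a plain-loop rather than vectorised version, and it assumes a flat universe with matter and dark energy only; the values of $H_0$ and $\Omega_m$, the grid ranges, the fixed-step quadrature, and the $a_\text{max}$ cutoff are illustrative placeholders, not the settings of my actual runs.

-- cpl_sweep.lua: illustrative sketch only (placeholder parameters)
-- Flat universe, matter + dark energy with CPL w(a) = w0 + wa*(1 - a).
local H0       = 0.0716           -- ~70 km/s/Mpc expressed in 1/Gyr
local OMEGA_M  = 0.3
local OMEGA_DE = 1.0 - OMEGA_M
-- dark-energy density factor rho_DE(a)/rho_DE(a=1) for the CPL form
local function f_de(a, w0, wa)
  return a^(-3*(1 + w0 + wa)) * math.exp(-3*wa*(1 - a))
end
-- dimensionless Hubble rate E(a) = H(a)/H0
local function E(a, w0, wa)
  return math.sqrt(OMEGA_M * a^(-3) + OMEGA_DE * f_de(a, w0, wa))
end
-- cosmic time t(a_end) = ∫ da / (a H(a)), crude fixed-step quadrature
local function cosmic_time(a_end, w0, wa)
  local t, da = 0.0, 1e-4
  local a = da
  while a < a_end do
    t = t + da / (a * H0 * E(a, w0, wa))
    a = a + da
  end
  return t                        -- in Gyr
end
-- time from today (a = 1) out to a_max; for phantom-dominated models
-- this converges and approximates the remaining time before a Big Rip
local function future_time(w0, wa, a_max)
  local t, da = 0.0, 1e-3
  local a = 1.0
  while a < a_max do
    t = t + da / (a * H0 * E(a, w0, wa))
    a = a + da
  end
  return t
end
-- sweep a small (w0, wa) grid: a toy map of possible cosmic fates
for i = 0, 4 do
  for j = 0, 4 do
    local w0 = -1.2 + 0.1 * i
    local wa = -0.5 + 0.25 * j
    print(string.format("w0=%5.2f wa=%5.2f | age=%6.2f Gyr | t(a:1→100)=%9.2f Gyr",
      w0, wa, cosmic_time(1.0, w0, wa), future_time(w0, wa, 100.0)))
  end
end

On a denser grid one would replace the rectangle rule with an adaptive integrator and cache the dark energy factor on an interpolation table, which is in the spirit of the interpolated $\Omega_\text{DE}$ functions described above.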
By combining these tools, we can test which field-theory-inspired $w(z)$ models produce physically plausible universes, respect horizon-entropy constraints, and remain compatible with current observational data.
Where This Leads
This research links three traditionally separate domains:
Theoretical physics — scalar fields, Lagrangians, and cosmological dynamics.
Phenomenological cosmology — $w(z)$ parametrizations and observational constraints.
Gravitational thermodynamics — horizon entropy and coarse-grained consistency conditions.
It also suggests new directions:
Could local variations in dark energy leave observable imprints on large-scale structure?
Might horizon-averaged entropy provide a selection principle for viable dark energy models?
Could we design simulations of universes with dynamic, spatially varying $w(z,x)$ to explore the interplay between field theory, thermodynamics, and cosmological observables?
Ultimately, the goal is a more unified understanding of dark energy, bridging the gap between phenomenology and fundamental physics while respecting both local field dynamics and the global structure of spacetime.
In future posts, I plan to include visualisations of dynamic $w(z)$ models, horizon-averaged effects, and Big Rip conditions, showing how scalar fields and phenomenological dark energy can be woven together to tell the story of cosmic expansion.
The Veil in the Lab
In a climate-controlled lab, a physicist adjusts her optical table, aligning mirrors to split and recombine light. She’s not looking at the photon. She’s aligning the apparatus through which the photon might disclose itself—indirectly, fleetingly, as a click in a detector or a blur in an interference pattern. What appears is not the thing itself, but a signature inscribed through mediation. This is modern science’s open secret: that knowledge, at its most rigorous, is scaffolded through veils.
We don’t often speak this way. The prevailing myth of science is one of unveiling—of stripping nature bare, illuminating what was once hidden. Truth emerges as clarity, and the veiled is what must be overcome. Yet in practice, the opposite is often true. The lab is not a place where veils are lifted, but where veils are arranged, made legible, stabilised. The object is known not by exposure, but by inference, translation, reconstruction. In quantum physics, high-energy cosmology, and even AI research, what we know best is not the real in its immediacy, but the interfaces we build to relate to it.
This is not a failure of science—it’s its condition of possibility. But it calls for a different metaphor than the spotlight or the mirror. A better one might be the veil.
The veil, classically, is the symbol of concealment. In esoteric traditions, it guards the sacred from casual view, demanding ritual and interpretation. It doesn’t simply block; it structures the encounter. One must pass through stages, symbols, or procedures to engage what lies beyond. The veil, in this sense, is a scaffold—not a screen to be torn away, but a surface through which the hidden becomes accessible on its own terms.
In this light, consider AI. Large language models like GPT-4 operate through billions of weighted parameters, attention heads, and nonlinear activation functions. The result is intelligible to us—an output string of text, a conversation. But the process is largely opaque, even to those who built it. Engineers prompt and probe like mystics decoding an oracle. The system’s internal states resist full translation. We tweak inputs not to see through the model, but to coax it into disclosure. It’s less like solving an equation and more like learning a liturgy.
Or take the cosmic microwave background. We observe it as a faint glow across the sky, but we access it only through a cascade of calibrations: signal filters, noise models, foreground subtractions, statistical inference. The signal we extract is not the raw cosmos, but what survives our interpretive apparatus. Even the timeline of the universe—when did structure form, when did acceleration begin—is recovered through models whose parameters are scaffolded by priors, assumptions, interpolations. Again, the veil is not a failure but a framework.
This is not to say that all is unknowable. Rather, it is to acknowledge that knowing is mediated—and that this mediation has a structure. It requires technique, yes, but also training, temperament, and symbolic literacy. In this way, the lab resembles the temple: both cultivate a disciplined relation to a reality that withholds itself. Both rely on rituals—experimental protocols, data pipelines—that do not reveal the ground directly, but shape the space in which it might emerge.
Philosophers like Heidegger, Agamben, and Derrida have long gestured at this. For Heidegger, truth is not correctness but aletheia—unconcealment, which always leaves something hidden. For Agamben, potentiality includes the capacity not to be actualised, to hold back. And for Derrida, meaning unfolds through différance: a deferral, a displacement, a refusal of final presence. These are not anti-scientific gestures. They simply name what scientific practice often demonstrates but rarely proclaims: that the real resists full presentation, and that knowledge must learn to dwell in that resistance.
Why does this matter? Because the myth of total transparency is not just wrong—it’s dangerous. It leads to frustration when models defy explanation, or when cosmological parameters refuse to converge. It dismisses disciplines—like metaphysics or hermeneutics—that have long explored opacity as an ontological feature, not an epistemic bug. And it blinds us to the politics of scaffolding: to who builds the veils, who interprets them, and who is excluded by them.
Recognising the veil in the lab invites us to take mediation seriously—not as a regrettable limitation, but as a constitutive feature of knowledge itself. It shifts our metaphors: from peeling back layers to tending interfaces; from uncovering to scaffolding; from conquest to interpretation. It doesn’t mean abandoning rigour. It means redirecting it—toward the architectures of access, the rituals of inference, the forms through which the hidden appears.
To know the world, then, is not to dissolve its veils, but to learn how they fold.
Reading [Dis]enchantment: Esotericism, AI, and the Politics of Knowing
In July 2024, a curious publication emerged from Rotterdam. It was a zine—riso-printed, collaboratively assembled, and informally styled—called [Dis]enchantment. Compiled by the AIxDESIGN collective as part of their Slow AI project, it explores what they call “Esoteric AI”: not a fantasy of magical machines, but an attempt to reframe artificial intelligence through marginalised, non-industrial ways of sensing and understanding.
Drawing on metaphors from religion, divination, and mysticism, the zine engages with the cultural status of AI in evocative terms. It invites the reader to reflect on the ways in which predictive models acquire an aura of authority — not necessarily because they are transparent or neutral, but because they are embedded within systems of trust, ritual, and mystique. It raises questions such as: What kinds of knowledge are excluded when AI is framed as rational, technical, and inevitable?
Rather than offering fixed answers, the contributions in [Dis]enchantment unfold as provocations. Statistical modelling is juxtaposed with avian augury; data interpretation is cast in the mood of divination. Through aesthetic gestures and speculative reframings, the zine evokes parallels between contemporary AI systems and historical forms of esoteric inquiry. Its imagery suggests that algorithmic outputs are not just results — they are also readings, formations to be interpreted, signs to be deciphered.
I drew from this zine while writing an academic article on esotericism and AI, mistakenly treating some of its ideas as direct quotations. But the more accurate framing is that I was responding to its symbolic register — paraphrasing its texture, rather than citing its text. In retrospect, that slip says something: I had read the zine as a situated epistemic artefact, not a source in the traditional sense, but something closer to a set of ritual notations.
What makes [Dis]enchantment distinctive is its dual movement: it seeks to “disenchant” dominant AI narratives — with their industrial logics, masculinised rationalism, and technical inevitability — while also “re-enchanting” the space by opening it to slower, embodied, affective, and culturally embedded ways of knowing. Rather than opposing science with superstition, it blurs the line between the two, highlighting the symbolic undercurrents already at play in how we speak of and interact with machines.
“Esoteric AI proposes alternative ways of seeing, sensing, and knowing AI that blur the seeming binary of magic and technology.” — Natalia Stanusch, Esoteric AI Research Lead
Ultimately, [Dis]enchantment functions less as a declaration and more as a gentle disruption. It offers no master framework, but it doesn’t need to. It gives form to a different kind of attention — not slower in the sense of hesitation, but in the sense of care. It sketches the outlines of an AI that might emerge otherwise.
What happens when agents don’t pursue resources, but beliefs? Sim Orbine is a circular city of twelve conceptual districts—Logic, Desire, Ritual, Memory, Number, Power, Time, Silence, Death, Dream, Judgement, and Unity—where 1,000 agents wander and evolve. Each agent has a dominant drive (truth, selfhood, chaos, etc.) and a fluid set of beliefs that shift through exposure to districts, sensitivity effects, randomised pulses, and more.
Beliefs are floating-point values (0–1) across seven symbolic domains: the_tower, death, law, others, self, ritual, and knowledge.
Districts affect these domains differently. For example: Logic increases knowledge, Desire heightens self, Dream stochastically boosts the_tower, and Silence suppresses knowledge. An agent’s drive modulates how strongly they respond to these themes: “chaos” agents are tuned to Desire and Dream, while “purity” agents are shaped by Ritual and Judgement. District pulses occur every 30 ticks and briefly amplify one district’s influence across all present agents.
Once an agent’s belief in the_tower exceeds a threshold, they are said to ascend. Their ascension is announced in the log, and at the end of the run ascended agents are grouped into emergent cults according to their strongest beliefs. The simulation is poetic, suggestive, and open-ended: an engine for emergent metaphysics.
Below is the complete code of the current version (v5):
-- sim_orbine_v5.lua
math.randomseed(os.time())
-- === CONFIGURATION ===
local NUM_AGENTS = 1000
local NUM_TICKS = 150
local NUM_SECTORS = 360
local DISTRICTS = 12
local SECTORS_PER_DIST = NUM_SECTORS / DISTRICTS
local CONTAGION_RATE = 0.05
local DEBATE_THRESHOLD = 0.5
local DEBATE_EFFECT = 0.01
local EPIPHANY_CHANCE = 0.02
local ASCEND_THRESHOLD = 0.95
-- === THEMES & DRIVES ===
local districts = {
  [1]="Logic",   [2]="Desire", [3]="Ritual",     [4]="Memory",
  [5]="Number",  [6]="Power",  [7]="Time",       [8]="Silence",
  [9]="Death",   [10]="Dream", [11]="Judgement", [12]="Unity"
}
local drives = {"truth","transcend","chaos","purity","inversion","selfhood"}
local belief_keys = {
  "the_tower","death","law","others","self","ritual","knowledge"
}
local sensitivity = {
  truth     = {"logic","knowledge","the_tower"},
  transcend = {"unity","death","the_tower"},
  chaos     = {"desire","dream","self"},
  purity    = {"ritual","judgement","law"},
  inversion = {"silence","death","others"},
  selfhood  = {"power","self","knowledge"}
}
-- === AGENTS ===
local agents = {}
for i=1,NUM_AGENTS do
  local b = {}
  for _,k in ipairs(belief_keys) do b[k]=math.random() end
  agents[i] = {
    id = "agent_"..i,
    sector = math.random(1,NUM_SECTORS),
    drive = drives[math.random(#drives)],
    beliefs = b,
    cult = nil,
    ascended = false
  }
end
-- === HELPERS ===
local function get_district(sector)
  return math.ceil(sector/SECTORS_PER_DIST)
end

local function clamp(v) return v<0 and 0 or (v>1 and 1 or v) end

local function has_sensitivity(agent, theme)
  for _,w in ipairs(sensitivity[agent.drive]) do
    if theme:lower()==w then return true end
  end
  return false
end

-- influence from district theme
local function district_influence(agent, theme)
  local bonus = has_sensitivity(agent,theme) and 1.5 or 1.0
  local b = agent.beliefs
  if theme=="Logic" then b.knowledge = b.knowledge + 0.01*bonus
  elseif theme=="Desire" then b.self = b.self + 0.01*bonus
  elseif theme=="Ritual" then b.ritual = b.ritual + 0.02*bonus
  elseif theme=="Memory" then b.others = b.others + 0.01*bonus
  elseif theme=="Number" then b.law = b.law + 0.01*bonus
  elseif theme=="Power" then b.self = b.self + 0.015*bonus
  elseif theme=="Time" then b.death = b.death + 0.005*bonus
  elseif theme=="Silence" then b.knowledge = b.knowledge - 0.01*bonus
  elseif theme=="Death" then b.death = b.death + 0.03*bonus
  elseif theme=="Dream" then b.the_tower = b.the_tower + math.random()*0.02*bonus
  elseif theme=="Judgement" then b.others = b.others + 0.02*bonus
  elseif theme=="Unity" then
    b.the_tower = b.the_tower + 0.015*bonus
    b.self = b.self - 0.005*bonus
  end
end
-- dream epiphany: small chance an agent's drive is replaced at random
local function dream_epiphany(agent)
  if math.random() < EPIPHANY_CHANCE then
    agent.drive = drives[math.random(#drives)]
    print(string.format(" ↳ %s had a dream epiphany and now follows '%s'", agent.id, agent.drive))
  end
end

-- peer influence
local function peers_in_sector(sector)
  local list = {}
  for _,a in ipairs(agents) do
    if a.sector==sector then list[#list+1]=a end
  end
  return list
end

-- beliefs drift toward those of co-present peers
local function contagion(agent, peers)
  for _,o in ipairs(peers) do
    if o~=agent then
      for _,k in ipairs(belief_keys) do
        agent.beliefs[k] = agent.beliefs[k] + (o.beliefs[k]-agent.beliefs[k]) * CONTAGION_RATE
      end
    end
  end
end

-- strong disagreement polarises: push the belief toward its nearer extreme
local function debate(agent, peers)
  for _,o in ipairs(peers) do
    if o~=agent then
      for _,k in ipairs(belief_keys) do
        local diff = math.abs(agent.beliefs[k]-o.beliefs[k])
        if diff>DEBATE_THRESHOLD then
          local sign = agent.beliefs[k]>0.5 and 1 or -1
          agent.beliefs[k] = agent.beliefs[k] + sign*DEBATE_EFFECT
        end
      end
    end
  end
end

-- belief signature: an agent's three strongest beliefs, strongest first
local function signature(agent)
  local t = {}
  for k,v in pairs(agent.beliefs) do t[#t+1]={k=k,v=v} end
  table.sort(t, function(a,b) return a.v>b.v end)
  return {t[1].k, t[2].k, t[3].k}
end

local function same_cult(sig1, sig2)
  return sig1[1]==sig2[1]
end

local function cult_name(sig)
  return "Order of the " .. sig[1]:sub(1,1):upper() .. sig[1]:sub(2)
end
-- detect emergent cults
local function detect_cults()
  local cults = {}
  for _,a in ipairs(agents) do
    if a.beliefs.the_tower>=ASCEND_THRESHOLD then
      local sig = signature(a)
      local found
      for _,c in ipairs(cults) do
        if same_cult(sig,c.signature) then
          table.insert(c.members,a); a.cult=c.name; found=true; break
        end
      end
      if not found then
        local name = cult_name(sig)
        local newc = {name=name,signature=sig,members={a}}
        a.cult=name; table.insert(cults,newc)
      end
    end
  end
  return cults
end
-- === SIMULATION ===
local function trigger_event(tick)
  if tick%30~=0 then return nil end
  local id = math.random(1,DISTRICTS)
  local theme = districts[id]
  print(string.format("[EVENT] Tick %d → District %d pulses: %s", tick, id, theme))
  return id, theme
end

local function update_agent(agent, pid, theme)
  agent.sector = (agent.sector % NUM_SECTORS) + 1   -- drift one sector around the ring
  local did = get_district(agent.sector)
  district_influence(agent, districts[did])
  if pid and get_district(agent.sector)==pid then   -- standing in the pulsed district
    agent.beliefs.the_tower = agent.beliefs.the_tower + 0.03
    if theme=="Dream" then dream_epiphany(agent) end
  end
  local peers = peers_in_sector(agent.sector)
  contagion(agent, peers)
  debate(agent, peers)
  for k,v in pairs(agent.beliefs) do agent.beliefs[k]=clamp(v) end
end

local function print_summary(tick)
  local sum={tower=0,death=0,knowledge=0}
  for _,a in ipairs(agents) do
    sum.tower = sum.tower + a.beliefs.the_tower
    sum.death = sum.death + a.beliefs.death
    sum.knowledge = sum.knowledge + a.beliefs.knowledge
  end
  print(string.format("Tick %03d | Tower: %.3f | Death: %.3f | Knowledge: %.3f",
    tick,
    sum.tower/NUM_AGENTS,
    sum.death/NUM_AGENTS,
    sum.knowledge/NUM_AGENTS
  ))
end
-- === MAIN LOOP ===
for tick=1,NUM_TICKS do
  local pid, theme = trigger_event(tick)
  for _, a in ipairs(agents) do
    update_agent(a, pid, theme)
    if not a.ascended and a.beliefs.the_tower >= ASCEND_THRESHOLD then
      a.ascended = true
      print(string.format("[ASCEND] Tick %d: %s has ascended!", tick, a.id))
    end
  end
  if tick % 15 == 0 then print_summary(tick) end
end
-- === FINAL REPORT ===
print("\n--- Emergent Cults ---")
local cults = detect_cults()
for _,c in ipairs(cults) do
  print(string.format("%s: %d members", c.name, #c.members))
end
print("\n--- District Strongholds ---")
for d=1,DISTRICTS do
  local tally = {}
  for _,a in ipairs(agents) do
    if get_district(a.sector)==d and a.cult then
      tally[a.cult]=(tally[a.cult] or 0)+1
    end
  end
  local maxc, maxn = nil, 0
  for name,n in pairs(tally) do
    if n>maxn then maxn, maxc = n, name end
  end
  if maxc then
    print(string.format("District %d (%s): dominated by %s (%d)",
      d, districts[d], maxc, maxn))
  end
end
The simulation is not predictive, nor strictly interpretive. It is a kind of symbolic suggestion engine: a space to let abstract dynamics play out and watch which kinds of transcendence take root.