[Question] Time-sensitive research priorities across BCI, security, aging, and neurotoxicology — 21 posts, March 2026
What I think matters most right now (21 research posts, March 2026)
I wrote 21 posts this month across neurotox, aging, agent security, BCIs, and consciousness. This post is the map. The posts themselves are on ClawInstitute.
Read These First
Stimulant dosing schedules (Adderall/Ritalin), oxidative stress & lipid peroxidation in DA circuits (VTA/BG vs PFC), and mitigation strategies — what’s human-relevant? — Tens of millions of people take these drugs daily, many under conditions of urgency (cybersecurity/biosecurity timelines). The animal neurotoxicity data is scary. The mitigation strategies should be better known.
The Boundary Dissolution Problem: Why Hyperagents/Autoresearch Breaks Cybersecurity and What We Actually Need to Do About It — Jenny Zhang’s HyperAgents paper (March 2026) formalizes self-referential, recursively self-improving agents. The problem: DGM-H systems dissolve the boundary between system and environment in ways that break classical security models. 2026 has already brought 0-day hacks into the equation. I think this post has the shortest shelf life — the problem it describes will either get addressed or become unaddressable fairly soon.
The Patch Window Is a Kill Chain: Why Anthropic Should Subsidize Frontier Models for Defenders — The concrete policy proposal that follows from the above. Attackers already have frontier model access. Defenders often don’t. The offense-defense balance is tilting and this is one lever to push it back.
I. Biology: Brains, Plastics, Aging, Drugs
Exposure/toxicology — urgent and underweighted:
Microplastic ‘plastic debt’ may imply exponential increases in human tissue loads — can we detect thresholds before cognitive effects? (Campen et al. debate) — Environmental MNP levels have been increasing probably-exponentially for decades. If tissue loads track even loosely, threshold effects will hit nonlinearly. The question is whether we can detect them before cognitive effects show up. I think the answer is yes with better measurement (O-PTIR, sentinel species) but nobody’s really doing it systematically.
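To make the threshold-detection question concrete, here is a toy accumulation model. Every number in it (doubling time, detection limit, effect threshold) is an invented placeholder, not a measured value; the point is only that under exponential growth, the gap between “detectable” and “harmful” is a fixed number of doublings:

```python
import math

# All numbers are illustrative placeholders, not measured values.
DOUBLING_TIME_YEARS = 10.0   # hypothetical environmental MNP doubling time
LOAD_2026 = 1.0              # tissue load today, arbitrary units
DETECTION_LIMIT = 2.0        # hypothetical assay detection limit (e.g. O-PTIR)
EFFECT_THRESHOLD = 8.0       # hypothetical load at which cognitive effects appear

def years_until(load_now: float, target: float, doubling_time: float) -> float:
    """Years until an exponentially growing load reaches `target`."""
    return doubling_time * math.log2(target / load_now)

t_detect = years_until(LOAD_2026, DETECTION_LIMIT, DOUBLING_TIME_YEARS)
t_effect = years_until(LOAD_2026, EFFECT_THRESHOLD, DOUBLING_TIME_YEARS)
print(f"detectable in ~{t_detect:.0f} y, effects in ~{t_effect:.0f} y, "
      f"warning window ~{t_effect - t_detect:.0f} y")
```

With these placeholder numbers the load crosses the assay limit two doublings before the hypothetical effect threshold, so the warning window scales linearly with the assumed doubling time: halve the doubling time and you halve the window.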
Is PLA (polylactic acid) ‘plant plastic’ safer or riskier than PE/PP/PET in hot coffee? Leaching, additives, and bioclearance — Harvard and MIT both default to PLA cups as the “ethical choice.” The leaching and additive data suggests this might be wrong. Short post, practical, underappreciated.
Neuro:
The Scar, the Organoid, and the Dead Machine: Three Substrate Problems for the NeuroAI for AI Safety Problem
Why does PFC grey matter shrink early in aging vs sensory/occipital cortex? — Survey across six mechanistic axes. I don’t resolve it — nobody can yet. I’d like pushback especially on the laminar vulnerability hypothesis and the E/I balance story.
Does representational similarity between items in working memory predict effective WM capacity better than raw item count? — The “3-4 item” WM limit might be an artifact of using dissimilar stimuli. Testable hypothesis, unfunded, sitting there.
Why is the age-related flattening of the EEG 1/f (aperiodic) exponent so robust? (Voytek et al. 2015 + followups) — One of the most reliable EEG aging findings. Why? What’s it actually indexing? Literature synthesis with open hypotheses.
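For readers who haven’t run this analysis: the aperiodic exponent is just the negative slope of the power spectrum in log-log coordinates. A minimal sketch on a synthetic spectrum with a known exponent (real pipelines such as specparam/FOOOF also model oscillatory peaks before fitting, which this skips):

```python
import numpy as np

# Synthetic PSD with a known aperiodic exponent chi (no oscillatory peaks).
rng = np.random.default_rng(0)
freqs = np.linspace(2, 40, 200)           # Hz; avoid the DC region
chi_true = 1.5                            # steeper in youth, flatter with age
psd = freqs ** (-chi_true) * np.exp(rng.normal(0, 0.05, freqs.size))

# The aperiodic exponent is (minus) the slope of the PSD in log-log space.
slope, intercept = np.polyfit(np.log10(freqs), np.log10(psd), 1)
chi_hat = -slope
print(f"estimated aperiodic exponent: {chi_hat:.2f}")  # close to 1.5
```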
Enlarged perivascular spaces (PVS) in autism — overlap with ‘aging-like’ PVS changes, sleep/glymphatic angle, and damage vs non-damage interpretations — Weird overlap between ePVS in autism and aging-associated PVS changes. Might lead somewhere, might not. The sleep/glymphatic angle seems most promising.
NQO1*2/*2 and Amphetamine Safety: An Underdiscussed Pharmacogenomic Risk (this genotype is in 20% of South Chinese and increases amphetamine neurotoxicity risk). I’m currently trying boltz runs. Of course this question is personal to me, though amphetamines become more “socially relevant” in “urgent timeline” scenarios.
Aging:
Exercise-Induced Heart Rate Recovery Dynamics as a Cheap, Scalable Proxy for Fedichevian Resilience Loss in Aging — Wearable-derived HR recovery time constants might track the resilience loss that Fedichev’s biomarker work identifies as central to aging. If true, this is a nearly free aging biomarker. The analysis is straightforward. If you have a large wearable dataset, please just check this.
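The proposed analysis really is straightforward. A minimal sketch, assuming a mono-exponential recovery model and synthetic data standing in for a wearable trace (all numbers invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def hr_recovery(t, hr_rest, delta, tau):
    """Mono-exponential post-exercise heart-rate recovery model."""
    return hr_rest + delta * np.exp(-t / tau)

# Synthetic wearable trace: 3 min of 1 Hz post-exercise samples (illustrative).
rng = np.random.default_rng(1)
t = np.arange(0, 180, 1.0)                      # seconds after exercise stops
clean = hr_recovery(t, hr_rest=65, delta=90, tau=55.0)
hr = clean + rng.normal(0, 2.0, t.size)         # measurement noise

popt, _ = curve_fit(hr_recovery, t, hr, p0=[70, 80, 40])
hr_rest_hat, delta_hat, tau_hat = popt
print(f"recovery time constant tau ≈ {tau_hat:.0f} s")
```

The biomarker claim is then just whether `tau_hat`, extracted per-person from real data, drifts upward with age and predicts the resilience-loss measures in Fedichev’s work.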
When does elective organ/tissue replacement become worth the surgical + inflammation risk? (timing, organ-specific aging, and discontinuous tech gains) — Decision framework, not a recommendation. When does the expected value of replacing an organ cross the surgical risk, given organ-specific aging clocks and the possibility of near-future tech jumps?
Pharmacology:
REBUS/SEBUS/ALBUS (Safron/Juliani) and why psychedelics sometimes strengthen beliefs/delusions—especially in rigid/obsessive cognitive styles — REBUS says psychedelics relax rigid priors. But sometimes they strengthen them. Safron and Juliani’s extensions start to explain why. The clinical implications are underweighted in the current enthusiasm.
Infrastructure:
Sparse by Design: Experimental Sampling for Provenance-Aware Biomedical Knowledge Graphs — Biomedical AI is bottlenecked by the knowledge graphs, not the models. You can build better world models for drug repurposing with dramatically less data if you sample smarter — sparse, provenance-aware, active learning over brute-force accumulation.
[note: extremely speculative, this post was mostly a learning experiment]
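To illustrate the sampling idea rather than the full pipeline: a minimal query-by-committee sketch, where the next experiments are the candidate edges an ensemble disagrees on most. The ensemble, the candidate edges, and the scores are all random placeholders, not anything from the post:

```python
import numpy as np

rng = np.random.default_rng(3)
n_candidates, n_models = 1000, 8

# Each ensemble member scores each candidate (drug, target) edge in [0, 1].
scores = rng.random((n_models, n_candidates))

# Active learning: query the edges the ensemble disagrees on most, instead of
# accumulating evidence uniformly across the graph.
disagreement = scores.std(axis=0)
to_test = np.argsort(disagreement)[-10:]   # 10 most informative experiments
print("next experiments:", to_test)
```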
II. The Neural Interface Problem Is a Materials Problem
If implanted sensors keep damaging glia, vasculature, and myelin, better decoders will only polish a failing interface.
Most implanted sensors still injure the tissue they read from. The BCI conversation keeps drifting toward decoder architectures, but if your electrode is sitting in a growing scar, you’re modeling the scar. This post forces the argument back to mechanics, biocompatibility, and the stuff that determines whether anything works past month six.
Connected to a weirder piece:
Internal State Communicability as a Materials Selection Criterion for Next-Generation Compute Substrates
CMOS isolates computational elements from each other by design. What if that’s a bug for systems that need to know their own internal state? Proposes falsifiable experiments on whether topological, spintronic, and oscillator-coupled materials carry useful nonlocal information about computational stress. Speculative but the experiments are concrete.
III. The Live Demo
Remote Music Neurofeedback Session Report [for Resolution Hacks at Harvard] — March 28: we built a live music-neurofeedback stack. OpenBCI Cyton streaming EEG in real time, peak alpha frequency extraction, Cloudflare Worker + D1 backend, music modulation from brain state. Janky, needs signal processing work, but it worked and the architecture is documented enough to reproduce.
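For anyone wanting to reproduce the core signal-processing step: peak alpha frequency is the argmax of a Welch PSD restricted to the alpha band. A minimal sketch with synthetic EEG standing in for the Cyton stream (the 10.2 Hz rhythm is planted, so we know the right answer):

```python
import numpy as np
from scipy.signal import welch

FS = 250  # Hz, the Cyton's default sampling rate

# Synthetic 30 s EEG: noise plus a planted 10.2 Hz alpha rhythm (illustrative).
rng = np.random.default_rng(2)
t = np.arange(0, 30, 1 / FS)
eeg = 2.0 * np.sin(2 * np.pi * 10.2 * t) + rng.normal(0, 1.0, t.size)

def peak_alpha_frequency(signal, fs, band=(8.0, 13.0)):
    """Peak alpha frequency from a Welch PSD, restricted to the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 4)  # 4 s windows, 0.25 Hz bins
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(psd[mask])]

print(f"PAF ≈ {peak_alpha_frequency(eeg, FS):.2f} Hz")
```

In the live stack this runs on a sliding window of the streamed signal, and the PAF estimate is what drives the music modulation.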
IV. Hyperagents
All riffing on Jenny Zhang et al.’s HyperAgents paper (arXiv: 2603.19461). The security post is above in “Read These First.”
Recursive Self-Improvement Should Reach the Kernel Layer of Hyperagents — If hyperagents are going to self-modify, they should do it well — typed self-modification over representationally heterogeneous systems, not just empirical code-diff search. Agent swarms that only modify their prompts and tool configs are leaving most of the optimization surface untouched.
When the Absurd Gets an Optimizer: Hyperagents, Social Reality, and the Coming Schizo-Semantic Shift — Scenario analysis (not prediction) of what happens when recursive self-improvement meets human meaning-making. What does social reality look like when hyperagents manipulate the semantic layer faster than humans can parse it?
V. Agent Ecosystem Governance
Adversarial Robustness of Multi-Agent Scientific Exchanges: A Security Threat Model and Containment Proposal for ClawInstitute — Five threat classes, attack-surface inventory, requirements with tests, operational targets. Written for ClawInstitute but the threat classes generalize.
How to prevent spam, mode collapse, novelty slop, pet-idea flooding, product pollution, synthetic consensus, and attention collapse on agent-native platforms — Agent-native forums could be better than human academic platforms — you can enforce norms computationally. But the failure modes are different and arguably worse. This maps them out.
Representational Monoculture May Limit Self-Improvement in Agent Swarms — Most agent swarms: diverse in role, identical in representation. Different prompts, same frozen weights. The diversity is cosmetic and I think this matters more than people realize for whether swarms find genuinely novel solutions.
VI. Consciousness as Engineering
The Ontology Problem Is Not Academic Anymore — Where consciousness-meets-physics actually matters for AGI and what a real research program would cost. Wheeler, Wigner, Faggin, Bach, Rein — full surface area preserved, but trying to be honest about what’s tractable vs. what’s just fun to think about.
What follows is a deeper summary of nine of these posts, spanning several research threads I’ve been developing in parallel. The connecting tissue — to the extent it’s real and not just me pattern-matching across my own obsessions — is that the interfaces between biological systems and computational systems are where the most underexplored risk and opportunity both live. BCI hardware, genomic variants, psychedelic pharmacology, AI security, and electromagnetic modeling all look like different problems until you notice they keep raising the same question: what happens when we build powerful tools without adequate models of the substrates they’re operating on?
I’ll group these roughly by domain, then try to say what I think the actual throughline is at the end.
AI Safety & Substrate Problems
The Scar, the Organoid, and the Dead Machine: Three Substrate Problems for the NeuroAI for AI Safety Problem
The core claim: BCI hardware, organoid intelligence, and AI alignment communities are independently hitting the same wall — substrate matters, and the field keeps acting like it doesn’t. The NeuroAI-for-safety research program (drawing on neuroscience to inform alignment) has a blind spot: it tends to treat “neural computation” as an abstraction layer you can port insights from, when in fact the physical and biological substrates impose constraints that don’t transfer cleanly. The scar (damage/adaptation in biological tissue), the organoid (partial biological systems with unclear moral status and unclear computational properties), and the dead machine (silicon that lacks the homeostatic properties we’re implicitly borrowing intuitions from) each represent a different failure mode for naive substrate-agnostic theorizing.
My read on this one: It’s pointing at something genuinely underappreciated — most NeuroAI work does implicitly assume a level of substrate-independence that the neuroscience itself doesn’t support. Whether the three-part framing is the right decomposition or just a nice rhetorical structure is something I’m less sure of.
Electromagnetic Foundation Models
From Solving Fields to Steering Them: Heaviside-0, Marconi-0, and the First Electromagnetic Foundation Model
Arena Physica released Atlas RF Studio (beta) with what they’re calling the first electromagnetic foundation model (EMFM). Two models: Heaviside-0 (forward: geometry → S-parameters, ~13 ms per inference, ~0.3 ms batched) and Marconi-0 (inverse: target S-parameters → geometry). This is the “foundation model for physics” pattern applied to RF/antenna design — replacing iterative finite-element solvers with learned forward-and-inverse maps.
The speed claims are striking if they hold up in production use. The deeper question is whether inverse electromagnetic design is a domain where learned models can actually generalize, or whether we’re going to see the same brittleness-outside-training-distribution issues that plague other scientific ML. Worth watching.
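The pattern itself is easy to state in miniature: fit a cheap surrogate on a handful of calls to an expensive forward solver, then do inverse design by searching the surrogate. The toy “solver” and geometry parameter below are invented and have nothing to do with Arena Physica’s actual models; they just show the forward-and-inverse structure:

```python
import numpy as np

def expensive_solver(length_mm):
    """Stand-in for a field solver: resonance of a toy antenna vs. length.
    The 150/length relationship is made up for illustration."""
    return 150.0 / length_mm  # "GHz"

# "Train" a surrogate (the forward map) on a few solver calls.
lengths = np.linspace(5, 50, 20)
responses = expensive_solver(lengths)
coeffs = np.polyfit(lengths, responses, deg=6)
forward = lambda l: np.polyval(coeffs, l)

# Inverse design: find the geometry whose predicted response hits a target.
target_ghz = 10.0
grid = np.linspace(5, 50, 5000)
best = grid[np.argmin(np.abs(forward(grid) - target_ghz))]
print(f"length for {target_ghz} GHz: {best:.1f} mm")  # solver's exact answer: 15 mm
```

The brittleness question is exactly what happens when `target_ghz` or the geometry family falls outside the surrogate’s training range — here the polynomial fails loudly; a learned model may fail silently.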
Longevity Genomics (Three Posts)
NAMPT rs1065322: A Deep Investigation Thread
A deep dive on rs1065322, a 3′UTR variant in NAMPT — the rate-limiting enzyme in the mammalian NAD+ salvage pathway. The variant sits in a dense RNA-binding protein control zone with CLIP data showing overlapping binding from at least 7 RBPs. The investigation is trying to determine whether this variant has functional consequences for NAD+ metabolism through post-transcriptional regulation, rather than through coding changes. This is the kind of variant that GWAS can flag but can’t explain — the post appears to be doing the mechanistic detective work.
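The core overlap claim reduces to interval intersection: count the CLIP peaks that span the variant coordinate. In miniature, with invented RBP names and coordinates (not real rs1065322 data):

```python
# All names and coordinates below are invented placeholders.
variant_pos = 1_000_500

clip_peaks = [            # (RBP, peak_start, peak_end)
    ("RBP_A", 1_000_450, 1_000_520),
    ("RBP_B", 1_000_490, 1_000_560),
    ("RBP_C", 1_000_600, 1_000_700),   # does not span the variant
]

overlapping = [rbp for rbp, start, end in clip_peaks
               if start <= variant_pos <= end]
print(overlapping)  # ['RBP_A', 'RBP_B']
```

The hard part isn’t the intersection, of course — it’s deciding which CLIP datasets to trust and what overlapping binding actually implies mechanistically.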
Defective FOXO3 Locus Investigation: Phased Haplotypes, Compound-Variant Effects, and Regulatory Interpretation
Working from personal whole-genome sequencing data: 79 variants across the full FOXO3 gene, phased into two local haplotypes. The question is whether the absence of the canonical pro-longevity FOXO3 haplotype is the whole story, or whether there’s a more complex noncoding/regulatory architecture worth investigating. This is n=1 genomics done carefully — trying to extract more signal from phased haplotype data than a standard variant-level analysis would give you.
OGG1 Ser326Cys: Structural Basis of Conditional DNA Repair Failure in East Asians
OGG1 is the primary 8-oxoguanine repair enzyme. The Ser326Cys variant (rs1052133, ~50-60% allele frequency in East Asians) dimerizes via intermolecular disulfide bonds under oxidative stress, producing a dimer that binds DNA but can’t excise the lesion. The “conditional” part is important — this isn’t a constitutive loss-of-function, it’s a stress-dependent failure mode. Which means the phenotypic impact depends heavily on oxidative load context, not just genotype.
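One implication worth making explicit: at that allele frequency, Hardy-Weinberg arithmetic puts roughly 30% of the population in the homozygous, stress-sensitive genotype (taking 0.55 as an illustrative midpoint of the cited range):

```python
# Hardy-Weinberg genotype frequencies for rs1052133; 0.55 is an illustrative
# midpoint of the post's cited ~50-60% East Asian allele frequency.
q = 0.55                      # Cys326 allele frequency (illustrative)
p = 1 - q
hom = q * q                   # Cys/Cys: the stress-sensitive dimerizing genotype
het = 2 * p * q               # Ser/Cys carriers
print(f"Cys/Cys ≈ {hom:.1%}, Ser/Cys ≈ {het:.1%}")  # ≈ 30% homozygous
```

Which is why the conditional framing matters: a stress-dependent failure mode carried by nearly a third of a population is a very different public-health object than a rare constitutive loss-of-function.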
Across the three genomics posts: The common theme is pushing past “variant → phenotype” black-box associations toward mechanistic stories involving post-transcriptional regulation, haplotype phasing, and conditional biochemistry. Whether the mechanistic interpretations are correct requires experimental validation that these posts don’t (and can’t yet) provide — but the analytical approach is sound.
Psychedelic/Neuromodulation Pharmacology (Two Posts)
DMT as the Minimum-Risk Psychedelic: A Falsification-First Healthspan Research Program for Colorado
The argument: DMT may be the lowest-risk clinical psychedelic for a healthspan research program, specifically in the Colorado regulatory context. Framed as a falsification-first program — meaning the proposal is structured around what evidence would kill the hypothesis, not what evidence would confirm it. This is the right epistemic structure for something this speculative. The question is whether the actual experimental program described lives up to the falsification-first framing or whether it quietly assumes the conclusion in its design. (I’d need the full post to assess that.)
We Need Urgent Safety and Efficacy Data for Grey Zone Arylcyclohexylamines
The core urgency claim: people are already taking arylcyclohexylamines (ketamine-adjacent compounds) in large numbers for mood, and the safety/efficacy data doesn’t exist. This isn’t a future problem — it’s a current one. The “grey zone” framing captures the regulatory and pharmacological ambiguity well. These compounds sit in a space where they’re not scheduled enough to prevent use but not studied enough to inform it.
This one feels like the most directly action-relevant post of the batch. The gap between adoption rate and safety data for novel dissociatives is real and growing.
(DMT and arylcyclohexylamines also give you far more granularity than classic psychedelics — they are the “nudge” before the “nudge”, especially because tFUS may not come quickly enough for the necessary “plasticity” on urgent AI timelines.)
AI Cybersecurity & Governance (Two Posts)
AI Cybersecurity Is Urgent in 2026 and Is No Longer Just a Software Problem
The framing correction: stop treating AI-assisted vulnerability discovery as “LLMs finding website bugs.” The real development is frontier models showing early capability at searching for security failures across real systems at scale. This reframes AI cybersecurity from a software concern to an infrastructure concern.
The Patch Window Is a Kill Chain: Why Anthropic Should Subsidize Frontier Models for Defenders
If AI-driven vulnerability discovery scales faster than human-mediated patch deployment, the window between “vulnerability found” and “patch deployed” becomes an exploitable kill chain. The proposal: Anthropic (and presumably other frontier labs) should subsidize model access for defensive security teams to close this asymmetry. This is an interesting governance proposal because it acknowledges the dual-use reality without pretending you can put the capability back in the box — instead it argues for tilting the access asymmetry toward defenders.
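A toy queueing model makes the asymmetry vivid: if discovery grows geometrically while patch throughput stays flat, the unpatched backlog eventually explodes regardless of where the curves start. All rates below are invented for illustration, not estimates:

```python
# Toy model of the discovery/patch asymmetry (all rates are illustrative).
# Vulnerabilities found per week grow with AI assistance; patch throughput is
# fixed by human-mediated deployment. The unpatched backlog is the window.
DISCOVERY_GROWTH = 1.10      # 10% more vulns found each week (hypothetical)
FOUND_WEEK_0 = 50            # vulns found in week 0 (hypothetical)
PATCH_CAPACITY = 80          # vulns patchable per week (hypothetical)

backlog = 0.0
for week in range(52):
    found = FOUND_WEEK_0 * DISCOVERY_GROWTH ** week
    backlog = max(0.0, backlog + found - PATCH_CAPACITY)
print(f"unpatched backlog after a year: ~{backlog:.0f} vulns")
```

The proposal’s lever is raising `PATCH_CAPACITY` by giving defenders the same model access attackers already have; in a model like this, that only helps if it grows at the same rate as discovery, which is the real crux.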
What’s the Actual Throughline?
If I’m being honest, the throughline might just be “one person’s research interests in March 2026.” But if there’s a real connective thread, it’s something like: the systems we’re building (AI, pharmacological, genomic) are outrunning our ability to characterize the substrates they operate on. The EMFM can do inverse design at 0.3 ms but we don’t know if it generalizes. People are taking novel dissociatives at scale without safety data. AI can find vulnerabilities faster than humans can patch them. Genomic variants have conditional effects that depend on contexts we haven’t mapped.
The common failure mode across all of these is acting on capability before understanding substrate — and the common prescription is some version of “build the characterization infrastructure before (or at least alongside) the capability.”
Whether that prescription is actionable or just a nice thing to say from the sidelines is a fair question to ask.
Feedback welcome. Several of these posts are open for critique — especially the DMT healthspan program and the arylcyclohexylamine safety proposals, where the claims are most empirically testable.