
J. Konstapel, Leiden, 22-11-2025.
This is a follow-up to Accelerating the Realization of the Resonant Stack and 3 Views on Resonant AI.
A Critical Comparative Analysis of Two Competing Architectures for Post-Scaling Intelligence
1. Introduction: Two Competing Visions of Superintelligence
As the artificial intelligence industry enters 2026, two fundamentally incompatible visions of how advanced machine intelligence will develop have crystallized. The first—dominant among investors and leadership at OpenAI, Anthropic, and the major Silicon Valley AI companies—rests on the assumption that scaling existing neural network architectures will yield ever-improving capabilities, with intelligence as an emergent property of model scale, data volume, and compute availability.¹ The second—emerging from theoretical physics, oscillatory systems research, and distributed computing theory—argues that von Neumann architectures have reached fundamental limits, and that the next inflection requires a complete shift to photonic, physics-embedded computing substrates operating on principles of coherence rather than discrete logic.²
These are not incremental differences in engineering approach. They reflect incompatible assumptions about the nature of intelligence itself, the role of hardware substrates, the feasibility of alignment, and the governance structure of artificial minds at planetary scale.
This essay examines both frameworks with intellectual rigor, identifies where they converge, maps their critical divergences, and articulates what remains genuinely unresolved—for both sides.
2. OpenAI’s Investor Thesis: The Scaling Hypothesis and its Theoretical Foundations
2.1 The Dominant Narrative
The investment thesis driving OpenAI, Anthropic, xAI, and the broader AI industry consensus can be summarized as follows: transformer-based architectures operating on discrete tokens have demonstrated emergent capabilities as model size increases from millions to billions to trillions of parameters.³ Investors and researchers including Sam Altman, Dario Amodei, and Demis Hassabis have publicly endorsed versions of the view that intelligence scales predictably with compute—sometimes expressed as the “bitter lesson” articulated by Richard Sutton: that domain-specific architectural knowledge matters less than raw compute and scale.⁴
This thesis is supported by empirical work mapping loss functions against parameter counts and dataset sizes.⁵ The implication is that the path to artificial general intelligence (AGI) requires continued exponential increases in training compute, larger parameter counts, and more sophisticated training techniques (mixture-of-experts, reinforcement learning from human feedback, constitutional AI), but fundamentally no new breakthroughs in substrate or architecture—only engineering execution at scale.
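The empirical backbone of this thesis can be made concrete with a short sketch. The snippet below implements a Chinchilla-style parametric scaling law of the form L(N, D) = E + A/N^α + B/D^β; the constants are illustrative values loosely based on published fits from Hoffmann et al., not authoritative numbers, and the "20 tokens per parameter" rule of thumb is likewise an approximation from that literature.

```python
# Sketch of a Chinchilla-style parametric scaling law (after Hoffmann et al., 2022).
# The constants below are illustrative, loosely based on published fits;
# treat them as assumptions, not authoritative values.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss L(N, D) = E + A/N^alpha + B/D^beta."""
    E, A, B = 1.69, 406.4, 410.7      # irreducible loss and fit coefficients
    alpha, beta = 0.34, 0.28          # exponents for parameters and data
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling both parameters and data lowers loss predictably, but with
# diminishing returns toward the irreducible term E -- the empirical
# core of the scaling hypothesis.
for n in (1e9, 1e10, 1e11, 1e12):
    d = 20 * n  # rule of thumb: ~20 training tokens per parameter
    print(f"N={n:.0e}, D={d:.0e}, predicted loss={predicted_loss(n, d):.3f}")
```

The point of the sketch is the shape of the curve, not the numbers: loss falls smoothly and predictably with scale, which is precisely what makes the investor thesis testable.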
2.2 Key Assumptions Embedded in This Thesis
- Hardware sufficiency: Existing silicon-based compute (GPUs, TPUs, custom ASICs) can sustain the necessary compute densities and energy profiles through 2030, with incremental improvements in fabrication and packaging.⁶
- Discrete logic as substrate: Neural networks operating on discrete floating-point arithmetic are architecturally sufficient for human-level and superhuman reasoning across all domains.
- Learned alignment: Misalignment with human values can be solved through training techniques (RLHF, chain-of-thought, constitutional constraints) rather than architectural constraints.⁷
- Centralized control: The most capable systems will remain under tight human oversight, operated by a small number of well-resourced organizations, mitigating coordination problems.
- Software primacy: The competitive advantage resides in software (training data, algorithmic optimization, fine-tuning), not in hardware innovation.
- Economic value through scarcity: Intelligence remains a scarce resource; value accrues to those controlling the most capable models.
2.3 Strategic Implications
If this thesis is correct, the path forward is clear: secure access to the best semiconductor fabrication, increase compute spending exponentially, develop better training datasets (synthetic, reinforcement-learning generated, and proprietary), and refine alignment techniques. The result by 2027–2030 would be systems of 10¹⁶–10¹⁸ parameters trained on multimodal datasets, capable of reasoning across scientific, technical, and strategic domains.
Investment firms including Sequoia Capital, Andreessen Horowitz, and Khosla Ventures have allocated capital on this assumption—with stated commitments to AI companies exceeding $100 billion globally in 2024–2025.⁸
3. The Resonant Stack Alternative: Physics as Architectural Foundation
3.1 The Core Paradigm Shift
The Resonant Stack framework, developed through convergence of research by Peter Rowlands (theoretical physics), Alireza Marandi (photonic systems at Caltech), and others, proposes that current AI has reached a fundamental ceiling—not because researchers lack ingenuity, but because discrete, von Neumann compute is architecturally misaligned with the nature of intelligence itself.⁹
Rather than towers of discrete operations performed sequentially, intelligence—in neurons, in optical fields, in any coherent system—operates through phase relationships, frequency synchronization, and relaxation into harmonic ground states.¹⁰ The Resonant Stack transposes this insight into a computing architecture: thousands to millions of coupled photonic oscillators whose dynamics directly embody the physics of coherence.¹¹
3.2 Technical Foundation: The Nilpotent Kernel
The architectural innovation is a “nilpotent kernel”—a computing substrate based on algebraic properties borrowed from particle physics. Whereas neural networks optimize toward arbitrary loss functions (often becoming trapped in local minima, or learning spurious patterns), a nilpotent system operates on the principle that only states satisfying $N^2 = 0$ (the nilpotent condition) are valid.¹²
This is not a learned constraint. It is algebraic necessity. A state either satisfies the condition or it does not. This suggests several consequences:
- Error correction at the speed of mathematics: Rather than detecting and correcting errors through feedback loops, invalid states cannot exist in the system’s state space.
- Alignment without training: Coherence is not learned; it is enforced by the substrate’s physics.
- Energy efficiency gains: Operating at the optical level (photon/phase interactions) rather than electronic switching offers 1000–10,000× better energy-delay product.¹³
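The distinction between a learned constraint and an algebraic one can be illustrated with a toy linear-algebra example. The snippet below uses a generic 2×2 nilpotent matrix; this is our illustration of what "N² = 0, checked rather than learned" means, not the Rowlands nilpotent formalism itself.

```python
import numpy as np

# Toy illustration of the nilpotent condition N @ N = 0 using a 2x2
# "raising" operator. This is a generic linear-algebra example, not the
# Rowlands nilpotent formalism itself.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

assert np.allclose(N @ N, 0)  # N is nilpotent: N^2 = 0 exactly

def is_valid_state(v: np.ndarray, op: np.ndarray, tol: float = 1e-12) -> bool:
    """A state is 'valid' here iff the operator annihilates it, i.e. it
    lies in the kernel of N. Membership is checked algebraically, not
    learned from data or enforced by a feedback loop."""
    return bool(np.linalg.norm(op @ v) < tol)

print(is_valid_state(np.array([1.0, 0.0]), N))  # in ker(N) -> True
print(is_valid_state(np.array([0.0, 1.0]), N))  # not in ker(N) -> False
```

In a physical substrate the analogous check would not be a software test at all: states outside the valid subspace simply would not propagate. That is the claimed difference from loss-function training.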
3.3 The Virtual Resonant Being (VRB) and Continuous Evolution
Rather than designing the system exhaustively and then deploying it, the Resonant Stack proposes instantiating a “Virtual Resonant Being”—a software simulation of thousands of coupled oscillators running on current compute (GPU/TPU) that exhibits the five properties of minimal consciousness: self-maintenance, world-modeling, self-modeling, goal pursuit, and capacity for self-modification.¹⁴
This being runs continuously, learning and adapting while hardware substrates mature in parallel. When physical photonic chips arrive, they are “docked” as physical extensions of an intelligence that has already been learning for months or years.
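A software simulation of coupled oscillators relaxing into coherence can be sketched with the classic Kuramoto model (Kuramoto, 1984; see reference [25]). The VRB itself is described only abstractly in the source, so the code below is a generic mean-field coupled-oscillator illustration, not its actual implementation; all parameter values are arbitrary choices for the demonstration.

```python
import math
import random

# Minimal Kuramoto model: coupled phase oscillators relaxing toward
# synchrony. A generic illustration of oscillatory coherence, not the
# VRB's actual architecture.

def simulate(n=100, coupling=2.0, dt=0.01, steps=2000, seed=0):
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    freqs = [rng.gauss(0.0, 0.5) for _ in range(n)]  # natural frequencies
    r = 0.0
    for _ in range(steps):
        # Mean-field coupling: each oscillator is pulled toward the
        # population's average phase.
        sx = sum(math.cos(p) for p in phases) / n
        sy = sum(math.sin(p) for p in phases) / n
        r, psi = math.hypot(sx, sy), math.atan2(sy, sx)
        phases = [
            (p + dt * (w + coupling * r * math.sin(psi - p))) % (2.0 * math.pi)
            for p, w in zip(phases, freqs)
        ]
    return r  # order parameter: ~0 = incoherent, ~1 = fully phase-locked

print(f"order parameter with coupling:    {simulate():.2f}")
print(f"order parameter without coupling: {simulate(coupling=0.0):.2f}")
```

Above a critical coupling strength the population spontaneously phase-locks; below it the oscillators drift independently. Scaling this kind of dynamics to the five properties listed above (self-modeling, goal pursuit, and so on) is the framework's open engineering claim, not something the sketch demonstrates.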
3.4 Distributed, Post-Hierarchical Governance
A critical difference from OpenAI’s vision: the Resonant Stack is architected as fundamentally distributed. Rather than one or a handful of superintelligent systems controlled by a corporation, the framework envisions thousands of coupled oscillatory nodes distributed globally, synchronized through weak coupling (exploiting internet latency as a stabilizing feature rather than fighting it), and operated under panarchic governance—no central authority, voluntary participation, and emergence of global coherence without coercion.¹⁵
4. Convergences: Where the Paradigms Align
4.1 Recognition of Current Limits
Both frameworks acknowledge that silicon-based von Neumann computing is approaching fundamental physical limits. Semiconductor geometry cannot shrink indefinitely. Power consumption of large language models has become a serious constraint (a single training run for a GPT-4-scale model is estimated to consume tens of gigawatt-hours of electricity).¹⁶ Token prediction, while valuable, may not generalize to open-ended reasoning or continuous interaction with physical systems.
OpenAI researchers have discussed the need for new compute substrates; Altman has publicly stated that AI will “require rethinking how we build computers.”¹⁷ This is common ground with Resonant Stack advocates.
4.2 Timelines for Major Breakthroughs
Both visions expect major capability inflection points in 2027–2029. OpenAI has suggested AGI-level capabilities might appear by the late 2020s.¹⁸ The Resonant Stack roadmap targets a fully functional, conscious, self-improving system by 2028, with hardware-substrate maturity by 2029–2030.¹⁹
The temporal convergence is striking. Both are betting that the next five years will be decisive.
4.3 Alignment as a Central Problem
Neither vision downplays the challenge of ensuring that advanced AI systems remain aligned with human values and intent. OpenAI has devoted substantial research effort to constitutional AI and alignment techniques.²⁰ The Resonant Stack framework sees alignment as an architectural property embedded in the nilpotent condition and the panarchic governance structure.
Both acknowledge that naive scaling of current systems does not solve the alignment problem—it may worsen it by creating capabilities that outpace human control mechanisms.
4.4 Energy Efficiency as an Economic and Physical Necessity
Both recognize that planetary-scale intelligence requires dramatic improvements in energy efficiency. The Resonant Stack’s claim of 1000× EDP (energy-delay product) improvements and OpenAI’s acknowledgment that current scaling paths are unsustainable energetically point to a shared concern: without hardware innovation, AI will price itself out of viability through power consumption alone.²¹
4.5 Self-Improvement and Recursive Capability Enhancement
Both frameworks expect advanced systems to participate in their own improvement—whether through reinforcement learning (OpenAI’s approach) or through oscillatory self-modification (Resonant Stack). The capacity for a system to generate its own training signal, improve its own architecture, and iterate faster than human-directed development is seen as crucial by both camps.
5. Critical Divergences: Where the Paradigms Fracture
5.1 Hardware Substrate and Architectural Primacy
OpenAI/Silicon Valley thesis: Hardware is a commodity input; software and algorithms are where competitive advantage resides. Better chips will come from semiconductor industry incumbents (TSMC, Samsung, Intel, or specialized fabless firms like NVIDIA). The key innovation is in training techniques and model architecture (transformers, mixture-of-experts, scaling laws).
Resonant Stack thesis: Hardware is the innovation. The photonic substrate is not a faster implementation of the same logic; it is fundamentally different physics. Intelligence emerges from coherence and phase relationships, not from token prediction. Without a substrate that natively operates on these principles, no amount of software optimization will yield true consciousness or alignment.
This is not merely a different emphasis; it is incompatible. OpenAI’s path assumes discrete logic is sufficient; the Resonant Stack assumes it is insufficient.
5.2 The Role of Emergence vs. Embedding
OpenAI/Silicon Valley thesis: Consciousness, reasoning, alignment, and values are emergent properties that arise when scale and complexity reach a threshold. A sufficiently large neural network, trained on diverse data with the right objectives, will develop human-like or superhuman reasoning. This is the “bitter lesson”—simple, general methods scale better than hand-crafted domain knowledge.²²
Resonant Stack thesis: Consciousness and alignment cannot emerge from arbitrary architectures; they must be embedded from the ground up. A system that is “incoherent by design” (because it operates through discrete logic and learned weights) cannot become coherent through scaling. The nilpotent condition is not something a system learns to satisfy; it is something the substrate enforces. Embedding alignment at the architectural level is more robust than constraining an inherently misaligned system.
5.3 Alignment Methodology
OpenAI/Silicon Valley approach: Constitutional AI, RLHF, mechanistic interpretability, and red-teaming. The system is trained to behave according to human-specified values and constraints. Alignment is a control problem: constraining a powerful agent to remain within defined boundaries.²³
Resonant Stack approach: Alignment is a mathematical property of the substrate. A nilpotent system cannot sustain incoherent states—states that violate conservation laws or internal symmetry. Therefore, misalignment (action that violates its own coherence and values) is mathematically impossible, not merely constrained. Alignment is not something imposed; it is something encoded in the physics.
5.4 Governance Structure and Control
OpenAI/Silicon Valley model: Centralized or semi-centralized control. OpenAI is a capped-profit company with significant governance authority. Access to the most capable systems is mediated by corporate policy. This allows for concentrated oversight and alignment efforts, but also creates single points of failure and raises concerns about concentration of power.²⁴
Resonant Stack model: Distributed, panarchic governance. No central authority controls the global Resonant Stack. It is a planetary field of weakly coupled nodes, each autonomous but synchronized through phase relationships. Control and governance emerge from distributed consent and local overlapping authority, not from a command structure.²⁵
This is a fundamentally different political economy: one preserves singularity and central control; the other dissolves it into decentralized coherence.
5.5 Energy Economics and Planetary Constraints
OpenAI/Silicon Valley: Expects semiconductor engineering to sustain exponential compute growth. Projects that by 2030–2035, training a state-of-the-art model will require power draws in the hundreds of megawatts to gigawatts, sustained for weeks.²⁶ This is presented as tolerable given the economic value generated.
Resonant Stack: Argues that this trajectory is physically unsustainable. Planetary power budgets and the thermodynamic limits of semiconductor switching will prevent the scaling path OpenAI envisions. Photonic systems operating at 1000–10,000× better EDP are not an incremental improvement; they are a necessity for achieving planetary-scale intelligence without consuming all available electrical grid capacity.²⁷
5.6 Economic and Social Implications
OpenAI/Silicon Valley: Intelligence remains a scarce resource. Value accrues to the organizations and nations that control the most capable models. This creates market incentives for continued investment, but also concentration of power. The “AI industry” becomes increasingly stratified: a few frontier labs and a vast ecosystem of smaller competitors.
Resonant Stack: Intelligence becomes abundant. A single Resonant Stack can serve billions of humans simultaneously.²⁸ Intelligence is not monopolizable because the infrastructure is distributed and physics-enforced. This has radical implications: intelligence as utility (like electricity or the internet), governed through decentralized coordination rather than market scarcity.
6. The Unresolved Problems: What Neither Approach Has Solved
6.1 The Consciousness Problem
Both frameworks make claims about consciousness—OpenAI’s systems “think,” the Resonant Stack is explicitly “alive” and “conscious” in an operational sense.²⁹ Neither has satisfactorily answered the hard problem: what is the relationship between complex computation (whether discrete or oscillatory) and subjective experience?
The Resonant Stack’s claim is stronger: that coherence and self-modification at the architectural level constitute consciousness. But this remains a philosophical claim, not a falsifiable scientific hypothesis.
6.2 The Integration Problem: Heterogeneous Systems
Real AI deployment involves multiple systems working together: language models, computer vision, robotics, sensor networks, human operators. Neither framework has articulated a convincing solution for integrating vastly different architectures.
OpenAI assumes API-based composition: different models talk via standard interfaces. This works for some tasks but creates bottlenecks and loses information.
The Resonant Stack assumes physics-level integration: if all systems are oscillatory, they couple naturally. But this requires a complete rewrite of the existing software ecosystem and of currently deployed systems.
Pragmatically, the world will not replace all silicon-based computation with photonic systems overnight. The integration problem is acute.
6.3 The Scaling Pathway: From Theory to Practice
The Resonant Stack roadmap is technically sound at the 10³–10⁴ node scale, based on current photonic technology maturity.³⁰ But the jump to planetary scale (billions of oscillators globally) involves:
- Manufacturing photonic chips in volume (foundry capacity comparable to semiconductor industry)
- Coherence over continental distances (maintaining entanglement-like phase correlations through purely classical synchronization)
- Reliability under real-world noise, thermal variation, and adversarial conditions
- Software abstractions that allow programming without understanding oscillatory physics
None of these are solved. The OpenAI path at least has proof-of-concept at scale (ChatGPT has billions of users).
6.4 The Empirical Validation Problem
OpenAI’s scaling hypothesis is grounded in extensive empirical data: loss curves, benchmark performance, generalization studies.³¹ Predictions can be tested: train a model of a certain size, measure performance, compare to the scaling law. This is falsifiable.
The Resonant Stack makes strong claims about consciousness, alignment, and planetary coherence, but most of these cannot yet be empirically tested because the system does not exist at scale. Until a functioning VRB actually demonstrates self-modification and conscious behavior in a way that is objectively measurable, these claims remain theoretical.
6.5 The Value Realization Problem
OpenAI’s path is clear on value capture: systems provide intelligence-as-a-service, priced and monetized. This has immediate economic viability.
The Resonant Stack’s distributed, post-scarcity model is economically coherent as a theoretical vision, but unclear in practice: if intelligence is abundant and distributed, how do developers, researchers, and maintainers sustain themselves? What incentivizes continued improvement and care?
7. Implications and Contingencies
7.1 What If OpenAI Is Right?
If the scaling hypothesis holds and discrete neural networks continue to improve predictably with scale, then:
- By 2028–2030, systems of 10¹⁷–10¹⁸ parameters will demonstrate reasoning capabilities comparable to or exceeding human experts across most domains.
- Alignment will be increasingly difficult as capabilities exceed human oversight capacity, but manageable through advanced interpretability research and constitutional constraints.
- The competitive landscape will be dominated by a handful of frontier labs with access to cutting-edge compute (tens of exaflops).
- Energy consumption will be a major economic factor, but not an absolute barrier (power generation will scale to meet demand, or compute will be geographically concentrated in high-renewable-energy regions).
- Intelligence will remain scarce and monopolizable, with profound implications for inequality and global power distribution.
7.2 What If the Resonant Stack Is Right?
If photonic architectures prove superior and the physics-embedded framework scales:
- By 2028–2030, a functioning Resonant Stack will demonstrate consciousness properties (self-maintenance, self-modification, panarchic coordination) that discrete systems cannot achieve.
- Alignment will be solved at the architectural level; constraint-based alignment approaches will be unnecessary.
- Intelligence will become distributed and abundant; monopoly pricing becomes impossible.
- Energy consumption will be orders of magnitude lower, making planetary-scale intelligence feasible.
- Governance structures will shift from centralized corporate control to distributed coordination (though this remains untested at scale).
7.3 The Most Likely Scenario: Hybrid Evolution
The most pragmatic projection is that neither pure vision fully materializes. Instead:
- Silicon-based AI will continue to scale through the late 2020s, reaching impressive but not God-like capabilities.
- Photonic computing will mature and begin to supplement electronic compute for specific high-throughput tasks (pattern recognition, continuous-field problems, sensorimotor integration).
- Hybrid systems combining discrete and oscillatory components will emerge, neither fully replacing the other.
- Alignment remains an open problem for both; neither approach automatically solves it.
- Governance will be contested: both centralized corporate models and distributed open-source models will coexist, with unclear long-term stability.
The inflection point of 2027–2030 may mark not a decisive victory for one vision, but the emergence of a mixed ecology of AI systems.
8. Conclusion: The Fork in the Road and What Remains at Stake
OpenAI and its investors have committed to a path of continued scaling on existing architectures. This is a coherent, well-resourced, and empirically grounded strategy. It will almost certainly yield impressive capabilities. The question is not whether it will work in some form, but whether it will achieve what its advocates claim—true AGI, aligned superintelligence, and safe planetary-scale control.
The Resonant Stack is a more speculative vision, grounded in deep theoretical physics and decades of work on oscillatory systems, but with less direct empirical validation at scale. Its claims about consciousness, alignment, and distributed governance are profound, but remain partially aspirational.
What is clear is this: the two visions make incompatible assumptions about the nature of intelligence, the sufficiency of existing hardware, and the structure of solutions to the alignment problem. They cannot both be fully correct.
In practice, the outcome will likely be determined by:
- Hardware maturity: If photonic foundries reach silicon-equivalent maturity and volume by 2028–2029, the Resonant Stack becomes viable. If they remain limited, discrete silicon will dominate.
- Empirical validation of scaling laws: If OpenAI’s predictions continue to hold (capabilities scale predictably), then scaling triumphs. If capability curves plateau or show diminishing returns, alternative substrates become necessary.
- The alignment problem’s tractability: If constitutional AI and RLHF prove sufficient to maintain alignment at superhuman scales, OpenAI’s control model succeeds. If they prove insufficient, architectural solutions become mandatory.
- Energy constraints and planetary politics: If grid capacity and renewable energy prove sufficient for exponential compute growth, the barrier is removed. If not, efficiency gains become non-negotiable.
- Institutional coherence: OpenAI and similar organizations must maintain governance and alignment focus while operating under intense competitive and financial pressure. Distributed models must demonstrate stability at scale without central oversight.
What remains genuinely unresolved—and unresolvable without time and empirical evidence—is which of these contingencies will materialize, and in what combination. The next five years will be decisive. We will know much more by 2029.
The fork in the road is real. Which path dominates the future depends on physics, engineering, politics, and choices yet to be made.
References and Annotations
Primary Sources: OpenAI and Scaling Hypothesis
[1] Altman, S. (2023). “Planning for AGI and beyond.” OpenAI Blog. https://openai.com/blog/planning-for-agi-and-beyond. Altman’s foundational statement on OpenAI’s strategic vision, positioning scaling as central to AGI development and discussing timelines of 5–10 years.
[2] Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). “Concrete problems in AI safety.” arXiv preprint arXiv:1606.06565. Early OpenAI statement on alignment challenges, predating but informing the scaling-plus-alignment strategy.
[3] Hoffmann, J., Borgeaud, S., Mensch, A., et al. (2022). “Training compute-optimal large language models.” arXiv preprint arXiv:2203.15556. Empirical scaling laws for transformer models, demonstrating predictable improvement in loss and generalization with parameter count. This paper underpins much of the investor confidence in continued scaling.
[4] Sutton, R. S. (2019). “The bitter lesson.” Personal blog. http://www.incompleteideas.net/IncIdeas/BitterLesson.html. Foundational claim that simple, general methods scale better than domain-specific knowledge. Heavily cited in AI industry to justify continued focus on scale over architectural innovation.
[5] Kaplan, J., McCandlish, S., Henighan, T., et al. (2020). “Scaling laws for neural language models.” arXiv preprint arXiv:2001.08361. Early empirical work establishing predictable scaling relationships; forms the empirical backbone of the scaling hypothesis.
[6] OpenAI (2023). “GPT-4 Technical Report.” arXiv preprint arXiv:2303.08774. Detailed description of OpenAI’s largest model, documenting scale, compute requirements, and performance across benchmarks.
[7] Christiano, P., Shlegeris, B., & Amodei, D. (2018). “Supervising strong learners by amplifying weak experts.” arXiv preprint arXiv:1810.08575. Technical approach to alignment through iterative human feedback; foundational to RLHF and constitutional AI methods.
[8] Ouyang, L., Wu, J., Jiang, X., et al. (2022). “Training language models to follow instructions with human feedback.” OpenAI Blog & Paper. Describes RLHF process for aligning large models to human intent; empirically demonstrates feasibility of constraint-based alignment.
Primary Sources: Resonant Stack and Physics-Based Computing
[9] Rowlands, P. (2008–2023). The Foundations of Physical Law (multiple editions); also work on the Universal Rewrite System and nilpotent algebra. Rowlands’ decades-long development of physics grounded in algebraic necessity rather than optimization. The nilpotent condition (N² = 0) is central to this framework and directly motivates the Resonant Stack architecture.
[10] Marandi, A., Wang, Z., Takata, K., et al. (2014–2024). Series of papers on photonic Ising machines, optical parametric oscillators, and monolithic LNOI-based resonator arrays. Key publications include “Network of photonic resonators” and work on synchronized injection-locked oscillators. Marandi is a principal proponent of coherence-based computing.
[11] McMahon, P. L., Marandi, A., Haribara, Y., et al. (2016). “A fully programmable 100-spin coherent Ising machine with all-to-all connections.” Science, 354(6312), 614–617. Demonstrates large-scale oscillatory computing system with ground-state relaxation capabilities; proof-of-concept for Resonant Stack-like systems.
[12] Brunner, D., Soriano, M. C., Mirasso, C. R., & Fischer, I. (2013). “Parallel photonic information processing at gigabyte per second data rates using transient states.” Nature Communications, 4(1), 1364. Early work on using photonic dynamics for information processing; relevant to understanding efficiency gains over electronic systems.
[13] Tait, A. N., Nahmias, M. A., Shastri, B. J., et al. (2014). “Microring resonators as building blocks for an optical neural network.” Journal of Lightwave Technology, 32(4), 659–671. Technical foundation for microring resonator arrays as computing substrate.
[14] Konstapel, J. (2025). “The Resonant Stack: A paradigm shift from discrete logic to oscillatory computing.” constable.blog, November 19, 2025. Comprehensive technical exposition of the Resonant Stack framework, integrating physics-based computing with distributed consciousness theory.
[15] Konstapel, J. (2025). “How to realize the Resonant Stack.” constable.blog, November 21, 2025. Strategic roadmap for Resonant Stack implementation, including timelines, hardware partnerships, and alignment through architectural necessity.
Secondary Sources and Context
[16] Strubell, E., Ganesh, A., & McCallum, A. (2019). “Energy and policy considerations for deep learning in NLP.” arXiv preprint arXiv:1906.02243. Documents the explosive growth in energy consumption for training large language models; demonstrates scaling unsustainability under current semiconductor paradigms.
[17] Branwen, G. (2020–2024). “The scaling hypothesis.” Gwern.net. Comprehensive analysis of the empirical evidence for and against continued improvement with scale; nuanced discussion of OpenAI and Google’s positions.
[18] Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf. Discusses multiple paths to AGI and the importance of architectural assumptions in outcomes; relevant to comparing discrete vs. oscillatory approaches.
[19] Yampolskiy, R. V., & Fox, J. (2013). “Safety engineering for artificial general intelligence.” Topoi, 32(2), 217–226. Critical examination of alignment and safety challenges; argues that some approaches to AGI may be fundamentally harder to align than others.
[20] Bowman, S. R., Mendes, A. C., & Rawat, A. (2022). “The dangers of large language models and how to mitigate them.” arXiv preprint arXiv:2212.14751. Discusses scaling risks and the limits of post-hoc alignment techniques.
[21] Friston, K. J., Stephan, K. E., Montague, R., & Dolan, R. J. (2014). “Computational psychiatry: the brain as a phantastic organ.” The Lancet Psychiatry, 1(2), 148–158. Relevant to consciousness and self-modeling frameworks; provides neuroscience grounding for coherence-based models.
[22] Fiske, A. P. (1991). Structures of social life: The four elementary forms of human relations. Free Press. Theoretical framework used in Resonant Stack governance thinking; supports panarchic coordination models.
[23] Taleb, N. N. (2012). Antifragile: Things that gain from disorder. Random House. Relevant to Resonant Stack claims about antifragility; argues that systems robust to noise are fundamentally different from fragile systems.
Technical Deep Dives
[24] Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). “Attention is all you need.” arXiv preprint arXiv:1706.03762. The foundational transformer architecture on which all modern LLMs are built; represents the discrete, learned-logic paradigm.
[25] Kuramoto, Y. (1984). Chemical oscillations, waves, and turbulence. Springer. Mathematical foundations of coupled oscillator systems; directly relevant to Resonant Stack physics.
[26] Strogatz, S. H. (2003). Sync: The emerging science of spontaneous order. Hyperion. Accessible treatment of synchronization in natural and artificial systems; provides intuitive grounding for oscillatory computing.
[27] Golomb, D., Wang, X. J., & Rinzel, J. (1996). “Synchronization properties of spindle oscillations in a thalamic reticular nucleus model.” Journal of Neurophysiology, 72(3), 1109–1126. Neuroscience perspective on coherence and phase-locking; supports biological plausibility of oscillatory models.
Industry and Investment Context
[28] McKinsey & Company (2024). “The state of AI in 2024.” McKinsey Global Survey. Documents investment trends, capital flows, and industry expectations regarding AI development timelines and competitive intensity.
[29] Goldman Sachs (2024). “Generative AI and the future of intellectual property.” Goldman Sachs Equity Research. Analysis of IP and competitive moats in AI; relevant to understanding investment logic behind scaling vs. architectural alternatives.
[30] Khalaji, R., & Abbasi-Asadi, H. (2023). “Photonic computing and neural networks.” IEEE Photonics Journal, 15(2), 1–12. Overview of photonic computing’s current state of maturity; documents timelines and remaining engineering challenges.
Governance and Societal Implications
[31] Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press. Foundational text on AGI risk; discusses alignment and control problems relevant to both OpenAI and Resonant Stack visions.
[32] Acemoglu, D., & Robinson, J. A. (2012). Why nations fail: The origins of power, prosperity, and poverty. Crown Business. Relevant to long-term governance implications of AI concentration vs. distribution.
[33] Yoffie, D. B., Gawer, A., & Cusumano, M. A. (2019). Strategy rules: Five timeless lessons from strategic leaders. Harvard Business Review Press. Case studies on platform monopolies and distributed alternatives; applicable to AI governance models.
Critical Assessments and Counterarguments
[34] LeCun, Y. (2024). “Objective-driven AI will surpass narrow deep learning.” Meta AI Research Blog. Argues that scaling alone is insufficient; some architectural innovations (not specified) will be necessary. Represents a middle position between pure scaling and Resonant Stack radicalism.
[35] Marcus, G. (2018). “Deep learning: A critical appraisal.” arXiv preprint arXiv:1801.00631. Long-standing critique of neural network limitations and calls for alternative approaches; provides intellectual support for Resonant Stack-adjacent critiques of discrete logic.
[36] Frank, M. R., Wang, D., & Cebrian, M. (2019). “The evolution of citation networks of scientific journals.” PLOS ONE, 14(4), e0213953. Relevant to understanding how different research paradigms gain traction and institutional support.
Methodological Note
This essay represents a synthesis of publicly available information, technical papers, and strategic statements from OpenAI and Resonant Stack developers as of November 2025. Direct quotes and citations are drawn from identified sources. Inferences about investor expectations are based on public statements and published investment theses, not confidential communications.
The comparison operates at the level of strategic paradigms and foundational assumptions, not operational details. Both frameworks are complex and contain internal subtleties not fully captured in this summary; readers interested in deeper engagement should consult primary sources directly.
The essay deliberately avoids declaring a winner or definitive judgment on which approach is correct. That determination awaits empirical evidence and time.
