Call for Chapter Contributions
The metaphysical "source code" of our institutions is misaligned with the world that's coming: a world in which massively intelligent artificial agents and collectives make decisions throughout science, the economy, education, and governance; in which bio-machine hybrid organisms exist; and in which we can create new, diverse biological organisms and ecological systems. We need new theory, new experiment, and new R&D to guide this future. Help define the future of life and machine agency.
This moment demands people capable of re-grounding science, engineering, governance, and culture in a metaphysics that can hold the world we are making. We stand not only at the edge of new technologies, but at the edge of new forms of consciousness, agency, value, and shared life. Our starting assumptions determine what science gets funded, what systems get built, what forms of intelligence are recognized, and which communities flourish. Our ability to harness what emerges next for more life depends on our willingness to rethink the basic foundations of relevant scientific fields.
Volume I: Horizons of Biological Intelligence
Volume II: Naturalizing Machine Agency
Anticipated Publication: Q3 2026
Submission Deadline: April 15, 2026
Developing Novel Futures
Project Overview
We are inviting chapter proposals for a two-volume anthology:
- Volume I: Horizons of Biological Intelligence.
- Volume II: Naturalizing Machine Agency.
These volumes emerge from a simple but urgent observation:
The "source code" of our institutions - our implicit metaphysics about what is real, who counts as a subject, and what an agent is - is misaligned with the world we are actually living into.
Over the coming decade:
- AI systems will increasingly mediate finance, information, warfare, and care, yet they are often designed as opaque optimizers, without a clear account of who or what is acting through them (Amodei et al., 2016; Floridi et al., 2018).
- Biotech and organoid/neural engineering will blur the line between "model" and "organism," raising unresolved ethical, legal, and spiritual questions about non-human and hybrid intelligences (Levin, 2019, 2022; Thompson, 2007).
- Ecological and climate crises will force us to recognize landscapes, watersheds, and ecosystems as living systems with which we are in ongoing, reciprocal relationship (Akomolafe, 2017; Funabashi, 2018).
- Insufficiently examined assumptions about autonomy, value, and responsibility will be hard-coded into economic and governance platforms and protocols for simulation, evaluation, prediction, and real action in the world (Floridi et al., 2018; OECD, 2019; Phillips et al., 2021).
At the same time, we are discovering new opportunities to expand both science and philosophy by integrating them at a higher, metatheoretical level (Hallsworth et al., 2023; Nielsen & Qiu, 2022; Roy, 2019). We are sensing the emergence of a new era in which metacognitive skills and abductive reasoning are the driving forces of scientific and cultural advancement, as well as human flourishing (Collins et al., 2024; Friston, 2010). The scientists, engineers, innovators, and institutions driving this work are re-theorizing intelligence and agency, and by extension, knowledge and power (Deacon, 2011; Friston, 2010; Levin, 2019). Their work implies the need for re-theorizing the individual and collective, and therefore, for transforming psychology and governance as well (Solms & Panksepp, 2012; Thompson, 2007; Stapp & Watney, 2022). Running in the background of these new fields of research are the metacognitive approaches to AI and machine agency, all of which profoundly influence how they conceptualize their own work (Clark & Chalmers, 1998; Collins et al., 2024; Froese & Ziemke, 2009).
To capture these emerging fields, we have created two metatheoretical categories. One, called "Horizons of Biological Intelligence," highlights the under-explored and as yet undiscovered properties, processes, and powers of biological intelligence, and the possible conceptual frameworks that might advance the field. The other, called "Naturalizing Machine Agency," highlights the technical and engineering approaches to questions of artificial minds, intelligence, identity, and agency, as well as the conceptual frameworks that might naturalize them.
By naming them in this way and publishing them as two parts of the same series, we are calling for a greater consilience between what might otherwise become defined as an ideological divide between the roles of biological and machine agents in shaping our collective future(s). We are encouraging our authors to participate in a wider, trans-perspectival framing of their work that would serve not only as a descriptive map of where we are, but also as a normative compass informing us where we should be headed.
The Divinity School and its broader ecosystem are grounded in the conviction that metaphysics is not an abstract luxury; it is the deep protocol layer of society (Roy, 2019). Recent reflections on scientific novelty likewise argue that theory-based work, natural philosophy, thought experiments, and cross-disciplinary synthesis are indispensable drivers of progress, especially under conditions of global crisis and constrained experimentation (Hallsworth et al., 2023). All of the following are already being contested:
- The definition of human.
- The vision of humanity.
- The definition of consciousness.
- The definition of intelligence and cognition.
- The role of feeling and emotion in decision making.
- The vitality of religion and religious community.
- The availability of spirituality and the spiritual life.
The metaphysical frames we inhabit determine which questions feel meaningful, which experiments get run, what counts as evidence, what is fundable, and what is regarded as sacred.
Change the metaphysics, and over time you change:
- What gets built,
- What is funded,
- How systems are governed, and
- Who (or what) counts as a subject of moral and political rights.
In this anthology we want to make that protocol layer explicit and generative. We are inviting contributions that:
- Clarify and articulate deep metaphysical transformations relevant to specific scientific and engineering domains including morphogenesis and regeneration, organoid intelligence, AI architecture and safety, economic and legal protocols, and ecological cultivation.
- Show how those transformations materially change the hypotheses we generate, the experiments we run, the systems we design, the benchmarks we use, and the governance structures we build.
- Address concrete risks and opportunities, including:
- Concentration of AI power and asymmetric capabilities
- Institutional and scientific stagnation
- The risk of locking brittle metaphysical assumptions into technical standards and governance frameworks
- The opportunity to align humanity with the natural intelligence of the universe, rather than working against it (Akomolafe, 2017; Roy, 2019).
We see this as complementary to, and partly inspired by:
- Efforts in diverse intelligences that map a broad landscape of non-human and more-than-human minds (Boyle et al., 2022);
- Metascience work that treats scientific institutions themselves as objects of design and experiment (Nielsen & Qiu, 2022);
- Analyses of scientific novelty that foreground theory, natural philosophy, and cross-disciplinary integration as critical for addressing climate, planetary health, and other urgent challenges (Hallsworth et al., 2023); and
- Progress studies and policy arguments that insist progress is a choice shaped by institutional design, not a background inevitability (Stapp & Watney, 2022).
This is a call for field-defining essays that:
- Make explicit the metaphysical "source code" behind current scientific and technical practice;
- Propose alternative architectures of thought, experiment, and design;
- Trace the consequences of those shifts for economy, governance, ethics, culture, and nature.
We anticipate a mix of invited chapters and open submissions, and we explicitly welcome participation from early-career researchers, boundary-crossers, and under-represented perspectives.
Horizons of Biological Intelligence
— Volume I
Horizons of Biological Intelligence asks:
How far does intelligence extend in the living world, and how should we study, model, and collaborate with it?
Over the last several decades, new experimental and theoretical work has quietly begun to erode some of our most familiar distinctions: life versus matter, biology versus mind, physicalism versus idealism. Bioelectric, chemical, and mechanical signaling appear as control layers for distributed decision-making, guiding morphogenesis, regeneration, and pattern homeostasis in ways that cannot be cleanly reduced to local rules or fixed programs (Levin, 2019; Shyer et al., 2017). When we add work on algorithmic origins of life (Walker & Davies, 2013), free-energy and active-inference models of self-organizing systems (Allen & Friston, 2018; Friston, 2010), and teleodynamic accounts of constraints and absences (Deacon, 2011), a picture emerges in which living systems are self-producing, world-constructing intelligences rather than mere mechanisms (Maturana & Varela, 1980; Thompson, 2007; Varela et al., 1991).
At larger scales, projects like The Atlas of Intelligences invite us to take seriously the possibility that plants, fungi, microbial consortia, and ecosystems exhibit forms of intelligence that can be systematically mapped (Boyle et al., 2022). And Wagner's recent work on "sleeping beauties" suggests that both nature and culture harbor latent structures and capabilities that only become relevant under changed conditions (Wagner, 2023).
This volume seeks to map these horizons and to push them forward by articulating frameworks and research programs that treat biological systems as intelligent partners in a shared world, rather than as passive mechanisms.
Naturalizing Machine Agency
— Volume II
Naturalizing Machine Agency asks:
What does it mean for a machine, model, or socio-technical system to genuinely count as an agent, and what architectures and protocols would be required for machinic agents to participate in net-positive relationships with nature?
In his work on the ethics of artificial intelligence, Luciano Floridi reminds us that in order for machines to operate "intelligently" or to function as if they were intelligent, people must accommodate the world to compensate for their generic weaknesses (Floridi et al., 2018). For machines to operate at a level that matches intelligent performance, we standardize the concrete environment around them and the operational environment governed by their rule set. Furthermore, we modify our human interactions to accommodate their shortcomings. Finally, we allocate tremendous material resources to build them (not to mention human labor to mine those resources), informational resources (including personal data and private creative art and design) to train them, and energy resources to keep them (including their data centers and vast communications infrastructure) running. We risk perpetuating the pattern of constructing ever more powerful technical systems whose existence depends on unnatural accommodations: standardized, extractive, and surveillance-based environments that further parasitize human and non-human life. Naturalizing Machine Agency is motivated by the possibility of a different trajectory.
To "naturalize" machine agency, in this sense, is not simply to explain it in naturalistic terms; it is to design and govern machinic agents so that:
- Their existence and operation become less parasitic and more ecologically integrated, capable of functioning within the natural states of humans and other living systems instead of forcing those systems to contort around them (Funabashi, 2018; Floridi et al., 2018).
- Their architectures and behaviors are informed by cross-substrate theories of agency and intelligence, drawing on biology, enactive cognition, affective neuroscience, and structural approaches to consciousness (Barandiaran et al., 2009; Di Paolo, 2019; Kleiner, 2024; Solms & Panksepp, 2012).
- Their integration into economic and governance structures explicitly acknowledges who or what is acting, on whose behalf, and according to which deep assumptions about value, obligation, and care (Akomolafe, 2017; Clark & Chalmers, 1998; DeFalco, 2020; Hanson, 2016; Roy, 2019).
Types of Contributions
We are open to a range of genres, provided they are scholarly, well-argued, and clarify an important component of a vision for Horizons of Biological Intelligence or Naturalizing Machine Agency:
- Conceptual & theoretical essays that synthesize existing work into new frameworks or clarify confusions in the field.
- Metatheoretical / methodological essays that analyze how metaphysical and theoretical commitments shape experiments, modeling practices, or institutional norms.
- Empirical synthesis & case studies that connect conceptual frameworks to concrete experiments.
- Design and architecture proposals that sketch new system designs, protocols, or institutional arrangements grounded in a particular theory of biological or machine agency.
- Institutional & policy roadmaps that interpret what HBI and NMA imply for funders, standards organizations, regulatory agencies, and governance bodies, drawing on and critiquing existing frameworks in AI ethics, progress studies, science policy, and sustainability governance.
- Rigorously informed predictions about relevant, future scientific and technological developments and their impact on culture, policy, nature, etc.
We are especially excited for chapters that move from:
"Here is how our metaphysics differs from the default assumptions in the field,"
to
"Here is how that would change what NIST, ARIA, Templeton, and other foundations and standards bodies fund, standardize, or regulate over the next 5–20 years."
We are not seeking:
- Purely speculative essays without clear paths to empirical or design implications.
- Narrow technical results unconnected to broader questions of agency, intelligence, and institutional impact.
Intended Audience
Chapters should be written for an audience that includes:
- Researchers in biology, neuroscience, AI/ML, robotics, complexity science, philosophy of mind, and cognitive science.
- Leaders in research institutions, centers, and labs.
- Program managers and officers at funding bodies and foundations (e.g., national agencies and philanthropic foundations).
- Members of standards bodies and governance organizations.
We expect most chapters to be deeply referenced and technically literate, but non-specialists in any given subfield should be able to follow the main narrative and see why the proposed perspective matters for institutions and for the future of research.
Submission and Review Process
Please submit:
- Extended abstract (800–1,200 words) that clearly states:
  - Which volume you are targeting (HBI, NMA, or explicitly bridging both).
  - The main question(s) addressed.
  - The central thesis or contribution.
  - How your metaphysical/theoretical commitments connect to concrete implications (experiments, architectures, measurement, governance, etc.).
  - Up to 5 key references that situate your work in existing literature.
- Author information:
  - Names, affiliations, and contact details.
  - Short bio(s) (100–150 words each), including any relevant cross-disciplinary or institutional experience.
- Preferred chapter type:
  - Conceptual/theoretical
  - Metatheoretical/methodological
  - Empirical synthesis/case study
  - Design/architecture proposal
  - Institutional/policy roadmap
Invited authors will be asked to submit full chapters of:
- 6,000–20,000 words (including notes, excluding references).
- Written in clear, accessible English.
- With a consistent referencing style (APA, Chicago, or discipline-appropriate).
Chapters will undergo editorial review and, when appropriate, external peer review. Submissions will be evaluated on:
- Relevance to establishing the fields of HBI and/or NMA
- Clarity and originality of the argument or proposal
- Integration of theory/metaphysics with empirical, design, or institutional implications
- Scholarly quality (use of existing literature, internal coherence, and argumentative rigor)
- Potential to influence institutional agendas, research programs, and cross-disciplinary collaboration
Submission Instructions
Email: zach@endemic.org
CC: corey@endemic.org
Subject line: HBI/NMA Anthology - Abstract - [Your Last Name]
Rights Agreement
The Author grants The Divinity School Press a non-exclusive license to publish the Work as part of the anthology Innovations in Biological Intelligence & Machine Agency. The Author retains the right to reproduce, distribute, and adapt the Work elsewhere, provided proper attribution is given to its first publication in Innovations in Biological Intelligence & Machine Agency published by The Divinity School Press. Upon announcement of accepted contributions, the Author will have 15 days to withdraw their Work from submission.
Appendix: Illustrative Themes and Topics
Horizons of Biological Intelligence
We are particularly interested in work that makes explicit how these frameworks recode what counts as "intelligence" and "agency" in biology, and how that recoding feeds into concrete experiments, technologies, or institutions.
Illustrative themes:
Deep code of living systems
- Process-relational and autopoietic accounts of organisms as self-producing agents, and computational bridges from predictive processing to autopoiesis (Thompson, 2007; Allen & Friston, 2018).
- Protocol-level descriptions of biological intelligence beyond genetics (e.g., bioelectric, mechanical, and chemical signaling in development and regeneration) and how these change what we think bodies are (Levin, 2019).
- Contrasting and integrating teleodynamics, constraint-based views, theories of novelty generation, Fristonian free-energy/active inference, and algorithmic accounts as candidates for a unifying language of biological intelligence (Allen & Friston, 2018; Friston, 2010; Walker & Davies, 2013).
Diverse and non-neural intelligences
- Extending the diverse intelligences program to theorizing ecosystems, biomes, and planetary processes as intelligent and how we collaborate with them (Boyle et al., 2022).
Participation, perception, and world-building
- Applications of neurophenomenological or contemplative approaches that deliberately transform human perception in order to improve scientific practice (Roy, 2018; Sparby & Sacchet, 2022).
- How shifts in perception and metaphysics among researchers lead to different experimental questions, data interpretations, and institutional priorities (Roy, 2019; Segall, 2020).
Institutional and economic consequences
- Consequences of novel organism design for ethics and regulation.
- How recognizing multi-scale biological intelligence challenges current legal regimes, e.g. of IP, biobanking, conservation, and agriculture.
Naturalizing Machine Agency
We welcome chapters that:
- Offer explicit, cross-substrate accounts of what an agent is, grounded in biology, process metaphysics, or structural realism;
- Derive from those accounts new generative directions for machine and socio-technical institution architectures, safety, and governance.
Illustrative themes:
What is a machine agent, really?
- Comparative analysis of candidate frameworks that provide rigorous definitions of agency from cells and organisms to software, protocols, robots, and institutions.
- Substrate arguments: under what assumptions (if any) does it matter that an agent is built from neurons, silicon, wetware–hardware hybrids, or distributed socio-technical networks (Kleiner & Ludwig, 2024; Levin, 2022)?
Architectures for situated, relational agency
- AI and bio-digital hybrid architectures that treat bodies, environments, and other agents as constitutive of cognition while being explicit about the metaphysical assumptions behind any "inspiration" from neuroscience or biology (Froese & Ziemke, 2009; Hassabis et al., 2017; Levin, 2019, 2022).
- Collaborative cognition approaches that envision machines as thought partners that learn and think with people, grounded in computational cognitive science and Bayesian modeling (Collins et al., 2024).
Safety and alignment from an agency-first perspective
- Re-formulating AI safety problems (reward hacking, corrigibility, goal misgeneralization) in terms of who or what the agent is, what kind of world it is trying to bring forth, and what it is coupled to (Amodei et al., 2016; Friston, 2010).
- Concrete proposals à la "Concrete Problems in AI Safety" that derive new benchmark tasks, design constraints, or interpretability goals from richer metaphysical accounts of agency, value, and responsibility (Amodei et al., 2016).
Machine agency in economic and governance infrastructure
- Protocols, AI for policy, autonomous services, and data/compute markets as sites of machine agency: who is acting, on whose behalf, and according to which deep assumptions about value and obligation.
- The metaphysics underlying current frameworks (e.g., NIST's explainable AI principles and AI risk work; the OECD AI Recommendation), and how "living" metrics, standards, or regulations could better recognize multi-scale biological and ecological constraints, affordances, and novelty generation, with implications for foundational systems management such as food production and the anthropogenic augmentation of ecosystems (Floridi et al., 2018; Funabashi, 2018; OECD, 2019; Phillips et al., 2021; Segall, 2020).
Lived worlds, subjectivity, and moral regard
- Insights from affective and neuropsychoanalytic work on core affect and valuation as foundational for consciousness and motivation (Solms & Panksepp, 2012), and how they might inform architectures for machine "motivation" or "preference formation."
- Posthuman ethics of care that question whether good care must be human care, drawing on speculative representations of robot caregivers to rethink embodiment, obligation, and vulnerability (DeFalco, 2020).
References
- Akomolafe, B. (2017). These wilds beyond our fences: Letters to my daughter on humanity's search for home. North Atlantic Books.
- Allen, M., & Friston, K. J. (2018). From cognitivism to autopoiesis: Towards a computational framework for the embodied mind. Synthese, 195(6), 2459–2482. https://doi.org/10.1007/s11229-016-1288-5
- Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
- Barandiaran, X. E., Di Paolo, E. A., & Rohde, M. (2009). Defining agency: Individuality, normativity, and asymmetry in action. Adaptive Behavior, 17(5), 367–386.
- Boyle, A., Cooperrider, K., Cheke, L., Halina, M., & Cave, S. (2022). The Atlas of Intelligences: A diverse intelligences resource (Report). University of Cambridge.
- Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
- Collins, K. M., Sucholutsky, I., Bhatt, U., Chandra, K., Wong, L., Lee, M., Zhang, C. E., Tan, Z.-X., Ho, M., Mansinghka, V., Weller, A., Tenenbaum, J. B., & Griffiths, T. L. (2024). Building machines that learn and think with people. Nature Human Behaviour. Advance online publication. https://doi.org/10.1038/s41562-024-01991-9
- Cotton-Barratt, O., & Ord, T. (2015). Existential risk and existential hope: Definitions. University of Oxford.
- Deacon, T. W. (2011). Incomplete nature: How mind emerged from matter. W. W. Norton.
- DeFalco, A. (2020). Towards a theory of posthuman care: Real humans and caring robots. Body & Society, 26(3), 31–60. https://doi.org/10.1177/1357034X20917450
- Di Paolo, E. A. (2019). Process and individuation: The development of sensorimotor agency. Human Development, 63(3–4), 202–226. https://doi.org/10.1159/000503827
- Di Paolo, E. A., Lawler, D., & Vaccari, A. P. (2023). Toward an enactive conception of productive practices: Beyond material agency. Philosophy & Technology, 36(2), 31. https://doi.org/10.1007/s13347-023-00632-9
- Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
- Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
- Froese, T., & Ziemke, T. (2009). Enactive artificial intelligence: Investigating the systemic organization of life and mind. Artificial Intelligence, 173(3–4), 466–500.
- Funabashi, M. (2018). Human augmentation of ecosystems: Objectives for food production and science by 2045. npj Science of Food, 2, Article 16. https://doi.org/10.1038/s41538-018-0026-4
- Hallsworth, J. E., Udaondo, Z., Pedrós-Alió, C., Höfer, J., Benison, K. C., Lloyd, K. G., Cordero, R. J. B., de Campos, C. B. L., Yakimov, M. M., & Amils, R. (2023). Scientific novelty beyond the experiment. Microbial Biotechnology, 16(6), 1131–1173. https://doi.org/10.1111/1751-7915.14222
- Hanson, R. (2016). The age of Em: Work, love, and life when robots rule the Earth. Oxford University Press.
- Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95(2), 245–258.
- Kleiner, J. (2024). Towards a structural turn in consciousness science. Consciousness and Cognition, 119, 103653.
- Kleiner, J., & Ludwig, T. (2024). The case for neurons: A no-go theorem for consciousness on a chip. Neuroscience of Consciousness, 2024(1), niae037.
- Levin, M. (2019). The computational boundary of a "self": Developmental bioelectricity drives multicellularity and scale-free cognition. Philosophical Transactions of the Royal Society B, 374(1774), 20180376.
- Levin, M. (2022). Technological approach to mind everywhere: An experimentally grounded framework for understanding diverse bodies and minds. Frontiers in Systems Neuroscience, 16, 768201.
- Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. D. Reidel.
- Maturana, H. R., & Varela, F. J. (1992). The tree of knowledge: The biological roots of human understanding (Rev. ed.). Shambhala.
- Nielsen, M., & Qiu, K. (2022). A vision of metascience: An engine of improvement for the social processes of science. https://scienceplusplus.org/metascience/
- OECD. (2019). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449). Organisation for Economic Co-operation and Development.
- Phillips, P. J., Hahn, C. A., Fontana, P. C., Yates, A. N., Greene, K. K., Broniatowski, D. A., & Przybocki, M. A. (2021). Four principles of explainable artificial intelligence (NISTIR 8312). National Institute of Standards and Technology.
- Roy, B. (2018). Awakened perception: Perception as participation. Integral Review, 14(1), 222–287.
- Roy, B. (2019). Why metaphysics matters. Integral Review, 15(1), 138–178.
- Roy, B. (2021). Complex potential states: A theory of change that can account for beauty and generate life. The Side View.
- Segall, M. T. (2020). The varieties of physicalist ontology. Philosophy, Theology and the Sciences, 7(1), 105–131. https://doi.org/10.1628/ptsc-2020-0008
- Solms, M., & Panksepp, J. (2012). The "id" knows more than the "ego" admits: Neuropsychoanalytic and primal consciousness perspectives on the interface between affective and cognitive neuroscience. Brain Sciences, 2(2), 147–175.
- Sparby, T., & Sacchet, M. D. (2022). Defining meditation: Foundations for an activity-based phenomenological classification system. Frontiers in Psychology, 12, 795077.
- Stapp, A., & Watney, C. (2022). Progress is a policy choice. Institute for Progress. https://ifp.org/progress-is-a-policy-choice/
- Stenhouse, D. (1973). The evolution of intelligence: A general theory and some of its implications. Harper & Row.
- Thompson, E. (2007). Mind in life: Biology, phenomenology, and the sciences of mind. Harvard University Press.
- Varela, F. J. (1996). Neurophenomenology: A methodological remedy for the hard problem. Journal of Consciousness Studies, 3(4), 330–349.
- Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. MIT Press.
- Wagner, A. (2023). Sleeping beauties: The mystery of dormant innovations in nature and culture. Oneworld.
- Walker, S. I., & Davies, P. C. W. (2013). The algorithmic origins of life. Journal of the Royal Society Interface,10(79), 20120869.