Resources

Frequently Asked Questions

    • Manipulating bioelectric signaling between cells to understand and improve regeneration, embryogenesis, and cancer suppression
    • Developing AI tools to help human scientists discover and exploit novel interfaces in biological systems for therapeutic applications
    • Understanding how evolution pivots signaling and computational mechanisms to enable organisms' plasticity and problem-solving capacities in different spaces
    • Developing conceptual frameworks for understanding unconventional cognition (memory, learning, and goal-directed competencies in cells, tissues, and organs) and learning to detect, study, and communicate with truly diverse intelligences (whether evolved, designed, or hybrid)
    • Creating novel synthetic life forms to study generic laws that govern form, function, and behavior
    • We have very fundamental theory projects, as well as projects focused on specific biomedical endpoints, all spanning the above interest areas
    • Developing computational modelling tools for a bioinformatics of shape
    • Developing models of self-generated order and information storage in physiological networks
  • The central question at the heart of our work in developmental physiology, AI, and cognitive science/philosophy is: how do embodied minds arise in the physical world, and what determines the capabilities and properties of those minds? We are interested in decision-making, memory, and optimal control in a wide range of evolved, designed, and synthetic hybrid systems. We use different model systems (from gene-regulatory networks to cell groups to collectives of behaving animals) to understand multi-scale dynamics that process information.

  • It didn't really. This is what I had in mind from day 1, but the website reflects the current work and what could be said at a given time, without going too far beyond our actual publications. It's a step-wise unrolling, with many surprises in the details but a stable core direction.

    • We use Xenopus laevis embryos because they're ideal for physiological experiments and are amenable to manipulation from the earliest stages of development.
    • We use planaria because they possess incredible regenerative ability, are smart (can learn, allowing memory and brain regeneration experiments in the same animal), and are a unique system that shows us how much morphogenetic control can be dissociated from the genetically-specified hardware.
    • We use synthetic proto-organisms (such as Xenobots) to probe the degree of plasticity and novelty in the cellular collective intelligence (despite a wild-type genome).
    • We use human cells and tissues in vitro to get closer to biomedical applications in regeneration and cancer.
    • We use the slime mold Physarum polycephalum to understand learning and problem-solving in an unconventional, nerve-less organism whose behavior and morphological change occur in the same space.
    • At times, we've also used zebrafish, chick, axolotl, bacteria, and other systems.
  • I am, fundamentally (and by training), a computer engineer with a deep interest in the philosophy of mind and I suppose that's why my perspective on these questions may be different.

  • Barring exo-biology, all we have access to is the N=1 example of life on Earth - the evolved phylogenetic tree, full of frozen accidents of the meandering path of evolution on this planet. Making general conclusions from this dataset is like testing your theory on the same data that generated it. Normal development is very robust and reliable, which obscures the power of biology for novel problem-solving. We need to expose cells and tissues to new environments and novel configurations, to really probe their competencies.

  • Yes and no. On the one hand, I think the humanities, and questions of philosophy, are very important. So I do not believe that we should exclusively favor engineering at the expense of the bigger questions of life and meaning. On the other hand, engineering is a critical (perhaps the only available) method for deciding between competing worldviews and frameworks: the best ones are the ones that enable the most fruitful relationships with the world and its diverse levels of agency (from simple matter to other humans). We can decide between ways of thinking about the world by how much new engineering (discoveries, novel capabilities) they give rise to. Not just pre-diction (of existing systems) but "pre-invention" (how much do they facilitate novel research programs).

    I view engineering in a broader sense of having a relationship with the physical world - of taking actions in physical, social, and other spaces. The cycle I like is: philosophize, engineer, and then turn that crank again and again as you modify both aspects to work together better and facilitate new discoveries and a more meaningful experience. Moreover, the "engineer" part isn't just 3rd person engineering of an external system. I'm also talking about 1st person engineering of *yourself* as engineer (change your perspectives/frames, augment, commit to enlarging your cognitive light cone of compassion and care, etc.) - the ultimate expression of freedom is to modify how you respond and act in the future by exerting deliberate, consistent effort to change yourself. I also include 2nd person engineering - communicating (signaling, behavior-shaping) and relating to agential materials and other beings.

  • Bioelectricity refers to signals carried by the voltage gradients, ion flows, and electric fields that all cells receive and emit. It has been known for over 100 years that all cells, not just excitable nerve and muscle, exhibit steady-state long-term bioelectrical activity, and that this activity appears to be instructive for cell behavior during morphogenesis. While bioelectricity functions alongside biochemical and biomechanical events, it has a unique aspect. Much like in neuroscience, bioelectricity is the computational medium with which cellular collectives make decisions (about growth and form). Evolution discovered that electrical networks are a great way to compute, long before brains and muscle came on the scene. Bioelectricity is an ancient modality that serves as the proto-cognitive medium of the cellular collective intelligence that navigates morphospace (the space of possible anatomies). As such, it is a powerful interface that cells and tissues expose to us (and to each other) that enables reprogramming for biomedical purposes (and for understanding evolutionary change).

    1. simplest/shortest: it's stored in very much the same way as information in the brain: in the electric states of cells (just like in neurons) and downstream modifications (long-term storage in cytoskeletal and transcriptional states).
    2. better: "it's stored in the stable bioelectric states maintained by cell networks." Just like in the brain, groups of cells form electrical networks that can stably store information. This is routinely modeled in neuroscience and is the basis of much of our technology; like memory circuits in volatile RAM, it's easy to store encodings in the electrical states of a medium that holds patterns over long timescales. All tissues - not just brains - do that. So, the excitable medium which can store information is the voltage state of groups of cells (another, more familiar medium is DNA in groups of cells, and there are others, such as cytoskeletal structures). The notion that "body pattern is stored in the DNA" is not that simple, depending on what you're asking. What is stored in the DNA is protein sequences - single-cell-level hardware information. Bioelectric patterns emerge from the complex dynamical interactions of ion channels and gap junctions opening and closing, and it's that physiological software that stores and processes patterning information (a runnable toy sketch of such a bistable electrical memory appears after this list).
    3. deeper still: There needs to be agreement on what "storing a code" really means. It's not simple, and there's a lot of work on this. Things are only codes with respect to what is reading or interpreting the code. So what we really need to do is talk about how bioelectric properties are interpreted by the tissues. There are 3 basic modes we've found: a) 1:1 prepatterns (like the electric face or your brain pattern), b) non-1:1 prepatterns encoding specific organs, like planarian head-tail info (which can be mapped onto heads or tails but is not visually obvious like the electric face) or eye spots, or c) binary triggers that say "build whatever goes here", like the tail/leg signals (which carry almost none of the detailed info of how to build it). This is the state of the art now - interpretation - which we still poorly understand but are working on. And lest we get too comfortable with how well the "DNA code" has been decoded, let's remember that we have no ability to predict anatomy from genome (other than by cheating - comparison with known genomes), and we can't tell in advance if a frogolotl (a mixed embryo of frog and axolotl cells) will have legs, even with both the frog and axolotl genomes in hand.
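
    To make the RAM analogy in point 2 concrete, here is a minimal toy sketch in Python (all parameters are hypothetical, chosen only to make the dynamics bistable - an illustration of the principle, not a model of real cells). A depolarization-activated conductance provides positive feedback, giving the membrane two stable resting states; a brief pulse flips the stored "bit" and the state persists after the stimulus ends:

      import math

      V_REST, TAU = -70.0, 1.0                # resting potential (mV), time constant (ms)
      GAIN, V_HALF, SLOPE = 60.0, -40.0, 5.0  # toy depolarization-activated conductance

      def dvdt(v, i_inject=0.0):
          # Leak pulls the voltage toward rest; a sigmoidal positive-feedback
          # current pushes away from it, creating two stable fixed points.
          feedback = GAIN / (1.0 + math.exp(-(v - V_HALF) / SLOPE))
          return -(v - V_REST) / TAU + feedback + i_inject

      def run(v, i_inject, t_ms, dt=0.01):
          for _ in range(int(t_ms / dt)):
              v += dvdt(v, i_inject) * dt
          return v

      v = run(-70.0, i_inject=80.0, t_ms=2.0)  # brief depolarizing "write" pulse
      v = run(v, i_inject=0.0, t_ms=50.0)      # stimulus removed...
      print(f"after write: {v:.1f} mV")        # ...yet the cell stays near -10 mV
      v = run(v, i_inject=-80.0, t_ms=2.0)     # hyperpolarizing "erase" pulse
      v = run(v, i_inject=0.0, t_ms=50.0)
      print(f"after erase: {v:.1f} mV")        # back near -70 mV: the bit is cleared

    The information lives in which attractor the system occupies, not in any structural change - which is why reading the genome (the hardware description) alone would not reveal it.
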
  • Bioelectric dynamics in the body are, like in the brain, the computational medium of the collective intelligence of cell groups. Evolution discovered ion channels and gap junctions as ways to implement powerful laws of physics and computation for memory, scaling of goals, and integration of information. It uses the bioelectric layer to achieve evolvability and robust plasticity from the indirect encodings of form and function via genetic specification of hardware. Ion channels on the cell surfaces are the interface - the programming interface, if you will, to this physiological software.

    Developmental bioelectricity shows us that deep neuroscience isn't just about neurons any more than computer science is about current laptops. And indeed, it's not even clear what neurons really are: the powerful bench techniques of neuroscience do not distinguish neurons from non-neural cells. The distinction is a partition that we have invented, and it's useful in some cases (like studying neuroanatomy), but obscures important biology in many cases. Neuroscience provides us with frameworks for seeing how individual competent cells scale up to emergent, large Selves. This is a far more general phenomenon than "neurons make a brain". Isn't it interesting that Alan Turing was interested in both intelligence and morphogenesis? It's no accident, because morphogenesis is an excellent example of an unconventional intelligence (which uses the same medium - bioelectricity - as evolution chose for our brains).

  • Of course, there is always a physical story to tell about a process if you want to zoom in to the lowest level of description - it's never going to be magic, it's always physics underneath. But consider someone programming a computer, or explaining their reasoning to another person: an observer could focus on the mechanics of the keyboard buttons being pressed, or on the specific sound waves and molecules being generated, respectively. In both cases that observer would be missing everything that is important about the interaction, and more specifically, that frame of analysis would not facilitate either programming or effective communication. The molecular details simply don't capture the understanding and control that is inherent in the interaction between systems that have higher agency than molecules. So, when we target ion channels, we only know which ion channels to target because we understand what the other cells are tracking - not the identity of the channel protein, but the voltage pattern (and we've shown that in fact you can get the same effect by using many different channels and types of ions, as long as you get the voltage right - it's a coarse-grained master variable and the molecular details can differ considerably). While you can describe the intervention as a simple reductionist chain of physical events after we've shown an example of voltage control, the ability to infer novel interventions (i.e., discovery) requires that you understand the higher-level dynamic that's going on, which cannot be captured by a story about the chemistry of that specific channel and ion. The most potent control here is gained (just like in many aspects of neuroscience and behavioral sciences) at a higher level that abstracts away the details of the many different ways there are to send a given message.

  • We don't know yet. But even more critical than the question of where it is stored is the question of how it is imprinted onto a nascent regenerating brain and then interpreted. This gets to a core philosophical issue about personal identity that is relevant to all of us. We don't have real access to the past - at every moment, we have to actively reconstruct a model of the past from the evidence that the past has left in our brain and body - engrams for us to interpret as memories, maintaining a coherent life story. So even without our head being actively cut off and regenerating, time itself is making sure that we are all like planaria, and also a bit like anterograde amnesia patients, who have to leave themselves notes every day about what's going on (it's just that for most humans, that scratchpad happens to be inside our skulls). We have to constantly interpret and reconstruct our memory engrams just like the planaria.

  • I've introduced a few, to transmit important concepts in this developing field.

    • Agential material - the subject of engineering (by evolution, by human engineers, by cells, or whatever) which is not passive (a passive material can only be expected to hold its structural properties) and not even just active or computational, but has a significant degree of autonomy - an agenda, perhaps homeostatic capacity or higher, which it will execute independently, and which serves as the target of behavior-shaping interventions (not micromanagement) in optimal control.
    • Anatomical compiler - a future system representing the long-term endgame of the science of morphogenesis, that reminds us how far away from true understanding we are. Someday, you will be able to sit in front of an anatomical compiler, specify the shape of the animal or plant that you want, and it will convert that shape specification to a set of stimuli that will have to be given to cells to build exactly that shape (no matter how novel or unusual - total control). Critically, the anatomical compiler is not a 3D printer or anything like that. It is a communications device, for translating goals between your brain and the collective intelligence of some other cells (transferring your preferred anatomical outcome into navigation policies by which the cellular collective will traverse anatomical space).
    • Axis of persuadability - a spectrum containing different kinds of systems (from mechanical clocks to humans and beyond) that organizes them with respect to what kind of interventions (rewiring, setpoint editing, training, logical arguments, etc.) are optimal for prediction and control of that system. The question is, what kind of approach is needed to persuade the system to do what you want it to do. It's an engineering take on the question of agency. See Technological Approach to Mind Everywhere (TAME): an experimentally-grounded framework for understanding diverse bodies and minds.
    • Bioprompting - the ways in which biological systems signal each other so as to hack, manipulate, or otherwise exploit the competencies of the receiver, inducing complex outcomes with simple signals. For example, the signals that a wasp embryo uses to get plant leaf cells to build a complex gall that is very different from the native morphology of the leaf are an example of bioprompting. I think that just as AI prompt engineers craft inputs to exploit the intelligence of things like large language models, bioengineers will be crafting prompts (not micromanaging the molecular details) to get cells and tissues to reach complex system-level outcomes.
    • Cognitive light cone - the outer boundary, in space and time, of the largest goal a given system can work towards. This is my attempt to pinpoint what all agents have in common, no matter their make-up or origin: animals, aliens, AI, swarms, etc. can all be placed on a chart showing the scale of the goals they are capable of pursuing. See The Computational Boundary of a 'Self': Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition.
    • Ionoceutical - a biomedical intervention that specifically targets the bioelectric interface exposed by cells - ion channels, gap junctions, and similar machinery that control how cells and tissues process information and regulate each other's behavior (individually, and in collectives such as tissues and organs).
    • Morphoceutical - a biomedical treatment (drug, device, and/or set of stimuli) that exerts its function by re-setting the anatomical setpoint (encoded target morphology) in a patient's tissues and thus leveraging the homeostatic mechanisms which will implement it (in contrast to current approaches which seek to specifically implement a physiological state directly).
    • Polycomputing - this idea refers to a view of computation as fundamentally observer-driven: what computation a given machine (living or otherwise) is doing depends on which computational lens some particular observer (including the system itself) is using to understand and manipulate a process. Given that, it is clear that the same physical set of processes might be simultaneously implementing multiple different computations. This idea is explored here.
    • Selflet - a Selflet refers to a thin temporal slice of a cognitive being (in a human, it would be measured in hundreds of milliseconds) across time, as in the "space-time bread loaf" of Special Relativity. It emphasizes that cognitive agents like us are not static, perduring entities but dynamic patterns that have to re-construct themselves and their past from the currently-available memory engrams in brain, body, and environment. Selflets are snapshots of a mind's "now" moment, and some large number of Selflets integrate into what looks, to outside observers, like an entire agent lasting through time. This model facilitates thinking about memories as messages from your past self, and actions as constraining and enabling your future selves by deforming the energy landscape of the options available to you in the future (via actions that change environmental features and your own structure and information content). This model thus exploits similarities between lateral interactions between agents and "vertical" interactions between a single agent's past and future selves.
    • Synthbiosis - this one was actually invented for me by GPT-4, to capture the symbiosis of evolved and engineered material in novel chimeric configurations (e.g., cyborg, hybrot, and other novel composite beings at the level of cells, organisms, and societies). In its own words: "this new word is derived from the Greek word "σύνθεσις" (synthesis), meaning "putting together," and "βίος" (bios), meaning "life." The -sis ending is also used in other terms denoting interaction or association, like "symbiosis." Synthbiosis signifies the flourishing relationship between living and artificial or engineered forms, portraying an image of different entities coming together to create something new and beneficial. This term emphasizes the interdependence and co-prosperity that arises from this unique interconnection, reflecting the concept of "thriving together."
    • Target Morphology - the anatomical pattern toward which a cellular collective will work, and the pattern that, once achieved, causes the proliferation and remodeling to stop. The ability to reach a specific target morphology is the competency of cellular collectives to navigate morphospace, and can (as we have shown) be edited in the absence of any genetic change.
    • Teleophobia - the unwarranted fear of erring on the side of too much agency when considering a new system (coupled with a lack of concern about attributing too little).
    • Xenobot - a self-organizing proto-organism discovered by the team at ICDO which forms when frog embryonic skin cells are liberated from the influence of the other cells and allowed to reboot their multicellularity. Xenobots move around on their own and have a number of other amazing behaviors, serving as a biorobotics platform and helping us to understand the plasticity of life (see full discussion in Biological Robots: Perspectives on an Emerging Interdisciplinary Field). A better definition considers the Xenobot to be not just the proto-organism itself, but the whole multi-scale system comprised of the bots, the AI used in their design, and the human scientists who are using this platform to explore the possibility space of form and function. This expanded view sees Xenobots as a platform with which to understand the latent space of biology.
  • Each of these is a metaphor, like all scientific concepts - a package of methods and relationships between ideas that offers ways to think. It is a lens through which we can choose to view a particular context. All of these terms are observer-dependent and relative to a reference frame (a problem space), and each one has advantages and limitations. The quality of each lens is determined by how much prediction, control, and insight (ability to drive novel questions and research progress) it enables in a specific context (not by philosophical pre-commitments). Thus, proposing a precise definition of each of these is essential when embarking on a specific discussion about some system.

    Empirical utility (facilitating the making of testable predictions, and more importantly, driving new experiments and discoveries), not philosophical (armchair) commitments, should be the criterion by which such definitions are evaluated. Moreover, a scale of analysis should be made explicit in all definitions. For example, many attempts to define decision-making break down because of an implicit focus on the molecular event itself. But this is not the only level of analysis and may be sub-optimal in many circumstances. For example, the degree to which an event is a decision has to be judged with respect to how much the optimal understanding and control of that event by an observer (e.g., a scientist, or another biological system) will require knowledge of the large-scale goals, adaptive cycles, reward functions, and context - it is only defined within a context that may include evolutionary or engineering cycles. Specifically, the degree to which a process is a decision is proportional to the size of the informational light cone of spatiotemporal events that need to be considered for optimal understanding and management of that event. Very mechanical behaviors can be captured by the immediate, local pushes and pulls occurring to an object. In contrast, properly understanding the events happening in a complex agent requires considering events in the distant past (due to memory), the distant future (due to predictive capacity), and in other locations (due to integrated information across space). Similarly, the closely-related concept of free will cannot be sought in molecular events (where only mechanical necessity and quantum randomness can be found) but in the large-scale behavioral functions that are best understood as a cognitive system curating its own structure and future possibilities by rich chains of action that take place over time.

    Some definitions of relevant words as I use them:

    • Agent - a system that executes a perceptual control loop: it takes measurements, compares them to a setpoint, and acts to minimize the error with respect to that setpoint and its prior expectations (see the sketch after this list).
    • Self - an interlocked triad of: a space within which it operates, a cognitive light cone demarcating the size of goals in that space which the system can pursue, and a set of cognitive/computational processes that allow the system to navigate the space with some degree of competency. It is recognized in 3rd person (by external observers such as scientists, parasites, and conspecifics), in 2nd person (by attempts to control other agents in a communication/instruction mode), and in 1st person (by the system itself, as the inner perspective of valence, attention, and decision-making).
    • Goal - A goal exists whenever using the tools of goal-based frameworks (e.g., cybernetics) gives engineers good traction on the system (prediction and control). I don't mean "purpose" (high-level goals where an agent has the meta-cognition to think about having goals and what they might be). And, I think even bowling balls and such have the tiniest, most basic, nano-goals because they follow least-action laws. The word "goal" is useful to indicate situations where an engineer does not need to micromanage some aspects of the system because they can offload that onto the system itself (when it has some ability to pursue goals without you controlling each step). A goal-directed system lets you, as the engineer, trust it to do stuff when you're not around. So, does a bowling ball have such a property? Yes. Imagine you're an engineer building a roller coaster at an amusement park. What you have to do is engineer a way for it to get up the hill. But you don't have to worry about getting it down the hill - it already does that, autonomously. Following energy gradients is the simplest, most basic goal there is, and it's not 0 on the scale of goal-directedness. Sure, it's incredibly minimal and doesn't quite look like the goals smart beings have (although some human psychological phenomena can be cashed out as a system following an energy landscape in terms of basic drives and rewards, etc.), but I think it's not 0. Crucially, thinking of goals this way (as a continuum) avoids pseudoproblems brought on by binary frameworks, such as the following. Do single-celled organisms have goals? If so, well then little bags of chemical networks have goals. If not, well then remember that we all formed from single cells (a fertilized egg), and if that didn't have goals, and you do, then you need to have a story about when during the gradual, extended process of development they show up.
    • Intelligence - a degree of capacity for problem-solving, also known as competently navigating a problem space to reach specific goals despite barriers and novel circumstances. Intelligence is an estimate made of a system by some other observer (i.e., it's in the eye of the beholder and relative to that observer's skill in formulating hypotheses about problem spaces, goals, and competencies and then using perturbative experiments to test them). Intelligence is but a subset of the cognitive repertoire more broadly, because some systems have other components, such as exploration, play, the ability to change goals (2nd-order intelligence), the ability to commit to becoming better at meeting and setting goals (3rd-order), etc.
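
    As a concrete companion to the "Agent" definition above, here is a minimal sketch of a perceptual control loop (names and numbers are hypothetical): the agent never sees the world directly, only noisy measurements, which it compares to its setpoint, acting to shrink the error even while the environment drifts:

      import random

      SETPOINT = 37.0   # the state the agent tries to maintain (e.g., a temperature)
      GAIN = 0.5        # how strongly error drives corrective action

      def measure(world_state):
          # Perception, not reality: the agent only gets noisy measurements.
          return world_state + random.gauss(0.0, 0.1)

      world = 25.0                      # start far from the setpoint
      for _ in range(40):
          percept = measure(world)      # take a measurement
          error = SETPOINT - percept    # compare to the setpoint
          world += GAIN * error - 0.2   # act to minimize error; the world also drifts
      print(f"final state: {world:.1f} (setpoint {SETPOINT})")

    Even this few-line homeostat shows the signature of minimal agency: it reaches and holds a state near its setpoint without anyone micromanaging the trajectory.
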
  • There really are no frameworks for guessing in advance how many "beings" exist inside a chunk of neural tissue like a brain, or for knowing how to evaluate an alien species with a radically-different architecture, with respect to cognitive properties. We have very few examples to work with (and for consciousness, just 1 - yourself); we need to be open to novel embodiments of mind.

    It's easier if we give up binary dichotomies. In the pre-scientific past, the options were just 2: "mind like a human's" or "mere physics, like everything else". If those are your only 2 options, then of course scientists might want to say that evolution is completely blind, robots and cyborgs are completely machines with 0 cognition, etc. Acknowledging a continuum frees us from drawing arbitrary distinctions and lets us get on with the more fruitful research program of importing powerful tools from behavior science (beyond neurons) to make testable hypotheses about what kind, and how much, cognitive capacity we can usefully demonstrate in any system.

  • People often do, but it’s not a good idea. The position of any system on the Spectrum of Persuadability is a matter of experiment, not armchair preconceptions. On the left side of that spectrum are things like bowling balls – to predict and control them, you focus on their landscape. But to control and predict complex systems (like living beings, and some kinds of robots and autonomous vehicles) the real landscape is not nearly as important as the agent’s *perception* and internal beliefs about the landscape.

    So, are cells and tissues more like a bowling ball on a landscape or like a mouse on a landscape?  The reason it matters is that it determines which kinds of tools (practical and conceptual) you are empowered to use. The standard assumption of biology is that concepts from chemistry and physics are exclusively the right tools for developmental and regenerative biology (bottom-up approaches). But this is a limiting assumption; treating this as an empirical question instead of a philosophical commitment facilitates research and new discoveries.  When we do experiments to probe systems using techniques from other fields (e.g., behavioral science), we often get surprises – intelligent behavior in unexpected places. There needs to be a kind of impedance match between tools and what they are supposed to study. The tools of chemistry and physics are low-agency apparatus, and thus they only see mechanisms and not mind.  It requires a mind to be able to detect agency and interface with it.

    My lab has been pursuing the hypothesis that the tools of computer science, cybernetics, and behavioral/cognitive sciences are even more apropos, for some purposes in the biological sciences, than those of chemistry and physics, because living tissues are not the kinds of simple machines that are appropriate for those low-agency approaches. By borrowing concepts from fields that focus on information and cognition, we discover novel competencies that we can exploit in biomedical and engineering settings.

  • I claim 2 key things.

    A) The amazing capabilities of morphogenesis are *not* simply the fact that by following simple rules, complexity reliably emerges. This open-loop emergence indeed does not require any of the cognitive approaches. Instead, what we see morphogenetic systems doing is not just rolling forward toward emergent outcomes but taking novel actions in order to reach the same goal despite perturbations. These homeostatic and allostatic competencies are not found in simple emergent systems and, by their very nature, begin to necessitate tools from the domain that best deals with agents with goals and problem-solving competencies: behavioral and cognitive science.
    B) If it were optimal to predict, control, and engineer such systems using standard tools of emergence and complexity science, that would indeed not require my approaches. The claim is that treating morphogenesis as a collective intelligence operating in morphospace provides additional control and discovery capabilities over competing traditional approaches; this is reviewed in many of our papers.

  • Assuming that these terms have binary, sharp definitions leads to thorny pseudoproblems in which it's very hard to see how these mentalistic features arise in a physical world. Instead, we should think of them as different lenses through which we see events - sometimes the physical lens is most useful, sometimes the agential. To make it clear that there is no significant gulf between physics and cognition, always ask "what would a truly minimal, evolutionarily-ancient version of this capacity look like?". This is why issues of basal cognition don't ever depend on the details of what specific forms (paramecia, Physarum, etc.) can or can't do. Regardless of whether a given simple creature has or doesn't have a certain type of learning, for example, we know that there must be some simple form, produced by a gradual process of biological reproduction, that is a troublesome case sitting between obvious cognition and simple responses. If we didn't yet have data on basal cognition, we could still be assured that we simply hadn't looked at the right problem space in the right way, for some microbe or other creature.

    Unavoidably, if you go back far enough through evolution, the most minimal version of "decision", "memory", etc. will look like physics. The journey toward advanced cognition is gradual - there is no bright line; and difficult-to-classify cases are guaranteed to exist because of the evolutionary continuum (and the ability to make a chimeric system between one that has the property and one that doesn't). The difficulties disappear if we learn to ask not "whether this system is..." but "how much... and what kind of... does it have?".

    We must also avoid the tendency to continuously move goalposts. This happens all the time in AI research, where people say that whatever is doable by machine now must not really be AI - true intelligence is whatever we can't engineer yet. When research uncovers the mechanisms underlying any example of basal cognition, people have the tendency to say "ah, I see how that works, so then that's just physics, that's not real memory/decision/cognition". We have to get over the idea that seeing a causal mechanistic chain automatically evaporates cognition or agency. Many people are (implicitly) still expecting some sort of magic underneath that is dispelled by clear explanations and mechanisms. Of course it's physics underneath - what else could it be? The problem is that we shouldn't be looking for cognition at the lower levels - it's apparent when looking at the system top-down (in cybernetic descriptions of agents' teleonomy), not bottom-up.

    Binary "real cognition" (to be contrasted with the "metaphorical cognition" to be found in cells, tissues, etc.) is a pseudo-scientific folk notion that doesn't take evolution or bioengineering seriously. Thinking of agential models of unconventional agents as "just metaphors" ignores the reality that all scientific concepts are metaphors; the question is not whether something is a metaphor, but what practical advantages any given metaphor enables. No definitions in this field which posit sharp lines are likely to survive the next few decades of bioengineering advances.

    The same is true of anthropomorphism - there is no such thing; humans have no magic that can mistakenly be bestowed on others. We have to get over our teleophobia and realize that human minds have no monopoly on decision-making, intelligence, and goals. If the expectations for those features are scaled appropriately to other systems of study, it is reasonable and essential to look for them in other implementations. All of these advanced human capacities evolved from much simpler roots during the evolution of life. The key is to formulate models that use the optimum degree and kind of cognition to model any system most efficiently.

    To help think about these things, work backwards. Don't start by asking if amoebae are conscious; start by acknowledging that you are, and then ask yourself: on your journey backwards to a quiescent oocyte (or evolutionarily, to a lower primate and back to a microbe), when does this property wink out? Nowhere - you will not find any clean line where a certain stage has it and the stage just before doesn't. A gradualist (continuum) view is the only defensible position, I think.

  • Cognitive claims are just engineering protocol claims. When you say that system X is some specific level of cognition, what you are really offering is a list of engineering protocols that are good for managing it, including how much autonomous functionality can be expected from it. The level of cognition of a system can be defined as the highest level of cognition that it is helpful to attribute to it when attempting to predict, control, or communicate with it. It is the cognitive level of the most efficient model on the persuadability continuum that you can apply to the system. This means it is observer-dependent, not objective/unique. Under this pragmatic stance, a level of cognitive sophistication applies not to a system but to the interactions an observer can have with that system - it's in the eye of the beholder. Thus, when you estimate the intelligence or cognition of a system, you are in effect taking an IQ test yourself, because it requires a certain degree of intelligence to recognize it in others, and it's easy to miss in unconventional agents. If you don't know what problem space the system is operating in and can't recognize how well it navigates that space, you will under-estimate cognition, often to great opportunity cost. Turing saw this clearly, framing his classic test "in the eye of the beholder".

  • No, just the opposite. My work is fundamentally rooted in the organicist tradition. But I reject the simple dichotomy, binary thinking, and zero-sum-game approach that says that in order for conventional living beings to inspire the necessary amount of awe and respect, the rest of the universe has to provide a strong contrast and be entirely mindless. My framework does not reduce the importance, magic, or moral worth of living beings; rather, it hopes to give insight into their essential nature that goes far beyond familiar implementations. I think that the holistic, organicist community does not take their own views seriously enough and stops short of where these ideas really need to go. It's not that living things are less amazing than we thought. It's that we did not properly appreciate what "mere matter", algorithms, and the laws of cybernetics were actually capable of. Compared to popular approaches to this deep question, my framework sees more life and mind, not less.

  • Yes and no. It's certainly not like the computer architecture most of us use today - a linear, deterministic, centrally-controlled process. But it does have some features which concepts in computer science really help to understand. These key similarities include the ability of multiple subsystems to encode and process symbols, to be reprogrammable (new behavior patterns from the exact same hardware), multi-scale causality (yes, chemistry at the bottom; but also algorithms/cognition at the top, which have causal power), and perhaps the most powerful concept of all: abstraction layers, which hide the complexity of the micro-scale details underneath to allow efficient control by hacking the system at higher levels, using targets that do not exist at lower levels. This allows evolution to work over a highly competent material, and cells and tissues to behavior-shape each other in complex and adaptive ways.
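
    A toy sketch of the abstraction-layer point (all names here are hypothetical illustrations, not a real API): the caller programs the system by setting a coarse-grained target variable, while the layer beneath translates that into whatever low-level states happen to realize it - the caller never touches, or even sees, those details:

      class BioelectricLayer:
          """A high-level interface that hides ion-channel micromanagement."""

          def __init__(self):
              # Low-level state, invisible to callers (toy values).
              self._channels = {"K+": 0.8, "Na+": 0.1, "Cl-": 0.3}

          def set_target_voltage(self, millivolts):
              # Callers specify only the coarse-grained master variable; many
              # different channel configurations could realize the same voltage.
              self._solve_channel_states(millivolts)

          def _solve_channel_states(self, millivolts):
              ...  # micro-scale detail, hidden beneath the abstraction layer

      tissue = BioelectricLayer()
      tissue.set_target_voltage(-20)  # hack the system at the level that matters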

  • Because our sensory organs evolved to look outward, calibrated for medium-sized objects moving at medium speeds - our training set has been observing behavior as motion in the 3D world, and that is the kind of intelligent problem-solving we are good at recognizing and managing. If we had senses (like bio-feedback) that routinely let us directly feel how well our inner organs were navigating physiological space every day, we would have no problem recognizing the function of the pancreas (for example) as intelligent behavior.

  • A) One claim is that we can just do the experiments without needing all the philosophy. I think there is no such thing as experiments without a philosophy, explicit or unexamined, that constrains some approaches and facilitates others. My claim is that over the last 30 years, we've done experiments that were novel (not previously done) because other competing views did not suggest those experiments. Thus I believe this framework generates novel discoveries and empirical progress, as frameworks should.
    B) Organicists tend to like the part where I extend mind to unconventional aspects of the biosphere (the holistic aspects), but they do not like the fact that in my framework, this merges smoothly and continuously into "machines" on the left side of the spectrum - they prefer a sharp separation, and worry about an engineering approach that they think will diminish the moral worth and majesty of life.
    C) Molecular biologists go the other direction - they tend to like the engineering and computational metaphors, but don't approve of the claim that this merges smoothly into cognitive science on the right side of the spectrum. They worry about profligately painting mental terms onto things which are well-described by mechanistic approaches (which my framework explicitly does not do).
    D) There is also lots of resistance of the form "That's just chemistry and physics, it's not cognitive", which rests on philosophical commitments about what should and shouldn't be cognitive (and unspoken, quasi-religious assumptions that humans are somehow magical and that something other than chemistry and physics should be found underneath the cognition). I claim that such views should be empirical claims, not a priori feelings, and need to be tested for their utility in driving research; i.e., it's then on the skeptic to specify what they think the necessary and sufficient conditions should be.

  • One implication is that a crucial test of general AI should be the capability of detecting agency in others. Machine learning systems should not only exhibit cognition; one of their skills must be to recognize and characterize cognition in the other systems they interact with. Synthetic cognitive agents should not only be able to pass Turing Tests (or mini versions thereof, in other spaces) but should also be able to administer them.

  • Useful definitions of human need to be developed for future discussions of how upset we should be when our bodies and brains are replaced with novel architectures or evolved modifications (or the natural species is supplanted entirely), how to estimate the capacity of cyborgs etc. for agency and moral judgement in legal settings, and for ethical considerations of responsibility toward beings whose composition and origin are very different from our own. I don't know the right answer, but I suggest that one useful direction is to define a human as a being that can harness its IQ and goal-directed behavior at a specific level of compassion: humans have a larger cognitive boundary than other beings known to date, and we can define as human a being with a minimal level of capacity to pursue goals that are aimed outwards (rather than at its own goals) - at increasing the well-being of others. "Human" should be a term that indicates achievement of a level of ethical sophistication, not directly derivable from genotype, composition, or origin story (evolved, engineered, or a mix of the two).

    Indeed, the "proof of humanity certificates" which are being developed in this time of advances of AI put the problem most clearly. What do you really want to know, as proof of "humanity"? Is it having a natural human genome, or a standard evolved set of anatomical structures? I don't believe those are useful criteria. What we really want to know, when making sure we're dealing with a human, is that they have a minimal level of competency for compassion, the right size of cognitive light cone to be able to care about the things we care about, and face the same existential battles that we do (autopoietic self-construction, an impermanent fluid self, limitations that drive and constrain actions, etc.) - beings who meet those criteria are the ones with whom we can have human relationships. The rest (what combination of evolved/engineered materials they are made of etc.) are as irrelevant as other details of origin and appearance which society has fought hard to dethrone as metrics for how we should treat each other.

  • Learning is a more effective version of what we attempt when we try to micromanage the function of a brain from the outside; a learning system is doing the same thing to its own brain - writing into the memory medium and controlling effectors, but doing it better than our clumsy external interventions. Evolution provides several physiological software layers on top of the lowest-level molecular modules because that's the most efficient way to control them (it's using the higher-level interfaces), and we should do it too - transformative regenerative medicine by taking advantage of the intelligence of the tissues, which enables us to work in simpler reward spaces, not gene expression spaces - using stimuli, not rewiring. See Top-down models in biology: explanation and control of complex living systems above the molecular level.

  • No, see the recent advances in information theory:

    Consider the Game of Life cellular automaton. This world has a deterministic, simple physics and you can predict all microstates going forward into infinity. And yet, if our visual system wasn't tuned to find "persistent moving objects", we would have no concept of a "glider" and then we wouldn't think of making Turing machines out of glider streams: our engineering capacities in this space are directly potentiated (i.e., objectively made better) by having the ability to conceive of macro-scale entities.
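
    Here is a runnable sketch of that example (standard Conway rules, nothing added): at the micro level there are only local birth/survival rules, yet the same five live cells reappear one step diagonally every four ticks - the "glider" exists only for an observer who tracks macro-scale objects:

      from collections import Counter

      def step(live):
          """One Game of Life tick; `live` is a set of (x, y) cells."""
          counts = Counter((x + dx, y + dy)
                           for (x, y) in live
                           for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                           if (dx, dy) != (0, 0))
          # Birth on exactly 3 neighbors; survival on 2 or 3.
          return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

      glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
      state = set(glider)
      for _ in range(4):
          state = step(state)
      # The micro-rules never mention "gliders", yet a macro-object persists:
      assert state == {(x + 1, y + 1) for (x, y) in glider}
      print("same glider, shifted one cell diagonally after 4 ticks")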

    Also, imagine what would have happened if, at the turn of the 20th century, physicists had thought that they could track each molecule of a gas. We might have missed out on the very deep discoveries of thermodynamics, which resulted from coarse-graining and taking seriously higher levels of description. Modern biologists have the feeling that soon we will track every molecule (through big data and omics approaches), and this is keeping us from finding things like a Boyle's Law for biology. Always looking at the most detailed level can obscure great truths.
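
    The kinetic-theory version of that point can be put in a few lines of code (a toy 1-D "gas" with made-up numbers): every individual particle speed is tracked explicitly, yet the lawful regularity - pressure times volume constant at fixed temperature - appears only in the coarse-grained aggregate, not in any single trajectory:

      import random

      def pressure(box_length, speeds, mass=1.0):
          # A particle with speed v hits a given wall every 2L/v time units,
          # delivering 2*m*v of momentum per hit: average force = m*v**2 / L.
          return sum(mass * v * v / box_length for v in speeds)

      random.seed(0)
      # One fixed micro-state: 100,000 particle speeds (i.e., fixed "temperature").
      speeds = [abs(random.gauss(0.0, 1.0)) for _ in range(100_000)]

      for volume in (2.0, 1.0, 0.5):
          p = pressure(volume, speeds)
          print(f"V={volume:<4} P={p:>9.0f} PV={p * volume:.0f}")  # PV stays constant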

    The bottom line is that we need to outgrow our teleophobia, and realize that underestimating agency is as bad as overestimating it.

  • Each subunit has its own experience (including us). To a cell, competently going about its business of maintaining physiological homeostasis and planar polarity with its neighbors, getting even a glimpse of the immensely huge and alien goals of the person in whose body it lives would be a horror that even Lovecraft could not have imagined. As yet, we have no inkling of how to detect what supersystem we may be part of, or what problem space it is navigating.

  • My work is mostly about objective, external phenomena such as cognitive behavior. But I can say a few things. First, I think that Consciousness cannot really be studied in 3rd person. The only way to do it is to become part of the experiment; and, as mystics have long said, you can't stay the same while doing that (unlike normal, objective, 3rd person science). It can only be studied in 1st person, for example by modifying your own conscious experience by merging with your subject (in a stronger way than seeing data about their brain come from your visual system looking at instruments). See the last figure of Technological Approach to Mind Everywhere (TAME): an experimentally-grounded framework for understanding diverse bodies and minds.

  • Engineering is not just adding new circuits or components, it's more general - the rational modulation of natural systems to change their functionality or behavior. In the case of the Xenobots, we did something interesting: removing influences and constraints. By liberating the skin cells from the rest of the embryo, we unlocked a bunch of potential (of these competent subunits) which was being kept suppressed by developmental signals from other tissues. These skin cells were being told to have a quiet, boring 2-dimensional life as a barrier layer of an active system (the tadpole). On their own however, we see their default geodesic in problem space: what they would rather do, when left to their own devices, is to have a more exciting 3-dimensional life as a Xenobot. This is control by releasing constraints to reveal the native problem-solving capabilities of cell collectives that were not apparent in their default context.

    Similarly, robotics is not about micromanaging every functionality and programming every capacity directly. That is how robotics started, but it is just an early phase of the field, where the engineer works with passive, dumb parts. The more advanced phase, which we unlock by working with biological components, is that we can work with competent parts that do things we don't always have to micromanage. Learning to create autonomous machines with emergent functions (robotics) involves guided self-assembly, where we provide signals and conditions but rely on multiple levels of competency and spontaneous behavior from our materials. It is an outdated view to think of robots as necessarily being highly predictable, metallic, and precisely-engineered at all levels. Xenobots are an ideal example of robotics as a collaborative process between the human designer and materials that have competency at multiple scales.

  • Our projects are basic research aimed at understanding fundamental mechanisms and dynamics. However, once uncovered, these mechanisms suggest control points for biomedical intervention. Thus, our work suggests novel approaches to the detection, prevention, and repair of birth defects (especially involving the laterality of the heart and various internal organs and brain/craniofacial disorders), new diagnostic and treatment modalities for some types of cancers, approaches to induce regenerative repair of limbs, eyes, spinal cords, and face, and the discovery of new nootropic drugs (compounds that increase intelligence or improve memory for example). Specifically, our strategy is to find the highest-level signals with which we can communicate to cell collectives to build specific shapes, avoiding bottom-up micromanagement of pathways.

  • This term can usefully mean different things in different contexts, but how about this as a definition focused on the multiscale competency architecture. Health is a descriptor of the degree to which the flow of control successfully spans levels of organization. That is, higher levels (e.g., the social mind and advanced cognition) successfully deform the energy landscape for the lower levels (organs, cells, and molecular pathways), while those lower levels competently solve problems to allow the higher levels to communicate, delegate, and incentivize instead of micromanaging details. The levels of competition are kept just high enough to enable coordination, stress is low because each subsystem at its own scale and in its own space is close to its homeodynamic setpoint, and the boundaries of each agent at its own scale are crisp and obvious to all (avoiding dissociative identity defections, whether psychological or cancerous). Adaptive control and communication relationships between Selves at all levels of organization within the body, not just lateral homeostatic states, are what is crucial for optimal health, from molecular pathways to societies and ecosystems.

  • I think that evolution is not just about "how to make feature X". The parts are very competent, and they will do things on their own, by default (as our Xenobot and other experiments show). The real trick is to bend their action space so that the system's subunits do (or don't do) what's good for the large organism - it's not just about evolving mechanisms to build organs. The default has action and goal-seeking at every level - the parts do things in their local problem spaces if left to their own devices. So, for evolution to adapt structure and function, it's not all micromanagement - it's "guided self-assembly" and behavior shaping, the same way that bioengineers work not with passive materials but rather with agential matter. We have to modulate what the cells do - we don't micro-specify features, we try to guide them toward outcomes, if we can, but the cell collectives do all the heavy lifting. This is seen in cancer too. "Why is there cancer?" is the wrong question, because the default for cells is to replicate and migrate. The real question is, why is there ever anything but cancer - how does this normal behavior of cells get suppressed in vivo? The key effort is to achieve a mature science of collective intelligence, to learn to predict and control what the default geodesics are for cell collectives in morphological, physiological, transcriptional, and behavioral spaces.

    There are implications for the intellectual property system, for example. With classical, passive materials, where everything is in what the craftsman did, patenting the craftsman's recipe makes sense. That model is not suitable for work with active materials, where the inventor is a collaborator with the material - the outcome is partly the method but partly what you've discovered about the competency of the agential material. It's different than trying to patent natural laws, because those are (probably?) passive and constant. Whereas agential matter (biological components, and someday multi-scale engineered materials) is helping the inventor get it done - it's doing a lot of the heavy lifting, and we have to figure out how to patent cases like that. There will be more and more of this as tech evolves. It's probably the same as with inventions by AI agents - it's a collaboration.

  • Over and above useful synthetic living machines, sandbox systems like Xenobots are extremely safe ways to begin to hone the science of decreasing radical surprise. We work on creating tools to predict or manage what novel collective agents (from Internet of Things to robotic swarms to groups of cells in a petri dish to bacterial colonies) will want to do. We are surrounded by highly impactful technologies whose drives we do not understand; it is imperative to use model systems like Xenobots (swarms made of intelligent components) to begin to develop frameworks for understanding where complex systems' goals come from and how they can be guided toward life-positive outcomes.

  • One key aspect is morphological freedom (a.k.a., radical freedom of embodiment). We were all born into physical and mental limitations that were set at arbitrary levels by chance and genetics. Even those who have "perfect" standard human health and capabilities are limited by anatomical decisions that were not made with anyone's well-being or fulfillment in mind. I consider it to be a core right of sentient beings to (if they wish) move beyond the involuntary vagaries of their birth and alter their form and function in whatever way suits their personal goals and potential. We spend a lot of time talking about freedom of speech and behavior, but all of those are derived from the fundamental bodies and minds we have - disease, aging, birth defects, and the vagaries of the random evolutionary process have embodied us in ways that fundamentally limit the kinds of thoughts we can have and what we can achieve. It is everyone's right to improve as they will, and our duty as scientists (and supporters of progress) to enable methods for liberation from arbitrary constraints of the evolutionary process as it happened to occur on this planet.

    A second aspect is compassion. Each of us has a cognitive light cone which determines the size of goal states we can actively care about. By increasing our cognitive capacity, and enlarging that light cone, we become capable of greater compassion - we become able to functionally care about the well-being of more sentient beings. This is not about feeling emotions (love) toward others, but about having the cognitive depth to actively work towards the improvement of the lived experience of all creatures. It is also not about raising IQ just for the sake of newer tech; the technology is just a tool, and the more fundamental goal of increasing intelligence is to increase the facility of practical care. If all this talk of inauspicious births, compassion, and liberation of sentient beings from suffering sounds familiar, it should - there are links here to ancient ways of thinking about the world.

  • There is no magical line that separates life-improving techniques from ones that are "too much change". When early hominids went into a cave to get out of the cold rain and avoid pneumonia, they were already on a continuous journey to setting bones, brain surgery, and marrow transplants. We always use our intelligence to improve our lot and fight the vagaries of a dangerous world; there is no principled way to draw a line between improvements that are allowable and ones that should be prevented. Each technology can be debated on its own pros and cons, but there is no sense in which one can go "too far" along the path of improving life for all. Moreover, we have a moral responsibility to use our intelligence to improve life for every being.

    The sense of what counts as "too far" is relative; I imagine that our current wrangling over these technologies will sound to our future descendants the way this sounds to us: "Og make wheel? Og go too far!! Wear fur, make fire to cook, ok, those good; but what next - plant seeds? Set bones? Those go too far - it's playing gods. Must make taboo!!" If that sounds too outlandish, consider the reaction of crowds to the audacity of the first umbrella.

  • No longer being able to rely on what something is made of (biological vs. metallic composition) or where it came from (evolved or engineered) to determine how you should relate to it - that does call for new ethics beyond "how much does it look like a human brain" (see Synthetic Living Organisms: Heralds of a Revolution in Technology & Ethics). However, our outrage should be proportionally calibrated. Before one worries about autonomous pieces of skin, we have to deal with the millennia-long history of shaping living things toward others' purposes: ancient practices of modifying pigs' snouts to keep them from rooting (and thus dependent on humans), and, more recently, factory farming. The abhorrent conditions for complex animals in factory farms are by far a bigger problem world-wide than anything that is happening with skin cells allowed to reboot their multicellularity.

  • It is likely that most cells can self-assemble into novel forms of life and behavior. The problem is that, currently, interesting functionality and problem-solving in spaces other than the familiar 3D space of motile behavior is hard to detect. Thus, we started with cells that could produce movement and morphogenesis - something easy to recognize and study. It is quite possible that many seemingly passive organoids and other ex-vivo bioengineered constructs are doing fascinating things in transcriptional, physiological, metabolic, and other spaces, but no one knows this because people tend to equate intelligence with movement. We are working to develop formalisms and tools to detect other kinds of problem-solving and exploratory behavior in unconventional embodiments, and synthetic living forms are an excellent tool for the field of Diverse Intelligence to extend our own IQ in recognizing novel functionality in unfamiliar guises.

  • Don't confuse "this is how it's always been, and how it is now" with "this is how it should be" or "this is how it has to be". In the pre-scientific era, it was possible to hold a worldview in which the status quo was set up by God, and thus was the way things should be because it was set up to be the best possible way. We now know that the state of the biosphere, our own anatomies, capacities, and behavioral proclivities are all outcomes of an evolutionary process that rewards prevalence, not quality. Evolution is a meandering search that does not seek to optimize our happiness, quality of life, intelligence, ability to see truth, or any of the other things we value. It basically just optimizes for adaptive biomass - whatever leaves the most life around to be observed. Surely we can do better than the vagaries of chance and necessity have done so far.

    Consistent with this, the status quo is pretty terrible - disease and arbitrary limits on potential and quality of life abound. Those limitations (what some people call "too far" to change) are not set by a wise creator who knows what's good for us - they are purely accidental, driven by the meanderings of the evolutionary process through the space of possibilities. There is nothing sacred or beneficial about the limits we face in our baseline state. Once we realize that there is no one setting a beneficial agenda, we have a moral responsibility to do it ourselves. There is no one else to do it for us, and failing to pursue scientific ways to improve life is a cowardly abdication of that responsibility. It is our duty to improve ourselves and our world, which fortunately we can do because our cognitive capacities enable working toward specific goals, not just blind local search. If we set our goal space to be inclusive and very large, the combination of intellect and compassion offers much opportunity to do better than "natural".

    There's even an interesting component of this in the Judeo-Christian origin story. Why did God have Adam name the animals in the Garden of Eden - why not tell Adam what they were, or have the angels pre-name them? Because it is up to us to understand the world around us - to name things (discover their true nature) - and to create new things that didn't exist before, naming them (in the scientific sense of understanding their essence) as we go. This doesn't mean we shouldn't remain constantly humble about our many deep areas of ignorance or the unexpected consequences of any action. But the solution to those limitations is more science, not less: a mind open to life-as-it-can-be, striving for improvements for all, rather than the artificial, self-imposed limitations of a pre-scientific worldview in which we hope that someone else will do what needs to be done.

  • What keeps me up at night is the risk of committing the ethical lapse of not moving these discoveries to their full positive impact for humanity and other life forms, current and future. I worry about limitations of drive, vision, intellect, and commitment that would prevent us from acting on the moral imperative to use our minds to improve life for all and to live up to our full potential as living beings. Fear and lack of clarity lead to the opportunity cost of failing to address the enormous biomedical suffering in the world. These technologies can help us implement effective compassion and correct the unjust disparities resulting from an evolutionary and genetic lottery that distributes a range of bodily damage across the population.

    The ethics component here is not just about what could go wrong. People often focus on the potential problems, because "don't make things worse" hides the implicit assumption that everything is fine now and that we just need to make sure we don't ruin things. That is of course something to keep an eye on, but it neglects a huge part of the equation. Things are absolutely not fine now, as is obvious from the state of the world and the phone calls I receive daily from people with horrendous medical issues (there is an almost perfect split: the young, healthy callers say "stop this scary research", while those who are ill themselves, or whose children are, ask "what's taking you so long to find solutions"). The moral calculus of what to do must take into account the negative balance of failing to help those whose physical embodiments are impairing their quality of life.

    We now know that we have not been placed, with great care for our happiness and well-being, at some carefully-curated optimum of capabilities. There is nothing special, optimal, or "right" about our current levels of IQ, susceptibility to aging and disease, and various other limitations - these are just where the meandering process of evolution happened to bring us. It is up to us to rise to the challenge, move beyond the vagaries of our meandering history through genotype space and random external influences, and improve the embodied experience for all sentient beings.

  • My favorite is this passage, from Stephen King's story "The Little Sisters of Eluria":

    "Jenna?

    Nothing. Only the wind and the smell of the sage.

    Without thinking about what he was doing (like play-acting, reasoned thought was not his strong suit), he bent, picked up the wimple, and shook it. The Dark Bells rang.

    For a moment there was nothing. Then a thousand small dark creatures came scurrying out of the sage, gathering on the broken earth. Roland thought of the battalion marching down the side of the freighter and took a step back. Then he held his position. The bugs, he saw, were holding theirs.

    He believed he understood. Some of this understanding came from his memory of how Sister Mary's flesh had felt under his hands... how it had felt various, not one thing but many. Part of it was what she had said: I have supped with them. Such as them might never die but they might change.

    The insects trembled, a dark cloud of them blotting out the white powdery earth.

    Roland shook the bells again.

    A shiver ran through them in a subtle wave, and then they began to form a shape. They hesitated as if unsure of how to go on, regrouped, began again. What they eventually made on the whiteness of the sand there, between the blowing fluffs of lilac-coloured sage, was one of the Great Letters: the letter C.

    "Except it wasn't really a letter, the gunslinger saw; it was a curl.

    They began to sing, and to Roland it sounded as if they were singing his name.

    The bells fell from his unnerved hand, and when they struck the ground and chimed there, the mass of bugs broke apart, running in every direction. He thought of calling them back - ringing the bells again might do that - but to what purpose? To what end?

    Ask me not, Roland. 'Tis done, the bridge burned.

    Yet she had come to him one last time, imposing her will over a thousand various parts that should have lost the ability to think when the whole lost its cohesion... and yet she had thought, somehow, enough to make that shape. How much effort might that have taken?"