Table of contents:
Why you should think about the outcome of simulating algorithms (Motivating the question) [0/5]
What do you get if you simulate an algorithm? (Philosophy) [1/5]
Do (modern) artificial brains implement algorithms? (AI) [2/5]
Do biological brains implement algorithms? (Neuroscience) [3/5]
Implications of Simulating Algorithms: What is 1 + 2 + 3? (Philosophy) [4/5]
When science fiction becomes science fact (Major book spoilers!) [5/5]
Since I’m trying to compare the algorithms implemented in artificial brains with those implemented in biological brains, it would be a pretty moot exercise if biological brains didn’t actually implement any algorithms. My favourite human-friendly and human-relatable example of human brains implementing an algorithm is the one for perceiving the hue of light energy (i.e., colour). But we don’t have the full algorithm yet, nor are we really able to do the controlled experiments needed to test the algorithm causally.1
So, when I was looking for papers to answer a slightly different question, I came across this wonderful fly brain paper, which also happens to be a great example of how biological brains implement algorithms. It’s not the first to show an algorithm in a biological brain, but it is very elegantly and cleanly done. More importantly, although the paper is paywalled, it has very accessible (literally and intellectually) talks on YouTube from the paper’s authors (and a bioRxiv version with no paywall).
Just like Nanda et al.’s 2023 paper, the title of Lyu, Abbott, and Maimon (2021), “Building an allocentric travelling direction signal via vector computation”, makes the paper sound less exciting than it really is (why are papers like this?). The talks based on this paper, however, make it clear why this research is valuable for understanding brain computation in general.
These talks provide different levels of detail and accessibility. If you want more details of the experiment itself, watch the video from the first author. If you want the most accessible talk, watch the last one. If you want a more algorithmic focus, watch the second one.
“Building an allocentric travelling direction signal via vector computation” by Cheng Lyu (most detailed)
“Vector addition in the navigational circuits of the fly” by Larry Abbott (algorithmic focus)
Here’s the same-ish talk by the same person, but with more details in certain places
“How brains add vectors” by Gaby Maimon and Larry Abbott (most accessible)
I recommend starting with the most accessible talk and working your way up if you want a deeper understanding. For a general introduction to fly neuroscience, this lecture by Katherine Nagel is also helpful.
These talks happen to also be a wonderful real-life, human example of different “levels of abstraction” in explanations. I hope anyone thinking hard about the nature of explainable AI is watching closely and thinking about the implications of how we might get (or fail to get) one AI to explain how another AI (which could also be itself) works, if that is the sort of AI safety route you’re placing your bets on.
I understand that published papers are generally situated in very particular niches, and I may not be squarely within the intended audience, but I am still surprised this isn’t being shouted from the rooftops by the people who think a lot about artificial/biological comparisons, or brains and their possible computations.
Lyu, Abbott, and Maimon (2021) is all about fly brains, so let’s name a hypothetical biological fly Bob. (If you are already familiar with the paper, feel free to skip this description section.)
Bob has a problem. When Bob is flying, sometimes the wind blows Bob in a direction that is different from the direction its head is facing. How does Bob keep track of where it’s going in the world?
Or, imagine you were a passenger in a car. Your traveling direction is the direction the car is going (forward), and your heading direction is the direction your head is facing (also forward). Then, you turn your head to the right, perhaps to spot a cat cafe.
How is it that turning your head to the right doesn’t immediately mess up your sense of where you are traveling relative to some object in the world?
For an example of how you can lose this particular sense (of where you are traveling relative to some object in the world) without your eyes, simply close your eyes and try to walk straight. You probably can’t for long. (You also can’t do it swimming, or driving, in case you were curious.) For bonus difficulty, close your eyes, turn your head 90 degrees to the left or right, and then try to walk forward in a straight line while your head is facing a perpendicular direction.
Flies share the same difficulty when placed in the dark, and we can see it in their brain activity. The fly has an internal compass in its brain that is normally synchronised to some external landmark outside its brain (e.g., the sun or other things), regardless of the angle its head is pointing to.
(timestamp for the compass neuron visualisation: 11:23-11:52)
In the dark, the fly’s internal compass can’t sync to any external landmark (because its eyes aren’t getting that input signal, just like when humans close our eyes). So its internal compass signal drifts, and desynchronises (a bit) relative to the external environment. Biological systems are not perfectly noiseless.
But the fly walks “straight forward” in the dark by walking at whatever angle is needed to keep the internal compass at a constant 0 degree angle (shown in the blue signal below). This is why people can say things like: the fly “thinks” it is walking straight forward (according to its internal compass, reflected in its brain activity) when it is actually walking around in circles in the “real” world outside its brain (shown with the black line below; the world angle is shown on the x-axis, the ‘allocentric heading angle’, where -720 degrees is 2 full circles).
That is because brain processing contributes to output behaviour, just like internal variables contribute to a function’s output.
So the humble fly walks around in circles in the dark, when it has no external cues to sync to, because of its drifting internal brain compass. Just like a blindfolded human, or humans who have lost vision. Coincidence?
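If it helps to see that logic spelled out, here’s a toy sketch in Python (my own illustration, not the paper’s data or model): a walker that holds its drifting internal compass at a constant reading ends up turning through the real world. The drift statistics are made up, with a small bias added purely to make the circling obvious.

```python
import numpy as np

# Toy sketch: a walker tries to go "straight" by holding its internal compass
# reading at 0 degrees. In the dark the compass itself drifts a little every
# step, so holding the reading at 0 means the real-world heading equals the
# accumulated drift, and the path curls instead of staying straight.
rng = np.random.default_rng(0)
n_steps = 2000
# Made-up drift: a small bias plus noise, purely to make the circling obvious.
drift_deg = np.cumsum(rng.normal(loc=0.3, scale=0.5, size=n_steps))

world_heading = np.deg2rad(drift_deg)   # walking so the compass reads 0 at every step
x = np.cumsum(np.cos(world_heading))    # real-world trajectory, one unit per step
y = np.cumsum(np.sin(world_heading))

print("internal compass reading: 0 degrees the whole time")
print(f"actual turning in the world: {drift_deg[-1]:.0f} degrees")  # adds up to full circles
```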
Back to the main question. How does Bob keep track of where its body is going, if it can’t just rely on the direction its head is facing?
Well, in Bob’s brain, the algorithm below lets Bob know where its body is travelling relative to some landmark in the environment, like the sun. It doesn’t really have a name, so I guess I’ll call it the Heading To Travelling Direction (HTTD) algorithm for now, since it turns a heading direction signal (starting at the EPG neurons) into a travelling direction signal (at the PFR neurons).
The squares are names of groups of neurons in Bob’s brain, and the orange is the computation that one set of neurons does before it sends outputs to the next set of neurons in the algorithm. The visual motion signal comes from the environment, through the eyes.
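To make the vector idea concrete, here’s a rough numerical sketch in Python. It is not the actual circuit from the paper: the four populations, their ±45°/±135° offsets, and the simple gains are simplified stand-ins for the real left/right PFN populations and their anatomical phase shifts. The only point is that summing heading vectors, shifted and scaled by self-motion signals, yields an allocentric travelling direction.

```python
import numpy as np

def travelling_direction(heading_deg, v_forward, v_left):
    """Toy vector-addition sketch of the heading-to-travelling-direction idea.

    heading_deg       : allocentric heading (where the head points), in degrees
    v_forward, v_left : egocentric self-motion components estimated from optic flow
    Returns the allocentric travelling direction, in degrees.
    """
    # Four PFN-like populations, each representing the heading bump shifted by
    # a fixed offset (simplified stand-ins for the real anatomical phase shifts).
    offsets = np.deg2rad(np.array([45.0, -45.0, 135.0, -135.0]))
    # Each population's amplitude is scaled by self-motion along its preferred
    # egocentric direction (forward-left, forward-right, back-left, back-right).
    gains = v_forward * np.cos(offsets) + v_left * np.sin(offsets)
    # Vector sum of the shifted, scaled heading vectors (the PFR-like readout).
    angles = np.deg2rad(heading_deg) + offsets
    vec = np.array([np.sum(gains * np.cos(angles)), np.sum(gains * np.sin(angles))])
    return np.rad2deg(np.arctan2(vec[1], vec[0]))

# Head points north (90 degrees) while the fly moves forward and slips to its
# right equally fast: the travelling direction comes out northeast (45 degrees).
print(travelling_direction(heading_deg=90.0, v_forward=1.0, v_left=-1.0))
```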
In neuronal form, or “in the brain”, just (a set of) the PFN part of the algorithm looks roughly like this. (A fuller version with both the left and right sets of PFN neurons is here, but it gets confusing when you visualise all the neurons together.)
Visual motion (i.e., optic flow) is the difference in image that occurs when you move, as opposed to the difference in image when something in the environment moves. It is the part of the input that does not originate from Bob’s “brain” — the same as when you type “What is the best animal in the world?” as input into ChatGPT, the (vector/matrix of) numbers that the question gets turned into behind the scenes to be multiplied and added by the AI didn’t originate from the AI.
Regular optic flow is what lets you see motion in your regular life, so hopefully there’s no need for detailed examples. But here’s my favourite illustration of what non-human motion detection might look like if our human eyes were more sensitive to motion than they currently are.
Compared to certain species of flies, human motion detection is somewhere between 4-6x slower. Flies can detect flicker at up to 300 frames per second (fps). For comparison, a typical computer monitor refreshes at 60 fps and a movie plays at 24 fps. Flies would need movies playing at 300 fps to see the movie as a smooth, continuous stream the way we see things at 24 fps2, instead of as a series of still images like a slow flip book. This is why flies are so hard to swat, especially the sort of fly that preys on other flies. We move in slow motion compared to a fly’s motion detection capabilities, so they can see our hand coming miles away. But despite their superior motion detection, we humans can still swat flies by aiming for the space in front of the fly, in anticipation of where the fly will be once it takes off. In other words, by “looking ahead” (or, as some might say, “predicting”) where the fly will be in half a second.3
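For a back-of-the-envelope sense of that speed gap, here is the frame-time arithmetic using just the fps numbers above:

```python
# Frame-time arithmetic for the flicker numbers quoted above.
fly_fps, monitor_fps, movie_fps = 300, 60, 24

print(1000 / fly_fps)      # ~3.3 ms per frame at the fly's flicker limit
print(1000 / monitor_fps)  # ~16.7 ms per frame on a 60 fps monitor
print(1000 / movie_fps)    # ~41.7 ms per frame in a 24 fps movie

# Relative to these displays, fly vision resolves motion 5x to 12.5x more finely.
print(fly_fps / monitor_fps, fly_fps / movie_fps)
```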
Take Home Message: What is “Person” - “Brain”?
For the purposes of this post, you don’t need to understand exactly what a PFR neuron is or how it does the thing it does in the HTTD algorithm. The take home message for now is simply: biological brains implement algorithms too. This is the case even in a (comparatively) small brain like a fly’s, let alone a human’s.
And to be clear, a fly brain is far from a human brain in terms of complexity. But a fly does hold the bare minimum qualification of having a biological brain, which can be important to some people. So, that’s why I picked this example to make my point.
Flies are biological, carbon-based life forms. Flies have the same A-T and G-C base pairs of DNA that humans do. Their neurons generate electrical action potentials in roughly the same way human brains do. They have roughly the same suite of neurotransmitters human brains do.
Flies use the regular canon of neurotransmitters (dopamine, GABA, glutamate, acetylcholine, serotonin, etc.) to communicate between neurons, though flies do possess their own norepinephrine equivalent, octopamine. (Shin, Copland, Venton, 2018)
I’ll leave the more in-depth fly-human comparisons as a Google search for you, but hopefully this post leaves you with a sketch of how biological brains really are computing systems too. If you think the navigation circuits in a fly wouldn’t apply to a human (surely we’re much more sophisticated and we don’t have heading direction neurons or a literal compass in our brains too), how much money are you willing to bet on that hypothesis?4
Insofar as every single one of the cognitive and neurosciences is slowly revealing how the brain generates the conscious and unconscious experiences a biological organism can have, elucidating which ones it cannot have, and why it cannot have those experiences5, it would be committing an unnamed fallacy to ignore all that work because of … reasons.
I’ll name the fallacy “mistaking the territory for merely the map” for now (in ironic inversion). This fallacy can be seen in action here or here:
In other words, to conceive of robots as ‘human-like machines’ (and therefore consider their rights), implicitly means to first perceive human being in machinic terms: complicated biological information processing machines. Once we see ourselves as machines, it becomes intuitive to see machines as ‘in some sense like us’. This, we argue, comes out of the fallacy of mistaking the map for the territory. Just because one is not allowed to walk on the grass in the park, does not mean we now have to consider whether we are allowed to walk on the grass on the map, even if that map is implemented as a physical use object inside the park. We may decide you are not allowed to walk on the map, but the grounds for that decision have nothing to do what we think is appropriate for actual grass. (Birhane, van Dijk, Pasquale, 2024, emphasis mine)
Firstly, remember, maps absolutely can also be made out of the same actual grass, concrete, wood, water, rock etc. as the territory. The laws of physics do not forbid it. Terrariums, paludariums, dioramas, architectural/scale models, and miniature landscapes and the like, exist. It would be well within the bounds of reality for there to be a map of some park made of actual grass (and concrete, brick, etc.).
In fact, here’s one. This is a model/map of the Sanctuary of Loyola in Spain, located at the Eureka Science Museum, also in Spain.
How should we treat maps which are made of the same material as the thing they are maps of, like “actual grass” from a park? Would the rules applied to “actual park grass” suddenly stop applying because we have transported a part of that grass from one place to another? Was it the size and scale of that grass which meant you couldn’t step on it?
And then next question, how should we treat maps made of a slightly different species of grass than the one in that particular park?
I agree that whatever rules one applies to the territory may or may not apply to the map. But I also think that the grounds for deciding what you can and can’t do to a map are sometimes actually related to the territory it is a map of. For example, if we create an artificial fly that seems as fly-like on the outside (behaviour) and inside (neurally) as a biological fly, I am completely okay with shutting that artificial fly down if and when needed. Because I admit, I do kill biological flies sometimes. But if we’re talking about artificial cats and artificial Homo sapiens? … I would hesitate at least a little bit.
I’m also not saying that people have historically never killed other clearly biological and human people, or never systematically prevented certain groups of people from owning property or from voting, and so on. I might be generally optimistic about humanity, but I am also not blind to our history. I simply would hope that those who did advocate for some of the above measures might have at least hesitated a little bit. (I also hope that maybe we listen to some lessons from history, though I realise that may be a bit optimistic.)
The fact that something “is a map” of another thing does not have much bearing on the moral status of the material the map is made from, positively OR negatively.
The hard question is always, how can you tell when a map of X is made of (functionally or morally) the same material as the “real” X, for whatever X you’re talking about?
(If this were a map of an undisclosed forest somewhere in the world, would you care for the very living greenery in this map in a very different way compared to the care you would give to the “actual” forest it is a map of? Either way, why?6)
Secondly, (good) maps are a compressed sketch of the territory. If you see an icon of the Eiffel Tower near the Arc de Triomphe on Google Maps, you may suspect that you can walk from one to the other in a decently short time. It may not turn out to be the case, but the thing is, you can walk to the actual territory and check if your map was accurate or not. After you do, you can update your map, or leave it as is. Then the next time you use your map to check the distance between the Eiffel Tower and the Arc de Triomphe, you would be entirely justified in expecting the short distance on the map to translate to a short distance in the territory, provided you have no reason to think the territory has changed since.
Once upon a time, it wasn’t obvious to us that the electricity that runs through our brains and hearts was the same electricity as the thing in light bulbs. Someone (Luigi Galvani) in the late 1700s actually had to ask and experimentally answer that question! Can you imagine someone nowadays asking whether the electricity in computer CPUs and quartz crystals is the same electricity in light bulbs? The answer requires understanding how electricity works and what it is (a non-trivial question!), not just how computers, light bulbs, or crystals work.
Calling brains “biological information processing machines” may have been a mere metaphor when we did not know better. But we have made much progress in all the sciences since. We have asked many more questions, and obtained many more experimental results since the late 1700s (not just in neuroscience!). And so — without speaking on behalf of anyone else — it is on those scientific grounds that in 2024, I personally consider it more accurate to think of brains as “complicated biological information processing machines”. A great number of people have gone to look at the actual territory and reported back on where the map needed adjusting. It would be unwise to ignore all that hard-won evidence. But that’s just me.
How do other people tell when a metaphor has stopped being just a metaphor, I wonder?
At the very least, brains aren’t simple processing machines. And they certainly are machines in the sense that brains work in predictable ways most of the time, when you zoom in close enough. If they didn’t, you would notice. There is a whole world of neurological disorders out there for when the machine breaks. Or just plain fascinating neurodiversity and atypicalities when the machine works slightly differently than expected.
(This is the same way that just because ChatGPT can’t do matrix multiplication reliably without using Python code, that fact in no way implies that the GPUs ChatGPT runs on don’t do billions of matrix multiplications with their electrical circuits near-perfectly reliably. If the circuitry were slightly off or imprecise, you would notice.)
If describing the brain as a “complicated biological information processing machine” sounds like simplifying the mind, as in:
After mid-20th century advances in computing, we have come to think of the brain as a computer, an information processing machine. By making this move, the human mind is presented much simpler than it actually is, and the computer is attributed more content than it actually deserves. Or as Baria and Cross (2021) put it: “the human mind is afforded less complexity than is owed, and the computer is afforded more wisdom than is due.” (Birhane, van Dijk, Pasquale, 2024)
Then perhaps my disagreement comes from a difference in perception and perspective. I do not use the word “machine” as a synonym for “simple” or “easily understood” or even “perfectly controllable” anymore, since we scaled up deep learning.
The ordinary intelligence that I am looks at the wiki page for information theory, or an introduction to information theory chapter, and needs to slow down so much it’s like I’m reading a non-native language (Math is my 4th language, at best). I do not think “Ah yes, just information processing. So much simpler than I thought, thank goodness”, because, as it turns out, information processing is not simple.
Presenting the brain as a “complicated biological information processing machine” sounds exactly as complicated as saying the brain is a “complicated biological information processing machine” to me.
With all the accumulated might of our 21st century technology and trillions of dollars of scientific budget invested from all over the world over many years — humanity is only just now figuring out fly-sized information processing algorithms, implemented in fly-sized circuits, in fly-sized brains, in 202X!
Though, I do understand, perception of complication is in the eye of the beholder.7
This would also be a good time to remind you, I think the human mind is large, multi-layered, and very lumpily shaped. As it so happens, we experience some, but not all, of the brain’s computations as the mind. I am not saying that the implemented algorithm above means a fly navigates the world “consciously” or in a strongly deliberate, System 2 manner — the same way that you do not have to deliberately think to perceive motion or, more commonly, lack of motion.
Even though most of the time you consciously see the world as a steady, stable object, your eyes are actually jerking around all the time. You don’t consciously notice your own eyes moving, because your brain has a really good motion stabilization algorithm in there that takes care of that, if you happen to luck out on having a typically functioning brain with regards to motion detection. If you don’t, and the current treatments aren’t working for you, I’m sorry we aren’t advanced enough in biology yet to meet that need. But someday we will be, I hope!
You can see your own eyes’ motion with the picture below if you look at it normally, without forcing your eyes to freeze unnaturally. The picture is static, so any motion you perceive comes from your eyes’ movement. But word of warning, it might make you feel a bit motion-sick if you are prone to it. If you don’t believe it’s static, take a screenshot of it yourself.
Finally, a parting thought in preparation for the next implications post.
Our human brains, minds, bodies, and organs are what distinguishes us from other biological living beings, like cats, with their cat brains, minds, and bodies. However, precious though they are, humans can lose (and have lost!) any number of body parts like kidneys, limbs, hair, skin etc., and still remain recognizable as human. You will not have lost any human rights if you lose your eyes, nor would anyone (I hope) forbid you from using the phrase “Ah, I see” if you happen to be blind.
If you systematically ask the (slightly uncomfortable) question “what can you lose and still be recognizable as a human?”, then for me, I conclude that it comes down to the particular contours of the brain and mind that distinguish a being who is human-enough from one who is not-human-enough. You can even lose certain parts of your brain to an iron bar going through your head, with your personality changing as a result, and you would still be thought of as “human”, if maybe a slightly different human than before the loss.
While I broadly agree with the thesis of embodied and extended cognition, for the particular question of “What can I lose and still be recognizable as human?”, losing my computer/phone/internet connection would not make me automatically think “ah, I’m less human” for it, even though it would significantly damage my overall memory and intellect. Nor would I judge another person as “less human” if they lost their extended brain/body, by default. However, I will note that if someone does lose parts of their body (extended or not) and then makes such a judgement of themself and strongly expresses this judgement in so many words and actions, then I would update my judgement of them specifically.
The words an LLM outputs may not provide information about the LLM’s internal state (yet), but a human’s words sure do (most times).
Of course, my priors and defaults are very likely a generational and/or personal leaning. Perhaps in the future, losing a Neuralink or similar BCI would actually automatically affect one’s perception and judgement of being human-enough.
So.
To the extent that our brains are what affords us our human capacity to love non-living objects (e.g., sentimental artifacts) and living beings (e.g., cats), but also treat them as property or things to be swatted (e.g., flies) …
Should we not think very hard about certain questions before someone accidentally (or not) creates inhumanly-advanced artificial brains?
Such as, if you remove certain photoreceptors from a typical trichromat human, you should cause color blindness in a specific way. But if you have human-shaped ethics, you will not do that experiment for its own sake.
Notice how 24fps and 48fps look basically the same? Most human eyes can’t tell the difference.
If you thought “looking ahead” was uniquely human, it isn’t; we’ve long had GOFAI look-ahead algorithms. If you thought specifically deep-learning based algorithms couldn’t have better-than-human look-ahead capabilities… that proof-of-concept test happened in 2016 with AlphaGo vs Lee Sedol.
Though actually taking the bet feels unfair, because Google is right there, and it’s only a matter of time (and ethics) until we get higher-resolution neuronal recordings in humans.
e.g., humans can experience magenta but not UV wavelengths of light visually due to the range of wavelengths our specific photoreceptors are sensitive to and the specific processing our brains do with that visual input signal.
My best current answer is yes, because of complexity and scale.
I repeat, people are physically allowed to refuse to use certain words or perspectives, as in “What then, do we make of our own human being, if we refuse to define it in a machinistic way?” (Birhane, van Dijk, Pasquale, 2024). But hopefully, as human beings, we can do better than a ChatGPT-like tendency to idiosyncratically refuse to entertain certain words or perspectives. And I know the previous sentence sounds judgemental, so I must add, I have no beef with these authors. Their piece on arXiv just happens to be a great distilled exemplar of a certain point of view. And having written it up is to their credit.