Winston Ritson

The anthropomorphism of compute




The rise of powerful artificial intelligence (AI) systems has reignited the age-old fascination with understanding the human mind - yours, mine, and everyone's we know. From large language models (LLMs) that engage in seemingly natural conversation to image generators that conjure breathtaking visuals, the resemblance between machine and biological intelligence is uncanny. Yet beneath this surface similarity lies, in my mind, a complex web of parallels and profound unknowns - all in a world where the definition of intelligence is built on what we, as humans, do, because behaviour can only be recognised as intelligent once it is anthropomorphised.


The AI 'Frontal Lobe' and its intriguing capabilities

LLMs like Claude display a resemblance to the functions associated with our frontal lobe. Just as the frontal lobe orchestrates language, planning, and abstract thought, LLMs excel at manipulating text, following instructions, and offering surprisingly insightful or totally wrong responses to prompts (good and bad). These models operate by developing internal 'weightings' from massive datasets, learning to predict word sequences (who has not listened to someone speak and finished the speaker's sentences in their head?). This creates an impressive illusion of understanding, prompting questions about how our own semantic and linguistic abilities may be grounded - do we genuinely understand, or are we doing a form of sentence completion for the world around us?
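That "predict what comes next" idea can be illustrated with a deliberately tiny sketch. Real LLMs learn billions of weights over subword tokens; this bigram counter is only a toy stand-in for the prediction objective, and the corpus and function names are mine, not from any real system:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then predict the
# most frequent continuation - a toy version of next-word prediction.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Even this trivial model produces plausible-looking continuations without anything we would call understanding, which is exactly the intuition the paragraph above gestures at.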

Mirroring Modularity, from specialised brain regions to AI architectures

Neuroscience has meticulously mapped out areas of the brain, revealing how the occipital lobe processes visual information, the temporal lobe handles sound, and the limbic system governs emotions. AI development has somewhat mirrored this modularity. Convolutional neural networks excel at image recognition, while recurrent neural networks find patterns in sequences like time-series data. Multimodal AI systems seek to combine these specialised modules, aiming to understand and generate content across different sensory domains.
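The modular pattern described above can be sketched in a few lines. These toy functions are illustrative stand-ins I have invented, not real vision or audio models: each "encoder" specialises in one input type, and a fusion step joins their outputs, mirroring how multimodal systems combine specialised modules:

```python
def encode_image(pixels):
    # Stand-in for a convolutional network: summarise pixel intensities.
    return [sum(pixels) / len(pixels)]

def encode_audio(samples):
    # Stand-in for a sequence model: summarise sample-to-sample change.
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return [sum(deltas) / len(deltas)]

def fuse(image_features, audio_features):
    # A multimodal system concatenates the specialised representations.
    return image_features + audio_features

combined = fuse(encode_image([0.2, 0.4, 0.6]), encode_audio([1.0, 2.0, 4.0]))
print(combined)  # one joint feature vector built from two specialised modules
```

The design choice mirrors the brain analogy in the paragraph above: specialisation first, integration second.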

Where consciousness eludes the algorithm

Perhaps one of the greatest questions stemming from the AI-brain analogy centres on the elusive nature of consciousness: the subjective experience of being, that internal sense of "I", arising from the intricate dance of billions of neurones. Although AI systems display fascinating emergent behaviours, it is unclear whether they could ever develop a similar inner world. Algorithms are driven by programmed objectives, while the motivations of biological consciousness remain a mystery - or are there undiscovered biological algorithms?

Is metacognition AI's insurmountable frontier?

The human brain exhibits metacognition – the ability to think about our own thinking. We can introspect, monitor our mental processes, and adapt accordingly. This recursive self-awareness is far from AI's current grasp. Could machines ever reflect on their own biases, or consciously refine their learning strategies - in other words, will GPT lose the P and cease to be pre-trained? The ethical and philosophical implications are immense.

Synergy of the enigma

The AI-brain analogy invites us to examine the extraordinary nature of our own biological intelligence. At the same time, it highlights the profound gaps in our understanding of our own minds. Here's where the analogy becomes a powerful research tool:

  • Neuroscience inspiration: could AI techniques like reinforcement learning or back-propagation shed light on how the brain optimises its own learning processes?

  • Testing ground for theories: computational models of consciousness offer a unique environment to test hypotheses too complex or ethically problematic for direct biological experimentation.

  • The ethics of artificial minds: as AI systems inch closer to seemingly human-like abilities, we confront difficult questions about personhood, responsibility, and the boundaries between natural and artificial intelligence.

Probing the depths of intelligence

The AI-brain analogy isn't about perfect mimicry. By building artificial minds, however distinct, we gain a lens through which to examine our own. The true power of this analogy isn't in reaching a perfect one-to-one correspondence. It lies in the questions it provokes: What is true intelligence? Can it exist outside of a biological form? And as we build AI's future, are we also uncovering the hollow definition of intelligence?

What are your thoughts? Do you see potential for AI to inform our understanding of biological intelligence? Where do you think the analogy breaks down?


~ Wisdom is like fire. People take it from others ~ African Proverb
