By David Beer, BBC
In 1956, during a year-long trip to London, the mathematician and theoretical biologist Jack D Cowan, then in his early 20s, visited Wilfred Taylor and his strange new “learning machine”. On his arrival he was baffled by the “huge bank of apparatus” that confronted him. Cowan could only stand and watch “the machine doing its thing”. The thing it appeared to be doing was performing an “associative memory scheme” – it seemed to be able to learn how to find connections and retrieve data.
It may have looked like clunky blocks of circuitry, soldered together by hand in a mass of wires and boxes, but what Cowan was witnessing was an early analogue form of a neural network – a precursor to the most advanced artificial intelligence of today, including the much discussed ChatGPT, with its ability to generate written content in response to almost any command. ChatGPT’s underlying technology is a neural network.
As Cowan and Taylor stood and watched the machine work, they had no idea exactly how it was managing to perform this task. The answer to the mystery of Taylor’s machine brain can be found somewhere in its “analogue neurons”, in the associations made by its machine memory and, most importantly, in the fact that its automated functioning couldn’t really be fully explained. It would take decades for these systems to find their purpose and for that power to be unlocked.
The term neural network covers a wide range of systems, yet centrally, according to IBM, these “neural networks – also known as artificial neural networks (ANNs) or simulated neural networks (SNNs) – are a subset of machine learning and are at the heart of deep learning algorithms”. Crucially, the term itself, and the form and structure of these systems, are “inspired by the human brain, mimicking the way that biological neurons signal to one another”.
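To make that borrowed metaphor concrete, here is a minimal sketch in Python of a single artificial neuron. It is purely illustrative – the weights, inputs and bias are invented for the example – but it shows the basic gesture: incoming signals are weighted, summed and squashed into a firing strength, loosely echoing how a biological neuron responds to its inputs.

```python
# A minimal, illustrative artificial neuron. All values here are
# invented for the example, not taken from any real system.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs - loosely, signals arriving at a neuron
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the sum into a "firing strength"
    # between 0 and 1
    return 1 / (1 + math.exp(-total))

print(neuron([0.5, 0.8], [0.4, -0.6], 0.1))  # ~0.455
```

A full network is nothing more than many of these units wired together in layers, with the weights adjusted during training rather than set by hand.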
There may have been some residual doubt about their value in the early stages, but as the years have passed AI fashions have swung firmly towards neural networks. They are now often understood to be the future of AI. They have big implications for us and for what it means to be human. We have heard echoes of these concerns recently with calls to pause new AI developments for a six-month period to ensure confidence in their implications.
It would certainly be a mistake to dismiss neural networks as being solely about glossy, eye-catching new gadgets. They are already well established in our lives. Some are powerful in their practicality. As far back as 1989, a team at AT&T Bell Laboratories used back-propagation techniques to train a system to recognise handwritten postal codes. The recent announcement by Microsoft that Bing searches will be powered by AI, making it your “copilot for the web”, illustrates how the things we discover and how we understand them will increasingly be a product of this type of automation.
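Back-propagation itself can be sketched in a few lines. The toy example below is emphatically not the Bell Labs code – it trains a tiny invented network on the XOR problem rather than on postal codes – but it shows the core mechanism: errors at the output are propagated backwards through the network to nudge each layer’s weights down the error gradient.

```python
# A toy sketch of back-propagation: a tiny two-layer network learns XOR
# by repeatedly propagating its output error backwards and adjusting
# weights. Purely illustrative; all sizes and rates are invented.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(10000):
    # Forward pass: the network's current guesses
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # typically close to [[0], [1], [1], [0]]
```

Notably, even in this minuscule case, the final weights that solve the problem are not chosen by anyone; they simply emerge from the training loop.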
Drawing on vast amounts of data to find patterns, neural networks can similarly be trained to do things like image recognition at speed – resulting in their incorporation into facial recognition systems, for instance. This ability to identify patterns has led to many other applications, such as predicting stock markets.
Neural networks are changing how we interpret and communicate too. Developed by the Google Brain Team, Google Translate is another prominent application of a neural network.
You wouldn’t want to play chess or shogi with one either. Their grasp of rules and their recall of strategies and all recorded moves means that they are exceptionally good at games (although ChatGPT seems to struggle with Wordle). The systems that are troubling human Go players (Go is a notoriously tricky strategy board game) and chess grandmasters are made from neural networks.
But their reach goes far beyond these instances and continues to expand. A search of patents restricted only to mentions of the exact phrase “neural networks” produced 135,828 results at the time of writing. With this rapid and ongoing expansion, the chances of us being able to fully explain the influence of AI may become ever thinner. These are the questions I have been examining in my research and my new book on algorithmic thinking.
Mysterious layers of ‘unknowability’
Looking back at the history of neural networks tells us something important about the automated decisions that define our present, or those that will have a possibly more profound impact in the future. Their presence also tells us that we are likely to understand the decisions and impacts of AI even less over time. These systems are not simply black boxes – not just hidden bits of a system that can’t be seen or understood.
It is something different, something rooted in the aims and design of these systems themselves. There is a long-held pursuit of the unexplainable. The more opaque the system, the more authentic and advanced it is thought to be. It is not just about the systems becoming more complex or the control of intellectual property limiting access (although these are part of it). It is instead to say that the ethos driving them has a particular and embedded interest in “unknowability”. The mystery is even coded into the very form and discourse of the neural network. They come with deeply piled layers – hence the phrase deep learning – and within those depths are the even more mysterious-sounding “hidden layers”. The mysteries of these systems are deep below the surface.
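The term is less arcane than it sounds. In the hypothetical sketch below (all layer sizes and weights are invented), the “hidden” layers are simply those that sit between the input and the output. Their activations can be inspected freely; the difficulty is that the raw numbers say nothing about why the network behaves as it does – which is exactly where the problem of interpretation begins.

```python
# "Hidden layers" are just the layers between input and output. This
# illustrative sketch runs a single input through a stack with three
# hidden layers and prints the activations at every step.
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [8, 16, 16, 16, 2]  # input, three hidden layers, output
weights = [rng.normal(size=(m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

x = rng.normal(size=(1, 8))  # one example input
for i, W in enumerate(weights):
    x = np.tanh(x @ W)
    print(f"layer {i + 1} activations:", np.round(x, 2))
# Every number is visible, but individually meaningless: the
# "unknowability" lies in interpretation, not in access.
```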
Today there is a strong push for AI that is explainable. We want to know how it works and how it arrives at decisions and outcomes. The European Union is so concerned by the potentially “unacceptable risks” and even “dangerous” applications that it is currently advancing a new AI Act intended to set a global standard for “the development of secure, trustworthy and ethical artificial intelligence”.
Those new laws will be based on a need for explainability, demanding that “for high-risk AI systems, the requirements of high-quality data, documentation and traceability, transparency, human oversight, accuracy and robustness, are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI”. This is not just about things like self-driving cars (although systems that ensure safety fall into the EU’s category of high-risk AI), it is also a worry that systems will emerge in the future that will have implications for human rights.
This is part of wider calls for transparency in AI so that its activities can be checked, audited and assessed. Another example would be the Royal Society’s policy briefing on explainable AI in which they point out that “policy debates across the world increasingly see calls for some form of AI explainability, as part of efforts to embed ethical principles into the design and deployment of AI-enabled systems”.
Pursuing the unexplainable
Taken together, these long-running developments are part of what the sociologist of technology Taina Bucher has called the “problematic of the unknown”. Expanding his influential research on scientific knowledge into the field of AI, Harry Collins has pointed out that neural nets may be produced by a human, initially at least, but “once written the program lives its own life, as it were; without huge effort, exactly how the program is working can remain mysterious”. This has echoes of those long-held dreams of a self-organising system.
I’d add to this that the unknown, and maybe even the unknowable, have been pursued as a fundamental part of these systems from their earliest stages. There is a good chance that the greater the impact that artificial intelligence comes to have in our lives, the less we will understand how or why.
But that doesn’t sit well with many today. We want to know how AI works and how it arrives at the decisions and outcomes that impact us. As developments in AI continue to shape our knowledge and understanding of the world, what we discover, how we are treated, how we learn, consume and interact, this impulse to understand will grow. When it comes to explainable and transparent AI, the story of neural networks tells us that we are likely to get further away from that objective in the future, rather than closer to it.
* David Beer is professor of sociology at the University of York and is the author of The Tensions of Algorithmic Thinking: Automation, Intelligence and the Politics of Knowing