Dr Karina Vold, Research Fellow at the University of Cambridge, joined the Digital Leadership Forum in October at The Ethics of Artificial Intelligence, our second AI for Good conference. Vold challenged attendees to consider whether AI systems could be used to complement and extend our cognitive capabilities in more advanced and sophisticated ways than they currently do.
1. We’ve always been suspicious of new technology
Vold explained that while shifts in technology are generally positive, they have historically been met with suspicion. The Greek philosopher Socrates resisted the shift from the oral to the written tradition as he thought that by writing things down we would become more forgetful and less social. “Those are exactly the same arguments that you hear against technology today,” Vold said. “You hear that Google is making us more forgetful and Facebook is making us asocial. It’s a story that’s been happening for a very long time in philosophy.”
2. New technology is redesigning tasks
When information is easily accessible, we are less likely to remember the information itself and more likely to remember how to access it. For example, we no longer need to remember phone numbers, just the passcode to the phone where those numbers are stored.
3. It’s time to expand our definitions of AI
Most AI definitions used today include a clause about autonomous agency. Vold challenged this definition, suggesting that we should include non-autonomous systems in our definition of AI. These systems are built to interact with humans and become intimately coupled with us as we engage in an ongoing dialogue with them. Vold argued that these systems could know us better and have a more complete record of us than any human.
4. AI can help us generate new ideas and approaches
Vold told the story of AlphaGo and Move 37. In 2016, during a Go match in Seoul between world champion Lee Sedol and AlphaGo, a computer program developed by Google DeepMind, AlphaGo played an unexpected and successful move that no human player would have played. This became known as Move 37. “One of the reasons that people think that the system came up with that move was that it wasn’t being burdened by some of our own social norms, our own game-playing norms and our own human wisdom about what’s good and what’s not good,” Vold said. “It’s really interesting when you think about situations where the stakes are higher: scientific discoveries, drug discoveries, or healthcare.”
5. Offload our weaknesses so we can focus on our strengths
“Obvious weaknesses for us are easy tasks for some systems,” Vold said, suggesting that memory processes, psychometrics, and quantitative and logical reasoning were all areas that could be offloaded. This frees up our time and cognitive capacity for more creative tasks.
6. We may actually be more biased than AI systems
Vold also argued that we should offload decision-making to systems in order to avoid bias. “We don’t really make decisions in the way we think we do,” Vold said. “A lot of times even though we think we’re making judgments in a particular way, we’re being informed by all sorts of built-in systematic biases.”
7. Beware the potential risks
While AI offers exciting opportunities to extend human cognitive capacities, Vold identified three key risks and implications to be aware of:
- Cognitive atrophy – if we become too reliant on technology we may lose our ability to perform tasks independently;
- Responsibility – we may become too removed from the decision-making process but are still held responsible for negative consequences, without the ability to understand and rectify the problem; and
- Privacy – as we put more information onto our devices we need measures to protect that data.
AI for Good
AI for Good – in partnership with Dell Technologies – is a programme of dedicated learning and development events which are designed to enable members of the Digital Leadership Forum to innovate with new AI technologies in a responsible way.