Applying AI for Social Good – Session Report

In March 2020, members of the Digital Leadership Forum held the third in our series of quarterly AI for good events, supported by our technology partner, Dell.

The aim of the AI for Good programme is to encourage cross-industry collaboration on key ethical issues surrounding artificial intelligence and its implementation within organisations.

Representatives from leading organisations met at CMS in London to discuss applying AI for social good, learn from academic and field experts, and work collectively towards developing professional best practices in a rapidly evolving technical and regulatory environment. 

Attendees heard presentations from Panakeia Technologies, Darktrace and DEFRA. Additionally, a panel discussion was held with industry experts from CMS, Exscientia and Panakeia Technologies.

Attendees also discussed how to address the key challenges, risks and ethical questions that come with AI, and how we can reassure both businesses and the public that AI can be used for social good.

AI for Good

AI for Good – in partnership with Dell Technologies – is a programme of dedicated learning and development events which are designed to enable members of the Digital Leadership Forum to innovate with new AI technologies in a responsible way.

Ethical challenges of AI with Arash Ghazanfari

We caught up with Arash Ghazanfari, Field CTO at Dell Technologies, at our AI for Good session, The ethics of artificial intelligence, to discuss current innovations and challenges in AI technology.

AI and Cyber Security

“One of the biggest challenges that we are seeing around artificial intelligence is emerging in the space of cyber security,” Ghazanfari said, highlighting the development of deep fake technology “that’s becoming a major concern for us.”

“AI in the wrong hands can have a severe impact on a digitally transformed society,” Ghazanfari said. AI opens up some exciting new possibilities, particularly in manufacturing, where Ghazanfari noted that both the time needed to bring new products to market and the production costs are decreasing.

Automation and Employee Retention

However, Ghazanfari cautioned that organisations should be careful when introducing automation, and be particularly mindful of how any transition to automation is presented to employees.

“What tends to happen is that we end up losing the best people first, and people don’t really react to it very well,” he explained. “If the intention is to free us up from those mundane tasks and move us to more valuable activities, I think it can be really beneficial to the employees, as well as introducing productivity gains for the business.”

Technology for Education

Ghazanfari is particularly passionate about using technology to improve education and make it more accessible.

“Different people consume and learn content in different ways. With AI, we are seeing an emergence of platforms that are delivering educational content in new innovative ways.”

Making Life Easier with AI

Ghazanfari is also hopeful that technology can be used to improve our lives in other areas too. “Technologies are enhancing our lives, making life easier, and democratising access to resources and access to skills. I think we are on the right path, but we shouldn’t lose touch with our humanity.”

Using AI To Extend Human Cognitive Capabilities with Dr Karina Vold

Dr Karina Vold, Research Fellow at the University of Cambridge, joined the Digital Leadership Forum at our second AI for Good conference in October, The Ethics of Artificial Intelligence. Vold challenged attendees to consider whether AI systems could be used to complement and extend our cognitive capabilities in more advanced and sophisticated ways than they are currently.

1. We’ve always been suspicious of new technology

Vold explained that while shifts in technology are generally positive, they have historically been met with suspicion. The Greek philosopher Socrates resisted the shift from the oral to the written tradition as he thought that by writing things down we would become more forgetful and less social. “Those are exactly the same arguments that you hear against technology today,” Vold said. “You hear that Google is making us more forgetful and Facebook is making us asocial. It’s a story that’s been happening for a very long time in philosophy.”

2. New technology is redesigning tasks

When information is easily accessible we are less likely to remember the information itself, but instead how to access it. For example, we no longer need to remember phone numbers but instead just the passcode to our phones where those numbers are stored.

3. It’s time to expand our definitions of AI

Most AI definitions used today include a clause about autonomous agency. Vold challenged this definition, suggesting that we should include non-autonomous systems in our definition of AI. These systems are built to interact with humans and become intimately coupled with us as we engage in an ongoing dialogue with them. Vold argued that these systems could know us better and have a more complete record of us than any human.

“You hear that Google is making us more forgetful and Facebook is making us asocial. It’s a story that’s been happening for a very long time in philosophy.”

4. AI can help us generate new ideas and approaches

Vold told the story of AlphaGo and Move 37. In 2016, during a Go match in Seoul between world champion Lee Sedol and AlphaGo, a computer program developed by Google DeepMind, AlphaGo played an unexpected and successful move that no human player would have played. This became known as Move 37. “One of the reasons that people think that the system came up with that move was that it wasn’t being burdened by some of our own social norms, our own game-playing norms and our own human wisdom about what’s good and what’s not good,” Vold said. “It’s really interesting when you think about situations where the stakes are higher: scientific discoveries, drug discoveries, or healthcare.”

5. Offload our weaknesses so we can focus on our strengths

“Obvious weaknesses for us are easy tasks for some systems,” Vold said, suggesting that memory processes, psychometrics, and quantitative and logical reasoning were all areas that could be offloaded. This frees up our time and cognitive capacity for more creative tasks.

6. We may actually be more biased than AI systems

Vold also argued that we should offload decision-making to systems in order to avoid bias. “We don’t really make decisions in the way we think we do,” Vold said. “A lot of times even though we think we’re making judgments in a particular way, we’re being informed by all sorts of built-in systematic biases.”

7. Beware the potential risks

While AI offers exciting opportunities to extend human cognitive capacities, Vold identified three key risks and implications to be aware of:

  • Cognitive atrophy – if we become too reliant on technology we may lose our ability to perform tasks independently;
  • Responsibility – we may become too removed from the decision-making process but are still held responsible for negative consequences, without the ability to understand and rectify the problem; and
  • Privacy – as we put more information onto our devices we need measures to protect that data.

5 Ways to Transform Your Digital CX

Consumers expect increasingly high standards, and as technology continues to improve there are now more ways than ever to deliver an excellent customer experience. Read on for 5 ways to transform your digital customer experience.

1. Invest in a virtual assistant

Chatbots have come a long way since ELIZA, and while they no longer need to pass the Turing Test to impress us, they can be extremely useful for providing immediate assistance to busy consumers. Whether your business is B2B or B2C, your customers will value a knowledgeable virtual assistant that can guide them through their purchases and queries.

2. Put a CX specialist on your digital team

Not all digital innovation needs to involve technology. We spoke with Vinay Parmar, UK Customer and Digital Experience Director at National Express, who told us how putting CX specialists from their contact centres onto their digital teams helped them to keep customer perspectives and experiences central when designing new digital products. Parmar explained that having someone from the contact centre saying “‘I take calls all day and this is what customers say…’ or ‘that’s how customers really use it, and what we should be thinking about is…’” gave the team invaluable insight.

3. Personalise your customer experience

Broad segmentation is no longer sufficient: just 8% of respondents to a recent survey said that they would be encouraged to engage with a retail brand that addressed them by their first name. Customers now expect hyper-personalised experiences and are much more likely to buy from brands that offer them individualised offers that suit their lifestyles. By embracing increasingly detailed datasets and machine learning technology you can create a scalable process that detects intention and promotes a frictionless customer journey.

4. Improve your employee experience

According to PwC’s recent Consumer Insights Survey, employee experience has been shown to correlate directly with customer experience, particularly in customer service roles. Investing in an employee experience platform, which combines access to HR, Learning & Development opportunities, and other employee resources, can improve your employees’ experience and help them to deliver excellent customer service.

5. Be transparent about data

93% of online shoppers say that it is as high a priority as last year, or higher, for companies to respect their anonymity online. Consumers want companies to be open and transparent in their handling of data: not just to be GDPR compliant, but also to communicate clearly how any data is stored and used.

Ethical and Governance Challenges of AI

Dr Jennifer Cobbe, Coordinator of the Trust & Technology Initiative at the University of Cambridge, joined the Digital Leadership Forum at our first AI for Good conference in July, Leading your organisation to responsible AI. Cobbe delivered a thought-provoking presentation, encouraging us to question how we perceive AI technology and its regulation. Here’s what we learnt:

1. It’s AI, Not Magic

While there is a tendency to make exaggerated claims about what artificial intelligence can actually do, we’re not quite at Skynet capabilities yet. Most current AI uses Machine Learning: essentially, statistical models that are trained to spot patterns and correlations in datasets and then make predictions based on these. Machine Learning is only trained to operate within what its trainers think is an acceptable margin of error. “It’s only ever going to be an approximation of the best result,” Cobbe said, arguing that AI is best suited to prediction and classification tasks; anything more complex may be too much for it at the moment.
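Cobbe’s description can be made concrete with a minimal sketch: a statistical model is fitted to data and then predicts, within some margin of error. Everything below (the toy dataset and the one-feature “threshold model”) is invented purely for illustration:

```python
# A minimal sketch of "statistical model trained on data, predicting
# within an acceptable margin of error". The data is invented.

def fit_threshold(examples):
    """Learn a single cut-point that best separates two classes.

    `examples` is a list of (value, label) pairs; we pick the threshold
    that misclassifies the fewest training examples.
    """
    candidates = sorted(v for v, _ in examples)
    best_t, best_errors = None, len(examples) + 1
    for t in candidates:
        errors = sum(1 for v, label in examples
                     if (v >= t) != (label == 1))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t, best_errors / len(examples)

# Toy training set: feature value -> class 0 or 1, with one noisy point.
train = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (9, 0)]
threshold, error_rate = fit_threshold(train)

predict = lambda v: 1 if v >= threshold else 0
print(threshold, round(error_rate, 2))  # the fitted model and its training error
print(predict(7))                       # a prediction: an approximation, not certainty
```

Even with the best threshold, one noisy training point is still misclassified, which is exactly the “acceptable margin of error” the trainers settle for.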

2. New Technology Is Not the Wild West

We often think of technology as a largely unregulated new frontier, with the law lagging far behind its bold strides, but this assumption is incorrect. Cobbe explained that existing laws apply straightforwardly to AI, including data protection laws, non-discrimination laws, employment laws, and other sector-specific laws.

3. Our AI Is Only As Ethical As We Are

“Technology isn’t neutral,” Cobbe reminded us. “If your business model isn’t ethical, if your practices aren’t ethical, if what you’re doing with AI isn’t ethical, then your AI cannot be ethical.” Fortunately, the process of introducing AI to your organisation gives you an opportunity to actively confront and address any existing issues.

“If your business model isn’t ethical, if your practices aren’t ethical, if what you’re doing with AI isn’t ethical, then your AI cannot be ethical.”

4. Regulation Can Make Us More Creative

“We should also acknowledge that advances in the law lead to advances in technology,” Cobbe said, highlighting the example of the GDPR, which encouraged the development of new Privacy Enhancing Technologies. We should welcome new regulations because the need to work within them inspires creative solutions. “The need for AI systems to be legally compliant means that designers and engineers are often tasked with finding novel ways to do what the law needs,” Cobbe said.

5. Beware of Bias

Bias manifests in many forms in artificial intelligence. Sometimes designers encode their own biases and assumptions simply by choosing which data to include (and to exclude). Machine Learning is also dependent on historical datasets, which reflect society’s existing biases and discriminatory practices. “By using historical data we do run the risk of essentially encoding the past into the future,” Cobbe said, encouraging organisations to actively guard against this.
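The risk of “encoding the past into the future” can be sketched in a few lines. The hiring records and the majority-vote “model” below are entirely hypothetical, but they show how even the simplest statistical model faithfully reproduces the bias baked into its training data:

```python
# Hypothetical illustration: a model fitted to historical decisions
# reproduces the bias in those decisions. All data is invented.
from collections import Counter

# Past hiring records: (group, hired?). Group "A" was historically favoured.
history = [("A", True)] * 8 + [("A", False)] * 2 \
        + [("B", True)] * 2 + [("B", False)] * 8

def fit_majority(records):
    """'Learn' the most common outcome per group: the simplest possible
    statistical model, and already enough to encode past bias."""
    outcomes = {}
    for group, hired in records:
        outcomes.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = fit_majority(history)
print(model)  # the past, encoded into future decisions
```

The fitted model recommends hiring group “A” and rejecting group “B” regardless of any individual’s merits, which is the pattern real systems can reproduce in subtler ways when protected attributes correlate with historical outcomes.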

In particular, when AI is used for classification there is a risk that it will discriminate against protected groups, as in the example of Amazon’s AI recruiting tool. As we’ve already learned, non-discrimination laws apply straightforwardly to AI, so companies face serious legal consequences for any discriminatory decisions made by AI.

6. Humans Might Actually Be Better

AI might not always be the most appropriate solution for your organisation. “If you’re using AI within your organisation then you really should be asking yourself whether you’re comfortable relying on a system which probably can’t tell you why it made a decision or why it reached a particular outcome.” Technical solutions are often framed as the best answers to socioeconomic and other non-technical problems, but this isn’t always the case. If a task involves qualitative data then a human will probably be a more efficient and ethical evaluator.

“While the real world is a messy, complicated thing, AI will inevitably flatten nuances and gloss over complexities,” Cobbe warned, explaining: “It relies on data that attempts to quantify a world that is often qualitative in nature and provides outputs that are overly simplistic sometimes, or even just downright misleading.”

7. Hire More Social Scientists

We tend to assume that only people who studied STEM subjects need to be involved in artificial intelligence development, but Cobbe warns that this is a mistake. “We really need social scientists,” she said, as they are much more aware of the existing power dynamics and biases in society and can help organisations to address these.

8. Good Regulation Should Stifle Bad Ideas

Not all new ideas are good ideas, Cobbe argued, and we should welcome the regulation of AI as relying on ethics and self-regulation has proven to be insufficient. We now need regulation as a baseline to protect society and to prevent unethical projects from prospering at the cost of ethical businesses. “Without legal intervention there’s a real danger that irresponsible AI becomes the defining feature of AI in the public’s imagination.”

9. The Buck Stops At You

Ultimately, it is your obligation as an organisation to ensure that you are using AI responsibly, both legally and morally. Organisations should also stay informed of emerging ethical issues. Cobbe highlighted the research work being done by Doteveryone, a London-based think tank, as a useful resource for organisations.

So what if your technology falls short of the legal and ethical requirements? Well, Dr Cobbe has an easy solution: “If a technology can’t do what the law requires, perhaps even if a technology can’t do what ethics requires, then the answer is simple: don’t use that technology.”

Build a 5-Star Customer Experience with AI

Harnessing AI: where to start? Led by Katie King, keynote speaker on AI

We were looking at where to start with AI. I think the best takeaway from this was not necessarily just thinking “let’s go for AI” because it is cool, or it’s the in thing, or it’s the buzzword of the moment, but starting within the business. What do you need? What’s the requirement? What’s the objective?

Whether it’s revenue-based, or brand-based, or cost-cutting, or getting data, whatever it may be, the internal business objective is your starting point. And then: where can AI help facilitate that? Not the other way around, not thinking “where can we put AI into our business?”

Feedback presented by Nathan Brown, Senior Digital Project Executive, AXA PPP Healthcare

How can you leverage chatbots in the B2B sector? Led by Diogo Coutinho, Product Lead, Shell International Ltd

We looked at starting points for chatbots in the B2B sector: how we should go away and look at the big players, see who can provide the service, play around with the tools and look at what they offer.

Then going away and asking: why is it needed? What’s the business case? Would it make the process quicker? Does the user actually want it? Then speak to the audience and see how easy it is for them to currently extract content, and whether there is a better way of doing it through a chatbot service.

We also discussed the importance of putting a minimum viable product together so you’ve got a scope for what’s required, as well as interviewing the users, conducting research and speaking with the internal help desk to find out what is really needed internally and what information could be needed on the chatbot.

Feedback presented by Toni Fitch, Digital Marketing Manager, Octopus Investments

How can you utilise AI to create a compelling and intelligent CX? Led by Darren Ford, VP Global Customer Services, Artificial Solutions

We were talking about how you can utilise AI to create compelling and intelligent customer experiences. We started off by asking what we think one of those would look like, and we got very quickly into the topics of personalisation and customisation, being able to understand the question however it’s posed and to answer it. Then we got into the importance of data for delivering that answer, and we had quite a long discussion around the availability of data and how people have tried to create some solutions. One of the things that came out is that only a few people have actually tried one or two POCs, but nothing has really got to a production level.

So, a key thing that came out of this was the need to have the right data in the right place to be able to answer the questions when posed. I think one of the things that Darren said was, it’s far better to have a very narrow solution with a great depth, rather than a broad solution that does next to nothing.

Feedback presented by Chris Bushnell, CFO, Artificial Solutions

How do you decide the best CX areas to automate with AI?

We started with the use case: we said that no matter which area you focus on, you have to start with the use case, and only as a secondary step think about the technology, although we debated that as well. The driver needs to be the business case. In terms of what the benefits could be, we were looking at how to service customers’ needs, how to understand the customer and predict customer behaviour.

There was a conversation about how to cut costs and customer lifecycle management. We also looked at some successful use cases such as complaints management. We had a very good use case, in terms of advisory services in general and how they can improve customer service, and also how to include external data like LinkedIn into your customer service, which you usually don’t do when you work manually.

Feedback presented by Gaby Glasener-Cipollone, Managing Director, Cirrus

How can you measure your AI-powered customer experience?

We discussed how AI is just an enabler. Everyone sees it as some bright shiny thing, but actually it’s just a tool. So, I think, it’s important to look at your business and consider the different functions as you normally would. Is AI helping with sales? Is it decreasing cost? Is it improving customer experience? Is it improving employee experience and the effectiveness of HR?

Feedback presented by Graham Combe, Business Development, DataArt

How can you create a culture of AI innovation?

Feedback presented by Jon Downing, Business Development Director, business mix and Jane Ruddock, Manager, PwC

Jon Downing, business mix

We really focused on the importance of creating an AI innovation culture and how it’s about putting the customer first and trying to create a series of quick wins. One of the things we discussed was to look at anything that isn’t working and try to get rid of it quickly, and anything which is working to explore and deploy more effectively. We spoke about the importance of creating a very clear and compelling narrative around the work. This is particularly important when discussing AI because there’s a lot of confusion around the language and a lot of uncertainty around what it really means to people, and it’s about creating clarity. We also talked about looking externally at the competition: not just the traditional competitor organisations, but also challenger organisations and companies such as Amazon, Facebook and Apple, who may be trying to enter different markets.

Jane Ruddock, PwC

From PwC, one of the things that we identified around creating a culture of innovation is to truly mean that. And that it goes directly to the organisation’s core values, which enables people at all levels to be involved in the AI and innovation changes. So that could be anything from making sure that training is available, making sure there are champions involved with all of the different areas where you’re trying to apply AI and embedding it into people’s roles. It’s also really important to make sure that people feel empowered, and that they have the right training and support to carry out their roles around artificial intelligence and around innovation.
