AI: One of society’s great opportunities or the next big threat to privacy and control?

Girl with digital imagery and symbols across her face and eyes.

This article originally appeared in the Spring/Summer 2019 issue of InTouch.

Everything we love about civilisation is a product of intelligence, so amplifying human intelligence with artificial intelligence (AI) has the potential to help civilisation flourish like never before. Right? The buzz around AI is growing – bots are replacing humans to deliver personalised communications and the era of surgical AI has commenced. However, big thinkers including Elon Musk and Bill Gates have expressed concern about the risks posed by AI.

Who is right? Is AI the answer to all our problems or the next big risk to privacy and control over our own lives? Here we speak to two leading King’s academics with very different takes on the development of AI. We also speak to a King’s Clinician-Scientist who is using robotic surgery to improve outcomes and recovery time for cancer patients.

Professor Michael Luck

Michael Luck is Professor of Computer Science and Executive Dean of the Faculty of Natural & Mathematical Sciences at King’s. According to Professor Luck, AI has been around for a long time and has the potential to change our lives for the better.

‘I’ve been working on AI for more than 20 years, like others at King’s. The very significant role of AI has now been recognised, with the creation of the new government AI Council and Office for AI, which aim to support the UK’s AI sector and maintain its leading position.

AI is already having an impact on our lives. Millions of euros are traded by machines every day in financial markets, our leisure time is increasingly guided by machines predicting what we watch and AI software is managing processes ranging from logistics in manufacturing to chatbots interacting with customers. This impact will only increase, with AI touching every aspect of society, improving efficiency and effectiveness.

If we get this right, and we must, AI’s primary area of impact will be on automating time-sucking tasks that are dirty, dull or dangerous. What I think people worry about most is the effect on employment and the economy, but in my view it will mostly be positive.

There are some incredibly sophisticated forms of AI being developed. AI is especially interesting to me not just because of the different application areas, but also because of the many disciplines that make up AI techniques and technologies – computer science, philosophy, psychology, economics and many others. There are also massive potential benefits for many King’s research areas – and that can only be a good thing.’

Professor Luck is aware of the reservations that some people have:

‘AI is sometimes viewed as a scary, disturbing future scenario in which we as humans will lose control and machines take over the world. In fact, what is far more likely to occur in the next few decades is the development of systems that combine the capabilities of humans and machines – in other words, ‘augmented’ intelligence. This means we will have the best of both worlds.’

King’s is already working on systems to ensure that AI is not misunderstood or misused. ‘In the Department of Informatics at King’s we are looking at not only the technical possibilities, but the wider societal impact. As a result, we are developing a vision for what we call ‘safe and trusted AI’. We are involving people from many fields, not just those that are traditionally tech-focused.

Our safe and trusted vision is important. First, AI should be safe in that we need to provide some degree of assurance around the technologies. There is, rightly, a pressure to ensure that software does what we intend it to. Second, and no less important, AI should be trusted because we need to have confidence in the decisions made by AI systems. As the sophistication of AI increases, this becomes ever more important.’

So, according to Professor Luck, the potential of AI is hugely positive and the risks, sometimes overblown, are already being managed:

‘Our work on AI, here in the UK and at King’s, is leading the world. While there are always areas for improvement, I am optimistic about the future and the positive role AI can play in it.’

Dr Christine Aicardi

Dr Christine Aicardi is a Senior King’s Research Fellow in the Human Brain Project Foresight Laboratory. The Human Brain Project aims to put in place cutting-edge research infrastructure that will allow scientific and industrial researchers to advance knowledge in neuroscience, brain-inspired computing and brain-related medicine. The lab evaluates the potential social and ethical implications of the knowledge and technologies produced by the Human Brain Project. Dr Aicardi believes that, though there are some fundamental misunderstandings around AI, it does bring inherent risks that must be addressed and managed.

‘In the media, aspects of AI can get blurred – in particular, the fundamental difference between specialised AI, which currently exists, and the projected, much fantasised advent of artificial general intelligence. This means that there is a reduced understanding of what AI actually entails. As a result of this blurring, some think the main goal of AI research is to replace human intelligence; my own view is that it is more fruitful to think how it could complement it. The expansion of specialised AI technology raises important ethical concerns. For example, computer vision and facial recognition techniques can be misused when manipulating images and videos. There have also been some contentious military uses of autonomous and semi-autonomous weapons – for example, research into the use of drones to locate and attack targets without any human involvement. Many such uses of AI could be problematic if not properly regulated.’

Dr Aicardi believes that a lack of understanding means that we are not focusing enough on the real risks that current AI presents.
‘There are a few scaremongers who like to talk about the day robots develop consciousness and become more intelligent than us, and that leads to ‘taking over the world’ apocalyptic scenarios. That actually distracts us from the problematic aspects of the AI we already have. AI requires masses of data. There is currently a lot of concern about how data is collected and used. This is especially worrying when it relates to information about vulnerable populations. For example, there was widespread shock when it was recently reported that Facebook paid children as young as 13 to install software which ‘spied’ on them, without parental permission.’

Dr Aicardi believes that ethical questions surrounding AI should be addressed when people are starting out in their careers:
‘It is very important that people studying computer science and engineering should receive ethical training so they can put technical capabilities in the context of the human dimension.’

So, while Dr Aicardi, like Professor Luck, can see the potential of AI, she believes that more needs to be done to manage its development:

‘While AI may have lots of potential, we must remember that it can also serve the interests of powerful people and organisations. This means that it can be used for purposes other than the common good – financial gain, political influence, or something else. In the end, I think we need to go beyond soft regulation to also deploy hard laws to ensure that AI is managed properly.’

AI in Practice

Prokar Dasgupta has been a pioneer in the field of robotic-assisted technology for almost 20 years. He is Chair of Robotic Surgery & Urological Innovation at King’s and is an Hon. Consultant Urological Surgeon at Guy’s and St Thomas’.

Professor Prokar Dasgupta

Professor Dasgupta’s work is already having a hugely beneficial impact – meaning shorter recovery time and less pain for cancer patients.

‘About 18 years ago, we conducted a trial because I felt we needed to improve patient outcomes. We assessed the effectiveness of a robot at removing kidney stones in the first ever randomised controlled trial of robot-assisted technology. Machine learning (ML) is a subset of AI that uses decision-making computer algorithms to learn from and respond to specific data.

The trial showed the robot to be more accurate at placing a needle into a kidney, but slightly slower than a human surgeon. We then worked with instruments with ‘wrists’ and 3D vision techniques, which gave us magnification of more than 10 times and the enormous benefit of no tremor. The adoption of robotic surgery and AI is driven by a desire to improve patient outcomes. For example, a prostate recognition algorithm could enable a machine to learn whether or not an image shows prostate cancer, reducing the variability in MRI readings by radiologists. Video recordings of surgeons performing RARP [robot-assisted radical prostatectomy] can now be converted through a ‘black box’ into automated performance metrics, which show that the highest-volume surgeons are not necessarily those with the best outcomes.*

Many of our trials have been successful, but you must always temper success with pragmatism – we also learn from what doesn’t work. For example, while we achieved excellent outcomes in areas such as prostate and kidney cancer, we had less success in reducing complications while treating bladder cancer.

And of course we involve patients in our work – they attend our trial boards and give advice. When we develop new techniques, we ask patients “Are these procedures you are willing to undergo?”. Their answers matter.’


Professor Dasgupta’s work shows the potential of AI to make real and meaningful improvements to people’s lives. Meanwhile, Professor Luck’s work shows that AI can be applied to many aspects of society – making our lives easier and more efficient. However, as Dr Aicardi attests, with any emerging technology it is important that politicians, scientists and society keep a watchful eye on what these exciting developments are used for. It is reassuring to know that King’s plays a leading role both in the practical application of AI and in addressing its ethical implications. For now, we remain cautiously optimistic about the future of AI.

What is your opinion on the AI debate? Share your views with #KingsDebate