Cambridge Science Festival examines the effects and ethics of artificial intelligence

Artificial intelligence features heavily in a series of events covering new technologies at the 2020 Cambridge Science Festival (9–22 March), which is run by the University of Cambridge.

Hype around artificial intelligence, big data and machine learning has reached fever pitch. Drones, driverless cars, films portraying robots that look and think like humans: today, intelligent machines are present in almost all walks of life.

During the first week of the Festival, several events look at how these new technologies are changing us and our world. In AI and society: the thinking machines (9 March), Dr Mateja Jamnik, Department of Computer Science and Technology, considers our future and asks: What exactly is AI? What are the main drivers for the amazing recent progress in AI? How is AI going to affect our lives? And could AI machines become smarter than us? She answers these questions from a scientific perspective and talks about building AI systems that capture some of our informal, intuitive human thinking. Dr Jamnik demonstrates a few applications of this work, presents some of the opportunities it opens up, and considers the ethical implications of building intelligent technology.

Artificial intelligence has also created a lot of buzz about the future of work. In From policing to fashion: how the use of artificial intelligence is shaping our work (10 March), Alentina Vardanyan, Cambridge Judge Business School, and Lauren Waardenburg, KIN Center for Digital Innovation, Amsterdam, discuss the social and psychological implications of AI, from reshaping the fashion design process to predictive policing.

Speaking ahead of the event, Lauren Waardenburg said: "Predictive policing is quite a new phenomenon and gives one of the first examples of real-world 'data translators', which is quite a new and up-and-coming type of work that many organisations are interested in. However, there are unintended consequences for work and the use of AI if an organisation doesn't consider the large influence such data translators can have.

"Similarly, AI in fashion is also a new phenomenon. The feedback of an AI system changes the way designers and stylists create and how they interpret their creative role in that process. The suggestions from the AI system put constraints on what designers can create. For example, the recommendations may be very specific in suggesting the colour palette, textile and style of a garment. This level of nuanced guidance not only limits what designers can create, but it also puts pressure on their self-identification as creative people."

The technology we encounter and use daily changes at a pace that is hard to truly take stock of, with every new device release, software update and social media platform creating ripple effects. In How is tech changing how we work, think and feel? (14 March), a panel of technologists looks at current and emerging mainstream technology to better understand how we think and feel about data and communication. The panel comprises Dr David Stillwell, Lecturer in Big Data Analytics and Quantitative Social Science at Cambridge Judge Business School; Tyler Shores, PhD researcher at the Faculty of Education; Anu Hautalampi, head of social media for the University of Cambridge; and Dex Torricke-Barton, director at the Brunswick Group, who formerly worked on speechwriting and communications for Mark Zuckerberg, Elon Musk, Eric Schmidt and the United Nations. They discuss some of the data and trends that illustrate the impact tech has on our personal, social and emotional lives, as well as ways forward and what the near future holds.

Tyler Shores commented: "One thing is clear: the challenges we face as a result of technology do not necessarily have solutions in other forms of technology, and there can be tremendous value for all of us in reframing how we think about how and why we use digital technology in the ways that we do."

The second week of the Festival considers the ethical concerns of AI. In Can we regulate the internet? (16 March), Dr Jennifer Cobbe, the Trust & Technology Initiative, Professor John Naughton, Centre for Research in the Arts, Social Sciences and Humanities, and Dr David Erdos, Faculty of Law, ask: How can we combat disinformation online? Should internet platforms be responsible for what happens on their services? Are platforms beyond the reach of the law? Is it too late to regulate the internet? They review current research on internet regulation, as well as ongoing government proposals and EU policy discussions for regulating internet platforms. One argument put forward is that regulating internet platforms is both possible and necessary.

When you think of artificial intelligence, do you get excited about its potential and all the new possibilities? Or do you have concerns about AI and how it will change the world as we know it? In Artificial intelligence, the human brain and neuroethics (18 March), Tom Feilden, BBC Radio 4, and Professor Barbara Sahakian, Department of Psychiatry, discuss these ethical concerns.

In Imaging and vision in the age of artificial intelligence (19 March), Dr Anders Hansen, Department of Applied Mathematics and Theoretical Physics, also examines the ethical concerns surrounding AI. He discusses new developments in AI and demonstrates how systems designed to replace human vision and decision processes can behave in very non-human ways.

Dr Hansen said: "AI and humans behave very differently given visual inputs. A human doctor presented with two medical images that, to the human eye, are identical will provide the same diagnosis in both cases. The AI, however, may give 99.9% confidence that the patient is ill based on one image, while for the other (which looks identical) give 99.9% confidence that the patient is well.

"Such examples demonstrate that the reasoning the AI is doing is completely different to the human's. The paradox is that, when tested on big data sets, the AI is as good as a human doctor at predicting the correct diagnosis.

"Given this non-human behaviour that cannot be explained, is it safe to use AI for automated diagnosis in medicine, and should it be implemented in the healthcare sector? If so, should patients be informed about the non-human behaviour and be able to choose between AI and doctors?"
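The behaviour Dr Hansen describes matches the instability, or "adversarial example", phenomenon studied in machine learning: a perturbation far too small for a human to notice can be aligned with a model's decision boundary so that thousands of tiny per-pixel changes accumulate into a complete reversal of the output. As a rough illustration only (this is not material from the talk; the toy model and all numbers here are invented for the purpose), the following Python sketch shows the effect on a simple linear classifier:

```python
# A minimal sketch, not code from the talk: a toy linear classifier whose
# confidence flips completely between two visually indistinguishable images.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64x64 greyscale "scan" with pixel values in [0, 1].
image = rng.random((64, 64))

# Stand-in for a trained model: a fixed linear decision boundary, with the
# bias calibrated so the clean image scores +7 (about 99.9% "ill").
weights = rng.standard_normal((64, 64))
bias = 7.0 - float(np.sum(weights * image))

def p_ill(img):
    """The model's confidence that the patient is ill: sigmoid of a linear score."""
    score = float(np.sum(weights * img)) + bias
    return 1.0 / (1.0 + np.exp(-score))

# Fast-gradient-sign-style step: move each pixel by at most 0.005 (invisible
# to the eye) in the direction that lowers the score; across 4,096 pixels
# these tiny changes accumulate into a swing of roughly 16 in the score.
epsilon = 0.005
perturbed = image - epsilon * np.sign(weights)

print(f"clean image    : P(ill) = {p_ill(image):.4f}")      # ~0.9991
print(f"perturbed image: P(ill) = {p_ill(perturbed):.4f}")  # ~0.0001
```

In this toy setup the clean image is classed as ill with 99.9% confidence, while the perturbed copy, in which no pixel has moved by more than 0.005 on a zero-to-one scale, is classed as well with comparable confidence: exactly the paradox Dr Hansen's example turns on.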

A related event explores the possibilities of creating AI that acts in more human ways. In Developing artificial minds: joint attention and robotics (21 March), Dr Mike Wilby, lecturer in Philosophy at Anglia Ruskin University, describes how we might develop our distinctive suite of social skills in artificial systems to create benign AI.

"One of the biggest challenges we face is to ensure that AI is integrated into our lives in such a way that, in addition to being intelligent and partially autonomous, it is also transparent, trustworthy, responsive and beneficial," Dr Wilby said.

He believes the best way to achieve this is to integrate AI into human worlds in a way that mirrors the structure of human development. Humans possess a distinctive suite of social skills that partly explains the uniquely complex and cumulative nature of the societies and cultures we live in. These skills include the capacity for collaborative plans, joint attention and joint action, as well as the learning of norms of behaviour.

Drawing on recent ideas and developments in philosophy, AI and developmental psychology, Dr Wilby examines how these skills develop in human infants and children, and suggests that this gives us an insight into how we might develop benign AI that is intelligent, collaborative, integrated and benevolent.

Further related events include:

Bookings open on Monday 10 February at 11am.

Image: Artificial intelligence, by Gerd Altmann

