New Episode of ADAPT Radio’s ‘HumanAIse’ Focuses on AI and Human Interaction

27 February 2023

ADAPT Radio’s HumanAIse series continues this month with a discussion on AI and Human Interaction with Dr Giovanni di Liberto and Dr Andrea Patane.

How do smart devices understand what we’re saying, and what happens if they get something wrong? This month ADAPT Radio focuses on the interaction between machines and our voices, as well as how we can build even more intelligent systems for complex interaction in the future. Joining the podcast are ADAPT TCD Researcher Dr Giovanni di Liberto, who researches human perception, and ADAPT TCD Assistant Professor Dr Andrea Patane, whose work investigates what we can learn when machine learning fails.

Dr Patane kicked off the podcast with an informative breakdown of how computers and intelligent systems perceive what humans are saying. A few years ago, the machine learning ‘models’ (i.e. files trained to recognise certain types of patterns) inside our devices worked more like a human expert would: to recognise a person’s voice, the model had to become an expert on how that particular voice worked. Devices were trained on personal speech patterns and built a model to determine precisely what you were trying to say. Now, the majority of models are instead trained on huge amounts of pre-recorded data.
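To make the idea of a trained ‘model’ more concrete, here is a minimal, hypothetical sketch (not taken from the podcast) of training a pattern-recognition model on pre-recorded, labelled examples rather than on a single speaker. The “audio features” and word labels below are random stand-ins; a real system would use features extracted from recordings, such as spectrograms.

```python
# Hypothetical sketch: a "model" learns patterns from pre-recorded,
# labelled examples, then applies them to a new, unseen input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each row is a feature vector from a pre-recorded utterance,
# and each label says which of two words ("no" = 0, "yes" = 1) was spoken.
features = rng.normal(size=(1000, 20))
labels = rng.integers(0, 2, size=1000)

# "Training" means finding patterns that link features to labels.
model = LogisticRegression(max_iter=1000).fit(features, labels)

# A new utterance is then classified using the learned patterns.
new_utterance = rng.normal(size=(1, 20))
print(model.predict(new_utterance))
```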

Following this discussion, Dr di Liberto provided an insight into the similarities between human and machine learning. He described a machine learning model as something trained on a large amount of data that can be thought of as a ‘black box’ we need to understand, much like the complexity of the human or animal brain. Further comparisons can be drawn between humans and models, particularly where machine learning takes inspiration from neuroscience, as in the development of neural networks; these ideas circulate in human culture and are recycled into the progress of machine learning methods. At their core, however, the two are fundamentally different. Dr Patane approaches this by applying particular strategies and approaches to interpret and understand how a model is performing certain tasks.

Both researchers agree that the ethical considerations of these technologies should be reviewed carefully. Dr Patane, in particular, focuses on understanding how a model can fail and what steps can be taken to prevent this in the future. For example, a neural network can learn to recognise and identify traffic signs from millions of examples, but if a single pixel of an image is changed into something the network has not seen before (e.g. turned bright green, as Dr Patane suggests), we do not know how the network will react. This poses a number of ethical questions, as these vulnerabilities could potentially be exploited. If this kind of technology is used in a healthcare setting, we want it to be as safe as possible for the patient, which is why research like Dr Patane’s is so important.
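As a rough illustration of the kind of failure described here (a hypothetical sketch under simplified assumptions, not Dr Patane’s actual experiments), the snippet below trains a small neural network on synthetic stand-ins for sign images and then pushes a single pixel far outside anything seen in training. Nothing guarantees the prediction stays sensible.

```python
# Hypothetical sketch: a classifier trained on many "sign" images, then
# shown the same image with one pixel pushed outside the training range.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for traffic-sign images: 64 pixels, two classes.
images = rng.uniform(0.0, 1.0, size=(2000, 64))
labels = (images[:, :32].mean(axis=1) > 0.5).astype(int)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(images, labels)

sign = images[0].copy()
print("original prediction:", net.predict([sign])[0], net.predict_proba([sign])[0])

# "Turn one pixel bright green": set a single value far outside the
# range the network was trained on. Its behaviour here is unpredictable.
sign[10] = 100.0
print("perturbed prediction:", net.predict([sign])[0], net.predict_proba([sign])[0])
```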

The innovative research Dr di Liberto is undertaking, in collaboration with a Danish company that produces hearing devices, highlights the beneficial work being done to improve hearing aids with these technologies. Many people experience some form of hearing loss during their lives, and this can often be an indication of developing dementia. In one aspect of his work, Dr di Liberto develops data analysis methods and applies them to brain data to identify the neural processes responsible for transforming a sensory stimulus into its abstract meaning. The benefits of these kinds of technologies for the quality of life of many people are outstanding. Another development Dr di Liberto sees in the future is the accessibility of machine learning for other professionals: even now, TCD offers machine learning modules oriented toward professionals who may deal with machine learning directly and will need to understand how to read its results.

Finally, drawing the podcast to a close, Dr Patane emphasises that machine learning models are statistical models by nature, built on mathematical assumptions about the world. As soon as those assumptions no longer hold, we do not know what will happen. This highlights the importance of machine learning research, like Dr Patane’s and Dr di Liberto’s, in uncovering the limitations of these models so we can build even more intelligent systems for complex interactions in the future.
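A simple, hypothetical illustration of that closing point (again, a sketch rather than anything from the episode): the example below fits a straight-line model to data from a narrow range where a linear assumption roughly holds, then queries it far outside that range, where the assumption silently breaks down.

```python
# Hypothetical sketch: a statistical model is only as good as its assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def reality(x):
    # The "true" relationship: linear near zero, but saturating at +/-2.
    return np.clip(2 * x, -2.0, 2.0)

# Training data only covers a narrow range where the relationship is linear,
# so assuming a straight line seems perfectly reasonable.
x_train = rng.uniform(-1, 1, size=(200, 1))
y_train = reality(x_train[:, 0]) + 0.05 * rng.normal(size=200)

model = LinearRegression().fit(x_train, y_train)

# Inside the training range the assumption holds and predictions look fine...
print("x = 0.5 ->", model.predict([[0.5]])[0], "reality:", reality(0.5))

# ...far outside it the assumption fails, and the model gives confident but
# badly wrong answers without any warning.
print("x = 10  ->", model.predict([[10.0]])[0], "reality:", reality(10.0))
```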

For further insight into the interactions of humans and AI, catch HumanAIse on SoundCloud, iTunes, Spotify, and Google Podcasts.

ADAPT Radio: HumanAIse is ADAPT’s newest podcast series providing an in-depth look at the future of AI, automation and the implications of entrusting machines with our most sensitive information and decisions.