Experts Explore the Next Generation of Human-Computer Interaction

05 November 2021

AI applications “speak” to people, answering questions when a digital voice assistant is called by name. They run the chatbots that handle customer-service issues people have with companies. But what is next for human-computer interaction and voice technology? The latest instalment in the ADAPT Centre’s Creating Tomorrow’s World webinar series focused on human-computer interaction and featured three experts who explored developments in the field.

Dr Benjamin Cowan, Associate Professor at UCD’s School of Information & Communication Studies and Co-Principal Investigator in the SFI-funded ADAPT Centre; Alan Coleman, co-founder and CEO of Sweepr, a platform that helps businesses provide personalised customer care; and Dr Amelia Kelly, VP of Speech Technology at Soapbox Labs, contributed to the discussion.

Speaking at the event, Dr Amelia Kelly said: “At Soapbox Labs we build tools that deliver fun, frictionless experiences for kids. Our research shows that adult voice tech fails for children’s speech. In the world that children inhabit, we want to do things that are useful for them. We focus on the gaming and education markets for children. The companies we sell to can then use the insights to create really engaging technologies for children. Privacy is crucial for us, and one interesting area is high-accuracy, state-of-the-art learning on microchips – no data transfer to the cloud is needed. The data remains on the toy and allows children to have a really engaging experience.”

Alan Coleman followed, bringing the audience into his company’s efforts to drive effective engagement between people and voice-enabled technologies. Speaking at the event, Alan said: “We started Sweepr under the premise of solving technical support in the home. This is a multifaceted challenge in which we need a plethora of contexts. The principles of how digital interactions are built require a much more subtle and personal model for developing personalised experiences. We are evolving our model, and it attempts to provide a truly personalised experience for customers, using context to try to understand what the customer might need. The future of digital interaction needs to be hyper-personalised and individualised. These types of interactions will delight customers because they are highly effective, powerful and available 24/7.”

The final speaker on the day was Dr Benjamin Cowan, who spoke about research developments in speech-based interfaces and explored where the field is heading. “I have an obsession with speech and dialogue! The interaction with speech agents is fascinating from a psychological, dialogue-based point of view. Our research gives industry the tools to design these systems properly. As speech agents and interfaces become commonplace, with people using speech on a regular basis to interact with devices, it is important to understand the context in which people use these devices. It is clear that the interaction is user-led; however, dialogue is a joint activity, so what does the future hold for speech agents and speech interactions? This moves from an intelligent assistant, where an agent is used at a particular moment in time, to an intelligent collaborator, where the system can try to sort out problems and act as a decision-making aid. Our research in ADAPT is taking voice assistants to voice collaborators, making them a social actor or a member of the team within the space.”

The full webinar is available on YouTube: https://youtu.be/-a7HYLnolJk 

ADAPT’s Creating Tomorrow’s World webinar series runs on a monthly basis. The next webinar will take place in December and will be listed on the ADAPT Centre website shortly.