Location: Trinity College Dublin
Level: PhD

Post Status: Specific Purpose Contract – Full-time (14 months if started 1st Dec 2020)

Research Group / Department / School: Sigmedia Research Group, ADAPT Centre, School of Engineering, Trinity College Dublin

Location: Electronic & Electrical Engineering, School of Engineering, Trinity College Dublin

Reports to: Principal Investigator, Prof. Naomi Harte

Salary: Between €43,411 and €50,030 per annum (depending on experience)

Hours of Work: 37 hours per week (full time)

Closing Date: 12 Noon (GMT), 16th November (or until filled)

Click here for the full job description & application procedure
Click here to apply.

Whilst the role can be started remotely in December 2020 if practical, the ideal candidate will be in Dublin, Ireland by January 2021 to allow working in our lab in TCD. Note that the PI is open to discussing work practices that include a mixture of lab and home working.

Post Summary

The Science Foundation Ireland ADAPT Research Centre (adaptcentre.ie) seeks to appoint a Research Fellow in Multimodal Interaction. The successful candidate will support research on online interaction in teaching scenarios, in the context of the recently funded SFI COVID-19 Rapid Response project RoomReader, led by Prof. Naomi Harte in TCD and Prof. Ben Cowan in UCD. The candidate will work with a team to drive research into multimodal cues of engagement in online teaching scenarios. The work involves collaboration with Microsoft Research Cambridge and Microsoft Ireland.

The candidate should have extensive experience in speech-based interaction and in modelling approaches using deep learning with multimodal signals, e.g. linguistic, audio, and visual cues. The candidate will also be responsible for supporting research in a number of areas, including:

  • Identifying and understanding multimodal cues of engagement in speech-based interaction
  • Deep learning architectures for multimodal modelling of engagement in speech interactions
  • Application and evaluation of modelling approaches to the specific case of online teaching scenarios

Thus, the ideal candidate will typically have specific expertise in speech interaction, signal processing and deep learning. Reporting to a Principal Investigator, the successful candidate will work within a larger group of Postdoctoral Researchers, PhD students and Software Developers. They will have exposure to all aspects of the project lifecycle, from requirements analysis to design, coding, testing and face-to-face demonstrations, including with our industry partners Microsoft Research and Microsoft Ireland.

The successful candidate will work day-to-day alongside the best and brightest talent in speech and language technologies and video processing in the Sigmedia Research Group. The wider ADAPT Research Centre will give exposure to a broad range of technologies including data analytics, adaptivity, personalisation, interoperability, translation, localisation and information retrieval. As a university-based research centre, ADAPT also strongly supports continuous professional development and education. In this role you will develop as a researcher, both technically and scientifically. In addition, ADAPT will support candidates in enhancing their confidence, leadership skills and communication abilities.

Standard Duties and Responsibilities of the Post

  • Identify and analyse research papers in online human interaction scenarios, specifically those relevant to online teaching
  • Identify existing datasets suitable for baseline analysis of multimodal interaction
  • Support the design and capture of a new multimodal data corpus (the actual capture is conducted by a Research Assistant on the project)
  • Develop and adapt deep learning architectures to multimodal interaction scenarios, subsequently adapting the approaches to the specifics of online teaching interactions
  • Liaise with engineering and HCI experts to refine and influence approaches to the project at all levels
  • Report regularly to the PI of the project, and interact regularly with other team members to maintain momentum in the project
  • Dataset recording and subsequent editing and labelling for project deployment
  • Publish and present results from the project in leading journals and conferences

Funding Information

The position is funded through the SFI COVID-19 Research Call 2020.

Person Specification

The successful candidate will have broad experience in deep learning architectures applied to speech-based interaction. The successful candidate is expected to:

  • Have a thorough understanding of speech-based interaction, including linguistic, verbal, non-verbal and visual cues
  • Be expert in deep-learning applied to speech processing
  • Be skilled at taking disparate research ideas and drawing innovative conclusions or seeing new solutions
  • Have excellent interpersonal skills
  • Be highly organised in their work, with an ability to work remotely if necessary

Qualifications

Candidates appointed to this role must have a PhD in Engineering or Computer Science, or a closely related field.

Knowledge & Experience 

Essential

  • Understanding of multimodal cues in speech-based interaction
  • Experience of the development of deep learning architectures for speech processing
  • Familiarity with running large-scale experiments, e.g. on a high-performance compute farm
  • Publication track record commensurate with career stage in high quality conferences or journals

Desirable

  • Familiarity with the MS Teams environment
  • Experience in post-production tools for video editing
  • Experience mentoring junior team members
  • Record of publishing open-source code

Skills & Competencies

  • Excellent written and oral proficiency in English (essential)
  • Good communication and interpersonal skills both written and verbal
  • Proven ability to prioritise workload and work to exacting deadlines
  • Flexible and adaptable in responding to stakeholder needs
  • Enthusiastic and structured approach to research and development
  • Excellent problem-solving abilities
  • Desire to learn about new products and technologies, and to keep abreast of new technical and research developments

Benefits

  • Competitive salary and equity
  • Computer and peripherals of your choice
  • A fast-paced environment with impactful work
  • Pension
  • Day Nursery
  • Travel Pass Scheme
  • Bike to Work Scheme
  • Employee Assistance Programme
  • Sports Facilities
  • 22 days of Annual Leave
  • Paid Sick Leave
  • Training & Development
  • Staff Discounts

Sigmedia Research Group

The Signal Processing and Media Applications (aka Sigmedia) Group was founded in 1998 in Trinity College Dublin. Originally focused on video and image processing, the group today spans research across all aspects of media – video, images, speech and audio. Prof. Naomi Harte leads the Sigmedia research endeavours in human speech communication. The group has active research in audio-visual speech recognition, evaluation of speech synthesis, multimodal cues in human conversation, and birdsong analysis. The group is interested in all aspects of human interaction, centred on speech. Much of our work is underpinned by signal processing and machine learning, but we also have researchers with backgrounds in the linguistic and psychological aspects of speech processing to keep us grounded.

Background on ADAPT

The ADAPT Centre, a world-leading SFI Centre, is Ireland's global centre of excellence for digital content technology, funded through Science Foundation Ireland's Centres programme. ADAPT combines the expertise of over 300 researchers across eight Higher Education Institutes (Trinity College Dublin, Dublin City University, University College Dublin, Technological University Dublin, Cork Institute of Technology, Athlone Institute of Technology, Maynooth University and National University of Ireland, Galway) with that of its industry partners to produce ground-breaking digital content innovations. The ADAPT Centre executive function is co-hosted between Trinity College Dublin and Dublin City University. ADAPT's researchers have collectively won more than €100m in funding and have a strong track record of transferring world-leading research and innovations to more than 140 companies. ADAPT partners are successfully advancing the frontiers of Artificial Intelligence (AI), content analysis, machine translation, personalisation, e-learning/education, media technologies, virtual and augmented reality, and spoken interaction, as well as driving global standards in content technologies.