Dublin, 24 May: The paper, titled ‘End to End Sign Language Translation via Multitask Learning’, was accepted for presentation at the International Joint Conference on Neural Networks (IJCNN). The paper proposes a novel architecture that jointly performs continuous sign language recognition (CSLR) and sign language translation in an end-to-end fashion. Existing tools for Sign Language Translation predominantly rely on a two-step process of CSLR followed by gloss-to-text translation. This is joint work between ADAPT Academic Dr. Mohammed Hasanuzzaman (Munster Technological University) and IIT Patna.
For this research, the team extended the ordinary Transformer decoder with two channels to support multitasking, where each channel is devoted to solving a particular task. To control the memory footprint of the model, the channels were designed to share most of their parameters with each other, while each channel still maintains a dedicated set of parameters that is fine-tuned for its own task. The evaluation and analysis reported in the paper indicate that this multitask decoder was successful, enabling the model to outperform previous Sign Language Translation models.
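The shared-parameter, two-channel idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the class name, dimensions, and the choice of a single shared projection feeding two small task heads (one for CSLR gloss prediction, one for text translation) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultitaskDecoderSketch:
    """Hypothetical sketch of a two-channel multitask decoder.

    Both channels (CSLR and translation) reuse one shared projection,
    which holds most of the parameters; each channel keeps only a
    small dedicated output head of its own.
    """

    def __init__(self, d_model=8, gloss_vocab=5, text_vocab=7):
        # Shared parameters: the bulk of the model, used by both channels.
        self.W_shared = rng.standard_normal((d_model, d_model))
        # Dedicated per-channel parameters: small task-specific heads.
        self.W_gloss = rng.standard_normal((d_model, gloss_vocab))
        self.W_text = rng.standard_normal((d_model, text_vocab))

    def forward(self, h):
        # The shared representation is computed once and reused,
        # which is what keeps the memory footprint under control.
        shared = np.tanh(h @ self.W_shared)
        gloss_logits = shared @ self.W_gloss  # CSLR channel
        text_logits = shared @ self.W_text    # translation channel
        return gloss_logits, text_logits

decoder = MultitaskDecoderSketch()
hidden = rng.standard_normal((3, 8))  # 3 decoder positions, d_model = 8
gloss, text = decoder.forward(hidden)
print(gloss.shape, text.shape)  # (3, 5) (3, 7)
```

In this toy setup the shared matrix holds 64 of the 160 weights per-channel models would each need in full, so adding the second channel costs only its small head rather than a whole second decoder.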
The paper was accepted for presentation at the International Joint Conference on Neural Networks (IJCNN), the premier international conference on the theory, analysis, and applications of neural networks. The conference will be held on 18–23 June in Queensland, Australia. Visit the website here.
Access the paper here.