How can we improve collective intelligence in a multilingual context?

16 November 2023

Guest article by Prof. Andy Way, School of Computing, Dublin City University

Most reasonable, rational people would not find the following assertions to be problematic:

A group of people who speak different languages is potentially richer, culturally speaking, as long as its members can communicate. The means of that communication will probably have an important influence on the group’s behaviour. Careful thought is needed to design multilingual groups capable of making better decisions. But necessary though it is, this may not be easy.

Despite widespread claims that English is a lingua franca, this is far from the reality today. In any case, in a European context it flies in the face of our very ideals: in varietate concordia (“united in diversity”), the official Latin motto of the EU, reflects the conviction that the many different cultures, traditions and languages in Europe are a hugely positive asset for the continent. In Europe’s multilingual setup, all 24 official EU languages are granted equal status by the EU Charter and the Treaty on the EU. Moreover, the EU is home to more than 60 regional and minority languages, spoken by some 50 million people, which have been protected and promoted under the European Charter for Regional or Minority Languages since 1992, in addition to migrant languages and various sign languages.

In the European Language Equality (ELE) project, which I coordinate, our aim is to protect this linguistic diversity by promoting a large-scale funding programme over the next decade to ensure that all of Europe’s languages are digitally viable. Our findings over the past two years reveal a very sorry state of affairs: despite the obvious improvements in language technology (LT) since the introduction of methods based on neural networks, language barriers still hamper cross-lingual communication and the free flow of knowledge across borders, and many languages are endangered or on the verge of extinction. On a global scale, the situation is far worse.

Translation technology therefore has a vital role to play in addressing these problems, but its capabilities have been hugely overhyped. While in principle the techniques involved in neural machine translation (NMT) apply to any pair of languages, in practice demands on data availability restrict them to a small subset of the world’s languages, so claims by multinational corporations of “bridging the gap between human and machine translation [quality]” or of reaching “human parity” are massively overblown.
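To make the data point concrete: a modern NMT toolkit will load and run a model for any language pair for which enough parallel text existed to train one, and for such pairs translation takes only a few lines of code. The sketch below assumes the open-source Hugging Face transformers library and a publicly available OPUS-MT English–French checkpoint; both are illustrative choices rather than systems from the projects discussed here, and the point is precisely that no comparable checkpoint exists for most of the world’s languages.

```python
# Minimal sketch: off-the-shelf NMT for a well-resourced language pair.
# The checkpoint name is an illustrative, publicly available OPUS-MT model;
# for most of the world's languages no such pretrained model exists at all.
from transformers import MarianMTModel, MarianTokenizer

def translate(sentences, model_name="Helsinki-NLP/opus-mt-en-fr"):
    """Translate a list of sentences with a pretrained Marian NMT model."""
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

print(translate(["United in diversity."]))
```

The ease of the well-resourced case is exactly what the “human parity” headlines trade on; the hard part is everything this snippet takes for granted.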

In another European project in which we participate, EUComMeet, we are attempting to facilitate multilingual communication between speakers in the context of deliberative democracy. In many European countries, citizens’ assemblies have been set up in response to the challenges besetting liberal democracies. These assemblies are participatory spaces created to improve democratic practice by directly linking citizens with policy makers. Many of the issues currently faced across Europe are similar (e.g. immigration, climate change, the war in Ukraine, the cost of fuel), so discussions on them are taking place in the respective citizens’ assemblies, but in a monolingual context. My team has built 30 NMT engines to allow effective real-time communication across language boundaries. We have road-tested these NMT systems against Google Translate, and the vast majority outperform this well-regarded tool; a sketch of this kind of comparison appears below. The systems are currently undergoing testing with real users across the languages of the project (English, French, German, Irish, Italian and Polish), but initial signs are very positive that the multilingual communicative process can indeed be supported by MT.
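For readers curious what road-testing against a baseline system can look like in practice, the sketch below shows one standard way to compare two sets of MT outputs on a shared test set using corpus-level BLEU via the sacrebleu package. The file names and the choice of metric are hypothetical illustrations, not the EUComMeet evaluation pipeline itself; in practice such automatic scores are complemented by human evaluation, as in the user testing described above.

```python
# Minimal sketch: comparing two MT systems on a shared test set with BLEU.
# File names are hypothetical; sacrebleu is a standard open-source metric tool.
import sacrebleu

def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f]

references = read_lines("test.ref")             # human reference translations
custom_out = read_lines("test.custom.hyp")      # output of the custom NMT engine
baseline_out = read_lines("test.baseline.hyp")  # output of the baseline system

bleu_custom = sacrebleu.corpus_bleu(custom_out, [references])
bleu_baseline = sacrebleu.corpus_bleu(baseline_out, [references])

print(f"Custom engine BLEU: {bleu_custom.score:.1f}")
print(f"Baseline BLEU:      {bleu_baseline.score:.1f}")
```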

In sum, then, real efforts are being made in Europe to facilitate effective communication between speakers of different languages. Allowing people to speak their own languages enriches the process: users are immediately more comfortable, and the mix of cultural and linguistic backgrounds makes for a richer, more beneficial experience for all, with better decision-making and improved outcomes as a result. If this is to be extended to speakers of all European languages – not just those with ample resources, but also languages without a strong written tradition and non-oral languages such as sign languages – then all our languages need to be protected and supported so that they can continue to thrive and their speakers can operate in their language of choice rather than in one imposed on them.

This is far from easy, but the nettle has to be grasped now, before it is too late. LT is playing an increasingly important role in the daily lives of many European citizens. Looking ahead, it is possible to foresee intriguing opportunities and new capabilities in this regard, but also a range of uncertainties and inequalities that may leave several groups increasingly disadvantaged. LT development cannot be allowed to be driven solely by the economic status of a language’s users rather than by sheer demographic demand. If the ELE programme, which calls for LT support for both low- and high-resource languages, is funded, then new LT systems can be developed with increased resources (experts, data, computing facilities etc.) for every European language and domain of application, to the benefit of us all.