
Language Technology and Translation: Translation careers in the brave new world of language models
Time: Sep 09, 2024

Abstract:

With the rise in the quality of output from machine translation, Large Language Models (LLMs) and other AI tools, the future of translation careers appears to be under threat; some even proclaim the arrival of the Singularity, the point at which machines become better than humans. In this talk I will explore the trajectory of the last hundred years of research into language models and suggest pragmatic and theory-driven solutions for the future of translation careers. The pragmatic solutions are based on the complementarity of LLMs and the expertise of human translators, leading to a collaborative space in which AI tools and human translators work in tandem to overcome language barriers and facilitate global communication. The theory-driven solutions explore the inherent limits of how LLMs reflect the human use of language for the purpose of enabling communication in society. A better understanding of language functions helps clarify when LLMs work and when they fail.

Speaker: Professor Serge Sharoff

Date: September 10

Time: 10:00-11:30 am, Beijing Time

Venue: ICSA Auditorium 136

Speaker’s Bio: Artificial Intelligence, and more specifically Large Language Models such as ChatGPT, have recently made a profound impact on how we interact with computers. Fundamental research in this area is at the core of my expertise: I have been working on it since my PhD in the 1990s, which developed a language model for Information Extraction. Language models were small at the time, but the underlying idea was the same: linking language to meaning. Since then, I have been researching how to better understand representative corpora collected automatically from the Web, as well as how to use them to improve language technology for translators and everyday users. On a more general level, I am interested in the interpretability of language models, with the aim of determining whether a model makes the right decisions for the right reasons.

Organizer: Institute of Corpus Studies and Applications