The Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)

@ ACL 2024 (August 11–16, 2024)

Bangkok, Thailand

SCOPE


Building on the success of past low-resource machine translation (MT) workshops at AMTA 2018, MT Summit 2019, AACL-IJCNLP 2020, AMTA 2021, COLING 2022 and EACL 2023, we introduce the LoResMT 2024 workshop at ACL 2024. In the past few years, MT performance has improved significantly. With the development of new techniques such as multilingual translation and transfer learning, the use of MT is no longer a privilege reserved for users of popular languages. Consequently, there has been increasing interest in the community in expanding coverage to more languages with different geographical presences, degrees of diffusion and digitalization. However, the goal of increasing MT coverage for users speaking diverse languages is limited by the fact that MT methods demand huge amounts of data to train quality systems, which remains a major obstacle to developing MT systems for low-resource languages. Therefore, developing comparable MT systems with relatively small datasets is still highly desirable.


In addition, despite the rapid development of MT technologies, MT systems still rely on several NLP tools to pre-process human-generated text into the forms required as input and to post-process MT output into proper textual form in the target language. This is especially true for systems involving low-resource languages. These NLP tools include, but are not limited to, word tokenizers/de-tokenizers, word segmenters, morphology analyzers, etc. The performance of these tools has a great impact on the quality of the resulting translation. There has so far been only limited discussion of these NLP tools, their methods, their role in training different MT systems, and the extent of their support for the many languages of the world.


The workshop provides a discussion panel for researchers working on MT systems/methods for low-resource and under-represented languages in general. We would like to help review the state of MT for low-resource languages and define the most important research directions. We also solicit papers dedicated to supplementary NLP tools that are used for any language, and especially for low-resource languages. Overview papers of these NLP tools are very welcome. It would be beneficial if the evaluations of these tools in research papers included their impact on the quality of MT output.

TOPICS

We are highly interested in (1) original research papers, (2) review/opinion papers, and (3) online systems on the topics below; however, we welcome all novel ideas covering research on low-resource languages.

- COVID-related corpora, their translations and corresponding NLP/MT systems
- Neural machine translation for low-resource languages
- Work that presents online systems for practical use by native speakers
- Word tokenizers/de-tokenizers for specific languages
- Word/morpheme segmenters for specific languages
- Alignment/Re-ordering tools for specific language pairs
- Use of morphology analyzers and/or morpheme segmenters in MT
- Multilingual/cross-lingual NLP tools for MT
- Corpora creation and curation technologies for low-resource languages
- Review of available parallel corpora for low-resource languages
- Research and review papers of MT methods for low-resource languages
- MT systems/methods (e.g. rule-based, SMT, NMT) for low-resource languages
- Pivot MT for low-resource languages
- Zero-shot MT for low-resource languages
- Fast building of MT systems for low-resource languages
- Re-usability of existing MT systems for low-resource languages
- Machine translation for language preservation

SUBMISSION INFORMATION


We are soliciting two types of submissions: (1) research, review, and position papers and (2) system demonstration papers. For research, review and position papers, the length of each paper should be at least four (4) and not exceed eight (8) pages, plus unlimited pages for references. For system demonstration papers, the limit is four (4) pages. Submissions should be formatted according to the official ACL 2024 style templates (Overleaf). Accepted papers will be published online in the ACL 2024 proceedings and will be presented at the conference.


Submissions must be anonymized and should be made through the provided submission system. Papers that have been or will be submitted to other venues must be declared as such, and must be withdrawn from those venues if accepted and published at LoResMT. Reviewing will be double-blind. Authors of accepted papers are expected to present their work in person at ACL 2024. Papers should be submitted in PDF via the LoResMT OpenReview site.


We encourage authors to cite related papers written in ANY language, as long as both the original bibliographic items and their corresponding English translations are provided.


Registration is handled by the main conference (https://2024.aclweb.org/).

ORGANIZING COMMITTEE (LISTED ALPHABETICALLY)


Atul Kr. Ojha

Chao-Hong Liu

Ekaterina Vylomova

Flammie Pirinen

Jade Abbott, Retro Rabbit

Jonathan Washington

Nathaniel Oco

Valentin Malykh

Varvara Logacheva

Xiaobing Zhao

PROGRAM COMMITTEE (LISTED ALPHABETICALLY)


Abigail Walsh, ADAPT Centre, Dublin City University, Ireland

Alberto Poncelas, Rakuten, Singapore

Alina Karakanta, Leiden University

Amirhossein Tebbifakhr, Fondazione Bruno Kessler

Anna Currey, Amazon Web Services

Aswarth Abhilash Dara, Walmart Global Technology

Arturo Oncevay, University of Edinburgh

Atul Kr. Ojha, DSI, University of Galway & Panlingua Language Processing LLP

Barry Haddow, University of Edinburgh

Bharathi Raja Chakravarthi, University of Galway

Bogdan Babych, Heidelberg University

Chao-Hong Liu, Potamu Research Ltd

Constantine Lignos, Brandeis University, USA

Daan van Esch, Google

Diptesh Kanojia, University of Surrey, UK

Duygu Ataman, University of Zurich

Ekaterina Vylomova, University of Melbourne, Australia

Eleni Metheniti, CLLE-CNRS and IRIT-CNRS

Flammie Pirinen, UiT The Arctic University of Norway, Tromsø

Kalika Bali, MSRI Bangalore, India

Koel Dutta Chowdhury, Saarland University (Germany)

Jade Abbott, Retro Rabbit 

Jasper Kyle Catapang, University of the Philippines

Jindřich Libovický, Charles University

John P. McCrae, DSI, University of Galway

Liangyou Li, Noah’s Ark Lab, Huawei Technologies

Majid Latifi, University of York, York, UK 

Maria Art Antonette Clariño, University of the Philippines Los Baños

Mathias Müller, University of Zurich

Monojit Choudhury, Mohamed bin Zayed University of Artificial Intelligence

Nathaniel Oco, De La Salle University (Philippines)

Rajdeep Sarkar, Yahoo

Rico Sennrich, University of Zurich

Saliha Muradoglu, The Australian National University

Sangjee Dondrub, Qinghai Normal University

Santanu Pal, WIPRO AI

Sardana Ivanova, University of Helsinki

Shantipriya Parida, Silo AI

Sunit Bhattacharya, Charles University

Surafel Melaku Lakew, Amazon AI

Wen Lai, Center for Information and Language Processing, LMU Munich

Valentin Malykh, Huawei Noah’s Ark Lab and Kazan Federal University


Additional Reviewer: 


Gaurav Negi, University of Galway