ElPaCo team members Andreas Liesenfeld and Mark Dingemanse will travel to Siegen for the Technolinguistics in Practice conference, May 24–26. They will give a talk titled "How human interaction can inspire convivial language technology". Abstract:
As interactive language technologies increasingly become part of our everyday lives, one of the biggest frustrations remains their rigidity: they are designed to avoid turbulence by funneling people into pre-set dialogue flows, they cannot gracefully repair breakdowns, and they are devoid of social accountability. This has people adapting to tools rather than the other way around. Here we critically assess the notions of language and technology that lie at the base of current developments, and propose a redirection inspired by Ivan Illich’s (1973) notion of convivial tools. A general challenge in this area is that while critical and radical deconstruction is necessary, the search is on for constructive ways of applying cumulative insights from decades of empirical work on human interaction. In this contribution, we aim to address this challenge in two ways.
First, we expose the narrow linguistic and cultural roots of much of today’s NLP (Joshi et al. 2020) and contrast it with the wealth of data and knowledge available in cross-linguistic corpora of everyday informal conversation (Dingemanse and Liesenfeld 2022). Looking at everyday language use can bring to light unseen diversity in semiotic practices, and can uncover ways in which interactional resources can serve human empowerment and inspire the design of language technology (Liesenfeld and Buschmeier 2023). Second, we build on new empirical work as well as prior critical and ethnomethodological work on social interaction (McIlvenny 1993; Button et al. 1995) to develop constructive ways of evaluating artificial conversational agents. Current evaluation methods in NLP/AI focus on the putative “humanness” and “fluidity” of “language generation” (Finch and Choi 2020) — all of them unexamined notions in need of technolinguistic scrutiny. Using insights from conversation analysis, we instead make a case for an action-oriented approach to evaluation (cf. Housley, Albert, and Stokoe 2019). We report on a set of heuristics for human evaluation that can help to systematically probe the interactive capabilities of dialogue systems. Such ‘counterfoil research’ (Illich 1973) is of critical importance for arriving at tools that better support human flourishing. This work provides a stepping stone for engineers and conversation designers who seek to ground their work in observable orientations to social action, rather than in automated metrics and leaderboards divorced from interactional practices.
Combining critical and practical perspectives will contribute to respecifying taken-for-granted notions of language and technology, deepen our understanding of the irreducibly social and interactive aspects of human tool use, and allow some of these insights to be translated into technology design and engineering practices.