ChatGPT is sufficiently well known to warrant critical scrutiny, and for this project we wrote a paper, developed a website where we track open-source instruction-tuned large language models, designed a poster for presentation at the ACM Conference on Conversational User Interfaces (CUI’23), and, yes, even designed a logo that combines a key image of the open-source movement with a variation on ChatGPT’s corporate logo.
Seen from Earth, the movements of celestial bodies display near-intractable complexity. When taking not a single vantage point but multiple (here, Sun and Earth), suddenly the picture changes, and new forms of order become visible (Sousanis, 2015). Likewise, key concerns of cognitive science may be illuminated by a change of perspective that locates cognition not in isolated but in interacting minds.
L: The vowel space with colour associations by a synaesthete. R: The same vowels displayed according to tongue position when produced. Visualization: Christine Cuskley & Mark Dingemanse. For an interactive version of this visual, see here.
Graphic I made for a talk about our paper on the roles of iconicity in word learning. As part of this paper we briefly review the role of metaphors in theories about language and development.
Shooing words (words that people use to chase away chickens) turn out to be highly similar across unrelated languages. These illustrations by Josje van Koppen accompanied a write-up about my serendipitous finding in the popular science magazine Onze Taal.
The actual table from my paper looks a lot less exciting, but it does contain additional information about language families and about words for ‘chicken’ in the same set of languages. The basic conclusion is that words for ‘shoo’, but not ‘chicken’, show strong convergence towards sibilant sounds in 17 languages from 11 unrelated language families.
Illustrations from: Renckens, Erica. “‘Ksst!’ Het Lokken En Wegjagen van Dieren.” Onze Taal, 2020.
Not strictly a scientific visualization, and not by me. Still included here because it is a compelling illustration of the central point of this essay on brain-to-brain interfaces, which deals with naïve ideas about a cyberpunk future in which we’d be connected by wires instead of words. (Source of the image is Technology Review, which got it from Shutterstock.)
Ideophones have rich imagistic meanings that can be hard to describe. In explaining the ideophone minimini, four speakers of Siwu independently use gestures that are both similar (in depicting a spherical shape) and different (in size, handshape, and method of representation). Collectively, the gestures illustrate an elusive aspect of the ideophone’s meaning while also showing that its linguistic form as spoken word is more conventionalized than the gestures it comes with.
Illustration made by Frank Landsbergen for a piece on universal words I wrote for a popular science book. It covers three types of words that, each for their own reason, come out similarly across languages. The three types are: (i) interactional tools (huh? for repair, oh! for a news receipt); (ii) expressive interjections (au for ‘ouch’); and (iii) onomatopoeia (bam ‘BAM’).
Simplifying somewhat, interactional tools are similar across languages because the ecology they live in (the rapid-fire turn-taking of conversation) provides the same selective pressures across languages; a case of convergent cultural evolution. Expressive interjections may go back to ancestral vocalizations also found in our close evolutionary relatives. And onomatopoeia come out similarly to the extent that they imitate the same kinds of sounds.
On the ground is a plate of metal on which two small amounts of gunpowder have been laid to dry in the sun; beside it stands the speaker, explaining why one needs to be careful when igniting the gunpowder to test its quality: it may flare up “SHÛ, SHÛ”, a vocal depiction that is produced in precise synchrony with the two hands moving symmetrically in a quick upward motion. (The right hand holds an object.)