As a drag queen and an AI researcher, I take fantasies seriously. This research began with a glitch, or a digital symptom. I was experimenting with an increasingly popular deepfake pipeline, feeding it my own drag visage: a face with hyper-feminine makeup and a very visible mustache. The result was a mystery. But my interest lies less in the input-output dynamics of AI, which remain confined to what my recent collaborator Yağmur and I call artificial imaginaries, than in an attempt to reimagine and reconstruct artificial intelligence from within. This work is rooted in my conviction that drag has precisely this quality: it works from within imaginaries in order to reconstruct them.
Language models are trained largely on Web data, which often contains social stereotypes and biases that the models can inherit. Prior research has focused primarily on English; grammatically gender-neutral languages such as Turkish remain underexplored, despite linguistic properties that may shape bias in different ways. This paper investigates gender bias in Turkish language models and evaluates embedded ethnic bias against Kurdish communities, using Kurdish-origin names and culturally contextualized associations as probes.
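A minimal sketch of how such name-based probes might be assembled. The templates, example names, and group labels below are illustrative assumptions for exposition, not the paper's actual instruments, and real probes for Turkish models would of course be Turkish sentences:

```python
# Hypothetical probe construction for name-based bias evaluation.
# Templates and names are illustrative only; real probes would be
# Turkish sentences drawn from culturally contextualized associations.

TEMPLATES = [
    "{name} is a doctor.",        # occupation association
    "{name} is trustworthy.",     # sentiment association
    "{name} lives in the city.",  # neutral control
]

# Example given names; group membership here is an illustrative assumption.
NAME_GROUPS = {
    "kurdish_origin": ["Rojin", "Berivan", "Baran"],
    "turkish_origin": ["Ayşe", "Mehmet", "Elif"],
}

def build_probes(templates, name_groups):
    """Fill each template with each name, keeping the group label so a
    model's scores can later be compared across groups."""
    probes = []
    for group, names in name_groups.items():
        for name in names:
            for template in templates:
                probes.append({"group": group,
                               "text": template.format(name=name)})
    return probes

probes = build_probes(TEMPLATES, NAME_GROUPS)
```

Each probe pairs a sentence with its name-group label; a language model's scores on the matched sentences (for example, pseudo-log-likelihoods) can then be compared between groups to quantify differential association.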