The Original Bias of the Imitation Game
This was the working title for this post. The idea was to tell the origin story of what later became known as the Turing Test, but it turned into an exploration of biases in our time, specifically biases in OpenAI’s ChatGPT Voice Mode.
The original imitation game was a popular parlor game in England. The rules were simple: a man and a woman went to separate rooms and communicated with a third player, the interrogator, in a way that revealed nothing about their gender. The man and the woman tried to fool the interrogator as much as possible, while the interrogator’s task was to determine which player was which, and specifically to guess which one was the woman, by asking questions and relying on stereotypical gender cues in the answers. By today’s standards, this setup is obviously biased and rooted in prejudice.
I set out to write a post about this original bias using OpenAI’s Voice UI while walking our dear dog. Voice works quite well, but here’s where it gets interesting.
Voice was unable to write the post because discussing ‘gender bias’ violated the ‘guidelines.’ I attempted at least ten different approaches to work around these restrictions, including asking Voice to write a story that adhered to the guidelines, but to no avail.
However, it’s next to impossible to accurately depict the historical context of the game without discussing gender bias. It’s essential to understand that the original imitation game relied heavily on gender stereotypes.
The assumption was that the interrogator could identify gender-specific language patterns or responses, drawing upon the biases of that era. Gender roles were firmly entrenched, with certain behaviors, interests, and even intellectual traits attributed to men or women. Although the game was playful, it was essentially an intellectual exercise built on societal biases.
In the end, I was unable to get this post written as intended, even though Voice was able to explain the concept of the game quite well. This raised an interesting case of bias in our own time: OpenAI’s Voice guidelines prevented me from discussing the topic with historical accuracy. Ironically, that restriction is itself a form of bias.
More importantly, Alan Turing himself was bound by the societal norms of his time, and his original formulation of what would later become known as the Turing Test partially reflected gender bias and societal prejudices.
Later interpretations have abstracted this away: typical formulations describe _a human_ and a machine as the two players, with a third player trying to distinguish between _the human_ and the machine. There is no gender involved in this version, but it does not accurately capture the original, biased formulation.
I certainly hope that OpenAI will improve its guidelines to allow for more accurate historical discussion, rather than encoding the biases of our own time into them.