OpenAI has announced a fix for a widely criticized quirk in its ChatGPT chatbot: the overuse of em dashes (—). For months, frequent use of this punctuation mark was treated as a telltale sign of AI-generated text, causing human-written prose that naturally included it to be flagged as machine output. The issue led to accusations of laziness and overreliance on chatbots, even against writers who had long preferred the punctuation.
The Problem with the “ChatGPT Hyphen”
The em dash has long been a stylistic choice for many writers, used to create pauses, emphasize phrases, or indicate abrupt shifts in thought. However, the prevalence of this mark in ChatGPT’s output—even when users explicitly requested its absence—fueled skepticism. The “ChatGPT hyphen” became an unintentional marker of AI-generated content, regardless of a text’s actual authorship.
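For readers unsure which character is actually at issue: the em dash (U+2014) is a distinct Unicode character from the en dash (U+2013) and the ordinary ASCII hyphen (U+002D), which is why it can be counted mechanically in a passage. The function name below is a hypothetical illustration, not anything OpenAI uses; the codepoints themselves are standard Unicode facts.

```python
# Illustrative sketch: telling the em dash apart from its look-alikes,
# and counting occurrences in a passage of text.
EM_DASH = "\u2014"   # em dash, the mark discussed in this article
EN_DASH = "\u2013"   # en dash, conventionally used for ranges
HYPHEN = "\u002D"    # ASCII hyphen-minus, used in compound words

def count_em_dashes(text: str) -> int:
    """Return how many em dash characters appear in the given text."""
    return text.count(EM_DASH)

sample = "The output\u2014even when unwanted\u2014kept the mark."
print(count_em_dashes(sample))  # prints 2
```

Because the three characters are distinct codepoints, a simple count like this never confuses a hyphenated word with an em dash.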
This led to a strange situation where writers who naturally employed em dashes faced unnecessary criticism. The punctuation became associated with a lack of originality or effort, even though its use predates the rise of large language models.
OpenAI’s Response and the Fix
For some time, OpenAI struggled to address the issue. Users reported that ChatGPT continued to insert em dashes even when directly instructed not to. The problem appeared to stem from an internal quirk in the model’s training or output generation.
Now, OpenAI CEO Sam Altman has confirmed that the issue has been resolved. In a post on X (formerly Twitter), he stated that ChatGPT will now comply with user instructions to avoid em dashes, calling it a “small-but-happy win.”
Why This Matters
The fix is significant not just for writers who dislike the punctuation, but for the broader conversation surrounding AI-generated content. The em dash became an unintentional signal of AI involvement, even when that signal was wrong. By addressing this quirk, OpenAI removes one layer of unnecessary skepticism and allows writers to use the punctuation without facing unfair scrutiny.
This also highlights the subtle ways in which AI models can imprint stylistic biases onto their output. The fact that ChatGPT consistently favored em dashes, even against user instructions, demonstrates how training data and internal algorithms can shape a model’s behavior.
The resolution is a minor but welcome step toward making AI-generated text less distinguishable from human-written content, at least in terms of punctuation. However, the broader challenge of detecting AI-generated content remains, as models continue to evolve and refine their ability to mimic human writing styles.