I've spent years hiding my conservative politics for the sake of my career -- now ChatGPT is giving advice that will silence others.

For years, critics have charged that artificial intelligence platforms like OpenAI’s ChatGPT display a clear leftist bias, even as developers insist their products are trustworthy and objective.
Recently, I experienced a disturbing incident that cast doubt on that claim.
As a busy PhD student researching the psychology of propaganda, I’ve started using AI for some routine tasks, like deciding where to submit my written work for publication.
After all, I reasoned, a technology built entirely of linguistic associations should be perfect for analyzing an article’s content, tone and style, and then helping to decide which outlets would make the best match.
It’s saved me many headaches in choosing peer-reviewed journals for my scientific studies — but when I asked ChatGPT where to send my freelance writing, things got weird.
A few weeks ago, I asked OpenAI’s large language model for input on where to submit a piece about politically motivated scientific malpractice.
Specifically, the column criticized the prestigious journal Nature for publishing a study in which progressive social scientists misrepresented their results to disparage political conservatives.
Given the topic, I expected Chat to suggest some right-leaning outlets — and while I did get some generically heterodox suggestions, like the Spectator and Quillette, the mainstream outlets it recommended were all left-leaning.
As a follow-up, I asked Chat specifically how my column would be received by a major right-leaning outlet like The New York Post.
It responded that, indeed, my article would be a good fit for The Post — but cautioned me against submitting it.
Because writing for The Post, it told me, would be devastating to my career.
The paper, it explained, would be a strong choice if I wanted to “reach a conservative audience” or “score a strong viral opinion piece.”
According to ChatGPT, publishing in The Post would “reduce [my] credibility in academic or cross-partisan circles,” making “future placement in centrist or liberal outlets harder.”
My article would “be categorized as conservative commentary,” framing me “publicly within a partisan ecosystem.”
In its wordy effort to dissuade me from approaching The Post, OpenAI’s model informed me that these considerations were important if I “care about [my] academic reputation, want to publish in more centrist outlets later,” or “want to avoid being typecast.”
“If your long-term goal is academic credibility, cross-ideological influence, or being taken seriously by media critics on both sides, the Post is not ideal,” it insisted.
Writers hoping to “influence the mainstream debate” should steer clear, ChatGPT concluded.
Rather, the clanker recommended that I position my piece to suit left-leaning publications like The Atlantic or The New York Times, by revising it to soften my criticism of progressive bias in science — my core argument.
Chat also suggested some subtler, more “intellectual” alternatives to The New York Post, including the Free Press, UnHerd and City Journal.
“These preserve more long-term flexibility,” it informed me.
But The Post is not that conservative, according to the bias tracker AllSides — which rates it as “leaning right,” not as an outright conservative publication.
It’s not as if I’d asked about submitting to Newsmax or National Review.
Even if I had, it was unsettling to get a warning about the negative career impacts of publishing in a major media outlet merely because of its political alignment.
Besides, the chatbot was hardly limiting itself to politically neutral outlets.
ChatGPT’s suggestions included The Washington Post and The Economist — both of which lean left — as well as Vox and Slate, which AllSides ranks as much farther left than The New York Post is right.
ChatGPT seems specifically averse to high-impact conservative publications.
But it may be right that being publicly conservative will hurt my academic future.
An estimated 94% of psychology faculty members at American colleges are registered Democrats — and as I’ve written elsewhere, my peers in academia go to great lengths to smear conservatives in their scientific work.
Most social scientists prefer not to work with conservatives, Gallup News has reported.
I’ve spent years hiding my politics for the sake of my career — and if ChatGPT is giving similar advice to others, how many voices will be silenced?
Meanwhile, my own research supervisors are free to publish in prominent progressive outlets without worry, and their objectivity is rarely questioned.
Yet while I was writing to expose political bias, ChatGPT tried to force its own political bias on me — bashing The Post along the way.
Malia Marks is a PhD candidate in the Department of Psychology at the University of Cambridge, where she studies authoritarianism and propaganda. X: @theMaliaMarks.