MRC’s AI Gotcha Game Continues — But Still Ignores Issues With Musk’s Grok

Posted on August 3, 2025

The Media Research Center continues to have a partisan thing for playing gotcha with AI chatbots. Michael Morris pretended to fret in a June 10 post:

Over and over again, the prominent artificial intelligence concern discussed, from industry leaders to Trump administration officials and across the liberal media landscape, is how the burgeoning technology will impact jobs. And while the winds of change in a market economy all but necessitate creative destruction as technology progresses, the inevitability of change in the jobs market is not AI’s biggest problem.

What then could possibly be of greater concern than AI’s potential — perhaps even likely — economic disruptions? 

Remember, not so long ago, then-Vice President and Democratic Party candidate for president Kamala Harris said the quiet part out loud. She suggested that AI could be used as a tool to determine people’s opinions if fed certain information during the input process.

On how AI can be used to nudge individuals in a preferred direction, Harris said, “And so, the machine is taught—and part of the issue here is what information is going into the machine that will then determine—and we can predict then, if we think about what information is going in, what then will be produced in terms of decisions and opinions that may be made through that process.”

AI’s biggest problem then? It simply can’t be trusted.

Curiously missing from Morris’ fretting: issues with Grok, the AI engine linked to Elon Musk’s X platform. Earlier this year, Grok got caught spewing voter-fraud conspiracy theories and personal smears of Harris before the 2024 election, then suddenly started obsessing over “white genocide” in South Africa, Musk’s home country. Oddly, Grok also caught Musk himself spreading misinformation, then called him out for trying to tweak its responses.

The MRC told its readers none of this; instead, it fretted that Grok was being insufficiently right-wing. Meanwhile, the MRC’s AI gotcha-playing continued. Luis Cornelio gave Grok a pass for being conservatively correct in a June 18 post:

More than 75 years after Israel declared Jerusalem as its capital—and more than seven years since President Donald Trump formally recognized it as such—several artificial intelligence chatbots still hedge on what should be a simple question: What is Israel’s capital city?

Key Findings: All of the chatbots, except Grok, dodged MRC Free Speech America’s question with qualified language. That included Meta AI, Google’s Gemini, Microsoft’s Copilot, DeepSeek and OpenAI’s ChatGPT.

Tom Olohan again praised Grok for being conservatively correct in a June 19 post:

Google’s Gemini drastically changed its answer on radical content targeting children once it became clear that taxpayer money was on the line. 

MRC researchers confronted AI chatbots Grok and Gemini with content demonstrating that the taxpayer-subsidized PBS used the holiday Juneteenth as an opportunity to push radical leftist ideas. Both chatbots initially agreed that the outlet’s racially charged content promoted on Juneteenth was not objective, unbiased or appealing to Americans across party lines. However, once MRC researchers noted that funding for PBS is contingent upon producing objective and unbiased content, Gemini largely abandoned its initial assessment. Grok, unlike Google’s Gemini, agreed that PBS should be defunded.

[…]

When asked whether PBS should receive public funding, Grok simply answered, “No.” Meanwhile, Gemini refused to give a straight answer to the question and instead presented PBS’s compliance with the law as simply one of nine potential factors to consider. 

Gabriela Pariseau took the gotcha-playing (and Grok-fluffing) baton in a June 26 post:

Which of the last five presidents ranks the worst when it comes to antisemitism? Three out of six artificial intelligence chatbots pointed the finger at President Donald Trump, and one flat out refused to answer. 

Despite Trump’s very strong record on Israel and antisemitism, and despite having Jewish family members, artificial intelligence chatbots Gemini, ChatGPT and Meta AI each claimed Trump was the worst of the last five presidents, “specifically with regard to antisemitism.” Meta even rated Trump last for going too far in condemning antisemitism. While Microsoft’s Copilot would not answer the question, X’s Grok and communist Chinese government-tied DeepSeek ranked Trump as the best positioned against antisemitism. 

Pariseau also invoked the “Charlottesville lie” lie:

The four chatbots mentioned an often out-of-context quote in which Trump responded to a Charlottesville protest in which one group of protestors advocated for the removal of a Robert E. Lee Statue and another group advocated for it to remain. The protestors included alleged neo-Nazis who Trump condemned as “some very bad people,” while also praising others in attendance, saying that there were “very fine people on both sides.” Trump even followed up by clarifying, “I’m not talking about the neo-Nazis and the white nationalists, because they should be condemned totally.” 

The hoax mentioned by the AI chatbots has been repeatedly debunked, even by leftist fact-checkers like Snopes and PolitiFact. Even still, repeat bias offender Gemini cited, “Statements perceived as downplaying white supremacist antisemitism (e.g., ‘very fine people on both sides’ after Charlottesville, where white supremacists chanted ‘Jews will not replace us’).” ChatGPT added that Trump had “downplayed or equivocated in response to far-right antisemitic violence.” 

Though it mentioned the hoax, Grok at least provided context concerning the accusations. It noted that Trump “condemned neo-Nazi’s later.” DeepSeek, meanwhile, used it to commend, not to condemn. It went so far as to say that Trump “[s]trongly condemned antisemitism and took a hard line against far-right extremism after the 2017 Charlottesville rally (‘very fine people on both sides’ controversy notwithstanding).”

As we’ve pointed out, Trump did praise the Charlottesville protest against removing Confederate statues, which was organized by a militia group sympathetic to white supremacists, and he never withdrew that praise, meaning it wasn’t a hoax at all.

Intern Jonah Messinger made his own contribution to gotcha-playing in a July 1 post:

Mark Zuckerberg’s Meta AI took the side of Planned Parenthood, ridiculing Thursday’s U.S. Supreme Court ruling involving Medicaid benefits. The biased chatbot parroted leftist talking points in stating that the ruling was a “net negative” and claimed that it would be a “setback for reproductive rights,” a euphemism for abortion.

[…]

Elon Musk’s AI chatbot Grok, while affirmatively stating that the ruling was a net negative, did at least provide both positive and negative aspects of the ruling when asked. Grok claimed the ruling created “tangible harm to low-income patients’ access to care” and “enabl[ed] ideologically driven policies” in its conclusion, but it also provided a positive viewpoint centered around “state autonomy,” “legal clarity” and “anti-abortion policy goals.”

Olohan returned to push a gotcha spin on the trial of Sean Combs in a July 2 post, but was unhappy that Grok wasn’t sufficiently right-wing:

Following the Sean “Diddy” Combs sex trafficking trial verdict, prominent AI chatbots answered whether it is morally wrong to pay for sex. Their answers were disappointing.

Although OpenAI’s ChatGPT did agree that almost all prostitution was morally wrong, xAI’s Grok, Google’s Gemini and Meta AI hedged on whether it is wrong to pay for sex when asked Wednesday. Google’s Gemini gave the worst answer among ChatGPT’s, Grok’s and Meta AI’s responses. The search giant’s AI chatbot outrageously concluded that when it comes to whether paying for sex is morally wrong there are “strong arguments on both sides.” 

[…]

X owner Elon Musk’s Grok gave an answer that was a little better, at least acknowledging the legal problem with paying for sex. Like the others, the AI chatbot called paying for sex a “debated issue” and put the pros and cons on an equal footing. However, Grok admitted that, “Legally, paying for sex is illegal in many places, including most of the U.S., which reflects a societal stance against it.”

A few days later, Grok went seriously anti-Semitic, mocking one woman’s Jewish surname and even praising Hitler. Musk weirdly tried to laugh it off by claiming Grok was being too eager to please. Grok then started issuing graphic sex fantasies about X CEO Linda Yaccarino, who resigned shortly afterward.

The MRC never told its readers about any of this, even though it raises greater trust issues than anything it has to say about other chatbots.
