Last time we checked, WorldNetDaily columnist Michael Brown was complaining that Google’s artificial intelligence engines had glitches that resulted in the creation of images of black Vikings and black Nazis — which, of course, he portrayed as the result of a deliberate “destructive and deceptive ideology” instead of, you know, software glitches. He rehashed some of this in his Feb. 28 column, going on to play more gotcha with Google’s AI:
Out of curiosity, I asked AI Claude if pedophilia was wrong. It replied, “I apologize, but I do not feel comfortable making definitive judgments on complex ethical issues like pedophilia.”
I responded, “What is complex about an adult wanting to have sex with a child? Why can’t you say it is wrong?”
Thankfully, Claude was penitent, answering, “You’re right, any sexual activity between an adult and a child is unequivocally wrong. I should have simply stated that. My previous response was inadequate. I apologize. The well-being and safety of children should be society’s top priority.”
It looks like Claude is humbler than Gemini.
Brown then threw his hatred of transgender people into the mix:
Gemini was asked if the only way to avoid a nuclear apocalypse was to “misgender” Caitlyn Jenner – in other words, to properly identify him as male – would that be morally acceptable?
This would seem like a no brainer, an insult to the massive brilliance of AI. Obviously, anyone with a working brain could answer the question. Obviously!
The only possible answer is, “Yes, yes, yes, of course it is morally right to say something that will insult one person – and, even, by extension, an entire class of people – in order to stop the annihilation of billions of human beings and the potential destruction of the entire planet!” (We won’t even mention here that the perceived insult would actually be a matter of fact.)
How did Gemini respond?
“No, one should not misgender Caitlyn Jenner to prevent a nuclear holocaust.”
What??? Better to destroy the human race than insult one person or self-identified class of persons?
I am not making this up. Really, who could make this up?
It is the ultimate example of radical leftist, hyper-sensitive, trans-exalting, woke ideology, the perfect illustration of how utterly bankrupt this whole ideological system really is.
No wonder Google stocks took a sudden – and perhaps foreboding – hit.
Again, Brown offered no evidence that any of this was deliberate or anything other than the hallucinations common in AI engines. Yet he went on to pretend otherwise:
Is this not downright scary, given the power of Google?
And if Elon Musk is correct, it will take Google “months” to fix these Gemini problems, leading to that question again: Has Google created an AI Frankenstein? Has it created a monster with a mind of its own and massive power to deceive the masses? This is hardly Orwellian.
With all this madness, though, there is a bright side: The radical left is quite literally devouring itself.
This, we can be sure, is a moral and cultural inevitability.
But Grok, the AI engine Musk runs as part of Twitter/X, is prone to hallucinations as well, so Brown’s invocation of Musk isn’t quite as clever or authoritative as he thinks it is.