
AI Should Give You Trust Issues

  • Writer: Michael Trotter-Lawson
  • 2 hours ago
  • 7 min read

This is a horror story. This is real life.


In 1984, James Cameron’s classic sci-fi action film The Terminator was released. In the movie, a cybernetic assassin played by Arnold Schwarzenegger is sent back in time to kill Sarah Connor, the woman who would eventually give birth to John Connor, leader of the human resistance in a post-apocalyptic future dominated by the hostile artificial intelligence Skynet. The Terminator is just one of a collection of stories about how AI could take over the world and threaten humanity. While Skynet, HAL 9000, and other hostile AI programs make for very compelling fiction, the true threat of AI is much more subtle.

 


“I'm afraid I can't do that”


Artificial intelligence today is not real artificial intelligence. What we refer to as AI has no actual “intelligence”. These programs cannot think or act without human input, and they require millions of lines of human writing to approximate “human” writing. AI image and video generation work the same way; they depend on a vast quantity of training data that has been manually labeled or described so that the AI programs can extrapolate new photos and videos. Despite employees at Anthropic talking about their program Claude showing evidence of “anxiety”, the truth is that Claude, ChatGPT, Gemini, and all the rest can only imitate concepts like anxiety.


This becomes even more evident when you look at the limitations and failures of generative AI. If you prompt ChatGPT or Google Gemini to generate an image of a full glass of wine, it cannot do so. As shown below, it can only create images showing a glass of wine filled to a normal amount. 



Why? Because AI does not know or understand how wine or wine glasses work. It cannot comprehend the physics of how liquids actually interact with objects in the real world. When you ask AI for a picture of a glass of wine, the program consults its training data, specifically the data people have labeled “images of a glass of wine”, and returns its best approximation of a “full” glass of wine. The AI does not know what “full” means; it does not know what anything means. So, since wine glasses are nearly always filled halfway, especially when they’re about to be photographed, AI has only ever “seen” images of a half-full (or half-empty for my fellow pessimists) glass of wine, and cannot imagine what a full glass of wine would even look like.


That’s on the image side of things, but it also alludes to another major AI shortcoming: hallucination. Notice how Gemini returned those incorrect images instead of an error message like “sorry, I don’t understand” or “my database does not include full glasses of wine”? That’s another core limitation of AI; it cannot say “I don’t know”.


That may be an overgeneralization, since these programs can occasionally admit to lacking information (usually when it comes to current events), but since AI is basically a fancy auto-complete program, it does not know what information in its database is true and what’s false. When asked about molecular biology, it’s just as likely to cite Reddit as a Harvard dissertation. Fortunately, these programs do typically provide their sources now, so you can do further research and verify those claims independently (though many people will just take the AI at face value).


You may be wondering, “isn’t this supposed to be about the dangers of AI? Why is he just writing about how stupid and incompetent it actually is?” Fair point, but my intention is to show that we should not be worried about hostile artificial superintelligence (at least not anytime soon) as the movies depict. Rather, we need to worry about the combination of modern AI with hostile human intelligence.

 


“…the final battle would not be fought in the future. It would be fought here, in our present.”


I want to go back to that picture of a glass of wine.



It was not what I asked for, but it is still a remarkable picture. Without the watermark in the bottom right corner, I would not be able to point out anything that gives this away as an AI-generated picture. In fact, if you look closely at the reflection in the wine, you can even see a photographer supposedly taking the picture! What is also impressive is how the second image perfectly matches the background of the first image. One of the earlier tells that something was AI-generated was that backgrounds would be inconsistent between different takes. It is absolutely stunning to see how far AI image generation has come. It is also terrifying.


Since consistency in image generation has improved so much, social media has become awash with accounts that are completely AI-generated. Bots on social media have existed nearly as long as social media has, but with AI and very light human input, these modern bots are considerably more convincing. These bots can now respond accurately to the content they’re tied to, whether that’s their own account description, a post they’re commenting on, or the comments section of a YouTube video. If managed well, the only way to tell if these accounts are fake is by investigating the age, quality, and consistency of the content.


Sadly, there is no simple “is this AI?” button to debunk these online impostors. However, there are methods to find suspicious content and verify real accounts. Let’s break it down:


1. Account Age: If the account was started before 2023, it’s not AI. Maybe they’ve pivoted to AI-generated content more recently, but the account was at least started before these AI programs were convincing enough.

2. Unrealistic Posting Frequency: How often are they posting? Do they have hundreds or thousands of posts despite only existing for a few weeks? If their schedule seems impossible for a human to keep, it’s probably AI.

3. Famous Friends: Do they have lots of pictures of themselves with celebrities? Obviously, lots of people meet celebrities and are excited to get pictures with them, but if they have an unrealistic number of pictures with famous people, it could be AI.

4. Normal Friends: Do they have pictures of themselves with other, normal people? If so, are they tagged in the post? Do they have their own account? If they do, are they following this suspicious account? Consider your own relationship with friends online and see if it adds up.

5. Classic AI Slop: As AI improves, this is going to become harder and harder to spot; that’s why this is at the end of the list. Look for text that’s corrupted or nonsense, especially if it’s not the subject of the image. Patterns are also difficult for AI, so look out for things like plaid or other kinds of line art that doesn’t quite line up. AI has gotten much better at hands and fingers, but occasionally, it’ll still trip up on those as well.
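For readers who like to see a heuristic spelled out, the first few checks above can be sketched as code. This is a minimal, hypothetical sketch: the `Account` fields and the thresholds are illustrative assumptions, not any real social media API, and real judgment calls (like check 5) can’t be automated this simply.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical account record -- field names are illustrative, not a real API.
@dataclass
class Account:
    created: date              # when the account was started
    post_count: int            # total posts to date
    celebrity_photos: int      # photos posed with famous people
    tagged_friend_photos: int  # photos with ordinary friends who tag back

def suspicion_flags(acct: Account, today: date) -> list[str]:
    """Return human-readable flags drawn from the checklist above."""
    flags = []
    # Check 1 -- account age: accounts created before 2023 predate
    # convincing AI-generated content.
    if acct.created.year >= 2023:
        flags.append("created in the AI era")
    # Check 2 -- posting frequency: hundreds of posts over a few weeks
    # is a red flag. The 20-posts-per-day threshold is an arbitrary example.
    age_days = max((today - acct.created).days, 1)
    if acct.post_count / age_days > 20:
        flags.append("unrealistic posting frequency")
    # Checks 3 and 4 -- many celebrity photos but no mutually tagged
    # ordinary friends is suspicious.
    if acct.celebrity_photos > 5 and acct.tagged_friend_photos == 0:
        flags.append("celebrity photos without normal friends")
    return flags
```

For example, an account created a month ago with 900 posts and a dozen celebrity photos would trip all three flags, while a decade-old account with a normal friend circle would trip none. No single flag is proof; the point is that the checks stack.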



Of course, AI no longer just does images; we have to worry about videos and deepfakes too. Fortunately, it is typically easier to spot AI-generated videos, but that is also likely to get harder over time. As far as things to look for regarding AI videos specifically:


1. Physics: Watch out for anything that completely defies physics and logic. Sometimes weird things do happen in real life, but if it seems overly suspicious, be overly suspicious yourself.

2. Watermarks: Most AI video generators currently imprint some form of watermark on the video. Sora, one of the most popular options today, has a Sora watermark that constantly moves around the frame. Many of these AI videos will therefore have strange artifacts left over from where the user attempted to remove the watermark in post.

3. Voice of Reason: AI-generated voices in these clips are typically flat and unnatural. Think about what they’re saying, and whether a real person would sound like that.

4. Prompting Suspicion: All these videos were prompted by a person, usually with just a few lines of text. For that reason, the subject is always going to be perfectly in shot, background characters and environments are going to look very uniform, and if there’s text in the video, there’s a good chance text from the prompt made it into the final product.


You may be thinking, “well, I’m not much of a social media person” or “I’m not going to follow real people on Instagram, let alone fake ones” or “if I see or like a funny AI video online, who cares?” The issue is that our modern, internet-driven world is far too interconnected to dismiss the rise of AI content. News stories often arise from social media posts, and politicians are becoming more comfortable using, posting, and reposting AI-generated content of themselves and their opponents. The social media companies have no incentive to regulate or prevent content that serves to drive more engagement and make more money, and the politicians who see this as an effective way to campaign or spread self-serving misinformation are likewise unmotivated to effect change. So, the only effective defense here is widespread public education before it’s too late. AI is currently the most effective weapon in the battle against truth itself.

 


“All those moments will be lost in time... like tears in rain.”


This is not the AI apocalypse we were taught to fear. In a way, this is much worse, because it’s real. People with malicious intentions are using AI to deceive and manipulate, and if we don’t wise up to their tricks, we will quickly find ourselves in a post-apocalyptic future of our own making. The good news is that it’s not too late. People are smart and resourceful, and the average person is better at spotting hogwash than you might think, especially if they know what to look for. The trickiest part is learning to accept truths that you would rather not believe, while disbelieving falsehoods that affirm your personal biases.


Just remember that AI is out there now. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop... ever, until we, collectively, stand up to it.
