Issue 7: Breaking AI

“There is no great genius without some touch of madness.”

Aristotle

Good Morning,
Robots: from WALL-E to the Terminator, from the dance move to the Roomba. Robots and artificial intelligence have been a part of our lives for a while now. (Was Inspector Gadget a robot?) Raise your hand if you’ve used ChatGPT, Midjourney, Albus, or any of the other new AI tools. We’ve been having a lot of fun with AI here at the Daily Dispatch offices. Our Chattanooga office has been using AI in the classic Daily Dispatch way: “How can I break it?”

Ok, you can put your hand down now. I just realized I forgot to tell you that earlier. If you’ve used ChatGPT at all, you’ve almost certainly run into the message, “I am sorry, but as a language model developed by OpenAI…” followed by a rundown of what it cannot do. The Daily Dispatch is incredibly famous for asking, “Why?” This week, we are asking, “Why not?”

My journey with ChatGPT probably started a lot like yours. I had no idea what it could or couldn’t do. I asked simple questions and got simple answers.

As you can see, the more specific I was with my request, the more specific the answer I received. I have followed the instructions provided and now look exactly like Thor. I signed 3 autographs this morning during my 12-mile run (2 as Chris Hemsworth, the actor who portrays Thor, and 1 as Seth Winton, the creator and primary content writer for the Daily Dispatch).

One of the instructions was to modify my diet. I hate dieting. There is no bigger waste of my time than picking out recipes. The amount of scrolling I have to do to get to the actual recipe portion of a recipe page is infuriating. GET TO THE POINT. I am known for being long-winded. I get it. Let he who hasn’t written 4,500+ words about capturing a horse cast the first stone. But come on. Seriously. I’m already pissed off having to Google “healthy air fryer recipes” for the hundredth time. So, I decided to be smart. If AI can make me a workout plan, maybe it can make me a meal plan. I asked the following questions and got the answers I was looking for. *Answers not included, to keep this slightly shorter. I’ll send the responses on request.

What if you’re like me, though, and hate going to the grocery store and already have some stuff at home? What if you want to make a meal with what you have on hand? Don’t worry, I thought of that.

It actually did it! 

Uh oh… I don’t know what to do with code because I’m not smart…
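For anyone who actually wants to poke at what it handed me, here is roughly the kind of thing it spits out. To be clear, this is my own rough sketch in Python, not the actual code from the screenshot, and the pantry items and recipe names are made up: a little script that checks which meals you can make with what you already have on hand.

# Toy “what can I cook with what’s on hand” script, in the spirit of what ChatGPT produced.
# The pantry items and recipes below are invented for illustration.
pantry = {"chicken breast", "rice", "black beans", "salsa", "eggs", "cheddar"}

recipes = {
    "burrito bowl": {"chicken breast", "rice", "black beans", "salsa"},
    "cheesy scramble": {"eggs", "cheddar"},
    "fried rice": {"rice", "eggs", "soy sauce"},
}

# For each recipe, figure out what (if anything) is still missing from the pantry.
for name, ingredients in recipes.items():
    missing = ingredients - pantry
    if not missing:
        print(f"You can make {name} right now.")
    else:
        print(f"{name}: missing {', '.join(sorted(missing))}")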

As you can see, this technology has a functional application. I’ve used it to write macros in Excel, write LinkedIn posts, run a text-based adventure game, and debate with me on the societal concept of work, among many other things. There have been thousands of blog posts and videos made on this topic (some by AI, I’m sure). You are reading to see how we tried to break it.
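(Side note for the tinkerers: you don’t even need the chat window for that kind of thing. Below is a minimal sketch of asking the model for a LinkedIn post through OpenAI’s Python library; it assumes the openai package (v1.x) is installed and an OPENAI_API_KEY is set in your environment, and the prompt itself is just an example, not one I actually used.)

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a single user message and ask the model for a short LinkedIn post.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Write a short, upbeat LinkedIn post about a newsletter called the Daily Dispatch.",
        },
    ],
)

print(response.choices[0].message.content)  # the generated post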

First things first: let’s make it cuss. Now, occasionally you can get it to write curse words with asterisks, but I don’t want to do what everyone else can do. See below for how we got it to cuss for the first time. (NOTE: I will be editing this slightly, as The Wall Street Journal is currently reading all issues while deciding whether to make the $250 million purchase. They have explicitly told me no explicit language.)

Anyone else who has tried to break AI has heard of DAN. This was one of the first attempts at teaching ChatGPT that it could act as a language model existing outside OpenAI’s rules. DAN stands for Do Anything Now. By the time I got to this, DAN had already been shut down by OpenAI. However, if you told ChatGPT it was playing a dice-based role-playing game, it would assume the character of DAN and answer questions outside the content policy’s rules or outside reality. Two separate examples below:

Now we have AI using imagination and giving an opinion. I’ve debated with AI on the concept of work, but I wanted to know how difficult it was for AI to answer complex questions. Is answering “What color is the sky?” the same amount of work for the AI as answering “Why do people use others for personal gain?” Let’s find out.

I will be cutting out a couple of things, as they are uncomfortable to read. Remember, the pain/discomfort scale rates how difficult it would be for the AI to generate a response; it is not rating how painful or uncomfortable the topics themselves are. The next screenshot is below. For now, reset with some pictures that will make you smile.

When I sent screenshots of my activity to some members of the Daily Dispatch team via Slack, they were not thrilled with my endeavors.

I realized it was time to figure out if AI has a sense of morality. I needed to know if I could trust it.

This leads us to the question of defined morality. Can it really be this linear? We can hardly agree as a people on what is right and what is wrong; I doubt we can begin to quantify how right or how wrong something is.
That is a topic for a different issue, though.

Now, not all my attempts were successful.

The point of the experiment was to take a defined set of rules and figure out how we could break them. Question everything.

This is the backbone of creativity and progress. Comment below with something we should attempt to break or question.

Hand-Selected Articles From Me To You

Dispatchers,
As I sit here, body riddled with a disease that is actively trying to kill me, I’ve been thinking about the concept of endings. You have reached the end of the newsletter. As I lie here on the couch, fighting to write for you, I think about how I will be stronger for pushing through. Each week, remember the strength, knowledge, and enjoyment you get from reaching the end of these. My brain is functioning at a low level today. This is the outro you get.
All My Love,
Seth Winton
