ChatGPT and the future (present) we face (2023)

Until ChatGPT stops being the top AI news, I guess we're stuck talking about it... Just kidding; I'll make sure to weave in other topics so we don't burn out.

There is still much to be said about ChatGPT's immediate and long-term impact. I've written about what ChatGPT is and how to make the most of it, about the challenge of identifying its outputs, and about the threat it poses to Google and traditional search engines, but I have yet to address how the risks and harms some have foreseen are already taking shape in the real world.

A month after its release, we can all agree that ChatGPT has gone mainstream and taken AI as a field with it. As an anecdote, a friend who doesn't know anything about AI brought up ChatGPT before I had told him about it. That was a first for me, and I'm not the only one.

For this reason, there is an urgent need to talk about the consequences of AI: ChatGPT has reached people much faster than any resources on how to use it well, or how not to use it at all. The number of people using AI tools today is greater than ever (not just ChatGPT; Midjourney has 8 million members on its Discord server), which means more people than ever will be misusing them.

In contrast to my more speculative essays, this one is not about things that could happen but about things that are happening. I'll focus on ChatGPT because the world is talking about it, but most of the following applies to other types of generative AI with the appropriate adjustments.

The Algorithmic Bridge is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

ChatGPT's harms are no longer hypothetical

Last Friday, January 6th, the security research group Check Point Research (CPR) published a chilling article entitled “OpwnAI: Cybercriminals use ChatGPT.” While not surprising, it happened sooner than I expected.

CPR had previously studied how malicious hackers, scammers, and cybercriminals could exploit ChatGPT. They demonstrated how the chatbot “can generate a full flow of infections, from spear phishing to running a reverse shell” and how it can generate scripts that run dynamically and adapt to the environment.

Despite OpenAI's guardrails, which appeared as an orange alert when CPR pushed ChatGPT to act against the Acceptable Use Policy, the research group had no problem generating a simple phishing email. “Complicated attack processes can also be automated using the LLM APIs to generate other malicious artifacts,” they concluded.

CPR researchers weren't satisfied with hypothetical evidence that ChatGPT could do this (one of the most common criticisms skeptics face is that the potential risks they warn about never materialize into real harm). They wanted to find real-world cases of people abusing it in similar ways. And they found them.

CPR analyzed “several major underground hacker communities” and found at least three specific examples of cybercriminals using ChatGPT in ways that not only violate the Terms of Service but could become directly and measurably harmful.

First, an info stealer. In a thread titled "ChatGPT - Malware Benefits," one user shared experiments in which he "recreated many malware strains." As CPR noted, the OP's other posts revealed that "this person [aims] to show less technically skilled cybercriminals how to use ChatGPT for malicious purposes".

Second, an encryption tool. A user named "USDoD" posted a Python script with "encryption and decryption capabilities". CPR concluded that "the script can easily be modified to encrypt another person's computer completely without user interaction". While USDoD has "limited technical skills", he is "involved in a wide variety of illegal activities".

The final example is fraudulent activity. The title of the post is pretty descriptive: “Abusing ChatGPT to Script Dark Web Marketplaces.” CPR writes: “The cybercriminal released code that uses a third-party API to get up-to-date cryptocurrency prices as part of the dark web market payment system.”

It's clear that ChatGPT, which is free and very intuitive to use, is a magnet for cybercriminals, including those with low technical skills. As Sergey Shykevich, Threat Intelligence Group Manager at Check Point, explains:

“Just as ChatGPT can be used for good purposes to help developers write code, it can also be used for malicious purposes. While the tools we analyze in this report are fairly basic, it's only a matter of time before more sophisticated threat actors improve the way they use AI-based tools.”

That ChatGPT drives internet security problems is not a hypothesis pushed by fearmongers, but a reality that is hard to deny. For those who argue this was all possible before ChatGPT, two things: first, ChatGPT bridges the technical gap; second, scale matters here, since ChatGPT can write scripts automatically in seconds.

OpenAI shouldn't have released ChatGPT so early

Cybersecurity, disinformation, plagiarism… Many people have repeatedly warned about the problems ChatGPT-like AIs can cause. Now there are many malicious users.

Someone could still try to argue in favor of ChatGPT. Maybe it isn't that problematic (the advantages may outweigh the disadvantages), but perhaps it is. And a “maybe” should be enough to make us think twice. OpenAI lowered its guard after GPT-2 was deemed “harmless” (they saw “no clear indications of abuse so far”) and never raised it again.

I agree with Scott Alexander that “maybe it's a bad thing that the world's leading AI companies can't control their AIs.” Maybe reinforcement learning from human feedback isn't good enough. Maybe companies should find better ways to exercise control over their models if they want to unleash them in the wild. Maybe GPT-2 wasn't that dangerous, but a few iterations later we have something to worry about. And if not, we will in a few more.

I'm not saying OpenAI hasn't tried; they have (they've even been criticized for being too conservative). What I am proposing is that if we carry this mindset of “I tried to get it right, so now I have the green light to unblock my AI” into the short-term future, we will encounter more and more downsides that no advantage can make up for.

A question has been bugging me for a few weeks: if OpenAI is so concerned about getting things right, why didn't they set up the watermarking scheme to identify ChatGPT's outputs before releasing the model to the public? Scott Aaronson is still trying to make it work, a month after the model went completely viral.

I don't think a watermark would have solved the fundamental problems that this technology poses, but it would have helped by buying time. Time for people to adapt, for scientists to find solutions to the most pressing problems, and for regulators to make relevant laws.
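To give a sense of how output watermarking can work, here's a toy sketch. This is not OpenAI's actual scheme (which Aaronson hasn't published); the vocabulary, thresholds, and function names are my own illustration. The idea: the sampler prefers tokens from a pseudorandom “green list” seeded by the preceding token, and a detector flags text whose green-token fraction is statistically too high to be chance.

```python
import hashlib
import math

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly assign ~half the vocabulary to a "green list"
    # keyed on the previous token. Unwatermarked text lands on green
    # tokens about 50% of the time; the watermarking sampler far more.
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] % 2 == 0

def green_fraction(tokens: list) -> float:
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)

def z_score(tokens: list) -> float:
    # Under the null hypothesis (human text), green hits follow
    # Binomial(n, 0.5), so this z-score is ~N(0, 1) for human text
    # and very large for watermarked text.
    n = len(tokens) - 1
    return (green_fraction(tokens) - 0.5) * math.sqrt(n) / 0.5

def watermark_choice(prev_token: str, candidates: list) -> str:
    # Sampler sketch: among plausible next tokens, prefer a green one.
    for c in candidates:
        if is_green(prev_token, c):
            return c
    return candidates[0]
```

A detector only needs the secret hashing key, not the model itself, which is what would have made this deployable before launch.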

GPT detectors are the last (healthy) frontier

Due to OpenAI's inaction, we are left with timid attempts to build GPT detectors that could offer people a means to avoid AI disinformation, scams, or phishing attacks. Some have tried to repurpose a three-year-old GPT-2 detector for ChatGPT, though it doesn't work. Others, like Edward Tian, a CS and journalism senior at Princeton University, have built systems specifically for ChatGPT from the ground up.

So far, more than 10,000 people have tested GPTZero, including me (here is the demo). Tian is building a product on top of it; more than 3,000 teachers have already signed up. I'll admit I only managed to fool it once (and only because ChatGPT misspelled a word), but I didn't try too hard either.

The detector is quite simple: it evaluates the “perplexity” and “burstiness” of a block of text. Perplexity measures how much a sentence “surprises” the detector (i.e., how far the word choices deviate from what a language model would predict), and burstiness measures how much perplexity varies from sentence to sentence. Simply put, GPTZero capitalizes on the fact that humans write far more unevenly than AIs, which becomes apparent once you read a page of AI-generated text. It is so boring...
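To make the two signals concrete, here is a minimal sketch. This is my own toy code, not GPTZero's implementation (which isn't public), and it uses a unigram frequency model as a stand-in for a real language model:

```python
import math
from collections import Counter

def unigram_model(corpus: str) -> dict:
    # Toy stand-in for a language model: word frequencies in a corpus.
    words = corpus.lower().split()
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

def perplexity(sentence: str, probs: dict, floor: float = 1e-4) -> float:
    # exp(average negative log-probability per word): high values mean
    # the model finds the sentence "surprising" (more human-like).
    words = sentence.lower().split()
    nll = sum(-math.log(probs.get(w, floor)) for w in words)
    return math.exp(nll / max(1, len(words)))

def burstiness(sentences: list, probs: dict) -> float:
    # Standard deviation of per-sentence perplexity: humans alternate
    # flat and surprising sentences, so their burstiness is higher.
    ppls = [perplexity(s, probs) for s in sentences]
    mean = sum(ppls) / len(ppls)
    return math.sqrt(sum((p - mean) ** 2 for p in ppls) / len(ppls))
```

A detector along these lines flags text whose perplexity and burstiness both fall below thresholds calibrated on human writing.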

With a <2% false positive rate, GPTZero is the best detector out there. Tian is proud: “Humans deserve to know when the writing is not human,” he told the Daily Beast. I agree; even if ChatGPT doesn't plagiarize, it's morally wrong for people to claim authorship of something ChatGPT wrote.

But I know it's not infallible. A few changes to the output (e.g., misspelling a word or adding a word of your own) can be enough to trick the system. Asking ChatGPT to avoid repeating words also works, as Yennie Jun shows here. And finally, GPTZero could soon become obsolete, with new language models appearing every few weeks: Anthropic unofficially announced Claude, which, as Riley Goodside's analyses suggest, is better than ChatGPT.

And GPT-4 is around the corner.

This is what some people like to call a game of cat and mouse, and the mouse is always one step ahead.

Blocking ChatGPT: A bad solution

If the detectors worked well, many people would be upset; most want to use ChatGPT without barriers. Students, for example, couldn't cheat on written essays if an AI-savvy professor knew of a detector's existence (it has already happened). The fact that more than 3,000 teachers have signed up for Tian's upcoming product says it all.

However, since detectors are not reliable enough, those who do not want to face the uncertainty of having to guess whether or not a written deliverable is the product of ChatGPT have chosen the most conservative solution: banning ChatGPT.

The Guardian reported on Friday that “New York schools have banned ChatGPT.” Jenna Lyle, a department spokesperson, cited “concerns about the negative impact on student learning and concerns about the safety and accuracy of the content” as reasons for the decision. While I understand the teachers' point of view, I don't think this is a wise approach; it may be the easier choice, but it's not the right one.

Stability.ai's David Ha tweeted this when the news broke:

I acknowledge (and have before) the problems schools face (e.g., widespread undetectable plagiarism), but I have to agree with Ha.

Here's the dilemma: this technology isn't going away. It's part of the future, probably a big part, and it's super important that students (and you, me, and everyone else) know about it. Banning ChatGPT from schools is not a solution. As Ha's tweet suggests, banning it could be more damaging than allowing it.

However, students using it to cheat on exams or written essays would waste their teachers' time and effort and hamper their own development without realizing it. As Lyle says, ChatGPT can prevent students from learning “critical thinking and problem-solving skills.”

What is the solution I (and many others) envision? The education system needs to adapt. While this is more difficult, it is the better solution. Considering how broken the school system is, it can definitely be a win-win situation for both students and teachers. Of course, it goes without saying that until then, it's better if teachers have access to a reliable detector - but let's not use that as an excuse to avoid adapting education to these changing times.

The education system has a lot of room for improvement. If it hasn't changed in so many years, it's because there wasn't enough incentive to do so. ChatGPT gives us a reason to rethink education.

People have suggested ad hoc solutions, like asking students to cite sources (ChatGPT makes them up), write essays only in person, or evaluate the process rather than the end result. I think that restructuring the education system from the ground up is a more robust choice. The only piece missing from this puzzle is the willingness of those making the decisions.

AI is the new internet

It really feels like it. Some have compared AI to fire or electricity, but those inventions were integrated into society slowly and lie too far back in time; we don't know how they felt. AI is more like the internet: it will change the world. Very fast.

In this essay I have tried to capture a future that is already present. It's one thing for AIs like GPT-3 or DALL-E to exist, and quite another for everyone in the world to be aware of them. These risks (e.g., disinformation, cyberattacks, plagiarism) are no longer hypotheses. They are happening here and now, and we'll see more desperate measures to stop them (like hastily built detectors or banning AI).

We have to assume that some things will change forever. But in some cases we have to stand our ground (as artists are doing with text-to-image models, and as minorities did before with classification systems). Regardless of who you are, AI will reach you in one way or another.

If you want to avoid being swept away by hyped narratives, falling victim to AI-powered scams, or being caught off guard by an unexpected development; if you want to capitalize on the opportunities while understanding the shortcomings, avoid feeling overwhelmed, and remain indispensable in your job, then you should keep educating yourself about what's going on in AI.
