Students who use AI to cheat warned they will be exposed as detection services grow in use

Companies that develop software to detect if artificial intelligence or humans authored an essay or other written assignment are having a windfall moment amid ChatGPT’s wild success. 

ChatGPT launched last November and quickly grew to 100 million monthly active users by January, setting a record as the fastest-growing user base ever. The platform has been especially favored by younger generations, including students in middle school through college.

Surveys have found that about 30% of college students reported using ChatGPT for school assignments since the platform launched, while half of college students say using the system is a form of cheating. 

AI detection companies such as Winston AI and Turnitin say ChatGPT’s wild success has also been a boon for their business, as teachers and employers look to weed out people passing off computer-generated material as human work.

The OpenAI logo on a website and ChatGPT in the App Store, displayed on phone screens in Krakow, Poland, June 8, 2023. (Jakub Porzycki/NurPhoto via Getty Images)

“It all happened within a week or two. Suddenly, we couldn’t keep up with demand,” John Renaud, the co-founder of Winston AI, told The Guardian.

Winston AI is billed as the “most powerful AI content detection solution” on the market, with 99% accuracy, according to the company. Users can upload written content they want verified, and, in just a matter of seconds, the system will report if the materials were likely generated by a computer system such as ChatGPT or written by a human.

Winston AI provides users with a “scale of 0-100, the percentage of odds a copy is generated by a human or AI,” and also checks submissions for potential plagiarism.

Renaud explained that AI-generated materials have “tells” that expose them as computer-generated, including “perplexity” and “burstiness.” The company defines perplexity as a measure of how closely the language patterns in a writing sample follow what an AI system was trained to produce, as opposed to appearing unique and written by a human.

Burstiness is “when a text features a cluster of words and phrases that are repeated within a short span of time.”
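
To make those two signals concrete, the short sketch below is a hypothetical illustration, not Winston AI’s actual method: it estimates a passage’s perplexity with an off-the-shelf GPT-2 language model from the Hugging Face transformers library, and uses the share of repeated three-word phrases as a rough stand-in for burstiness.

```python
# Illustrative sketch only -- not Winston AI's detector. It approximates two
# commonly cited signals: perplexity (how "predictable" the text is to a
# language model) and burstiness (phrases repeated within a short span).
from collections import Counter

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity means the model finds the text highly predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return torch.exp(loss).item()

def burstiness(text: str, n: int = 3) -> float:
    """Share of n-word phrases that appear more than once in the passage."""
    words = text.lower().split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

sample = "The cat sat on the mat. The cat sat on the mat again and again."
print(f"perplexity: {perplexity(sample):.1f}, burstiness: {burstiness(sample):.2f}")
```

In this framing, text that scores unusually low on perplexity and unusually high on repetition is the kind a detector would flag for a closer look.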

Renaud told Fox News Digital he believes “the main question and concern with AI detection is if it will become undetectable one day.”

“The fundamentals of generative AI works with predictive data,” he explained. “All the models, including ChatGPT, Bard, Claude, Stability Text, have been trained on large datasets and will return outputs that are ‘predictable’ by well-built and trained AI detectors. I strongly believe this will be the case until there is true AGI (Artificial General Intelligence). But, for now, that is still science fiction.

“So, in the same way that generative AI is trained on large datasets, we trained our detector to identify key patterns in ‘synthetic’ texts through deep learning.”
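
As a toy illustration of that idea, and not Winston AI’s actual deep-learning detector, the snippet below trains a simple scikit-learn classifier to separate a few human-written sentences from machine-style ones; the tiny training set is made up purely to show the shape of the pipeline.

```python
# Toy stand-in for the approach Renaud describes: learn the statistical
# patterns that separate "synthetic" text from human writing. The training
# sentences here are invented examples, used only to show the pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_samples = [
    "Honestly, the bus was late again so I just walked and got soaked.",
    "My professor's handwriting is impossible, half my notes are guesses.",
]
synthetic_samples = [
    "In conclusion, it is important to note that technology plays a vital role in modern society.",
    "Overall, there are various factors that contribute to the overall success of any organization.",
]

texts = human_samples + synthetic_samples
labels = [0] * len(human_samples) + [1] * len(synthetic_samples)  # 1 = AI-like

detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
detector.fit(texts, labels)

new_text = "It is important to note that various factors play a vital role."
prob_ai = detector.predict_proba([new_text])[0][1]
print(f"Estimated probability the text is AI-generated: {prob_ai:.0%}")
```

A production detector would follow the same basic pattern at far larger scale, swapping the toy features and model for deep networks trained on large labeled datasets, as Renaud describes.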

Renaud said he was initially “very worried” about ChatGPT, but his worries have since eased. AI will always have “tells” that other platforms can detect, he said.

“With predictive AI, we’ll always be able to build a model to predict it,” he told The Guardian.

The interior of a school classroom. (iStock)

The Winston AI co-founder said the platform is mostly used to scan school essays, while “publishers scanning their journalists’/copywriters’ work before publishing” has gained traction and become the platform’s second most common use.

“AI detection needs are likely to grow outside of academia. We have a lot of publishers and employers who would like to get clarity on the originality of the content they publish,” Renaud added in comments to Fox News Digital.

The chief product officer of Turnitin, another company that detects AI-generated materials, recently published a letter to the editor of The Chronicle of Higher Education arguing that AI materials are easily detected.

Turnitin’s Annie Chechitelli responded to an essay published in The Chronicle of Higher Education authored by a student at Columbia University who said, “No professor or software could ever pick up on” materials submitted by students but actually written by a computer.

“In just the first month that our AI detection system was available to educators, we flagged more than 1.3 million academic submissions as having more than 80 percent of their content likely written by AI, flags that alert educators to take a closer look at the submission and then use the information to aid in their decision-making,” Chechitelli wrote.

She added that students who assume today’s technology can’t detect AI-generated schoolwork are also making a poor bet that tomorrow’s technology won’t catch the cheating.

ChatGPT in an illustration from May 4, 2023. (REUTERS/Dado Ruvic/Illustration)

“Even if you succeed in sneaking past an AI detector or your professor, academic work lives forever, meaning that you’re not just betting you are clever enough, or your process elegant enough, to fool the checks that are in place today — you’re betting that no technology will be good enough to catch it tomorrow. That’s not a good bet,” she wrote.

Like Renaud, Chechitelli argued that AI-generated materials will always have “tells,” and that tech companies working to uncover such content have crafted new ways to expose it.

“We think there will always be a tell,” she told The Guardian. “And we’re seeing other methods to unmask it. We have cases now where teachers want students to do something in person to establish a baseline. And keep in mind that we have 25 years of student data to train our model on.”

Chechitelli said Turnitin has also seen a spike in use since the release of ChatGPT last year and that teachers have put more emphasis on thwarting cheating than in previous years.

One type of generative AI, ChatGPT, has recently taken the world by storm. (iStock)

“A survey is conducted every year of teachers’ top instructional challenges. In 2022 ‘preventing student cheating’ was 10th,” she said. “Now, it’s number one.”

In a College Rover survey earlier this year, 36% of college students reported that their professors have threatened to fail them if they are caught using AI for coursework. Some 29% said their college has issued guidance on AI, while a majority, 60%, said they don’t believe their school should outright ban AI technologies.

Amid concern that students will increasingly cheat via AI, some colleges in the U.S. have moved to embrace the revolutionary technology, integrating it into classrooms to assist with teaching and coursework.

Harvard University, for example, announced it will employ AI chatbots this fall to help teach a flagship coding class at the school. The goal is to “support students as we can through software and reallocate the most useful resources — the humans — to help students who need it most,” according to Harvard computer science professor David Malan.
