Precision

Gradr.ai can detect text generated with leading AI tools such as ChatGPT and Anthropic AI with a success rate of 30-96%, and with a very low false positive rate (0.5%). Gradr.ai prioritizes minimizing false positives first, and only then achieving the highest possible precision.


How well Gradr.ai recognizes AI-generated text depends on how much the AI-generated text is mixed with original text, the structure of the text, and the language. The false positive rate remains consistently low regardless of these factors.


Original texts checked with Gradr.ai

  • 99.5% recognized as original
  • 0.5% incorrectly recognized as AI


In addition, Gradr.ai can detect original text that has been rewritten using artificial intelligence, as well as AI-generated text that someone has attempted to mask with paraphrasing tools such as Quillbot.


AI-generated texts checked with Gradr.ai

  • 60-70% recognized as AI
  • 30-40% uncertain or not recognized


Gradr.ai performs these measurements using machine learning, trained on a set of texts that contains both human-written and AI-generated text. This allows the software to learn the patterns and characteristics that are typical of AI-generated text. We have separate models for English and Norwegian, so the degree of precision varies from language to language.
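
As a concrete illustration, the sketch below shows how a generic human-vs-AI text classifier could be trained on such a labeled corpus, with one model per language. The library choice, features, and model type here are assumptions for demonstration only; Gradr.ai's actual models are proprietary.

```python
# Illustrative sketch: a generic human-vs-AI text classifier trained on a
# labeled corpus. The features and model choice are assumptions; Gradr.ai's
# real models are proprietary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_detector(texts, labels):
    """Train a simple detector; labels are 0 for human-written, 1 for AI-generated."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model

# One model per language, mirroring the separate English and Norwegian models.
# english_model = train_detector(english_texts, english_labels)
# norwegian_model = train_detector(norwegian_texts, norwegian_labels)
```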

Although Gradr.ai has a success rate of 30-96% when it comes to detecting AI-generated text, it is important to note that there is still a risk of false positives. This means that Gradr.ai could incorrectly identify a text as AI-generated when it was actually written by a human. We have reduced this rate to about 0.5% for English and Norwegian as of October 16, 2024, which corresponds to roughly 1 in 200 human-written texts being flagged incorrectly.

Note that this often means that only smaller parts of a text are incorrectly marked as "Likely AI", in the same way that a plagiarism check can return matches that do not necessarily hold true. It is therefore important to always make your own assessment and review text that is marked as "Likely AI", just as you would with the results of a plagiarism check.

How It Works

Gradr divides the text into fragments, analyzes each fragment separately, and gives each a rating. These ratings are then compiled and summarized into one of the following four classifications (a simplified sketch of this aggregation follows the list):

  • Contains AI-generated text
  • Uncertain if the text contains AI-generated text
  • Probably does not contain AI-generated text
  • Everything looks good, no signs of AI-generated text
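
The sketch below illustrates, under stated assumptions, how per-fragment ratings might be compiled into these four classifications. The fragment size, the aggregation thresholds, and the use of a generic trained classifier (such as the training sketch above) are illustrative choices, not Gradr.ai's actual implementation.

```python
# Minimal sketch of the fragment-and-aggregate approach described above.
# The fragment size and thresholds are assumptions for illustration;
# Gradr.ai's actual fragmentation and cut-offs are not public.
from typing import List

def split_into_fragments(text: str, sentences_per_fragment: int = 3) -> List[str]:
    """Split the text into fragments of a few sentences each."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [
        ". ".join(sentences[i:i + sentences_per_fragment]) + "."
        for i in range(0, len(sentences), sentences_per_fragment)
    ]

def classify_document(text: str, model) -> str:
    """Score each fragment with a trained classifier (e.g. the sketch above)
    and compile the ratings into one of the four classifications."""
    fragments = split_into_fragments(text)
    if not fragments:
        return "Uncertain if the text contains AI-generated text"
    # Column 1 of predict_proba is P(AI) when training labels are 0=human, 1=AI.
    scores = [model.predict_proba([f])[0][1] for f in fragments]
    likely_ai = sum(s >= 0.9 for s in scores) / len(scores)
    uncertain = sum(0.5 <= s < 0.9 for s in scores) / len(scores)

    if likely_ai > 0.5:
        return "Contains AI-generated text"
    if likely_ai > 0 or uncertain > 0.25:
        return "Uncertain if the text contains AI-generated text"
    if uncertain > 0:
        return "Probably does not contain AI-generated text"
    return "Everything looks good, no signs of AI-generated text"
```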

Examples



Example of an assessment of a text written entirely by ChatGPT (GPT-3.5, 26.08.2023).

Henrik Wergeland was a nationalistic poet and author who fought for the Norwegian language and culture. He wanted to promote the use of Norwegian in writing and speech and worked to revitalize and modernize the Norwegian language. Wergeland argued that Norwegian was a separate language and that having a literary language of its own was important for national self-esteem, as it could more authentically express Norwegian thoughts and feelings than Danish.
    
On the other hand, we had Peter Andreas Munch, a historian and philologist who was also involved in the language debate. Munch was more conservative and believed that the Norwegian language was too young and underdeveloped to function as a fully-fledged written language. He feared that a transition to Norwegian as a written language could lead to linguistic chaos and loss of understanding between the Nordic countries.

The conflict between Wergeland and Munch represented two opposing views on national identity and linguistic expression. Wergeland wanted to create a Norwegian cultural identity through language, while Munch argued for maintaining Danish as the official written language to preserve a degree of cultural and linguistic continuity.

In the long term, Wergeland's views gradually prevailed. The Norwegian language and national identity were strengthened through nation-building in the 19th century, and in 1885 it was decided that Norwegian should be the official written language in Norway. This period marks an important milestone in the development of the modern Norwegian language and nation.
Assessment: Likely AI
It is 99.99% likely that a majority of the sentences in this part of the text are AI-generated. Note that some words or sentences in the text may still be written or modified by humans, but probably not a majority.
Human: 0.1%
AI: 99.9%

Contact us

Interested in using Gradr.ai or have any questions? Contact us and we'll help you out.

For new inquiries:

For customer service:

Gradr.ai - Tribble AS

Bergstien 11B

0172 Oslo


NO 931992449 MVA