“Just use AI” is a phrase you will hear everywhere nowadays.
Writing blogs? Just use AI.
Generating images? Just use AI.
Submitting school work? Just use AI.
In a world where using AI is effortless and efficient, it’s also undeniable that some people are coasting on work generated by technology, and the results are not always satisfactory.
In professional fields, and especially in the content creation industry, there is growing concern about writers submitting AI-generated work without notifying their clients. Ethics has also become a major debate among content creators, agencies, and clients as the use of AI increases. Whether AI is killing creativity is a whole topic on its own.
One might argue that it is all about working smart, not hard, but who’s to blame when Google penalises AI-generated content (which it has already started doing) and de-ranks the sites that publish it?
As the need to differentiate between human and AI-generated content grows day by day, AI content detector tools have become a necessary precaution for many people.
Although there is no clear record of which AI content detector was introduced to the market first, two of the most widely used tools today are Originality.ai and Copyleaks, the ones we put to the test in this article.
Let’s clear the air first. No tool can produce 100% accurate results, and none can truthfully claim to. Some even market their tools as “highly accurate.”
But how accurate is “highly accurate”? AI detection tools are notorious among writers and agencies alike for their false positives and false negatives. It is difficult to set a standard accuracy rate, as each tool has different benchmarks that depend heavily on factors such as the source language of the text, how it is formatted, and how it is submitted to the tool.
As an agency that takes pride in human-written content and works on several multilingual content projects on a daily basis, we sometimes have to use AI content detection tools to add an extra layer of assurance to the projects we deliver.
And we did what any curious person with data on their hands would do: we put on our detective hats and ran some accuracy tests using Originality.ai and Copyleaks.
Originality.ai’s AI detection feature comes in two versions: a website platform and a Chrome extension.
Its Chrome extension uses the revision history of a Google Doc to analyse the content and generate a video with an accompanying report. The video gives you a visual overview of the entire editing process in the doc: a sped-up, character-by-character and word-by-word replay of the writing. The report gives you a breakdown of how the piece of content was created over time.
For this research, we used a working file from one of our previous blog posts and ran an analysis with the extension. Here are the results we received:
The report file not only reveals the number of revisions by each contributor, the corresponding word counts, and timestamps, but also generates a chart showing the progress of the character count over time. The video acts as additional evidence to support these findings.
From our observations, this tool is the closest thing we have to real proof of genuine writing. It is particularly useful in checking how a piece of content is created character-by-character over time.
However, there is always a possibility that the writer has AI-generated content on another screen and simply retypes it into the submitted document. Even in that case, both the video and the report will expose it: a naturally written piece has parts deleted, rewritten, and moved around as the writer goes along, while a piece transcribed word for word from elsewhere will not show these patterns.
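To make that idea concrete, here is a minimal sketch of such a heuristic in Python. It is our illustration of the pattern, not Originality.ai’s actual algorithm; the revision-log format and the 5% threshold are assumptions made up for the example.

```python
# A minimal sketch of the pattern described above, NOT Originality.ai's
# actual algorithm: given a hypothetical log of revision events, flag
# documents whose history shows almost no deletion or rewriting.
from dataclasses import dataclass

@dataclass
class RevisionEvent:
    timestamp: float      # seconds since writing started (assumed format)
    chars_inserted: int
    chars_deleted: int

def looks_pasted(events: list[RevisionEvent], min_delete_ratio: float = 0.05) -> bool:
    """Return True if the history looks like a straight transcription.

    Natural writing tends to include deletions and rewrites; a piece
    typed out verbatim from another screen grows almost monotonically.
    The 5% threshold is an illustrative assumption, not a tested value.
    """
    inserted = sum(e.chars_inserted for e in events)
    deleted = sum(e.chars_deleted for e in events)
    if inserted == 0:
        return True  # no typing recorded at all, e.g. a single bulk paste
    return deleted / inserted < min_delete_ratio

# Example: a history with healthy churn vs. a near-monotonic one.
organic = [RevisionEvent(10, 120, 30), RevisionEvent(60, 200, 80)]
transcribed = [RevisionEvent(10, 500, 2), RevisionEvent(60, 500, 1)]
print(looks_pasted(organic))       # False
print(looks_pasted(transcribed))   # True
```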
The main drawback of this tool is that it requires the content to be written within the file in order for revisions to be registered. Content copied from a Word document or another Google Doc will not work.
Here, we tested six articles in two versions (with and without headings) through Copyleaks. Three of the articles were written in en-CA and the other three in fr-CA. Our goal was to observe the difference in both the plagiarism and AI scores when uploading a document vs. copy-pasting it as free text.
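For illustration, the sketch below shows how a test matrix like ours can be tabulated and compared in Python. The article names and scores are placeholders, not our real data.

```python
# A sketch of tabulating the test matrix (article x language x headings x
# upload method). All scores below are made-up placeholders; the real
# article names and values are withheld for confidentiality.

# Hypothetical results keyed by (article, language, headings, method).
scores = {
    ("article-1", "en-CA", "with-headings", "file"):  {"plagiarism": 4.0, "ai": 12.0},
    ("article-1", "en-CA", "with-headings", "paste"): {"plagiarism": 9.0, "ai": 21.0},
    # ... remaining article/version combinations omitted
}

def upload_delta(article: str, language: str, headings: str, metric: str) -> float:
    """Score difference between pasting free text and uploading a file."""
    pasted = scores[(article, language, headings, "paste")][metric]
    uploaded = scores[(article, language, headings, "file")][metric]
    return pasted - uploaded

# A positive delta means pasting scored higher than uploading the file.
print(upload_delta("article-1", "en-CA", "with-headings", "ai"))  # 9.0
```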
(P.S. For confidentiality, we will not mention the names of the articles or clients in the following table.)
Let’s group our findings into two parts:
Almost all articles uploaded to Copyleaks as document files received lower plagiarism scores and lower AI scores (i.e., a higher share detected as human-written) compared to the ones that were copy-pasted into the tool. The only article with no difference in the scores was written in fr-CA.
Five out of six articles with no headings have lower plagiarism scores and a higher percentage of Human Text scores than those with headings. It is interesting to note that the article with the opposite results was the same outlier from the previous comparison set.
We must also mention that the scores changed when we re-ran the tests.
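That fluctuation is easy to quantify: submit the same text several times and measure the spread. A minimal sketch, with illustrative numbers rather than our measured values:

```python
# Quantifying run-to-run drift by submitting the same article repeatedly.
# The scores below are illustrative placeholders, not our measured values.
from statistics import mean, stdev

ai_scores_per_run = [18.0, 24.5, 21.0]  # AI % from three submissions of one text

print(f"mean={mean(ai_scores_per_run):.1f}%, spread={stdev(ai_scores_per_run):.1f}%")
# A wide spread on identical input is a sign the score alone can't be trusted.
```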
From this, we can reasonably conclude that AI content detection tools like Copyleaks are not reliable enough to be used on their own. The scores fluctuate with nothing more than a change in how the text is uploaded, whether headings are included, and the source language. If Copyleaks must be used, we recommend pairing it with the Originality.ai Chrome extension.
The use of AI detectors is a double-edged sword. On one hand, they are useful for checking the authenticity and originality of content in academia and in the SEO and content marketing industries. And they can give managers some peace of mind, knowing they’re doing all they can to detect and prevent AI content from ending up on their websites.
On the other hand, the inherent inaccuracy of AI detectors could impact the professional relationship between writers and clients. Imagine if you were a writer who meticulously researched a topic and wrote each word just to be told that it was written by AI—solely based on false positive results. It wouldn’t be fair at all.
Of course, it is equally important to look out for false negatives too! If AI content were mistakenly flagged as human-written, it could harm the SEO of the site it is published on.
Hence, as with any data-dependent tool that can produce highly variable results, AI detectors should be used with their limitations in mind.
We’ve always been transparent with our stance on using AI content: We do not and will never use AI-generated content to deliver our clients’ projects.
Here at IGC, we are SAPIENT. That means we truly value the years of expertise, dedication, and effort our team members, writers, and linguists put into their work. More importantly, we trust the professionalism of the people we work with.
If you’d like us to produce high-quality iGaming content or work with us, feel free to get in touch or schedule a call with us.