AI Is MSG: Good Luck Avoiding It

Well before all of the hype about Artificial Intelligence and the fear that followed, lots of software programs and applications were using AI technologies, including Scrivener; Microsoft Word, Excel, and PowerPoint; Google Search; Amazon’s Alexa; Adobe’s Sensei; Netflix; and many more. Grammarly has been using AI technology since it started correcting authors’ faux pas in 2009.

You are using AI when you ask Siri a question, query Google Assistant, or get directions to a Thai restaurant. That language translation app uses AI, and so does your favorite social media platform where you posted a picture of your delicious Pad Thai. AI is used in everything from e-commerce to healthcare, from financial services to hiring. Forget humans in HR: Amazon is using AI to choose the next delivery person!

If you think you can avoid AI, good luck. It is the MSG flavoring our digital world. If it is not already in everything now, it will be—unless we put some laws in place that spell out the parameters and side-effects, which we don’t even know yet, but certainly include heart palpitations and sweating!

An author recently told me that a publisher asked them to sign a contract stating they are not “using” AI. That seems unenforceable and fruitless based on the above. The author is already using AI and doesn’t even know it. I used AI to write this article: I used a search engine and you will find links to sources. I used spellcheck software. I used social media to send it to you. All powered by AI. But there is a difference between ‘AI-assisted’ and ‘AI-generated.’ My use is ‘AI-assisted.’ And that’s okay. [There is an exception highlighted below as an example.]

AI is a real concern for authors and publishers. Before AI, there was already an issue with scams directed at authors, like offers to display your book at trade shows, promises of movie deals, and paid courses teaching authors how to create “contentless books” on Amazon. (These courses urge authors to sell low-content books through KDP with expectations of big returns. It is a scam because even books with plenty of great content may be difficult to sell among the many thousands on Amazon.)

Some fear that AI will take over the world and eat us. A more realistic fear is that AI will make it even more difficult to determine what is real from what is replicated, falsified, and stolen.

Since AI hit public awareness, scammers have started generating books using AI by prompting things like: “Write a novel similar to Heinlein’s sci-fi books with the lead character named Darren and make it based in San Francisco.” AI can steal an author’s style and string words together, albeit badly. Here is an example I just generated from ChatGPT:

“The neon-lit streets of San Francisco hummed with life, each flicker of light casting long shadows across the fog that draped the city like a cloak. Darren Bishop strolled through the throngs of people, his hands buried deep in the pockets of his trench coat, the collar turned up against the chill of the evening. With a mop of unruly hair and eyes that held a glint of curiosity, he navigated the labyrinth of alleyways and boulevards with the ease of a native.”

― ChatGPT

The LA Times reported on October 20, 2023, that Douglas Preston joined a lawsuit with “a host of other big-name authors, including John Grisham, Jonathan Franzen, Jodi Picoult and George R.R. Martin — the notoriously slow-to-publish ‘Game of Thrones’ author who, Preston says, joined the suit out of frustration that fans were using ChatGPT to preemptively generate the last book in his series.”

This lawsuit, and others, will help determine the breadth and scope of GPT use in the future. For now, authors and publishers, as well as teachers and employers, are trying to get a handle on the use of AI tools and generated content. We’ve got a lot of questions:

  • Is it okay to use AI tools when creating?
  • Will someone steal my work if I publish on a writing website or blog?
  • Can you tell if writing or art is AI?
  • Will AI robots take over the world and kill us all?

FIRST

Amazon currently asks authors to disclose if their work contains any AI-generated material, yet it may be quite some time before they eliminate the use of AI-generated work, if ever. Amazon now notifies authors, “We require you to inform us of AI-generated content (text, images or translations) when you publish a new book or make edits to and republish an existing book through KDP (Kindle Direct Publishing). AI-generated images include cover and interior images and artwork.”

Fox News reported that, in spite of Amazon’s updated rules, the company will not “require publishers to disclose when content generation is AI-assisted, meaning the works were authored by the author or publisher with the use of AI tools to ‘edit, refine, error-check, or otherwise improve’ the content.” https://www.foxnews.com/us/amazon-crack-down-self-publishers-using-ai-generated-content

This means that using an AI-based tool like ChatGPT to generate ideas, and an editing tool like Grammarly to polish the text, is just fine as long as the author created the work themselves. Their creation is seen as ‘AI-assisted’ and not ‘AI-generated.’ The company said, “It is not necessary to inform us of the use of such tools or processes.”

Authors are allowed to use AI tools to assist when creating work published on Amazon and they do not need to disclose this use. Only when the work is AI generated does it need to be disclosed.

SECOND

Authors have risked their work being stolen long before artificial intelligence hit the scene, but it is a wise author who researches to ensure each writing platform they use protects their writing. If an author’s writing is loose and unprotected on the internet, can it be used in AI generated content? Can ChatGPT find it and use it?

According to the Authors Guild, “All of the large book AI training datasets that we are aware of were compiled from ebook piracy sites—precisely because there are no legal databases of books that are on the open internet, and AI developers have not obtained licenses to do so.” https://authorsguild.org/news/practical-tips-for-authors-to-protect-against-ai-use-ai-copyright-notice-and-web-crawlers/

Piracy. So, yes, it is possible for your work to end up in an AI training dataset without your knowledge or permission. However, someone who steals your work cannot copyright it. They cannot steal your book, put their name on it, and copyright that book. “AI-generated material cannot be protected under copyright law. Work that combines human creativity with AI-generated material may be copyrighted if the human author’s contribution is significant and identifiable.” US Copyright Office policy statement (16 March 2023).

Even though I have included a portion of writing in this article (above and quoted) that was generated using ChatGPT, according to the above, I can legitimately copyright this article because my contribution is significant and identifiable. Importantly and additionally, I have clearly identified which portion of this work was generated by someone or something other than myself.

What can you do to protect your authorship, while still trying to get your work in front of readers?

  • Add a copyright notice with the language: This Work Is Copyrighted. “Copyright, a form of intellectual property law, protects original works of authorship including literary, dramatic, musical, and artistic works, such as poetry, novels, movies, songs, computer software, and architecture. … Your work is under copyright protection the moment it is created and fixed in a tangible form that it is perceptible either directly or with the aid of a machine or device.” US Copyright Office.
  • In addition to ensuring you have a copyright notice on your books and articles, you can add the following language recommended by the Authors Guild: “NO AI TRAINING: Without in any way limiting the author’s [and publisher’s] exclusive rights under copyright, any use of this publication to “train” generative artificial intelligence (AI) technologies to generate text is expressly prohibited. The author reserves all rights to license uses of this work for generative AI training and development of machine learning language models.”
  • You can also block GPT bots from pulling information from your website (a minimal example follows this list). The Authors Guild includes instructions on how to keep GPT bots from crawling your website here: https://authorsguild.org/news/practical-tips-for-authors-to-protect-against-ai-use-ai-copyright-notice-and-web-crawlers/
  • Most importantly, make informed and not fear-based decisions.
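
As promised in the list above, here is a minimal sketch of a robots.txt file that blocks the best-known AI crawlers. It assumes your site serves a robots.txt file at its root and that the crawler names the companies have published (OpenAI’s GPTBot, Common Crawl’s CCBot, and Google’s Google-Extended token) are still current; check the Authors Guild link above and each crawler’s own documentation before relying on it, since new bots appear all the time.

    # robots.txt (served from the root of your website)
    # Ask OpenAI's crawler not to use your pages for AI training
    User-agent: GPTBot
    Disallow: /

    # Ask Common Crawl's bot to stay away; its archives are widely used for AI training
    User-agent: CCBot
    Disallow: /

    # Ask Google not to use your pages for its AI models
    User-agent: Google-Extended
    Disallow: /

Keep in mind that robots.txt is a request, not a lock: it only affects crawlers that choose to honor it, and it does nothing about writing that has already been scraped or pirated, which is why the copyright and NO AI TRAINING notices above still matter.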

THIRD

Can you tell if writing is AI generated? Yes and no. While there are no 100% accurate tools to determine if writing was generated by AI, companies are working hard to create one. Currently there are some horror stories about the failures and inaccuracies of AI detectors:

  • A professor failed an entire class of students after he ran their essays through ChatGPT, which falsely claimed it had generated the students’ work. https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/
  • AI detectors claim the US Constitution was written by AI. https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/
  • AI detectors are “unreliable and easily gamed.” https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers
  • Right now, authors who legitimately create their own work have to worry about being falsely accused of using AI. https://contentwriters.com/blog/how-to-avoid-an-ai-detection-false-positive/
  • Amazon thinks it is fine to use AI to create ads to sell your book; just don’t use AI to create your book. https://techcrunch.com/2023/09/13/amazon-debuts-generative-ai-tools-that-helps-sellers-write-product-descriptions/

I, personally, trust the human mind to determine if AI is being used more than I trust AI to detect itself. There are things computers simply cannot comprehend, like our experience of pain, love, pride, determination, intelligence, disgust, and will. And boredom. AI writing is boring. And repetitive. It doesn’t know how or when to break the rules, even if it can follow them. AI writing is bad. It lacks emotion and is frequently incoherent.

Universities are particularly struggling to determine if a student’s writing is their own. “In a shockingly candid admission, Turnitin [a supplier of plagiarism-detection software] admitted in a recent video that their AI-detection software should be taken ‘with a grain of salt’. In addition, they say that instructors will need to be the ones to ‘make the final interpretation’ regarding what is created by generative AI.” The software detector company determined that the human brain was the better AI detector! From the Scholarly Kitchen article, “Publishers, Don’t Use AI Detection Tools!” https://scholarlykitchen.sspnet.org/2023/09/14/publishers-dont-use-ai-detection-tools/

How are other writing platforms dealing with AI? Medium.com’s policy is the following: “We welcome the responsible use of AI-assistive technology on Medium. To promote transparency, and help set reader expectations, we require that any story created with AI assistance be clearly labeled as such.” Because of this statement, I will disclose not only that I generated the quoted paragraph above from ChatGPT, but also that I have used Word’s spellcheck (which, by the way, does not like my sentence structure! AI requires rule-compliance!).

FOURTH

Will robots take over the world and eat us all? Yet to be seen. However, so far AI is not aware. It is inauthentic. It cannot create but only replicate. YOU ARE creative and irreplaceable. There are things only humans can do where computers completely fail, for example certain tiling problems. Physicist Sir Roger Penrose asserts that a computer, or computer-controlled robot, is not capable of genuine intelligence because intelligence requires understanding, which requires awareness. But that’s a topic for my next article, AI ISN’T FUNNY (and never will be).

So just keep scratchin’.
