Artificial intelligence (AI) and ChatGPT are everywhere these days. I mean EVERYWHERE! The technology is learning and evolving more quickly than many of us expected. Yes, AI is taking many industries by storm and, in many use cases, is a valuable addition to labs. Think data analysis and research.
But, while ChatGPT and AI as a whole have some truly astonishing potential, there are also very real risks and problems that can arise when using these new technologies to produce content for marketing science-based businesses. So, before you let ChatGPT write your next article, landing page, blog, or white paper, you’ll want to consider these factors.
How ChatGPT was trained and how it works
First, let’s understand how ChatGPT was “trained.” Its training consisted of being “fed” a vast amount of internet content and published works to build its knowledge base and learn how humans write. For example, Wikipedia was used in training ChatGPT. In addition, humans participated in supervised training to steer it away from harmful content and to refine its responses.
The way you use ChatGPT is by entering “prompts.” The model then generates an answer one word (token) at a time, each time predicting the most statistically likely continuation based on patterns learned during training.
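To make that mechanism concrete, here is a toy sketch in Python using a tiny hand-made probability table. It is purely illustrative; the real model learns probabilities over tens of thousands of tokens from billions of examples and conditions on the full prompt, not just the previous word.

```python
# Toy next-word table: for each word, candidate next words and made-up
# probabilities. The real model's "table" is learned, not hand-written.
NEXT_WORD = {
    "the": [("cell", 0.5), ("lab", 0.3), ("data", 0.2)],
    "cell": [("divides", 0.6), ("membrane", 0.4)],
    "divides": [(".", 1.0)],
}

def generate(start: str, max_words: int = 10) -> str:
    """Generate text one word at a time, like a (greatly simplified) LLM."""
    words = [start]
    while len(words) < max_words:
        candidates = NEXT_WORD.get(words[-1])
        if not candidates:
            break  # no known continuation, stop generating
        # Greedily pick the highest-probability continuation; real models
        # usually sample from the distribution instead, which adds variety.
        words.append(max(candidates, key=lambda c: c[1])[0])
    return " ".join(words)

print(generate("the"))  # the cell divides .
```

Notice that the model never “looks up” a fact; it only chains likely continuations. That is why a fluent-sounding answer can still be wrong.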
Seems easy enough, right? In theory, it would be a beneficial tool to cut time, effort, and cost out of the creative marketing process. However, several pitfalls and legal questions arise as a result.
The risks related to using ChatGPT
If you’re a science marketer, you need to be aware of the serious problems ChatGPT can create – and be ready to address them. Here are four significant pitfalls of ChatGPT:
- Unreliable or biased writing
Because ChatGPT was “fed” all different types of internet content (even with the moderators attempting to shield it from harmful information), the facts and information that ChatGPT returns to your prompt can be unreliable or even downright incorrect.
In an ideal world, AI would be trained on peer-reviewed science and, therefore, would provide only facts. However, ChatGPT’s training datasets included social media posts, meaning that ChatGPT can show negative bias towards marginalized people and return results supporting outright conspiracy theories despite the creators’ best efforts to stop this behavior. Because of this, misinformation can leak in. The most common answer isn’t always the best or even correct answer.
- Sounds like AI
ChatGPT generates its answers word-by-word. This leads to the responses sounding like they were written by AI – not a human with original thoughts and emotions. Think about the last time you read a news article online – did it sound like a human actually wrote it? Most people can tell when ChatGPT has written an article, post, or paper. While ChatGPT’s output is relatively decent for some industries and uses, this is often not true for science.
- Hallucinations
A dangerous problem with ChatGPT is “hallucinations.” When generating an answer to your prompt, if the model doesn’t know something, it will make something up – or, in other words, it lies. This article by Herb Lin shows excellent examples of ChatGPT “hallucinating,” giving wildly incorrect information and, at times, completely fabricating information in response to the author’s prompts.
There are many reasons why this may happen, but the bottom line is that ChatGPT cannot be trusted to give you the facts. Sometimes, ChatGPT will invent entirely fictional quotes from famous figures, fabricate entire studies, attribute papers to authors who never wrote them, or even create fake URLs for websites that do not exist.
- You don’t own the content
ChatGPT can’t have its own ideas or opinions. Even if ChatGPT could imitate human writing flawlessly, legal issues still come into play with AI-generated content. Because ChatGPT generates answers from material “learned” during its training, it is possible that a response could contain copyrighted material. In addition, US copyright law requires that a work have a human author to be protected. This means that work generated entirely by AI can’t be copyrighted.
What does using ChatGPT mean for science-focused content marketers?
If you were thinking that ChatGPT can replace your content writers and editors, we hope that understanding some of the major pitfalls of the app will give you pause. Any of these four pitfalls can sink your credibility in the public arena – which would damage your future goals if you are a science-based B2B organization.
When it comes to content marketing for science-based businesses, you need to keep your human writers and editors! Why? If you want to use ChatGPT, you’ll need your team to avoid the above-mentioned pitfalls.
- Fact-Checking: Your writers and editors must ensure that all information in your draft content is reliable, unbiased and, most of all, true and correct.
- Copyrights: Content generated must be heavily revised and edited to create an original work.
- Natural Language: Content must be rewritten in your brand’s human voice and style.
- Limitations: ChatGPT can’t browse the web in real time or access up-to-date data. Its knowledge base, while extensive, only extends through 2021, so keep this limitation in mind when seeking the most current information.
Using ChatGPT responses as a starting point and working from there can be an effective, fast, and safe way to utilize AI in your creative marketing workflow. It’s vital that you question and fact-check what you receive and recognize bias or misinformation.
The somewhat good news
While there are issues with ChatGPT, it can still be a useful tool (like Grammarly, Hemingway, or the myriad other AI-assisted apps) to help you in your creative process. It can’t, however, replace a talented content creator or editor.
It can help with ideation and with explaining complex concepts in a simplified way. ChatGPT can also rephrase your sentences and even write in specific styles. It can be great for sparking creativity and getting you “unstuck” when you’re having trouble with a particular part of your work or wording. But the answer you get is only as good as the prompt you put in. If you’re going to use ChatGPT, experiment with different questions and regenerate responses often. It takes work, but getting valuable ideas from ChatGPT is not out of the question.
Whether you decide to use AI in your work or not, it’s important that you know what risks you run as well as what it can and cannot do.
Brandwidth Solutions serves the healthcare, life sciences, energy, and contract pharma industries. We work with companies that want to make the most of their marketing – who want their marketing empowered to help drive leads – and ultimately sales. If you want to move your product or service forward in a smart way, we want to work with you. Call us at 215.997.8575.