Artificial Intelligence (AI) currently dominates news and media coverage, including the big questions around the speed of its evolution and the ethics of its use. Joshua Lee, a writer, college instructor, and police detective, graciously agreed to answer some of my questions about the ethics of using ChatGPT and other forms of artificial intelligence for workplace writing. The following is the result of our conversation.
Erika: I heard that you just wrote an article about how police can use AI (such as ChatGPT) for police work. What AI applications do you see for workplace writers both within and outside of police work?
Josh: Writing is an essential function for nearly every profession. If you look at the most influential people in every field, they all have one thing in common: they write. Artists, photographers, landscapers, doctors, lawyers, accountants, scientists, and even police officers have to learn to write well if they are going to influence change.
I recently wrote an article on how police officers and management can use AI chatbots like ChatGPT to help make educated decisions about complex issues facing law enforcement. Low staffing, low police morale, and a lack of community support lead to higher crime, more use-of-force encounters, and higher attrition. These are not easy issues to solve on our own, and technology can help.
AI chatbots can benefit workplace writers by giving them easier access to research, articles, white papers, and best practices for their field. They can then use that information to write influential content for their industry.
Erika: I also hear that you are working on an article about ethics and the proper use of chatbots. Could you speak to the ethical issues around chatbots for professional writers, both in police writing and in other kinds of workplace writing? For example, is it ethical to use chatbots to write content such as the following:
- a blog article for a non-profit?
- a user manual?
- an email marketing campaign?
- social media posts?
- an in-house newsletter for employees?
- content on public websites, such as the Motor Vehicles Department or IRS.gov?
- grant proposals?
Josh: I have been involved in the ethical use of AI for nearly ten years. It’s clear that ethical issues for police applications are different from those of professional writers simply because of constitutional rights and privacy issues. If an officer misuses AI, he could lose the case, a bad guy could go free, an innocent person could suffer harm, or the officer himself could go to jail or be sued.
While constitutional rights and privacy may be beyond a typical writer’s purview, professional writers can learn some best practices to keep them safe within the ethical boundaries of advanced technology.
1) Understand what AI chatbots are and do
Chatbots like ChatGPT gather data from open sources at a ridiculously fast rate. They learn from previous questions and answers, which allows them to answer faster. Not only do AI chatbots gather data, but they also analyze questions contextually. That means you, as the writer, have to give context to your question to get the best answer possible. You can ask a question like “What is the capital of Italy?” and get a response just as you would from Google, but that is not what AI chatbots are for. They were developed to have a deeper conversation, just like talking to a friend, coworker, parent figure, or professor.
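To make that concrete, here is a minimal sketch using OpenAI’s Python SDK showing the difference between a bare, search-style question and one with context. The model name and prompts are illustrative assumptions, not recommendations from this interview:

```python
# Minimal sketch with OpenAI's Python SDK (pip install openai).
# Assumes OPENAI_API_KEY is set in the environment; the model name
# and prompts below are illustrative only.
from openai import OpenAI

client = OpenAI()

# A bare, search-engine-style question:
bare = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the capital of Italy?"}],
)

# The same tool used conversationally, with the kind of context Josh describes:
contextual = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are helping a travel writer plan a feature article.",
        },
        {
            "role": "user",
            "content": (
                "I'm writing a piece on Rome for first-time visitors. "
                "What should I cover beyond the obvious landmarks?"
            ),
        },
    ],
)

print(bare.choices[0].message.content)
print(contextual.choices[0].message.content)
```

The first call gets you a Google-style fact; the second, because it explains who is asking and why, gets a response shaped to the writing task.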
2) Chatbots are like kids: don’t abuse them
One of the most important things to remember is that chatbots can be manipulated into giving you a response based on the context of the question. It is the same as rephrasing a question to your kids to elicit the answer you want to hear, or a police officer intentionally asking leading questions to elicit a confession. If you intentionally phrase your question so that the chatbot gives you the answer you want to hear rather than the answer it would otherwise give, and then you publish that response, that is unethical. It also spreads a lot of misinformation.
3) Beware of your biases
Chatbots aren’t inherently biased, but all humans have implicit biases that impact chatbots’ responses and how they learn. AI users must understand their own biases before using AI for decision making.
4) Cite sources
When you use an AI chatbot like ChatGPT, you are using a tool to find information that you may or may not have been able to find without it. When you get a response from a chatbot, it is important to find the true source of that response. Where did the chatbot get that information? I have been using ChatGPT since its release, and I have found that most of the time it can provide the source of its information. Some sources are academic; others are not. If the system gives you a source, verify it yourself before using it. If the system can’t give you a source, don’t use that information in any published work.
Erika: Yes! And check those sources to make sure the sources are reputable and reliable. Here’s an example of what can happen if you don’t.
Chatbot hallucinations
As an experiment, I asked ChatGPT to write an article about building pet shelters. Then I asked ChatGPT to tell me the sources it used to create the article. ChatGPT gave me a list of what looked like reputable, relevant sources, such as the American Society for the Prevention of Cruelty to Animals and the Humane Society. But when I checked those sources—actually going to the URLs that ChatGPT provided—each one came back “not found.” That is, the information was not available at the web addresses that ChatGPT gave me.
In addition, ChatGPT could not tell me which parts of the article it wrote were from which source.
Did ChatGPT hallucinate these sources? Did it make them up? I don’t know, so I can’t use that information in a public report unless I can confirm it through my own search.
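That first pass of URL-checking can even be scripted. Here is a sketch using only Python’s standard library; the URLs below are placeholders, and a page that loads is still no guarantee that its content supports the chatbot’s claims:

```python
# Quick sanity check: do the URLs a chatbot cited actually resolve?
# Uses only Python's standard library; the URLs below are placeholders.
# A 200 response only means the page exists -- you still have to read it
# and confirm it says what the chatbot claims it says.
import urllib.error
import urllib.request

cited_urls = [
    "https://www.aspca.org/example-shelter-guide",    # hypothetical
    "https://www.humanesociety.org/example-article",  # hypothetical
]

for url in cited_urls:
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "source-checker/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"{resp.status}  {url}")
    except urllib.error.HTTPError as e:
        print(f"{e.code}  {url}  (not found, moved, or blocked)")
    except urllib.error.URLError as e:
        print(f"ERR  {url}  ({e.reason})")
```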
Giving false information—such as claiming that non-existent resources exist—is something that chatbot technology is known for. You can learn more about these so-called “hallucinations” in the New York Times article “What Makes A.I. Chatbots Go Wrong?” (Cade Metz, March 29, 2023). A chatbot can give us misinformation because the database upon which it bases its answers contains misinformation or outdated information.

Bing was more upfront: when I asked it the same questions, it told me that it didn’t use outside sources to create its article about building an animal shelter. The article it created was—according to Bing—generated from its own “internal knowledge and information.”
When I asked Bing to generate a list of sources for more information about building animal shelters, it gave me some reputable sites but also one site that contains inadequately edited, suspicious content.

The takeaway: as Anna Mills—curator of the resource area AI Text Generators and Teaching Writing for the Writing Across the Curriculum Clearinghouse—says, don’t use AI chatbots if you don’t have the expertise to evaluate their output.[i]
“Garbage in, garbage out”
There is a lot of good information on the web, but there is also plenty of false and undocumented information. Chatbots tell us what is in their database regardless of how true or relevant that information is.
Integrate and cite your sources—including AI-generated text—properly, even at work
We might be used to reading online articles that don’t tell us where the “author” got information, but leaving out our sources remains an unethical practice. Students are used to providing sources for school-based research work, but when they transition to the workplace, they are not always sure whether they still need to cite their sources.
My advice is that in the workplace they should always cite their sources, both to avoid legal issues and to protect their employer’s reputation and their own reputations as honest writers. That said, academic citation methods are not always appropriate or necessary in the workplace, so learn your industry’s expected method of using and citing sources.[ii]
Be a part of the solution: my suggestion is to let your readers know which parts of your content are your own writing and which are AI-generated, just as we let readers know when another human has written part of our content. The Chicago Manual of Style agrees and provides further guidance, such as how to include the prompt you used in ChatGPT and what to include in citations in formal and informal writing.[iii]
When unscrupulous, naïve, or careless writers use AI to write web content, that content can be full of good, not-so-good, and garbage information.
References allow readers to check the truth and relevance of content, but if those references point to sources that are non-existent or no longer available, I mistrust the whole article. Because of chatbots like ChatGPT, when I read an online article that gives only a list of references at the end, instead of footnotes linking each source to the corresponding information in the article, I suspect the quality and truthfulness of the whole piece.
The rise of chatbots might help the writer to create content more quickly, but it also makes my job as a reader much harder.
5) Transparency
Josh: Users should inform their readers when they use AI in their writing. There is nothing wrong with using tools in your work; what is wrong is taking full credit and not being transparent about where you got your information.
When I use ChatGPT, I include a quick disclaimer at the end. Something like: “This article was written by giving prompts to ChatGPT.”
Now, is it ethical to use chatbots to write things like blogs, user manuals, emails, and social media posts? Sure, as long as you follow the five best practices detailed above.
Erika: I use Grammarly to catch typos in my writing. Are there ethical issues with that?
Josh: Not at all. Grammarly, ProWritingAid, PerfectIt, and even Microsoft’s spelling and grammar checkers are all AI tools. There is nothing wrong with using any of them.
Erika: If I am editing someone else’s work, do I need to tell my client that I am using ChatGPT or Grammarly?
Josh: Now this is a good question. As an editor, you use tools to help you get your job done quickly and effectively. Using ChatGPT or Grammarly is no different from a photographer using Photoshop to edit pictures.
In my contracts, I tell all my clients that I use various tools including AI technology to aid me as a writer and editor. Your client is hiring you for a specific job, and tools like AI are just tools as long as you use them ethically.
Erika: Why is it important for workplace writers to think about the ethics of using chatbots like ChatGPT?
Josh: Good question. Without sounding too nerdy: AI chatbots like ChatGPT and Google’s Bard fall under the technical umbrella of machine learning (ML). Chatbots learn from each question you ask and from how you reply to their responses.
A new AI program or chatbot is not inherently unethical. It is only when AI starts learning from biased or unethical human input that it becomes “corrupted.” AI chatbots are just like kids: if you teach a machine unethically or immorally, it will develop into something that responds unethically and immorally, just as a child taught to be unethical or immoral will likely develop into an unethical and immoral person. This is very scary when you are talking about a machine that can be significantly faster than a human.
As writers, we have to be careful about what we ask and the context behind the question.
Erika: Is there any other information about ethics that you would like to share with workplace writers as they navigate this technology?
Josh: Take the time to learn who you are as a person. Take implicit bias courses and truly try to understand how and why you make the decisions you do. The more you understand yourself, the better you will be able to use these types of systems. Also, don’t intentionally use the technology for nefarious purposes.
Erika: Thank you for answering my questions, Joshua. Where can we read more of your work?
Josh: I have an active blog on Police1 at https://www.police1.com/columnists/joshua-lee/
[i] Anna Mills, “Writing With and Beyond AI” (workshop with Ron Brooks, Northern Arizona University, April 21, 2023).
[ii] To learn more about citing sources in a few types of workplace content, see the following:
- Workplace reports: David Taylor’s YouTube video, How to Cite Sources in Business Writing, covers how and why to use direct quotes and paraphrases.
- Marketing content: https://www.carnegiehighered.com/blog/how-to-harness-artificial-intelligence-marketing/
- Infographics: This blog article on the website of Venngage (a “business visuals” firm) includes ways to cite a source without using academic style: https://venngage.com/blog/how-to-cite-an-infographic/
[iii] https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0422.html