AI is transforming many facets of business, including public relations. A recent study by the Chartered Institute of Public Relations found that AI tools now assist with as much as 40% of the tasks performed by PR and comms professionals. Indeed, AI tools are already shaking up daily workflows, unlocking efficiency gains and creativity, and enhancing comms professionals’ ability to support marketing and C-suite initiatives.

One way we’ve been exploring how AI can support our work at SourceCode is through the use of chatbots. Generative AI chatbots leverage natural language processing to hold open-ended, free-flowing conversations. Well-known examples include Claude and ChatGPT for primarily text-based responses, and DALL-E and Midjourney for text-to-image generation. This utilitarian nature is what makes generative AI chatbots appealing work partners: they can spark new ideas while taking the monotony out of routine work.

The key to a good AI partner is crafting thoughtful prompts and providing clear instructions that guide the chatbot’s behavior in a way that is aligned with your goals. 

Prompts, prompts, and prompts

A well-crafted prompt is the linchpin of getting accurate, relevant, high-quality information. Consider the following examples for creating a speaking submission, and compare the vague prompts with the detailed, context-rich ones:

Poor: “Give me a list of topics for me to propose to speak at SXSW”

Poor: “What should I talk about at CES Unveiled?”

Poorly constructed prompts offer extremely vague context and give the chatbot no information on how to respond appropriately. Chatbot responses should only ever be treated as a starting point, and responses to prompts like these won’t even give you a solid place to start.

Good: “I’m the head of product design at a consumer bank. I’m interested in humanizing design, accessibility, and inclusivity in design, and the power of design to make banking accessible. I’m going to a premier tech conference where the audience is people in UX, UI, product design, and product engineering. I’m applying to speak at SXSW, a renowned conference where thought leaders share their insights and ideas. The theme of the conference stage I want to speak at is “the power of design.” Help me propose five topics to give a talk at the event. For each, include a session topic and a 100-word brief.”

Good: “I’m unveiling a new product called “The SmartWatch.” It is a cutting-edge hybrid smartwatch featuring new and exclusive features, including the revolutionary TempTech24/7 module for continuous body temperature variation tracking. It is the only smartwatch approved and marketed for medical use. I’m attending CES Unveiled, a premier conference where tech companies announce new, exciting products. I will announce this product there and showcase its features. Draft a submission that provides a concise overview of our product, emphasizing key features and benefits, for the organizer to consider whether they want me to announce my product at CES Unveiled. It will be 500 words max.”

A well-crafted prompt is key to getting the most out of generative AI chatbots. The prompt should clearly state the desired task or output by framing the question at hand, establishing the preferred tone and level of detail, and giving examples if helpful. With the relevant context, the bot can be steered towards the desired response. 
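For teams that want to reuse this structure rather than retype it in a chat window each time, the same ingredients (role, context, audience, task, and format constraints) can also be assembled programmatically. The sketch below is a minimal illustration, assuming the OpenAI Python SDK (v1+) with an API key set in the environment; the build_prompt helper and the model choice are our own examples, not a prescribed workflow.

```python
# Minimal sketch: assembling a structured prompt and sending it to a chatbot API.
# Assumes the OpenAI Python SDK (v1+) with OPENAI_API_KEY set in the environment;
# the model name and this helper are illustrative, not a prescribed workflow.
from openai import OpenAI

def build_prompt(role: str, context: str, audience: str, task: str, constraints: str) -> str:
    """Combine the ingredients of a well-crafted prompt into one message."""
    return (
        f"Role: {role}\n"
        f"Context: {context}\n"
        f"Audience: {audience}\n"
        f"Task: {task}\n"
        f"Format: {constraints}"
    )

client = OpenAI()

prompt = build_prompt(
    role="Head of product design at a consumer bank",
    context="Applying to speak at SXSW on the 'power of design' stage; "
            "focused on humanizing design, accessibility, and inclusivity.",
    audience="UX, UI, product design, and product engineering professionals",
    task="Propose five session topics for a talk at the event.",
    constraints="For each topic, include a session title and a 100-word brief.",
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same template works just as well pasted straight into a chat window; the point is simply to keep role, context, audience, task, and output format explicit rather than implied.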

Generative AI chatbots are easy to use once you get the hang of prompting them. However, the job is not done when seemingly satisfying responses are churned out; there are things to be mindful of every time a response is generated.

Chatbot Watch-Outs

When using AI bots for work, the responsibility sits with human employees to ensure responsible and secure AI use that is aligned with company policies. You cannot depend on AI alone to determine appropriate deployment. It’s crucial to validate the accuracy of the response, understand the ethics and security implications, and exercise your own critical judgment rather than fully entrusting the chatbot.

First and foremost, as our clients’ trusted partners, PR pros need to keep clients’ confidential information secure. Any information that is not publicly available should never be shared with a chatbot. AI chat logs have been leaked before, and agencies must preserve trust by preventing breaches of clients’ sensitive information. Moreover, information shared with chatbots is often used to train the underlying models, meaning the data may be retained on the provider’s servers for some time and could be exposed if the platform is ever compromised.

Examples of information that should never be shared with chatbots include:

  • Client business plans or non-public product information
  • Non-public client or prospect PowerPoint slides or documents
  • Paid or confidential analyst reports
  • Key findings from paid market insights reports
  • Confidential research data
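One lightweight way to reinforce this list before anything reaches a chatbot is a quick pre-submission screen of the draft prompt. The sketch below is purely illustrative: the term list, the screen_prompt helper, and the example draft are hypothetical, and no filter like this substitutes for human judgment or your company’s own policy.

```python
# Illustrative pre-submission screen: flag draft prompts that mention
# confidential terms before they are pasted into a chatbot.
# The term list, helper, and example below are hypothetical, not a standard tool.

CONFIDENTIAL_TERMS = {
    "project aurora",    # example: unannounced client initiative
    "q3 board deck",     # example: non-public client document
    "codename falcon",   # example: pre-launch product codename
}

def screen_prompt(draft: str) -> list:
    """Return any confidential terms found in the draft prompt."""
    lowered = draft.lower()
    return [term for term in sorted(CONFIDENTIAL_TERMS) if term in lowered]

draft = "Summarize the Q3 board deck for Project Aurora in three bullet points."
flagged = screen_prompt(draft)
if flagged:
    print("Do not submit; remove or generalize:", ", ".join(flagged))
else:
    print("No flagged terms found; still apply normal judgment before submitting.")
```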

Double- and triple-check AI outputs for accuracy before public release. Watch closely for any plagiarism or copyright/trademark infringement. While AI can synthesize concepts, verbatim passages lifted from existing sources are a sign that the information has been assembled disingenuously, or that the bot lacks a true grasp of the analysis required to produce its own wording.

Here are a few tips to tackle the issue:

  • Corroborate with reliable sources: Check any factual statements from the chatbot against trustworthy reference materials like textbooks, news articles from reputable publishers, and data from respected sources. If the chatbot’s claims don’t align with known facts, treat them as suspect.
  • Evaluate supporting evidence quality: For any outside evidence the chatbot cites, check whether the sources are authoritative, impartial, and current, and whether they actually support the conclusions in context. Don’t just accept footnoted links without scrutiny.
  • Ask follow-up questions: If a response seems questionable or too good to be true, ask additional probing questions to validate your understanding. See if the chatbot’s explanations remain logical and consistent.
  • Check for plagiarism: Paste distinctive phrases from complex responses into a search engine or plagiarism-checking tool to watch for copied text the AI may falsely present as original work (a minimal overlap-checking sketch follows this list).
  • Spot check at random: Fact-check the occasional response even when nothing looks off, as quality can vary from question to question. Periodic spot checks will keep the chatbot honest.
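As a concrete illustration of the plagiarism check above, the sketch below compares a chatbot response against reference texts you already have on hand and flags long word-for-word overlaps. It is a rough heuristic under our own assumptions (the eight-word window and the helper names are illustrative); a search engine or a dedicated plagiarism checker remains the better option for anything client-facing.

```python
# Rough heuristic for spotting verbatim overlap between a chatbot response
# and known source texts. The eight-word window and helper names are
# illustrative; this does not replace a dedicated plagiarism checker.

def ngrams(text: str, n: int = 8) -> set:
    """Split text into lowercase word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlaps(response: str, sources: list, n: int = 8) -> list:
    """Return n-word phrases from the response that appear verbatim in any source."""
    response_grams = ngrams(response, n)
    hits = []
    for source in sources:
        shared = response_grams & ngrams(source, n)
        hits.extend(" ".join(gram) for gram in sorted(shared))
    return hits

chatbot_response = "..."            # paste the draft you want to check
reference_sources = ["...", "..."]  # e.g. articles or reports the bot may have drawn on

for phrase in verbatim_overlaps(chatbot_response, reference_sources):
    print("Possible verbatim passage:", phrase)
```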

Last but not least, lean on guidance set by trusted organizations in your field. As with integrating any complex emerging technology, it is advisable to consult with experts and craft thoughtful guidelines on its use. These guidelines will serve as a compass your employees can refer to as they ensure ethical development and deployment, promote inclusivity, and address concerns around potential misuse or harm, instilling responsibility and accountability around this powerful tool.

Conclusion

Generative AI is a powerful tool with the potential to positively impact public relations workflows, sparking creativity and supporting content development. With responsible implementation, AI can elevate strategic communications to new heights – helping workers deliver better results in less time. As such, there is little need to fear the possibility of job loss. Instead, PR and comms professionals should focus on learning to embrace this technology properly. For leaders in the public relations space, it is crucial to think through the ethics and governance of AI in order to ensure its responsible and strategic implementation. With sound oversight and integration, AI stands to significantly benefit communications outcomes rather than negatively disrupt existing jobs.