2026 must be the year of transparency for AI in PR

In November, PRSA's Board of Ethics and Professional Standards released a comprehensive update to its AI ethics guidelines. If you missed it, I recommend downloading it immediately. This isn't just another lofty statement about the promise and perils of AI in communications. It's practical, thoughtful, and deeply actionable.
Most importantly: Transparency now has its own section. In that section, I believe, lies the future of our industry. 

No trust without transparency

As AI tools develop, trust has become the most important battleground for businesses, including PR firms. 
Clients, reporters, and consumers all understand what democratized access to LLMs and other forms of generative AI has unleashed on society. Not all of it is useful or pretty. They’ve learned to evaluate what their eyes see and what they read with an increasingly skeptical lens. PR practitioners have always had to fight harder and yell louder than most professions to demonstrate their value and credibility within the media ecosystem, and AI has raised the bar on us yet again.
The way PR professionals meet that new standard is not to avoid AI, but rather to be radically transparent about how and why they use it. Success in PR and media relations hinges on trust. And there is no trust without transparency. 
As the founder of a communications firm working with technology and media brands, it’s been stunning to watch just how quickly attitudes toward and adoption of AI tools have shifted, both internally and externally. As recently as two years ago, many individuals (even within our own industry) were questioning whether AI would destroy the modern PR agency. To me, the answer to that question was always clear: It will if we don’t lean into it with intention. 
And we have. We’ve embraced AI tools, experimented, and built processes and protocols. We’ve pressure-tested capabilities and reinvented the human role across every stage of communications, from ideation to planning to execution. For us, the gap isn’t figuring out how to use AI as much as it is about communicating the value it brings to our clients. That starts with transparency. 

The playing field is leveling

The lack of transparency with which many PR organizations have embraced AI can be attributed to two realities of the early generative AI days:
1) At first, we didn’t know what we were doing. It’s hard to be transparent with others when you’re not even clear in your own mind. Early on, we were experimenting in real time. We tested bots, compared outputs, and tried to separate real value from noise. We also made plenty of rookie mistakes. I remember us holding trainings on crafting paragraph-long prompts that took longer to write than the tasks they were meant to speed up!
That was the phase we had to go through. We kept things mostly internal because we did not want to overclaim a capability we could not repeat reliably. We focused on building standards, judgment, and a workflow we could trust. It took time, experimentation, and ambiguity tolerance. 
2) Clients and journalists remained vocal in their apprehension about AI. Some companies and publishing organizations had knee-jerk reactions to the advent of gen AI tools. Others mandated that everyone in their orbit begin using these tools immediately. PR firms were stuck in the middle, initially having to tailor their use of AI tools and disclosures to the personalities of the people they answered to. Transparency was disincentivized. 
Things have changed. Today, PR firms have a much better understanding of what AI tools can reliably handle within their daily workflows and where human oversight and intervention are most crucial. More importantly, most clients and publications have come to understand AI as a force that requires realignment, not pushback. They expect PR firms to use these tools to do their jobs better, and they’ve adapted accordingly.
Take, for example, the process of pitching a byline for publication. When ChatGPT launched in 2022, many publications raced to implement policies stating that no generative AI tools could be used in the creation, writing, or editing of an author’s work. These policies, however, were a flash in the pan as reporters and editors adapted to technology that, at first blush, seemed an existential threat.  
Now, many media outlets have settled around a more reasoned approach. Prohibitions have given way to disclosure requests. Harvard Business Review, for example, now includes a field in its submissions form that asks: “Did you use Generative AI tools to aid in your submission or article proposal? If yes, how?”

Putting the transparency playbook into action

At my firm, we’ve embraced AI as the accelerant that it is. We’ve done so with caution and intention. We strive for adoption with integrity, and PRSA’s guidelines have played a critical role in this process. 
One of the most prominent manifestations of our push for transparency in AI use is the new section on the Broadsheet website dedicated to our philosophy and application of AI across the full lifecycle of integrated comms: in discovery, strategy development, testing, launch, measurement, and more. At the same time, we’ve actively worked with many of our clients to develop and implement their own AI ethics and use policies. 
It’s important to acknowledge that implementing AI use and transparency frameworks is a journey, not a destination. Even as we step back to appreciate the progress we’ve made, we know new developments in areas like agentic AI will demand that we never rest on our laurels. These systems will introduce new levels of autonomy to tasks and decision-making, bringing new opportunities and challenges to our industry and clients.
PRSA’s commitment to keeping pace with the evolving role of AI in our industry and to grounding its guidance in ethical questions is laudable and invaluable. As practitioners, we must use the tools being put at our disposal. As innovators, we should also challenge ourselves to go faster and further and continue experimenting. The only thing that can get in the way is a lack of transparency. Radical transparency is how we maintain trust through periods of disruption and change. 
Author’s note: No AI tools were used in the development or writing of this article. And yes, that meant it took me a lot longer. 