How Marketers Can Claim Their Spot At The AI Discussion Table

By: Peter Prodromou

As AI becomes increasingly embedded in business, it’s worth considering the role senior marketing professionals can play in shaping how it’s implemented from the vantage points of ethics, transparency and corporate strategy.

It has long been my belief that, as marketing professionals, we should have a seat at the C-level table around issues like AI, which not only drive the direction of an organization but also carry myriad implications for issues ranging from ethics to social good to employment practices.

One would assume, given the velocity of adoption, that most organizations have a rapidly evolving model in mind, with at least partially formed strategies and points of view on these issues. Yet many, particularly larger enterprises, are only now scratching the surface.

However, with the encroaching reality of AI, it is essential that leaders work swiftly to develop strategies that safeguard against unintended consequences. A central element of this is including marketing professionals in strategic planning, given the roles we play in shaping public image and maintaining a brand.

Executive counsel

Our primary role needs to be informing executive leadership of the conversations among key publics, including employees, customers, policy bodies and influencers, that are shaping AI implementation. We should be providing a dossier of data that helps leadership understand the implications of AI for the business and the constituents it serves, enabling better decisions about adoption and rollout.

Content in the dossier should include breaking news, examples of AI acting in unanticipated ways and cases where it has served customers well or failed. When a leading AI company like OpenAI faces the prospect of losing 500 of its 700 employees in a high-profile HR event, CEOs whose companies are contemplating or already using its solutions should feel uneasy.

Understanding the implications on their own business use cases is essential.

Code of ethics

We should be at the forefront of understanding the ethical dilemmas presented by AI, from preservation of privacy to potential headcount reductions. The analysis needs to be thorough, all-encompassing and – in concert with attorneys and other advisors – translated into something simple to understand and communicate.

A code of ethics might be as simple as reaffirming commitments to customer and employee privacy. Or it may be a more substantial governance document specifying when and where an organization will use AI, where logical cutoff points exist and what oversight is in place to ensure compliance.

AI is rooted in accessing massive amounts of data. As such, it is essential that any code of ethics focus on ensuring the integrity and security of this information.

These statements need to mirror the desired perception of the brand and its values. Moreover, the process needs to remain dynamic to address rapidly changing circumstances.

Change management

We need to be vocal in establishing change management programs in partnership with leadership, HR, legal and other key departments. While we may not be the tip of the spear, we are the engine for translation and distribution and must function as such.

Issues management

We should be prepared for issues and crises arising from unanticipated occurrences attributable to glitches or related human error. Again, it is essential that we tackle these through the lens of ethical behavior, with a readiness to disclose and to address the implications.

It is also essential that we have a role in educating key publics about what is being done to offset and avoid these situations in the future. As such, we need to be working hand-in-glove, from the outset, with IT, legal and the departments using AI.

We should be developing scenarios, crisis mitigation strategies and action plans to ensure containment and the utmost safety in the event of an unanticipated occurrence. There are documented instances of AI fabricating information, making unprompted value judgments and more.

We have to be prepared for something like this happening at scale – for example, relying on AI-enabled search for research and inadvertently citing bogus data in a major presentation or product launch. Common sense suggests that human review will catch such errors. But if the growing belief is that AI is infallible, these safeguards will fall away. An organization needs to be prepared for all possibilities.

Adaptive change

Finally, we need to be participants in shaping adaptive change – for example, addressing the effects of AI on the workforce, people management and related concerns such as headcount reduction. People remain the most precious asset in organizations, and the potential for displacing them with technology needs to be handled with the utmost care.

In sum, ours is a mission-critical role during this dynamic period of change. We need to embrace it and function as key members of leadership.

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

