Stamford, Connecticut – October 29, 2024 – Gartner has released research urging corporate leaders to act on the escalating reputational risks posed by generative AI-enabled technologies. The research also points to an alarming reality: 80% of consumers cannot tell whether the online content they encounter is authentic or fake.
“From easily disseminated deepfakes spreading disinformation to impersonation attacks and unintentional employee misuse, the implementation of GenAI is riddled with risks,” warns Amber Boyes, Director Analyst in the Gartner for Communications Leaders Practice.
The study surveyed 2,001 consumers and recommends five strategies that chief communications officers can deploy to safeguard their companies: strengthening owned media credibility, enhancing social media monitoring, adopting transparent AI use policies, scenario planning for potential attacks, and empowering employees through controlled AI experimentation.
Consumer demand for transparency is also high: 75% of respondents expect brands to disclose whether GenAI was used to produce their content.
“Communications leaders sit at the front lines of safeguarding and enhancing an organization’s reputation,” adds Boyes, emphasizing the critical role of maintaining trust in an AI-driven landscape.
The findings come as more organizations confront the challenge of benefiting from AI while curbing the risks it poses to their reputation. According to Gartner’s study, organizations can strike this balance by reinforcing internal policies while keeping external communication open and transparent.