AI News

Experts Call for AI Regulations to Tackle Cyber Threats at SHIELD-2025

At the SHIELD-2025 conclave in Telangana, cybersecurity experts underscored the need for strict regulations to address AI-driven cyber threats. The two-day event, organized by the Telangana Cyber Security Bureau (TGCSB) and the Society for Cyberabad Security Council (SCSC) in collaboration with The Times of India, featured a panel discussion on AI’s role in cyber threats, deepfake manipulation, and blockchain technology.

Need for Strict AI Regulations

Krishna Sastry, a partner in cybersecurity at Ernst & Young, discussed AI’s role in cybersecurity defense. “AI is already being used for preventative and detective purposes,” he stated, pointing to its effectiveness in thwarting phishing attempts and server attacks. However, he emphasized that automation does not mean AI will replace human oversight in cybersecurity.

Senior advocate Krishna Grandhi advocated for a tiered approach to AI regulation, distinguishing between different risk levels. “It is difficult to regulate AI with a one-size-fits-all approach,” he explained. “High-risk AI systems involved in health and finance would need a different set of regulations, while low-risk AI systems can go as far as self-regulating themselves.”

Concerns about AI-powered disinformation were raised by Sharat Kavi Raj, Inspector General of the Rajasthan police crime records bureau. He warned that AI-generated content has been linked to misinformation campaigns, influencing public perception and even sparking communal tensions. “Fake news is a major concern. All democracies are saying that AI and fake news are being used to influence voters,” he noted.

Other Experts Weigh In

Sunil Bajpai, chief trust officer at Tanla Platforms, called for regulatory measures that acknowledge human tendencies rather than outright prohibiting deception. “Deception should not be completely rejected, it is human nature to deceive, we can only have mechanisms,” he stated, suggesting solutions like watermarking to differentiate between real and AI-generated content.

The discussion also highlighted the significance of data security in AI development. Sastry cautioned that AI’s effectiveness depends on high-quality data. “AI needs clean data, otherwise it’s garbage in, garbage out,” he said, raising concerns about the potential for hackers to manipulate AI systems through data poisoning.

Meanwhile, South Korea’s data protection regulator has flagged concerns about the Chinese AI startup DeepSeek. The watchdog suspects the company’s chatbot may have transmitted user data to ByteDance, adding another layer to the ongoing global scrutiny over AI-driven data security.

Source: https://timesofindia.indiatimes.com/city/hyderabad/smart-regulation-of-ai-to-combat-cyber-threats-crucial-experts/articleshow/118366868.cms

Kritika Mehta
Kritika is a journalist at Times of AI, with over two years of experience specializing in financial and technology reporting. She has a keen eye for uncovering emerging trends and delivering detailed, thought-provoking insights into the tech industry. Kritika crafts compelling stories that engage readers and enhance their understanding of the evolving world of artificial intelligence. Her ability to blend analytical precision with clear communication makes her a trusted voice in technology journalism.
