AI Legal Safeguards: Development Stage Embedding

Feb 18, 2026 19:03
Embed techno-legal safeguards in AI systems during development, says Principal Scientific Adviser. Address bias, ethics and accountability.
Illustration: Dominic Xavier/Rediff.com
New Delhi, Feb 18 (PTI) Techno-legal safeguards should be embedded directly into artificial intelligence (AI) systems at the design and development stages, Principal Scientific Adviser to the Government of India, Ajay Kumar Sood, said on Wednesday.

According to Sood, such embedded compliance mechanisms can help address growing concerns around algorithmic bias, accountability and ethical AI usage across sectors ranging from governance to industry.

At a session of the AI Impact Summit here, Sood was asked whether India should chart its own path and how it is addressing concerns around algorithmic bias and ethical AI use across sectors.

Responding, he underscored that India's recently released AI governance framework lays strong emphasis on building "safe and trusted AI" through a techno-legal approach that integrates regulatory requirements into the design, development and deployment of AI systems.

Sood, who chaired the committee behind the India AI Governance Framework released by the Ministry of Electronics and Information Technology (MeitY) in November 2025, said one of its key verticals or "sutras" focuses on safety and trust.

"We have articulated what safe and trusted AI means. Drawing from our experience with Digital Public Infrastructure (DPI 1.0), we have proposed a techno-legal framework. What this means in simple terms is that you embed legal requirements right into the technology at the stage of development, deployment and use," he said.

He acknowledged that questions may arise on whether embedding legal safeguards could affect model performance, increase latency or slow innovation.

"All those concerns are worth examining. But if you are not careful and are in a rush, legal issues will catch up with you in a few years. If damage is done, the first question will be: who is responsible?" he cautioned.


Sood called for examining how the adoption of techno-legal or other safety frameworks can be incentivised.

"This is not only for government. Even institutions implementing AI processes must ensure systems are safe. We should also examine how to incentivise adoption of techno-legal or other safety frameworks. If properly incentivised across companies and institutions, it can make a significant difference," he said.

Weighing in on potential collaboration with the United States, he identified two cutting-edge domains where joint efforts could yield transformative outcomes: AI integrated with quantum computing, and "physical AI".

"Using quantum computers in AI and in quantum computing itself is still a nascent area. If India and the US come together in these two frontier technologies -- AI and quantum computing -- I believe we can make a difference," he said.

He also flagged "physical AI" including robotics and humanoid systems as an area that deserves greater attention.

"Physical AI is not as prominent as it should be. Robotics and humanoids that operate in environments where we do not want humans to be present can have immense impact," Sood added.
