An interview with Anushree Saha, General Counsel and Company Secretary at Qure.ai in India.
How will AI reshape the role of the General Counsel (GC) in India over the next few years? Which areas of the in-house legal function are most immediately being transformed by AI and where are the biggest opportunities?
The role of the GC in India will evolve from strictly that of a legal advisor to that of a front-line strategic partner, especially in highly regulated sectors. While AI can redefine the future of work, any use of AI needs clear oversight through defined AI-usage policies, SOPs and hands-on training to build future-ready in-house teams. The risks and limitations of AI should also be clearly communicated and documented. Beyond this, we can already see GCs shifting from a reactive role to a more proactive and predictive one, slowly bridging the gap between regulators, technology teams and business leaders. Rather than waiting for regulatory change to take effect, GCs can now anticipate it and help the organisation stay a step ahead by enforcing guardrails.
I feel that GCs also play an important role in setting standards for responsible AI adoption and governance. With the proliferation of AI in any company, it would be impossible for the legal function to work in a silo since successful AI adoption would rely heavily on cross-functional collaboration. Ultimately, AI will transform the GC’s role into an enabler who drives faster innovation responsibly, enforces trust in AI-enabled operations, and builds future-ready legal and compliance capabilities within the company.
AI is transforming how in-house legal teams operate by helping companies do more work in-house, optimise spend and focus on strategic thinking and nuanced analysis. It is already streamlining core workflows such as contract review, due diligence, research and query handling, allowing in-house teams to prioritise high-judgment work and quality over quantity. But AI’s real promise lies beyond efficiency: automating end-to-end contract reviews and administrative tasks with minimal human oversight, helping predict litigation risks, tracking regulatory shifts and identifying patterns in negotiation or compliance behaviour. This will transform the legal team from a cost centre into a data-driven partner that helps the business make smarter, faster decisions.
How do you foster innovation and agility within your legal team?
By removing noise and giving clarity. Clear priorities, context about the business, and direct communication go a long way. Innovation is less about slogans and more about judgment: knowing when to simplify, when to automate, and when to step in early. I encourage the team to focus on what truly matters and avoid chasing perfection where speed and pragmatism are more valuable. When people understand the goal and have room to think, agility comes naturally.
What are the key ethical and legal risks of using AI in legal work and how can GCs balance innovation with accountability?
Confidentiality and data privacy breaches, bias in outputs, inaccurate advice or AI hallucinations, ambiguity over ownership leading to IP risks, and unclear liability when AI influences legal decisions are some of the key ethical and legal risks of using AI. For GCs, the challenge is to harness AI’s efficiency without compromising ethics or increasing regulatory exposure. Blindly relying on AI can result in errors and biases, which could create significant liabilities for businesses. Further, unlike high-risk healthcare AI tools, which must meet certain regulatory standards before commercialisation, legal-tech tools have no comparable recognised standards, making reliance on their outputs risky.
Since the legal profession has always been about accountability, governance must nurture innovation. To balance innovation with accountability, GCs should adopt a responsible-by-design approach: setting clear internal AI-use policies, mandating human oversight on legal judgment calls, and embedding audit trails and accuracy checks into workflows, as also highlighted in Deloitte’s latest report on Responsible AI(1). To effectively harness AI while staying mindful of its inherent risks and limitations, GCs must proactively build internal guardrails and best practices that make AI use both responsible and effective.
Ultimately, responsible use of AI comes down to transparency and documentation, so that innovation does not come at the cost of accountability.
How can legal leaders help their teams – and the wider business – prepare for this change, balancing human judgment with machine efficiency?
I genuinely believe that AI is an assistive tool and not a human replacement. Preparing a legal team for AI isn’t about pushing lawyers out of the picture but rather empowering them to use AI efficiently – blending machine speed with human judgment. The goal should be to augment legal intelligence with machine intelligence, without having to compromise on ethics and trust.
The key to preparing for this change is training and awareness. Every legal professional today should have basic AI literacy, which means understanding how AI tools work, what data they rely on and where the risks lie. We frequently evaluate the legal AI tools in the market to identify those that truly enhance efficiency, implementing them only after careful evaluation and comparison. While smaller in-house teams may not have the luxury of a second pair of eyes for review, AI can act as a trusted reviewer and support quality control. Legal can serve as both an accelerator and a guardrail, enabling innovation responsibly.
Getting AI right requires constant learning. Whether it’s a pilot, a sandbox project or a failed experiment, each is progress, because every lesson learnt makes an in-house team more adaptive and agile. Another idea is to designate ‘AI Champions’ across functions to promote safe and responsible AI use throughout the company. Their role would be to spread awareness of practices such as anonymisation and confidentiality, and to promote and oversee responsible usage to ensure data security. This is a great way to create ownership and structure without slowing down innovation.
Do you think India’s legal and regulatory environment is evolving fast enough to keep pace with the rise of AI?
That’s a very relevant question, especially for a country like India, which sits at the intersection of rapid technological adoption and evolving governance frameworks. While the whole world is still playing catch-up with the speed and complexity of AI advancements, India has made significant progress in the last few years. For example, on November 5, 2025, the Ministry of Electronics and Information Technology (MeitY), under the IndiaAI Mission, released the India AI Governance Guidelines(2), a comprehensive framework to steer AI development, deployment and governance across sectors. NITI Aayog’s National Strategy for AI(3), the DPDP Act(4), and sectoral guidelines by bodies like MeitY(5) and the RBI(6) are all important foundations that reflect intent and direction. However, compared to more specific regulations such as the EU AI Act, we are still some distance from a comprehensive, AI-specific law that addresses issues like algorithmic accountability, explainability or cross-sectoral risk management.
The judiciary, too, has taken a very cautious and case-by-case approach. For instance, the Kerala High Court’s recent guidance on responsible use of AI tools(7) shows that there is increasing awareness around AI in the legal sphere and a need for mature enforcement of responsible use. The truth is that technology will always move faster than regulation, so the onus is on GCs to self-govern responsibly and take a view that “regulating while waiting for regulation” is the only sustainable approach. This means creating policies for use of generative AI tools, internal and ethical frameworks around data governance, risk controls and human oversight even before the law mandates it.
India is definitely moving in the right direction. The foundation has been laid and what is encouraging is that policymakers are showing genuine intent to move at a fast pace. With the right collaboration between industry and regulators, India can very much set the benchmark for responsible and inclusive AI governance.
As a GC, how are you personally approaching AI use in your team and are there any tools or practices you’ve found valuable (or concerning)?
Our approach to AI adoption within the legal function has been consciously balanced: ambitious, but responsible. AI should be viewed as an assistive capability, not an autonomous one. Technology can take over the “mundane tasks”, such as performing first-level contract reviews, conducting due diligence, addressing routine business queries and undertaking primary legal research, so that lawyers can focus on analysis, negotiation and value creation. The idea is not to reduce headcount but to deepen the team’s analytical capacity and accelerate value delivery.
We have invested effort in creating internal knowledge bases, such as Legal FAQs, that serve as self-service knowledge hubs for all the business teams. By allowing cross-functional teams to be self-sufficient, this model frees up bandwidth and improves response speed. There is also a lot of promise in AI integration within contract lifecycle management (CLM) tools, which can make the contracting process smarter, quicker and cleaner. Responsible AI adoption is less about the tools and more about the mindset. By running pilots, documenting learnings and building policies that align with organisational data-protection standards, AI ultimately becomes part of the team’s DNA: a reliable co-pilot that helps us work faster and smarter, while maintaining the rigour, judgment and ethical standards that remain uniquely human.
(1)https://www.deloitte.com/content/dam/assets-zone3/us/en/docs/services/consulting/2024/us-ai-institute-trustworthy-ai-in-practice.pdf
(2)https://www.pib.gov.in/PressReleasePage.aspx?PRID=2186639
(3)https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf
(4)https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
(5)https://www.meity.gov.in/static/uploads/2024/02/11ab.pdf
(6)https://rbidocs.rbi.org.in/rdocs/PublicationReport/Pdfs/FREEAIR130820250A24FF2D4578453F824C72ED9F5D5851.PDF
(7)https://images.assettype.com/theleaflet/2025-07-22/mt4bw6n7/Kerala_HC_AI_Guidelines.pdf