
Why agentic AI needs guardrails

Ankita Upadhyay of Thomson Reuters says agentic AI can offer benefits, but only if processes are in place to prevent risk

Ankita Upadhyay, Thomson Reuters. (Courtesy Vector Institute Facebook)

The era of agentic AI is here. Strong guardrails are needed to keep it from taking actions that can put organizations at risk.

That was the message Ankita Upadhyay, senior director of AI enablement at Thomson Reuters, delivered to attendees at the Vector Institute’s Remarkable conference in Toronto on Feb. 19.

Upadhyay reminded attendees how quickly AI research and development has moved toward autonomous AI systems. In 2022 and 2023, she said, everyone was amazed that AI could write poems, answer questions and summarize documents.

At that early stage, users had “read‑only access” to models, with no ability to modify the underlying systems or feed proprietary data into them. By 2025, LLM-driven AI systems could not only read and summarize documents, but also view the entirety of an organization’s data.

By integrating these systems with SharePoint, Google Drive and Google Docs, along with retrieval‑augmented generation (RAG), organizations were able to roll out chatbots that could do more than answer a simple question, she said.
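The RAG pattern described here can be sketched minimally. In this illustration, a keyword-overlap score stands in for the vector search a real deployment would use, and the document store, function names and sample policies are all invented for the example:

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant internal documents for a query, then assemble them into the
# context a language model would receive. The keyword scoring below is a
# stand-in for real embedding-based search; nothing here is a real API.
def retrieve(query, documents, top_k=2):
    # Score documents by word overlap with the query (illustrative only).
    terms = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:top_k]

def build_prompt(query, documents):
    # Feed the organization's own data to the model as grounding context.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Travel policy: expenses over $500 need director approval.",
    "Holiday schedule for 2025.",
]
print(build_prompt("what is the travel expense approval limit", docs))
```

In a production chatbot, the retrieval step would query an indexed store of the organization's SharePoint or Drive content rather than an in-memory list.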

“You were now feeding your own organization’s data to these LLMs to get specific results pertaining to your organization,” she explained.

Now, these systems have “read, write and view access” to more documents, and they can produce results and carry out tasks with more autonomy than ever before.

“As soon as you think about someone having write access, that means they can do a lot of things,” she warned. “This is a very interesting era where people can be motivated to do a lot of things using agents, but the most important thing is to think about where to stop and where we cannot use AI to make those decisions. It becomes really important how we put guardrails around it.”

When agents act on their own

To illustrate for attendees what is at stake when agentic AI systems operate without guardrails, Upadhyay pointed to agents deployed in a law firm. These agents could operate so autonomously that they could “do discoveries and also file motions without letting the lawyers know,” she said.

In other businesses, such agents could take actions that affect billing or how computing and cloud resources are allocated, or make business decisions without anyone being aware of the actions being taken. It could be something as mundane as a financial agent approving travel expenses without grasping a company’s budget constraints.

“If the agents do not follow the correct procedures… that is a huge liability,” she said.

These examples underscore what she called a “governance gap” between secure data acquisition and the unpredictable ways autonomous systems may use that data. Even when organizations rely on proprietary datasets, she argued, “how you’re using that data to take decisions – that’s where the gap is.”

Putting people back in the loop

The way to avoid such problems is to put people back in charge of how such agents are developed and what they are allowed to do, and, just as important, to have people review what the agents recommend or intend to do.

She explained this by showing a set of images of herself in a kitchen. The first was of her as a sous-chef, chopping vegetables, stirring a pot and doing other tasks to get food prepared. Then she had an image of herself as the executive chef checking the final dish.

She argued that AI-powered agents can be used for “redundant tasks,” freeing staff to focus on more critical work and on the outcomes to be achieved. In a kitchen, the redundant tasks would be chopping vegetables or stirring a pot of soup. The more important task is to have someone at the end review the result: the final plated meal and how each component tastes.

“Use agents for redundant tasks,” she said. “You don’t need to put guardrails on the sub‑task, but you need to put guardrails around what your end result is.”
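One hedged way to read this advice in code: the agent's sub-tasks run unattended, while the guardrail is a human approval gate on the final output. Everything below, from the class names to the callback, is illustrative, not an implementation from the talk:

```python
# Sketch of "guardrails on the end result, not the sub-task": sub-tasks run
# freely, but nothing is released until a person signs off. The `approve`
# callback simulates the human reviewer; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class AgentResult:
    task: str
    output: str
    approved: bool = False

def run_subtasks(tasks):
    # "Redundant" sub-tasks execute without per-step guardrails.
    return [AgentResult(task=t, output=f"done: {t}") for t in tasks]

def human_review(results, approve):
    # The guardrail sits at the end: only results a person approves
    # are released for any downstream action.
    for r in results:
        r.approved = approve(r)
    return [r for r in results if r.approved]

results = run_subtasks(["chop vegetables", "stir soup"])
released = human_review(results, approve=lambda r: "soup" in r.task)
print([r.task for r in released])  # only the approved result is released
```

In practice the `approve` callback would be a review queue surfaced to a person, not a lambda, but the shape of the control flow is the same.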

She even suggested that, as an extra level of oversight to prevent agents from making mistakes, companies put in place asynchronous monitoring agents that perform real-time data-quality validation and automated audits, checking those steps against corporate policies.

Upadhyay said that, going forward, as more autonomous agents are adopted by corporations, developers should follow several steps to make sure agents don’t go off the rails: sit down with clients to map out what the agents are to accomplish, define policies early, identify edge cases where problems can arise, and add guardrails well before any code is written.

“Don’t think about this toward the end,” she cautioned. “Organizations have been into situations where they put this toward the end and there have been repercussions.”
