Fintech.ca

Inside TD’s Responsible AI Strategy

March 31, 2026 by Robert Lewis

As artificial intelligence reshapes the financial services industry, the question of how large institutions govern and deploy the technology responsibly has never been more pressing. For TD, that question is not an afterthought but a design principle, one that runs from front-line branch staff all the way to the executive suite.

TD has built its approach around a Trustworthy AI framework anchored in pillars that include fairness, explainability, privacy and accountability. The bank has invested in training partnerships with organizations like the Vector Institute and Columbia Business School, and is now turning its attention to the next frontier: agentic AI.

We spoke with Jesse Cresswell, a researcher leading responsible AI development at Layer 6, TD’s machine learning research arm, about how the bank tailors responsible AI education across its workforce, what it takes to keep a deployed model accurate and fair over time, and why the bank sees 2026 as a pivotal year for AI agents in back-office workflows.

How does TD define its overall philosophy on responsible AI?

JC: Responsible AI is a core part of how we manage the risks of AI across the Bank. We know that our relationship with our colleagues and clients is based on trust. That’s why we have developed a strong governance framework around Trustworthy AI so that we can explore AI in a way that allows us to maintain that trust — and maybe even improve it. 

AI is a tool — like a hammer. Ultimately, humans are the ones who are responsible for developing it, deciding how it gets used and deploying it. We’re the ones who have to earn the trust of our communities, not the technology itself. We’re doing this by taking ownership and responsibility for the guardrails we’ve set for our models and how our models perform within them.

Can you explain how TD uses this philosophy for responsible AI when it comes to workforce training? In what ways does TD’s approach to AI education differ for its general workforce versus the specialized teams working directly on AI development?

JC: We think of responsible AI as being a core part of our risk management practice across the Bank and so we tailor our training to all the different levels of colleagues who are interacting with the technology.

Before our front-line colleagues start working with new AI tools like our generative AI virtual assistants in our contact centres, our Canadian branches, TD Securities or TD Wealth, they’re trained on how to safely and efficiently use them. My Trustworthy AI team runs training sessions for our model developers to teach them how to integrate our best practices, governance framework and principles into their work. 

At the executive level, our leaders regularly take part in responsible AI training with external organizations in our AI ecosystem like the Columbia Business School. Last year, 475 TD executives attended a two-day training session run by Columbia that focused on teaching them how they could help responsibly deploy AI in their businesses.

For the technical teams at Layer 6, what does responsible AI training look like today compared to five years ago, specifically regarding data fairness and privacy?

JC: Our responsible AI training dates back to when we first started introducing predictive AI models about eight years ago. But since then, the AI landscape has evolved from focusing on predictive models to generative AI — and our training has adapted with it. 

When predictive models dominated the space, the challenges were different. Our training and guidelines focused on fairness and privacy. We set guidelines and best practices that taught our developers how to ensure our predictive models reached fair decisions and leveraged client data in a safe way. With generative AI models, our guidelines still focus on fairness and privacy, among other factors, but the considerations are different.

If we’re building a generative AI virtual assistant, for example, we need to ensure it gives equally complete and accurate answers in English and French. On the privacy side, the considerations are different as well. Our generative AI models leverage proprietary data about our policies and procedures. That means our developers have to be aware of how controlled our systems are, who has access and what data we’re allowing the generative AI model to look at.
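The access-control consideration can be illustrated with a minimal sketch. All names here are hypothetical, since TD's internal systems are not public: the idea is simply that a retrieval layer filters which proprietary documents a given assistant deployment may see before the model ever reads them.

```python
# Minimal sketch of allow-list gating for a generative assistant's
# document retrieval. All names are hypothetical illustrations,
# not TD's actual systems.

ALLOWED_SOURCES = {
    "contact_centre_assistant": {"public_policies", "procedure_manual"},
    "wealth_assistant": {"public_policies", "wealth_product_sheets"},
}

def can_retrieve(assistant, source):
    """Return True only if this deployment is allowed to read the source."""
    return source in ALLOWED_SOURCES.get(assistant, set())

def filter_documents(assistant, docs):
    """Filter candidate documents before the model sees them."""
    return [d for d in docs if can_retrieve(assistant, d["source"])]
```

Gating at the retrieval layer, rather than relying on the model to refuse, keeps the privacy control deterministic and auditable.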

As the AI landscape continues to evolve, we’ll continue to evolve our responsible AI training, while building new guardrails on models and best practices for our developers.

How have partnerships with organizations like the Vector Institute and Columbia Business School informed the guardrails being built into TD’s systems today?

JC: Our relationships with the Vector Institute and Columbia Business School have both influenced the implementation of our Trustworthy AI guardrails. 

TD is a founding sponsor of the Vector Institute and our relationship with them has, for years, helped us understand the ‘how’ of AI. Our co-research projects often focus on better understanding how generative AI systems are working and evolving so that we can develop the techniques to build the right guardrails. Our colleagues also regularly attend Vector workshops and training sessions focused on responsible AI. In 2025, over 500 TD colleagues attended these workshops, training sessions, webinars and bootcamps. 

The training sessions we’ve developed with Columbia have a different purpose but still influence our guardrails in important ways. Those sessions have been focused on responsible AI training for executives to communicate the importance of guardrails. It’s important for executives working with AI to understand the business impact of responsible AI. When they do, they can help support the development and implementation of AI guardrails in their businesses.

TD’s Trustworthy AI framework is built on pillars like “explainability” and “fairness.” How were these principles built into a model like AI Prism, and how do you ensure the model remains accurate and safe over time?

JC: Before our models ever come into contact with users, they have to clear a series of checks and balances established by the Trustworthy AI team, and by independent model validators internally. We evaluate models for accuracy, fairness, explainability, privacy and accountability. These aspects can’t be improved in a vacuum: attempting to improve one can unintentionally harm another. That’s why our team analyzes them holistically to better assess and improve trust in them.

Once our models like TD AI Prism have been deployed, we continue to regularly monitor their performance to ensure they don’t stray from these values.
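The "checks and balances" described above can be sketched as a release gate that evaluates all pillars together and blocks deployment if any one falls short, which is how improving one metric at another's expense gets caught. This is a hypothetical illustration; the bank's actual validation criteria and thresholds are internal.

```python
# Hypothetical sketch of a multi-pillar release gate. Pillar scores
# and thresholds are illustrative; TD's real validation is internal.

PILLARS = ("accuracy", "fairness", "explainability", "privacy", "accountability")

def release_gate(scores, thresholds):
    """A model passes only if every pillar clears its threshold,
    so the pillars are assessed together rather than in isolation.
    Returns (passed, list_of_failing_pillars)."""
    failures = [p for p in PILLARS if scores.get(p, 0.0) < thresholds[p]]
    return (len(failures) == 0, failures)

thresholds = {p: 0.8 for p in PILLARS}
scores = {"accuracy": 0.95, "fairness": 0.75, "explainability": 0.9,
          "privacy": 0.9, "accountability": 0.85}
passed, failed = release_gate(scores, thresholds)
# Here the model would be blocked: fairness fell below threshold even
# though accuracy is high.
```

The same check can be re-run on production data after deployment, which is one simple way to monitor that a live model does not stray from these values over time.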

The bank’s leadership has called 2026 “the year of agentic AI.” What is the next big goal for the Trustworthy AI team in terms of integrating agentic AI workflows across the bank?

JC: We’re approaching agentic AI in a different way. The technology is still in its early days, and we’re focusing on using it to help make our back-office processes more efficient. We’re not giving our AI agents the freedom to do whatever they want. Our AI agents function like a train. We’re building the tracks by instructing our agents on what they should be doing at every step of a workflow. A train can only follow its tracks, and similarly, our AI agents can only complete the tasks we’ve assigned them in the way we’ve instructed them.

As we start integrating the technology into our workflows, we’re beginning with lower-risk applications and maintaining our human-in-the-loop strategy. We think of AI agents as assistants that are going to help our colleagues complete their tasks more efficiently. When our AI agents complete their tasks, our colleagues are reviewing and verifying the accuracy of their work before it’s used. They act as another guardrail in our Trustworthy AI framework.
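The train-on-tracks approach and the human-in-the-loop guardrail can be sketched together: the agent executes a fixed, ordered list of predefined steps, and its output is held for a colleague's review before anything is used. Every name below is a hypothetical illustration of a back-office task, not a description of TD's actual systems.

```python
# Hypothetical sketch of the "train on tracks" idea: the agent may only
# execute a fixed, ordered list of predefined steps, and the result is
# flagged for human review before it is used. All names are illustrative.

def extract(state):
    # Step 1: pull the field the workflow needs from the input document.
    state["amount"] = state["doc"].get("amount")
    return state

def validate(state):
    # Step 2: check the extracted field before anything is drafted.
    state["valid"] = state["amount"] is not None
    return state

def draft(state):
    # Step 3: draft output only if validation passed.
    state["entry"] = f"ledger entry for {state['amount']}" if state["valid"] else None
    return state

# The "tracks": a closed, ordered list of steps. The agent cannot add,
# skip, or reorder steps.
WORKFLOW = [extract, validate, draft]

def run_workflow(doc):
    state = {"doc": doc}
    for step in WORKFLOW:
        state = step(state)
    # Human-in-the-loop guardrail: nothing is used until a colleague
    # reviews and verifies the agent's work.
    state["status"] = "pending_human_review"
    return state
```

Keeping the step list closed and the review status mandatory is what makes this lower-risk: the agent's autonomy is limited to executing known steps, never choosing them.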

