Fintech.ca

 
 
Formic AI’s CEO on Launching in Canada and Making AI Safe for High-Stakes Environments

November 24, 2025 by Robert Lewis

As more financial institutions, insurers and public sector organizations look to integrate artificial intelligence into their day-to-day operations, the pressure is on to make sure these tools are not only smart, but safe.

Formic AI, a Canadian company that just launched its enterprise platform publicly, believes that for AI to work in high-stakes environments, it has to be explainable and reliable, especially in sectors where decisions need to be justified to clients, regulators, or the public.

We spoke with CEO Daniel Escott about what “trustworthy AI” really means in this context, why Canada is a critical market to launch in, and how legal risks around black-box models are starting to shape enterprise AI decisions.

Daniel, Formic AI launched in Canada with a focus on “trustworthy enterprise AI.” What does that actually mean for regulated sectors like finance, insurance and public infrastructure?

DE: When we talk about trustworthy AI, we mean systems that are not only technically accurate but also aligned with professional, legal, and ethical standards. In Canada, especially in industries like law, finance, engineering, and the public sector, decisions need to be documented and explainable. If an AI tool gives a wrong answer or makes a risky recommendation, it’s not enough to say “well, the system made a mistake.” Someone has to be accountable. We’ve seen recent lawsuits, like the New York suit against Cohere, where certain kinds of hallucinations may even constitute their own form of liability for users or developers.

So, for us, trustworthy AI means that every response you get from our system is linked to the existing file where the model found the relevant information. There’s no guesswork, no hallucinations, and no opaque logic that can’t be explained. We want people to be able to trace the answer back to the original material and say, “this is where that insight came from.” That’s the standard that regulated sectors are asking for now, and honestly, it’s overdue.

Trust is a big topic in this space. What steps have you taken to make sure Formic’s platform is something institutions can really rely on?

DE: We started by looking at where the trust breaks down in other systems. A lot of companies are adopting these giant language models that are incredibly powerful, but you don’t really know how they work under the hood. You don’t know what data they were trained on or why they gave a particular answer. That’s a problem if you’re a bank or a government department.

So with Formic AI, we’ve made sure that every answer includes a non-generative citation to the actual source material. You can easily go back and check the source without having to reverse-engineer what the AI might have considered; you can verify it directly. We also built our technology to run independently in Canada, or even internally on a local network, without relying on foreign infrastructure or services. For sectors that care deeply about Canadian compliance and governance, that control matters. We’re giving organizations something they can actually stand behind.

Can you walk us through a few early use cases where Formic AI is making a difference?

DE: We’ve seen a lot of interest in four key areas so far.

The first is high-stakes knowledge retrieval and verification. Organizations that work with complex and authoritative data sets often need absolute clarity on where their information is coming from. Whether that’s legal content, financial reports, or internal policy documents, they cannot risk relying on generative AI that might produce an inaccurate summary. The Formic Engine addresses this by using sentence-level citations that link every response directly to the original source document. This makes it especially useful in cases where accuracy and verifiability are non-negotiable.

Another common use case is secure analysis of sensitive proprietary data. Many enterprises are restricted by data governance rules that prevent them from sending confidential material to public cloud-based models. Because our system can be deployed either in a virtual private cloud or on a local server that sits behind a firewall, organizations can safely use Formic to synthesize and navigate large volumes of sensitive internal data without it ever leaving their environment.

We’re also seeing adoption for defensible decision support. In professional settings, particularly those governed by regulation, it is not enough to simply receive a result from an AI system. People need to be able to explain how that result was reached. Our platform supports this by providing source-linked responses that let the user trace the logic behind every answer. That transparency is critical for reviews, audits, and decisions that must be justified to stakeholders or regulators.

The last area is sustainable and cost-efficient data organization. Traditional language models are very resource intensive. They require significant computing power and energy, especially for large-scale deployments. Our architecture is designed to be far more efficient. For compute-heavy retrieval tasks, it has been benchmarked at approximately one thousand times more efficient than standard retrieval-augmented generation (RAG) systems. That allows teams to work with large internal databases in a much more practical and environmentally conscious way.

Why did you choose Canada as your launch market?

DE: There are a lot of reasons. First, Canada has strong regulatory expectations across sectors like law, finance, and government, and our team has industry-leading experience in AI regulation in these sectors, so launching here gave us a chance to prove that the platform could meet the highest standards right out of the gate.

But more than that, we saw a gap. There’s been a lot of talk about AI sovereignty in Canada, and while we’ve made progress on key issues like infrastructure and cloud strategy, we’re still importing a lot of our intelligence. We’re either running foreign models trained on Canadian data or, in some cases, Canadian models trained on foreign data, and calling it “local.” That’s just not good enough.

We wanted to build something that doesn’t just live in Canada but also works in a way that’s compatible with how Canadian institutions operate and what Canadian society expects. That means transparency, accountability, and being able to back up every answer with real information.

AI regulation is coming quickly. What are you doing to help clients stay compliant while still innovating?

DE: The big shift we’re seeing is that people are realizing AI is no longer just an IT decision. It’s a legal and reputational issue. If you use a model that was improperly trained on unlicensed content, or that hallucinates a response violating a copyright or trademark, you’re potentially liable. That’s already happening in courtrooms today, and the risks are growing.

So what we’re doing is making sure our clients don’t have to worry about that. Our platform only gives answers it can prove. It cites sources, avoids unknown training data, and gives teams the tools to validate outputs before they act on them. Our approach is that it’s more important to build our technology the right way than to build it the fast way. Human and societal considerations too often fall by the wayside when we talk about AI and other emerging technologies, but we’re finally seeing in the market that these considerations remain important for both governance and purchasing decisions. That approach allows us to innovate confidently without losing sleep over what might go wrong later.

For more information about Formic AI and to request a demonstration, visit formic.ai.
