Podcast

Methodologies for AI Assessments, Reviews and Audits – an NYC Bar Association podcast

FRA's Rim Belaoud joined leading experts to discuss frameworks and regulatory approaches to AI governance.

November 24, 2025

Forensic Risk Alliance (FRA) Manager Rim Belaoud joined a New York City Bar Association (NYCBA) podcast, where she shared her expertise in data analytics and AI governance with legal and compliance professionals navigating the complex landscape of AI assessments, reviews, and audits.

The comprehensive discussion, moderated by the NYCBA’s AI Task Force Co-Chair Jerome Walker, brought together thought leaders including Azish Filabi (American College Center for Ethics in Financial Services), Nikhil Aggarwal (Deloitte Anti-Money Laundering), and Lenka Molins (Oxford Internet Institute) to explore critical methodologies for ensuring AI systems remain safe, fair, and effective.

Key Takeaways for Legal and Compliance Professionals

The podcast (available in full here) is part of the NYC Bar Association's ongoing series on AI and digital technologies. The episode provided essential guidance for professionals grappling with AI governance challenges. Rim’s contributions, alongside insights from fellow experts, offered both theoretical frameworks and practical tools for implementing effective AI oversight.

As organizations continue to deploy AI at scale across critical functions – from credit scoring to fraud detection to compliance monitoring – the methodologies and frameworks discussed in this podcast provide a roadmap for ensuring these powerful technologies serve their intended purposes while protecting stakeholders from unintended consequences.

Key Highlights from Rim Belaoud

Clarifying the AI Assessment Landscape

Rim addressed a key challenge facing organizations: the confusion created by AI's rapid evolution and accompanying terminology. "AI brought a lot of confusion in relation to all the taxonomy," Rim explained, noting that alongside the technology itself came "a wave of new concepts, new frameworks, even new job titles" that can feel overwhelming to organizations trying to ensure compliance.

She distinguished between three key concepts that are often conflated:

  • AI Review: The narrowest scope, focusing on single models through "one-off checkups" examining how algorithms were built, documented, and whether they fit their intended purpose
  • AI Assurance: A broader, ongoing framework encompassing risk assessments, impact assessments, certifications, and audits designed to build trust across an entire organization's AI portfolio
  • AI Audit: The most formal and structured approach, typically independent and benchmarked against standards like ISO, producing documented findings for regulators and stakeholders
"If we make the analogy with finance," Rim noted, "this is your external audit of the financial statement. It's not optional, it's formal, and it should be independent. And that's what gives it weight."

Azish emphasized that terms like "responsible AI" and "trustworthy AI" are widely used but lack consistent definitions, making governance harder. She pointed to NIST's AI Risk Management Framework as a practical starting point for organizations.

Practical Guidance for Overwhelmed Organizations

Lenka highlighted that international audit frameworks vary widely, and most are voluntary with low accountability, underscoring why organizations should not assume compliance just because a vendor claims adherence to ISO or OECD principles.

Understanding that many organizations feel overwhelmed by compliance requirements and budget constraints, Rim offered actionable strategies for launching AI oversight initiatives:

  1. Start with Inventory and Prioritization

"The first step is to identify what is an AI model and list the AI use cases," Rim advised. She emphasized that AI isn't limited to complex neural networks or generative AI tools – simpler statistical models like logistic regression or decision trees also qualify and shouldn't be overlooked.

  2. Risk-Based Approach

Organizations should focus full reviews on high-impact models used for core business decisions or sensitive applications. "Lower risk models can get lighter checks," she explained, allowing for efficient resource allocation.

  3. Leverage Existing Processes

Rather than creating entirely new frameworks, Rim recommended extending existing compliance infrastructure: "Extend GDPR impact assessments to AI, include AI in internal audit processes, and add AI risk to your risk registers."

  4. Standardized Documentation

She advocated for using model cards tailored to each model's risk level, capturing essential information including purpose, data sources, identified risks, performance metrics, and human oversight processes.
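
To make steps 1, 2, and 4 concrete, here is a minimal sketch, in Python, of how an inventory entry, its risk tier, and its model card might fit together. The field names, the three tiers, and the example model are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers: full reviews for high-impact models,
    lighter checks for the rest (step 2)."""
    HIGH = "full independent review"
    MEDIUM = "periodic internal review"
    LOW = "lightweight checklist"

@dataclass
class ModelCard:
    """Standardized documentation, scaled to risk level (step 4)."""
    purpose: str
    data_sources: list[str]
    identified_risks: list[str]
    performance_metrics: dict[str, float]
    human_oversight: str

@dataclass
class InventoryEntry:
    """One row in the AI inventory (step 1). Simple statistical models
    such as logistic regression or decision trees belong here too."""
    name: str
    model_type: str
    business_use: str
    risk_tier: RiskTier
    card: ModelCard

# Hypothetical entry for a credit-scoring model (all names invented)
entry = InventoryEntry(
    name="credit_score_v3",
    model_type="logistic regression",
    business_use="consumer credit decisions",
    risk_tier=RiskTier.HIGH,  # core business decision -> full review
    card=ModelCard(
        purpose="Rank applicants by probability of default",
        data_sources=["credit bureau data", "application data"],
        identified_risks=["proxy discrimination", "data drift"],
        performance_metrics={"AUC": 0.81},
        human_oversight="Adverse decisions reviewed by a credit officer",
    ),
)
print(f"{entry.name}: {entry.risk_tier.value}")
```

Keeping entries like this in an existing risk register, rather than a new system, is one way to act on the "leverage existing processes" advice in step 3.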

Core Values Driving International AI Regulation

Rim identified convergent principles across jurisdictions: "Whether it's the European AI Act, the NIST guidance in the US, the OECD principles or ISO standards, the message is consistent. AI must be trustworthy, responsible, auditable and non-discriminatory."

She emphasized that these aren't merely aspirational goals but practical imperatives: "The end objective is to ensure that AI is safe, fair, and effective... Just as we do it in other areas we're more comfortable with because we have more historical knowledge."

Special Expertise in Financial Crime Prevention

Nikhil reinforced that traditional rule-based monitoring in AML is failing, and that AI models now rely on supervised and unsupervised learning to detect anomalies, which makes explainability and adaptability critical in reviews.

Speaking from experience building risk scoring models for global banks, Rim highlighted a critical ethical challenge in this space: "One ethical dilemma is how to handle geographic risk. On one hand, regulators recognize that certain jurisdictions carry higher money laundering or terrorism financing risks... But on the other hand, using country of origin or nationality might create discriminatory outcomes."

This balance between regulatory compliance and fairness represents a key challenge that must be addressed from the earliest stages of model development. Rim stressed the importance of making models "defensible" – able to justify decisions to both regulators and affected customers.
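
To show what a "defensible" check might look like in practice, here is a minimal, hypothetical sketch of one common fairness diagnostic: comparing favorable-outcome rates across groups under the four-fifths rule. The choice of rule, the 0.8 threshold, and the sample data are illustrative assumptions, not regulatory guidance.

```python
from collections import defaultdict

def flag_rate_by_group(records):
    """Share of customers flagged high-risk, per nationality group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, is_flagged in records:
        totals[group] += 1
        flagged[group] += int(is_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact_check(records, threshold=0.8):
    """Four-fifths rule: a group fails if its rate of favorable
    outcomes (not being flagged) falls below 80% of the best
    group's rate. Illustrative assumption, not regulatory advice."""
    rates = flag_rate_by_group(records)
    favorable = {g: 1 - r for g, r in rates.items()}
    best = max(favorable.values())
    return {g: (f / best) >= threshold for g, f in favorable.items()}

# Hypothetical scored customers: (nationality_group, flagged_high_risk)
sample = ([("A", False)] * 90 + [("A", True)] * 10
          + [("B", False)] * 60 + [("B", True)] * 40)
print(disparate_impact_check(sample))  # {'A': True, 'B': False}
```

A failing group here would not by itself prove discrimination, but documented evidence of this kind is what makes a geographic risk factor justifiable to both regulators and affected customers.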

Five Common Pitfalls to Avoid

Based on her consulting experience helping clients navigate the AI landscape, Rim identified critical mistakes organizations frequently make:

  1. Treating AI like traditional software: "AI isn't deterministic. It learns and drifts and evolves in time. Static reviews won't work." (See the drift sketch after this list.)
  2. Ignoring data quality: "Weak data means weak models. It's garbage in, garbage out. Too often, the focus is on algorithms while data quality and integrity are overlooked."
  3. Reviewing in isolation: "A model can look perfect on paper technically but completely fail in practice if the context isn't understood."
  4. Lack of independence: "When the same team that built the model also reviews it, blind spots are inevitable."
  5. Blind spots when vetting vendors: Many organizations "assume that the provider has already ensured quality, fairness, and compliance, but relying blindly on vendor assurance is very risky."
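
As a concrete illustration of the first pitfall, here is a hedged sketch of one widely used drift diagnostic, the population stability index (PSI), which compares a feature's distribution at review time against its training-time baseline. The quantile bucketing and the conventional 0.2 alert level are assumptions for illustration, not a fixed standard.

```python
import math
import random

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of one feature.
    Buckets are baseline quantiles; a common rule of thumb treats
    PSI > 0.2 as meaningful drift (illustrative, not a standard)."""
    baseline = sorted(baseline)
    cuts = [baseline[len(baseline) * i // bins] for i in range(1, bins)]

    def shares(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > c for c in cuts)] += 1
        # Small smoothing so empty buckets don't blow up the log
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]

    p, q = shares(baseline), shares(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical feature values: training-time vs. shifted production data
random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(1000)]
prod = [random.gauss(0.5, 1.0) for _ in range(1000)]
print(f"PSI = {psi(train, prod):.3f}")  # above 0.2 suggests drift
```

Running a check like this on a schedule, rather than once at deployment, is what turns a static review into the ongoing monitoring the first pitfall calls for.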

Building Effective Audit Teams

When asked about the skills needed for AI audits, Rim emphasized the importance of cross-functional collaboration. Technical experts bring deep understanding of models, pipelines, and deployment security, while legal, risk, and compliance professionals provide essential business context and risk identification capabilities.

"Technical people, and I can speak for it because I come from a technical background... many of the risks I didn't even think of," she reflected. "It was really with speaking to the compliance teams that I realized that, and they had the knowledge to challenge these aspects."

The Future of AI Assurance

Looking ahead, Rim envisions AI assurance growing in importance as the technology becomes increasingly central to professional and personal life. "With hundreds and thousands of use cases, we will need more trained professionals and certified auditors," she predicted.

She sees the field evolving similarly to financial audits: "Just like financial audits matured over decades, I think AI audits will evolve, will become more structured and standardized."

The ultimate goal, in Rim's view, is protection: "At the end of the day, it's really protecting us from the risks of AI... to avoid any negative impact on organizations, customers, society, and ourselves."


The full podcast episode "Methodologies for AI Assessments, Reviews and Audits" is available on Apple, Spotify, YouTube and iHeart.
