The Marquis Breach: What Happens When Your Vendor’s Security is Worse Than You Think

The Marquis breach isn’t just another vendor incident—it exposes a systemic failure in how SaaS vendor risk is assessed, governed, and accounted for by financial institutions.

I was winding down my workday last week when one of my analysts posted a link in our team chat—another BleepingComputer article about a data breach. This one was different, though. Marquis Software Solutions, a vendor I’d never heard of, had just disclosed that attackers had compromised data from 74 financial institutions and over 780,000 customers.

That evening, I started digging. Could this happen to us? How did this actually happen? Which patterns had I seen before? What I found made me realize we had a case study worth dissecting—and boy oh boy, there’s a lot we can learn from it.

Fair warning: this is going to be a long, in-depth analysis in several parts. But stick with me, because this incident highlights multiple systemic risks that every organization with SaaS vendors needs to understand.

Marquis Software Solutions is a Plano, Texas FinTech company founded in the mid-1980s, providing marketing automation, CRM, compliance reporting, and data analytics to over 700 financial institutions. A company of that vintage likely began with on-premises software—cloud infrastructure wasn’t exactly widespread back then. Over time, as hosting and subscription models became viable and attractive, the company appears to have evolved toward a SaaS offering, now marketing itself as a central data platform provider. Today, client institutions send customer data—names, addresses, SSNs, account numbers, financial data—to Marquis’s centralized cloud environment. One vendor, hundreds of institutions’ worth of sensitive data, all aggregated in one place.

According to public breach-notification filings and reporting by security news outlets, the breach exposed personal and financial data of customers across 74 banks and credit unions — reportedly over 780,000 individuals (BleepingComputer, SecurityWeek, Comparitech). In post-breach disclosures, Marquis states it has implemented enhanced security measures, including firewall patching, enabling multi-factor authentication on VPN/firewall accounts, rotating passwords and deleting unused accounts, increasing log retention, applying geo-IP filtering on remote access, and deploying endpoint detection and response (EDR) tools. (Emery Reddy, Maine Attorney General, Iowa Attorney General)

The controls listed in those notices aren’t advanced defenses — they’re baseline security measures that should have been in place years ago. Any financial institution walking into an FFIEC exam with these gaps would be facing serious supervisory findings.

But then I came across CoVantage Credit Union’s notification to the New Hampshire Attorney General, and the language they used tells the real story.

The filings do not explicitly state these controls were missing before the breach. However, the specific verbs used — enabling MFA, deploying EDR, applying lockout policies, increasing log retention — indicate these were new implementations, not enhancements of mature, existing controls. If Marquis had been strengthening or expanding established safeguards, the disclosures would typically use words like enhanced, expanded, or improved.

Instead, the verbs used across multiple disclosures — CoVantage’s filing, Marquis’s own statements quoted in GovInfoSecurity, and independent analysis from SOCRadar — strongly suggest these controls were not previously in place or were not consistently enforced.

Even the Iowa Attorney General filing submitted by Marquis’s own counsel uses the same first-time implementation language. It states that Marquis has ‘implemented additional security technologies and processes’ since the incident — again, not ‘enhanced’ or ‘expanded’ existing controls, but implemented them. That word choice aligns with the CoVantage AG filing (‘enabling MFA,’ ‘increasing logging retention,’ ‘applying lockout policies’), Marquis’s own notification quoted by GovInfoSecurity, and the remediation steps summarized by SOCRadar. Across all disclosures, the verbs point to the same conclusion: these were new deployments of foundational controls, not improvements to an existing security program.

We’ll dive deeper into what this language reveals about their security posture later, but the verbs alone tell you these weren’t enhancements—they were first-time implementations of controls that should have existed for years.

This breach demonstrates a structural problem I’ve been wrestling with for years: SaaS vendors operate as largely unregulated data aggregators while we—the financial institutions—bear full accountability for breaches that happen entirely outside our control. When one SaaS provider gets compromised, it instantly becomes a multi-state, multi-institution incident. And class action lawsuits are now targeting both the breached vendor AND the financial institutions, arguing that the institutions “failed to adequately vet or oversee the vendor.” (Brown v. Marquis Software Solutions, Inc. et al (CoVantage + Marquis), Geoffrey v. Marquis Software Solutions, Inc. (CoVantage + Marquis), Erban v. Marquis Software Solutions, Inc. et al (Gesa Credit Union + Marquis))

The Regulatory Reality: You’re Accountable, Period

Let me be absolutely clear: federal regulation leaves zero ambiguity about who is accountable when a vendor breach occurs. It’s us — the financial institutions.

Under the GLBA Safeguards Rule (16 CFR § 314.4(f)(2)), financial institutions must select and oversee service providers and ensure they maintain appropriate safeguards. NCUA articulates the same expectation for credit unions — a principle equally applicable across the sector: “Credit unions are responsible for safeguarding member assets and ensuring sound operations irrespective of whether or not a third party is involved.” And the FFIEC’s June 2023 Interagency Guidance drives the point home for all banking organizations: “A banking organization’s use of third parties does not diminish its responsibility to meet these requirements to the same extent as if the activities were performed in-house.”

Customers don’t have relationships with your vendors — they have relationships with you. When a vendor gets breached, you face the customer notifications, you face the regulators, and you may face litigation alleging “inadequate vendor oversight.” That accountability never shifts, even when the root cause lives entirely outside your environment.

The Due Diligence Failure That Should Terrify Everyone

Here’s what should keep every CISO and business leader awake: current vendor due diligence is clearly failing us. Think about Marquis for a moment—a 40-year-old company that likely underwent hundreds of due diligence engagements and SOC 2 reviews over the decades. Based on the AG filings we analyzed, they clearly didn’t have adequate protections in place to secure sensitive data for 780,000+ customers.

If standard due diligence processes failed to identify these fundamental gaps at a mature vendor serving 700+ regulated institutions, what does that say about our ability to assess vendor risk across our entire portfolio?

Strategic Questions Every Organization Must Answer

This breach forces uncomfortable conversations that leadership can no longer avoid:

  • Risk Classification: When SaaS vendors hold customer PII in their infrastructure, should they be governed by your cyber risk tolerance or your vendor risk tolerance standards? Because most organizations treat these very differently.
  • Verification Standards: What level of evidence-based verification do you require beyond attestations and SOC 2 reports? Marquis proves that standard assurance mechanisms aren’t sufficient.
  • Ongoing Visibility: How do you demonstrate ongoing visibility into vendor security posture between annual assessments? How do you detect deteriorating controls before they lead to breaches?
  • Risk Thresholds: What’s your acceptable risk threshold for vendors holding customer SSNs and financial account data? More importantly, how do you articulate and defend that threshold when regulators ask?

The Impossible Balance

These aren’t academic questions—they require real decisions with real trade-offs:

  • Security assurance depth versus vendor onboarding speed
  • Evidence-based verification versus attestation acceptance
  • Operational costs versus breach prevention investments
  • Business agility versus risk visibility

The challenge: SOC 2 reports are proving less effective as assurance mechanisms, yet requiring more rigorous verification creates friction with business units and vendors. How do you navigate this gap while meeting regulatory expectations for meaningful oversight?

The Marquis breach demonstrates that our current approach isn’t working. The question is what we’re going to do about it.

What We’re Going to Examine

The Marquis breach isn’t just another vendor incident—it’s a case study in systemic failures that every organization with SaaS dependencies needs to understand. To be clear: this isn’t an attack on Marquis or the 74 financial institutions that used their services. They’re dealing with the same structural challenges we all face. I’m using this breach because it just happened, it’s well-documented, and it illustrates patterns I’ve been seeing across the industry for years.

In the analysis that follows, we’ll dissect how this breach happened, why standard protections failed, and what it reveals about the broader risks facing financial services. This could have been any SaaS vendor, any set of financial institutions—the underlying issues are industry-wide, not company-specific.

This is going to be comprehensive. We’ll examine the attack timeline, the regulatory implications, the litigation patterns emerging from vendor breaches, and the fundamental security architecture problems that make these incidents inevitable. Most importantly, we’ll analyze what this means for how you assess, contract with, and monitor SaaS vendors going forward.

Analysis Sections:

  • Incident Timeline and Breach Mechanics – The 74-day notification delay, CVE-2024-40766 exploitation, and what the timeline reveals about vendor incident response maturity and the patching problems that keep repeating.
  • Why This Breach Matters Beyond Marquis – The structural accountability problem we all face, how standard assurance mechanisms are eroding, and why SOC 2 scoping has become a marketing exercise rather than meaningful security validation.
  • The “Bare Minimum” Security Evidence – What post-breach remediation reveals about the actual control gaps that existed, and how vendor security evidence often obscures rather than illuminates real risk.
  • Scale of Impact: The Multiplier Effect – How SaaS concentration risk works in practice, the visibility gaps that make detection impossible, and why the current innovation-without-accountability model is unsustainable at industrial scale.
  • The Lock-In Problem: Why Timing Matters – The fundamental challenge of vendor risk management timing, how leverage erodes post-contract, and why the startup promise problem makes due diligence increasingly meaningless.
  • Strategic Implications for Vendor Risk Management – Why contractual protections aren’t actually protection, the classification problems that matter for risk assessment, and the impossible position leadership faces when accountability doesn’t match control.
  • Conclusion: The Systemic Nature of SaaS Vendor Risk – The broader implications for how we approach vendor dependency at scale, and why individual organizational solutions can’t address industry-wide structural problems.

Each section builds on the previous analysis, but you can jump to specific areas based on your immediate concerns. The goal isn’t to provide easy answers—it’s to give you the data and analysis needed for informed strategic decisions about vendor risk in your environment.

Let’s start with what actually happened and why it took so long for anyone to find out about it.

Incident Timeline & Breach Mechanics

Timeline of Failure

  • August 14, 2025 — Breach occurs
  • September 2025 — Marquis engages Rapid7 (GovInfoSecurity)
  • September–October 2025 — Forensic investigation underway
  • Late October 2025 — Scope confirmation completed (inferred from Oct 27 CU notifications)
  • October 27, 2025 — Marquis notifies financial institutions
  • November 2025 — Additional security controls implemented (per Iowa AG filing)
  • November 26, 2025 — Individual consumer notifications begin
  • December 2–4, 2025 — AG filings become public

This timeline exposes a critical accountability gap that I see in vendor contracts constantly. Credit unions are required to notify NCUA within 72 hours once they reasonably believe a reportable cyber incident has occurred, and to complete formal reporting within 10 days. But that belief is entirely dependent on when the vendor chooses to tell us an incident has happened.

Here’s the problem:
Most vendor contracts still use vague language like “as soon as reasonably practicable,” with no definition of what counts as discovery. Vendors and their legal counsel often interpret “discovery” as the point when the forensic investigation is completed and the scope is validated — not when the compromise is first detected.

Under this interpretation, Marquis could argue that the 74-day delay in notifying the 74 affected financial institutions was “reasonable,” because the investigation was “still ongoing.”

But that creates an impossible situation for us:
Regulators expect rapid notification once we become aware of a reportable cyber incident — yet our ability to become aware is entirely dependent on vendor discretion. If a vendor waits 74 days before disclosing unauthorized access and exfiltration of customer data, then for 74 days we cannot:

  • meet our regulatory reporting obligations,
  • assess customer risk,
  • initiate protective measures,
  • or even confirm that an incident occurred.

The regulatory clock doesn’t pause for vendor investigations — and our compliance depends entirely on when a vendor decides the incident is “validated” enough to tell us.
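
To put numbers on that dependency, here is a minimal sketch in Python, using the dates from the public timeline; the deadline logic is simplified and illustrative, not legal guidance. It shows how the institution’s regulatory clock cannot even start until the vendor discloses:

```python
# Minimal sketch: the institution's 72-hour clock is anchored to vendor
# disclosure, not to the breach itself. Dates from the public timeline;
# deadline handling is simplified for illustration.
from datetime import datetime, timedelta

breach_date = datetime(2025, 8, 14)       # vendor compromise occurs
vendor_notice = datetime(2025, 10, 27)    # vendor notifies client institutions
consumer_notice = datetime(2025, 11, 26)  # individual notifications begin

awareness_gap = (vendor_notice - breach_date).days    # days institutions were blind
consumer_gap = (consumer_notice - breach_date).days   # days before consumers were told

# NCUA's 72-hour clock starts when the institution reasonably believes a
# reportable incident occurred -- in practice, the moment the vendor discloses.
ncua_deadline = vendor_notice + timedelta(hours=72)

print(f"Days institutions could not act or report: {awareness_gap}")    # 74
print(f"Days before affected consumers were notified: {consumer_gap}")  # 104
print(f"Earliest NCUA 72-hour window: {vendor_notice:%Y-%m-%d} -> {ncua_deadline:%Y-%m-%d}")
```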

Attack Vector and Exploitation

Based on public reporting, the Marquis intrusion aligns with the broader Akira ransomware campaign targeting SonicWall devices in mid-to-late 2024. According to BleepingComputer, Akira operators were exploiting a SonicWall zero-day beginning in early September 2024 to steal VPN credentials and one-time passcode (OTP) seeds, allowing them to bypass MFA entirely.

Although SonicWall later released patches, many organizations—likely including Marquis—were still compromised because patching alone did not reset stolen credentials or OTP seeds. BleepingComputer reports that Akira continued signing into SonicWall VPN accounts even when MFA was enabled, strongly indicating that the attackers had extracted the underlying OTP seeds during earlier exploitation.
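
To see why patching alone wasn’t enough, here is a minimal illustrative sketch in Python using the pyotp library. This is a generic TOTP demonstration, not Marquis’s or SonicWall’s actual implementation: a stolen OTP seed keeps producing valid codes long after the vulnerable device is patched.

```python
# Illustrative only: a TOTP seed exfiltrated during exploitation remains
# valid after patching, because the patch does not rotate the shared secret.
import pyotp

stolen_seed = pyotp.random_base32()        # seed provisioned on the VPN, later stolen

vpn_verifier = pyotp.TOTP(stolen_seed)     # what the (now patched) appliance checks against
attacker_copy = pyotp.TOTP(stolen_seed)    # attacker's copy of the same seed

code = attacker_copy.now()                 # attacker generates a current one-time code
print("Accepted after patching:", vpn_verifier.verify(code))  # True

# The only effective remediation is re-provisioning OTP seeds and rotating
# credentials, not just applying the firmware patch.
```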

The Patching Problem That Keeps Repeating

Let me step back for a moment, because this incident highlights something I see over and over again in breach reports. All too often, I read about incidents that “could have been prevented with proper patching”—but that’s only part of the story. In this case, based on BleepingComputer’s analysis, patching alone wouldn’t have fully resolved the issue.

Remember Spectre? Microsoft patched the CPU vulnerability, but did you know there was a registry key that needed to be set as well? Otherwise, it didn’t really get fixed. I know this because for two years I went back and forth with my IT counterparts: “Still shows vulnerable.” “But I installed the patch!” “Did you set the registry key?” “But I installed the patch!” “YOU NEED TO SET THE REG KEY!”
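
For the curious, here is a minimal Windows-only sketch in Python of the kind of check I ended up running. The registry path and value names are the ones Microsoft documented for the Spectre/Meltdown mitigations, but treat this as an illustration of “verify the fix, don’t assume the patch,” not a complete validation tool:

```python
# Sketch: confirm the Spectre/Meltdown mitigation registry values exist.
# Installing the patch alone does not set these on all systems.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

def read_dword(name):
    """Return the DWORD value if present, else None."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return None

override = read_dword("FeatureSettingsOverride")
mask = read_dword("FeatureSettingsOverrideMask")

print(f"FeatureSettingsOverride={override}, FeatureSettingsOverrideMask={mask}")
if override is None or mask is None:
    print("Mitigation values not set: 'patched' may not mean 'fixed'.")
```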

Here’s the reality: patching is hard, patching is tedious, and hardening is even harder. But when it comes to vulnerabilities and patches, you need to read the patch notes—not just assume “I patched it, therefore I fixed it.” This SonicWall vulnerability required organizations to not only apply the patch but also reset all potentially compromised credentials and OTP seeds. How many organizations actually did the full remediation?

Even fully patched SonicWall devices remained vulnerable if credential and seed resets didn’t occur, enabling attackers to authenticate long after the vulnerability itself was closed. This is exactly why we need our leadership teams and business counterparts to understand that we need the time to do patch management right—not just check a box that says “patched.”

I especially feel for the small IT and security teams dealing with this. When you have 20 people in IT who all have day jobs, there’s constant friction around patching timelines. But use examples like Marquis to build the business case: “Here’s what happens when we don’t get remediation right.” Automate what you can, but read those patch notes and validate that fixes are actually complete. (I’ll dive deeper into practical approaches for resource-constrained teams in a future piece.)

The lesson isn’t just “patch faster”—it’s “understand what complete remediation actually requires and give security teams the time to do it properly.”

The Ransom Payment That Solved Nothing

A now-deleted filing by Community 1st Credit Union – also reported by Comparitech – revealed that “Marquis paid a ransom shortly after 08/14/25”—yet customer data was still compromised and exfiltrated.

This likely explains the 2.5-month notification delay. One interpretation is that Marquis management believed paying the ransom resolved the incident without requiring disclosure. It was only after engaging legal counsel and forensic investigators—who discovered the actual extent of data exfiltration and unauthorized access—that the obligation to report became unavoidable.

The forensic investigation most likely revealed what attackers don’t disclose when they’re collecting ransom payments: the full scope of network compromise, lateral movement, and data theft. Paying ransom may stop the immediate encryption, but it doesn’t evict the attacker or close the vulnerabilities that enabled initial access.

Here’s something that should concern every CISO: threat actors typically aim to maintain persistence. If a target paid once, they’ve demonstrated willingness to pay again, making them an attractive target for future attacks.

Timeline with Forensic Engagement:

  • August 14, 2025: Initial breach detected, ransom paid shortly after
  • September 8, 2025: External forensic expert engaged to identify affected individuals
  • October 27, 2025: Investigation completed, financial institutions notified (74 days post-breach)
  • November 26, 2025: Individual customer notifications began (104 days post-breach)

What This Timeline Reveals About Incident Response Maturity

The 25-day gap between breach detection and engaging forensic experts tells me everything I need to know about Marquis’s incident response preparation:

This strongly suggests they had no effective incident response plan. A mature IR plan triggers immediate forensic engagement upon detecting compromise. You don’t wait three and a half weeks to call in experts.

This timeline strongly indicates a lack of in-house security expertise. Organizations with dedicated security teams recognize that paying ransom doesn’t eliminate breach notification obligations. Someone should have known this immediately.

The gap also suggests they had no forensic retainer: they likely spent weeks finding a firm, negotiating contracts, and getting investigators on-site. A $10-20K annual retainer ensures immediate response when breaches occur. For a company serving 700+ financial institutions, this should have been table stakes.

Likely over-reliance on cyber insurance. Many organizations depend solely on insurance-provided forensics, but insurers prioritize assessing coverage eligibility—verifying you maintained those attested controls—not expediting response. Most cyber policies now mandate MFA, segmentation, and password rotation. Sound familiar?

This demonstrates a reactive rather than proactive security posture. Marquis treated the incident as “contained ransomware” until forensic analysis revealed the full scope of data compromise—indicating they may have fundamentally misunderstood breach notification requirements despite serving 700+ regulated financial institutions.

For a 40-year-old vendor in the financial services space, this level of incident response immaturity is inexcusable.

WHY THIS BREACH MATTERS BEYOND MARQUIS

The Marquis breach matters not because of Marquis specifically, but because it demonstrates a pattern I’ve been seeing across nearly all SaaS providers in financial services.

The Structural Problem We All Face

SaaS vendors aren’t regulated like we are. They operate outside FFIEC/NCUA regulatory scrutiny and are bound only by contractual terms, not mandated cybersecurity standards. Most prioritize speed-to-market and customer acquisition over cybersecurity maturity, operating with thin security programs and minimal detection capabilities.

Here’s what really gets me: vendors create concentration risk at scale. When attackers compromise a single SaaS provider, they don’t hit one institution—they hit a data hub serving dozens or hundreds of institutions simultaneously. Marquis served 700+ institutions; one breach became 74+ institutional incidents instantly.

And guess who bears the accountability? Not the vendor. When breaches occur, customer impact letters reference us. Members call us, not the vendor. Regulators examine us, not the vendor. We provide credit monitoring, manage notification requirements, and absorb reputational damage—even when the breach occurred entirely outside our environment.

The Accountability That Never Goes Away

Let me be crystal clear about something: regulators and the law leave no ambiguity about third-party accountability. It’s ours, period.

Here’s what the regulations actually say:

  • GLBA Safeguards Rule: 16 CFR § 314.4(f)(2) requires financial institutions to oversee service providers by “requiring your service providers by contract to implement and maintain such safeguards.” Since we select and contract with vendors on behalf of customers, we’re accountable for their data protection practices.
  • NCUA Third-Party Risk Guidance: NCUA Letters to Credit Unions 07-CU-13, 24-CU-02, and 25-01: “Credit unions are responsible for safeguarding member assets and ensuring sound operations irrespective of whether or not a third party is involved.” Hiring a vendor doesn’t transfer our accountability—it adds risk we must manage. Seven out of ten cyber incidents reported by credit unions involved third-party vendors.
  • FFIEC Interagency Guidance on Third-Party Relationships (June 2023): “A banking organization’s use of third parties does not diminish its responsibility to meet these requirements to the same extent as if its activities were performed by the banking organization in-house.” Management is responsible for developing and implementing third-party risk management policies, procedures, and practices. Choosing a SaaS vendor doesn’t transfer the risk; regulators still expect us to oversee the vendor.

Translation: Hiring a vendor doesn’t transfer our accountability—it adds risk we must manage. “They’re our vendor” is not a defense when breaches occur.

What Makes This Case So Instructive

Here’s what should terrify everyone: Marquis likely underwent hundreds of due diligence reviews from the 700+ institutions they serve. They almost certainly had SOC 2 certification. Financial institutions had contracts with security requirements. Yet security controls appear to have been inadequate.

This raises the uncomfortable question: if standard vendor assurance mechanisms failed this spectacularly, what does that say about our entire approach to vendor risk management?

The Erosion of Standard Assurance Mechanisms

The Marquis breach exposes several emerging challenges in vendor risk management that extend beyond any single vendor:

SOC 2 Scope Control and the Marketing Department Problem

Here’s something that drives me nuts: vendors control what gets audited in SOC 2 assessments. They select specific Trust Services Criteria while excluding others, or define narrow system boundaries that conveniently exclude critical infrastructure.

But here’s the real problem—SOC 2 has become a sales and marketing tool rather than a security validation mechanism. I see compliance platforms openly marketing SOC 2 as a way to “close larger customers in less time,” “shorten sales cycles,” and “turn compliance into a growth strategy.” In many organizations I’ve worked with, SOC 2 initiatives are funded by marketing and sales departments—not security or IT—because certification has become a gateway for landing deals.

Think about the incentive structure here: when revenue teams control security audit scope and budget, the goal becomes passing audits efficiently to unlock sales, not comprehensively validating controls.

This creates predictable outcomes that I see over and over: narrow audit scopes that exclude inconvenient systems, “SOC 2-compliant packages” designed to pass audits efficiently, and vendor resistance to expanding scope beyond bare minimums.

A SOC 2 report confirms that what was audited met standards at a point in time—it doesn’t guarantee comprehensive coverage, operational effectiveness, or continuous compliance. But try explaining that to a business unit that wants to onboard the vendor yesterday.

The Due Diligence Resistance Pattern

When security teams attempt evidence-based verification beyond attestations, I see the same predictable pattern play out every time:

Vendor: “Your requirements are more stringent than other institutions. These questions are too burdensome.”

Business unit: “The vendor is complaining that IT/InfoSec/Vendor Management is creating barriers and delaying timelines.”

And guess what happens? Due diligence gets constrained to email Q&A with questions submitted days in advance, allowing completely scripted responses. We can’t determine if vendors actually have dedicated security staff, can’t assess fourth-party risk from outsourced security, and can’t verify that documented practices match reality.

Here’s the maddening part: when breaches occur, the first question is always “Why didn’t due diligence identify these gaps?” This question conveniently ignores all the constraints that got placed on the verification process.

The Marquis case is the perfect example of this dysfunction. The vendor served 700+ financial institutions over nearly 40 years, undergoing hundreds of due diligence reviews. Yet post-breach filings reveal they lacked MFA on VPN accounts, proper password rotation, adequate logging, geo-IP filtering, and EDR deployment.

If hundreds of due diligence processes failed to identify these baseline gaps, the real question should be: “Why didn’t the vendor implement fundamental controls despite serving 700+ regulated institutions?” Instead, accountability falls on security teams for not catching deficiencies that vendors deliberately obscured through prepared responses and attestations.

This dynamic puts us in an impossible position: we have accountability for outcomes but no real authority to verify controls. It’s maddening, and it’s exactly how we end up with situations like Marquis.

The Definitional Manipulation Problem

Here’s another pattern that drives me absolutely crazy: vendors routinely redefine industry-accepted security terms to match whatever they already have built, rather than actual industry standards.

“SSO” suddenly means session persistence within their application, not federated identity using SAML or OAuth. “MFA” becomes password plus security questions, not true multi-factor authentication per NIST SP 800-63B standards. I’ve seen vendors claim they have “encryption at rest” when they mean database password protection, or “network segmentation” when they mean VLANs with no access controls.

The problem is, without evidence-based verification before contracts are signed, you discover these definitional games during implementation—when your leverage to demand corrections has completely evaporated. By then, you’re stuck explaining to leadership why the “SSO integration” they thought they were paying for doesn’t actually work with your identity provider, or why their “MFA” doesn’t meet your security standards.

I’ve been in too many post-implementation meetings where vendors suddenly clarify what they “actually meant” by the terms they used in sales presentations. It’s infuriating, and it’s completely predictable if you know what to look for.

The “Trust the Expert” Defense

Oh, this one really gets under my skin. When concerns are raised—whether from security, IT, compliance, or risk management—objections are inevitably met with: “The vendor is the expert in their domain. They’ve been doing this for years. They have more resources and expertise than we do. We should trust their judgment.”

This logic fundamentally misunderstands accountability. Look, if we outsource a function, we still must maintain sufficient expertise to properly govern and manage that relationship. We can outsource execution, but we absolutely cannot outsource understanding the design decisions, validating how it’s implemented, and verifying it operates correctly.

Here’s a scenario I see all the time: we sign contracts with two different vendors and direct them to integrate their solutions. To properly govern this, we need to understand why that integration approach was chosen, how it’s being implemented, and whether the result actually meets our requirements and security standards. Without that understanding across the full lifecycle—from design through implementation to ongoing operation—we can’t effectively govern the relationship or manage the risks we’re still accountable for.

This requires maintaining expertise sufficient to ask tough questions and verify vendor claims. Without that capability, we can’t determine whether controls actually exist, whether we’re being oversold capabilities, or when vendors are failing to protect our data.

Deferring entirely to vendor expertise doesn’t transfer risk—it creates blindness to risk. And guess who still gets blamed when things go wrong? Not the vendor we “trusted.”

The Marquis Reality Check

Let’s talk about what should really concern everyone. Marquis has operated since 1986–1987—nearly four decades—and serves more than 700 financial institutions. Their 2021 acquisition by Rockbridge Growth Equity indicates substantial institutional backing and an expectation of mature operational practices. On paper, organizations with this longevity, scale, and customer base should have well-developed security programs aligned to industry frameworks.

But as we explored earlier, post-breach filings reveal that Marquis implemented foundational controls like MFA, EDR, proper logging retention, and geo-IP filtering after the incident. The language used—enabling, deploying, applying—strongly suggests these were first-time implementations of baseline security measures.

So here’s the question that should keep every CISO awake: If a 40-year-old provider serving 700+ regulated institutions didn’t appear to have these basic controls in place, what does that say about the effectiveness of:

  • Standard vendor due diligence questionnaires, which routinely ask about MFA, least privilege, and log retention?
  • SOC 2 examinations, which specifically evaluate access control, monitoring, and change management?
  • Contractual security requirements, which almost always mandate “appropriate administrative, technical, and physical safeguards”?
  • The industry assumption that vendor longevity and market presence equate to security maturity?

These questions aren’t about attacking Marquis—they’re about recognizing a systemic failure in how we assess vendor risk.

The Systemic Challenge

This pattern repeats across the vendor landscape—large established providers, mid-sized companies, and new startups alike; it isn’t unique to any single vendor type or maturity level. Current due diligence approaches are proving insufficient to identify these gaps, yet those of us responsible for vendor oversight face significant constraints that I see every day:

  • IT, InfoSec, and vendor management teams operate with limited resources while managing expanding vendor portfolios
  • Vendors resist rigorous scrutiny, characterizing thorough security validation as “burdensome” or “more stringent than other customers require”
  • Business units pressure for faster onboarding, viewing security diligence as friction rather than protection
  • Vendors themselves aim for minimum viable compliance rather than comprehensive security maturity

Here’s the paradox that drives me crazy: The same assurance mechanisms that failed to prevent this breach are the mechanisms we’re forced to rely on. When we attempt deeper verification, we encounter resistance from both vendors (who view it as burdensome) and business units (who view it as delaying operational objectives). Yet regulatory accountability remains entirely ours, and customer impact is entirely ours when breaches occur.

We bear full accountability for vendor failures while having limited ability to verify the security posture of the vendors we’re forced to trust. We face a system where attestations and questionnaires are accepted as sufficient evidence—despite repeated demonstrations that they are not—because the alternatives face resistance from all sides.

When customer data is processed in a SaaS environment, our security capabilities become significantly limited. We lose direct visibility, can’t perform our own event detection, and can’t enforce or validate the maturity of vendor controls or logging. However, during a breach, regulators, customers, and legal actions still hold us accountable. We can transfer operational tasks to vendors, but accountability never leaves our desk.

This creates an impossible position: How do we fulfill our accountability for vendor security when we’re constrained from verifying vendor security beyond attestations that have repeatedly proven insufficient?

THE “BARE MINIMUM” SECURITY EVIDENCE

Post-Breach Remediation Reveals Control Gaps

According to data breach notifications filed with state Attorney General offices, Marquis implemented the following controls after the breach:

  • Updating and patching firewall devices
  • Rotating passwords for local accounts
  • Deleting unused accounts
  • Enabling multifactor authentication for all firewall and VPN accounts
  • Increasing logging retention for firewall devices
  • Applying stricter account lockout policies and geo-IP filtering
  • Deploying endpoint detection and response (EDR) tools

Source:

Data breach notifications filed with Maine, New Hampshire, Iowa, Texas, and Massachusetts Attorney General offices, as documented in CoVantage Credit Union’s November 26, 2025 filing to the New Hampshire Attorney General and reported in American Banker and BleepingComputer.

What the Language Reveals

The AG filings do not explicitly state the controls were absent before the breach. However, the specific language used—enabling MFA, deploying EDR, applying lockout policies—strongly suggests these were new implementations rather than enhancements to existing controls. If Marquis had been strengthening or expanding existing measures, the filing would have used language like “enhanced,” “strengthened,” or “expanded.” Instead, the verbs suggest these controls were not previously in place or not consistently enforced.

The specific areas addressed in post-breach remediation indicate gaps in:

  • Multi-factor authentication coverage: “Enabling” MFA for firewall and VPN accounts suggests it was not previously enabled
  • Password and account lifecycle management: Rotating passwords and deleting unused accounts were identified as immediate priorities
  • Network access controls: “Applying” geo-IP filtering and lockout policies indicates these protections were not active
  • Endpoint visibility: “Deploying” EDR tools suggests Marquis lacked endpoint detection capabilities
  • Logging and monitoring: Increasing retention indicates prior logging was insufficient for forensic investigation

These Controls Represent Long-Standing Industry Baselines

The controls identified in AG filings are not advanced or emerging security practices—they represent foundational expectations established and widely adopted for years:

  • CIS Critical Security Controls: MFA requirements for remote access and administrative accounts were strengthened in CIS Controls Version 7 (2016) and expanded further in Version 8 (2021)
  • NIST Guidance: NIST began promoting MFA adoption as standard practice in 2016, with draft SP 800-63-3 recommending MFA for all assurance levels
  • CISA Cybersecurity Performance Goals: CISA’s CPGs require MFA on all remotely accessible accounts, minimum 12-character passwords, and logging of all authentication attempts
  • FFIEC IT Examination Handbook: Multi-factor authentication, password management, logging retention, and endpoint security have been core examination topics for financial institutions for over a decade

These controls have been industry-standard expectations for 5-10+ years. Multi-factor authentication for remote access, password rotation, logging retention, endpoint detection, and account lifecycle management are routinely validated during financial institution examinations and are expected at even small community banks and credit unions.
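
One way to operationalize those baselines is to encode them as evidence-based verification items instead of yes/no attestation questions. Here is a minimal, hypothetical sketch in Python; the control wording and evidence types are my own illustrations, not a prescribed framework:

```python
# Hypothetical due diligence checklist: every baseline control demands
# verifiable evidence rather than a checkbox attestation.
from dataclasses import dataclass

@dataclass
class ControlCheck:
    control: str
    evidence_required: str      # what we ask to see, not what the vendor asserts
    attestation_only: bool = False

BASELINE_CHECKS = [
    ControlCheck("MFA on all remote/VPN and administrative accounts",
                 "IdP or VPN policy export plus sample authentication logs"),
    ControlCheck("Password rotation and removal of unused accounts",
                 "Account lifecycle report exported from the directory service"),
    ControlCheck("EDR deployed across servers and endpoints",
                 "Console coverage report showing percentage of assets enrolled"),
    ControlCheck("Log retention sufficient to support forensics",
                 "SIEM retention configuration and storage evidence"),
    ControlCheck("Geo-IP filtering and lockout policies on remote access",
                 "Firewall/VPN policy export"),
]

weak = [c.control for c in BASELINE_CHECKS if c.attestation_only]
print(f"{len(BASELINE_CHECKS)} controls requested; {len(weak)} accepted on attestation alone")
```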

The Broader Implication

Based on the language used in the AG filings, a vendor serving 700+ regulated financial institutions for nearly 40 years appears to have lacked baseline protections that would be unacceptable in even the smallest credit union’s environment. Any one of these controls could potentially have detected or prevented the breach. This pattern—where established vendors serving highly regulated industries lack foundational security controls that would be required of the institutions they serve—illustrates the structural challenge financial institutions face in third-party risk management.

SaaS vendors often prioritize speed-to-market and customer acquisition, and unlike the financial institutions they serve, they operate outside regulatory examination frameworks that would validate baseline security maturity. The result: vendors may accumulate sensitive data from dozens of regulated institutions while maintaining security programs that would not meet the minimum standards expected of those same institutions.

SCALE OF IMPACT: THE MULTIPLIER EFFECT

The SaaS Reality: Innovation Without Accountability

The financial services industry is undergoing significant transformation. SaaS adoption is accelerating, FinTech startups are proliferating, and “cloud-first” has become the default strategy. The likelihood of finding modern, feature-rich solutions that can be deployed on-premise is diminishing rapidly. This shift is not hypothetical—it’s the operational reality across the industry.

Over the past 8-9 years, this trend has intensified: core functions, customer-facing services, compliance tools, and data analytics are migrating to vendors operating in “the cloud.” This creates an important question the industry has not fully confronted: Have we thought through the implications of this dependency? Can we trust these vendors? Should we trust them implicitly?

Many SaaS vendors—particularly newer FinTech entrants—prioritize innovation and speed-to-market. Their expertise lies in product development, user experience, and rapid iteration. But expertise in technology does not equal expertise in security. Many of these companies are funded with the explicit goal of building to an acquisition target, not building for long-term operational resilience. They’ve discovered an industry that implicitly trusts them to “do the right thing” with sensitive customer data—but that trust is often misplaced.

The industry cannot afford to accept this imbalance passively. Yes, there are well-resourced companies that invest heavily in security infrastructure and operational maturity. But the barrier to entry in SaaS is low, and regulatory scrutiny of vendors remains minimal compared to the institutions they serve. The result: financial institutions bear full accountability for vendor failures while having limited ability to verify vendor security practices.

Concentration Risk in Practice

When attackers compromised Marquis, they didn’t breach one institution—they breached a data hub acting as shared service provider for hundreds of organizations. Over 780,000 individuals were impacted across 74 banks and credit unions, demonstrating how vendor consolidation transforms individual security failures into systemic operational risks.

But Marquis isn’t even the worst example we’ve seen. In May 2020, Blackbaud—a cloud/SaaS provider for nonprofits, universities, healthcare organizations, and more—was hit by ransomware. Attackers exfiltrated data before Blackbaud paid the ransom and claimed to have blocked further unauthorized access. The breach impacted thousands of organizations worldwide that used Blackbaud’s services, with many customers subsequently having to notify donors and users of data exposure.

The SEC later charged Blackbaud for failing to reasonably safeguard personal information and not adequately disclosing the breach to investors. One vendor’s security failure became thousands of organizations’ crisis management problem.

Marquis served over 700 institutions, meaning compromise of one SaaS provider instantly becomes a multi-state, multi-institution incident, a class-action magnet, and a systemic operational risk. A breach of a SaaS provider with broader reach—such as core banking platforms, CRM systems, loan origination systems, or digital banking providers—would have exponentially greater impact.

The Visibility Gap

Financial institutions often have limited insight into vendor security posture. Banks and credit unions lack direct visibility into a vendor’s internal security controls, patch management status, VPN usage, or incident detection capabilities. Once customer data leaves our environment for a SaaS service, we lose visibility, event detection capability, the ability to enforce logging fidelity, and cannot validate internal control maturity.

This creates an unbalanced risk model: the vendor controls the security environment, but the financial institution bears the consequences of failure.

The Path Forward: Accountability Must Match Dependency

If the financial services industry is moving irreversibly toward SaaS and cloud-based solutions—and all evidence suggests it is—then the industry must collectively raise the bar for vendor security standards. We cannot allow the convenience of modern technology to override our fiduciary responsibility to protect customer data.

This requires:

  • Rejecting implicit trust: Vendors may be innovative, but innovation does not equal security maturity
  • Demanding evidence, not attestations: SOC 2 badges and questionnaire responses are insufficient
  • Holding vendors accountable contractually: Security failures must have meaningful consequences
  • Elevating vendor governance: SaaS vendors holding customer PII should be governed with the same rigor as internal systems

Otherwise, breaches like Marquis will continue, and financial institutions will continue bearing the reputational, regulatory, and financial consequences while vendors face minimal accountability beyond potential contract disputes.

Historical Context: We’ve Seen This Before — The Dot-Com Boom and the Return of “Security Last”

To understand why today’s SaaS ecosystem carries so much unmitigated risk, let me take you back to something we should have learned from already—the dot-com boom.

In the late 1990s, during the dot-com boom, companies raced to capture market share, deploy new features, and get acquired before their competitors. Technical debt was accepted. Security debt was ignored. The prevailing strategy was simple: grow fast, exit fast, and let someone else figure out the long-term risks.

Very few companies built with operational resilience in mind because resilience didn’t help valuations—momentum did. Investors rewarded speed, not security. Regulations were minimal. And consumers lacked the awareness to demand better. Innovation flourished, but it did so on a fragile, insecure foundation that collapsed as quickly as valuations did.

Fast-forward twenty-five years, and the same incentive structure has re-emerged—this time within SaaS and FinTech.

SaaS Is the Dot-Com Boom at Industrial Scale

The parallels are hard to ignore:

  • Rapid market capture over disciplined security engineering
  • VC pressure to iterate quickly and aim for acquisition
  • Minimal regulatory oversight of vendors, even those handling sensitive data
  • Widespread, unexamined trust by financial institutions

The only real difference is that today’s technologies operate at a magnitude the dot-com era never imagined. A single SaaS breach no longer affects thousands; it affects millions. Vendors no longer host simple static websites; they host core operational functions for regulated institutions. The surface area and blast radius have expanded exponentially, while the underlying culture of “security is someone else’s problem” remains largely intact.

The role of financial institutions, and credit unions specifically, in this dynamic warrants examination. Credit unions have increasingly invested in FinTech innovation through CUSO venture structures and strategic partnerships, funding startups designed for rapid growth and acquisition. While this drives innovation, it also creates a paradox: financial institutions fund vendors optimized for speed and market capture, then bear full regulatory accountability when those same vendors experience security failures. The industry is simultaneously investing in and being harmed by the same incentive structures that prioritize momentum over security maturity.

Why This History Matters

This moment is not an anomaly—it’s a continuation of a structural pattern that appears whenever innovation significantly outruns accountability. SaaS, like the dot-com predecessors before it, is expanding faster than governance frameworks can adapt.

The credit union industry faces a unique paradox: through investments and strategic partnerships, credit unions are funding the creation of vendors built with the same incentive structures they will later be held accountable for managing. The industry is both investor and victim, simultaneously enabling and suffering from vendor security failures.

If the industry does not internalize the lessons of past failures, it will repeat them—only at a scale that transforms isolated vendor weaknesses into sector-wide operational risks.

THE LOCK-IN PROBLEM: WHY TIMING MATTERS

The Vendor Risk Management Timing Challenge

Here’s a pattern I see over and over again: organizations involve security teams too late in the vendor selection process. By the time security gets to review vendor controls, contracts are nearly signed, business units have committed to timelines, and relationships have been established.

This creates a predictable disaster: sales teams build rapport and trust with business units, demonstrating features and capabilities that address operational needs. Business units fall in love with the solution. Only then does security get involved and begin asking basic vetting questions—questions that should have been asked on day one. When security identifies gaps or concerns, we’re now disrupting an established relationship rather than preventing a problematic one from forming.

Discovering inadequate security during implementation leaves only bad options: accept the risk, implement compensating controls we have to fund and maintain, or attempt costly contract renegotiation. Meanwhile, security teams get blamed for “causing problems,” “disrupting vendor relationships,” and “making waves”—despite identifying risks we’re contractually obligated to manage. The animosity isn’t because security asked hard questions; it’s because we asked them too late, after emotional and operational investment had already occurred.

The earlier security gets involved, the easier it is to walk away. Initial vetting questions don’t require deep technical expertise—they require asking basic questions before anyone falls in love with the solution. When security can flag concerns before relationships solidify, the conversation becomes “let’s find a better vendor” instead of “you’re blocking our project.”

Pre-Contract vs. Post-Contract Leverage

Before signing, we hold all the power: Vendors want our business, we can require security controls as contract conditions, we can walk away, and security gaps are negotiable deal points.

After signing, leverage evaporates: Vendors already have our revenue, security improvements require their investment with no return, switching costs create dependency, and business units resist changing established workflows.

The Contract Erosion Problem

Here’s something that really frustrates me: many vendor relationships predate the SaaS security awareness that emerged over the past 8 years. I regularly review legacy vendor contracts that are 6+ years old with no security requirements or breach notification language whatsoever. In many cases, relationships began when solutions were on-premises, then migrated to cloud/SaaS delivery models—but contract language was never updated to reflect the massive change in data custody and risk profile.

Without contractual security obligations, vendors have no binding duty to notify within specific timeframes, there’s ambiguity about incident response responsibilities, security control requirements are undefined or outdated, and we have no contractual recourse for security failures.

When vendors lack explicit notification requirements, they default to their own timelines—which prioritize legal review and investigation completion over our notification obligations. This may explain why breach notification delays of 74+ days occur even when regulators expect 72-hour internal notification and 10-day reporting.
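
As a rough illustration of the contract-maintenance point, here is a hypothetical sketch in Python; the field names, the six-year staleness threshold, and the 72-hour SLA are assumptions for illustration only. It flags vendor contracts with no defined breach-notification SLA or with paper that predates the move to SaaS delivery:

```python
# Hypothetical contract-hygiene check: flag SaaS vendors holding customer PII
# whose contracts are stale or lack a defined breach-notification SLA.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class VendorContract:
    vendor: str
    signed: date
    delivery_model: str                 # "on-prem" or "saas"
    holds_customer_pii: bool
    breach_notice_hours: Optional[int]  # None = no notification SLA in the contract

def needs_renegotiation(c: VendorContract, today: date) -> bool:
    stale = (today - c.signed).days > 6 * 365                      # 6+ year old paper
    weak_sla = c.breach_notice_hours is None or c.breach_notice_hours > 72
    return c.holds_customer_pii and c.delivery_model == "saas" and (stale or weak_sla)

portfolio = [
    VendorContract("Legacy CRM vendor", date(2016, 3, 1), "saas", True, None),
    VendorContract("Shredding service", date(2022, 5, 10), "on-prem", False, 24),
]
for c in portfolio:
    if needs_renegotiation(c, today=date(2025, 12, 1)):
        print(f"Renegotiate: {c.vendor} (signed {c.signed}, SLA={c.breach_notice_hours})")
```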

The Startup Promise Problem

Here’s where this gets particularly frustrating with startup vendors. Startups often make promises their sales teams can’t back with actual security maturity. Once contracts are signed, fixing those gaps suddenly isn’t a priority anymore. Funding cycles, technical debt, and M&A pressure mean “security later” becomes “security never.”

I’ve watched this pattern play out repeatedly: enthusiastic sales presentations about security features that turn out to be roadmap items, not reality. By the time you discover the gap during implementation, they’re focused on the next funding round or acquisition target—not on delivering the security controls they promised.

The Critical Takeaway

Initial due diligence represents our highest-leverage moment. Security must be involved before vendor selection, requirements must be contract conditions, and “we’ll implement that next quarter” must trigger dated contractual milestones or disqualification.

CISA gets this. Their Secure by Demand guidance released in August 2024 was brilliant—essentially a call to all regulated industries, small and large, to unify and demand these vendors do the right thing. Build security in from the start, extend visibility so we can actually see what’s happening in their environments, demand better or walk away from them. I love the message, I love the idea, and I’ll stand behind it completely.

But here’s the reality: since it got released, I don’t see much tangible change. Change takes time, I get that, but until a vast majority of us start asking those demanding questions—and we’re actually supported by our business units when we do—vendors won’t feel enough pressure to change their approach.

The last thing I want is another law trying to fix this. Just look at HIPAA—that didn’t solve much, caused a lot more administrative burden, and we’re still seeing healthcare-related breaches regularly. We need market pressure, not regulatory mandates that create compliance theater while missing the actual security problems.

But due diligence isn’t one-time: contract renewals and service model changes must trigger security reassessment and contract updates. Legacy vendor relationships with outdated contract language provide no protection when delivery models have fundamentally changed. Without pre-contract leverage and ongoing contract maintenance, we accept whatever security posture vendors choose—which may be inadequate even for vendors serving hundreds of regulated institutions for decades.

STRATEGIC IMPLICATIONS FOR VENDOR RISK MANAGEMENT

Why “Contractual Protections” Aren’t Actually Protection

Here’s something that drives me crazy: SaaS vendors typically operate outside regulatory scrutiny and are only bound by contractual terms—not by mandated cybersecurity standards. Contractual clauses provide legal recourse after a breach, but they do absolutely nothing to prevent the breach itself. Insurance coverage and indemnification don’t restore customer trust or prevent regulatory scrutiny.

The Core Problem We Can’t Solve

Vendors create the risk through inadequate security, but we absorb the regulatory and reputational impact. This is vendor risk management’s fundamental paradox: accountability can’t be outsourced, risk can’t be transferred, yet we remain fully responsible for outcomes we can’t directly control.

The Classification Problem That Matters

When SaaS vendors host customer data in their infrastructure, is this a vendor risk or a cyber risk? The distinction matters more than most people realize, and it exposes a known governance flaw in how many institutions operate.

Some institutions categorize SaaS as “vendor risk,” which often has:

  • Lower board scrutiny
  • Higher acceptable risk thresholds
  • Weaker metrics and reporting
  • Less rigorous evidence requirements
  • Slower escalation paths

Meanwhile, cyber risk tolerance is usually much stricter because regulators expect it. We have examination standards to meet, compliance requirements to satisfy, and customer data protection obligations that don’t disappear just because we’re using a vendor.

Here’s what’s actually happening: When we engage a SaaS vendor, we’re choosing to outsource infrastructure management, not accountability. The vendor’s data center becomes an extension of our infrastructure—we’ve simply chosen to have someone else manage it. But we retain full accountability: we chose to entrust customer PII to this environment, we chose to accept their security decisions, and we bear all consequences when their security fails.

This isn’t my opinion—it’s the only correct interpretation from a supervisory standpoint. The data doesn’t care where it lives, and the regulatory and operational risks are identical whether it’s in our data center or theirs.

The Strategic Question Leadership Must Answer

Should SaaS vendors handling customer PII be governed by our cyber risk tolerance standards rather than vendor risk tolerance standards? If we wouldn’t accept inadequate MFA, logging, or patching in our own environment, why would we accept it in a vendor’s environment that holds our customer data?

The answer should be obvious: SaaS vendors with customer PII must be held to the stricter standard. Anything else is a governance failure that regulators will eventually identify and address.
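
Put as a rule, the classification argument looks something like this minimal sketch in Python; the track names and criteria are illustrative, not a formal taxonomy:

```python
# Illustrative classification rule: vendors hosting customer PII in their own
# infrastructure are governed under the stricter cyber risk tolerance.
def governance_track(hosted_in_vendor_infra: bool, holds_customer_pii: bool) -> str:
    if hosted_in_vendor_infra and holds_customer_pii:
        # Treat the vendor's environment as an extension of our own infrastructure.
        return "cyber risk tolerance: evidence-based verification, board-level reporting"
    return "vendor risk tolerance: standard third-party oversight"

print(governance_track(hosted_in_vendor_infra=True, holds_customer_pii=True))
print(governance_track(hosted_in_vendor_infra=False, holds_customer_pii=False))
```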

The Impossible Position

Here’s the mismatch that keeps me up at night: We have accountability for security outcomes but no authority over security management when it comes to SaaS solutions. We can’t mandate that the vendor patch their systems, can’t enforce MFA on their accounts, can’t control their logging standards, and can’t direct their incident response. Yet when breaches occur in the vendor’s environment, the impact is identical to breaches in our own:

  • Same sensitive data compromised (SSN, account numbers, financial data)
  • Same customer impact and notification requirements
  • Same regulatory consequences and examination findings
  • Same reputational damage and trust erosion
  • Same legal liability exposure

The critical difference: In our data center we control remediation; in the vendor’s environment we depend on their priorities, their timelines, and their investment decisions.

The SaaS Governance Challenge

Accountability without authority—that’s the position we’re in. As covered earlier, the moment customer data is processed in a SaaS environment we lose direct visibility, can’t perform our own event detection, and can’t enforce or validate the maturity of vendor controls or logging. Yet during a breach, regulators, customers, and legal actions still hold us accountable. We can transfer operational tasks to vendors, but accountability never leaves our desk.

We’ve outsourced infrastructure—but regulators, customers, and the law hold us accountable as if we still control it.

CONCLUSION: THE SYSTEMIC NATURE OF SAAS VENDOR RISK

SaaS vendors are now one of the highest systemic risks we face as financial institutions. The Marquis breach provides the model we should expect to see repeated:

  1. Vendor with inadequate security controls gets breached
  2. Weak internal detection allows prolonged attacker access
  3. Widespread impact across multiple financial institutions
  4. Customer data directly compromised
  5. Our reputations get damaged
  6. Lawsuits claiming we failed to govern the vendor

But this assumes the breach actually gets disclosed. The Marquis timeline itself suggests a more concerning risk: a 2.5-month delay between breach detection and client notification, with that now-deleted filing indicating ransom payment shortly after the incident. This delay pattern suggests Marquis may have initially believed the incident could be contained without disclosure—only reporting after forensic investigators and legal counsel determined notification was unavoidable. A mature, 40-year-old company serving 700+ institutions, with access to legal counsel and forensic expertise, still delayed notification for 74 days while regulators expected 72-hour internal reporting.

Now think about the SaaS startup with seven full-time employees, no dedicated security team, no forensic retainer, and no experienced legal counsel on speed dial. When they detect suspicious activity, who makes the call about whether it’s “reportable”? Who determines the scope? Who tells them they’re legally obligated to notify client institutions? The barrier to “deciding” an incident doesn’t require disclosure is far lower when there’s no mature incident response program, no legal team pushing for transparency, and significant financial incentive to avoid reporting a breach that could destroy the company.

Here’s the reality that should concern every one of us: we lose visibility the moment data leaves our environment. We depend entirely on vendor detection capabilities, vendor judgment about disclosure obligations, vendor access to competent legal counsel, and vendor willingness to report incidents that could trigger contract penalties, damage reputation, or—in the case of startups—end the business entirely. The more concerning scenario is one we may never know occurred: a vendor concludes no disclosure obligation exists, and customer data is compromised without notification ever reaching us.

We don’t control the security maturity of SaaS platforms, but we’re fully accountable for the data we send to them. We can’t verify whether breaches occur, yet we bear full regulatory and reputational consequences when they do. This imbalance defines modern vendor risk.

Another major SaaS breach will occur—the question isn’t if, it’s when. The financial services industry is moving irreversibly toward cloud-based solutions and SaaS dependencies. We can’t reverse this trend, but we can choose how we govern it.

The Marquis breach demonstrates that standard assurance mechanisms—SOC 2 certifications, security questionnaires, contractual indemnification—are insufficient to prevent significant control gaps at vendors serving hundreds of regulated institutions. The cost of inadequate vendor governance gets measured in breach response, regulatory enforcement, litigation defense, credit monitoring expenses, and permanent reputational damage.

We can’t outsource accountability—only infrastructure. The strategic questions this breach raises—about risk tolerance, due diligence rigor, contractual requirements, and the balance between vendor selection speed and security visibility—don’t have simple answers. But they’re questions leadership must address, because the alternative is accepting that breaches like Marquis will continue, and that we’ll continue bearing the full consequences while vendors face minimal accountability beyond potential contract disputes.
