Commercial real estate (CRE) finance is operating in a new risk environment where fraud exploits both human behavior and technology. Fortunately, the sector is showing the will to move away from checks, leaning into digital payments as part of a mission to root out payments fraud in 2026.
In a panel discussion at the 11th annual U.S. Bank Commercial Real Estate Treasury Conference, experts traced how B2B payment fraud is changing and what's working to stop it. Discussion centered on a rundown of fraud typologies (business email compromise, or BEC; account takeover, or ATO; vendor impersonation; deepfakes; ransomware), then turned to layered controls and new techniques to reduce losses in CRE payments.
Panelists agreed that fraud must be treated as an ongoing risk, not a series of isolated incidents. That may sound obvious, but it isn't yet universal practice. With a cyberfraud "fog of war" causing confusion and obscuring thieves' activities, neutralizing threats is far more difficult.
The reality is that CRE payments live on a hybrid attack surface spanning email, SMS, video, collaboration tools, and treasury portals. This is paying off handsomely for ambitious scammers patiently using social engineering and multi-step deception. “Cybersecurity breaches are helping fraudsters have that global reach,” noted Charles Banks of U.S. Bank’s Information Security Services.
Human Behavior and the Rise of Social Engineering
Bottomline's Katie Elliott pointed to the surge in remote devices and approval workflows as an area in need of better controls. She also cited the rise of newer fraud forms like "quishing" (malicious QR codes that reroute payments and expand entry points) as an example of how relentlessly fraudsters exploit AI, especially when payers have little direct contact with payees.
“Our reliance on doing things without human contact has allowed social engineering to skyrocket,” she said. For CRE payables groups, this shift shows up as rushed vendor changes and very convincing (but often fake) documentation.
Consider how generative AI is multiplying both the volume and variety of attacks. The panel walked through a real-life deepfake video meeting that tricked a well-run organization into authorizing a large transfer to a fraudulent account. Attackers pair a pretext email with "validation" on live video, where cloned voices and faces remove any remaining doubt.
“If you’re online, it can be faked,” Banks said.
Building Layered Defenses Around Identity and Behavior
Demonstrating fraud’s effectiveness at scale, U.S. Bank’s Kasia Harvell described one enterprise-grade fraud campaign against a treasury management platform used by multiple institutions. A faux fraud alert, branded to perfection, harvested user credentials through a fake linked page, and immediately attempted logins across targets. By the time it was uncovered, roughly 190 mule accounts were already staged for cash-out.
“It just illustrates how sophisticated that particular threat actor was,” she said, adding that fraud is now highly fragmented, with specialized roles spread across the fraud landscape.
An effective operational response to fraud attacks in 2026 is layered defenses organized around behavior and identity. Banks emphasized a risk-based approach that mixes cyber threat intelligence, dark web monitoring, strong authentication, and embedded security within business lines. But the human mind is decisive in the last mile of a transaction. Referring to the audience, he commented, “You are all endpoints in that security process.”
Elliott stressed that urgency is the attacker’s favorite trigger, so teams should normalize micro-pauses in high-value approval paths. She framed it simply: “Slow down to speed up.” Pausing makes space for out-of-band callbacks and second factor checks that aren’t dependent on a single channel. And “don’t give away those multifactor codes,” she said.
Vendor Impersonation, Ransomware, and Payments Hygiene
Impersonation has moved to the fore in many fraud cases because it takes longer to detect. The panel advocated disciplined vendor change management and verified callbacks to trusted, known-good contacts, not to numbers or emails inside the request.
Elliott also highlighted outsourced accounts payable (AP) models that include bank account authentication and structured change workflows. "Identify where you're weak, and find those partnerships," she said.
Ransomware remains catastrophic, but it often begins with small lapses. Charles Banks revisited a high-profile incident where social engineering of an IT help desk led to a password reset, lateral movement, and operational lockdown. The cost was immediate and severe, both in cash and capacity. "It was a behavioral doorway," he said. The lesson for CRE is that a single weak verification step can escalate into enterprise-wide risk, and quickly.
AI’s double-edged impact was a recurrent theme in the discussion. Offensively, it raises the baseline of impersonation across email, voice, and video, increasing the burden on identity proofing. Defensively, it can triage “hundreds of millions of events,” as Banks put it, allowing teams to focus on meaningful anomalies.
Success, the panel agreed, depends on rolling out AI with the same risk-based discipline used for cloud adoption: controls first, scale second.
Low-Tech Controls that Block High-Tech Scams
Harvell emphasized shared secrets and relationship signals as resilient low-tech controls that AI can’t easily spoof. For time-sensitive approvals, code words known only to the counterparties cut through well-crafted impostors. “[Use] something that only the two of you know,” she said. It’s a human method for blocking sophisticated social engineering.
Hybrid work continues to expand the physical and digital attack surface. Elliott recounted seeing travelers leave unlocked devices unattended on planes (a small lapse with outsized implications when combined with weak verification). Device hygiene, screen locks, and cautious handling of voicemail and public bios reduce impersonation fodder. Harvell warned that just “five to 10 seconds of your voice” can enable convincing deepfakes.
There was also accord on steps CRE firms can take to detect fraud, even AI-enabled fraud, as follows.
The 5-Point CRE Payables Fraud-Fighting Checklist
- Verification First: For any vendor bank detail change or urgent payment request, require an out-of-band callback to a validated number already on file. Never trust numbers or links provided in the request.
- Human-powered MFA: Implement code words or shared secrets for high-value approvals; if anything feels rushed or unusual, re-verify on a different channel. This thwarts deepfake-assisted step-ups.
- Train for Omni-Channel Threats: Simulate attacks across email, voice, SMS, QR, and video so teams recognize social engineering in every medium. “You absolutely cannot protect yourself from what you don’t know,” Elliott said.
- Third-Party Risk Rigor: Evaluate partners for identity and access controls, change management, and incident response readiness. As Banks framed it, ensure third parties are as safe as you are.
- Redefine ‘Sensitive Data’: Protect one-time passcodes, voice samples, video presence, and org charts the way you protect bank account numbers. Lock down voicemail and minimize public details that enable impersonation.
What “Good” Looks Like in 2026
Moderator Andy Sullivan of Bottomline pressed the panel on what constitutes “good” CRE fraud defenses in today’s threat environment. Pragmatism ruled, as panelists reiterated a layered, risk-based posture blending technology, disciplined procedures, and education.
Panelists noted that the objective isn’t perfect prevention; it’s lowering the probability and impact of fraud while sustaining the pace of business. As Elliott noted, consistency beats heroics, and small, repeatable controls prevent big failures.
For CRE payments professionals, fraud is persistent, multichannel, and increasingly AI-enabled. But successful fraud still hinges on rushing people past verification; bad actors count on it. That being so, a winning model is to slow the riskiest steps, verify identity through a separate trusted channel, harden vendor changes, and deploy AI to augment (not replace) human judgment.
Harvell closed by restating an earlier point: CRE operators must "consider fraud to be a risk, not an event." With that mindset, CRE treasury and payments teams can meaningfully reduce losses and support operational growth in 2026.