OFFICIAL PUBLICATION OF THE NEBRASKA SOCIETY OF CERTIFIED PUBLIC ACCOUNTANTS

2026 Pub. 8 Issue 1

AI’s Next Five Trends for 2026

What Organizations Need to Know


Working with a defense contractor in February, I was talking with one of their senior operations leaders—smart, practical, running a tight organization with thin margins and a proud reputation. He leaned in and said, “Chuck, I’m not trying to become a tech company. I’m trying to stop my team from making a bad decision with a tool they don’t fully understand.”

That sentence is 2026 in a nutshell.

AI is no longer a “digital initiative.” It’s becoming a daily operating reality—in customer communication, hiring, training, scheduling, compliance documentation, and even how criminals target your business. And that means leaders have to do two things at the same time:

  1. Capture the upside (speed, consistency, productivity, insight).
  2. Reduce the downside (errors, fraud, privacy leaks, reputational damage, legal exposure).

Here are the five trends I see shaping 2026, and what organizations should do about each—especially if you’re leading a small or medium-sized organization.

Trend 1: AI will become the new front office for small businesses.

In 2026, the “front office” won’t just be your receptionist, your inbox, or your office manager. It will be the invisible layer helping your team:

  • respond to inquiries faster
  • draft estimates and proposals
  • follow up on leads
  • summarize customer requests
  • write policies, checklists, and standard emails

This is the good news: speed and consistency become easier—even for small teams. The bad news: AI doesn’t understand your business the way your best people do. It produces confident language, whether it’s right or wrong.

Quick example: While working with an HVAC company in South Carolina, I asked for a recent copy of a proposal they planned to submit for a sizable contract. I ran the document through an AI tool for evaluation. Within moments, it flagged six apparent errors in the agreement, one of which, mismatched dates, could have voided it.

The company's CEO, who had stopped in to catch a portion of our meeting, was shocked. He was also livid: he had just received the document back from outside counsel, having paid them a tidy sum for their review. His comment: "Why do I need lawyers when AI does it better?"

While I cautioned him not to jump to conclusions, the truth is that AI is excellent at some things, and over time it will be relied upon more and more with little regard for verification. Think of it this way: we use a computer or calculator and assume it's right. AI takes that assumption to another level.

The ethical tension: customers can’t tell the difference between a human response and an AI-assisted one—and they will hold you accountable either way.

What Leaders Should Do Now

  • Decide where AI is allowed to draft—and where a human must approve.
  • Establish “no-go zones” (client-sensitive data, regulated language, legal commitments).
  • Train staff on a simple rule: AI can assist the writing; humans own the truth.

Trend 2: “AI-powered scams” will hit SMBs harder than ever.

Fraudsters have always targeted the easiest entry point: busy people, rushed decisions, and trust.

Generative AI makes scams faster to produce, more believable, and more personalized—exactly what criminals need to scale deception. The FBI has warned that criminals are using generative AI to facilitate fraud at greater scale and believability, and that AI is increasingly being used in cybercrime targeting individuals and businesses.1

This matters because many organizations still operate on “informal trust” systems:

  • “We’ve always paid invoices this way.”
  • “That email looked like the vendor.”
  • “It sounded like the boss.”

AI-assisted impersonation doesn’t have to be perfect. It only has to be convincing once.

Quick example: As a VP at a public company who also speaks and consults on AI ethics and fraud prevention, I was asked whether AI could defeat the expense-control mechanism that reviews all receipts for fraud. Interesting question. So, with the company's approval and blessing, I asked an AI tool to create a receipt for a particular expense that appeared on my report. (To be clear, the expense was legitimate, and I had the original receipt.) The AI produced a believable receipt, which was submitted. The "fake" receipt passed with flying colors.

This is a critical trend that deserves attention!

What Leaders Should Do Now

  • Add a “verification pause” to money movement: no urgent payment changes without a second channel (phone call to a known number, not the email thread).
  • Tighten vendor change controls.
  • Teach teams the new reality: polished language is no longer a sign of legitimacy.

Trend 3: AI agents will move from “chat” to “doing” work inside business systems.

In 2024-2025, many organizations experimented with AI as a chatbot: ask a question, get an answer.

Quick example: In 2025, Verizon moved toward having AI chatbots answer more than half of its customer service inquiries, and a major funeral provider began using chatbots to answer incoming calls for service details.

In 2026, the major shift is toward agentic AI—tools that don’t just respond, but can take actions across workflows: create tickets, update records, draft customer responses, pull reports, route approvals, and orchestrate tasks across systems. We’re already seeing major enterprise platforms partner to embed “AI agents” into business software and customer service workflows.2

This is where productivity can explode—and where risk can multiply.

Because the moment AI is allowed to “do,” not just “suggest,” you need clarity on:

  • permissions
  • audit trails
  • accountability
  • error correction
  • safety checks (what the system should never do)

What Leaders Should Do Now

  • Start with “human-in-the-loop” designs: AI drafts/actions require approval.
  • Require logging: decisions, data sources, and changes must be traceable.
  • Define ownership: someone is accountable for outcomes, even when AI touches the process.

Trend 4: The “AI Operating Guide” will become a legal and reputational necessity.

Let me say this plainly: the absence of an AI operating guide is becoming a liability.

Many organizations are already using AI informally—employees are experimenting, departments are adopting tools, vendors are slipping AI into platforms. And when something goes wrong, the organization can’t credibly say, “We had reasonable controls.”

In the U.S., one of the strongest practical anchors for responsible AI governance is the NIST AI Risk Management Framework and its Generative AI Profile, which provides structured guidance for identifying and managing AI risks.3

Globally, regulation is also tightening. The EU AI Act is phasing in progressively, with implementation extending into 2027. Even if you're not based in Europe, many vendors and partners will align to it—and its logic will influence expectations around transparency, risk classification, and oversight.4

A better way to say it: If you don’t define how AI is used in your organization, you’re letting risk define it for you.

What Leaders Should Do Now (The Practical Minimum)

Create a simple, usable AI Operating Guide that covers:

  • approved tools (and prohibited tools)
  • what data cannot be entered (client/member data, confidential info, regulated content)
  • disclosure rules (when AI assistance must be acknowledged)
  • human review requirements (what must be checked before sending/publishing)
  • decision ownership (AI advises; humans decide)
  • incident response (what to do if an AI output causes harm or a privacy issue)

This is not bureaucracy. This is protection.

If you need a sample to go by or require some help with a starting point for your organization, reach out to chuck@chuckgallagher.com—happy to help.

Trend 5: Proof, authenticity, and trust will become the new currency.

In the old world, trust was built on familiarity: a known email address, a familiar voice, a recognizable logo.

In 2026, trust will increasingly be built on verification:

  • Is this message authentic?
  • Is this image real?
  • Did this person actually say this?
  • Is this policy accurate—or AI-generated guesswork?

The stakes are especially high for associations, credentialing bodies, safety-sensitive industries, and professional services—because your value is credibility. And credibility can be damaged by one misattributed quote, one fake “announcement,” or one AI-generated mistake presented as fact.

What Leaders Should Do Now

  • Add lightweight authenticity practices (confirmed channels, secure document sharing, verified contacts).
  • Train teams on red flags for deepfakes and impersonation.
  • Treat “trust” as an operational system—not just a cultural value.

Pay Attention!

If you’re leading an organization trying to navigate AI responsibly, here’s what you might consider moving forward:

  • AI trend briefings tailored to your industry (no hype, real scenarios)
  • AI Operating Guide workshops that produce a usable policy and training plan
  • Ethics-first AI training that helps teams move faster without stepping into preventable risk
  • Fraud and impersonation readiness sessions so leaders can reduce exposure as scams evolve

I’m not trying to turn your organization into a tech company. I’m trying to help you become a trust company—because that’s what organizations will be competing on in 2026.

Five questions to spark discussion in your organization:

  1. Where is AI already acting like our “front office,” and what’s our quality-control plan?
  2. What is our most likely vulnerability to AI-powered scams—and who owns prevention?
  3. Which workflows should never be automated without human approval—and why?
  4. If we had to defend our AI use to a regulator, board, client, or member tomorrow, what would we point to?
  5. What does “trust” look like in an AI era—and how do we operationalize it?

Chuck Gallagher is a vice president at a public company and a professional speaker, author, and consultant specializing in business ethics, AI ethics, and fraud prevention. With a leadership background in sales and marketing and years of experience advising organizations across industries, Gallagher helps executives and teams make smarter decisions, strengthen trust, and apply emerging technologies responsibly. His work equips organizations to reduce risk, protect their reputations, and build cultures where integrity and innovation go hand in hand. Visit Chuck Gallagher’s website or contact him at (828) 244-1400 or chuck@chuckgallagher.com.

Catch Chuck Gallagher in person at the Nebraska Society of CPAs’ Not-for-Profit and Governmental Accounting Conference, June 8-9 at the Nebraska Innovation Campus Conference Center in Lincoln, Neb., for his presentation on “The AI Revolution: From Hype to Hands-On.”

  1. Criminals use generative artificial intelligence to facilitate financial fraud, Public Service Announcement I-120324, FBI Internet Crime Complaint Center, Dec. 3, 2024, https://www.ic3.gov/PSA/2024/PSA241203
  2. OpenAI and ServiceNow strike deal to put AI agents in business software, The Wall Street Journal, Jan. 20, 2026, https://www.wsj.com/articles/openai-and-servicenow-strike-deal-to-put-ai-agents-in-business-software-57d1da5c
  3. AI Risk Management Framework, National Institute of Standards and Technology Information Technology Laboratory, accessed Feb. 11, 2026, https://www.nist.gov/itl/ai-risk-management-framework
  4. Timeline for the implementation of the EU AI Act, AI Act Service Desk, European Commission, accessed Feb. 11, 2026, https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act
