Managing AI Risks and Preparing Your Team for the Future
As we explored AI implementation on our Manufacturing Minute podcast, I asked Randy Lowman about the critical concerns every finance leader faces: what risks should we keep in mind when using AI with sensitive data, and how do we prepare our teams for an AI-enabled future?
Editor's Note: This blog post is adapted from a transcript of our Manufacturing Minute podcast episode featuring Randy Lowman of Lake Turn Automation. The content reflects the conversational nature of the original recording.
The Biggest Risk: Uninformed Use
Randy's answer surprised me. The biggest risk isn't the technology itself—it's uninformed use.
The Shadow AI Problem
Randy introduced me to a term I hadn't heard before: shadow AI. This happens in organizations without guidelines or policies, where employees naturally gravitate toward free consumer tools like ChatGPT or Claude. In doing so, they may accidentally share sensitive data. If that data includes financials or payroll information, you could be facing a reportable breach and serious trouble.
This is why every organization needs a clear AI usage policy. Surprisingly, Randy estimates that only 30 to 35 percent of organizations have one in place. If you don't have a policy yet, that should be your first priority.
Building Your AI Framework
Once you have a usage policy, you need an approved vendor list: a whitelist of tools employees are allowed to use. Randy points to business-grade options like OpenAI Business or Enterprise.
The key is this: if you don't give your employees an approved solution, they'll use whatever they can find on their phone or computer. It's better to provide a safe path forward than to try to prevent all AI use.
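One way to make that whitelist usable is to capture it as data rather than as a buried PDF, so scripts and onboarding checklists can reference a single source of truth. The sketch below is purely illustrative; the tool names are placeholders for whatever your IT team actually approves.

```python
# Hypothetical sketch: an AI usage policy captured as a simple allowlist.
# Tool names are placeholders, not endorsements.
APPROVED_AI_TOOLS = {"openai-business", "openai-enterprise", "copilot"}
BLOCKED_AI_TOOLS = {"consumer-chatgpt", "consumer-claude"}

def check_tool(tool_name: str) -> str:
    """Classify a tool against the usage policy."""
    if tool_name in APPROVED_AI_TOOLS:
        return "approved"
    if tool_name in BLOCKED_AI_TOOLS:
        return "blocked"
    # Anything unlisted stays unapproved until IT reviews it.
    return "needs-review"

print(check_tool("consumer-chatgpt"))  # -> blocked
print(check_tool("some-new-ai-app"))  # -> needs-review
```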
Then follow up with team training—not just on the technology, but on data sensitivity and appropriate use. Every employee should know:
- Which tools are safe to use
- Which tools they must avoid
- Which data are off limits
- How to handle sensitive data when necessary
The Traffic Light System for Data
Randy shared a practical framework for categorizing data when using AI: green, yellow, and red. This applies when using business-grade AI tools like Copilot, OpenAI Business or Enterprise, or Claude Teams—not free consumer versions.
Green Data (Safe for Use): This is the bulk of what you'll use AI for. It includes PDFs, case studies, templates, and policies, as well as questions about GAAP or finance process improvement. This is generally available information that carries minimal risk.
Yellow Data (Proceed with Guardrails): This is where it gets interesting for finance teams. You might want AI to look at contracts, client financials, or proposals. You can do this with enterprise tools, but Randy recommends sanitizing all identifiable information first. For example, run financials through AI but remove company names and other identifying details. You can add them back when you're done. With proper controls and guidance, yellow data can be used safely.
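Here is a minimal sketch of what that sanitize-then-restore step could look like. The client names are invented, and the AI call is left as a placeholder for whatever approved, business-grade tool you use.

```python
# Illustrative sketch: strip identifying names before sending yellow data
# to an approved AI tool, then put them back in the response.
import re

def sanitize(text: str, names: list[str]) -> tuple[str, dict[str, str]]:
    """Replace identifying names with neutral placeholders; return the mapping."""
    mapping = {}
    for i, name in enumerate(names, start=1):
        placeholder = f"[COMPANY_{i}]"
        mapping[placeholder] = name
        text = re.sub(re.escape(name), placeholder, text)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Put the real names back into the AI's output."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

prompt, mapping = sanitize(
    "Summarize the margin trends for Acme Fabrication versus Borealis Tooling.",
    names=["Acme Fabrication", "Borealis Tooling"],
)
# prompt is now safer to paste into a sanctioned, business-grade tool.
# ai_response = approved_ai_tool(prompt)   # placeholder for your approved tool
# final = restore(ai_response, mapping)
```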
Red Data (Off Limits): This includes personally identifiable information (PII) such as W-2s, pay stubs, and bank account numbers, along with privileged and otherwise confidential information. It should stay in-house, regardless of what tier of enterprise AI you subscribe to.
To use AI with red data, you need a custom AI solution. These can be built within Microsoft environments and can even be HIPAA compliant, but they require an IT team or a specialized firm to implement.
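To give a flavor of what "custom" means in practice, here is a minimal sketch, assuming your IT team has already provisioned a private Azure OpenAI deployment. The endpoint, deployment name, and environment variables are placeholders, and compliance (HIPAA or otherwise) comes from how the environment is configured and contracted, not from the code itself.

```python
# Hypothetical sketch: calling a privately provisioned Azure OpenAI
# deployment instead of a public consumer app. All names are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # your private endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="finance-gpt4o",  # whatever deployment name your IT team chose
    messages=[{"role": "user", "content": "Draft a variance analysis outline."}],
)
print(response.choices[0].message.content)
```

The point is architectural: prompts go to an endpoint your organization controls, rather than to a public consumer app.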
The Over-Trust Problem
Beyond data security, there's another significant risk: over-trusting AI output. AI can sound extremely confident while being completely wrong. Randy noted that even with new training criteria, ChatGPT is still wrong 18 to 20 percent of the time.
This is why you need a human in the loop. Don't blindly trust AI results. As the saying goes: trust, but verify.
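One lightweight way to build that verification in, sketched here with entirely hypothetical record types, is to route every AI-extracted figure through a human sign-off step before it touches the books:

```python
# Hypothetical sketch: hold AI-extracted figures for human sign-off
# before anything is posted. "Trust, but verify" as a review queue.
from dataclasses import dataclass

@dataclass
class AIExtraction:
    field: str            # e.g., "invoice_total"
    value: float
    source_document: str  # where a reviewer can check the original

def review(extractions: list[AIExtraction]) -> list[AIExtraction]:
    """Return only the items a human has confirmed against the source."""
    approved = []
    for item in extractions:
        answer = input(
            f"AI read {item.field} = {item.value} from {item.source_document}. "
            "Matches the document? [y/n] "
        )
        if answer.strip().lower() == "y":
            approved.append(item)
    return approved
```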
The Cultural Risk
Randy identified a cultural risk that exists on two extremes. On one end, some organizations ban AI completely. This just drives usage underground, where employees use unapproved tools without oversight. On the other end, some organizations establish no guidelines at all and let AI use run wild. This promotes shadow AI and is equally risky.
The right answer is in the middle: clear policies, approved tools, and proper training. This transforms AI from a liability into a competitive advantage.
Our Approach at Brown Edwards
At Brown Edwards, we take AI use very seriously. We're extremely conscientious about client information. We train our teams to be thoughtful about what they're putting into chat windows.
The reality is simple: once information goes into an AI system, it's in there. It can't be undone. This awareness and caution are essential.
Preparing Your Finance Team for an AI-Enabled Future
Beyond managing risks, we need to think about how to prepare our teams for long-term success with AI.
AI Isn't Replacing People—It's Redefining Roles
Randy immediately addressed the concern I hear most often from finance teams: will AI replace us? His answer was clear and reassuring. AI isn't replacing people, but it is redefining roles.
There's a phrase circulating in our profession: "AI won't replace accountants, but accountants that use AI will." I think that captures the reality perfectly.
The finance professionals who will thrive are those who know how to pair their judgment and deep understanding of debits, credits, and financial principles with this new breed of tools. We bring something AI cannot: professional judgment, ethical reasoning, and contextual understanding of the business.
Bridging the Gap
Here's an interesting insight Randy shared: AI tools don't come with traditional user manuals. They're built by data scientists who may not understand business processes. This creates a huge need for professionals who can bridge that gap—people who understand both finance and technology.
This is actually great news for finance professionals. Analysis, control, and precision are our core strengths, and they form an excellent foundation for understanding where AI works, where it doesn't, and how to point it at repetitive work so we can spend more time advising, interpreting, and improving processes.
There will be tremendous demand for these bridge-builders over the next decade and beyond.
Building AI Literacy
You don't need to become a data scientist, but you do need people who understand the technology well enough to use it responsibly and validate its results. This is all about creating a learning culture.
Randy emphasized starting with small teams and pilot projects, then sharing the wins. This builds confidence and capability across the organization without overwhelming anyone.
A Top-Down Initiative
AI in an organization is a top-down initiative. It starts with the executive team and works its way down. This is different from many technologies in the past that bubbled up from IT departments.
This means finance leaders need to be actively involved in AI strategy and implementation. We can't delegate this entirely to IT or wait for someone else to figure it out.
Creating a Culture of Innovation
Perhaps the most challenging aspect for many finance teams is cultural. We have to create a culture that's tolerant of innovation and failure.
I'll be honest—this is hard for us accountants. When something doesn't balance, that's a problem. Failure traditionally equals bad outcomes in our world. But with AI, many experiments won't work, and that's actually okay.
Randy emphasized that failure is how learning happens. It's how you discover what AI can and can't do, which enables you to make informed decisions or test again in twelve months to see if the technology has improved.
Remember: Progress Over Perfection
Randy closed with advice that resonated deeply with me: the goal isn't perfection, it's progress.
Finance teams need the freedom to explore, learn, and adapt. That's what a future-ready finance team looks like as we move into the next several years.
It never hurts to share an idea about how AI could improve a process. You never know where that suggestion might lead. I know many accountants worry whether AI will replace their jobs. I don't think any of us believe that's the case—we'll need accountants for years to come.
What AI does is make our jobs easier and let us focus on the areas where we should really be focused: areas requiring human intervention, judgment, and strategic thinking. Those capabilities remain uniquely human and uniquely valuable.
The Bottom Line
Managing AI risks and preparing for the future aren't separate challenges—they're two sides of the same coin. With clear policies, proper training, approved tools, and a culture that embraces learning, you can harness AI's benefits while protecting your organization and your clients.
The finance professionals and teams that embrace this approach will not only survive but thrive in an AI-enabled future. They'll find their work more rewarding as they spend less time on repetitive tasks and more time on strategic, value-added activities that leverage their unique human capabilities.
Thank you for following this series from our Manufacturing Minute podcast. I'm grateful to Randy Lowman for sharing his insights, and I'm excited to continue bringing you conversations about the trends and topics shaping the world of manufacturing and finance.