Concrete Results: How Lithko Is Applying AI to Productivity, Risk, and Decision Making

Real Use Cases from the Field — and a Practical Roadmap for Getting Started

Brown Edwards & Company | Winter Construction Webinar Series

A Practitioner's Perspective  

One of the things I appreciated most about our Winter Construction Webinar was that we did not just bring theory — we brought proof. After Geoff Marsh laid out the strategic landscape for AI in construction, Sandy Steiger, Director of Data Analytics at Lithko Contracting in Cincinnati, Ohio, took us inside a real organization doing the real work of building AI capabilities. Sandy has spent 22 years in data science and analytics, working across grocery retail, consumer packaged goods, and logistics before coming to construction. She also teaches at Miami University in Oxford, Ohio. Her perspective was equal parts grounding and inspiring.

Her message at the outset was refreshingly honest: Lithko is not an AI expert. They are in the beginning stages of this journey, just like most organizations. Anyone claiming they have fully figured out AI right now is probably not being straight with you. What Lithko has done is take a methodical, problem-first approach — and they have real use cases and early results to show for it.

What AI Actually Is — and Why It Matters to Get This Right  

Sandy reinforced something Geoff also emphasized: AI is not new. The first discussions of machine learning — a foundational element of artificial intelligence — go back to the 1940s. Statistical pattern recognition, regression modeling, algorithmic decision-making: these have been tools of data scientists for decades. What is new is the explosion of generative AI and large language models like ChatGPT and Claude, which have made AI accessible and visible to everyone.

The important thing to understand is that AI is not one thing. It is an ecosystem. Automation, pattern recognition, machine learning, generative models, agentic orchestration — these are all different tools within the broader AI umbrella, and knowing which one applies to your problem is a critical skill. As Sandy put it: AI is about machines being programmed to think and act like humans. It is not replacing humans, but it is designed to take the tedious, repetitive parts of human work off the table so people can focus on what they are genuinely better at — creativity, reasoning, complex decision making.

A key insight from Sandy: if you have ever run a regression analysis in Excel, you have done artificial intelligence. The goal is not to be intimidated by the label. The goal is to identify which form of AI fits the problem you are trying to solve.

The Framework: Define the Problem Before You Touch the Solution  

Every AI initiative at Lithko follows the same three-step foundation: define the problem, identify the right solution, and implement while measuring and iterating. Sandy was emphatic that most organizations get this backwards. They start with the solution — 'where can we use AI?' — instead of starting with the problem.

When a business stakeholder comes to Sandy's team with a request, her team works through five non-technical questions before any technology discussion happens. What is the business problem and why is it important? What does today's current state look like, and what makes it inadequate? What does success look like, and what one or two metrics will define it? What behaviors or processes need to change? And what should the future state look like once the solution is in place?

Sandy shared a memorable caution on measuring success: she once had a stakeholder come in with 32 metrics they wanted to track for a single initiative. Her response was firm — if you are measuring 32 things, nothing is important. Know the one or two outcomes that genuinely matter and build your initiative around achieving those.

On the data foundation: AI runs on data, and good AI requires good data. You will never have perfect data — no one does — but you need data of high enough quality to give you insights you can rely on. If you feed bad data into an AI system, you will get bad output. That is not a technology problem; it is a data discipline problem.

Three Types of AI — and When to Use Each  

Sandy walked through three categories of AI solutions that Lithko uses as a lens when identifying the right approach for a given problem.

Logic-based automation is the starting point for most organizations. This is the if-then rules engine: if X happens, then do Y. When inputs are clear and conditions are stable, logic-based automation produces consistent, repeatable, explainable results. Sandy noted that some AI purists might argue this is not 'true' AI — but for organizations just beginning their journey, automating tedious manual tasks is the perfect first step. It removes fear, builds trust, and frees people up to do better work. When an employee realizes they no longer have to spend ten hours a week on a task that a machine can handle, they become your best advocate for the next initiative.
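To make the idea concrete, here is a minimal sketch of what a logic-based automation rule looks like in practice. This is not Lithko's system; the scenario, field names, and thresholds are invented for illustration. The point is simply that when inputs are clear and conditions are stable, a plain if-then rule is consistent, repeatable, and fully explainable.

```python
# A logic-based automation rule is just a fixed if-then check.
# The fields and thresholds below are invented for illustration; a real
# rules engine would read them from configuration, not hard-code them.

def route_invoice(invoice: dict) -> str:
    """Apply fixed if-then rules to decide what happens to an invoice."""
    if invoice["amount"] > 50_000:
        return "escalate_to_controller"   # large invoices get human review
    if invoice["po_number"] is None:
        return "return_to_submitter"      # missing PO: send it back
    return "auto_approve"                 # clear inputs, stable rule

print(route_invoice({"amount": 75_000, "po_number": "PO-1234"}))  # escalate_to_controller
print(route_invoice({"amount": 1_200, "po_number": None}))        # return_to_submitter
print(route_invoice({"amount": 1_200, "po_number": "PO-1234"}))   # auto_approve
```

Every branch here can be read, audited, and explained to a skeptical employee — which is exactly why this category is the trust-building first step Sandy describes.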

Predictive AI is more complex. This is the statistical synthesizer — models trained to identify relationships and patterns in data, producing a score, a classification, or a probability that something will happen. Predictive AI takes time to build well, requires continuous model tuning, and demands high accuracy before organizations trust it enough to act on it. Lithko typically targets 90 percent accuracy or higher for models used in operational decision making, though 85 percent can be acceptable for directional insights. The payoff is that predictive AI takes deeply complex data and produces a single, clear output that a business leader can act on.
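The accuracy gate Sandy described can be sketched in a few lines. This is an assumption-laden illustration, not Lithko's evaluation pipeline: the predictions and labels are made-up placeholder data, and the two thresholds come straight from the 90 percent and 85 percent figures above.

```python
# Sketch: gate a predictive model on accuracy before trusting it
# operationally. Placeholder data; thresholds mirror the text above.

OPERATIONAL_THRESHOLD = 0.90   # trusted enough to drive decisions
DIRECTIONAL_THRESHOLD = 0.85   # acceptable for directional insight only

def accuracy(predicted: list, actual: list) -> float:
    """Fraction of predictions that match the actual outcomes."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

def deployment_tier(acc: float) -> str:
    if acc >= OPERATIONAL_THRESHOLD:
        return "operational"   # act on the model's output
    if acc >= DIRECTIONAL_THRESHOLD:
        return "directional"   # useful signal, not a decision driver
    return "keep_tuning"       # back to model iteration

preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
acc = accuracy(preds, labels)
print(acc, deployment_tier(acc))  # 0.9 operational
```

The single clear output at the end — a tier a business leader can act on — is the payoff the paragraph above describes.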

Generative and agentic AI rounds out the picture. Generative AI — tools like ChatGPT, Claude, and Grok — generates content, answers questions, and drafts documents from natural language prompts. Sandy was candid that she initially felt using generative AI in her data science role was somehow 'cheating.' She was wrong. These tools have made her significantly more productive and creative. They pressure-test ideas, offer new angles, and produce solid first drafts faster than any other method. The key is understanding that generative AI predicts answers — it does not know them — so fact-checking and validating outputs is essential. Agentic AI takes this further by orchestrating multiple AI agents together: routing a question to the right agent, having one agent's output feed another, and building toward a system that can handle complex workflows while keeping humans in the loop.

Use Case 1: Automating the Weekly Trade Hours Report  

The first use case Sandy walked through was not glamorous — and that was exactly the point. Lithko's senior leadership relies on a weekly trade hours report to gauge business performance: how much work is out in the field, how the organization is tracking over time. The problem was that a single team member was producing this report entirely by hand, running SQL queries, manually updating Excel files, and distributing them through a personal email. When that person was on vacation or sick, the report either did not get produced or that person had to log in from their time off.

Beyond the operational fragility, the report was inconsistently timed, not always trusted (updates and corrections came in hours later some weeks), and delivered in a static format that showed a single number rather than trends over time.

Lithko's team worked through the five-question problem definition with the stakeholders. Success was defined clearly: the report arrives in every recipient's inbox at the same time every week, the data is accurate and trusted, it comes from a shared team distribution list (not a single person's email), it is role-based so only the right people see the right data, and it is delivered through Power BI rather than Excel.

The solution was a fully automated pipeline that runs in the background and produces the report without anyone having to touch it. Sandy's team did put it through extended testing before activating full automation — they wanted to be certain the data was populating correctly before removing the human safety net. That patience is a model worth following.

Use Case 2: The Lithkopedia Chatbot  

Lithko operates with multiple Centers of Excellence (COEs) across 26 business units. These COEs run lean — three to four subject matter experts per center — but they are being inundated with questions from an organization of over 6,000 employees. In one COE focused on high-performance concrete, three subject matter experts were receiving a constant stream of emails, Teams messages, and phone calls ranging from very simple to highly complex questions. They were spending roughly 60 to 70 percent of their time on simple, repetitive questions — leaving too little time for the complex problems that genuinely needed their expertise.

The measure of success was inverted: the team wanted to spend more than 80 percent of their time on complex inquiries, not the simple ones.

The solution was Lithkopedia — a chatbot built on top of Lithko's unstructured data that serves as the first line of defense for questions. The chatbot answers routine, repetitive questions. Only when it cannot answer a question does it route the inquiry to the appropriate subject matter expert. The accuracy threshold was set at over 90 percent — the team was unwilling to deploy a chatbot that might give incorrect answers and drive bad operational decisions.
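The "first line of defense" pattern can be sketched as follows. To be clear, this is a toy illustration under invented assumptions: the knowledge entries and confidence scores are made up, and the real chatbot sits on top of Lithko's unstructured documents rather than a hard-coded table. What the sketch shows is the escalation logic: answer when confidence clears the bar, route to a human expert when it does not.

```python
# Toy sketch of a first-line chatbot that escalates low-confidence
# questions to a subject matter expert. Entries and scores are invented.

CONFIDENCE_BAR = 0.90  # the accuracy threshold described above

# Toy knowledge base: question -> (canned answer, model confidence)
KNOWLEDGE = {
    "what psi is standard slab mix": ("4,000 psi is the typical spec.", 0.97),
    "cold weather pour protocol": ("See the cold-weather placement checklist.", 0.94),
}

def answer(question: str) -> str:
    entry = KNOWLEDGE.get(question.lower())
    if entry and entry[1] >= CONFIDENCE_BAR:
        return entry[0]        # routine question: the bot handles it
    return "ROUTED_TO_SME"     # below the bar: a human expert answers

print(answer("Cold weather pour protocol"))      # canned answer
print(answer("Unusual admixture interaction?"))  # ROUTED_TO_SME
```

The orchestration vision described below is this same routing idea one level up: instead of routing to a human, a front-door agent routes to the right COE agent.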

The longer vision for Lithkopedia is full AI orchestration: building out agents for each COE and then having a single Lithko chatbot serve as the front door for the entire organization, routing questions automatically to the right agent behind the scenes. The user never needs to know which agent answered — they just get the right answer.

Use Case 3: SightSense — Predicting Safety Incidents Before They Happen  

The third use case was the most powerful. Lithko's safety team came to Sandy's group with a clear priority: keeping coworkers safe. They wanted to move from reacting to incidents to predicting and preventing them.

Their current state: territory safety leads rely on experience and gut feel to determine which job sites to visit. They log 'near hits' — moments where an incident almost occurred but didn't — as leading indicators of true incidents. They have years of OSHA recordables, first aid incidents, and near hits in their data. The success metric is straightforward: reduce the recordable incident rate.

The future state they envisioned was called SightSense. Using three years of historical incident data — including who was on site, what phase of the job was underway, overtime levels, weather, and many other variables — SightSense predicts the likelihood of a safety incident occurring at specific job sites and business units across the organization. Territory safety leads no longer rely solely on gut feeling to decide where to be. SightSense tells them where the risk is highest so they can be in the right place at the right time with the right information.

Lithko's goal for the future state is to make a safety outlook part of every daily project planning meeting for operations leadership. If the predicted likelihood of an incident is low, there may not be much to discuss. If it is high, the team can proactively talk through the specific protocols and actions needed to reduce that risk before it becomes a real injury. That is AI driving a genuine, measurable safety outcome.

How to Get Started — No Matter Where You Are  

Sandy closed with practical advice organized around the most common barriers organizations face.

If data quality is holding you back, stop trying to fix all the data at once. Pick one high-value problem, identify the five to ten data fields that matter most to it, and audit those fields. When data enters your systems incorrectly, the fix belongs at the source, not in a back-end patch. Assign a business-level data owner for every initiative — not a technical person, but the person who owns the data and is accountable for its accuracy.

If you are unclear on where to start, find your friction points. Where are people spending a disproportionate amount of time on tasks that do not add real value? Those are your best AI candidates. A useful exercise: run a workshop with your business leaders, have them bring their highest-friction workflows, recurring decisions, and repetitive tasks, and evaluate each one against three questions — is it repetitive, is it data-driven, and would a faster insight change the outcome? If the answer to all three is yes, you have an AI candidate.
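The three-question screen above is simple enough to write down literally. A minimal sketch, with invented workflow names, just to show that all three answers must be yes for a workflow to qualify:

```python
# The three-question AI-candidate screen: repetitive? data-driven?
# would a faster insight change the outcome? Workflow names are invented.

def is_ai_candidate(repetitive: bool, data_driven: bool,
                    faster_insight_changes_outcome: bool) -> bool:
    # All three answers must be yes.
    return repetitive and data_driven and faster_insight_changes_outcome

workflows = {
    "weekly trade hours report": (True, True, True),
    "one-off contract negotiation": (False, True, True),
}
for name, answers in workflows.items():
    verdict = "candidate" if is_ai_candidate(*answers) else "skip"
    print(name, "->", verdict)
```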

If security and governance concerns are holding you back, start by classifying your data. Most fear comes from not knowing what is sensitive. Establish clear guardrails: which tools are approved, what data can and cannot flow into AI systems, and how you are keeping humans in the loop. Your AI policy is the foundation here.

If you lack technical capability, do not start by hiring AI talent or buying tools. Understand your problem first. Then look at what your existing platforms already offer — many have AI capabilities you are not using. Only once you know what you are trying to solve should you build out the cross-functional team you need: a technical lead, a domain expert, and someone accountable for security.

And if cultural resistance is the obstacle, do not talk about replacement. Talk about augmentation. Start with the curious explorers — the people who are already excited about AI — not the skeptics. Let their early wins become the word of mouth that brings others along. Make the 'why' crystal clear from the beginning so people understand that AI is about freeing them to do better work, not about taking their job.

The Bottom Line  

What Lithko is demonstrating is exactly what Geoff Marsh described in theory: a methodical, problem-first approach to AI that builds genuine organizational capability over time. They are not trying to do everything at once. They are not chasing headlines. They are solving real friction points, measuring real outcomes, and building the trust and infrastructure that will allow them to do more.

The most important thing any construction organization can do right now is stop asking 'where can we use AI?' and start asking 'what problems are keeping our leaders up at night, and which of those problems can AI help us solve?' That shift in framing changes everything.

We are grateful to Sandy Steiger and Lithko Contracting for the transparency and generosity of sharing their journey with our clients. It is rare to get this level of real-world detail, and we hope it gives you a clearer picture of what practical AI adoption actually looks like in construction today.
