What I Learned Running an AI Integration Project
Lessons from leading a Google Vertex AI integration into a live enterprise product — what went smoothly, what didn't, and what I'd do differently.
I recently led an AI integration project using Google Vertex AI for an enterprise client. The goal was to embed machine learning predictions into an existing product — not build a standalone AI tool, but weave AI into something people were already using every day.
Here's what I learned.
The hardest part isn't the technology
Everyone assumes the hard part of an AI project is the model. It's not. The models on platforms like Vertex AI are genuinely impressive and well-supported. The hard part is everything around the model — the data pipelines feeding it, the business logic deciding when to show predictions, and most of all, getting users to trust and act on what the AI tells them.
We spent maybe 20% of the project on the model. The other 80% was data quality, integration work, UX decisions, and change management.
Data quality will surprise you
The data we thought was clean wasn't. Fields that were supposed to follow one consistent format turned out to have dozens of edge-case variants. Historical records had gaps nobody knew about. We had to budget two full sprints just to clean and validate the training data before we could do anything meaningful with it.
My advice: before any AI project kicks off, do a proper data audit. Not a quick look — a real audit where someone digs into the actual records. What you find will reset your timeline expectations.
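A "real audit" can start smaller than people expect. Here's a minimal sketch of the kind of per-field profiling I mean: missing rate, distinct values, and format conformance. The field names and the regex are hypothetical stand-ins, not our client's schema.

```python
import re
from collections import Counter

def audit_field(records, field, pattern=None):
    """Profile one field: missing rate, distinct values, and (optionally)
    how many present values fail an expected regex format."""
    values = [r.get(field) for r in records]
    missing = sum(1 for v in values if v in (None, ""))
    present = [v for v in values if v not in (None, "")]
    report = {
        "field": field,
        "missing": missing,
        "distinct": len(set(present)),
        "top_values": Counter(present).most_common(3),
    }
    if pattern:
        rx = re.compile(pattern)
        report["format_violations"] = sum(
            1 for v in present if not rx.fullmatch(str(v))
        )
    return report

# Hypothetical records, standing in for "supposedly clean" production data
records = [
    {"customer_id": "C-001", "region": "EMEA"},
    {"customer_id": "c001",  "region": "emea"},  # inconsistent casing/format
    {"customer_id": "",      "region": "EMEA"},  # gap nobody knew about
]

print(audit_field(records, "customer_id", pattern=r"C-\d{3}"))
```

Run this over a sample of real records, not a data dictionary. In our case it was exactly this kind of loop that surfaced the edge cases the documentation swore didn't exist.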
Set up a model governance process from day one
Who decides when a model is good enough to deploy? Who monitors it after launch? Who gets alerted if predictions drift? We figured these things out mid-project, which created confusion and delayed sign-off.
Define your model lifecycle process — training, evaluation, deployment, monitoring — before you start building. It's much harder to retrofit governance later.
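To make "who gets alerted if predictions drift" concrete: even before you adopt a full monitoring product, a governance process can start with a check as simple as comparing the live prediction distribution to what you signed off on at evaluation time. This is an illustrative sketch, not our production monitor, and the 10% tolerance is a made-up number you'd set with your own stakeholders.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class DriftCheck:
    """Minimal post-deployment check: flag when the live positive-prediction
    rate moves beyond a tolerance from the rate seen during evaluation."""
    baseline_rate: float
    tolerance: float = 0.10  # illustrative, not tuned

    def check(self, recent_predictions):
        rate = mean(1.0 if p else 0.0 for p in recent_predictions)
        drifted = abs(rate - self.baseline_rate) > self.tolerance
        return {"live_rate": round(rate, 3), "drifted": drifted}

# Signed off at a 25% positive rate; live traffic is suddenly at 75%
check = DriftCheck(baseline_rate=0.25)
print(check.check([True, True, True, False]))
# → {'live_rate': 0.75, 'drifted': True}
```

The point isn't the statistics (a real setup would watch input features too, and Vertex AI has managed model monitoring for this). The point is that "drifted: True" must already have a named owner and an agreed response before launch.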
Explain the AI to your stakeholders in plain terms
Stakeholders want to know: "Will this make us look stupid if it gets it wrong?" That's a fair question. We answered it by being upfront about confidence thresholds, showing historical accuracy on real data, and building in a human review step for low-confidence predictions.
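The human review step is simpler than it sounds. In essence it's a routing rule: predictions above a confidence threshold flow through automatically, everything else goes to a reviewer queue. A minimal sketch, with an illustrative 0.85 threshold rather than the one we actually negotiated per use case:

```python
def route_prediction(label, confidence, threshold=0.85):
    """Auto-apply a prediction above the confidence threshold;
    otherwise queue it for a human reviewer. Threshold is illustrative."""
    if confidence >= threshold:
        return {"label": label, "action": "auto_apply"}
    return {"label": label, "action": "human_review"}

print(route_prediction("approve", 0.92))  # confident → auto-applied
print(route_prediction("approve", 0.61))  # uncertain → routed to a person
```

Being able to show stakeholders this rule, and let them argue about the threshold, did more for trust than any accuracy chart we produced.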
The more transparent you are about how the model works and where it struggles, the more trust you build. Never oversell AI accuracy — it will come back to bite you.
The payoff is real
Despite the challenges, the integration delivered measurable value. Prediction accuracy was good enough to reduce manual review time significantly, and users adopted the feature faster than expected once they understood it.
AI projects are harder to deliver than standard software projects. But when they work, they have a compounding impact that's hard to get any other way.
Want to discuss this or work together?
Get in Touch