Tags: deployment · greenhouse · field-ai

The Demo Worked Perfectly. Then We Deployed It.

What two years in a greenhouse taught us about the gap between building AI and shipping it.

March 22, 2026 · FDF Labs
Everyone can build an AI dashboard. Give a developer two weeks, a dataset, and a cloud instance and they'll hand you something that looks impressive in a pitch meeting. Clean interface. Real-time charts. Simulated sensor data flowing across the screen. Investors lean forward. Stakeholders nod. Then you deploy it. And a lizard short-circuits your relay at 2pm on a Tuesday.
We're living this right now at FDF Labs. Our core R&D operation is a hydroponic greenhouse in Asunción, Paraguay — a real agricultural business supplying one of the top 20 restaurants in the world. A place where if the ventilation fails on a hot day, you lose the crop. No staging rollback. No incident report. Just dead plants. A lizard got into the relay housing this afternoon and took down the ventilation system. The misting pump is off. Temperatures are rising. The dashboard is screaming alerts. This is not a simulation. This is what nobody shows you in a pitch meeting.
When we started building our sensor and monitoring systems, the early prototypes worked exactly as designed. Temperature readings came in clean. Humidity logged correctly. The dashboard looked great on a laptop in an air-conditioned room. Then we put it in the greenhouse. The connectors corroded. Not because we used cheap components — because we underestimated what sustained humidity does to metal contacts over weeks, not hours. You don't see that in a demo. You see it at 6am when your sensor array goes silent and you're standing in a greenhouse trying to figure out why, coffee in hand, crop on the line.

The lizard was worse. One got into the relay housing and short-circuited the ventilation system. The entire architecture had to be redesigned — not the software, the physical architecture. New housing. Better sealing. Different relay placement. Nobody writes “assume lizard intrusion” in a requirements document. Nobody thinks to. Until they're holding a fried relay while the temperature climbs and the crop is on the line.
This is the gap the industry doesn't talk about honestly enough. Building an AI system that works in demo conditions is essentially a solved problem. The tools are good. The frameworks are mature. The cloud infrastructure is reliable. A talented team can mock up almost anything in a few weeks and make it look production-ready in a slide deck.

Deployment is a different discipline entirely. In the field you're dealing with condensation inside enclosures, power fluctuations your code has no fallback for, physical conditions that no amount of desk research prepares you for. The code that ran perfectly in testing throws an exception at 3am because nobody accounted for the sensor coming back online out of sequence after a power flicker. The dashboard that looked flawless on a MacBook stops updating because the edge device lost connectivity and nobody wrote a retry loop. These aren't edge cases. They're the job. The difference between a demo and a deployment isn't technical sophistication. It's whether you've ever had to waterproof something, rethink an enclosure design because of local wildlife, or sit with a system through its first real failure in real conditions.
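For readers wondering what “nobody wrote a retry loop” looks like in practice, here is a minimal sketch of retrying a flaky operation with exponential backoff and jitter. This is an illustration of the general pattern, not FDF Labs' actual code — the function names and parameters are assumptions.

```python
import random
import time


def with_retry(fn, max_attempts=5, base_delay=1.0, max_delay=60.0):
    """Call fn(), retrying on ConnectionError with exponential backoff.

    Hypothetical helper for illustration: an edge device that loses
    connectivity should back off and retry, not crash or hang forever.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            # Exponential backoff, capped, with a little jitter so
            # many devices reconnecting at once don't stampede.
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, delay * 0.1))
```

Wrapping every network call in something like this is unglamorous, but it is exactly the kind of code the demo never needs and the deployment always does.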
The connectors taught us to overspec every physical component by at least one grade above what conditions require on paper. The lizard taught us that any housing with a gap is an invitation. The 3am crashes taught us that fallback logic is the first thing you write, not the last — and that “the system will just restart” is not a fallback, it's a wish. None of this shows up in a pitch deck. None of it impresses anyone in a demo room. But all of it is the difference between a system that runs for a week and one that runs for years. We're in year two.
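What “fallback logic first” means concretely: a sensor rejoining after a power flicker may replay stale, out-of-order readings, and ingest code has to decide what to do with them up front. The sketch below drops any reading whose timestamp is not strictly newer than the last accepted one. The class and field names are illustrative assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class SensorStream:
    """Guard against out-of-order readings after a sensor reconnects.

    Illustrative sketch only: the real policy might buffer and
    reorder instead of dropping, but the point is that the policy
    exists before the first 3am failure, not after.
    """
    last_ts: float = float("-inf")
    accepted: list = field(default_factory=list)

    def ingest(self, ts: float, value: float) -> bool:
        # A device replaying buffered readings after a power flicker
        # can deliver timestamps older than what we already stored.
        if ts <= self.last_ts:
            return False  # stale or duplicate; reject
        self.last_ts = ts
        self.accepted.append((ts, value))
        return True
```

“The system will just restart” skips this decision entirely, which is why it is a wish rather than a fallback.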
The demo is easy. The deployment is the actual work. If you're building AI systems meant to operate in the real world — not in simulated environments, not in controlled demos, but actually in the field — the hardware is where humility begins. The software is almost never the problem. It's everything around the software that breaks. Anyone telling you otherwise is still waiting for their lizard. Now if you'll excuse us, we have a lizard to deal with and a temperature curve that isn't moving in the right direction.

Originally published on Substack