Photo to verified data, instantly
Deep Purple built a mobile app that turns a photograph of completed construction work into a verified, GPS-stamped measurement with ±3% accuracy. The operative takes a photo, draws an outline, and submits. Two minutes. The data is in the system immediately. Progress reports, billing, quality review, and productivity tracking update automatically. No paper dockets. No back-office reconciliation. No week-long delay.
Deep Purple developed a custom React Native mobile app using OpenCV and ArUco marker calibration for an Irish construction company to automate field measurement, progress reporting, and quality oversight across multiple sites.
Picture this. It is Tuesday morning. You manage a specialist construction company. You have 30-plus operatives spread across six or seven active sites in three countries. They are doing heritage and infrastructure work using traditional construction methods. Specialist materials. Skilled tradespeople. Competitive tenders on measured works contracts with tight margins.
Every operative records their daily output on paper. Triplicate docket books. They measure what they built with a tape measure, write it down, and submit it to the office. Back office staff reconcile these dockets, usually days later. Sometimes a week.
The measurement itself is quick. The problems are everything that happens after it.
Tape measure readings on irregular surfaces carry significant error. When the company compared docket totals against verified remeasurements on completed sections, the variance was consistently 15-20%. On a measured term contract where payment is tied directly to meterage, that kind of error means money left on the table or disputes with the client's quantity surveyor.
But accuracy is only half the problem. The real pain is the delay. Dockets sit in pockets, in vans, in site offices. They arrive at head office in batches. Someone has to enter them, cross-check them, chase the missing ones, resolve discrepancies. By the time the data is usable, the working week it describes is already over. Problems that could have been addressed on Tuesday are only visible on Friday.
Project managers have no way to know which sites are on track without driving out to check. Client progress reports are compiled manually from reconciled dockets, not from live data. Billing is based on reconstructed figures, not verified evidence. Quoting for new work is based on spreadsheets and experience, not verified historical data.
Disputes over measured quantities are common and time-consuming. Operatives are paid based on area completed, so inaccurate measurement means unfair pay. When weather stops work, there is no evidence to back up a lost-day claim.
The business was flying without instruments. The managing director knew the data existed inside the business. Every day, every operative, every site was generating information about productivity, materials, weather impact, and quality. None of it was being captured in a way that could be used.
For more on how Deep Purple approaches computer vision for construction and field operations, see our service page.
We looked at what was available. Nothing fit.
Most construction software is designed for main contractors managing subcontractors on large sites. This company is the specialist subcontractor. Their needs are different. They need to measure output in the field, across multiple remote sites, often with poor or no mobile connectivity.
Generic measurement apps assume controlled conditions. A construction site in February, on an exposed stretch of motorway or a heritage building in a rural area, is not a controlled condition.
The workforce is multi-lingual. Many operatives do not have English as a first language. Any system that relies on typing, reading instructions, or navigating complex menus will fail in the field. It has to be simple. Big buttons. Minimal text. Take a photo. Draw an outline. Submit.
No off-the-shelf tool could do what the client needed: let an operative outline an area on a photograph and get an accurate measurement from it. That sounds simple. It is not. To calculate area from a photograph, you need a reference object of known size in the image. Even with a reference object, accuracy depends on lighting conditions, the angle the photo was taken from, surface irregularity, and whether the camera can detect the reference reliably. This is a genuine engineering problem, not a feature gap in existing software. On top of that, any solution had to work offline, because these operatives work in places where a phone signal is a luxury.
And measurement was only part of the problem. There was no system available that could take field submissions and turn them into verified progress reports, quality oversight, and billing data automatically. The tools that exist handle one piece. None of them connected measurement to reporting to quality to payment in a single workflow.
There was also nothing that gave operatives visibility of their own earnings. These are people paid per square metre. They want to know where they stand: how much they have completed this week, what that means for their pay, and whether they are ahead or behind their own pace. No off-the-shelf construction tool provides that. The system we built does.
The core idea is simple. An operative photographs the area they completed that day. They place a small calibration marker flat against the surface. They draw an outline around the area with their finger on the phone screen. The system calculates the area from the photograph using the marker as a known reference for scale. Two minutes. The data is in the system.
The calibration marker is an ArUco marker: a 200mm square printed pattern that OpenCV can detect automatically. Because the marker is exactly 200mm on each side, the system calculates how many pixels equal one millimetre. From there, it computes the area inside the drawn outline.
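The scale step can be sketched in a few lines, assuming the marker's four corner pixels have already been detected (for instance by OpenCV's ArUco detector). The function name is illustrative, not the production code:

```python
import math

def mm_per_pixel(marker_corners_px, marker_size_mm=200.0):
    """Average the four detected edge lengths of the (square) marker to get
    a single scale factor. Assumes a roughly face-on photograph; perspective
    distortion is handled separately."""
    n = len(marker_corners_px)
    edge_lengths = []
    for i in range(n):
        (x1, y1) = marker_corners_px[i]
        (x2, y2) = marker_corners_px[(i + 1) % n]
        edge_lengths.append(math.hypot(x2 - x1, y2 - y1))
    avg_edge_px = sum(edge_lengths) / n
    return marker_size_mm / avg_edge_px
```

A marker that spans 100 pixels per edge, for example, yields a scale of 2mm per pixel.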
This approach was chosen deliberately over more complex alternatives. The operative's workflow does not depend on connectivity. They can capture, annotate, and queue submissions offline. The area calculation itself runs on a lightweight Python microservice when the submission syncs to the backend. No heavy ML inference. No GPU infrastructure. No dependency on real-time cloud processing at the point of capture. And the calculation is deterministic. The same photograph always produces the same result. For a system that determines payment, that matters.
The operative opens the app, takes a photograph, places the marker, draws the outline, selects the site from a dropdown, and submits. GPS coordinates and a timestamp are captured automatically. The submission syncs to a Node.js backend. The Python microservice detects the ArUco marker, calculates the scale, and computes the area. Processing happens in the background. The operative gets an immediate confirmation and moves on to the next section.
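Under the hood, the "computes the area" step reduces to the shoelace formula over the drawn outline, converted to square metres via the marker-derived scale. A minimal sketch (function name illustrative):

```python
def outline_area_m2(outline_px, mm_per_px):
    """Shoelace formula over the operative's drawn outline (pixel
    coordinates), scaled into square metres. Deterministic: the same
    outline and scale always produce the same area."""
    total = 0.0
    n = len(outline_px)
    for i in range(n):
        x1, y1 = outline_px[i]
        x2, y2 = outline_px[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    area_px2 = abs(total) / 2.0
    # px^2 -> mm^2 -> m^2
    return area_px2 * (mm_per_px ** 2) / 1_000_000.0
```

At 2mm per pixel, a 500-by-500-pixel square outline works out to exactly one square metre.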
Here is what changes for the business. Every submission updates the project dashboard instantly. At the end of every working day, the company knows exactly how much was done, on which site, by which operative. Progress reports are not compiled from paper at the end of the week. They are live. Client billing data is based on verified, photo-evidenced measurements, not reconstructed docket totals. Someone in the back office can review the quality of work from submitted photographs the same day it was completed, without visiting the site. And the app includes a feedback function, so operatives can flag issues (materials, conditions, access problems) that go straight to the office in real time.
Reviewers see every submission on a web dashboard with the calculated area, the photograph, the GPS location, and the timestamp.

**ML depth estimation (PyTorch).** We investigated using machine learning models to estimate depth and surface area from photographs. Too heavy. These models need significant processing power and, in most practical deployments, cloud connectivity. An operative standing on a scaffold in a field with no phone signal cannot wait for a cloud API to respond. ArUco markers with OpenCV achieve the accuracy we needed without any of that overhead.
**Large vision models (Depth Anything, DiffusionEdge, PiDiNet).** We prototyped with several large vision models during early development. We found no meaningful improvement over the marker-based approach for our constraints, and the computational requirements were prohibitive for mobile deployment. We removed them.
**External AI APIs (Replicate and similar).** We rejected any dependency on external services for core measurement functionality. The system must work without internet. Full stop. No external API calls for anything critical.
**In-app messaging.** We considered building a chat system into the app. We rejected it. If an operative needs to talk to their project manager, they make a phone call. Building a messaging system adds complexity, development time, and maintenance overhead to solve a problem that a phone call already solves. Simpler is better.
The system was delivered in three phases.
**Phase 1: Proof of concept.** The mobile app, the CV measurement engine, the backend, and the reviewer dashboard. This is what the hard metrics below are based on. The POC proved the technology works in real field conditions and established the accuracy baseline.
**Phase 2.** Real-time dashboards showing progress across every active project. Role-based access for management tiers. Per-operative productivity tracking. Weather integration that automatically pulls conditions for each site and correlates them with output. Anomaly and overlap detection. Earnings tracking linked to confirmed meterage. Phase 2 also builds the scaffolding for Phase 3: back office staff grade every submitted photograph for work quality. Good, bad, and why. This builds a labelled dataset that grows with every submission.
**Phase 3.** A machine learning model trained on the graded dataset from Phase 2. It flags quality issues automatically as new photographs come in. A human reviews every flag. The system learns from every correction. Phase 3 also delivers structured client-facing reports. A contractor who can show a government body or main contractor "here is our quality control system, here is the photographic evidence, here are the measurements and conditions for every day of the project" has a serious advantage in competitive tenders.
| Component | Technology |
|---|---|
| Mobile App | React Native (Android and iOS) |
| Backend | Node.js |
| CV Microservice | Python, OpenCV |
| Calibration | ArUco markers (200mm, binary fiducial) |
| Database | PostgreSQL |
| Cloud Hosting | Google Cloud Platform (EU, GDPR compliant) |
| Web Dashboard | React.js |
| LLM Interface | Natural language queries to operational data (Phase 2) |
| ML Quality | Trained on labelled quality grading dataset (Phase 3) |
All data is stored in EU-region Google Cloud infrastructure. The system handles GPS-tagged photographs of identifiable workers and is designed to comply with GDPR from the ground up.
Anyone can describe how a system is supposed to work. This is how it actually went.
**Lighting.** Construction sites do not have controlled lighting. We dealt with direct sunlight causing glare on surfaces. Low winter light. Shadows from scaffolding and structures. Rain on surfaces changing how the camera reads texture and colour. The CV system had to handle all of these. We tuned the ArUco detection parameters extensively across different lighting conditions until we had reliable detection in the range of conditions the operatives actually encounter.
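Tuning of this kind happens on OpenCV's ArUco `DetectorParameters`. The values below are illustrative placeholders, not the production settings, using the OpenCV 4.7+ API:

```python
import cv2

# Illustrative tuning only: the production values were found empirically
# across real field lighting conditions.
params = cv2.aruco.DetectorParameters()

# Widen the adaptive-threshold window range so detection copes with both
# harsh glare and flat winter light.
params.adaptiveThreshWinSizeMin = 3
params.adaptiveThreshWinSizeMax = 53
params.adaptiveThreshWinSizeStep = 10

# Sub-pixel corner refinement tightens the corner estimate, which feeds
# directly into the scale factor and therefore measurement accuracy.
params.cornerRefinementMethod = cv2.aruco.CORNER_REFINE_SUBPIX

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50), params)
# corners, ids, rejected = detector.detectMarkers(gray_image)
```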
**Marker durability.** This sounds trivial. It was not. A standard printed ArUco marker does not survive a construction site. It gets wet, dirty, torn, and trampled. We had to find markers that were durable enough for daily outdoor use in construction conditions, the right size for the working distances involved, and reliably detectable by the camera.
**Device variation.** The operatives use different phones. Different manufacturers, different camera specifications, different Android versions. The same photograph taken on two different devices can produce different results. We tested across a range of devices and had to set a minimum hardware specification to guarantee consistent accuracy.
**Awkward angles.** Not every surface can be photographed face-on. Tight spaces, awkward access points, scaffolding in the way. The system had to handle photographs taken at angles that were less than ideal. We worked on calibration logic that accounts for perspective distortion when the photo cannot be taken perpendicular to the surface.
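One standard way to account for perspective, sketched here with NumPy rather than the production code: use the marker's four detected corners to solve a direct linear transform (DLT) homography into the marker's metric plane, then measure the outline there instead of in raw pixels. Function names and the exact approach are illustrative assumptions:

```python
import numpy as np

def homography_from_marker(px_corners, size_mm=200.0):
    """Solve the 8-unknown DLT system mapping the marker's four detected
    pixel corners onto a metric 200mm square (mm units, top-left origin)."""
    dst = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm)]
    A, b = [], []
    for (x, y), (u, v) in zip(px_corners, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pts):
    """Apply the homography to a list of (x, y) points."""
    pts = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    out = pts @ H.T
    return out[:, :2] / out[:, 2:3]

def polygon_area_m2(H, outline_px):
    """Shoelace area of the outline after mapping it into the marker's
    metric (mm) plane, converted to square metres."""
    p = project(H, outline_px)
    x, y = p[:, 0], p[:, 1]
    area_mm2 = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return area_mm2 / 1_000_000.0
```

Because the area is measured in the rectified plane, a moderately angled photograph and a face-on one of the same surface yield the same figure, to the limit of the marker detection.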
**Outline drawing.** Getting operatives to draw an accurate outline on a phone screen sounds easy. In practice, it took several iterations. The interface had to work with cold hands, gloves, wet screens, and users who are not comfortable with technology.
**Boundary definition.** Defining where one completed area ends and another begins. On irregular surfaces using traditional construction materials, the boundary is not always obvious. This was one of the harder problems. We refined the system to handle the edge cases that real-world conditions throw at you.
These are measured results from Phase 1. The POC. Real submissions from real operatives on real construction sites. Accuracy was validated by comparing CV-calculated areas against physical spot-check remeasurements on completed sections.

| Metric | Before | After |
|---|---|---|
| Data from site to office | Days (paper dockets, manual reconciliation) | Instant (photo submission, automatic processing) |
| Measurement accuracy | 15-20% error (tape on irregular surfaces) | ±3% target accuracy, consistently achieved |
| Progress visibility | Lagging by days, compiled manually | Real-time dashboard, updated with every submission |
| Evidence of work completed | None | GPS-stamped photograph with calculated area |
| Quality review | Required physical site visit | Same-day photo review from the office |
| Operative feedback to office | Delayed (paper, phone calls) | Instant (in-app feedback to back office) |
| Client progress reporting | Manually compiled from reconciled dockets | Based on live, verified data |
| Measurement queries | Manual, time-consuming | Photo evidence with calculated area |
Every submission is automatically tagged with GPS coordinates, a timestamp, the operative's identity, the site, and a full-resolution photograph. This is an auditable record. It can be used for payment verification, measurement queries, progress reporting, client billing, and quality evidence.
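For illustration, a record of that shape might look like the following. The field names here are hypothetical, not the production schema:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Submission:
    """One auditable field submission (illustrative fields only)."""
    operative_id: str
    site_id: str
    captured_at: str   # ISO 8601 UTC timestamp, set at capture
    lat: float         # GPS coordinates, captured automatically
    lon: float
    photo_url: str     # full-resolution photograph in cloud storage
    area_m2: float     # CV-calculated area from the drawn outline

record = Submission(
    operative_id="op-014",
    site_id="site-n7",
    captured_at="2024-02-06T14:32:00Z",
    lat=53.35, lon=-6.26,
    photo_url="https://example.invalid/photos/1234.jpg",
    area_m2=12.4,
)
```

The record is immutable once submitted (`frozen=True` in the sketch), which is what makes it usable as audit evidence rather than just a progress note.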
"The measuring was never the hard part. The hard part was getting the numbers back to the office and trusting them when they got there. Now the lads submit a photo, the system calculates the area, and I can see it on the dashboard before they have packed up for the day. The back office is not chasing dockets any more. The progress reports build themselves. And when a client asks where we are on a project, I can tell them exactly, with photos to back it up."
Spending days reconciling paper dockets?
Let's look at your process on a 20-minute call.
The Phase 1 POC was a €45,000 project. Enterprise Ireland covered 50% through their grant programmes. The client's investment was €22,500.
| Phase 1: Proof of Concept | |
|---|---|
| Project value | €45,000 |
| Enterprise Ireland grant (50%) | €22,500 |
| Client investment | €22,500 |
**Time to ROI.** The system eliminates days of back-office reconciliation every week. It removes the cost of measurement uncertainty, reduces wasted PM travel to sites for progress checks, and gives the company verified data for client billing instead of reconstructed figures. The €22,500 client investment pays for itself within weeks, not months.
Enterprise Ireland offers two programmes that fit projects like this. The Exploring Innovation grant supports proof of concepts and prototype development. The Digital Process Innovation grant supports companies implementing new digital processes. Both offer 50% funding.
For a detailed guide to Enterprise Ireland grant programmes, see our complete guide to AI grants in Ireland.
The obvious change is that paper dockets are gone. What used to take days of back-office reconciliation now happens instantly when the operative submits a photograph. That alone would justify the investment.
But the real change is data.
For the first time, the company has verified, timestamped, GPS-tagged records of every piece of work completed on every site, every day. That data feeds into everything.
The company knows where it stands at the end of every day. The dashboard shows real-time progress. A PM can check six sites from the office in five minutes instead of spending two days driving between them. Problems that used to be invisible until Friday are visible on Tuesday.
Client billing is based on verified evidence. Progress reports are not compiled from reconciled paper dockets any more. They are generated from verified, photo-evidenced measurements. When a client asks where the project stands, the answer comes with photographs and calculated areas, not estimated totals.
Quality is visible from the office. Someone in the back office can review submitted photographs the same day the work was completed. They do not need to drive to site. If something does not look right, they know about it immediately, not at the end of the week.
Operative feedback flows instantly. The app includes a feedback function. If an operative encounters a problem on site — materials, access, conditions — they flag it in the app. It goes straight to the office. No waiting for a phone call or a note on a docket that arrives three days later.
Payment is fair and transparent. Operatives are paid per square metre on measured works contracts. The system removes arguments about quantities. Every measurement has a photograph and a calculated area. Both sides can see the same number. This is not surveillance. It is fairness.
Quoting gets better with every project. This is the compounding advantage. The system captures area completed, operative, materials, method, weather conditions, and time. Over months and years, the company builds a verified dataset that says: "On a project like this, with these materials and this method, in these conditions, our average output is X square metres per day per operative." That transforms quoting from experience and spreadsheets to evidence-based pricing.
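Once the data exists, that historical figure is a simple aggregation. A toy sketch (illustrative function, not the production analytics):

```python
from collections import defaultdict

def avg_daily_output_m2(submissions):
    """Mean square metres per working day, per operative, from
    (operative_id, date, area_m2) rows. A toy version of the
    evidence-based quoting figure."""
    # Sum all submissions for each operative-day.
    per_day = defaultdict(float)
    for operative, day, area in submissions:
        per_day[(operative, day)] += area
    # Average each operative's daily totals.
    per_operative = defaultdict(list)
    for (operative, _), daily_total in per_day.items():
        per_operative[operative].append(daily_total)
    return {op: sum(v) / len(v) for op, v in per_operative.items()}
```

In production the same aggregation would be filtered by material, method, and weather conditions to price a comparable job.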
Weather claims have evidence. When weather stops work, the system has the data. Automatic weather pulls for every site. Correlated with productivity. A lost-day claim backed by verified weather data is harder to dispute.
Client reporting proves quality. Once the planned Phase 3 ML quality system is live, the company can provide clients with structured evidence of quality control on every section of work. For government contracts and large infrastructure projects, this is a competitive differentiator.
The data grows more valuable every month. Every submission adds to the dataset. More data means better predictions, better quotes, better evidence, and better decisions. This is the flywheel. It compounds.
The operational savings were immediate, but the business impact went further.
This system was built for a specialist construction contractor. But the core technology applies to any business where field teams need to measure, document, and report.
Deep Purple builds custom computer vision systems for Irish businesses. We consult on whether AI is the right fit, we build the system if it is, and we help you access government funding to reduce the cost.
Deep Purple built and deployed a production computer vision system for an Irish construction company that replaced paper dockets with instant, photo-evidenced, GPS-stamped submissions. Measurement accuracy improved from 15-20% error to ±3%. Progress reports, billing, and quality review now update automatically with every submission.
No pitch, no pressure. Just an honest look at whether computer vision could help your field operations.
Or start with a €1,250 AI Assessment →
CTO, Deep Purple AI Consulting
Barry Gough is the CTO of Deep Purple AI Consulting. With an MSc in Computer Science from University College Dublin, where machine learning was a core focus of his studies, and over 20 years building production software systems, Barry brings formal ML training and deep hands-on engineering experience to every AI and data analytics engagement.
Barry completed his masters at UCD in 2011, studying ML algorithms, statistical modelling and data-driven systems just as big data techniques were maturing and deep learning was about to transform the industry. At Purpledecks (Deep Purple's predecessor consultancy), he spent nearly a decade progressing from Senior Developer to Head of Operations, leading the technical delivery of enterprise projects that increasingly incorporated machine learning, computer vision, data classification, predictive features and recommendation engines for commercial clients across Ireland and the UK.
In 2023, as CTO of Reactable AI, Barry architected and built an autonomous AI marketing engine from the ground up, a self-learning system that generates and optimises marketing campaigns across channels. This was one of Ireland's earliest production deployments of autonomous AI agents, requiring him to design systems where AI made real decisions with real consequences.
At Deep Purple, Barry leads all technical delivery: AI system architecture, machine learning model development, data pipeline engineering, and manages a team of experienced ML engineers and applied statisticians. His combination of formal ML education, a decade of incorporating AI into commercial projects and hands-on experience architecting autonomous AI systems means clients work with a technical lead who can make genuine engineering decisions about AI.
Deep Purple AI Consulting (deeppurple.ai) is an AI consultancy and custom software development company based in Ireland. We help established businesses identify where AI can make a real difference, then build the systems to make it happen. Senior-only delivery. Grant-funded where possible. No hype.