One of the questions we hear most often from production engineers evaluating a new visual-inspection cell is: do we need deep learning, or will rule-based machine vision do the job? The honest answer is that most real production lines benefit from a mix. Knowing where to draw the line keeps the project on budget and avoids over-engineering.
What rule-based vision does well
Rule-based tools — pattern matching, geometric measurement, blob analysis, edge detection, fixed-threshold OCR — have decades of refinement behind them. They are the right choice when:
- The defect or measurement is deterministic. Position, presence/absence, and dimensional checks on parts that arrive with consistent appearance.
- The inspection has zero training-data budget. Rule-based tools work from a single golden image plus the engineer’s intent.
- The cycle-time budget is sub-millisecond. Pure C/C++ rule pipelines outpace deep-learning inference for the simplest tasks.
- Auditability matters. A rule’s behaviour is fully specified; a model’s is not.
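To make the "fully specified" point concrete, here is a minimal sketch of a rule-based presence and size check in plain NumPy. The function name, threshold, and area tolerances are illustrative assumptions, not taken from any particular tool; the point is that every decision traces back to a fixed, inspectable number.

```python
import numpy as np

def inspect(image, threshold=128, min_area=50, max_area=500):
    """Fixed-threshold presence/size check: binarise the image, count
    foreground pixels, and pass only if the blob area sits inside the
    spec window. No training data involved -- just explicit rules."""
    mask = image > threshold          # fixed threshold, fully auditable
    area = int(mask.sum())            # blob area in pixels
    present = area >= min_area        # presence/absence gate
    in_spec = min_area <= area <= max_area
    return {"present": present, "in_spec": in_spec, "area": area}

# Synthetic "part": a bright 12x12 square on a dark background.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:32, 20:32] = 200

result = inspect(img)
```

Because every constant is explicit, an auditor can reproduce any pass/fail decision by hand, which is exactly the property a learned model cannot offer.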
Where deep learning is the only answer
- Defects vary in appearance. Scratches that come in different lengths, depths, and orientations. Surface anomalies on shiny metal where lighting flares move with each part. Glass cracks where the crack’s optical signature depends on substrate thickness.
- The defect catalogue is open-ended. A model trained on “good parts only” can flag novel anomalies that no rule-based system was configured to catch.
- The part has high cosmetic variability that operators tolerate. Watch dials with slight gradient drift, plastic-injection moulded handles with stochastic shine variation, anodised aluminium where colour drifts batch-to-batch.
- OCR is on curved, shiny, or laser-etched surfaces. Read-rate on PET bottles, aluminium cans, and laser-marked stainless steel jumps from ~95% rule-based to ~99.99% with deep-learning OCR like our Retina-Olive.
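The "good parts only" idea above can be illustrated without a neural network. The sketch below fits a simple statistical model to feature vectors from good parts and flags anything that deviates strongly; the features, thresholds, and sample values are assumptions for illustration. A production model is far richer, but the principle is the same: nothing defective is ever labelled, yet novel defects are still caught.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature vectors (e.g. texture statistics) extracted from good parts only.
good = rng.normal(loc=[10.0, 5.0], scale=[0.5, 0.2], size=(200, 2))

mu = good.mean(axis=0)      # "normal" centre learned from good parts
sigma = good.std(axis=0)    # "normal" spread learned from good parts

def anomaly_score(x):
    """Largest per-feature z-score against the good-part distribution."""
    return float(np.max(np.abs((x - mu) / sigma)))

def is_anomalous(x, threshold=4.0):
    # Anything far outside the good-part distribution is flagged,
    # even a defect type that never appeared during training.
    return anomaly_score(x) > threshold

nominal = np.array([10.1, 5.05])   # looks like a good part
novel = np.array([13.0, 5.0])      # a defect type never seen in training
```

No rule was ever written for the `novel` case; it is rejected purely because it does not look like the good-part population.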
The hybrid is usually right
In practice, the most production-stable cells combine both. A rule-based dimensional check verifies the part is in the right position, then deep learning handles the cosmetic and anomaly steps. The rule-based stage is fast and auditable; the deep-learning stage covers what rules cannot specify.
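The two-stage structure described above can be sketched as a short pipeline. All names, fixture-window coordinates, and the stub model here are hypothetical; the trained network's inference call would replace `fake_model` in a real cell.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    passed: bool
    stage: str    # which stage produced the verdict
    detail: str

def rule_position_gate(part):
    """Stage 1 -- fast, auditable rule: reject parts outside the fixture window."""
    x, y = part["position"]
    if not (40 <= x <= 60 and 40 <= y <= 60):
        return Verdict(False, "rule", f"position {x},{y} outside fixture window")
    return None  # gate passed; hand off to the learned stage

def dl_cosmetic_stage(part, score_fn, threshold=0.5):
    """Stage 2 -- a model scores cosmetic anomalies rules cannot specify."""
    score = score_fn(part)
    if score > threshold:
        return Verdict(False, "deep-learning", f"anomaly score {score:.2f}")
    return Verdict(True, "deep-learning", "clean")

def inspect(part, score_fn):
    # Rule stage runs first; the model only sees correctly positioned parts.
    return rule_position_gate(part) or dl_cosmetic_stage(part, score_fn)

# Stub model for illustration only.
fake_model = lambda part: part.get("anomaly", 0.0)

ok = inspect({"position": (50, 50), "anomaly": 0.1}, fake_model)
misplaced = inspect({"position": (10, 50), "anomaly": 0.1}, fake_model)
scratched = inspect({"position": (50, 50), "anomaly": 0.9}, fake_model)
```

Note the ordering pays off twice: mispositioned parts fail fast in the cheap, auditable stage, and the model is never asked to score images it was not trained for.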
What to budget for
Deep learning needs three things rule-based vision does not: labelled training data, GPU compute at inference time, and a retraining workflow for when production changes. None of these are obstacles — 3HLE handles the labelling pipeline, ships the IPC class sized to your model, and trains your team on the no-code retraining UI — but they need to be planned for.
How 3HLE scopes the choice
During the feasibility phase, we run a sample of your parts through both paths: a rule-based pipeline in OpenCV / VisionPro and a deep-learning pipeline in Retina A.I. Side-by-side accuracy and cycle-time numbers tell us which approach (or which hybrid balance) deserves the production-engineering investment. A typical feasibility study takes 2-4 weeks and produces a documented recommendation, not a sales pitch.