Vision-Guided Robotic Pick & Place — AI Object Detection
Traditional robotic pick & place cells require rigid fixtures, perfect part orientation, and weeks of engineering to adapt to new products. Our vision-guided approach eliminates all three constraints.
How it works
A machine vision camera, mounted above or on the robot arm, captures images of incoming parts in real time. Our Retina AI deep learning software identifies each part regardless of its position or orientation. The software then computes optimal gripping coordinates and sends them to a Universal Robots cobot, which picks and places each part at line speed.
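The step that turns a detection into gripping coordinates can be sketched as follows. This is a minimal illustration, not Retina AI's actual API: the homography values and the detection dictionary are invented placeholders, and a real cell would obtain the homography from a hand-eye calibration routine.

```python
def pixel_to_robot(px, py, H):
    """Map a pixel coordinate (px, py) to robot base coordinates in mm
    using a 3x3 planar homography H from hand-eye calibration."""
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return x / w, y / w

# Illustrative calibration: 0.5 mm per pixel, origin offset (100, 50) mm.
H = [[0.5, 0.0, 100.0],
     [0.0, 0.5,  50.0],
     [0.0, 0.0,   1.0]]

# Hypothetical output of the vision model: part centre in pixels
# plus its in-plane rotation in degrees.
detection = {"cx": 640, "cy": 480, "angle_deg": 30.0}

x_mm, y_mm = pixel_to_robot(detection["cx"], detection["cy"], H)
grasp = (x_mm, y_mm, detection["angle_deg"])  # (420.0, 290.0, 30.0)
```

In a 2D setup like this, the pick height (z) is typically a fixed, pre-taught value for the work surface; a 3D camera would supply it per part instead.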
Key capabilities
No fixtures required — parts can arrive in any position or orientation
Real-time inference — 30+ fps on an industrial edge PC
Multi-part handling — the same cell can pick different parts without reprogramming
Integrated inspection — parts are checked for defects in the same cycle
Safe human collaboration — UR cobots meet ISO/TS 15066 safety standards
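On the robot side, UR cobots accept URScript commands over a TCP socket (the secondary interface listens on port 30002). Below is a minimal sketch of formatting a `movel` command from a computed grasp pose; the pose values and robot IP are assumptions, not values from a real cell.

```python
def urscript_movel(pose, accel=1.2, vel=0.25):
    """Format a URScript movel command from a 6-element pose
    [x, y, z, rx, ry, rz] in metres and radians (axis-angle rotation)."""
    coords = ", ".join(f"{v:.4f}" for v in pose)
    return f"movel(p[{coords}], a={accel}, v={vel})\n"

# Illustrative grasp pose: x/y from the vision step (converted mm -> m),
# a fixed pick height, and a straight-down tool orientation.
pose = [0.420, 0.290, 0.050, 0.0, 3.1416, 0.0]
cmd = urscript_movel(pose)

# The command string would then be sent to the robot, e.g.:
#   import socket
#   with socket.create_connection((ROBOT_IP, 30002)) as s:
#       s.sendall(cmd.encode())
```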
Applications
Packaging lines, component assembly, medical device handling, watch case manipulation, electronics kitting, and any process where parts arrive with natural variation.