About us
We build real-time analytics that run directly on smart cameras and edge devices: object detection, tracking, classification, and distance/velocity measurement. We’re a small, hands-on team that ships reliable software where performance and clean engineering matter.
What you’ll do
• Develop and deploy on-device AI applications (native or containerized) on embedded Linux.
• Integrate object detection and tracking models.
• Build calibration tools (intrinsics/extrinsics, homography) for distance and speed measurement.
• Optimize models for embedded inference (TFLite/ONNX Runtime, quantization, profiling).
• Work with device SDKs/APIs to control video, optics/zoom, parameters, events, and overlays.
• Contribute to CI/CD, logging/metrics, and robust error handling.
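As a sketch of the geometry work mentioned above, a ground-plane homography fit from a few image↔world correspondences maps pixels to metric coordinates, from which distance (and, by differencing across frames, speed) follows. This is an illustrative NumPy-only version; the function names are ours, and a production tool would typically use OpenCV's `cv2.findHomography` with RANSAC instead of the plain DLT shown here.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Fit a 3x3 image->ground homography via the DLT from >= 4 correspondences."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([-u, -v, -1.0, 0.0, 0.0, 0.0, u * x, v * x, x])
        A.append([0.0, 0.0, 0.0, -u, -v, -1.0, u * y, v * y, y])
    # The homography is the null vector of A: the last row of V^T from the SVD.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def pixel_to_ground(H, u, v):
    """Project an image pixel onto the ground plane (world units, e.g. metres)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

def ground_distance(H, p1, p2):
    """Euclidean ground-plane distance between two image pixels."""
    a = np.array(pixel_to_ground(H, *p1))
    b = np.array(pixel_to_ground(H, *p2))
    return float(np.linalg.norm(a - b))
```

With per-frame object positions from a tracker, speed is just `ground_distance` between consecutive positions divided by the frame interval.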
Minimum qualifications
• Strong C++ and Linux development (memory, concurrency, performance).
• Experience with computer vision / AI inference (OpenCV + TFLite/ONNX Runtime or similar).
• Comfortable with Docker and cross-compiling for ARM.
• Ability to consume HTTP/REST APIs and learn new SDKs quickly.
• Clear communication, pragmatic debugging, and ownership mindset.
Nice to have
• On-device AI experience on smart cameras or other edge hardware (NPU/DSP/SoC).
• Working knowledge of camera geometry (intrinsics, extrinsics, homography) and basic tracking algorithms.
• GStreamer or low-latency video pipeline experience (RTSP, overlays, metadata).
• Model optimization: int8 quantization, performance tuning, (rotated) NMS.
• Experience implementing oriented bounding boxes (OBB), re-identification (re-ID), or multi-object tracking.
• Experience designing an abstraction layer over multiple vendor SDKs.
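On the last point, one common shape for such a layer is a small vendor-neutral interface with one adapter per SDK. The class and method names below are hypothetical (they do not come from any specific vendor SDK); the sketch just shows the pattern.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Detection:
    """A normalized detection passed to overlay/event sinks."""
    label: str
    confidence: float
    box: tuple  # (x, y, w, h) in normalized [0, 1] image coordinates

class CameraBackend(ABC):
    """Vendor-neutral camera interface; one subclass per vendor SDK."""

    @abstractmethod
    def set_zoom(self, factor: float) -> None: ...

    @abstractmethod
    def draw_overlay(self, detections: list) -> None: ...

class LoggingBackend(CameraBackend):
    """Stand-in backend for tests and development without hardware."""

    def __init__(self):
        self.calls = []

    def set_zoom(self, factor: float) -> None:
        self.calls.append(("zoom", factor))

    def draw_overlay(self, detections: list) -> None:
        self.calls.append(("overlay", len(detections)))
```

Application code targets `CameraBackend` only, so swapping vendors (or running against the logging stand-in in CI) is a one-line change at construction time.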
How we work
• Lean, engineering-led team with quick feedback loops.
• Autonomy with responsibility for production quality.
What we offer
• Real product impact on edge AI, not just experiments.
• Opportunity to grow into technical leadership across platforms.
• Flexible schedule, remote-first culture, supportive team.
Apply
Send your CV/LinkedIn, a short note on a relevant project (edge AI, embedded vision, or SDK integration), and links to repos or demos if you have them.