Caterpillar

Cat Robotics

As an Autonomy Perception Engineer, you’ll develop algorithms that enable autonomous systems to understand the world and navigate safely. That includes detecting, tracking, and predicting the motion of pedestrians and vehicles, as well as characterizing obstacles and terrain. We’re looking for a primarily hands-on engineer who can take real-world problems, turn them into well-defined projects, survey and select the right approach, develop quick prototypes, bring them to production, and mentor junior engineers.

Job Duties

Be a part of the Perception team on our autonomous vehicle platform.

Research, prototype, and bring to production perception algorithms that enable autonomous and semi-autonomous systems to understand the world around them and navigate safely

Design and implement computer vision systems on an autonomous vehicle platform

Work with data from various perception sensors, including lidar, camera, and radar

Work with simulation and AV system testing teams to guide simulation sensors and perception evaluation in support of the perception stack

Optimize on-board perception code

Design metrics and architect software to gauge performance against requirements

Mentor and grow more junior members while interacting with senior technical leaders

Basic Qualifications

Bachelor’s or Master’s degree in Robotics, Computer Science, or Engineering

5+ years of relevant work experience

Industry experience implementing perception algorithms for detection, tracking, segmentation, and pose estimation

Experience with a variety of sensors, including lidar, stereo/mono cameras, radar, and IMUs

Experience with real-time sensor fusion (e.g., IMU, lidar, camera, odometry, radar)

Industry experience building optimized perception pipelines using ROS, C/C++, OpenCV, and/or CUDA

Excellent C++ coding skills, strong engineering practices, and debugging/profiling skills

Strong foundation in mathematics and fundamentals (3D geometry, linear algebra)

Ability to convert research papers into production implementations

Top candidates will also have

Knowledge of robotics and frameworks such as ROS

Experience with SLAM, filtering, and state estimation techniques

Experience with machine learning and classification, exposure to deep learning frameworks

Experience working with GPUs, particularly on embedded hardware

Publications in top-tier computer vision/robotics conferences (CVPR, ICCV, ECCV, ICRA, IROS)
