As the Perception Architect, you'll research, prototype, and bring to production algorithms that enable autonomous and semi-autonomous systems to understand the world around them and navigate safely. That includes detecting, classifying, tracking, and predicting the motion of pedestrians, vehicles, and other classes of interest, including various obstacles and terrain. We're looking for a primarily hands-on engineer who's capable of taking real-world problems, turning them into well-defined projects, surveying and selecting the right approach, developing quick prototypes, and bringing them to production. The Architect will work with senior-level technology strategists, mentor junior engineers, and have design input into current and future technologies and strategies. You will also have input on how to build the team and help lead it as the technology evolves over time.
– Bachelor’s degree in Computer Science or equivalent.
– 6+ years of industry experience taking projects to completion involving sensor fusion/robotics, machine learning, and/or deep learning for detection, tracking, segmentation, and depth estimation.
– Experience with classical computer vision techniques, as well as neural networks and developing resource-constrained online algorithms.
– Ability to assess a problem and determine whether to use a classical computer vision approach or machine learning.
– Experience working with GPU and/or FPGA embedded systems.
– Hands-on experience with TensorFlow (preferred), PyTorch, Keras, or similar frameworks.
– Fluency in Python, excellency in C++, strong engineering practices, debugging/profiling skills.
– Ability to survey literature for ideas and convert research papers into production implementation.
– Industry experience implementing perception algorithms for detection, tracking, segmentation, terrain estimation/mapping, and pose estimation.
– Experience with a variety of sensors, including LIDAR, stereo/mono cameras, radar, and IMUs.
– Experience with real-time sensor fusion (e.g., IMU, LIDAR, camera, odometry, radar).
– Knowledge of robotics and frameworks such as ROS.
– Industry experience building optimized perception pipelines using ROS, C/C++, open-source libraries (OpenCV, PCL, OpenSLAM, etc.), and/or CUDA.
– Strong foundation in mathematical fundamentals (3D geometry, linear algebra).
– Ability to design and propose metrics to assess the performance of algorithms and systems.
Top candidates will have:
– MS or a higher degree in Computer Science or equivalent
– Exposure to NVIDIA Jetson, familiarity with related tools like TensorRT, ONNX, and DeepStream
– Production industry experience with LIDAR/depth-based 3D perception algorithms
– Broad knowledge of the academic literature on machine-learning-based perception algorithms
– Independent project management and execution
– Off-highway machine application knowledge or experience
– Experience setting up and leading teams to deliver complex systems to customers
– Multi-function team leadership experience
– Ability to build portable code with a production-intent focus
– Publications in top-tier computer vision/robotics conferences (CVPR, ICCV, ECCV, ICRA, IROS)
To apply for this job please visit caterpillarcareers.ttcportals.com.