What you will be doing
- Train and evaluate 3D perception models for object detection, segmentation, and lane detection
- Improve model performance across challenging conditions such as long range and sparse point clouds
- Deploy perception models to production and own the full lifecycle from research to real-world integration
- Diagnose and resolve deployment issues including latency, accuracy degradation, and edge case failures in the field
- Monitor model performance in production and iterate rapidly based on real-world feedback
- Build auto-labeling pipelines using vision-language models to accelerate data annotation
What you will have
- MS or PhD in CS, Robotics, or a related field
- Hands-on experience with 3D object detection on LiDAR point clouds
- Experience using VLMs for auto-labeling or offline perception tasks
- Strong Python and PyTorch skills
- Familiarity with large-scale dataset pipelines and annotation workflows
- Experience with multi-object tracking or sensor fusion is a plus
Salary Range
$154,900–$222,365 a year

Salary pay ranges are determined by role, level, and location. Within the range, the successful candidate's starting base pay will be determined based on factors including job-related skills, experience, certifications, qualifications, relevant education or training, and market conditions. These ranges are subject to change in the future. Depending on the position offered, equity, bonus, and other forms of compensation may be provided as part of a total compensation package, in addition to comprehensive medical, dental, and vision coverage, pre-tax commuter and health care/dependent care accounts, a 401k plan, life and disability benefits, flexible time off, paid parental leave, and 11 paid holidays annually.