Purpose of the Job: Design and implement advanced perception algorithms and pipelines for Brightskies's Level 3 and Level 4 autonomous driving systems, focusing on robust multi-sensor environment understanding, real-time performance, and deployment on embedded automotive platforms.
Responsibilities and Duties:
- Develop state-of-the-art 2D/3D object detection, classification, tracking, and segmentation models using LiDAR, camera, radar, and ultrasonic data.
- Implement multi-sensor fusion frameworks using probabilistic modeling, geometric alignment, and temporal association (see the tracking sketch after this list).
- Integrate perception modules into high-performance, low-latency pipelines for real-time embedded deployment.
- Profile and optimize CPU/GPU utilization, addressing memory, concurrency, and throughput constraints.
- Apply CUDA, TensorRT, and model optimization techniques to achieve resource-efficient real-time inference.
- Conduct validation in both simulation environments and real-world driving conditions.
- Maintain automated CI/CD pipelines for integration, testing, and deployment.
- Collaborate across perception, localization, planning, and control teams to ensure stack-wide interoperability.
- Research and adapt emerging methodologies to enhance perception robustness and accuracy.
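For illustration only, the following is a minimal sketch of the kind of probabilistic temporal association this role works on: a constant-velocity Kalman filter track predict/update step. It assumes Eigen for linear algebra; the Track struct and function names are hypothetical and not Brightskies code.

```cpp
// Minimal sketch: constant-velocity Kalman filter used to associate
// detections over time into tracks. State: [x, y, vx, vy]; measurement:
// [x, y] from any calibrated sensor (LiDAR, camera, radar).
#include <Eigen/Dense>

struct Track {
    Eigen::Vector4d x = Eigen::Vector4d::Zero();      // state estimate
    Eigen::Matrix4d P = Eigen::Matrix4d::Identity();  // state covariance
};

// Prediction step: propagate the track forward by dt seconds.
void predict(Track& t, double dt, double process_noise) {
    Eigen::Matrix4d F = Eigen::Matrix4d::Identity();
    F(0, 2) = dt;  // x += vx * dt
    F(1, 3) = dt;  // y += vy * dt
    const Eigen::Matrix4d Q = process_noise * Eigen::Matrix4d::Identity();
    t.x = F * t.x;
    t.P = F * t.P * F.transpose() + Q;
}

// Update step: fuse a 2D position measurement z with covariance R.
void update(Track& t, const Eigen::Vector2d& z, const Eigen::Matrix2d& R) {
    Eigen::Matrix<double, 2, 4> H = Eigen::Matrix<double, 2, 4>::Zero();
    H(0, 0) = 1.0;
    H(1, 1) = 1.0;
    const Eigen::Vector2d innovation = z - H * t.x;
    const Eigen::Matrix2d S = H * t.P * H.transpose() + R;
    const Eigen::Matrix<double, 4, 2> K = t.P * H.transpose() * S.inverse();
    t.x += K * innovation;
    t.P = (Eigen::Matrix4d::Identity() - K * H) * t.P;
}
```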
Education: Bachelor's, Master's, or Ph.D. in Computer Science, Electrical Engineering, or Robotics.
Experience:
- 3+ years of solid expertise in autonomous driving perception systems, with practical experience in multi-sensor integration and fusion (LiDAR, camera, radar, ultrasonic), including probabilistic frameworks, geometric alignment, and association.
- Proven experience with end-to-end autonomous driving perception pipelines, from raw sensor data acquisition to embedded deployment, ensuring reliable integration with localization, planning, and control stacks.
- Hands-on experience in autonomous vehicle deployments, including calibration, synchronization, validation, and on-road testing under varying operational conditions (urban, highway, adverse weather).
- Strong knowledge of LiDAR, camera, radar, and ultrasonic raw data processing.
- Proficiency in C++ (modern standards) and Python for algorithm development and integration.
- Experience in developing or integrating prediction modules for dynamic object behavior forecasting.
- Practical experience with deep learning frameworks (TensorFlow, PyTorch) for perception tasks.
- Deployment experience on embedded and automotive-grade platforms (e.g., NVIDIA DRIVE, Jetson, FPGAs).
- Familiarity with ROS/ROS2, middleware communication, and real-time constraints.
- Strong background in probabilistic state estimation (e.g., Kalman filters, particle filters) and geometric vision (see the projection sketch after this section).
- Hands-on experience with simulation platforms (CARLA or others) and real-world validation.
- Experience in model acceleration and optimization pipelines for edge inference.
- Experience in advanced computer vision and sensor fusion techniques for autonomous driving.

Additional:
- Knowledge of GStreamer and streaming pipelines for high-bandwidth data.
- Experience with Agile methodologies, Git-based workflows, and CI/CD integration.
- Understanding of robustness testing, safety validation, and functional safety considerations (ISO 26262 is a plus).
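As a concrete illustration of the geometric vision and calibration work referenced above, here is a minimal C++17 sketch that projects a LiDAR point into a camera image through an assumed rigid extrinsic calibration and pinhole intrinsics. All struct and function names are hypothetical, chosen only for this example.

```cpp
#include <array>
#include <optional>

// A LiDAR point in the sensor frame and a pixel in the image plane.
struct Point3 { double x, y, z; };
struct Pixel  { double u, v; };

// Rigid extrinsic transform (LiDAR -> camera): row-major rotation R and translation t.
struct Extrinsics {
    std::array<double, 9> R;
    std::array<double, 3> t;
};

// Pinhole intrinsics: focal lengths and principal point.
struct Intrinsics { double fx, fy, cx, cy; };

// Project a LiDAR point into the image; returns empty if the point lies
// behind the camera.
std::optional<Pixel> projectToImage(const Point3& p,
                                    const Extrinsics& ext,
                                    const Intrinsics& K) {
    // Transform into the camera frame.
    const double xc = ext.R[0] * p.x + ext.R[1] * p.y + ext.R[2] * p.z + ext.t[0];
    const double yc = ext.R[3] * p.x + ext.R[4] * p.y + ext.R[5] * p.z + ext.t[1];
    const double zc = ext.R[6] * p.x + ext.R[7] * p.y + ext.R[8] * p.z + ext.t[2];
    if (zc <= 0.0) return std::nullopt;  // behind the image plane
    // Perspective projection with pinhole intrinsics.
    return Pixel{K.fx * xc / zc + K.cx, K.fy * yc / zc + K.cy};
}
```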
Skills and Abilities:
- Programming: C++17/20, Python 3.x
- Frameworks & Libraries: ROS, ROS2, TensorFlow, PyTorch, OpenCV, PCL
- Sensor Technologies: LiDAR, camera, radar, ultrasonic
- Optimization: TensorRT, CUDA, quantization, pruning, graph optimization
- Simulation Tools: CARLA, LGSVL, or others
- Algorithms: Sensor fusion, tracking, SLAM, state estimation, point cloud processing
- Toolchain: Docker, Foxglove, Valgrind, GoogleTest, Netron, and NVIDIA Nsight (a minimal test example follows this list)
- Soft Skills: Problem-solving, analytical thinking, precision in implementation, adaptability to evolving technologies
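For illustration of the testing side of the toolchain, here is a minimal GoogleTest case of the kind that might run in a CI pipeline, checking pinhole projection arithmetic; the test names and numeric values are hypothetical.

```cpp
#include <gtest/gtest.h>

// A point 10 m straight ahead of a camera with fx = fy = 500 and the
// principal point at (320, 240) should project onto the principal point.
TEST(PinholeProjection, CenterPointMapsToPrincipalPoint) {
    const double fx = 500.0, fy = 500.0, cx = 320.0, cy = 240.0;
    const double xc = 0.0, yc = 0.0, zc = 10.0;  // camera-frame point
    const double u = fx * xc / zc + cx;
    const double v = fy * yc / zc + cy;
    EXPECT_NEAR(u, cx, 1e-9);
    EXPECT_NEAR(v, cy, 1e-9);
}
```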