Location: Pune, India
Role Summary:
As the AGI/DL Software Development Lead, you will oversee the architecture, development, and deployment of AGI and DL frameworks for robotics applications, with a focus on humanoid and quadruped platforms. The position requires a strong grasp of AGI, DL, robotics perception, robotics middleware and simulation environments (such as ROS and Gazebo), and advanced LiDAR and point cloud processing. You will lead a team of AI and robotics engineers to deliver scalable, real-time AI systems that enhance automation capabilities in complex manufacturing and industrial domains.
Core Responsibilities:
1. Strategic AGI/DL Architecture and Vision
• Develop and Implement AGI Architecture: Define an end-to-end AGI architecture for autonomous robotic systems, focusing on modular, reusable components that can support humanoid, quadruped, and other robotics applications. Establish a vision for AGI capabilities, such as reasoning, planning, and adaptability, in a manufacturing environment.
• Long-Term AI Strategy: Develop and manage a multi-phase roadmap for AGI and DL initiatives, balancing short-term deliverables with long-term strategic objectives. Align AI development goals with broader organizational priorities, ensuring that AGI capabilities are scalable and adaptable for diverse robotic tasks.
• Cross-Platform AGI Deployment: Architect solutions that leverage both cloud and edge computing to enable decentralized processing, supporting robots in dynamic environments where real-time processing is crucial.
2. AI/ML Model Development and Optimization
• Model Selection and Customization: Identify and implement the most suitable deep learning models (e.g., CNNs, RNNs, Transformers, GNNs) and AGI methodologies for specific robotic functions such as navigation, manipulation, and human-robot interaction.
• Generative AI and Reinforcement Learning: Explore generative models such as GANs and VAEs for creative problem-solving, and reinforcement learning (RL) approaches such as deep Q-learning and PPO for training adaptive behaviors. Develop RL algorithms that train in simulated environments and transfer the learned policies to physical robots (see the sketch after this list).
• Optimization for Performance: Leverage NVIDIA's CUDA, TensorRT, and cuDNN to optimize model inference on GPUs for high-performance, real-time applications in manufacturing. Focus on model compression, quantization, and other techniques to balance speed and accuracy.
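A minimal sketch of the simulation-first RL training loop described above, using Gymnasium and Stable-Baselines3 (the maintained successors of OpenAI Gym and Stable Baselines); the environment name, policy type, and timestep budget are placeholders, and a real workflow would substitute a robot-specific simulated environment (e.g., Gazebo or Isaac Sim) before any sim-to-real transfer.

```python
# Sketch: train a PPO policy in simulation, then save it for later
# sim-to-real transfer. Assumes gymnasium and stable-baselines3 are installed;
# "Pendulum-v1" stands in for a robot-specific simulated environment.
import gymnasium as gym
from stable_baselines3 import PPO

def train_policy(env_id: str = "Pendulum-v1", timesteps: int = 100_000) -> PPO:
    env = gym.make(env_id)
    # MlpPolicy is a reasonable default for low-dimensional state inputs;
    # image-based observations would use CnnPolicy instead.
    model = PPO("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=timesteps)
    model.save("ppo_policy")          # checkpoint for evaluation and transfer
    env.close()
    return model

if __name__ == "__main__":
    train_policy()
```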
3. Robotics and Simulation Development
• ROS Integration for Real-Time Control: Lead the integration of AGI/DL models with ROS (Robot Operating System) to enable seamless control and perception across multiple robotic platforms, including humanoids and quadrupeds. Develop ROS nodes for handling real-time data, robot motion control, and sensor fusion (a minimal node sketch follows this list).
• Simulation Environment Development with Gazebo: Use Gazebo and other simulation environments to create realistic models for testing and validating AGI and DL algorithms in virtual settings before physical deployment. Develop scenarios that simulate industrial environments for training and testing perception, navigation, and manipulation tasks.
• Humanoid and Quadruped Robotics: Design and optimize DL models specifically for humanoid and quadruped platforms, focusing on stable locomotion, object interaction, and adaptive learning. Address challenges in multi-degree-of-freedom (DOF) control and balance for dynamic, real-world scenarios.
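As a concrete illustration of the ROS integration called out above, here is a minimal ROS 2 (rclpy) node that subscribes to a laser scan and publishes velocity commands. The topic names, message types, and the trivial stop-distance logic are illustrative assumptions, not a prescribed design.

```python
# Sketch: a minimal ROS 2 node that consumes LaserScan data and publishes
# Twist commands. Topic names and the stop-distance threshold are placeholders.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class ObstacleStop(Node):
    def __init__(self):
        super().__init__("obstacle_stop")
        self.cmd_pub = self.create_publisher(Twist, "/cmd_vel", 10)
        self.create_subscription(LaserScan, "/scan", self.on_scan, 10)

    def on_scan(self, scan: LaserScan):
        cmd = Twist()
        # Drive forward unless any return is closer than 0.5 m (placeholder value).
        closest = min((r for r in scan.ranges if r > 0.0), default=float("inf"))
        cmd.linear.x = 0.0 if closest < 0.5 else 0.2
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    node = ObstacleStop()
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```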
4. Perception and Sensor Fusion
• Advanced Computer Vision: Implement computer vision models for real-time object detection, segmentation, and scene understanding. Utilize advanced neural network architectures (e.g., YOLO, Mask R-CNN, and Vision Transformers) for robotics applications requiring high spatial awareness.
• LiDAR and Point Cloud Processing: Develop algorithms for processing and analyzing LiDAR data to support 3D mapping, SLAM (Simultaneous Localization and Mapping), and obstacle detection. Implement point cloud processing pipelines for spatial understanding, distance measurement, and real-time navigation (see the downsampling sketch after this list).
• Sensor Fusion Techniques: Fuse data from multiple sensors, including LiDAR, cameras, and IMUs, to create a unified, robust perception system. Use sensor fusion algorithms to improve the accuracy and reliability of spatial mapping and robot localization in changing environments.
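To make the point cloud responsibilities above more concrete, the following is a NumPy-only sketch of voxel-grid downsampling, a common first stage in LiDAR pipelines ahead of SLAM or obstacle detection. In practice this step would typically be handled by PCL or a similar library, and the voxel size here is an arbitrary placeholder.

```python
# Sketch: voxel-grid downsampling of an (N, 3) LiDAR point cloud with NumPy.
# Each occupied voxel is represented by the centroid of the points inside it.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float = 0.1) -> np.ndarray:
    """points: (N, 3) array of XYZ coordinates in metres."""
    # Map each point to its integer voxel index.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel and average them.
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

if __name__ == "__main__":
    cloud = np.random.rand(10_000, 3) * 5.0   # synthetic 5 m cube of points
    print(voxel_downsample(cloud).shape)
```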
5. Software Development and Deployment
• Containerized Deployments with Docker and Kubernetes: Utilize Docker for creating containerized applications and Kubernetes for orchestrating these containers across cloud and edge devices. Develop infrastructure to deploy AGI/DL models seamlessly across different environments for scalable, reliable robotic solutions.
• MLOps and CI/CD Integration: Implement CI/CD pipelines in collaboration with DevOps and MLOps teams, ensuring smooth deployment, monitoring, and retraining of models in production environments. Establish version control and model performance tracking to manage iterative improvements effectively.
• NVIDIA AI Stack Optimization: Use NVIDIA tools (e.g., DeepStream SDK, TensorRT) to accelerate and optimize model deployment on GPU-enabled devices, achieving low-latency performance for real-time robotics applications (see the export sketch after this list).
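As one illustration of the deployment path implied above, a trained perception model can be exported to ONNX and then handed to NVIDIA tooling (e.g., TensorRT) for GPU optimization. The sketch below assumes PyTorch and torchvision are available and uses a stock ResNet-18 purely as a stand-in; it covers only the export step, with the subsequent optimization done by NVIDIA's own tools.

```python
# Sketch: export a PyTorch model to ONNX as the hand-off point for
# TensorRT/DeepStream optimization. ResNet-18 is a placeholder model.
import torch
import torchvision

def export_to_onnx(path: str = "model.onnx"):
    model = torchvision.models.resnet18(weights=None).eval()
    dummy = torch.randn(1, 3, 224, 224)          # example input shape
    torch.onnx.export(
        model,
        dummy,
        path,
        input_names=["input"],
        output_names=["logits"],
        dynamic_axes={"input": {0: "batch"}},    # allow variable batch size
        opset_version=17,
    )

if __name__ == "__main__":
    export_to_onnx()
```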
6. Team Leadership and Mentorship
• Lead and Develop AGI/DL Engineers: Manage a team of AI, ML, and robotics engineers, providing mentorship on best practices, technical problem-solving, and professional development. Cultivate a collaborative and results-oriented team culture.
• Technical Guidance and Code Reviews: Provide regular technical feedback, conduct code reviews, and ensure adherence to high coding standards. Share knowledge on advanced AGI/DL concepts, robotics frameworks, and sensor technologies.
• Goal Setting and Performance Management: Set ambitious yet attainable goals for the team, tracking individual and team performance metrics. Recognize achievements and provide constructive feedback to foster ongoing improvement and engagement.
7. Innovation and Research in AGI and Robotics
• Stay Current with Technological Advancements: Stay informed about the latest research and developments in AGI, DL, and robotics. Drive continuous innovation by exploring new methodologies, tools, and frameworks applicable to industrial automation and autonomous systems.
• Research and Development of Novel AGI Concepts: Lead R&D initiatives into AGI areas such as transfer learning, meta-learning, and multi-modal learning. Identify ways to enhance robots' ability to learn from and adapt to novel tasks in unpredictable environments.
• Open Source and Community Engagement: Support and encourage team contributions to open-source projects in AI and robotics. Participate in industry forums, conferences, and developer communities to share knowledge, gather insights, and foster collaborations.
8. Performance, Security, and Compliance
• Real-Time Optimization for Robotics: Ensure AGI/DL models meet stringent performance requirements, such as low latency, high accuracy, and robustness to variations. Optimize models to run efficiently on both cloud and edge devices for real-time robotics applications.
• Data Security and Privacy Protocols: Implement data handling protocols that prioritize security, privacy, and integrity, especially in sensitive manufacturing environments. Address potential ethical considerations and privacy concerns with transparency.
• Ethical and Regulatory Compliance: Ensure all AGI/DL models comply with relevant ethical AI standards and industry regulations. Document processes to support accountability, traceability, and fairness in AI decision-making.
Required Qualifications:
Education:
• Master's or Ph.D. in Computer Science, Robotics, AI, or a related field, with a focus on AGI, DL, and robotics.
Experience:
• 10+ years of experience in AI, DL, and robotics, with at least 5 years in a senior leadership role.
• Proven experience in deploying autonomous robotic platforms, particularly humanoids and quadrupeds, in industrial environments.
• Deep expertise in ROS, Gazebo, and DL frameworks (e.g., TensorFlow, PyTorch) for robotics.
Technical Skills:
• Robotics Motion and Control: Advanced understanding of multi-DOF control, kinematics, and dynamics in humanoid and quadruped robots.
• Perception Systems (LiDAR, Point Cloud): Extensive experience with LiDAR sensors, point cloud processing, and computer vision techniques for real-time mapping and navigation.
• NVIDIA Stack: Proficiency with CUDA, TensorRT, and other NVIDIA tools for optimized model deployment on GPUs.
• Programming: Proficiency in Python and C++ for high-performance computing, model development, and ROS integration.
• Cloud and Edge Computing: Experience with AWS, Azure, or Google Cloud for cloud-based AI solutions, and familiarity with edge computing platforms like NVIDIA Jetson for on-device AI processing.
• Reinforcement Learning Frameworks: Proficiency in OpenAI Gym, Stable Baselines, or similar RL libraries for training adaptive behaviors in robotic systems.
• Simulation Environments: Experience with advanced simulation tools like NVIDIA Isaac Sim or Webots for realistic robotic simulations.
• Point Cloud and Sensor Fusion Libraries: Familiarity with libraries such as PCL (Point Cloud Library) for processing 3D point cloud data in support of sensor fusion pipelines.
• Deep Learning Model Optimization: Experience with ONNX (Open Neural Network Exchange) for model interoperability and optimization across different hardware platforms.
• Version Control and Collaboration: Proficiency with Git and GitHub for collaborative software development and version control in large-scale AI projects.
• Distributed Computing: Experience with Apache Spark or similar frameworks for distributed computing in large-scale machine learning applications.
• Natural Language Processing: Familiarity with NLP libraries such as spaCy or NLTK for processing and understanding natural language in human-robot interaction scenarios.
• Embedded Systems: Knowledge of embedded systems programming and real-time operating systems (RTOS) for low-level robot control.
• Data Visualization: Proficiency with data visualization libraries like Matplotlib or Plotly for analyzing and presenting complex robotics and AI data.
Preferred Qualifications:
• Generative AI and LLMs: Experience with GANs, VAEs, and large language models for advanced robotic applications.
• DevOps and MLOps: Strong experience in DevOps/MLOps practices, including CI/CD, model monitoring, and retraining.
• Agile Project Management: Proven experience with Agile methodologies, particularly in AI and robotics projects.
• Open-Source Contribution: Track record of contributions to open-source AI/ML and robotics projects.
• Manufacturing Domain Knowledge: Familiarity with manufacturing and industrial automation requirements, including regulatory compliance.