
Lead Data DevOps - MLOps Expertise

Skills: Data DevOps, MLOps, MLflow, Kubeflow, CI/CD, Infrastructure, Docker, Kubernetes, Apache Spark, Databricks, Python, Pandas, TensorFlow, PyTorch, Jenkins, Git, GitHub Actions, Terraform Cloud, Ansible, Data Analysis
Locations: Hyderabad, Bangalore, Pune, Chennai, Gurgaon

We are seeking a skilled and driven Lead Data DevOps Engineer with strong MLOps expertise to join our team.

The ideal candidate will have a deep understanding of data engineering, automation in data pipelines, and operationalizing machine learning models. The role requires a collaborative professional capable of designing, deploying, and managing scalable data and ML pipelines that align with business goals.

Responsibilities
  • Develop, deploy, and manage CI/CD pipelines for data integration and machine learning model deployment
  • Implement and maintain infrastructure for data processing and model training using cloud-based tools and services
  • Automate data validation, transformation, and workflow orchestration processes
  • Collaborate with data scientists, software engineers, and product teams to ensure seamless integration of ML models into production
  • Optimize model serving and monitoring to enhance performance and reliability
  • Maintain data versioning, lineage tracking, and reproducibility of ML experiments
  • Proactively identify opportunities to improve deployment processes, scalability, and infrastructure resilience
  • Ensure robust security measures are in place to protect data integrity and maintain compliance with regulations
  • Troubleshoot and resolve issues across the data and ML pipeline lifecycle
Requirements
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field
  • 8+ years of experience in Data DevOps, MLOps, or related roles
  • Strong proficiency in cloud platforms such as Azure, AWS, or GCP
  • Experience with Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Ansible
  • Expertise in containerization and orchestration technologies (e.g., Docker, Kubernetes)
  • Hands-on experience with data processing frameworks (e.g., Apache Spark, Databricks)
  • Proficiency in programming languages such as Python, with knowledge of data manipulation and ML libraries (e.g., Pandas, TensorFlow, PyTorch)
  • Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI/CD, GitHub Actions)
  • Experience with version control tools (e.g., Git) and MLOps platforms (e.g., MLflow, Kubeflow)
  • Strong understanding of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana)
  • Excellent problem-solving skills and the ability to work independently and as part of a team
  • Strong communication and documentation skills
Nice to have
  • Experience with DataOps concepts and tools (e.g., Airflow, dbt)
  • Knowledge of data governance and tools like Collibra
  • Familiarity with Big Data technologies (e.g., Hadoop, Hive)
  • Certifications in cloud platforms or data engineering