Key Responsibilities
- Design, develop, and deploy machine learning, deep learning, and AI models for production.
- Build and maintain scalable data pipelines for feature extraction, model training, and inference.
- Implement and manage end-to-end MLOps workflows for model versioning, monitoring, and CI/CD automation.
- Conduct exploratory data analysis (EDA), feature engineering, and model optimization.
- Work with structured and unstructured datasets, including text, images, and time-series data.
- Collaborate with data scientists, software engineers, and stakeholders to translate business needs into ML solutions.
- Develop model evaluation processes, A/B testing frameworks, and continuous monitoring of model performance.
- Deploy models using cloud platforms (AWS, Azure, GCP) or containerized environments (Docker/Kubernetes).
- Create technical documentation, architecture designs, and ML best practice guidelines.
- Stay updated with the latest advancements in AI, ML, NLP, and deep learning technologies.
Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, Statistics, or a related field.
- Strong hands-on experience with Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, and LightGBM.
- Solid understanding of machine learning algorithms, deep learning, NLP, and model lifecycle management.
- Experience with cloud ML services such as AWS SageMaker, Azure ML, and Google Vertex AI.
- Experience with data processing and orchestration frameworks (Spark, Databricks, Airflow).
- Strong understanding of Linux environments, Docker, Kubernetes, and CI/CD pipelines.
- Experience with SQL and NoSQL databases.
- Familiarity with software engineering principles, version control (Git), and Agile methodologies.