Job Description
We are seeking a highly skilled Data Platform Engineer to support our distributed platform for real-time streaming data. This role is ideal for someone with deep expertise in Kafka, Kubernetes, and microservices, as well as cloud platforms such as Google Cloud Platform (GCP) or Microsoft Azure. You’ll play a key role in designing, deploying, and optimizing scalable, real-time data pipelines in a cloud-native environment.
Key Responsibilities:
Design, deploy, and manage Kafka clusters in GCP or Azure environments
Operate and maintain Kafka on Kubernetes using Helm, Operators, or custom configurations
Collaborate with cross-functional teams to build data pipelines and microservices that integrate with Kafka
Monitor, troubleshoot, and optimize Kafka performance and reliability
Automate infrastructure and deployment processes using Infrastructure-as-Code tools
Ensure security, compliance, and high availability of Kafka systems
Qualifications:
Strong hands-on experience with Apache Kafka in production environments
Proficiency with Kubernetes (GKE, AKS, or self-managed clusters)
Solid understanding of cloud infrastructure in GCP and/or Azure
Experience with CI/CD pipelines and DevOps practices
Strong programming skills in Python, Java, or similar languages
Familiarity with monitoring tools (e.g., Prometheus, Grafana) and logging systems
Excellent communication and documentation skills
Preferred Qualifications:
Experience with Confluent Platform or Confluent Cloud
Knowledge of Terraform, Helm, or other IaC tools
Background in stream processing technologies (e.g., Kafka Streams, Flink, RabbitMQ Streams) and associated state stores such as RocksDB
Certifications in GCP, Azure, or Kubernetes (CKA/CKAD)
Experience working in agile, cross-functional teams