We are looking for a seasoned Data Engineer to join our growing data team.
Responsibilities
- Own and lead the development of scalable, efficient, and robust data pipelines (ETL/ELT) from ingestion to delivery
- Design and implement data processing workflows to support business analytics and machine learning
- Drive the migration of our current Hadoop-based infrastructure to a Spark-based architecture on AWS
- Collaborate with cross-functional teams including data analysts, data scientists, and software engineers to define data requirements and optimize data flow
- Ensure data quality, observability, and compliance across systems
Requirements
- 4+ years of hands-on experience as a Data Engineer
- Strong programming skills in Python and advanced proficiency in SQL
- Hands-on experience developing ETL/ELT pipelines using Spark and SQL
- Experience working with cloud-hosted data warehouses such as Hive or Snowflake
- Solid understanding of, and experience with, the Hadoop ecosystem and distributed computing tools (e.g., EMR, Hive, Presto/Trino, Athena, AWS Glue)
- Proven expertise in data warehousing concepts, data modeling, and best practices
- BA/BSc in Computer Science, Engineering, Information Systems, or equivalent
- Excellent analytical skills and strong attention to detail
- Strong communication skills in English (written and verbal)
- Demonstrated ability to work both independently and in a collaborative team environment
Nice to Have
- Familiarity with Kafka, Confluent, Fluentd, and Airflow
- Experience with data visualization tools (e.g., Tableau)
- Previous experience in the media or TV industry
Benefits
- Flexible work arrangements
- 20 paid Non-Operational Allowance days per year, to be used for personal recreation and compensated in full
- Collaborative and supportive team culture
- Competitive salary
- Support from our caring HR team