Our customer is searching for a Freelance Data Engineer!
As a Senior Data Engineer, you will design, build, and maintain scalable data pipelines to enhance our analytical and business intelligence capabilities.
You’ll work closely with data scientists, analysts, and stakeholders to ensure data quality, accessibility, and security, while troubleshooting and improving data systems. Cross-functional collaboration is key to gathering requirements and delivering data-driven solutions.
Key Responsibilities
- Design and build robust, scalable data pipelines using PySpark to process large datasets.
- Utilize Python's object-oriented programming principles to develop clean, reusable, and modular code for various data operations.
- Write complex SQL queries to perform data extraction, transformation, and loading (ETL) tasks.
- Collaborate with cross-functional teams to gather requirements and deliver data-driven solutions.
- Maintain data security and compliance with company policies and regulatory standards.
- Troubleshoot and debug issues within data pipelines and related applications.
- Continuously improve existing systems by integrating new data management technologies and software engineering tools.
- Mentor colleagues who want to move into data engineering but have little or no experience.
Role Requirements
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related field.
- Minimum of 3 years of experience in data analytics or related areas, in particular as a data engineer, preferably within the automotive industry.
- Proficient in PySpark, Python (with a strong understanding of object-oriented programming), and SQL.
- Experience working with Airflow, GitHub, Artifactory, etc.
- Experience working with open-source big data tool stacks.
- Knowledge of data lakes.
- Experience with data modeling, warehousing, and ETL processes.
- Strong analytical skills and ability to work with large, complex data sets.
- Excellent problem-solving abilities and detail orientation.
- Familiarity with cloud platforms (e.g., Azure), Databricks, and related services is a plus.
- Understanding of internal Volvo data flow is a bonus.