Employment Type: Full-Time
Location: Remote (Europe-based candidates only)
Experience: 5–7 Years
Recruitment Partner: HireOn (hiring for an international client)
Overview
HireOn is recruiting an experienced Data Engineer for its international client based in Sweden. The role requires strong hands-on expertise in Databricks, Teradata, and Neo4j, with proven experience in designing scalable data pipelines, integrating diverse datasets, and building high-performance data solutions.
This is a remote position, but only candidates currently residing in Europe are eligible due to project compliance requirements.
Key Responsibilities
Data Engineering & Development
- Design and build scalable data pipelines using Databricks (Spark/PySpark).
- Develop and maintain ETL/ELT workflows across multi-environment data platforms.
- Integrate structured and unstructured datasets for analytics and reporting.
- Build and optimize Teradata data models for performance and reliability.
- Implement graph-based solutions using Neo4j (data modeling, Cypher).
Solution Design & Architecture
- Collaborate with solution architects and business stakeholders to define data requirements.
- Contribute to system design discussions and architecture enhancements.
- Ensure strong data quality, validation, and governance across all pipelines.
Performance Optimization
- Troubleshoot and tune Spark jobs, Teradata SQL queries, and end-to-end workflows.
- Maintain high availability and performance of data pipelines.
- Monitor and automate data operations where possible.
Collaboration & Documentation
- Work closely with BI, Data Science, and Platform Engineering teams.
- Create clear documentation for architectures, pipelines, and technical processes.
- Communicate effectively with remote and multicultural teams.
Required Skills & Experience
- 5–7 years of hands-on experience as a Data Engineer.
- Strong expertise in:
- Databricks (Spark, PySpark, Delta Lake)
- Neo4j (graph modeling, Cypher queries)
- Teradata (SQL, performance tuning, data modeling)
- Proficiency in Python for scripting and development.
- Experience working with cloud platforms (Azure/AWS/GCP), with Azure preferred.
- Strong understanding of ETL/ELT, data modeling, and distributed processing.
- Excellent analytical, problem-solving, and communication skills.
- Ability to work independently in a remote, cross-cultural team setup.
Nice to Have
- Experience with CI/CD pipelines for data workflows.
- Knowledge of data governance, quality frameworks, or metadata management.
- Exposure to real-time data processing tools (Kafka, Event Hub, etc.).
Additional Details
- Remote role, open strictly to Europe-based candidates due to project compliance requirements.
- Opportunity to work with a global team using modern data technologies.