You will join a product-driven team working on a next-generation Digital Twin SaaS platform. The product provides a real-time, end-to-end view of complex industrial processes, enabling data-driven decision-making at scale.
As a Data Engineer, you operate at the intersection of data integration, platform engineering, and business understanding. You help translate complex business data into a robust, scalable graph-based data model that powers advanced analytics and optimization use cases. This is a hands-on engineering role within a fast-growing product environment, working closely with backend engineers, data scientists, DevOps, and solution experts.
What you’ll do
- Design, build, and evolve a scalable data integration framework that feeds a graph-based digital twin of large industrial processes.
- Integrate data from diverse client systems and sources, ensuring consistency, performance, and correctness across complex data flows.
- Develop and optimize ETL pipelines capable of processing large-scale datasets efficiently in cloud-based environments.
- Continuously improve pipeline performance by reducing processing times, increasing throughput, and preparing the platform for future scale.
- Contribute to the design and implementation of new features in the data integration layer, supporting product evolution.
- Collaborate closely with cross-functional teams across the full development lifecycle, from solution design and implementation to testing and rollout.
- Write clean, maintainable, and well-documented code that supports long-term platform stability.
- Take ownership of data integration tracks within projects, ensuring clear communication, documentation, and alignment with stakeholders.
What are we looking for?
- You have 2–3 years of experience as a Data Engineer, preferably in a cloud-based or product-oriented environment.
- You have solid hands-on experience with ETL concepts and big data processing, and you are comfortable working with large and complex datasets.
- You are proficient in Scala, PySpark, and SQL, and you have experience working with platforms such as Azure Synapse and/or Databricks.
- You understand modern system design principles and have experience with source control, CI/CD pipelines, and cloud platforms (Azure, AWS, or GCP).
- You are comfortable collaborating across disciplines and can bridge technical implementation with business needs.
- You take ownership of your work end to end and approach problems with a solution-oriented mindset.
- You are fluent in English and communicate clearly in a technical and business context.
- You hold a university degree in Information Technology or a related field.
What do we offer?
Location: Mechelen (hybrid)
Start date: As soon as possible
Duration: 3 months, extension possible
Contract: Freelance
