You will join an AI & Analytics team within a public-sector data platform, working on data-driven solutions that support a more efficient, transparent and evidence-based government. As a Data Scientist, you build and operationalise advanced machine learning and AI models that turn large, complex datasets into actionable insights for policy makers and operational teams.
You collaborate with data engineers, data architects and domain experts to prepare, model and transform data for AI use cases. You apply state-of-the-art techniques in machine learning, deep learning and natural language processing (NLP) and ensure your work is transparent, reproducible and ethically sound. You also help shape the AI & Analytics strategy and support the broader digital transformation of the public sector.
What you will do
You will design, train and implement machine learning and AI models on large, heterogeneous datasets, and translate business questions into analytical experiments and AI use cases. You prepare data through cleaning, feature engineering and transformation, and evaluate models using robust validation and tuning techniques.
You will explore and apply modern LLM tooling such as LangChain and/or LangGraph where relevant, and leverage cloud platforms (e.g. Azure, Databricks) for scalable data processing and model development. You will also analyse data flows, optimise queries and contribute to performant and reliable data pipelines.
You will communicate insights through clear reports, dashboards and presentations tailored to different audiences, from management summaries to more technical documentation. You actively share knowledge within the team, coach less experienced colleagues, and contribute to reusable components and best practices for AI and analytics.
Throughout, you work within a secure, controlled analytics environment, ensuring data privacy, minimal data usage and a value-driven approach to business logic.
What are we looking for?
Core data & AI skills
- Solid experience with machine learning algorithms and model lifecycles.
- Hands-on experience with NLP and pre-trained models.
- Strong skills in data cleaning, feature engineering and dataset preparation.
- Experience with model evaluation, including cross-validation and hyperparameter tuning.
- Proficiency in Python with a focus on data/ML libraries (e.g. pandas, scikit-learn, PyTorch, TensorFlow, etc.).
Data platforms & engineering
- Proven experience with relational databases and SQL (e.g. Oracle).
- Experience in optimising/tuning databases and queries.
- Experience analysing data streams and building or working with data pipelines.
- Experience with reporting and analytical outputs (dashboards, reports, KPI views).
- Familiarity with cloud platforms, ideally Azure; Databricks is a plus.
- Experience with scripting (e.g. Shell, Perl, Python) in data contexts.
Tools & ecosystem
- Experience with modern AI tooling such as LangChain and/or LangGraph.
- Comfortable working in a secure, governed analytics environment.
Soft skills & language
- Strong analytical and conceptual thinking; able to make well-reasoned, evidence-based choices.
- Clear communicator who can adapt messaging for different audiences (management vs. technical).
- Team player who also likes autonomy and a results-driven way of working.
- Passionate about optimisation and continuous improvement, with patience for complex stakeholder environments.
- Ability to write concise notes, documentation and summaries for both business and technical audiences.
- Language requirements: Native-level Dutch (CEFR C2), both spoken and written.
What do we offer?
Location: Ghent or Brussels (hybrid, work split between two offices with remote work possible).
Contract: Freelance or Permanent
Duration: 01/12/2025 - 15/07/2027
