Lead Data Engineer

Job Description

Lead Data Engineer (Contract)

Location: London (2 days onsite per week)

Day Rate: £500-£550 per day (Outside IR35)

Duration: 6-month contract

Start: ASAP


Overview

We are looking for a Lead Data Engineer to join a high-performing team delivering advanced data platforms that support financial institutions in tackling fraud and financial crime.

In this role, you will help design and evolve a modern Databricks + AWS lakehouse architecture, enabling analytics, machine learning, and investigative teams to generate actionable insights from large-scale datasets.

This is a hands-on leadership position focused on building robust, scalable, and governed data solutions using modern cloud technologies.


The Role

  • Own the end-to-end design, build, optimisation, and support of scalable Spark / PySpark data pipelines (batch and streaming)
  • Define and implement lakehouse architecture standards (medallion model: bronze, silver, gold), including governance, lineage, and data quality controls
  • Design and manage secure data ingestion frameworks (e.g. Apache NiFi, APIs, SFTP/FTPS) for internal and external data sources
  • Architect and maintain secure AWS-based data infrastructure (S3, IAM, KMS, Glue, Lake Formation, Lambda, Step Functions, CloudWatch, etc.)
  • Implement orchestration using tools such as Airflow, Databricks Workflows, and Step Functions
  • Champion data quality, observability, and reliability (SLAs, monitoring, alerting, reconciliation)
  • Drive CI/CD best practices for data platforms (infrastructure as code, automated testing, versioning, environment promotion)
  • Mentor engineers on distributed data processing, performance optimisation, and cost efficiency
  • Collaborate with data science, product, and compliance teams to translate requirements into scalable data solutions



Required Skills & Experience

  • Strong experience as a Senior or Lead Data Engineer with ownership of end-to-end data solutions
  • Expertise in Databricks, PySpark / Spark, SQL, and Python
  • Proven experience building and optimising large-scale data pipelines in production environments
  • Strong knowledge of cloud data architectures, particularly within AWS
  • Experience designing scalable data models and reusable frameworks
  • Hands-on experience with orchestration tools such as Airflow or similar
  • Solid understanding of data governance, lineage, and compliance requirements
  • Experience with CI/CD pipelines and infrastructure as code (e.g. Terraform, CloudFormation)
  • Strong communication skills with the ability to collaborate across technical and non-technical teams


What We’re Looking For

  • A hands-on technical leader who can design, build, and deliver solutions independently
  • Someone comfortable working with high-volume, high-throughput data systems
  • Strong problem-solving skills and a pragmatic, delivery-focused mindset
  • Experience mentoring engineers and setting engineering standards and best practices
  • Ability to balance technical excellence with delivery timelines

