DevOps Engineer

We are hiring for a Senior DevOps Engineer (Permanent)

Full-time

UK

About the role

We're looking for a senior, hands-on Data Platform & DevOps Engineer to act as the critical link between Data Science and Engineering/Operations. This is an exciting opportunity to shape how data science workflows are built, standardised, and scaled, taking experiments from the notebook all the way through to reliable, automated production deployments. You'll collaborate closely with data scientists, engineers, and platform teams, bringing together deep analytical tooling knowledge and modern DevOps practices to build a robust, production-ready data science environment.

What you'll be doing

- Serve as the technical bridge between Data Science and Engineering/Operations, driving collaboration and alignment across teams
- Enable and optimise analytics workflows using Python, R, Jupyter Notebooks/JupyterLab, and analytical data formats such as Parquet
- Own and evolve a multi-user JupyterLab environment, making it a best-in-class collaborative space for data scientists
- Extend and maintain an existing Python framework for secure, enterprise-grade data integrations
- Lead advanced Git practices across the organisation: branching strategies, merging, conflict resolution, code reviews, and repository standards
- Design, build, and maintain CI/CD pipelines that support the full build, test, and deploy lifecycle for analytics workloads
- Containerise data science environments using Docker, ensuring portability and reproducibility at scale
- Partner with cloud and infrastructure teams to deploy and operate analytics workloads in cloud environments
- Champion engineering best practices around code quality, security, scalability, and operational stability

What we're looking for

Essential

- Strong, demonstrable Python experience; this is a core requirement, and working knowledge of R is also expected
- Extensive hands-on experience with Jupyter Notebooks and JupyterLab in collaborative or multi-user settings
- Solid understanding of analytical data formats, particularly Parquet
- Advanced Git proficiency: branching, merging, conflict resolution, and enforcing team-wide standards
- Proven track record designing and maintaining CI/CD pipelines for data or analytics workloads
- Practical experience with Docker and containerised environments
- A background that spans both data science and engineering/operations; you're comfortable in both worlds
- Strong communication skills and a collaborative, solutions-focused mindset

Desirable

- Experience working with enterprise data platforms or large-scale analytics ecosystems
- Familiarity with one or more major cloud providers: AWS, Azure, or GCP
- A history of industrialising data science or machine learning workflows
- Knowledge of data governance, access controls, and compliance frameworks

Location:
United Kingdom
Job Type:
Full-time
Category:
IT