AWS Data Engineer
🔒 Confidential Employer
Posted 22 March 2026
LOCATION
London
TYPE
Full-time
LEVEL
Mid-Senior level
CATEGORY
IT Services
This employer holds a UK Home Office sponsor licence; sponsorship for this specific role is at the employer's discretion.
SKILLS
Python
Apache Spark
AWS Glue
ETL/ELT
Data Pipelines
S3
Data Engineering
Data Modeling
FULL DESCRIPTION
AWS Data Engineer
London, United Kingdom | Posted on 04/12/2025
Job Information
- Date Opened 04/12/2025
- Job Type Permanent
- Industry IT Services
- Work Experience 5+ years
- City London
- Province City of London
- Country United Kingdom
- Postal Code EC1A
About Us
We provide end-to-end IT solutions and services, including application services, data and analytics services, AI/ML technologies, and professional services, across the UK and EU markets.
Job Description
(10+ years of experience required)
Role Overview
We are building a next-generation data platform and are looking for an experienced Senior Data Engineer to help design, develop, and optimize large-scale data solutions. This role involves end-to-end data engineering, modern cloud-based development, and close collaboration with cross-functional stakeholders to deliver reliable, scalable, and high-quality data products.
Key Responsibilities
- Design, develop, and maintain scalable, testable, and high-performance data pipelines using Python and Apache Spark.
- Orchestrate data workflows using cloud-native services such as AWS Glue, EMR Serverless, Lambda, and S3.
- Apply modern engineering practices including modular design, version control, CI/CD automation, and comprehensive testing.
- Support the design and implementation of lakehouse architectures leveraging table formats such as Apache Iceberg.
- Collaborate with business stakeholders to translate requirements into robust data engineering solutions.
- Build observability and monitoring into data workflows; implement data quality checks and validations.
- Participate in code reviews, pair programming, and architecture discussions to promote engineering excellence.
- Continuously expand domain knowledge and contribute insights relevant to data operations and analytics.
What You’ll Bring
- Strong ability to write clean, maintainable Python code using best practices such as type hints, linting, and automated testing frameworks (e.g., pytest).
- Deep understanding of core data engineering concepts including ETL/ELT pipeline design, batch processing, schema evolution, and data modeling.
- Hands-on experience with Apache Spark, or the willingness and ability to learn large-scale distributed data processing.
- Familiarity with AWS data services such as S3, Glue, Lambda, and EMR.
- Ability to work closely with business and technical stakeholders and translate needs into actionable engineering tasks.
- Strong team collaboration skills, especially within Agile environments, emphasizing shared ownership and high transparency.
Nice-to-Have Skills
- Experience with Apache Iceberg or similar lakehouse table formats (Delta Lake, Hudi).
- Practical exposure to CI/CD tools such as GitLab CI, GitHub Actions, or Jenkins.
- Familiarity with data quality frameworks such as Great Expectations or Deequ.
- Interest or background in financial markets, analytical datasets, or related business domains.