Data Engineer

🔒 Confidential Employer
Posted 8 May 2026
LOCATION: Remote
TYPE: Full-time
LEVEL: Mid-Senior level
CATEGORY: Information Technology
This employer holds a UK Home Office sponsor licence; sponsorship for this specific role is at the employer's discretion.

SKILLS

Python · SQL · Apache Spark · Hive · Hadoop · Cloud Infrastructure (Azure/GCP/AWS) · ETL · Data Pipelines

FULL DESCRIPTION

Data Engineer

[Employer hidden — sign up to reveal] is hiring a Data Engineer for a full-time, remote position. The role requires 5–8 years of experience with cloud platforms and big-data environments.

Job Description

In this role, you will be an active member of the client's AI Lab, serving as a Data Engineer to support the implementation of solutions built in the AI Lab:

  • Build data pipelines to bring in a wide variety of data from multiple sources within the organization as well as from relevant third-party sources.
  • Collaborate with cross-functional teams to source data and make it available for downstream consumption.
  • Work with the team to provide an effective solution design to meet business needs.
  • Ensure regular communication with key stakeholders; understand any key concerns about how the initiative is being delivered, and surface any risks or issues that have not yet been identified or are not being progressed.
  • Ensure timelines (milestones, decisions, and delivery) are managed and the value of the initiative is achieved, without compromising quality and within budget.
  • Ensure an appropriate and coordinated communications plan is in place for initiative execution and delivery, both internal and external.

Who we are looking for

Competencies & Personal Traits

  • Work as a team player
  • Strong technical background in data, AI, and modern cloud infrastructure.
  • Excellent programming skills in languages such as Python and SQL.
  • Experience with cloud infrastructure providers such as Azure (including Azure Databricks), Google Cloud Platform, or AWS
  • Experience building data pipelines using batch processing with Apache Spark (Spark SQL, Dataset/DataFrame API) or Hive Query Language (HiveQL)
  • Knowledge of big-data ETL processing tools
  • Experience with Hive and Hadoop file formats (Avro / Parquet / ORC)
  • Basic knowledge of scripting (shell/Bash)
  • Experience of working with multiple data sources including relational databases (SQL Server / Oracle / DB2 / Netezza), NoSQL / document databases, flat files
  • Basic understanding of CI/CD tools such as Jenkins, Jira, Bitbucket, Artifactory, Bamboo, and Azure DevOps
  • Basic understanding of DevOps practices using Git version control
  • Ability to debug, fine-tune, and optimize large-scale data processing jobs
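As a rough illustration of the batch ETL pattern the bullets above describe (extract from a source, transform, load into a downstream table), here is a minimal pure-Python sketch. All table and column names are hypothetical; a production pipeline would typically use Spark or a dedicated ETL tool rather than SQLite:

```python
import sqlite3

# Hypothetical example: raw order records arrive from an upstream system;
# we normalise country codes, compute line totals, and load the result
# into a reporting table for downstream consumption.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract stage: the raw source table as it might land from an upstream feed.
cur.execute("CREATE TABLE raw_orders (id INTEGER, country TEXT, qty INTEGER, unit_price REAL)")
cur.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?, ?)",
    [(1, "uk", 2, 9.99), (2, "US", 1, 24.50), (3, "uk", 5, 3.00)],
)

# Transform stage: uppercase the country codes and compute per-order totals.
rows = cur.execute("SELECT id, country, qty, unit_price FROM raw_orders").fetchall()
transformed = [(oid, country.upper(), qty * price) for oid, country, qty, price in rows]

# Load stage: write the cleaned rows into the downstream reporting table.
cur.execute("CREATE TABLE order_totals (id INTEGER, country TEXT, total REAL)")
cur.executemany("INSERT INTO order_totals VALUES (?, ?, ?)", transformed)
conn.commit()

for row in cur.execute("SELECT id, country, total FROM order_totals ORDER BY id"):
    print(row)
```

The same extract → transform → load shape scales up directly: in Spark, the transform step would be a DataFrame operation over partitioned Parquet/ORC files instead of an in-memory list comprehension.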

Work Experience

5–8 years of broad experience working with enterprise IT applications in cloud-platform and big-data environments.

Professional Qualifications

Certifications related to data and analytics would be an added advantage.

Education

Master's or bachelor's degree in a STEM field (Science, Technology, Engineering, or Mathematics)

Job Details

  • Job Category: Information Technology
  • Job Type: Full Time
  • Job Location: Remote
  • Languages: English
  • Experience: 5 to 8 years