Data Engineer - Kafka, Python and Hadoop

🔒 Confidential Employer
Posted 20 April 2026
LOCATION
Birmingham
TYPE
Contract
LEVEL
Mid-Senior level
CATEGORY
Technology
This employer holds a UK Home Office sponsor licence; sponsorship for this specific role is at the employer's discretion.

SKILLS

Kafka, Python, Hadoop, Scala, Spark, Avro, Protobuf

FULL DESCRIPTION

Job Title: Data Engineer - Kafka, Python and Hadoop
Location: Sheffield, Birmingham or London (three days a week)
Salary/Rate: £520 per day
Start Date: 20/04/2026
Job Type: Contract (7 months)

Job Responsibilities/Objectives:

* Design and build Kafka-based streaming applications (Kafka Streams/ksqlDB) in Scala/Python for transformation, enrichment, and routing.
* Implement end-to-end streaming pipelines: producers, stream processors, and consumers with strong data quality, idempotency, and DLQ patterns.
* Model topics, schemas, and contracts (Avro/Protobuf/JSON) and maintain backward/forward compatibility.
* Develop batch/stream interoperability: Spark/Structured Streaming jobs for aggregation, feature generation, and storage in Parquet/ORC.
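
The idempotency and dead-letter-queue (DLQ) patterns mentioned above can be sketched as follows. This is a simplified in-memory illustration, not a production implementation: a real pipeline would use a Kafka client library (e.g. confluent-kafka), and the DLQ would be a separate Kafka topic rather than a list. The `process_stream` function and its message shape are hypothetical.

```python
def process_stream(messages, transform):
    """Consume a batch of messages: skip duplicate deliveries,
    transform good records, and route failures to a DLQ instead
    of crashing the pipeline."""
    seen_keys = set()       # idempotency: track already-processed keys
    output, dlq = [], []
    for msg in messages:
        key = msg.get("key")
        if key in seen_keys:          # duplicate (at-least-once delivery)
            continue
        try:
            output.append(transform(msg["value"]))
            seen_keys.add(key)
        except Exception as exc:      # poison message -> dead-letter queue
            dlq.append({"message": msg, "error": str(exc)})
    return output, dlq


# Example: a redelivered record is skipped, a malformed one goes to the DLQ.
messages = [
    {"key": "a", "value": "1"},
    {"key": "a", "value": "1"},   # duplicate delivery
    {"key": "b", "value": "oops"} # cannot be parsed as an int
]
out, dlq = process_stream(messages, int)
# out -> [1]; dlq holds the single failed message with its error
```

Tracking keys in a set stands in for whatever idempotency store (state store, database upsert, transactional producer) the real design would use.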

Required Skills/Experience:

The ideal candidate will have the following:
* Kafka application development: Kafka Streams/ksqlDB, producer/consumer patterns, partitioning/serialization, exactly-once/at-least-once semantics.
* Languages: Strong in Scala and/or Python for streaming apps; familiarity with testing frameworks and CI for stream processors.
* Schema management: Avro/Protobuf/JSON, schema registry usage, compatibility strategies.
* Stream/batch processing: Spark (including Structured Streaming), Parquet/ORC, partitioning/bucketing, performance tuning.
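
The "compatibility strategies" point above can be illustrated with a toy backward-compatibility check. This is a simplified sketch of the idea only, not the Confluent Schema Registry algorithm or the full Avro resolution rules (type promotions, aliases, unions are all omitted); the field representation is hypothetical. Backward compatibility here means a new (reader) schema can still read data written with the old (writer) schema.

```python
def is_backward_compatible(old_fields, new_fields):
    """Toy check: the new schema may add fields only if they carry
    defaults, and may not change the type of an existing field.
    Fields are dicts: name -> {"type": ..., "default": optional}."""
    for name, spec in new_fields.items():
        if name not in old_fields and "default" not in spec:
            return False  # new required field: old records lack it
        if name in old_fields and old_fields[name]["type"] != spec["type"]:
            return False  # type changed (no promotion rules modeled)
    return True


# Example: adding an optional field is safe; a new required field is not.
old = {"id": {"type": "string"}}
safe = {"id": {"type": "string"},
        "region": {"type": "string", "default": ""}}
unsafe = {"id": {"type": "string"},
          "region": {"type": "string"}}
```

Removing a field is trivially backward compatible under this rule (the reader simply ignores it), which matches the intuition behind the Avro/Protobuf guidance of evolving schemas via optional, defaulted fields.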

If you are interested in this opportunity, please apply now with your updated CV in Microsoft Word or PDF format.
