Member of Technical Staff, Data Infrastructure
[Employer hidden — view at passion-project.co.uk] is seeking a Member of Technical Staff, Data Infrastructure to build and scale data infrastructure for AI research and business intelligence. This role involves managing pipelines for multimodal datasets, CDC streams, production databases, and data transformation layers. Requires 4+ years of data engineering experience and strong Python and SQL skills.
Location: Remote | Employment Type: Full time | Compensation: $240K – $290K
About the Role
We're looking for a Data Engineer to build and scale the data infrastructure that powers [Employer hidden]'s AI research and business intelligence. You'll own critical data pipelines spanning production databases, analytics warehouses, and large-scale ML training datasets. This role sits at the intersection of data engineering, ML infrastructure, and analytics—you'll enable both world-class research and data-driven business decisions.
You'll work on challenging problems at scale: managing billions of rows of multimodal training data, building CDC streams from production systems, optimizing vector databases for ML workflows, and creating the foundational data layer that the entire company relies on.
Technical Stack
Our data infrastructure spans multiple specialized systems: LanceDB for vector storage and dataset versioning with multimodal training data, ClickHouse as our analytics warehouse receiving CDC streams from production Postgres via AWS Kinesis, and BigQuery for training run logs and evaluation results. We use Ray for large-scale distributed data processing on managed Kubernetes clusters, handling preprocessing, feature generation, and dataset curation at scale.
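The CDC flow described above (change events from production Postgres flowing through AWS Kinesis into ClickHouse) can be sketched conceptually. The event shape below (`op`/`table`/`key`/`row`) is an illustrative assumption loosely modeled on common change-event formats, not the actual schema used here:

```python
# Conceptual sketch of applying CDC events to an analytics replica.
# The event format is an illustrative assumption, not the real pipeline schema.

def apply_cdc_event(replica: dict, event: dict) -> None:
    """Apply one insert/update/delete change event to an in-memory replica."""
    table = replica.setdefault(event["table"], {})
    op = event["op"]
    if op in ("insert", "update"):
        table[event["key"]] = event["row"]   # upsert the new row image
    elif op == "delete":
        table.pop(event["key"], None)        # tolerate deletes for unseen keys
    else:
        raise ValueError(f"unknown op: {op}")

replica: dict = {}
events = [
    {"op": "insert", "table": "users", "key": 1, "row": {"id": 1, "plan": "free"}},
    {"op": "update", "table": "users", "key": 1, "row": {"id": 1, "plan": "pro"}},
    {"op": "delete", "table": "users", "key": 2},
]
for e in events:
    apply_cdc_event(replica, e)

print(replica["users"])
```

In a production stream, ordering and exactly-once delivery are the hard parts; the sketch only shows the apply step.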
We're actively building out our data platform—introducing dbt for standardized transformations, improving dataset versioning and data lineage tracking, scaling data sourcing pipelines, and establishing better data quality practices. We use Prometheus and Grafana for monitoring, and Terraform for infrastructure management.
What You'll Do
- Build and own pipelines for the creation, curation, and processing of large-scale multimodal datasets, including vector database (LanceDB) management and query optimization for ML metadata
- Build and own ETL and CDC pipelines from production Postgres to analytics warehouses such as ClickHouse
- Build standardized data transformation layers using dbt to replace ad-hoc SQL queries and create maintainable data models for business analytics
- Manage production databases (Postgres, ClickHouse) and optimize for performance and reliability
What You'll Need
- 4+ years of industry experience in data engineering
- Strong knowledge of Python
- Experience with data quality, deduplication, and cleaning at scale
- Comfortable working with cloud storage (S3) and managing large datasets
- Experience building and maintaining ETL/CDC pipelines at scale
- Strong SQL skills and experience with multiple database systems (Postgres, columnar databases like ClickHouse/Redshift)
- Humility and open-mindedness; at [Employer hidden] we love to learn from one another
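One common approach to the "deduplication and cleaning at scale" requirement above is exact-duplicate removal via content hashing, which keeps only hashes in memory rather than full records. The record shape here is hypothetical; this is a minimal sketch, not a description of the team's actual pipeline:

```python
import hashlib
import json

def dedup_records(records):
    """Drop exact duplicates by hashing a canonical serialization of each record.

    Streaming-friendly: only the hash set is held in memory, not the records.
    """
    seen = set()
    for rec in records:
        # Canonicalize so key order does not affect the hash.
        digest = hashlib.sha256(
            json.dumps(rec, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield rec

records = [
    {"id": 1, "text": "hello"},
    {"text": "hello", "id": 1},   # same content, different key order
    {"id": 2, "text": "world"},
]
unique = list(dedup_records(records))
print(len(unique))
```

At billions of rows the same idea is typically sharded by hash prefix across workers; near-duplicate detection (e.g. MinHash) is a separate, fuzzier problem.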
Nice to Have
- Experience with one or more frameworks for large-scale data processing (e.g., Spark, Ray) and one or more ML frameworks (e.g., PyTorch, JAX)
- Knowledge of cloud platforms (AWS, GCP, or Azure) and their data service offerings
- Knowledge of data privacy and data security best practices
- Experience with business intelligence and visualization tools (e.g., Looker, Tableau, Power BI, Metabase, or similar)
- Experience in a high-growth startup environment or similar fast-paced setting
Working at [Employer hidden]
[Employer hidden] strives to recruit and retain exceptional talent from diverse backgrounds while ensuring pay equity for our team. Our salary ranges are based on competitive market rates for our size, stage and industry, and salary is just one part of the overall compensation package we provide.
We're committed to creating a space where our employees can bring their full selves to work and have equal opportunity to succeed.
More about [Employer hidden]: Universal World Simulator, GWM-1, Gen-4.5, General World Models, Robotics SDK, Conversational Real-time Agents, [Employer hidden] Studios.