About the Role
What you’ll work on
Design, build, and operate services that move massive datasets between sources and destinations with strong guarantees on correctness and latency.
Analyse and optimise the end-to-end data pipeline for throughput, tail latency, and cost.
Build real-time and streaming capabilities (queues, webhooks, event streams) on top of our existing batch systems.
Evolve a multi-cloud, multi-region architecture for resilience and data locality.
Improve our caching and low-latency APIs that power personalisation and other real-time use cases.
Lead and contribute to technical design docs, run design reviews, and drive projects from idea → launch → iteration.
Help define SLOs, monitoring, alerting, and incident response for the systems you own.
Location: Remote (North America time zones)
Compensation: $350k + meaningful equity
Requirements
What we’re looking for
4+ years of experience in backend or systems engineering, with a focus on distributed systems.
Deep understanding of concurrency, consistency models, fault tolerance, and performance in distributed architectures.
Strong experience in at least one of: Go, Java, Scala, Rust, or TypeScript/Node.
Hands-on experience with cloud infrastructure (AWS/GCP/Azure) and container orchestration (Kubernetes or similar).
Comfort working with datastores used in high-throughput systems (e.g. Postgres, columnar warehouses, key-value stores, caches).
You’ve owned production services end-to-end: design, implementation, rollout, on-call, and continuous improvement.
Strong product sense: you care about solving real customer problems, not just shipping code.
Nice to have
Experience with large-scale data pipelines (streaming and batch) and tools such as Kafka or Pulsar.
Background in data/analytics, growth, or marketing tech.
Prior experience in a high-growth startup where you’ve seen systems and teams scale quickly.
Why this role is interesting
Hard technical problems: high-throughput, low-latency data systems across multiple regions and clouds.
Massive leverage: small, senior team; your work directly impacts every customer and internal product.
Autonomy: high trust, high ownership; engineers drive projects from idea to production.
Upside: compensation designed to be competitive with top-tier SaaS and infra companies, plus meaningful equity.
About the Company
We’re a fast-growing B2B SaaS unicorn building a data and AI platform used by product, data, and marketing teams around the world. Our product moves large volumes of data between warehouses, operational tools, and real-time experiences, so reliability is mission-critical.
We’re hiring a Distributed Systems Engineer to own the services and infrastructure that power our data movement and real-time APIs.

