We are looking for Backend Engineers with 1-3 years of production experience shipping and supporting backend code. You will join our team and own the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools, and research.
Responsibilities
- Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
- Own end-to-end reliability: performance, fault tolerance, and cost efficiency from source to sink.
- Instrument every job with tracing, structured logs, and Prometheus metrics so each one reports its own health (see the sketch after this list).
- Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
- Partner with DevOps to containerize workloads and automate deployments.
- Collaborate with stakeholders to verify data completeness, automate reconciliation checks, and replay late or corrected records to keep datasets pristine.
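As a rough illustration of the instrumentation described above, here is a minimal Rust sketch, assuming the `tracing`, `tracing-subscriber`, and `prometheus` crates; the metric names and the `handle_record` function are hypothetical stand-ins for a real ingestion job:

```rust
use prometheus::{register_histogram, register_int_counter, Histogram, IntCounter};
use tracing::{info, instrument};

// Each call gets its own span; `skip` keeps the metric handles out of the span fields.
#[instrument(skip(processed, latency))]
fn handle_record(record_id: u64, processed: &IntCounter, latency: &Histogram) {
    let timer = latency.start_timer(); // times the work until observe_duration()
    // ... decode, validate, and write the record here ...
    timer.observe_duration();
    processed.inc();
    info!(record_id, "record processed"); // structured event inside the span
}

fn main() {
    // Emit spans and events as structured logs on stdout.
    tracing_subscriber::fmt::init();

    // Register job-level metrics in the default Prometheus registry; a real
    // job would also expose them on an HTTP /metrics endpoint for scraping.
    let processed = register_int_counter!(
        "records_processed_total",
        "Records successfully processed"
    )
    .expect("metric registration");
    let latency = register_histogram!(
        "record_latency_seconds",
        "Per-record processing latency"
    )
    .expect("metric registration");

    handle_record(42, &processed, &latency);
}
```

The Grafana dashboards and alerts mentioned above would then be built on whatever endpoint scrapes these metrics.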
Requirements
- Proficient in Rust - comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
- Stream-processing - have built or maintained high-throughput pipelines on NATS (ideally) or Kafka.
- Deep systems engineering skills - you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation, and can instrument code with tracing, metrics, and logs.
- ClickHouse (or similar OLAP) - able to design MergeTree tables, reason about partitioning and ORDER BY keys, and optimise bulk inserts (a sketch follows this list).
- Cloud - have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types and autoscaling groups.
- Nice-to-have - exposure to blockchain or high-volume financial data streams.
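To make the stream-processing and ClickHouse bullets concrete, here is a hedged sketch of a consumer that reads messages from NATS and flushes them to ClickHouse in bulk. It assumes the `async-nats`, `futures`, and `tokio` crates (the subscribe API differs slightly across `async-nats` versions); the server address, subject name, table schema, and `flush_to_clickhouse` helper are illustrative, not a description of our actual pipeline:

```rust
use futures::StreamExt;

// Illustrative MergeTree DDL for the sink table; the columns, partition
// key, and ORDER BY key are assumptions for the sake of the example.
const DDL: &str = r#"
CREATE TABLE IF NOT EXISTS trades (
    ts     DateTime64(3),
    symbol LowCardinality(String),
    price  Float64,
    qty    Float64
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(ts)
ORDER BY (symbol, ts)
"#;

// Stub for the bulk insert; a real job would issue one INSERT per batch,
// e.g. via the `clickhouse` crate or ClickHouse's HTTP interface.
async fn flush_to_clickhouse(batch: &[Vec<u8>]) -> Result<(), async_nats::Error> {
    let _ = DDL; // the DDL would be applied once, out of band
    println!("flushing {} records", batch.len());
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), async_nats::Error> {
    // Hypothetical server address and subject.
    let client = async_nats::connect("nats://127.0.0.1:4222").await?;
    let mut sub = client.subscribe("trades.events").await?;

    // Buffer and flush in large blocks: every ClickHouse INSERT creates a
    // part that must later be merged, so few big inserts beat many tiny ones.
    let mut batch: Vec<Vec<u8>> = Vec::with_capacity(10_000);
    while let Some(msg) = sub.next().await {
        batch.push(msg.payload.to_vec());
        if batch.len() >= 10_000 {
            flush_to_clickhouse(&batch).await?;
            batch.clear();
        }
    }
    // Flush whatever is left when the subscription ends.
    if !batch.is_empty() {
        flush_to_clickhouse(&batch).await?;
    }
    Ok(())
}
```

Back-pressure and graceful degradation (the systems bullet above) would live in this loop too, for example by bounding the buffer and pausing consumption when the sink falls behind.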
This job was posted by Akshay Singh from Yugen.ai.