• Locations
    • United States
    • Massachusetts, US
    • Remote
    • New York
• Date Posted
    • Oct. 27, 2021
• Function
    • Software Engineering
• Sector
    • Data

About Datadog:

We’re on a mission to build the best platform in the world for engineers to understand and scale their systems, applications, and teams. We operate at high scale—trillions of data points per day—providing always-on alerting, metrics visualization, logs, and application tracing for tens of thousands of companies. Our engineering culture values pragmatism, honesty, and simplicity to solve hard problems the right way.

The team:

The Aggregations teams are a critical part of the core Metrics group, responsible for the fast and accurate aggregation, storage, and querying of metric data. You will work with high-throughput (millions of events per second), low-latency data that enables engineers to understand more deeply how their applications behave and perform in production. On a typical day, you may research performance through detailed profiling, improve our automated accuracy validation, add support for new types of complex queries, or work with other teams who rely on the aggregations platform.

You will:

  • Build distributed, high-throughput, real-time data pipelines
  • Do it in Go and Python, with bits of C or other languages
  • Use Kafka, Redis, Cassandra, Elasticsearch and other open-source components
  • Analyze and optimize performance and efficiency
  • Continuously improve the reliability and resilience of our pipelines
  • Own meaningful parts of our service, have an impact, grow with the company

Requirements:

  • You have a BS/MS/PhD in a scientific field or equivalent experience
  • You have significant backend programming experience in one or more languages
  • You can get down to the low level when needed
  • You care about code simplicity and performance
  • You want to work in a fast, high-growth startup environment that respects its engineers and customers

Bonus points:

  • You wrote your own data pipelines once or twice before (and know what you’d like to change)
  • You’ve built high-scale systems with Cassandra, Redis, Kafka or NumPy
  • You have significant experience with Go, C, or Python
  • You have a strong background in statistics