• Locations
    • Remote
    • Poland
• Date Posted
    • Aug. 12, 2021
• Function
    • Software Engineering
• Sector
    • Fintech

Data sits at the heart of Revolut and plays a uniquely crucial role in what we do. With data we build intelligent real-time systems to personalise our product, tackle financial crime, automate reporting, track team performance and enhance customer experiences.

Fundamentally, data underpins all operations at Revolut, and being part of the team gives you the chance to have a major impact across the company. Apply today to join our world-class data department.

To tame the exponential growth in the size and complexity of our data, we are looking for the most talented and passionate engineers: great builders and great collaborators.

What you’ll be doing

  • Think in systems and explore opportunities to improve and streamline end-to-end processes by transforming abstract concepts into working solutions.
  • Partner with product owners, engineers, data scientists, and data analysts to develop a seamless data platform.
  • Design, build and launch extremely efficient and reliable data infrastructure to move data across a number of platforms including Data Lake and Data Warehouse.
  • Communicate, at scale, through multiple mediums: dashboards, bots and more.
  • Leverage data and business principles to solve large-scale web, mobile and data infrastructure problems.
  • Ensure consistent standards and quality in the data platform ecosystem.
  • Support and train new and existing users of the platform.
  • Create and maintain a company-wide data registry and documentation.
  • Define and enforce best practices end-to-end: coding, testing, deployment and maintenance.
  • Plan and implement complex platform changes that involve stakeholders across the whole organisation.

What you’ll need

  • Bachelor’s/Master’s/PhD in STEM (Mathematics, Computer Science, Physics, Engineering).
  • Fluency in SQL, Python, and Unix/bash scripting.
  • Experience in custom ETL design, implementation and maintenance.
  • Experience with workflow orchestrators, such as Airflow.
  • Experience working with Big Data/MPP analytics platforms (e.g. Exasol, Amazon Redshift, Google BigQuery, or similar).
  • Experience with TDD, CI/CD, trunk-based development.

Desired

  • Ability to write easily understandable and maintainable code in multiple programming languages.
  • Experience with public cloud systems (e.g. GCP, AWS).
  • Experience setting up infrastructure using Docker, Kubernetes, Terraform, Helm, Ansible.
  • Awareness of data quality aspects in data warehousing systems.
  • Experience with SQL performance tuning.
  • Experience with anomaly/outlier detection.
  • Experience with notebook-based data science workflows.
  • Experience querying massive datasets using Spark, Flink, Presto, Hive, etc.
  • Experience with monitoring and logging tools: New Relic, Grafana, Prometheus, ELK.
  • Experience implementing Data Mesh principles.