• Location
    • Toronto, CA
  • Date Posted
  • May 31, 2021
  • Function
  • Data Science
  • Sector
  • Security

Lookout is the leader in mobile security, protecting the device at the intersection of the personal you and the professional you. Our mission is to secure and empower our digital future in a privacy-focused world where mobile devices are essential to all we do for work and play. We’re trusted by millions of consumers, enterprises, government agencies, and partners such as AT&T, Verizon, Vodafone, Microsoft, Google, and Apple. Headquartered in San Francisco, Lookout has offices in Amsterdam, Boston, London, Sydney, Tokyo, Toronto and Washington, D.C.

Our Data Engineering team is transforming how we build products using Lookout’s unique data sets about mobile devices, applications and threats. As we continue to grow and scale, we need rock-solid engineers who love the challenge of designing and building high-performance, scalable data solutions that help Lookout protect millions of mobile users. You’ll design, develop, and test robust, scalable data platform components. You’ll work with a variety of teams and individuals, including Product Engineers, to understand their data pipeline needs and come up with innovative solutions. By collaborating with our talented team of Engineers, Product Managers and Designers, you’ll be a driving force in defining new data products and features. We are looking for someone with a strong background in software engineering, distributed data systems and ETL.

Responsibilities:

  • Design and develop the next generation of our data platform, including data streaming, batch and replay capabilities
  • Work with Engineering, Data Science, Business Intelligence and Product Management teams to build and manage a wide variety of data sets
  • Analyze technical and business requirements to determine the best technologies and approaches for solving problems
  • Identify gaps and build tools to increase the speed of analysis
  • Design, build and launch new data models and business-critical ETL pipelines
  • Fully participate in the ownership of your services and components, including on-call duties

Requirements:

  • BS/MS in Computer Science or related field/degree, and/or equivalent work experience
  • 10+ years of overall software development experience, including at least 3 years of data engineering experience
  • Experience with Spark (Batch and/or Streaming), Hive and Hadoop
  • Experience with Kafka or equivalent messaging systems at scale
  • Experience with streaming data pipelines using Spark streaming
  • Proficient in designing efficient and robust ETL workflows
  • Experience in optimizing Spark/Hive ETLs
  • Hands-on experience with GCP and services such as Dataproc, BigQuery, Bigtable, and Cloud Composer (Apache Airflow)
  • Experience building automated deployment pipelines for data infrastructure
  • Excellent communication and collaboration skills
  • Proficient in Scala and Python

Bonus Points:

  • You have built a data pipeline and the infrastructure required to deploy machine learning algorithms and real-time analytics in low-latency environments
  • Understanding of CI/CD automation and willingness to learn new CD platforms