- Date Posted: Sep. 28, 2021
- Software Engineering
We are open to remote work anywhere in North America
Lookout is an integrated endpoint-to-cloud security company. Our mission is to secure and empower our digital future in a privacy-focused world where mobility and cloud are essential to all we do for work and play. We enable consumers and employees to protect their data, and to securely stay connected without violating their privacy and trust. Lookout is trusted by millions of consumers, the largest enterprises and government agencies, and partners such as AT&T, Verizon, Vodafone, Microsoft, Google, and Apple. Headquartered in San Francisco, Lookout has offices in Amsterdam, Boston, London, Sydney, Tokyo, Toronto and Washington, D.C.
Our Data Engineering team is transforming how we build products using Lookout’s unique data sets about mobile devices, applications and threats. As we continue to grow and scale, we need rock-solid engineers who love the challenge of designing and building high-performance, scalable data solutions that help Lookout protect millions of mobile users. You’ll design, develop, and test robust, scalable data platform components. You’ll work with a variety of teams and individuals, including Product Engineers, to understand their data pipeline needs and come up with innovative solutions. By collaborating with our talented team of Engineers, Product Managers and Designers, you’ll be a driving force in defining new data products and features. We are looking for someone with a strong background in software engineering, distributed data systems and ETL.
Do you have exceptionally good coding skills in Python or Scala? Do you have strong experience with Kafka, Spark and ETL workflows? Then you could be Lookout’s Staff Data Software Engineer, with the ability to work remotely.
- Design and development of the next generation of our data platform including data streaming, batch and replay capabilities
- Working with Engineering, Data Science, Business Intelligence and Product Management teams to build and manage a wide variety of data sets
- Analyze technical and business requirements to determine the best technologies and approaches for solving problems
- Identify gaps and build tools to increase the speed of analysis
- Design, build and launch new data models and business critical ETL pipelines
- Fully participate in the ownership of your services and components, including on-call duties
- BS/MS in Computer Science or related field/degree, and/or equivalent work experience
- 10+ years of overall software development experience, including 3+ years of experience in data engineering
- Experience with Spark (Batch and/or Streaming), Hive and Hadoop
- Experience with Kafka or equivalent messaging systems at scale
- Experience with streaming data pipelines using Spark Streaming
- Proficient in designing efficient and robust ETL workflows
- Experience in optimizing Spark/Hive ETLs
- Hands-on experience with GCP and services such as Dataproc, BigQuery, Bigtable, and Cloud Composer (Apache Airflow)
- Experience building automated deployment pipelines for data infrastructure
- Excellent communication and collaboration skills
- Proficient in Scala and Python
- You have built a data pipeline and the infrastructure required to deploy machine learning algorithms and real-time analytics in low-latency environments
- Understanding of CI/CD automation and willingness to learn new CD platforms
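The requirements above center on designing efficient and robust ETL workflows. As a minimal, standard-library-only sketch of the extract-transform-load pattern such work builds on (the event fields, function names, and sample records here are illustrative, not from Lookout's actual pipeline):

```python
import json
from collections import Counter

def extract(lines):
    """Parse raw JSON event lines, skipping malformed records."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # a production pipeline would route these to a dead-letter queue

def transform(events):
    """Keep only threat events and normalize the platform field."""
    for event in events:
        if event.get("type") == "threat":
            yield {**event, "platform": event.get("platform", "unknown").lower()}

def load(events):
    """Aggregate threat counts per platform (a stand-in for a warehouse write)."""
    return Counter(e["platform"] for e in events)

# Illustrative input: two well-formed threat events, one non-threat, one bad record.
raw = [
    '{"type": "threat", "platform": "Android"}',
    '{"type": "heartbeat", "platform": "iOS"}',
    'not json',
    '{"type": "threat", "platform": "android"}',
]
counts = load(transform(extract(raw)))  # {"android": 2}
```

At scale, each stage maps onto the stack named in the requirements: extraction from Kafka topics, transformation in Spark (batch or Structured Streaming), and loading into a warehouse such as BigQuery, with Cloud Composer (Airflow) orchestrating the workflow.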