- Location: San Francisco
- Last Published: Nov. 29, 2024
- Sector: Security
- Functions: Software Engineering, Other Engineering
At Persona, we're building the first universal and comprehensive identity infrastructure to help businesses of all sizes better serve and protect their customers' identities. Our identity platform enables businesses to securely collect and manage their customers' personal information, verify that their customers are who they say they are, analyze and detect fraud and abuse, and pull sensitive reports about their customers in a privacy-centric way. In a world where consumer behaviors are changing and privacy and identity are taking on new meaning, we want to help businesses find their superpowers while putting their customers, the people, first.
About the role
We are looking for passionate engineers who are excited to build the identity layer for the internet. As an engineer on the Data Infrastructure team at Persona, you will play a key role in designing, building, and maintaining the data platform that powers our data science and analytics applications. This is a highly cross-functional role: you will work on everything from deploying new cloud infrastructure to writing data pipelines to developing new product features that better deliver analytics for our customers, all while collaborating closely with product, data science, and post-sales teams. We are a small team where every member has a high degree of ownership of our stack, working to make Persona a more data-driven organization.
What you’ll do at Persona
- Build and expand the data platform using cutting-edge technologies for the cloud data stack.
- Design and implement efficient data models to power analytics and core product workflows.
- Collaborate with stakeholders to translate business and product needs into product features.
- Identify new opportunities to use data in transformative ways across product, data science, and business teams.
What you’ll bring to Persona
- 3+ years of experience in software engineering, with a focus on data infrastructure or large-scale data systems.
- Proficiency in Python and familiarity with technologies like Kafka, Snowflake, ClickHouse, Airflow, and Apache Flink.
- Experience designing and implementing scalable ETL pipelines and working with structured and unstructured data.
- Strong understanding of data modeling, including schema design and performance optimization for analytical and operational use cases.
- Excellent communication and collaboration skills, with experience working cross-functionally with product, data science, and operational teams.
- A passion for leveraging data to drive innovation.
Nice to have
- Familiarity with React, Ruby on Rails, Kubernetes, Google Cloud Platform (GCP), MySQL, and MongoDB.