- Locations
- San Francisco
- New York
- Last Published
- Apr. 18, 2026
- Sector
- Fintech
- Functions
- Software Engineering
- Data Science
The Network Enablement team’s mission is to amplify Plaid’s network effects by fostering trust and sharing intelligence with data partners. We build Trust & Fraud Insights (real-time Protect model scoring, two-way APIs/webhooks, and investigation tooling), Bank Intelligence (ML-driven retention and account-primacy metrics with scalable batch pipelines), and the ML/data foundations (graph and sequence-embedding models plus unified feature pipelines and feature-store patterns). We own productionization and reliability for data-partner-facing ML (low-latency scoring, offline↔online parity, observability and drift detection, PII-safe handling, and auditability), and we collaborate closely with MLE, DS, Data Platform, Fraud, Foundational Modeling, Product, and Privacy to scale network intelligence.

On this team, you will build and operate the ML infrastructure and product services that enable trust and intelligence across Plaid’s network. You’ll own feature engineering, offline training and batch scoring, online feature serving, and real-time inference, so that model outputs directly power partner-facing fraud and trust products and bank intelligence features. You will integrate inference into product logic (APIs, feature flags, backend flows), build reproducible pipelines and model CI/CD, and ensure observability, reproducibility, and compliance as you scale our network capabilities. You’ll partner with Product, ML/Data Platform, Fraud, Foundational Modeling, MLE, DS, and Privacy to ship auditable, reliable ML solutions that move product KPIs.
Responsibilities
- Embed model inference into Network Enablement product flows and decision logic (APIs, feature flags, backend flows).
- Define and instrument product + ML success metrics (fraud reduction, retention lift, false positives, downstream impact).
- Design and run experiments and rollout plans (backtesting, shadow scoring, A/B tests, feature-flagged releases) to validate product hypotheses.
- Build and operate offline training pipelines and production batch scoring for bank intelligence products.
- Ship and maintain online feature serving and low-latency model inference endpoints for real-time partner/bank scoring.
- Implement model CI/CD, model/version registry, and safe rollout/rollback strategies.
- Monitor model/data health: drift/regression detection, model-quality dashboards, alerts, and SLOs targeted to partner product needs.
- Ensure offline and online parity, data lineage, and automated validation/data contracts to reduce regressions.
- Optimize inference performance and cost for real-time scoring (batching, caching, runtime selection).
- Ensure fairness, explainability, and PII-aware handling for partner-facing ML features; maintain auditability for compliance.
- Partner with platform and cross-functional teams to scale the ML/data foundation (graph features, sequence embeddings, unified pipelines).
- Mentor engineers and document team standards for ML productization and operations.
Qualifications
- Must-haves:
- Strong software engineering skills including systems design, APIs, and building reliable backend services (Go or Python preferred).
- Production experience with batch and streaming data pipelines, using processing engines such as Spark and orchestration tools such as Airflow.
- Experience building or operating real-time scoring and online feature-serving systems, including feature stores and low-latency model inference.
- Experience integrating model outputs into product flows (APIs, feature flags) and measuring impact through experiments and product metrics.
- Experience with model lifecycle and operations: model registries, CI/CD for models, reproducible training, offline & online parity, monitoring and incident response.
- Nice to have:
- Experience in fraud, risk, or marketing intelligence domains.
- Experience with feature-store products (Tecton / Chronon / Feast / internal) and unified pipelines.
- Experience with graph frameworks, graph feature engineering, or sequence embeddings.
- Experience optimizing inference at scale (Triton/ONNX/quantization, batching, caching).
Plaid is proud to be an equal opportunity employer and values diversity at our company. We do not discriminate based on race, color, national origin, ethnicity, religion or religious belief, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, military or veteran status, disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local laws. Plaid is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance with your application or interviews due to a disability, please let us know at accommodations@plaid.com.
Please review our Candidate Privacy Notice here.
190800 - 286800 USD a year
The target base salary for this position ranges from $190,800/year to $286,800/year in Zone 1. The target base salary will vary based on the job's location.
Our geographic zones are as follows:
Zone 1 - San Francisco / New York City / Seattle
Zone 2 - Los Angeles / Washington DC / Austin / Boston / Sacramento / San Diego
Zone 3 - Atlanta / Portland / Chicago / Philadelphia / Denver / Miami / Dallas / Raleigh
Zone 4 - All other US cities
The base salary range listed for this full-time position excludes commission (if applicable), equity and benefits. The pay range shown on each job posting is the minimum and maximum target for new-hire salaries. Actual pay may be higher or lower depending on factors like skills, experience, and relevant education or training.