• Location: London, United Kingdom
• Date Posted: Aug. 9, 2021
• Function: Software Engineering
• Sector: Healthcare

At Causaly we are building the biggest knowledge platform in the world to empower people working on the most pressing issues in human health. To achieve this, we are teaching computers to read all knowledge ever published and are developing an interface that allows humans to answer questions they can’t ask anywhere else.

The technology is self-developed and proprietary, powering a large Biomedical Causal Knowledge Graph. It helps researchers and decision-makers discover insights from millions of academic publications, clinical trials, patents and other data sources in minutes. Causaly is used by pharmaceutical companies in Research and Commercial departments for Drug Discovery, Safety and Competitive Intelligence.

Read how Causaly is used in Target Identification here: https://www.causaly.com/blog/ai-supported-target-i...

We are a VC-backed tech company with offices in London and Athens, looking for an experienced and driven Lead Backend Engineer to build, scale and automate our data processing and information extraction pipelines.


Responsibilities

  • Designing, creating and maintaining optimal data processing and information extraction pipelines
  • Leading and growing the team of backend engineers
  • Implementing processes supporting data transformation, data structure manipulation, metadata, dependency and workload management
  • Scaling and automating new and existing data pipelines to improve data transformation and delivery from source to production environments
  • Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
  • Working with stakeholders in the NLP/ML engineering, full-stack and knowledge engineering teams to design, build and continuously improve data processing modules
  • Ensuring information security compliance of the data pipeline operations

Requirements

  • Minimum qualifications: BSc in a related technical field
  • 5+ years’ experience working with data processing pipelines
  • Team management experience in a technical field
  • Fluency in Python and Linux
  • Strong knowledge of Elasticsearch and MySQL
  • Working knowledge of NoSQL databases (e.g. MongoDB, DynamoDB)
  • Experience with data pipeline management platforms (e.g. Airflow, Luigi)
  • Experience with unstructured and semi-structured data processing (Pipelining, Storage, ETL, Analytics, ML)
  • Experience with cloud reference architectures and developing specialized stacks on cloud services (GCP/AWS)
  • Experience with data processing engines (e.g. Spark, Beam)
  • Working knowledge of message queuing and stream processing (e.g. Kafka)
  • Working knowledge of software development best practices, e.g. testing, versioning, documentation
  • Excellent problem-solving, ownership and organizational skills, with high attention to detail and quality

Preferred qualifications:

  • Experience working with biomedical/life sciences data processing
  • Experience with Neo4j, graph database architectures
  • Experience with BigQuery

Benefits

  • Competitive salary and equity option package
  • Be part of the early team that builds a transformative knowledge product with the potential to have real impact
  • Individual training budget for professional development
  • Regular team outings
  • Annual team retreat to a secret destination
  • Easily accessible office in the heart of Angel, Islington