Trially Inc. · Data Engineer · Remote · Full time

As a Data Engineer, you will be responsible for designing, developing, and maintaining robust data ingestion pipelines to support our AI-driven clinical trial solutions. You will work with modern data engineering tools such as dbt, dlt, Apache Airflow, Docker, and Kubernetes, ensuring scalable and reliable data workflows. You will also collaborate with cross-functional teams and integrate data engineering solutions with machine learning models, helping to improve the efficiency and precision of patient recruitment.

Description

About the Company

At Trially, we leverage advanced AI to revolutionize patient recruitment in clinical trials. We utilize a modern, LLM-powered data stack to develop cutting-edge AI solutions, including tools that help patients find clinical trials they are a match for. Join us to be part of a team that addresses recruitment challenges, accelerates research, and brings life-saving treatments to market faster.


Scope of the Role

As a Data Engineer, you will be responsible for designing, developing, and maintaining robust data pipelines to support our AI-driven clinical trial solutions. You will work with modern data engineering tools such as dbt, dlt, Apache Airflow, Docker, and Kubernetes, ensuring scalable and reliable data workflows. You will integrate data pipelines with machine learning models and web application backends. Ideally, you've built a data product and you understand the resiliency and flexibility that are required of data pipelines that sit behind an evolving product, as opposed to those used only for analytics. You're excited about the possibilities that LLMs bring and have an interest in multi-agent LLM systems.


Duties and Responsibilities:

  • Design, develop, and maintain robust data pipelines
  • Ensure data pipelines are scalable, reliable, and efficient
  • Monitor and optimize the performance of data workflows
  • Work with dlt and dbt for data ingestion and transformation
  • Use Apache Airflow or Cloud Composer for orchestrating workflows
  • Implement containerization with Docker and orchestration with Kubernetes
  • Manage code via GitHub and deploy solutions on Google Cloud Platform (GCP)
  • Implement Continuous Integration/Continuous Deployment (CI/CD) practices
  • Utilize Infrastructure-as-Code (IaC) tools to manage and provision infrastructure
  • Collaborate with cross-functional teams including data scientists, software engineers, and clinical experts
  • Integrate data pipelines with machine learning models, LLMs, and NLP frameworks
  • Propose and implement improvements to existing data pipelines and infrastructure


Requirements:

  • 3+ years of production experience in data engineering roles
  • Demonstrated competence with deploying Python data infrastructure on GCP, AWS, or Azure
  • Experience with Apache Airflow or Cloud Composer for workflow orchestration
  • Proficiency with Docker
  • Experience utilizing CI/CD and Infrastructure-as-Code (IaC) tools in a production environment
  • SQL expertise
  • Strong understanding of data engineering architecture and data modeling principles
  • Experience working in a production team utilizing GitHub for version control
  • Desire to learn, grow, and sprint with our early-stage startup and our ambitious goals!


Preferred:

  • Hands-on experience with dbt and dlt for data transformation
  • Experience or strong interest in multi-agent LLM systems
  • Experience with machine learning and natural language processing
  • Experience with production data engineering and application environments in GCP
  • Comfort with AI-powered software development workflows


Salary

$80,000 - $200,000 per year