We are seeking a Senior Researcher to join our Research team. AssemblyAI’s core strengths include developing best-in-class AI models in Speech and Natural Language Processing (NLP), writing optimized inference code, and serving models at scale with low latency and high availability. This role demands expertise in both applied research and engineering in Deep Learning.

Key responsibilities include writing efficient, high-performance training code for large-scale models with billions of parameters on TPU clusters, optimizing code for both training and inference, rigorously evaluating models to ensure quality and performance, and collaborating closely with engineers on data creation, data filtering, and model deployment.

The ideal candidate will have a fine-grained understanding of deep learning research and engineering, spanning both software and hardware. As a startup, we need someone flexible and willing to work across all stages of the Speech AI model lifecycle, from data processing to model training and analysis, and to proactively expand the scope of their work with the goal of getting models into production to delight customers. They will play a pivotal role in advancing major research initiatives designed for large-scale deployment to solve real-world use cases.

What You’ll Do:

  • Train large-scale Speech AI models, including ASR and speech-focused multi-modal LLMs with billions of parameters.
  • Write and optimize training code for maximum compute and memory efficiency.
  • Stay up-to-date on the latest AI research and share insights across the company.
  • Collaborate with the technology leadership team to prioritize the research and engineering agenda, define project scopes, and lead their execution.

What You’ll Need:

  • 2+ years of professional experience in deep learning research & development.
  • Demonstrated individual-contributor experience across all aspects of deep learning model development, including data acquisition and processing, implementation, model training, experimental analysis, and writing inference and evaluation code.
  • Hands-on experience with JAX/TPUs and distributed training.
  • Excellent knowledge of GPU and TPU hardware.
  • Strong Python programming skills.
  • Strong written and verbal communication skills for technical matters.

Nice to Have:

  • Experience in training/fine-tuning (multimodal) Large Language Models.
  • Experience in LLM inference using frameworks like vLLM or JetStream.

You may be a great fit if you are:

  • Highly self-motivated and keen to have real-world impact.
  • A detail-oriented, analytical, and creative problem solver.
  • Meticulous in all aspects (code, data, modeling, …).
  • Able to work effectively in a team-oriented, collaborative environment.