As a Senior Data Engineer, you will play a pivotal role in architecting and scaling our data platforms, building robust and scalable data pipelines, and enabling data-driven decision-making across the company. You’ll work with massive, complex datasets from a variety of third-party and internal sources — driving data infrastructure and platform evolution to support both real-time and batch processing needs. In this role, you’ll not only write code but also influence the data strategy, mentor junior engineers, and collaborate cross-functionally with product, analytics, and platform teams. This role supports our Identification product: you will improve the core pipelines that power it and design new processes that enable our data science team to test and deploy new ML/AI models. The product delivered by this team is integrated into the core product stack and is a critical component of Demandbase’s account intelligence platform.
This is a high-impact individual contributor role for someone who combines deep technical knowledge with strategic thinking and a bias for action.
What You’ll Be Doing:
- Design & Architect: Lead the end-to-end design and evolution of scalable, resilient data pipelines and infrastructure, driving architecture decisions that impact the company’s data platform long-term.
- Build & Scale: Develop and optimize large-scale data processing workflows (batch and streaming), using Spark and related technologies, ingesting data from diverse internal and external sources.
- Mentor & Lead: Provide technical leadership and mentorship to mid- and junior-level engineers. Review design docs, PRs, and contribute to engineering best practices across the team.
- Improve Reliability: Build fault-tolerant, observable systems with self-healing and robust monitoring using tools like Airflow, Datadog, or equivalent.
- Collaborate: Partner with cross-functional stakeholders in Product, Analytics, and Infrastructure to ensure data architecture aligns with business needs and SLAs.
- Own & Operate: Take full lifecycle ownership of key data pipelines and integrations—from design to deployment to production support.
What We’re Looking For:
- Bachelor’s or Master’s degree in Computer Science, engineering, mathematics, or a related field
- 7+ years of experience in software/data engineering roles, with deep expertise in building and maintaining large-scale distributed data systems.
- Scala experience required. Comfort with purely functional programming is a plus.
- Strong CS fundamentals, including algorithms, data structures, and system design.
- Strong background in data modeling, performance tuning, and data integration best practices.
- Experience owning end-to-end systems, including production monitoring, incident response, and system reliability engineering.
- Proficiency in cloud-native data platforms (e.g., GCP or AWS), including managed services for analytics and orchestration.
- Familiarity with real-time data processing, streaming architectures, and event-driven design.
- Excellent verbal and written communication skills; comfortable explaining complex concepts to technical and non-technical stakeholders.
- A strong sense of ownership, initiative, and accountability.