As a data engineer on Kustomer’s Engineering team, you will be responsible for growing our business by leveraging advanced data-engineering techniques to enhance our data architecture and streamline data-ingestion processes. Your expertise will be instrumental in improving our data-driven product offerings and in providing valuable insights that drive strategic product decision-making and help us grow globally.

We believe in ownership and are looking for people driven to continuously deliver excellence across all dimensions of their role. This role requires a deep understanding of both the technical and strategic aspects of big data solutions, including data modeling, data warehousing, and data integration.

What You’ll Do:

  • Data Engineering: Write and adapt tools to classify, ingest, and reconcile data. Onboard datasets, explore data, and automate tasks using a modern data stack. Develop and implement systems for eventually consistent usage-based data tracking. Design and build solutions to efficiently transport data to customers.
  • Data Analysis/Business Intelligence: Parse, analyze, and understand datasets. Work closely with key stakeholders to identify valuable insights that can be drawn from combining our data and integrating disparate data sources. Partner with stakeholders and cross-functional teammates to surface and visualize data effectively.
  • Data Debugging: Find anomalies in datasets and debug issues relating to data availability, access, integrity, privacy and security.
  • Production Support: Provide proactive oversight of our data pipeline, handle inquiries from internal customers and resolve issues in a timely manner.
  • Best Practices: Conduct architecture, systems, and data reviews across the platform. Provide education and support to the engineering team in data architecture and design. Identify and resolve deficiencies and deviations from best practices. Participate in platform-wide data initiatives, including scalability, reliability, and cost effectiveness.

Our Tech Stack:

  • MongoDB, Redis, BigQuery, Elasticsearch, DynamoDB, Aurora, Snowflake
  • AWS, GCP
  • JavaScript/TypeScript (React/Node.js), Python, Go
  • Datadog, Coralogix, ELK

Minimum Requirements:

  • Bachelor’s degree in computer science, data engineering, or a related technical field, or equivalent practical experience
  • Deep experience with a programming language such as JavaScript/Node.js, Python and/or Golang
  • Knowledge of data structures and experience writing efficient code
  • 6+ years of experience in all or most of the following:
    • Data warehouses (BigQuery, Snowflake, or similar), data transformation tools (dbt, SQLMesh, or similar), and analytics and data visualization products (Google Data Studio or similar)
    • BigQuery integration with AWS products such as S3, Kinesis, and AWS Batch
    • Cloud environments, especially GCP and AWS
    • Administration and architecture of SQL and NoSQL data stores
    • Data management/governance, database scripting, and ETL processing
    • Data security and data privacy (PII, encryption, regulatory compliance, obfuscation)
    • Participation in data on-call rotations

Nice To Have:

  • Certification in cloud or database technologies
  • Familiarity with SOLID principles
  • Big Data management and analysis
  • Elasticsearch experience
  • Experience with infrastructure as code using Terraform