The Role

As Bevy’s first dedicated data engineer, you will be embedded in Bevy’s data analytics and insights team. You will help guide data architecture and system setup, and support continuous improvement of the organization’s ability to store and process data.

Responsibilities

  • Create and maintain testable ETL pipelines using GCP services and Python
  • Optimize data ingestion, storage, and processing architecture to meet product, business, and performance needs
  • Care deeply about data quality, privacy, and security
  • Support data scientists and product engineers
  • Proactively support continuous learning and software engineering projects
  • Examine and troubleshoot data stored in SQL and NoSQL databases
  • Support data/analytics application development and R&D efforts
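A core responsibility above is building testable ETL pipelines in Python on GCP. As a minimal sketch of what "testable" can mean here (the event schema and field names are hypothetical, not Bevy's), the transform step can be written as a pure function so it can be unit-tested without touching Pub/Sub, Dataflow, or BigQuery:

```python
from datetime import datetime, timezone


def transform_event(raw: dict) -> dict:
    """Normalize a raw event record: parse the epoch timestamp and drop empty attributes.

    A pure function with no GCP dependency, so it can be unit-tested in
    isolation; in a Dataflow job it would run inside a Map/DoFn step.
    """
    ts = datetime.fromtimestamp(raw["ts"], tz=timezone.utc)
    return {
        "event_id": raw["id"],
        "event_time": ts.isoformat(),
        # Keep only attributes that carry a value.
        "attrs": {k: v for k, v in raw.get("attrs", {}).items() if v not in (None, "")},
    }


# Testable in isolation, no emulator or cloud project required:
row = transform_event({"id": "e1", "ts": 0, "attrs": {"plan": "pro", "note": ""}})
```

Keeping transforms pure like this is what lets the pipeline's logic be covered by ordinary unit tests, with integration tests reserved for the I/O edges.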

What we’re looking for:

  • Bachelor’s degree, or substantial coursework, in Computer Science, Software Engineering, or a similar field
  • Familiarity with Hadoop ecosystem tools
  • Experience creating production data pipelines on GCP using Pub/Sub, Dataflow, BigQuery, and Python
  • Experience supporting ML projects
  • You like solving interesting, hard problems and communicating the results in accessible ways
  • You ask probing questions about software and don’t give up easily
  • You reside in North or South America. Yes, we are a distributed company, but since we are still small, we like to minimize the time zone spread within the team
  • You are an excellent communicator. In our small team, English is the official language, and you need to be able to articulate complex ideas efficiently and effectively. When people do not share an office, it is essential to pay extra attention to communication and to speak up.