Senior Data Engineer at Lexer

Product - Integrations, Melbourne, Victoria, Australia

About the role

Senior Data Systems Engineers are responsible for building Lexer's systems to process and analyse large volumes of incoming customer data at scale. They make customer data available for analysis, segmentation, and integration with other platforms.

They are experts in Spark, Python, big data engineering, data lakes, streaming data processing pipelines, workflow engines, and APIs.

Data Systems Engineers are critical for keeping data flowing into the Lexer product and making it easy for our customers to put all this data to productive use. They collaborate closely with the Data Science, Product Management, and Data Operations teams, as well as other developers in the Product team.

In this role, you will:

  • Software engineering - write code, add new features, perform maintenance, fix bugs, write tests, perform code reviews, deploy code into production, and produce technical documentation.
  • Team assistance and mentoring - pair with other developers, help others solve problems, teach others how the system works, discuss improvements in technology and processes, and work with the Product Management team to scope changes and ship projects.
  • System analysis and design - produce the detailed design of new features, analyse the existing system to establish the impact of changes, identify areas of improvement within the system, perform task breakdowns and estimations, and evaluate new technologies.
  • Support - be an advocate for the Product, quickly respond to bugs or issues, clearly communicate with the support team, be the subject matter expert on the system to help other teams work with it, and participate in the regular dev first responder roster.

For This Role, You Will Have:

  • A love for Spark and strong coding skills, in particular back-end Python programming (a minimal PySpark sketch follows this list).
  • A good understanding of how data is managed in a big data system: data lakes, data catalogs, and how and when to use different file formats (e.g. Parquet).
  • Relational database experience, from writing analytical queries in SQL to designing database schemas.
  • A command of modern software engineering practices, including automated testing, continuous delivery, and structured logging.
  • Experience building public-facing APIs, including API versioning, authentication, rate limiting, OpenAPI specifications, and designing APIs for scale.
  • Experience working with queuing systems, such as SQS, to process large volumes of data (see the consumer-loop sketch below).
  • Experience working with streaming data processing systems such as Kinesis or Kafka, and/or experience orchestrating data processing pipelines with workflow engines such as Airflow (see the DAG sketch below).
  • An interest in Machine Learning and how to incorporate it into a big data system.
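
To give a concrete flavour of the data lake work described above, here is a minimal PySpark sketch: read raw events from Parquet, roll them up per customer per day, and write the result back partitioned by date. The paths and column names are illustrative only, not Lexer's actual schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-events-rollup").getOrCreate()

# Read raw customer events from a Parquet-backed data lake (path is hypothetical).
events = spark.read.parquet("s3://example-data-lake/raw/customer_events/")

# Roll events up per customer per day for downstream segmentation.
daily = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("customer_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Partitioning by date lets analytical queries prune partitions instead of
# scanning the whole dataset.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-data-lake/curated/daily_customer_events/"
)
```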
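For the queuing bullet, a sketch of the usual SQS consumer loop with boto3: long-poll for batches, process each message, and delete only after success so failures are retried once the visibility timeout expires. The queue URL, region, and message shape are assumptions for illustration.

```python
import json

import boto3

QUEUE_URL = "https://sqs.ap-southeast-2.amazonaws.com/123456789012/example-events"
sqs = boto3.client("sqs", region_name="ap-southeast-2")

def process_event(event: dict) -> None:
    # Placeholder handler: real code would validate and route the event.
    print(event.get("customer_id"))

while True:
    # Long polling (WaitTimeSeconds) cuts down on empty receives.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        process_event(json.loads(msg["Body"]))
        # Delete only after successful processing; otherwise the message
        # becomes visible again and is retried.
        sqs.delete_message(QueueUrL := QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]) if False else \
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```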
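And for the workflow-engine bullet, a minimal Airflow DAG that could schedule a daily job like the rollup above. The DAG id, schedule, and callable are hypothetical, not an actual Lexer pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_rollup(**context):
    # Placeholder: real code would trigger the Spark rollup job for the
    # logical date Airflow passes in as context["ds"].
    print("rolling up events for", context["ds"])

with DAG(
    dag_id="example_daily_customer_rollup",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="rollup_events", python_callable=run_rollup)
```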

What Are the Perks?

  • 4 weeks annual leave, plus gifted leave between Christmas and New Year.
  • Hybrid work policy (3 days in the office).
  • Dog-friendly office.
  • Plenty of cultural events: daily trivia, weekly tastings, homemade Friday lunches, cinema nights, book club, yoga, and more.