Description
The Role:
We are looking for talented engineers to join our Data Platform team who are passionate about building large-scale data processing systems. This team owns and operates the technology behind our Data Platform. We are looking for engineers who can wrap their heads around incredibly complex challenges and come up with elegant solutions. You will be expected to apply your strong engineering and problem-solving skills to prioritise and execute on projects. Our ideal candidate has experience in both software engineering and distributed data stores.
Our team is looking for outstanding Senior Software Engineers to help us develop data storage/streaming features, tooling and automation that will scale our distributed data systems technology. Your contribution will be valuable not only to our engineering teams but also to our customers, to whom we provide a world-class service. You will be involved in high-impact initiatives whilst elevating and championing engineering best practices around distributed data systems technology and infrastructure.
The mission of this role is to deliver a reliable and resilient foundational Data Platform for services at Rokt.
Responsibilities:
- Ensure our Data Platform services run seamlessly by monitoring their performance, reliability and quality.
- Help design, build and run data platforms that support real-time workloads and streaming data flows in a microservice environment with ever-growing traffic, utilising automation.
- Drive productivity improvements through automation, optimisation and process enhancements.
- Provide guidance to other teams on data modelling and stream processing.
- Actively participate in peer code reviews.
- Foster a collaborative environment and be a good role model within the team.
Requirements:
- A strong understanding of software engineering principles and solid skills in Python, Go or similar languages
- A solid grounding in systems engineering, with proven experience in data engineering and/or software development
- A technical background in big data technologies such as Apache Kafka, Presto, Spark, Flink, Beam, Hive, HDFS, YARN or Hadoop
- Familiarity with data warehousing technologies such as Amazon Redshift
- Proficiency in at least one SQL dialect (MySQL, PostgreSQL, SQL Server, Oracle)
- A good understanding of SQL engines, with the ability to conduct advanced performance tuning
- Experience working with Apache Cassandra preferred
- Experience working with AWS and Kubernetes in production preferred
- Experience with data file formats such as Parquet, Avro and Protobuf
- Keen to push the boundaries and think outside the box
- Excellent communication skills
- Motivated, self-driven and pragmatic in a fast-paced (we truly mean fast) environment, with a strong sense of ownership and craftsmanship
- BS degree in Computer Science, a similar technical field of study, or equivalent practical experience