- Build distributed, scalable, and reliable data pipelines that ingest and process petabytes of data per day
- Work on a massively scalable segmentation engine that performs real-time segmentation with accurate counts for people-based marketing. If you enjoy advanced query-processing engines, indexing strategies, and database internals, this could be a fit
- Help lead and mentor your fellow engineers as they tackle challenging problems such as implementing SOA and Kubernetes at scale, building our new cloud platform, and migrating our data tier to GCP
- Leverage vendors and open source technologies as we complete our migration to the cloud
- Own product features from the development phase through to production deployment
- Evaluate big data technologies and prototype solutions to improve our data processing architecture
- 5+ years of experience writing and deploying production code, including at least 2 years working on a big data platform
- You love mentoring junior engineers and deploying best practices
- Extensive experience with data processing platforms such as Hadoop, Spark, Hive, and Pig
- Solid understanding of large-scale data processing systems
- Proficiency in one or more of Java, Go, C++, or Scala
- Understanding of automated QA needs for big data systems
- Experience with cloud providers such as AWS or Azure; GCP preferred