Shamrock Trading Corporation is the parent company for a family of brands in transportation services, finance and technology. Headquartered in Overland Park, KS, Shamrock has been named “Best Places to Work” by the Kansas City Business Journal every year since 2015. We also have offices in Chicago, Dallas, Laredo, Midland and Nashville.
With an average annual revenue growth of 25% since 1986, Shamrock’s success is attributed to three key factors: hiring the best people, cultivating long-term relationships with our customers and continually evolving in the marketplace.
Shamrock Trading Corporation is looking for a Data Engineer who wants to apply their expertise in data warehousing, data pipeline creation and support, and analytical reporting by joining our Data Services team. This role is responsible for gathering and analyzing data from several internal and external sources, designing a cloud-focused data platform for analytics and business intelligence, and reliably providing data to our analysts.
This role requires significant understanding of data mining and analytical techniques. An ideal candidate will have strong technical capabilities, business acumen, and the ability to work effectively with cross-functional teams.
- Work with data architects to understand current data models and build pipelines for data ingestion and transformation.
- Design, build, and maintain a framework for pipeline observation and monitoring, focusing on reliability and performance of jobs.
- Surface data integration errors to the proper teams, ensuring timely processing of new data.
- Provide technical consultation for other team members on best practices for automation, monitoring, and deployments.
- Advise the team on “infrastructure as code” best practices, building deployment processes with technologies such as Terraform or AWS CloudFormation.
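The monitoring and error-surfacing responsibilities above can be sketched in a few lines of Python. This is an illustrative pattern only, not Shamrock's actual framework; the job names (`ingest_batch`) and retry settings are hypothetical:

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def monitored(retries=2, delay_seconds=0):
    """Wrap a pipeline job: log duration, retry transient failures,
    and surface the final error so the owning team can act on it."""
    def decorator(job):
        @wraps(job)
        def wrapper(*args, **kwargs):
            for attempt in range(1, retries + 2):
                start = time.monotonic()
                try:
                    result = job(*args, **kwargs)
                    log.info("%s succeeded in %.2fs (attempt %d)",
                             job.__name__, time.monotonic() - start, attempt)
                    return result
                except Exception:
                    log.exception("%s failed on attempt %d", job.__name__, attempt)
                    if attempt == retries + 1:
                        raise  # out of retries: surface the error upstream
                    time.sleep(delay_seconds)
        return wrapper
    return decorator

@monitored(retries=1)
def ingest_batch(records):
    # Hypothetical ingestion step; a real job would write to the warehouse.
    return len(records)
```

Wrapping each job this way keeps observability (timings, failure logs) in one place, so alerting and dashboards can be layered on without touching individual pipelines.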
- Bachelor’s degree in computer science, data science or related technical field, or equivalent practical experience
- Proven experience with relational and NoSQL databases (e.g., Postgres, Redshift, MongoDB, Elasticsearch)
- Experience building and maintaining AWS-based data pipelines; technologies currently in use include AWS Lambda, Docker/ECS, and MSK
- Mid/senior-level Python development experience (Pandas/NumPy, Boto3, SimpleSalesforce)
- Experience with version control (Git) and peer code reviews
- Enthusiasm for working directly with customer teams (business units and internal IT)
Preferred but not required qualifications include:
- Experience with data processing and analytics using AWS Glue or Apache Spark
- Hands-on experience building data-lake style infrastructures using streaming data set technologies (particularly with Apache Kafka)
- Experience with data processing using Parquet and Avro
- Experience developing, maintaining, and deploying Python packages
- Experience with Kafka and the Kafka Connect ecosystem
- Familiarity with data visualization using tools such as Grafana, Power BI, Amazon QuickSight, and Excel
At Shamrock we hire bright, ambitious people and give them the tools they need to be successful. By investing in training and development, we aim to offer employees a long-term career with ongoing opportunities for advancement. Shamrock also offers a premier set of benefits for employees and their families:
- Medical: Fully paid healthcare, dental and vision premiums for employees and eligible dependents
- Financial: Generous company 401(k) contributions and employee stock ownership after one year
- Wellness: Onsite gym, jogging trail and discounted membership to nearby fitness center
- Work-Life Balance: Competitive PTO and paid leave options