We are looking for Hadoop professionals for long-term contract positions with our client in San Jose, CA and Cincinnati, OH. Please share an updated copy of your resume at firstname.lastname@example.org
- Responsible for delivery in the area of big data engineering with Hadoop, Python, and Spark (PySpark)
- Develop scalable and reliable data solutions to move data across systems from multiple sources, in both real-time and batch modes
- Apply expertise in technologies and tools such as Python, Hadoop, Spark, and AWS, along with other cutting-edge Big Data tools and applications
- Demonstrated ability to quickly learn new tools and paradigms to deploy cutting-edge solutions
- Develop deployment architecture and scripts for automated system deployment in web and cloud environments
- Create large-scale deployments using newly researched methodologies
- Work in an Agile environment
- Bachelor's degree in Computer Science or an equivalent degree in Information Technology
- Solid experience with the Hadoop ecosystem, including Hive, HDFS, Kafka, and PySpark
- 4+ years of software development experience
I also urge you to visit us at www.intgrow.com to learn more about our areas of expertise in Cloud Security, Cyber Security, Identity & Access Management, and Big Data.