Synechron is one of the fastest-growing digital, business consulting, and technology firms in the world. Specializing in financial services, Synechron combines expert domain knowledge and technical expertise with a focus on the most cutting-edge innovations, which has allowed it to reach $500+ million in annual revenue, 8,000 employees, and 18 offices worldwide. Synechron is agile enough to invest R&D in the latest technologies to help financial services firms stay at the cutting edge of innovation, yet large enough to scale any global project. Learn more at: www.synechron.com
Synechron draws on over 15 years of financial services IT consulting experience to provide systems integration expertise and technical development work in highly complex areas of financial services. This includes Enterprise Architecture & Strategy, Application Development & Maintenance, Quality Assurance, Infrastructure Management, Data & Analytics, and Cloud Computing. Synechron is one of the world's leading systems integrators for specialist technology solutions, including Murex, Calypso, Pega, and others. It also provides traditional offshoring capabilities, with offshore development centers in Pune, Bangalore, Hyderabad, and Chennai, as well as near-shoring capabilities for European banks through development centers in Serbia. Synechron's technology team works with traditional technologies and platforms such as Java, C++, and Python, as well as the most cutting-edge technologies, from blockchain to artificial intelligence. Learn more at: http://synechron.com/technology
Synechron Inc. is seeking a PySpark Developer with model development experience to join our Charlotte, NC team.
Job Description:
- The developer must have sound knowledge of Apache Spark and Python programming.
- Deep experience developing data processing tasks with PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations.
- 5+ years of experience in PySpark
- Experience in data ingestion, analysis and engineering.
- Experience in model development using Spark ML, scikit-learn, etc.
- Experience in optimizing data and model engineering code.
- Develop applications through analysis, coding, writing clear documentation, and problem resolution.
- Extensive experience required with Hadoop, Hive, Python, Pandas, NumPy, boto3, AWS Glue, Spark/PySpark, and AWS Lambda functions.