Data Engineer – 4016

Full-time, Pune
About us
We’re a diverse group of visionary innovators who provide trading and workflow automation software, high-value analytics, and strategic consulting to corporations, central banks, financial institutions, and governments. Founded in 1999, we’ve achieved tremendous growth by bringing together some of the best and most successful financial technology companies in the world. 
• Over 2,000 of the world’s leading corporations, including 50% of the Fortune 500 and 30% of the world’s central banks, trust ION solutions to manage their cash, in-house banking, commodity supply chain, trading, and risk.
• Over 800 of the world’s leading banks and broker-dealers use our electronic trading platforms to operate the world’s financial market infrastructure.
With 10,000 employees and offices in more than 40 cities around the globe, ION is a rapidly expanding and dynamic group.
At ION, we offer careers that provide many opportunities: To invent. To design. To collaborate. To build. To transform businesses and empower people around the world to do more, faster and better than before. Imagine what you can do and experience. This is where you can do your best work.
Learn more at iongroup.com


Your Role:
Responsibilities:
• Design and develop high-quality software solutions for the Risk platform
• Create new modules and data flows to meet regulatory requirements (FRTB, GMETH)
• Use Scala, Java, or Python to build the required functionality
• Implement data pipelines using big data and cloud technologies: Apache Spark, Hadoop, Microsoft Azure (Data Lake, Data Factory, Batch), and Databricks
• Design and develop microservices using Spring Boot and Azure Kubernetes Service (AKS) clusters

Required Skills, Experience & Qualifications:
• Expertise in Scala, Java, or Python
• Experience with Spark, Hadoop, and Databricks (must-have)
• Experience with any of the big three cloud platforms: Azure (preferred), AWS, or Google Cloud
• Experience building data lakes and data pipelines in the cloud
• Experience designing high-performance, high-load systems
• Understanding of the distributed Agile SDLC model
• Preferred experience:
· Spark Developer certification (an added advantage)
· Azure Data Factory and Azure Batch
· Expertise in the financial services industry
