
Oracle Advanced Engineering - Glue, AWS EMR, Redshift, PySpark, S3, Airflow - Senior

Location:  Noida
Other locations:  Anywhere in Country
Salary: Competitive
Date:  May 29, 2025

Job description

Requisition ID:  1610824

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. 

EY-Consulting – AWS Staff-Senior

The opportunity

We are looking for a skilled AWS Data Engineer to join our growing data team. This role involves building and managing scalable data pipelines that ingest, process, and store data from various sources using modern AWS technologies. You will work with both batch and streaming data and contribute to a robust, scalable data architecture that supports analytics, BI, and data science use cases. As a problem-solver with a keen ability to diagnose a client’s unique needs, you should be able to see the gap between where clients currently are and where they need to be, and build a blueprint that helps them reach their end goal.

 

Key Responsibilities:

  • Design and implement data ingestion pipelines from various sources, including on-premises Oracle databases, batch files, and Confluent Kafka.
  • Develop Python-based Kafka producers and AWS Glue jobs for batch data processing.
  • Build and manage Spark streaming applications on Amazon EMR (a minimal sketch follows this list).
  • Architect and maintain Medallion Architecture-based data lakes on Amazon S3.
  • Develop and maintain data sinks in Redshift and Oracle.
  • Automate and orchestrate workflows using Apache Airflow.
  • Monitor, debug, and optimize data pipelines for performance and reliability.
  • Collaborate with cross-functional teams including data analysts, scientists, and DevOps.
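The streaming side of these responsibilities (Kafka ingestion, Spark on EMR, a Medallion-style lake on S3) can be pictured with a minimal PySpark Structured Streaming sketch like the one below. Broker addresses, the topic, and the S3 bucket are hypothetical placeholders, and the job assumes the Spark Kafka connector package is available on the cluster.

```python
# Minimal PySpark Structured Streaming sketch: Kafka -> S3 "bronze" layer.
# Broker addresses, topic, and bucket names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import current_timestamp

spark = (
    SparkSession.builder
    .appName("kafka-to-bronze")  # name shown in the Spark/EMR UI
    .getOrCreate()
)

# Read the raw event stream from Kafka (needs the spark-sql-kafka connector on the cluster).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Keep the payload untouched for the bronze layer; add an ingestion timestamp for auditing.
bronze = (
    raw.selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")
    .withColumn("ingested_at", current_timestamp())
)

# Append to S3 as Parquet; the checkpoint location lets the query restart safely.
query = (
    bronze.writeStream
    .format("parquet")
    .option("path", "s3://example-datalake/bronze/orders/")
    .option("checkpointLocation", "s3://example-datalake/checkpoints/orders/")
    .outputMode("append")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```

Downstream silver and gold jobs would then read the bronze path, apply cleansing and business rules, and publish curated tables to sinks such as Redshift or Oracle.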

 

Required Skills and Experience:

  • Strong programming skills in Python and Spark (PySpark).
  • Hands-on experience with Amazon S3, AWS Glue, and Amazon EMR.
  • Strong SQL knowledge of Amazon Redshift and Oracle.
  • Proven experience in handling streaming data with Kafka and building real-time pipelines.
  • Good understanding of data modeling, ETL frameworks, and performance tuning.
  • Experience with workflow orchestration tools such as Apache Airflow (a minimal DAG sketch follows this list).
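To illustrate the orchestration piece, here is a rough Airflow 2.x DAG sketch that triggers an AWS Glue batch job and then submits a Redshift COPY through the Redshift Data API. The DAG id, Glue job name, cluster, table, IAM role, and schedule are all hypothetical placeholders, not details from this posting.

```python
# Minimal Airflow 2.x DAG sketch: trigger a Glue batch job, then load the result into Redshift.
# DAG id, job name, cluster, table, and IAM role are hypothetical placeholders.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def start_glue_job(**_):
    """Kick off an existing Glue batch job via boto3 (the job is assumed to be defined in Glue)."""
    glue = boto3.client("glue", region_name="ap-south-1")
    run = glue.start_job_run(JobName="daily-oracle-batch-ingest")
    print(f"Started Glue job run {run['JobRunId']}")


def load_to_redshift(**_):
    """Submit a COPY into Redshift via the Redshift Data API; cluster and paths are placeholders."""
    rsd = boto3.client("redshift-data", region_name="ap-south-1")
    rsd.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="analytics",
        DbUser="etl_user",
        Sql=(
            "COPY staging.orders "
            "FROM 's3://example-datalake/silver/orders/' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' "
            "FORMAT AS PARQUET;"
        ),
    )


with DAG(
    dag_id="daily_batch_ingestion",
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",  # nightly at 02:00 (Airflow 2.4+; older versions use schedule_interval)
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="start_glue_batch_job", python_callable=start_glue_job)
    load = PythonOperator(task_id="copy_into_redshift", python_callable=load_to_redshift)

    ingest >> load  # Glue ingestion completes before the Redshift load runs
```

In production one would typically wait on the Glue run (for example with the Amazon provider's Glue operators and sensors) rather than fire and forget; the plain boto3 calls simply keep the sketch self-contained.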

 

Nice-to-Have Skills:

  • Infrastructure as Code using Terraform.
  • Experience with AWS services like SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
  • Familiarity with AWS DataSync for file movement and the Medallion Architecture for data lakes.
  • Monitoring and alerting using CloudWatch, Datadog, or Splunk.

 

Qualifications:

  • BTech / MTech / MCA / MBA

 

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.

Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.

Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
