Apply now »

EY - GDS Consulting - AI and Data - AWS Databricks - Senior

Location:  Kochi
Other locations:  Anywhere in Country
Salary: Competitive
Date:  Mar 19, 2025

Job description

Requisition ID:  1588280

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. 

EY GDS – Data and Analytics (D&A) – AWS – Senior

We’re looking for candidates with a strong understanding of technology and data in the big data engineering space, and with proven delivery capability. This is a fantastic opportunity to be part of a leading firm and of a growing Data and Analytics team.

Role description

We are seeking a Senior AWS Data Engineer with a strong background in AWS, PySpark, and Databricks. The ideal candidate will have hands-on experience building and optimizing data pipelines, developing robust ETL workflows, and leveraging modern data engineering tools and platforms. Exposure to dbt (Data Build Tool) is a plus, as the role involves collaboration on end-to-end data transformation and modeling workflows.

Must have experience

  • Hands-on practical experience delivering system design, application development, testing, and operational stability
  • Proven experience implementing data solutions on the Databricks platform, with hands-on experience setting up Databricks clusters, working with Databricks modules, and building data pipelines that ingest and transform data from various sources into Databricks
  • Experience building pipelines using Delta Live Tables, Auto Loader, and Structured Streaming, with Databricks Workflows for orchestration
  • Deep understanding of Apache Spark, Delta Lake, DLT, and other big data technologies
  • Hands-on experience with performance tuning and data modelling; strong SQL skills, with experience querying and optimizing large datasets
  • Experience in Python, Spark, and streaming (Spark Structured Streaming, Kafka, or Kinesis) is a plus
  • Experience building metadata-driven ingestion and data quality (DQ) frameworks using PySpark is a plus
  • Manage and monitor data workflows using orchestration tools such as Apache Airflow
  • Knowledge of CI/CD workflows for data engineering projects
  • Utilize Git for version control, ensuring proper collaboration and tracking of code changes
  • Establish and follow best practices for repository management, branching, and code reviews
  • Good to have: dbt exposure, contributing to dbt transformations and assisting in setting up data modelling workflows
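To give a flavour of the "metadata-driven DQ framework" idea mentioned above, here is a minimal, hypothetical sketch in plain Python. In the actual role such rules would be applied to PySpark DataFrames on Databricks; the rule names, columns, and data below are purely illustrative, not part of this job description.

```python
# Hypothetical sketch: data quality rules expressed as metadata (data, not code),
# so new checks can be added without changing the engine. Plain dicts stand in
# for DataFrame rows to keep the example self-contained.

DQ_RULES = [
    {"column": "order_id", "check": "not_null"},          # no missing keys
    {"column": "amount",   "check": "min", "value": 0},   # no negative amounts
]

def run_dq(rows, rules):
    """Evaluate every rule against every row; return (rule, failing_row) pairs."""
    failures = []
    for rule in rules:
        col = rule["column"]
        for row in rows:
            value = row.get(col)
            if rule["check"] == "not_null" and value is None:
                failures.append((rule, row))
            elif rule["check"] == "min" and value is not None and value < rule["value"]:
                failures.append((rule, row))
    return failures

rows = [
    {"order_id": 1,    "amount": 10.0},
    {"order_id": None, "amount": 5.0},    # fails the not_null rule
    {"order_id": 3,    "amount": -2.0},   # fails the min rule
]
print(len(run_dq(rows, DQ_RULES)))  # 2 failures
```

The same pattern scales to PySpark by translating each metadata rule into a filter expression over a DataFrame, which is what keeps ingestion and quality checks configurable rather than hard-coded.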

To qualify for the role, you must have

  • BE/BTech/MCA/MBA

  • A minimum of 3 years of hands-on experience in one or more of the key areas above.

What we look for

  • A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment

  • An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals in the only integrated global transaction business worldwide

  • Opportunities to work with EY Consulting practices globally, with leading businesses across a range of industries

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.

Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.

Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
