DE-Azure DevOps Engineer- Data Fabric-GDSF02

Location: Bengaluru
Other locations: Primary Location Only
Salary: Competitive
Date: Oct 22, 2025

Job description

Requisition ID: 1648589

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity

You will join our Cloud Engineering team as a Technology Lead or Senior Technology Lead, helping deliver IT projects for our customers across the globe.

 

Your key responsibilities

  • Architect, implement, and manage CI/CD pipelines using Azure DevOps for data and analytics workloads.
  • Build and maintain CI/CD pipelines across on-premises and multi-cloud platforms (Azure, AWS, GCP), ensuring consistent and secure delivery practices.
  • Integrate DevSecOps practices, including automated testing, code quality, and vulnerability scanning (e.g., SonarQube, Checkmarx, Veracode, Fortify) within CI/CD pipelines.
  • Develop automation scripts using Bash, Python, Groovy, PowerShell, and leverage CLI tools (Azure CLI, AWS CLI) for operational tasks.
  • Maintain and optimize source control systems (Git, SVN), enforcing branching strategies and code quality standards.
  • Innovate by building independent automation solutions where existing tooling falls short.
  • Automate data workflows leveraging Azure Data Lake, Databricks, and Microsoft Fabric for scalable, secure, and high-performance data solutions.
  • Integrate and orchestrate data pipelines and analytics solutions using Microsoft Fabric (including Lakehouse, Data Engineering, and Real-Time Analytics workloads).
  • Automate deployment and management of Databricks clusters, notebooks, and jobs (a minimal sketch follows this list).
  • Support migration and modernization of legacy data platforms to Azure and Microsoft Fabric.
  • Develop infrastructure as code (IaC) for Azure resources using Terraform, ARM Templates, or Bicep.
  • Demonstrate working knowledge of cloud-native services in Azure, including PaaS, SaaS, and IaaS offerings.
  • Ensure cloud security best practices are followed, with a strong understanding of identity, access management, and network security in cloud environments.
  • Apply a comprehensive understanding of how IT operations are managed.
  • Implement data governance, security, and compliance best practices across Azure Data Lake, Databricks, and Fabric environments.
  • Monitor, troubleshoot, and optimize data pipelines and platform performance using Azure Monitor, Log Analytics, and Fabric monitoring tools.
  • Collaborate with data engineers, analysts, and business stakeholders to deliver end-to-end data solutions.
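
To give a flavour of the Databricks automation mentioned above, here is a minimal Python sketch that creates a single-task notebook job through the Databricks Jobs 2.1 REST API. The workspace URL, notebook path, and cluster sizing are illustrative assumptions, not project specifics.

    import os
    import requests

    # Hypothetical workspace URL -- replace with real values.
    DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
    TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token, assumed to be set

    def create_notebook_job(name: str, notebook_path: str) -> int:
        """Create a single-task notebook job via the Databricks Jobs 2.1 API."""
        payload = {
            "name": name,
            "tasks": [
                {
                    "task_key": "main",
                    "notebook_task": {"notebook_path": notebook_path},
                    "new_cluster": {
                        "spark_version": "14.3.x-scala2.12",
                        "node_type_id": "Standard_DS3_v2",  # Azure VM type
                        "num_workers": 2,
                    },
                }
            ],
        }
        resp = requests.post(
            f"{DATABRICKS_HOST}/api/2.1/jobs/create",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json=payload,
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["job_id"]

    if __name__ == "__main__":
        job_id = create_notebook_job("nightly-etl", "/Repos/data/etl_notebook")
        print(f"Created job {job_id}")

In practice, an Azure DevOps pipeline stage would run a script like this against each environment, with the token drawn from a secure variable group rather than a shell variable.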

 

Skills and attributes for success

  • Strong hands-on experience with Azure DevOps for CI/CD automation, release management, and environment provisioning.
  • Deep expertise in Azure Data Lake, Databricks (including Spark, Delta Lake), and Microsoft Fabric (Lakehouse, Data Engineering, Real-Time Analytics).
  • Proficiency in scripting languages: Python, PowerShell, Shell.
  • Experience with infrastructure automation tools: Terraform, ARM Templates, Bicep.
  • Knowledge of data governance, security, and compliance in cloud environments.
  • Familiarity with monitoring and observability tools: Azure Monitor, Log Analytics, Fabric monitoring (see the sketch after this list).
  • Ability to design scalable, secure, and cost-effective data architectures.
  • Capability to identify, communicate, and mitigate project risks.
  • Capability to create sustainable systems and services through automation and continuous improvement.
  • Ability to communicate effectively with customers and team members; strong analytical and problem-solving skills; flexibility to lead or support the team in day-to-day tasks.
  • Capability to design, build, and implement DevOps solutions for projects of varying complexity.
  • Capability to design and implement monitoring and observability with various monitoring tools.
  • Capability to implement SRE (Site Reliability Engineering) practices to reduce toil and to measure and optimize systems against agreed reliability targets.
  • Strong understanding of agile methodologies.
  • Capability to deliver best practices around provisioning, operations, and management of multi-cloud environments.
  • Capability to manage communication and deliverables from offshore teams.
  • Capability to assist the team in debugging and troubleshooting scripts (imperative and declarative).
  • Capability to identify software packages and solutions that meet client requirements, develop RFPs, and assist clients in evaluating proposals (business and technology fit, pricing, and support).
  • Experience designing and developing AI-infused DevOps frameworks.
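
As one concrete illustration of the observability skills above, the following Python sketch queries a Log Analytics workspace using the azure-monitor-query SDK. The workspace ID is a placeholder, and the ADFPipelineRun table is an assumption about which diagnostics are routed to the workspace.

    from datetime import timedelta

    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    # Placeholder workspace ID -- substitute your Log Analytics workspace.
    WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

    # KQL: count failed pipeline runs per hour over the queried window
    # (assumes Data Factory diagnostics are collected in this workspace).
    QUERY = """
    ADFPipelineRun
    | where Status == 'Failed'
    | summarize failures = count() by bin(TimeGenerated, 1h)
    | order by TimeGenerated desc
    """

    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

    for table in response.tables:
        for row in table.rows:
            print(row)

The same pattern extends to alerting: wire a query like this into a scheduled job or an Azure Monitor alert rule so failures surface without anyone watching a dashboard.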

 

To qualify for the role, you must have

  • BE/B.Tech/MCA with 6 to 8 years of sound industry experience
  • Cloud computing knowledge in multi-cloud environments, with Azure as the primary platform.
  • DevOps experience setting up CI/CD pipelines using Azure DevOps (a sample sketch follows this list).
  • Hands-on experience with Azure Data Lake, Databricks (including Spark, Delta Lake), and Microsoft Fabric (Lakehouse, Data Engineering, Real-Time Analytics).
  • Practical experience with Docker and Kubernetes (AKS/EKS/GKE).
  • Proficiency in PowerShell, Python, Groovy, Shell scripting, and cloud-native CLI tools.
  • Strong proficiency in IaC tools such as Terraform, ARM Templates, or Bicep.
  • Understanding of data governance, security, and compliance in cloud environments.
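
As an example of the Azure DevOps experience expected here, this Python sketch queues a pipeline run through the Azure DevOps REST API (the pipeline Runs endpoint). The organization, project, and pipeline ID are hypothetical, and authentication uses a personal access token.

    import os
    import requests

    # All identifiers below are illustrative assumptions.
    ORG = "my-org"
    PROJECT = "data-fabric"
    PIPELINE_ID = 42
    PAT = os.environ["AZDO_PAT"]  # Azure DevOps personal access token

    url = (
        f"https://dev.azure.com/{ORG}/{PROJECT}"
        f"/_apis/pipelines/{PIPELINE_ID}/runs?api-version=7.1"
    )

    # Queue a run on the main branch; PATs use basic auth with a blank username.
    body = {"resources": {"repositories": {"self": {"refName": "refs/heads/main"}}}}

    resp = requests.post(url, json=body, auth=("", PAT), timeout=30)
    resp.raise_for_status()
    run = resp.json()
    print(f"Queued run {run['id']} (state: {run['state']})")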

 

Preferred Skills

  • Microsoft Certified: DevOps Engineer Expert (AZ-400)
  • Microsoft Certified: Designing Microsoft Azure Infrastructure Solutions (AZ-305)
  • Microsoft Certified: Azure Solutions Architect Expert
  • HashiCorp Certified: Terraform Associate
  • Microsoft Certified: Azure Data Engineer Associate (DP-203)
  • Microsoft Certified: Fabric Analytics Engineer Associate
  • Databricks Certified Data Engineer Associate/Professional
  • Experience with GenAI technologies and integration with data platforms
  • Knowledge of Terraform Cloud and Ansible Tower is a plus

 

Your people responsibilities

  • Foster teamwork and lead by example
  • Participate in organization-wide people initiatives
  • Travel in accordance with client and other job requirements
  • Communicate effectively in writing and speech; writing, publishing, and conference-level presentation skills are a plus

 

Technologies and Tools

  • Cloud Platform – Azure
  • SDLC Methodologies – Agile/Scrum
  • Version Control Tools – GitHub/Bitbucket/GitLab
  • CI/CD Automation Tools – Azure DevOps/GitHub Actions
  • Data Platform – Azure Data Lake, Databricks, Microsoft Fabric
  • Container Management Tools – Docker/Kubernetes/Docker Swarm
  • Application Performance Management Tools – Prometheus/Dynatrace/AppDynamics
  • Monitoring Tools – Splunk/Datadog/Grafana
  • IaC Tools – Terraform/ARM Templates/Bicep
  • Artifact Management Tools – JFrog Artifactory/Nexus/CloudRepo/Azure Artifacts
  • Scripting – Python/Groovy/PowerShell/Shell Scripting
  • SAST/DAST – SonarQube/Veracode/Fortify
  • GitOps Tools – Argo CD/Flux CD
  • GenAI Technologies – ChatGPT, OpenAI

 

What we look for

  • Demonstrated experience in building and automating data platforms using Azure and Databricks.
  • Proven track record of implementing CI/CD for data workloads across multiple technologies, including strong use of DevOps tools and containerization.
  • Strong understanding of Microsoft Fabric and modern data architectures.
  • Experience with infrastructure automation (IaC tools + configuration management) and application automation using Azure DevOps.
  • Working experience in Azure with a solid grasp of cloud architecture strategy and cloud-related concepts.
  • Good exposure to cloud and container monitoring, logging, and troubleshooting.
  • Ability to design and conduct experiments with new technologies and approaches.
  • Ability to work collaboratively in cross-functional teams and mentor others.
  • Excellent communication, analytical, and problem-solving skills.

 

What we offer

EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland, and the UK – and with teams from all EY service lines, geographies, and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills, and insights that will stay with you throughout your career.

  • Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
  • Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
  • Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
  • Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

 

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.

Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.

Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
