DE-Cloud Data Platform Engineer-GDSN02
Job description
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
The opportunity
We are the only professional services organization with a separate business dedicated exclusively to the financial services marketplace. Join the Digital Engineering Team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. Aligned to key industry groups including Asset Management, Banking and Capital Markets, Insurance, Private Equity, Health, Government, and Power and Utilities, we provide integrated advisory, assurance, tax, and transaction services. Through diverse experiences, world-class learning, and individually tailored coaching, you will experience ongoing professional development. That’s how we develop outstanding leaders who team to deliver on our promises to all of our stakeholders and, in so doing, play a critical role in building a better working world for our people, for our clients, and for our communities. Sound interesting? Well, this is just the beginning. Because whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.
Your role will be Technology Lead or Senior Technology Lead in the Cloud Engineering team. You will be part of the team responsible for delivering IT projects for our customers across the globe.
Your key responsibilities
- Architect and manage Databricks infrastructure on Azure and AWS, ensuring scalable, secure, and cost-effective deployments.
- Establish and enforce data governance frameworks (including access controls, lineage, and compliance) across cloud and on-premises environments.
- Design and implement DataOps and MLOps pipelines for automated data ingestion, transformation, model training, deployment, and monitoring.
- Automate infrastructure provisioning using Terraform, ARM Templates, or Bicep for Databricks workspaces, clusters, and supporting resources.
- Integrate Databricks with enterprise CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins, etc.) for code, data, and model lifecycle management.
- Implement monitoring, logging, and alerting for Databricks workloads using native and third-party tools (e.g., Azure Monitor, AWS CloudWatch, Datadog).
- Optimize Databricks cluster performance and cost, including autoscaling, spot instance usage, and job scheduling.
- Ensure security best practices: manage secrets, network isolation, encryption, and compliance with organizational and regulatory standards.
- Collaborate with data engineering, analytics, and ML teams to enable seamless integration and workflow automation.
- Drive adoption of DevSecOps practices: integrate security scanning and compliance checks into DataOps/MLOps pipelines.
- Lead incident response and root cause analysis for Databricks platform issues, ensuring high availability and reliability.
- Document architecture, processes, and best practices for knowledge sharing and operational excellence.
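Several of the responsibilities above (autoscaling, spot instance usage, cost optimization) come together in the configuration submitted to the Databricks Clusters API. A minimal sketch in Python of how such a payload might be assembled, assuming illustrative node types, runtime version, and scaling bounds:

```python
"""Sketch of a cost-aware Databricks cluster config; names and values are illustrative."""

def build_cluster_config(name: str, cloud: str = "azure") -> dict:
    # Autoscaling keeps the cluster small when idle and wide under load.
    config = {
        "cluster_name": name,
        "spark_version": "14.3.x-scala2.12",  # illustrative LTS runtime
        "node_type_id": "Standard_DS3_v2" if cloud == "azure" else "m5.xlarge",
        "autoscale": {"min_workers": 2, "max_workers": 8},
        "autotermination_minutes": 30,  # shut down idle clusters to save cost
    }
    if cloud == "azure":
        # Prefer spot VMs, falling back to on-demand when capacity is unavailable.
        config["azure_attributes"] = {
            "availability": "SPOT_WITH_FALLBACK_AZURE",
            "spot_bid_max_price": -1,  # -1 = bid up to the on-demand price
        }
    else:
        config["aws_attributes"] = {
            "availability": "SPOT_WITH_FALLBACK",
            "first_on_demand": 1,  # keep the driver on an on-demand instance
        }
    return config

if __name__ == "__main__":
    import json
    print(json.dumps(build_cluster_config("etl-nightly"), indent=2))
```

The same structure is typically captured declaratively in Terraform or Bicep rather than built by hand; the sketch only shows which knobs the cost and resilience trade-offs live in.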
Skills and attributes for success
- Expertise in Databricks administration (Azure Databricks, AWS Databricks), including workspace, cluster, and job management.
- Strong knowledge of cloud platforms (Azure, AWS), including networking, IAM, and cost management.
- Proficiency in Infrastructure as Code (Terraform, ARM, Bicep) and automation scripting (Python, Bash, PowerShell).
- Experience with CI/CD tools (Azure DevOps, GitHub Actions, Jenkins) and source control (Git).
- Deep understanding of DataOps and MLOps concepts, including orchestration, versioning, and monitoring.
- Familiarity with data governance tools and frameworks (Unity Catalog, Purview, AWS Lake Formation, etc.).
- Ability to design secure, compliant, and scalable data architectures in multi-cloud environments.
- Strong troubleshooting and performance optimization skills for distributed data and ML workloads.
- Excellent communication and stakeholder management skills; ability to lead cross-functional teams and drive platform adoption.
- Experience with monitoring and observability tools (Datadog, Prometheus, Grafana, Azure Monitor).
- Knowledge of DevSecOps practices and integrating security tools into data pipelines.
- Ability to mentor and upskill teams on Databricks, DataOps, and MLOps best practices.
- Strong analytical and problem-solving skills; ability to assess risks and propose mitigation strategies.
- Experience with regulatory compliance (GDPR, HIPAA, etc.) in cloud data environments is a plus.
- Capability to identify software packages and solutions that could meet client solution requirements and to develop requests for proposal (RFPs) for these vendors; assist clients in evaluating the resulting proposals, including package capabilities, fit to business and technology requirements, and vendor pricing and support.
- Capability to design and develop AI-infused DevOps and Platform Engineering frameworks.
To qualify for the role, you must have
- BE/B.Tech/MCA with 4 to 8 years of sound industry experience
- Cloud computing knowledge in multi-cloud environments (Azure, AWS)
- Hands-on experience with Databricks administration on Azure and/or AWS, including workspace, cluster, and job management
- DevOps – setting up CI/CD pipelines using Azure DevOps, GitHub Actions, or GitLab CI
- Strong understanding of DataOps and MLOps practices, including orchestration, monitoring, and governance
- Hands-on experience with containerization, including Docker & Kubernetes (AKS/EKS/GKE)
- Scripting knowledge in PowerShell, Python, Groovy, shell scripting, and cloud-native CLIs
- Strong proficiency in IaC tools such as Terraform, ARM Templates, or Bicep
- Strong proficiency in configuration management tools such as Ansible, Chef, or Puppet
- Expertise in creating golden paths for end-to-end developer workflows
Preferred Skills
- Microsoft Certified: DevOps Engineer Expert (AZ-400)
- Microsoft Certified: Designing Microsoft Azure Infrastructure Solutions (AZ-305)
- Microsoft Certified: Azure Solutions Architect Expert
- AWS Certified DevOps Engineer – Professional
- AWS Certified Solutions Architect – Associate
- HashiCorp Certified: Terraform Associate
- GitHub Actions
- GitHub Copilot
- Good to have: knowledge of Terraform Cloud and Ansible Tower
Your people responsibilities
- Foster teamwork and lead by example
- Participate in organization-wide people initiatives
- Ability to travel in accordance with client and other job requirements
- Excellent written and oral communication skills; writing, publishing, and conference-level presentation skills a plus
Technologies and Tools
- Cloud platforms – Azure, AWS
- Data platforms – Databricks, Flink, Kafka
- SDLC methodologies – Agile/Scrum
- Version control tools – GitHub/Bitbucket/GitLab
- CI/CD automation tools – Azure DevOps/Bamboo/TeamCity/Harness/Octopus
- Container management tools – Docker/Kubernetes/Docker Swarm
- IaC tools – Terraform/ARM Templates/Bicep
- Config management tools – Chef/Puppet/Ansible
- Test automation tools – Selenium/Cucumber
- Artifact management tools – JFrog Artifactory/Nexus/CloudRepo/Azure Artifacts
- Scripting – Python/Groovy/PowerShell/shell scripting
What we look for
- Successfully demonstrated CI/CD automation skills and technologies across many relevant projects
- Strong experience with DevOps tools and technologies, including container technologies
- Proven experience in infrastructure automation (IaC plus configuration management tools) and in application automation using Azure DevOps
- Working experience in Azure and strong knowledge of cloud architecture strategy
- Good exposure to cloud and container monitoring, logging, and troubleshooting
- Strong understanding of cloud-related concepts and technologies, maintaining in-depth knowledge of the area
- Experience designing and conducting research and experiments with recently developed technologies
What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland, and the UK – and with teams from all EY service lines, geographies, and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills, and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.