Senior Data Engineer - EY wavespace Data & AI Hub
Job Description
About Us
At EY wavespace Madrid - Data & AI Hub, we are a diverse, multicultural team at the forefront of technological innovation, working with cutting-edge technologies such as generative AI, data analytics, and robotics. Our center is dedicated to exploring the future of AI and data.
What We Offer
Join our Data & AI Hub, where you will work in a vibrant, collaborative environment. You will engage directly in advanced data engineering, leveraging modern technologies to deliver innovative data solutions and turn data into actionable business insight. Our team supports your growth and development, providing access to the latest tools and resources.
Tasks & Responsibilities:
- Lead the design and execution of intelligence transformation projects, ensuring alignment with business goals.
- Drive transformation processes towards a data-centric culture, promoting best practices and innovative solutions.
- Collaborate with cross-functional teams, including risk management, business optimization, and customer intelligence, to deliver impactful data solutions.
- Architect, design, and implement robust technical solutions across various business domains, ensuring scalability and performance.
- Mentor and coach junior data engineers, fostering a culture of continuous learning and development within the team.
- Apply your extensive knowledge and experience to deliver key projects, providing strategic insights and technical guidance.
Key Requirements:
- Bachelor’s or Master’s degree in Computer Engineering, Computer Science, Data Science, Physics, Statistics, Applied Mathematics, or a related field.
- Fluent in English; proficiency in Spanish or other languages is an asset.
- Expertise in one or more object-oriented languages, such as Python, Scala, or C++.
- Deep understanding of data-modeling principles and best practices.
- Extensive experience with relational databases and fluency in SQL.
- Proven experience working with big data technologies (e.g., Hadoop, Spark, Hive).
- Experience working with Microsoft Fabric, Databricks, and/or Snowflake.
Preferred:
- Hands-on experience with ETL tools.
- Strong background in developing microservices-based architectures and the technologies that enable them (containers, REST APIs, message queues, etc.).
- Experience with team collaboration tools such as Git, Bitbucket, Jira, and Confluence.
- Solid experience with unit testing and continuous integration practices.
- Familiarity with Scrum/Agile development methodologies and the ability to lead Agile teams.
- Strong critical thinking capabilities, with the ability to see the ‘big picture’ while also diving into the details when necessary.