About this role
Job Description Summary

Designs, builds, and maintains scalable batch data pipelines and data products by integrating data from on-premise and cloud systems. Applies strong engineering practices, data modeling principles, and cloud-native technologies to deliver reliable, governed, and production-ready data solutions aligned to the Medallion architecture.

Job Description

As a Cloud Data Engineer, you will act as a key enabler in transforming raw data into trusted, curated data assets. You will work within a data squad focused on data ingestion, pipeline development, and data product delivery, leveraging AWS-native technologies and modern engineering practices.

You will dive deep into business and data requirements, understanding upstream systems and translating them into robust batch ingestion pipelines. Your work will involve integrating data from cloud platforms (AWS, Azure) into a unified data platform.

In this role, you will design and implement pipelines aligned with the Medallion Architecture (Bronze, Silver, and Gold layers), ensuring data is progressively refined, structured, and made analytics-ready. Using AWS Glue or equivalent transformation engines, you will build scalable ETL/ELT jobs with a strong focus on performance, reusability, and maintainability. You will orchestrate workflows using AWS Step Functions or an equivalent orchestration framework, ensuring proper sequencing, error handling, and operational resilience of pipelines.

Additionally, you will contribute to CI/CD automation using GitLab (SHIP-HATS), enabling seamless deployment and version control of data solutions. Infrastructure provisioning and environment management will be handled through Terraform, following strict Infrastructure-as-Code practices. You will also ensure adherence to existing governance, security, and compliance frameworks, maintaining high standards of data quality, privacy, and reliability.
Your responsibilities extend beyond pipeline development: you will contribute to data product curation, ensuring datasets are business-ready, well-documented, and consumable by downstream stakeholders such as analytics and data science teams.

Required Skills and Experience
• 4-8 years of experience in Data Engineering
• Strong hands-on experience with AWS Glue (mandatory)
• Proficiency in PySpark and SQL for building data pipelines
• Experience designing and implementing batch data ingestion pipelines
• Hands-on experience with Medallion Architecture (Bronze/Silver/Gold)
• Experience with hybrid/cloud environments (AWS, Azure)
• Experience with workflow orchestration using AWS Step Functions
• Hands-on experience with GitLab CI/CD (SHIP-HATS or similar frameworks)
• Strong experience with Terraform for Infrastructure-as-Code
• Understanding of data governance, data quality, and security practices
• Ability to build scalable, reusable, and production-grade pipelines
• Experience working with customers in different industries and understanding their specific challenges and requirements

Preferred Skills and Experience
• Bachelor's degree in Computer Science, Information Security, or a related field
• Skilled in planning, organization, analytics, and problem-solving
• Strong understanding of enterprise data governance frameworks
• Cloud certifications in AWS (Data Analytics / Solutions Architect)
• Strong communication skills and ability to work in cross-functional teams