About this role
Job Description:
* Design, develop, test, and maintain ETL data pipeline solutions to meet business, technical, and user requirements.
* Collect and refine new data sets; maintain comprehensive documentation and data mapping across multiple systems.
* Create scalable, optimized data models aligned with best practices and the organization's data architecture standards.
* Perform code reviews and testing.
* Drive optimization and testing to improve data quality, and review solution designs for data pipelines.
* Ensure proposed solutions conform to the big data architecture guidelines and roadmap.
* Ensure the reliability and accuracy of the data being processed by establishing robust monitoring, logging, and alerting to track pipeline performance and data quality.
* Apply data engineering best practices to design and build reliable data marts in the Hadoop ecosystem for planning, reporting, and analytics.
* Collaborate with cross-functional teams, including stakeholders, data scientists, and data analysts, to align the logic of key metrics and ensure code correctly reflects the latest business definitions.
* Maintain and optimize data pipelines so that all data remains up to date, accurate, and consistent.
* Store all code in a central repository with clear branching strategies and well-documented commit messages.
* Work with stakeholders across the organization to ensure smooth production deployment of data pipelines and adherence to data governance policies.
* Proactively identify and propose improvements to the data engineering process.
* Architect end-to-end solutions for insurance data modelling in the data warehouse; work with the data engineering team on acquiring data, contextualizing it for business analytics, integrating products with business processes, and implementing insurance data models.
* Act as business process owner for onboarding business users and data products onto the data platform and the data pipelines that feed dashboards and statistical models.

Mandatory Skillset:
* 10+ years of experience in data-related projects
* Strong proficiency in Python, Hadoop, Hive, Spark, and CDP
* Experience in the insurance or banking domain is an added advantage
* Stakeholder-handling experience, as this position requires working directly with business users