About this role
Key Responsibilities
• Design, develop, and maintain robust ETL/ELT pipelines to ingest data from multiple sources (databases, APIs, flat files, streaming sources).
• Build and optimize data warehouses and data lakes to support business intelligence and analytics use cases.
• Ensure data quality, integrity, accuracy, and availability through validation, monitoring, and alerting mechanisms.
• Collaborate closely with Data Analysts, Data Scientists, and Business Stakeholders to understand data requirements and deliver scalable solutions.
• Optimize data processing performance, including query tuning and pipeline efficiency.
• Implement and maintain data security, access controls, and governance standards.
• Automate workflows and data operations to improve reliability and reduce manual intervention.
• Troubleshoot and resolve data pipeline, performance, and production issues.
• Document data models, pipeline architecture, and operational processes.
• Support data migration, integration, and modernization initiatives.

Required Skills & Qualifications

Technical Skills
• Strong experience with SQL and relational databases (e.g., MySQL, PostgreSQL, SQL Server, Oracle).
• Hands-on experience with Python (preferred) or Scala/Java for data engineering tasks.
• Proven expertise in building ETL pipelines using tools such as Apache Airflow, Talend, Informatica, or similar.
• Experience with Big Data technologies (e.g., Hadoop, Spark).
• Solid understanding of data warehousing concepts, dimensional modeling, and schema design.
• Experience working with cloud platforms (AWS, Azure, or GCP), including cloud-native data services.
• Familiarity with REST APIs, data ingestion, and integration patterns.
• Knowledge of version control systems (Git) and CI/CD practices.

Soft Skills
• Strong analytical and problem-solving abilities.
• Excellent communication and stakeholder coordination skills.
• Ability to work independently and within cross-functional teams.
• Detail-oriented with a strong focus on data accuracy and reliability.

Preferred / Good-to-Have Skills
• Experience with streaming platforms (Kafka, Kinesis, Pub/Sub).
• Exposure to BI tools (Power BI, Tableau, Looker).
• Knowledge of data governance, metadata management, and data cataloging tools.
• Experience supporting machine learning or advanced analytics pipelines.

Education
• Bachelor’s degree in Computer Science, Information Technology, Engineering, Mathematics, or a related field.
• Relevant certifications in cloud platforms or data engineering are a plus.
HORIZON DIGITAL MEDIA PTE. LTD.