About this role
Role Overview
We are looking for a Data Engineer with 2–5 years of experience to support the design, development, and maintenance of scalable data platforms and pipelines. The role requires hands-on experience in data integration, ETL development, cloud technologies, and database management to support analytics and business intelligence initiatives. You will work closely with business users, analysts, and engineering teams to ensure reliable, high-quality data delivery across enterprise systems.

Key Responsibilities
• Design, develop, and maintain scalable ETL/ELT pipelines for structured and unstructured data.
• Build and optimize data workflows for ingestion, transformation, validation, and reporting.
• Develop and maintain data models, data marts, and warehouse solutions.
• Work with cloud-based and on-premises data platforms to support enterprise analytics.
• Perform data cleansing, transformation, and quality checks to ensure data accuracy and consistency.
• Monitor and troubleshoot data pipeline issues, performance bottlenecks, and system failures.
• Collaborate with data analysts, business stakeholders, and application teams to gather data requirements.
• Support automation and optimization of data processing tasks.
• Implement data governance, security, and compliance best practices.
• Prepare technical documentation, workflow diagrams, and operational procedures.
• Participate in deployment, testing, and production support activities.
• Support continuous improvement initiatives related to data engineering and platform modernization.

Required Skills
• 2–5 years of hands-on experience in data engineering or ETL development.
• Strong knowledge of SQL and relational databases such as MySQL, PostgreSQL, SQL Server, or Oracle.
• Experience with ETL/ELT tools and frameworks.
• Hands-on experience with Python, PySpark, Scala, or Java for data processing.
• Experience working with cloud platforms such as AWS, Azure, or Google Cloud Platform.
• Knowledge of data warehousing concepts and big data technologies.
• Familiarity with Apache Spark, Hadoop, Kafka, or Airflow is an advantage.
• Understanding of REST APIs, data integration, and batch/stream processing.
• Exposure to CI/CD pipelines and DevOps practices is preferred.
• Strong analytical, troubleshooting, and problem-solving skills.
• Good communication and stakeholder management skills.
• Ability to work independently and collaboratively in Agile environments.

Preferred Skills
• Experience with Snowflake, Databricks, Redshift, or BigQuery.
• Exposure to banking, financial services, telecom, or enterprise environments.
• Knowledge of data governance and security frameworks.
• Familiarity with containerization and orchestration tools such as Docker or Kubernetes.

Application Note
Interested applicants may send their CV directly to shyam@aryan-solutions.com for consideration.