The hiring company is a multinational corporation (MNC) looking for a Data Engineer to join its team and develop new solutions for its in-house data analytics platform.

about the job.
- Implementing a back-end, cloud-based data lake/data warehouse following best practices.
- Working closely with the development team to build and deliver back-end data pipeline components, following industry best practices and adhering to architectural principles.
- Designing, developing, and maintaining scalable and efficient ETL/ELT data pipelines from various internal and external data sources.
- Identifying and gathering business requirements and designing data models that ensure data quality, integrity, and performance.
- Conducting thorough testing and validation of data pipelines to ensure accuracy, reliability, and data consistency.
skills & experiences required.
- Minimum 6 years of experience in a Data Engineering role utilizing SQL and Python.
- Strong understanding of data lake/data warehouse design best practices and principles.
- Practical hands-on experience with cloud-based data services for ETL, e.g., AWS EMR, Airflow, Redshift, Glue.
- Experience deploying and managing MLOps frameworks, e.g., AWS SageMaker, ECR.
- Experience in distributed computing systems such as Spark, Hive, Hadoop.
- Proficiency with databases such as Postgres, MySQL, Oracle.
- Excellent communication skills in English - both verbal and written.
- Experience with other cloud platforms and hybrid cloud infrastructure, e.g., GCP, Azure, with an understanding of Machine Learning/Deep Learning.
- Proficiency in real-time and near real-time data streaming, e.g., Kafka, Spark Streaming.
If you believe you have the right skills, attitude, and experience, please click 'apply now' below and upload your resume. Alternatively, for a confidential chat, please contact Wendy Fung by applying directly or by email at wendy.fung@randstad.com.hk