Data Engineer (DataOps)

Makro PRO

  • Bangkok
  • Permanent
  • Full-time
We are seeking an experienced Senior DataOps Engineer to lead the design, implementation, and maintenance of our data infrastructure and pipelines. The ideal candidate will have a strong background in data engineering and DevOps principles, with a passion for automation, quality, and governance. You will act as a bridge between technical and business teams, ensuring our data platform is not only efficient and scalable but also reliable and compliant. This role is crucial for enabling data-driven decisions and accelerating the development lifecycle for all data initiatives.
Key Responsibilities
  • Design & Implement Data Platforms: Design, develop, and maintain robust, scalable data pipelines and ETL processes, with a focus on automation and operational excellence.
  • Ensure Data Quality and Governance: Implement automated data validation, quality checks, and monitoring systems to ensure data accuracy, consistency, and reliability (a minimal validation sketch follows this list).
  • Manage CI/CD for Data: Develop, own, and optimize the CI/CD pipelines for data engineering workflows, including automated testing and deployment of data transformations and schema changes.
  • Architect & Implement IaC: Use Infrastructure as Code (IaC) with Terraform to manage data infrastructure across various cloud platforms (Azure, AWS, GCP).
  • Performance & Optimization: Proactively monitor and optimize query performance, data storage, and resource utilization to manage costs and enhance efficiency.
  • Provide Technical Leadership: Mentor junior engineers and champion best practices in data engineering and DataOps methodologies.
  • Collaborate with Stakeholders: Manage communication with technical and business teams to understand requirements, assess technical and business impact, and deliver effective data solutions.
  • Strategic Design: Take a big-picture view of architectural design, conduct thorough risk assessments, and plan for future scalability and growth.
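To make the data-quality responsibility above concrete, here is a minimal, illustrative sketch of the kind of automated batch validation this role would own. The table and column names (orders, order_id, order_total) are hypothetical; in practice these rules would typically be expressed in a tool such as Great Expectations or Soda Core and wired into the CI/CD pipeline rather than hand-rolled.

```python
import pandas as pd

# Hypothetical rules for an "orders" extract; real checks would usually live
# in a dedicated data-quality tool, as noted in the requirements below.
REQUIRED_COLUMNS = {"order_id", "customer_id", "order_total", "created_at"}

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable failures; an empty list means the batch passes."""
    failures = []

    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        failures.append(f"missing columns: {sorted(missing)}")
        return failures  # remaining checks depend on these columns existing

    if df["order_id"].isna().any():
        failures.append("order_id contains nulls")
    if df["order_id"].duplicated().any():
        failures.append("order_id contains duplicates")
    if (df["order_total"] < 0).any():
        failures.append("order_total contains negative values")

    return failures

if __name__ == "__main__":
    batch = pd.DataFrame({
        "order_id": [1, 2, 2],
        "customer_id": [10, 11, 12],
        "order_total": [99.5, -5.0, 20.0],
        "created_at": pd.to_datetime(["2024-01-01"] * 3),
    })
    for problem in validate_orders(batch):
        print("FAILED:", problem)
```

A check like this would typically run as a gating step after each pipeline load, failing the run (and alerting) before bad data reaches downstream consumers.
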
Requirements
  • Experience: 3+ years of experience in data engineering, data warehousing, and ETL processes, with a significant portion of that time focused on DataOps or a similar operational role.
  • Platform Expertise: Strong experience with data platforms such as Databricks and exposure to multiple cloud environments (Azure, AWS, or GCP).
  • Data Processing: Extensive experience with Apache Spark for large-scale data processing.
  • Orchestration: Proficiency with data orchestration tools like Azure Data Factory (ADF), Apache Airflow, or similar (a minimal Airflow sketch follows this list).
  • CI/CD & Version Control: Expert-level knowledge of version control (Git) and experience with CI/CD pipelines (GitLab CI/CD, GitHub Actions).
  • IaC: Extensive hands-on experience with Terraform.
  • Programming: Strong programming skills in Python and advanced proficiency in SQL.
  • Data Modeling: Experience with data modeling tools and methodologies, specifically with dbt (data build tool).
  • AI & ML: Experience with AI-related technologies like Retrieval-Augmented Generation (RAG) and frameworks such as LangChain.
  • Data Observability: Hands-on experience with data quality and observability tools such as Great Expectations, Monte Carlo, or Soda Core.
  • Data Governance: Familiarity with data governance principles, compliance requirements, and data catalogs (e.g., Unity Catalog).
  • Streaming Technologies: Experience with stream processing technologies like Kafka or Flink.
  • BI Tools: Experience with data visualization tools (e.g., Power BI).
  • Containerization: Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
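As one concrete illustration of the orchestration skills listed above, the following is a minimal Apache Airflow DAG sketch (Airflow 2.4+ API assumed; the DAG and task names are hypothetical, and the callables are stand-ins for real extract/transform/validation steps) chaining the pattern this role would own:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical callables standing in for real pipeline steps.
def extract():
    print("pulling source data")

def transform():
    print("running dbt / Spark transformations")

def run_quality_checks():
    print("executing automated data-quality checks")

# A minimal daily pipeline: extract -> transform -> validate.
with DAG(
    dag_id="example_daily_sales_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    quality_task = PythonOperator(task_id="quality_checks", python_callable=run_quality_checks)

    extract_task >> transform_task >> quality_task
```

Placing the quality-check task last mirrors the gating pattern described under Key Responsibilities: a failed validation stops the DAG run before downstream consumers see the data.
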
