Role Overview
As a Data Engineer, you will be the technical backbone for our diverse portfolio of global clients. Unlike in an in-house role, you will lead the migration, modernization, and optimization of data architectures across different industries. You are a problem-solver who can jump into a client’s messy legacy system and transform it into a high-performing, cloud-native data platform.
Key Service-Based Responsibilities
• Client Delivery: Lead the end-to-end implementation of data solutions, from initial discovery and requirements gathering to final deployment and handover.
• Cross-Platform Engineering: Build and maintain ETL/ELT pipelines across various cloud environments (AWS, Azure, and GCP), depending on each client’s ecosystem.
• Legacy Modernization: Help clients migrate from traditional on-premises databases (SQL Server, Oracle) to modern, scalable cloud data warehouses (Snowflake, BigQuery, Databricks); a simplified migration sketch follows this list.
• Data Strategy: Advise clients on best practices for data governance, security, and cost optimization within their data infrastructure.
• Technical Documentation: Create high-quality architectural diagrams and technical handbooks to ensure client teams can maintain the systems you build.
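
To give a concrete flavor of the modernization work, below is a minimal Python sketch of moving a single table from an on-premises SQL Server to Snowflake. It assumes the pyodbc, pandas, and Snowflake connector libraries are available; the server, database, table, and credential values are hypothetical placeholders, and a real engagement would add incremental loads, validation, and retries.

# Minimal sketch of one table's migration from on-premises SQL Server to
# Snowflake. Hosts, credentials, and table names are hypothetical.
import pandas as pd
import pyodbc
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Extract from the client's legacy SQL Server (placeholder host/database).
src = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=legacy-db.client.local;DATABASE=Sales;Trusted_Connection=yes;"
)
orders = pd.read_sql("SELECT * FROM dbo.Orders", src)

# Load into the client's Snowflake account (credentials are placeholders).
dest = snowflake.connector.connect(
    account="client_account", user="etl_user", password="...",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
write_pandas(dest, orders, table_name="ORDERS", auto_create_table=True)

In practice this pattern is parameterized per table and batched, with loaded data reconciled against source row counts before cutover.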
Technical Requirements
• Cloud Proficiency: Deep expertise in at least one major provider (AWS, Azure, or GCP), with certifications preferred.
• Data Warehousing: Advanced knowledge of Snowflake, Databricks, or Amazon Redshift.
• Code Mastery: Strong proficiency in Python and advanced SQL (window functions, CTEs, performance tuning); an illustrative query follows this list.
• Modern Data Stack: Experience with dbt (data build tool), Fivetran/Airbyte, and orchestration tools like Apache Airflow; a sample DAG sketch follows this list.
• DevOps/DataOps: Experience with Git, CI/CD pipelines, and Terraform/CloudFormation for Infrastructure as Code (IaC).
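
As a rough benchmark of the SQL level expected, here is an illustrative query combining a CTE with a window function, embedded in Python as pipeline code typically does; the raw.orders table and its columns are invented for illustration.

# Illustrative CTE-plus-window-function query: the most recent order per
# customer. Table and column names are invented for this example.
RECENT_ORDERS_SQL = """
WITH ranked_orders AS (
    SELECT
        customer_id,
        order_id,
        order_date,
        ROW_NUMBER() OVER (
            PARTITION BY customer_id
            ORDER BY order_date DESC
        ) AS recency_rank
    FROM raw.orders
)
SELECT customer_id, order_id, order_date
FROM ranked_orders
WHERE recency_rank = 1;  -- keep only each customer's latest order
"""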
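Likewise, here is a minimal sketch of the orchestration pattern behind that stack: an Airflow DAG that runs, then tests, a dbt project on a daily schedule. The project path, schedule, and task layout are assumptions for illustration, not a prescribed setup.

# Minimal Airflow DAG running a hypothetical dbt project daily.
# The dbt project path and schedule are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_dbt_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/client_project && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/client_project && dbt test",
    )
    dbt_run >> dbt_test  # run tests only after models build successfully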
Preferred Qualifications
• Consulting Experience: Previous experience in a client-facing or agency environment.
• Industry Knowledge: Familiarity with data regulations (GDPR, HIPAA) relevant to regulated sectors such as finance and healthcare.