
Data Engineer

  • Senior
  • Mid-level
  • Full-remote

As a Data Engineer at DKL, you will play a critical role in designing, building, and optimizing our data infrastructure. Working alongside cross-functional teams, you’ll develop reliable data pipelines and maintain the integrity of large datasets used for analysis and reporting, directly impacting data-driven decision-making across the company.

REMOTE

100%

You will work from the location of your choice, provided you structure this in a way that is compatible with work residency in Spain. You will also need a high-bandwidth internet connection (at least 40 Mbps up/down). DKL has no physical headquarters: we take remote work very seriously, and our team is distributed across Spain and abroad.

SCHEDULE

Flexible

You will work 40 hours per week with the flexibility to organize your schedule in a way that suits you. Requirements include having sufficient overlap with the teams you collaborate with and attending dailies and occasional client meetings. We know that personal wellness is crucial for achieving optimal results.

COMPENSATION

€40-60K

Opportunities to grow and advance your career. Every year, you will have €500 allocated specifically for your educational needs.

An annual €100 Amazon gift card at Christmas.

Vacations: 23 days/year.

    The role

    Learn about your responsibilities, how you will work, and who you will work with.

    As a Data Engineer at DKL, you will be responsible for developing, operating, and maintaining scalable data architectures that support analysis, reporting, AI, and machine learning applications. Your role will involve managing ETL processes, creating and running data warehouses, and ensuring the high performance and reliability of data systems. You will collaborate closely with product owners, data scientists, and analysts to translate business requirements into effective technical solutions while maintaining data quality and accessibility. As one of the primary contributors to DKL's data infrastructure, you will ensure our data solutions are efficient, accurate, and aligned with client goals.

    Responsibilities

    Your responsibilities will encompass a wide range of tasks, including but not limited to:

    Pipelines

    Designing, building, and optimizing data pipelines to handle large volumes of data from various sources and at various frequencies, including real-time data.

    Architecture

    Developing and maintaining data warehouse architecture, ensuring scalability and performance, taking into account organizational requirements to determine the optimal architecture.

    ETL/ELT

    Implementing ETL/ELT processes to extract, transform, and load data for reporting and analytics.

    Collaboration

    Collaborating with data scientists and analysts to support machine learning workflows and advanced analytics.

    Operations

    Monitoring and troubleshooting data systems to ensure high availability and reliability.

    Governance

    Ensuring data quality and compliance with company data governance standards.

    Documentation

    Documenting data processes and infrastructure for internal use and continuous improvement.

    How will you work?

    You’ll be part of DKL’s Data team, working remotely and collaborating with data scientists, analysts, and software engineers to support DKL’s data-driven goals. Daily check-ins and regular project meetings are held online, ensuring open communication and alignment throughout each project. Our data tools include Google Cloud Platform (GCP) and Microsoft Azure for cloud services, Databricks and Snowflake for big data processing and data warehousing, and Airflow for workflow orchestration. GitHub is used for version control and collaboration, while Jira and Confluence help with project management and documentation.

    Who will you work with?

    Matías Pizarro, Data Architect & Software Architect

    With 28 years in software development and 8 years as Head of Engineering at McKinsey & Company, Matías leads our technical vision. He specializes in data engineering, AI, DevOps, and team scaling, and grew Power Solutions Tech from 2 to 200 developers in just 5 years. Matías keeps Python, Pandas, Django, FreeBSD, and Bash in his daily toolkit and is passionate about using the right tools for the job. His leadership inspires innovation and excellence across our technical teams.

    Biel Llobera, Data Architect

    As a data architect with 10+ years of experience, Biel specializes in designing and implementing large-scale data platforms that support complex analytics and data-driven decision-making. He has a strong background in building robust, scalable data pipelines, ensuring data quality, and designing systems that adapt to varied business requirements. He is proficient in industry-leading tools, including Airflow, dbt, Snowflake, and Databricks, and has extensive experience with the major cloud providers.

    What makes you a fit?

    Your qualifications

    Requirements

    Education

    Bachelor’s degree in Computer Science or a related field

    Experience

    Proven experience in data engineering, including designing and maintaining data pipelines

    Programming

    Strong Python programming and software engineering skills

    Analytics

    Strong SQL and analytical skills

    Cloud

    Proficiency with at least one of the leading cloud platforms (AWS, GCP, or Azure) and data warehousing tools (Snowflake, Databricks, Redshift, or BigQuery)

    Orchestration

    Proficiency with a workflow orchestration tool, preferably Airflow

    Governance

    Familiarity with data governance and security best practices

    Collaboration

    Excellent problem-solving skills and the ability to both work independently and collaborate with a larger team in a remote setting

    Nice-To-Have

    • Experience with data streaming technologies, such as Kafka or Kinesis
    • Experience with machine learning pipelines and MLOps
    • Experience implementing a data mesh architecture
    • Experience with functional data engineering
    • Experience with Apache Spark
    • Experience with a Data Quality framework, such as Great Expectations
    • Experience using dbt to orchestrate SQL transformations in a data warehouse
    • Cloud or data engineering certifications
    • Previous experience in a fast-paced, agile environment

    What's the first 6 months like?

    Your first six months will be structured to support your learning, integration, and progression as you settle into your role. This period aligns with our review checkpoints at 1, 3, and 6 months, ensuring you have a clear pathway to success during your probation period.

    Month 1

    Your first month will focus on onboarding and getting grounded in our data platforms, engineering practices, and team workflows. You’ll have access to comprehensive technical documentation and training resources, meet key stakeholders across data, analytics, and product teams, and start familiarizing yourself with our data architecture, pipelines, and development tools. This phase is all about building a strong foundation—setting up your local environment, understanding our deployment processes, and reviewing active projects. At the end of the month, we'll have a check-in to reflect on your experience, answer any technical or process-related questions, and ensure you have the support you need to move forward confidently.

    Months 2-3

    By month two, you'll start taking on defined responsibilities within our data engineering projects, collaborating closely with your team to plan deliverables, estimate workloads, and coordinate progress across stakeholders. During this phase, you'll begin owning smaller data pipelines or components within larger initiatives—whether that's building new data ingestion processes, optimizing existing workflows, or contributing to infrastructure improvements. This hands-on experience will help you build confidence with our tech stack and development practices. At the three-month mark, we’ll have a dedicated review to reflect on your progress, discuss any technical or operational challenges, and identify growth opportunities as you continue to deepen your impact on the team.

    Months 4-6

    With solid experience under your belt, by month four, you'll be ready to lead your own data engineering projects more independently. During this stage, you'll take ownership of end-to-end delivery—designing, building, testing, and deploying scalable data solutions that support our business needs. You'll also focus on refining your technical skills, improving system performance, and contributing to best practices within the team. The six-month review will serve as a key milestone to evaluate your overall impact, technical growth, and collaboration while closing out the probation period and setting clear goals for your continued development within the team.

    What's the selection process?

    We aim to make our selection process smooth, informative, and enjoyable, ensuring it's a two-way street where we get to know each other.

    01/

    Initial Meet & Greet

    A casual video call to introduce ourselves, discuss the role at a high level, and get to know each other's backgrounds and motivations. This call is designed to determine if we're a good mutual fit.

    02/

    Role-Focused Interview

    A more focused discussion, diving into the role's specifics and exploring key data engineering scenarios you might encounter with us. This is where we'll review some example cases, discuss your experience, and address any questions you may have about the day-to-day aspects of the role.

    03/

    Meet the Team Leads

    During this call, you'll have the opportunity to meet some of our key team leads. This conversation helps you understand the company culture, our team dynamics, and the kind of cross-functional work you'll be doing. It's also an opportunity to discuss the projects we're passionate about in more detail.

    04/

    Decision & Offer

    After the final discussion, we'll circle back with a decision. If we're a good match, we'll be excited to extend an offer and welcome you on board! If this isn't the right fit, we'll let you know and share our feedback, wishing you all the best on your career journey.

    Are you ready to take a new step in your career?

    Curious to find out more? Complete the form and send us your CV. And don't hesitate to ask questions!
