The role
Learn about your responsibilities, how you will work, and who you will work with.
As a Data Engineer at DKL, you will be responsible for developing, operating, and maintaining scalable data architectures that support analysis, reporting, AI, and machine learning applications. Your role will involve managing ETL processes, building and operating data warehouses, and ensuring the high performance and reliability of data systems. You will collaborate closely with product owners, data scientists, and analysts to translate business requirements into effective technical solutions while maintaining data quality and accessibility. As one of the primary contributors to DKL's data infrastructure, you will ensure our data solutions are efficient, accurate, and aligned with client goals.
Responsibilities
Your responsibilities will encompass a wide range of tasks, including but not limited to:
Pipelines
Designing, building, and optimizing data pipelines that handle large volumes of data from various sources and at various frequencies, including real-time data.
Architecture
Developing and maintaining data warehouse architecture, weighing organizational requirements to determine the optimal design while ensuring scalability and performance.
ETL/ELT
Implementing ETL/ELT processes to extract, transform, and load data for reporting and analytics.
Collaboration
Collaborating with data scientists and analysts to support machine learning workflows and advanced analytics.
Operations
Monitoring and troubleshooting data systems to ensure high availability and reliability.
Governance
Ensuring data quality and compliance with company data governance standards.
Documentation
Documenting data processes and infrastructure for internal use and continuous improvement.
How will you work?
You’ll be part of DKL’s Data team, working remotely and collaborating with data scientists, analysts, and software engineers to support DKL’s data-driven goals. Daily check-ins and regular project meetings are held online, ensuring open communication and alignment across projects. Our data tools include Google Cloud Platform (GCP) and Microsoft Azure for cloud services, Databricks and Snowflake for big data processing and data warehousing, and Airflow for workflow orchestration. GitHub is used for version control and collaboration, while Jira and Confluence help with project management and documentation.
Who will you work with?
Matías Pizarro
Data Architect & Software Architect
With 28 years in software development and 8 years as Head of Engineering at McKinsey & Company, Matías leads our technical vision. He specializes in data engineering, AI, DevOps, and team scaling, and has grown Power Solutions Tech from 2 to 200 developers in just 5 years. Matías keeps Python, Pandas, Django, FreeBSD, and Bash in his daily toolkit and is passionate about using the right tools for the job. His leadership inspires innovation and excellence across our technical teams.
Biel Llobera
Data Architect
As a data architect with 10+ years of experience, Biel specializes in designing and implementing large-scale data platforms that support complex analytics and data-driven decision-making. He has a strong background in building robust, scalable data pipelines, ensuring data quality, and designing systems that meet varied business requirements. He is proficient in industry-leading tools, including Airflow, dbt, Snowflake, and Databricks, and has extensive experience with the major cloud providers.
What makes you a fit?
Your qualifications
Requirements
Education
Bachelor’s degree in Computer Science or a related field
Experience
Proven experience in data engineering, including designing and maintaining data pipelines
Programming
Strong Python programming and software engineering skills
Analytics
Strong SQL and analytical skills
Cloud
Proficiency with at least one of the leading cloud platforms (AWS, GCP, or Azure) and data warehousing tools (Snowflake, Databricks, Redshift, or BigQuery)
Orchestration
Proficiency with a workflow orchestration tool, preferably Airflow
Governance
Familiarity with data governance and security best practices
Collaboration
Excellent problem-solving skills and the ability to both work independently and collaborate with a larger team in a remote setting
What's the first 6 months like?
Your first six months will be structured to support your learning, integration, and progression as you settle into your role. This period aligns with our review checkpoints at 1, 3, and 6 months, ensuring you have a clear pathway to success during your probation period.
What's the selection process?
We aim to make our selection process smooth, informative, and enjoyable, ensuring it's a two-way street where we get to know each other.
Initial Meet & Greet
A casual video call to introduce ourselves, discuss the role at a high level, and get to know each other's backgrounds and motivations. This call is designed to determine if we're a good mutual fit.
Role-Focused Interview
A deeper discussion, diving into the role's specifics and exploring key data engineering scenarios you might encounter with us. This is where we'll review some example cases, discuss your experience, and address any questions you may have about the day-to-day aspects of the role.
Meet the Team Leads
During this call, you'll have the opportunity to meet some of our key team leads. This conversation helps you understand the company culture, our team dynamics, and the kind of cross-functional work you'll be doing. It's also an opportunity to discuss the projects we're passionate about in more detail.
Decision & Offer
After the final discussion, we'll circle back with a decision. If we're a good match, we'll be excited to extend an offer and welcome you on board! If this isn't the right fit, we'll let you know and share our feedback, wishing you all the best on your career journey.