Design and evolve scalable infrastructure for ingesting, processing, and delivering large volumes of data.
Enhance existing frameworks and pipelines to optimize performance, reliability, and cost-efficiency.
Implement and maintain data governance practices to ensure data remains accessible and trustworthy.
Transform raw datasets into clean, usable formats for analysis, modeling, and reporting.
Investigate and resolve complex data issues, ensuring system accuracy and resilience.
Maintain high standards of code quality, testing, and documentation, with a focus on reproducibility and observability.
Stay up to date with emerging trends and technologies to continuously improve engineering practices.
Degree in Computer Engineering, Data Engineering, or a related technical field.
Proven experience in data engineering or backend development, preferably in cloud-native environments.
Proficiency in Python and SQL.
Strong experience with distributed processing frameworks such as Apache Spark.
Solid knowledge of cloud platforms (AWS and GCP).
Analytical thinking and strong problem-solving skills.
Ability to work both autonomously and collaboratively, balancing hands-on work with technical leadership.
Advanced or fluent English.