We are looking for a highly skilled Data Software Engineer to join our team and contribute to the creation of a secure and innovative document flow solution hosted on AWS.
Collaborating with a seasoned team of professionals, you will enhance an end‑to‑end information lifecycle solution by leveraging cutting‑edge technologies such as AWS Glue, Athena, and Apache Spark.
Your efforts will be directed towards ensuring a scalable, efficient, and reliable system that revolutionizes digital document management for a global audience.
Responsibilities
- Design, develop, and implement data pipelines and workflows using AWS Glue and related technologies
- Build scalable and efficient data models with Athena and S3, catering to reporting and analytics needs
- Develop ETL processes leveraging Apache Spark to handle high‑volume data workloads
- Collaborate with BI analysts and architects to improve Business Intelligence and analytics workflows
- Optimize performance and cost‑efficiency of cloud solutions by utilizing fully managed AWS services
- Maintain CI/CD pipelines to ensure seamless integration and delivery of solutions
- Monitor system performance, reliability, and cost efficiency using modern observability tools
- Support reporting dashboard development by providing timely and accurate data models
- Write high‑quality code, adhering to best practices for testing and documentation
- Diagnose and resolve issues with data workflows to maintain system reliability
Requirements
- 2+ years of professional experience in data engineering or software development with an emphasis on AWS services
- Proficiency in AWS Glue, Amazon Athena, and foundational AWS tools like S3 and Lambda
- Expertise in Apache Spark, coupled with a background in large‑scale data processing systems
- Competency in BI process analysis, ensuring effective collaboration with analytics teams
- Familiarity with SQL, including crafting complex queries for data extraction and transformation
- Understanding of data lake and ETL architecture methodologies for scalable data solutions
- Knowledge of CI/CD pipelines and competency in incorporating data workflows into deployment systems
- Flexibility to use additional tools such as Amazon Kinesis, Apache Hive, or Elastic Kubernetes Service
- Strong communication skills in English, with a minimum proficiency level of B2
Nice to have
- Experience working with Amazon Elastic Kubernetes Service (EKS) for orchestrating containerized applications
- Familiarity with Amazon Kinesis for facilitating real‑time data streaming and event processing
- Understanding of Apache Hive applications in data warehousing environments
- Background in optimizing BI toolset operations to enhance platform efficiency
- Proficiency in Java or Node.js to extend data processing capabilities
We offer
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award‑winning culture recognized by Glassdoor, Newsweek and LinkedIn
Job function
- Engineering, Information Technology, and Business Development
Industries
- Software Development, IT Services and IT Consulting, and Venture Capital and Private Equity Principals