Data Engineer II
AudienceView is a global leader in live‑events technology, empowering venues worldwide with innovative software solutions that drive ticket sales, advertising, and attendee engagement.
Our team shares a common vision: to deliver exceptional experiences for people who love live events through technology, media brands, and dedicated expertise.
Why You’ll Want to Work With Us
As a Data Engineer II you’ll be a key technical contributor responsible for building, optimizing, and maintaining the data infrastructure that powers our analytics and business intelligence capabilities.
You will bring strong expertise in Azure cloud services, Databricks, and modern data platforms, and you will work closely with analytics, BI, and engineering teams to deliver reliable, high‑quality data pipelines and insights.
What You’ll Do
- Design, build, and maintain scalable data pipelines using Azure Data Factory, Databricks, and Synapse to support analytics and reporting needs.
- Develop and optimize data transformation logic using Python, PySpark, and SQL, ensuring performance, reliability, and data quality.
- Optimize Spark jobs and Databricks workflows for performance and cost‑efficiency, applying best practices for distributed data processing.
- Work with Azure services including ADLS Gen2, Key Vault, Event Hubs, and other data‑focused services to build robust data infrastructure.
- Manage and maintain Databricks Hive metastores, and contribute to Unity Catalog implementation and modern Databricks features such as Metric Views and structured streaming.
- Process and transform JSON messages from Kafka and other streaming sources, ensuring reliable data ingestion.
- Collaborate with analytics teams to understand data requirements and trace data lineage from Power BI reports through semantic models, Synapse, and transformation code back to raw data sources.
- Maintain code and projects in git repositories using VS Code, adhering to version control best practices including branch management and working with Power BI projects in PBIP format.
- Document work in progress and completed tasks using Azure DevOps (Kanban boards, wiki), ensuring clear communication and knowledge sharing across the team.
- Evaluate and contribute to the adoption of new Azure services and platforms such as Fabric/OneLake as the team explores enhancements to the data architecture.
- Collaborate with the Senior Data Engineer and broader team to identify and implement improvements to data pipelines, ingestion architecture, and overall data platform capabilities.
What You’ll Need
- 5+ years of experience as a Data Engineer, with strong hands‑on expertise in Azure cloud services and modern data platforms.
- Deep experience with Azure Databricks, including Spark optimization and working with Hive metastores.
- Proficiency in Spark optimization techniques for performance tuning and cost management in distributed data processing environments.
- Hands‑on experience with Azure Data Factory for building and orchestrating data pipelines.
- Experience with Azure Synapse for data warehousing and analytics workloads.
- Strong SQL skills for data manipulation, transformation, and analysis.
- Proficiency in Python and PySpark for data engineering tasks, transformations, and pipeline development.
- Experience with Azure ADLS Gen2, Key Vault, and Azure DevOps (Kanban boards, wiki, branch management).
- Experience working with JSON messages produced by Kafka or similar streaming platforms.
- Working knowledge of Power BI, including the ability to trace data lineage from reports through semantic models back to data sources.
- Comfort with VS Code and git for version control, including experience managing Power BI projects in PBIP format and adhering to collaborative development processes.
- Problem‑solving mindset with the ability to diagnose complex data issues, QA, troubleshoot pipeline failures, and optimize performance.
- High attention to detail, ensuring data quality, accuracy, and reliability across all pipelines and transformations.
- Strong collaboration and communication skills, including comfort with documentation, clear status updates, and working within processes that support team collaboration.
- Self‑awareness and judgment about when to work independently and when to seek help, recognizing that both are critical to success.
- Ability to work independently while also thriving in a collaborative team environment.
Nice to Have
- Familiarity with Unity Catalog and recent Databricks features (Metric Views, structured streaming).
- Familiarity with Event Hubs or similar streaming/ingestion services, as the team evaluates alternatives to current ingestion architecture.
- Awareness of Microsoft Fabric/OneLake, as these platforms are under consideration for future adoption.
- Bachelor’s degree in computer science, engineering, information systems, or a related technical field.
- Experience in the ticketing or live event industry.
Benefits
- Global leader in live events technology with a diverse client base across sports, music, and performing arts.
- Remote‑first company with flexible work schedules, uncapped vacation, and a generous sick‑leave policy.
- Competitive salaries, excellent benefits, and a culture that champions inclusion and diverse perspectives.
- Opportunity to work across innovative cloud services, data platforms, and cutting‑edge analytics.
How We Hire
- Screening & resume review.
- Hiring Manager interview (60 min).
- Meet the team (60 min).
- Final interview (60 min) – a technical case study to showcase your data engineering skills.
EEO Statement
Diversity and inclusion have always been at the core of our values at AudienceView.
A diverse workforce with wide perspectives and creative ideas benefits our clients, the communities where we operate, and all of us as colleagues.
We welcome applications from qualified individuals from all backgrounds and encourage applications from people with disabilities.
Accommodations are available on request for candidates taking part in all aspects of the recruitment process.