Key Responsibilities:
- Design and implement data pipelines using Azure data services
- Develop, optimize, and maintain ETL/ELT processes
- Work with large datasets to ensure data quality, integrity, and performance
- Integrate data from multiple sources into Azure data platforms
- Collaborate with data scientists, analysts, and business stakeholders to understand and deliver on their data requirements
- Monitor, troubleshoot, and optimize data workflows
- Ensure data security, governance, and compliance standards are met
- Automate data workflows using scripting and orchestration tools
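The automation responsibility above typically means wrapping pipeline steps in retry-and-alert logic before handing them to an orchestrator. A minimal sketch of that pattern, using only the standard library; `run_with_retries` and `flaky_extract` are hypothetical names, not part of any Azure SDK:

```python
import time

def run_with_retries(step, max_attempts=3, delay_seconds=0):
    """Run one pipeline step, retrying on failure -- a common orchestration pattern."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise  # surface the error to the orchestrator after the final attempt
            time.sleep(delay_seconds)

# Hypothetical extract step: fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source error")
    return ["row1", "row2"]

rows = run_with_retries(flaky_extract)
```

In practice the retry policy usually lives in the orchestrator itself (for example, an Azure Data Factory activity's retry settings) rather than in application code, but the logic is the same.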
Required Skills:
- Strong experience with Azure Data Factory (ADF)
- Hands-on experience with Azure Data Lake and Azure Synapse Analytics
- Proficiency in SQL and data modeling
- Experience with Python or PySpark
- Knowledge of ETL/ELT processes and data warehousing concepts
- Experience with Azure Databricks
- Familiarity with REST APIs and data integration techniques
- Understanding of CI/CD pipelines and DevOps practices
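The ETL/ELT and data-quality skills above boil down to an extract–transform–load loop. A minimal, self-contained sketch under stated assumptions: the in-memory `source_rows` stands in for a dataset copied from a source system, deduplication stands in for the data-quality step, and SQLite stands in for a warehouse such as Azure Synapse:

```python
import sqlite3

# Extract: rows from a hypothetical source system.
source_rows = [
    ("2024-01-01", "widget", 3),
    ("2024-01-01", "widget", 3),   # duplicate to be removed in the transform step
    ("2024-01-02", "gadget", 5),
]

# Transform: deduplicate -- a typical data-quality step before loading.
clean_rows = sorted(set(source_rows))

# Load: write to a warehouse table (SQLite stands in for Synapse here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (sale_date TEXT, product TEXT, qty INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean_rows)
total = conn.execute("SELECT SUM(qty) FROM sales").fetchone()[0]
```

At production scale the same shape is expressed in PySpark (`spark.read` → `dropDuplicates` → `write`) and scheduled from Azure Data Factory or Databricks rather than run as a script.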