Experience: 10+ years
Location: 100% Remote (must be ready to work EST hours)
Must Have: Azure
The Senior Azure Data Engineer will be responsible for:
- Designing and developing data pipelines using Azure services such as Azure Data Lake, Data Factory, Databricks, Synapse, and SQL Server.
- Leading a team of Azure Data Engineers; owning technical delivery and advising on related technical challenges.
- Developing data transformations using ADF functionality and Databricks Python processing.
- Working with external on-prem partners to bring data into cloud environments.
- Designing and maintaining data flows and schemas.
- Working with the Data Architect to translate functional specifications into technical specifications.
- Partnering with data analysts, product owners, and data scientists to better understand requirements, solution designs, bottlenecks, and resolutions.
- Supporting and enhancing data pipelines and ETL across heterogeneous sources.
- Working with other internal technical personnel to troubleshoot issues and propose solutions.
- Supporting compliance with data stewardship standards and data security procedures.
- Applying proven communication and problem-solving skills to resolve support issues as they arise.
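For candidates unfamiliar with the role's day-to-day work, a minimal sketch of the kind of Python processing module described above is shown below. In practice this logic would operate on Spark DataFrames inside a Databricks notebook invoked from an ADF pipeline; here plain dicts stand in for Spark rows so the example is self-contained, and all function and column names are hypothetical.

```python
# Hypothetical sketch of a Databricks-style Python processing module.
# Plain dicts stand in for Spark DataFrame rows so the example runs anywhere.

def clean_orders(rows):
    """Normalize raw order records: trim IDs, coerce amounts, drop invalid rows."""
    cleaned = []
    for row in rows:
        order_id = str(row.get("order_id", "")).strip()
        try:
            amount = float(row.get("amount", 0))
        except (TypeError, ValueError):
            continue  # skip rows whose amount cannot be parsed
        if order_id and amount >= 0:
            cleaned.append({"order_id": order_id, "amount": round(amount, 2)})
    return cleaned


if __name__ == "__main__":
    raw = [
        {"order_id": " A-100 ", "amount": "19.99"},
        {"order_id": "", "amount": "5.00"},       # dropped: missing ID
        {"order_id": "A-101", "amount": "oops"},  # dropped: bad amount
    ]
    print(clean_orders(raw))
```

An ADF pipeline would typically parameterize such a module with source and sink paths and schedule it alongside the copy activities that land the raw data.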
Requirements:
- Overall 10+ years' experience as a Senior/Lead Data Engineer designing and developing big data pipelines using the Hadoop ecosystem and/or cloud platforms.
- 5+ years' experience with the Azure ecosystem is a must: Azure Data Lake, Data Factory, Databricks, Azure Functions, Azure SQL Data Warehouse.
- Experience working with Azure Synapse.
- Experience with Microsoft SSIS, including developing SSIS packages to extract data from on-prem sources such as SAP.
- Experience using Databricks to develop Python processing modules and integrate them with ADF pipelines.
- Knowledge of design strategies for building scalable, resilient, always-on data lakes.
- Programming: Python, Spark, or Java; Python highly preferred.
- Query languages: SQL, Hive, Impala, Drill, etc.; SQL highly preferred.
- Ability to transform data using data mapping and data processing tools such as Python, SQL, and Spark SQL.
- Strong development and automation skills; must be very comfortable reading and writing Python, Spark, or Java code.
- Ability to expand and grow data platform capabilities to solve new data problems and challenges.
- Ability to adapt conventional big data frameworks and tools to the use cases required by the project.
- Experience with agile (Scrum) development methodology.
- Excellent interpersonal and teamwork skills.
- Ability to work in a fast-paced environment and manage multiple simultaneous priorities.
- A can-do attitude toward problem solving, quality, and execution.
- Bachelor's or Master's degree in Computer Science or Information Technology is desired.
Nice to have:
- Experience with the Theobald connector for data extraction.