Senior Data Engineer (immediate role)

Remote
Full Time
Experienced
Responsibilities
  • Able to participate in business discussions and assist in gathering data requirements. Strong analytical and problem-solving skills to help address data challenges.
  • Proficiency in writing complex SQL queries for data extraction, transformation, and analysis. Knowledge of SQL functions, joins, subqueries, and performance tuning. Able to navigate source systems with minimal guidance to understand how data is related, using techniques such as data profiling to gain a better understanding of the data. Hands-on experience with Spark SQL/PySpark.
  • Hands-on experience creating and managing data pipelines using Azure Data Factory. Understanding of data integration, transformation, and workflow orchestration in Azure environments.
  • Knowledge of data engineering workflows and best practices in Databricks. Able to understand existing templates and patterns for development. Hands-on experience with Unity Catalog and Databricks Workflows.
  • Proficiency in using Git for version control and collaboration in data projects. Ability to work effectively in a team environment, especially in agile or collaborative settings.
  • Clear and effective communication skills to articulate findings and recommendations to other team members. Ability to document processes, workflows, and data analysis results effectively.
  • Willingness to learn new tools, technologies, and techniques as the field of data analytics evolves. Adaptability to changing project requirements and priorities.
Skills
  • 7+ years of overall experience, including 5+ years of expertise in Azure technologies; Azure certification is mandatory.
  • Azure Databricks, Data Lakehouse architectures, and Azure Data Factory.
  • Expertise in optimizing data workflows and predictive modeling.
  • Designing and implementing data pipelines using Databricks and Spark.
  • Expertise in batch and streaming data solutions, automating workflows with CI/CD tools like Jenkins and Azure DevOps, and ensuring data governance with Delta Lake.
  • Spark, PySpark, Delta Lake, Azure DevOps, Python.