Minimum 3-5 years of experience in a Support Operations/Engineering role, with knowledge of Azure Databricks.
Hands-on experience with Databricks (using PySpark and Spark SQL), data integration services, and monitoring.
Skilled at creating operations documentation.
At least 2 development/operations support projects delivered with the above skill set.
Knowledge of Power BI and DAX is an added advantage.
Coding skills in Scala or Python, plus PySpark.
Good understanding of Azure (ADF, Databricks, ADLS); technical expertise in building ETL data pipelines and extracting data from different sources such as Azure storage, PostgreSQL, and structured and unstructured files.
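The extract-transform-load work described above can be sketched as follows. This is a minimal illustration in plain Python, standing in for an ADF/Databricks pipeline; the field names and inline CSV data are assumptions for the example only.

```python
# Minimal ETL sketch: extract CSV records, transform/validate them,
# and collect the clean result. In a real pipeline the source would be
# Azure storage or PostgreSQL and the sink would be ADLS or a table.
import csv
import io

# Extract: parse a CSV source (inline here purely for illustration).
raw = io.StringIO("id,amount\n1,10.5\n2,abc\n3,4.0\n")
rows = list(csv.DictReader(raw))

# Transform: cast types and drop malformed records.
clean = []
for r in rows:
    try:
        clean.append({"id": int(r["id"]), "amount": float(r["amount"])})
    except ValueError:
        continue  # a real pipeline might quarantine bad rows instead

# Load: here we just print; a real job would write to the target store.
print(clean)  # [{'id': 1, 'amount': 10.5}, {'id': 3, 'amount': 4.0}]
```

The try/except validation step is where a production pipeline would add logging and dead-letter handling for rejected records.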
Writing data transformations and actions using Spark (RDD, DataFrame, Dataset, Spark SQL).
Good understanding of version control (Git, GitHub, Azure DevOps) and DevOps practices.
Good understanding of SQL (able to write optimized SQL queries, design database schemas, and apply normalization).
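The SQL skills above can be illustrated with a short sketch. SQLite (via Python's standard library) stands in here for PostgreSQL, and the schema and data are assumptions for the example; the normalization and indexing ideas carry over.

```python
# Sketch: a normalized two-table schema, an index on the join column,
# and an aggregate query using an explicit JOIN.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized schema: orders reference customers by id rather than
# repeating customer attributes on every order row.
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    amount REAL NOT NULL
);
-- Index on the join/filter column so lookups avoid a full table scan.
CREATE INDEX idx_orders_customer ON orders(customer_id);
""")

cur.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Ada"), (2, "Grace")])
cur.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(1, 10.0), (1, 15.0), (2, 7.5)],
)
conn.commit()

# Aggregate per customer with an explicit JOIN rather than a correlated
# subquery, which is generally cheaper on large tables.
cur.execute("""
SELECT c.name, SUM(o.amount) AS total
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.name
ORDER BY c.name
""")
totals = cur.fetchall()
print(totals)  # [('Ada', 25.0), ('Grace', 7.5)]
```

On PostgreSQL the same pattern applies, with `EXPLAIN ANALYZE` used to confirm the index is actually chosen by the planner.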
Good to have: Java, shell scripting, Linux commands.