The key to certification success is having the current exam requirements and training resources, which gives you the best chance of passing. This guide reviews the Microsoft exam structure, the program's role-based nature, and the material provided.
As with many exams, Microsoft certifications are not all that difficult if you are well prepared. I have written a few over the last couple of years (seven, to be exact), which shows that anyone can do this with preparation. This article presents the five most essential tips that have helped me in my certification journey.
Keeping up to speed with the pace of change in cloud-based computing is one of the most challenging tasks for technical resources. This article covers the Azure Updates site, which lists resources to keep you informed about not only Azure SQL but also other Microsoft cloud services.
As of June 21, 2019, the exams Exam DP-200: Implementing an Azure Data Solution and Exam DP-201: Designing an Azure Data Solution have changed. The exams look to be more focused, in order to differentiate the data engineer role from the database administrator role.
For DBAs who use SQL for data discovery, the move to data science can involve a brand-new set of varied tools and technologies. This article is a walkthrough of setting up the tooling to do some data discovery using Python. By setting up your workflow with GitHub, VSCode, and Python, you will have the basic architecture in place for data exploration.
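As a minimal sketch of the kind of first data-discovery step such a workflow enables, the snippet below profiles a small dataset with pandas. The sales data is synthetic and stands in for a real SQL result set; it is an illustration, not the article's walkthrough.

```python
# Minimal data-exploration sketch in Python (pandas assumed installed).
# The DataFrame below is synthetic, standing in for rows queried from SQL.
import pandas as pd

df = pd.DataFrame({
    "region": ["East", "West", "East", "West"],
    "sales": [100, 150, 200, 50],
})

# Quick profile: shape, column types, and summary statistics.
print(df.shape)        # (4, 2)
print(df.dtypes)
print(df.describe())

# Group-level aggregation, a common first discovery step.
totals = df.groupby("region")["sales"].sum()
print(totals)
```

For a DBA, `groupby(...).sum()` plays the same role as `GROUP BY` with `SUM()` in SQL, which makes pandas a natural first stop when moving from T-SQL to Python.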
This post covers the automation and creation of an Azure Resource Group, Blob storage, Azure SQL Server, and Azure Analysis Services as a PaaS offering, using Azure Automation PowerShell Runbooks.
This article reviews a major trap you can fall into when using PowerShell in your Runbooks, because Azure Automation does not keep your referenced modules up to date.
This article reviews the process of using Azure Data Factory V2 sliding window triggers to archive fact data from Azure SQL DB. This process automatically exports records to CSV files in Azure Data Lake over a recurring period, providing a historical archive available to various routines such as Azure Machine Learning, U-SQL Data Lake Analytics, or other big-data-style workloads.
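The windowing logic behind that recurring export can be sketched in plain Python: for each fixed-size window, select the fact rows whose timestamps fall inside it and write them to a dated CSV file. This is a hypothetical stand-in for the Data Factory trigger, using an in-memory DataFrame in place of the Azure SQL DB source; the column names and file-naming scheme are assumptions.

```python
# Sketch of sliding-window archival: for each one-day window, select the
# fact rows whose timestamps fall inside it and write them to a CSV file.
# In the article a Data Factory trigger drives this; here a small pandas
# DataFrame stands in for the Azure SQL DB fact table.
from datetime import datetime, timedelta
import pandas as pd

facts = pd.DataFrame({
    "event_time": pd.to_datetime(
        ["2019-06-01 02:00", "2019-06-01 14:00", "2019-06-02 09:00"]
    ),
    "amount": [10.0, 25.5, 7.25],
})

window_start = datetime(2019, 6, 1)
window_size = timedelta(days=1)

files = []
for _ in range(2):  # two recurring one-day windows
    window_end = window_start + window_size
    mask = (facts["event_time"] >= window_start) & (facts["event_time"] < window_end)
    name = f"facts_{window_start:%Y%m%d}.csv"  # hypothetical naming scheme
    facts[mask].to_csv(name, index=False)
    files.append((name, int(mask.sum())))
    window_start = window_end

print(files)  # [('facts_20190601.csv', 2), ('facts_20190602.csv', 1)]
```

The half-open interval (`>= start`, `< end`) is the key design choice: consecutive windows tile the timeline with no overlap and no gap, so each fact lands in exactly one archive file.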