Last week, I had the pleasure of speaking at DATA:SCOTLAND, a free data and analytics conference in Glasgow that brought together nearly a thousand data enthusiasts.
My session, titled ‘Your first Microsoft Fabric Lakehouse implementation’, focused on practical tips for setting up lakehouses with Microsoft Fabric. Despite being relatively new to public speaking, I enjoyed every moment, especially connecting with so many like-minded professionals and discussing data topics all day. The room was quite full, with 76 participants!
If you’re into data and analytics, I highly recommend attending similar events. Hope to see you at the next one!
Your first Microsoft Fabric Lakehouse implementation
In this session I share my journey, and the insights I gained, transitioning from building data warehouses with SQL to building them with lakehouses in Microsoft Fabric.
You will learn enough about Fabric and lakehouse concepts to start building your own solutions right away.
The session starts with a few theoretical concepts to set guidelines, followed by plenty of practical examples.
We will go through everything necessary for your first project: from setting up the Fabric Capacity to workspace management, environments, and importing your own custom Python code.
We end with a practical case study where we implement a Fabric Lakehouse solution for a small B2B services company. During this part you will see examples of the Data Factory orchestration pipelines, the folder and file structure of the data lake, and the PySpark notebooks you need to transform your raw data into insightful information.
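To give a flavour of the folder and file structure mentioned above, here is a minimal sketch of a medallion-style layout in the Files area of a Fabric lakehouse. This is an illustrative assumption on my part; the exact structure used in the session's case study may differ, and the source-system names (crm, invoicing) are purely hypothetical:

```
Files/
  bronze/            # raw data, landed as-is from the source systems
    crm/
    invoicing/
  silver/            # cleaned and conformed data, typically stored as Delta tables
  gold/              # business-level aggregates ready for reporting
```

In practice, the silver and gold layers are often promoted to the lakehouse Tables area as managed Delta tables, so downstream tools can query them directly.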
Featured photo taken by Mikey Bronowski