Microsoft Fabric Security - The Foundation You Can't Ignore
- Harri Jaakkonen


Series · Part 1 of 2
How the platform authenticates, how network traffic is controlled, and why the SaaS model changes what you actually need to worry about.
It always starts the same way. The Fabric proof-of-concept goes well. The data team is excited. Someone in leadership says "let's move this to production," and then — somewhere between the handshake and the go-live — security has a meeting with the platform team, and the platform team says something they really should not say.
"It's SaaS. Microsoft handles the security."
They are not entirely wrong. But they are not right enough.
Fabric is a fully managed SaaS analytics platform, and that does shift a meaningful amount of security responsibility to Microsoft. Understanding exactly which parts — and what you are still accountable for — is the difference between a well-secured platform and a serious incident waiting to be written up in a post-mortem.
The Platform Team Was Not Completely Wrong
Here is the part where Microsoft does earn its keep. Fabric's security is, in the platform team's defense, genuinely "always on" by design.
Every request to Fabric is authenticated through Microsoft Entra ID. Every piece of data stored in OneLake is encrypted at rest. Every byte moving between Fabric services travels over Microsoft's own backbone network — not the public internet. TLS 1.2 is the floor; TLS 1.3 is negotiated wherever it can be.
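The same transport floor can be mirrored in your own client code when calling Fabric or any other HTTPS endpoint. A minimal sketch using Python's standard library (this configures your client, not Fabric itself):

```python
import ssl

# Mirror the platform's transport baseline in your own tooling:
# refuse anything below TLS 1.2 when opening connections.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # TLS 1.3 is still negotiated when available

print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # → True
```

Pass a context like this to `http.client`, `urllib`, or `requests` and any legacy-TLS endpoint simply fails to connect, which is exactly the behavior you want.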
None of this is optional. There is no toggle for "disable encryption to improve performance." There is no "legacy auth" backdoor. The baseline is real, and it is meaningful.
This is the point where Fabric genuinely differs from the SaaS platforms that gave SaaS security a bad reputation — the ones that handed you a login form and a password reset page and called it a security model.
But the baseline is just the baseline. What you build on top of it is still your problem.
The Phone Call No One Wants to Make
Imagine your organization has just onboarded three business units to Fabric. The workspaces are live. Pipelines are running. Analysts are happy.
Then someone in your security team pulls the sign-in logs. A user authenticated successfully from an IP in a country you do not operate in. From a personal laptop. With no MFA prompt. And they queried a Lakehouse that contains financial data.
The authentication was valid. Entra ID let it through, because nobody had told Entra ID not to.
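A pattern like this is straightforward to detect after the fact. Here is a hedged sketch of the kind of triage script a security team might run over exported sign-in records — the field names (`mfa_satisfied`, `country`, and so on) are illustrative, not the actual Entra ID sign-in log schema:

```python
# Illustrative triage over exported sign-in records. Field names are
# placeholders, not the real Entra ID log schema.

ALLOWED_COUNTRIES = {"FI", "SE", "NO", "DK"}  # example: a Nordics-only operation

def flag_suspicious(sign_ins):
    """Return successful sign-ins with no MFA from an unexpected country."""
    return [
        s for s in sign_ins
        if s["status"] == "success"
        and not s["mfa_satisfied"]
        and s["country"] not in ALLOWED_COUNTRIES
    ]

logs = [
    {"user": "analyst@contoso.com", "status": "success",
     "mfa_satisfied": True, "country": "FI"},
    {"user": "analyst@contoso.com", "status": "success",
     "mfa_satisfied": False, "country": "SG"},  # the scenario above
]

for hit in flag_suspicious(logs):
    print(hit["user"], hit["country"])
```

The point is not the script — it is that every condition it checks for is something Conditional Access could have blocked at sign-in time.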
This is what happens when you hand off security responsibility to the platform without configuring the surface area that belongs to you.
Conditional Access: The Control You Should Have Already Set Up
Conditional Access in Fabric is how you tell Entra ID what "valid authentication" actually means for your organization. Not just a correct password and a token — but the right device, the right location, the right method.
The scenario above gets resolved — before it happens — with a few Conditional Access policies:
- Require MFA for any connection to Fabric, full stop
- Require compliant devices — only Microsoft Intune-managed devices can authenticate
- Restrict by geography — if your users are in the Nordics, there is no legitimate reason for a successful login from Southeast Asia at 03:00
- Allow only known IP ranges — enforce an inbound allowlist for corporate traffic
None of this is unique to Fabric. These are the same Conditional Access policies you probably already have for Microsoft 365 and Azure. Adding Fabric as a target takes minutes. Not doing it is a decision, even if it did not feel like one.
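For illustration, here is roughly what the first two policies look like as a Microsoft Graph `conditionalAccessPolicy` payload. The app ID shown is the well-known Power BI Service first-party ID, which Fabric sign-ins target — verify it against your own sign-in logs before relying on it:

```python
# Sketch of a Graph conditionalAccessPolicy payload. The app ID is the
# well-known Power BI Service first-party ID (assumed here to cover Fabric).
POWER_BI_SERVICE_APP_ID = "00000009-0000-0000-c000-000000000000"

policy = {
    "displayName": "Require MFA and compliant device for Fabric",
    "state": "enabledForReportingButNotEnforced",  # run report-only first
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": [POWER_BI_SERVICE_APP_ID]},
    },
    "grantControls": {
        "operator": "AND",  # require BOTH controls, not either one
        "builtInControls": ["mfa", "compliantDevice"],
    },
}

# POST to https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
# with the Policy.ReadWrite.ConditionalAccess permission (call not shown).
print(policy["grantControls"]["builtInControls"])
```

Starting in report-only mode (`enabledForReportingButNotEnforced`) lets you see who the policy would have blocked before you lock anyone out.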
If Conditional Access is not part of your current environment — that is a different and more urgent conversation.
Closing the Door Entirely: Private Links
Conditional Access is about controlling who connects. Private Links is about controlling how they connect.
For organizations where network isolation is non-negotiable — financial services, healthcare, government — Conditional Access alone may not be enough. Private Links let you cut off all public internet access to your Fabric tenant and require that every connection originates from an Azure Virtual Network you control. No VNet, no access. Full stop.
The setup process is documented in the Set up and use private links guide. The architecture is sound, and for the right organizations it is the only defensible posture.
The cost is operational complexity. Remote workers, external partners, legacy tooling — all of them need a path through the private network. That means VPN, ExpressRoute, or VNet peering, depending on your setup. Plan this before you enable Private Links, not while fighting a helpdesk ticket storm the morning after.
Conditional Access and Private Links are not competing solutions — they solve different parts of the problem. Microsoft has a good guide on choosing the right inbound solution for your situation.
The Direction Everyone Forgets: Outbound
Most engineers think about security as something that protects the front door. Who is allowed in. How they authenticate. Whether they can reach the platform.
Fabric also reaches out. When a Notebook queries your Azure SQL database, when a Dataflow pulls from an on-premises ERP system, when a pipeline lands data into a storage account — Fabric is making outbound connections. And those connections need to be secured too.
This is where organizations get surprised, because outbound security requires understanding both sides of a connection that traverses your network boundary.
Managed Private Endpoints handle the Azure-to-Azure case cleanly. Fabric connects to your Azure SQL, your Storage Account, your Azure Database for PostgreSQL through a private endpoint that never touches the public internet. Microsoft manages the endpoint. You do not need gateway infrastructure. See Managed private endpoints in Fabric.
Managed Virtual Networks go further. When your Spark workloads — Notebooks, Spark jobs — need network isolation, a Managed VNet gives each workspace its own dedicated virtual network. Compute clusters run in that network, not in a shared environment with other tenants.
On-premises data gateways are the answer when your data sources are not in Azure. The gateway lives inside your corporate network, opens an outbound channel to Fabric, and routes data back without requiring you to poke holes in your firewall or open inbound ports.
Azure service tags cover the remaining cases — Azure SQL VMs, Managed Instances, REST APIs behind network controls — where a gateway is overkill but you still need to manage access. See Service tags in Fabric.
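In practice, service tags resolve to published CIDR prefixes — Microsoft ships them in a weekly "Azure IP Ranges and Service Tags" JSON download — and your network controls allow traffic matching those prefixes. A sketch of the allowlist check using Python's standard library; the tag name and ranges below are placeholders, not the real published values:

```python
import ipaddress

# Placeholder excerpt of the service-tag download. The tag name and
# prefixes are illustrative, not the actual published ranges.
service_tag_prefixes = {
    "PowerBI": ["203.0.113.0/24", "2001:db8:100::/48"],
}

def allowed_by_tag(ip, tag, prefixes=service_tag_prefixes):
    """True if the address falls inside any CIDR prefix of the given tag."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(p) for p in prefixes[tag])

print(allowed_by_tag("203.0.113.42", "PowerBI"))   # → True (inside the placeholder range)
print(allowed_by_tag("198.51.100.7", "PowerBI"))   # → False
```

The real value of tags over hand-maintained IP lists is that you re-download the JSON on a schedule instead of chasing Microsoft's address changes by hand.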
There is no single right answer here. The architecture depends entirely on where your data actually lives.
What Microsoft Owns — and What You Own
This is the conversation that teams need to have explicitly, not assume.
Microsoft owns
- Physical infrastructure and hypervisors
- Platform patching cycle
- Default encryption keys
- Microsoft global network
- Compliance certifications for the platform
- Entra ID integration
- Metadata platform operation
You own
- Your Entra ID tenant configuration
- Conditional Access policies
- Private Links decisions
- Customer-managed key choices
- Outbound network architecture
- Who has access to what, inside the platform
The security fundamentals documentation describes the architecture — it is worth reading once, carefully, to understand what you are sitting on.
If Microsoft's encryption is not sufficient for your compliance posture, Workspace customer-managed keys let you bring your own Azure Key Vault keys and control access at that level.
And then there is the part that causes the most damage: who has access to what, inside the platform, once they are authenticated. That is not in this post. That is Part 2 — and it is where most incidents actually originate.
Part 2 covers data access controls, permission models, sensitivity labels, compliance tools, and how Microsoft Purview fits into the picture.