You pull up to the parking lot with your friends, excited to watch SuperHero 4: The Rhinestone Glove of Power. Your friends are bemused as you leave the engine running, lock the doors, and walk off. "It's so convenient," you explain, "to be able to jump in and go." Four and a half hours later, you walk back out to the parking lot, jump in, and drive home. Your friends question your sanity over all the gas wasted by leaving the engine running. Still, you explain, "Don't worry, I have a medium-tier gas subscription, so it makes sense."
Power BI, now integrated into Microsoft Fabric, continues to be a leading business intelligence (BI) platform, offering powerful data visualization tools widely used by business users. Fabric, however, introduces a significant shift: it requires always-on capacities, which conflicts with the consumption-based cloud pricing models customers have grown accustomed to, especially for Power BI workloads. This shift raises the question of whether these capacities are suited to modern data platforms.
Let's walk through the components of Fabric, including OneLake and Direct Lake, and how Power BI integrates with them. Then we'll look at ways to leverage Power BI without incurring the significant wasted costs associated with always-on compute.
Understanding OneLake's Always-On Capacity
First, let's explore OneLake, which is touted as a single, unified big data store built on Azure Data Lake Storage (ADLS). In Microsoft Fabric, data is stored in OneLake, which resides within Fabric workspaces. Each workspace is tied to a Fabric capacity, and the challenge with this model is that the capacity must always be running for the data in OneLake to be accessible. This requirement applies even when accessing the data from non-Fabric compute environments via Azure Blob Filesystem (ABFSS) URIs.
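To make that concrete, here is a minimal sketch of reading a file from OneLake over its ADLS-compatible endpoint in Python. It assumes the azure-identity and azure-storage-file-datalake packages, and the workspace and lakehouse names are hypothetical placeholders; the point is that the call only succeeds while the backing Fabric capacity is running.

```python
# Minimal sketch: read a file from OneLake through its ADLS Gen2-compatible
# DFS endpoint. Requires: pip install azure-identity azure-storage-file-datalake
# "my-workspace" and "my_lakehouse" are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# OneLake exposes the same endpoint shape as ADLS Gen2.
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# In OneLake, the "file system" is the Fabric workspace.
workspace = service.get_file_system_client("my-workspace")

# Item paths follow <item>.<ItemType>/Files/... or /Tables/...
file_client = workspace.get_file_client("my_lakehouse.Lakehouse/Files/raw/orders.csv")

# This download works only while the workspace's Fabric capacity is running;
# with the capacity paused, the request fails even though storage is still billed.
data = file_client.download_file().readall()
print(f"Read {len(data):,} bytes from OneLake")
```

Spark engines such as Databricks address the same data with an ABFSS URI of the form abfss://my-workspace@onelake.dfs.fabric.microsoft.com/my_lakehouse.Lakehouse/Files/raw/orders.csv, and those reads fail the same way when the capacity is paused.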
Although all data access is blocked while a capacity is paused, storage billing continues at roughly $0.023 per GB per month; parking 10 TB in a paused OneLake, for example, still bills about $235 a month even though nothing can read it. This represents a significant departure from the more flexible model of traditional cloud storage services like Azure Data Lake Storage Gen2 (ADLS Gen2), Amazon S3 (Simple Storage Service), and Google Cloud Storage (GCS), which charge similar storage fees but allow data access at any time.
Many customers question whether Microsoft can justify requiring an always-on capacity solely to reach storage, as this approach leads to higher costs. With ADLS, by contrast, customers pay for storage plus nominal per-transaction fees, with no compute that must stay running just to access or browse data, which makes the Fabric model look expensive to many organizations.
Thinking back to the "leave the car engine running" analogy, this is akin to car manufacturers requiring the engine to be running to read the car manual!
Direct Lake: Capacity Requirements Explained
Direct Lake is a storage mode for tables in a Power BI semantic model stored in a Microsoft Fabric workspace. It works by loading data from OneLake's Delta tables directly into memory for fast access, avoiding both scheduled import refreshes and per-query translation. A key point with Direct Lake is that it requires a Fabric capacity license, meaning datasets must be created and stored within Fabric capacities. While Direct Lake provides high-speed access to data, it has well-documented limitations; when those limits are hit, queries fall back to DirectQuery against the Fabric Data Warehouse (DWH), which is slower and still requires an active capacity to function.
Although you can pause a capacity in the Azure Portal, as mentioned earlier, doing so will make all Fabric services unresponsive.
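If you do pause capacities during off-hours to save money, you can script it rather than clicking through the Portal. The sketch below assumes the suspend and resume actions exposed by the Microsoft.Fabric ARM resource provider; the api-version and all identifiers are illustrative placeholders, so verify them against the current Azure REST reference before relying on this.

```python
# Sketch: suspend or resume a Fabric capacity via the Azure Resource Manager
# REST API instead of the Azure Portal.
# Requires: pip install azure-identity requests
# All IDs are placeholders, and the api-version is an assumption to verify.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
CAPACITY_NAME = "<capacity-name>"       # placeholder
API_VERSION = "2023-11-01"              # assumed; check current docs

BASE_URL = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Fabric/capacities/{CAPACITY_NAME}"
)

def set_capacity_state(action: str) -> None:
    """Invoke the 'suspend' or 'resume' action on the capacity."""
    token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
    resp = requests.post(
        f"{BASE_URL}/{action}",
        headers={"Authorization": f"Bearer {token.token}"},
        params={"api-version": API_VERSION},
    )
    resp.raise_for_status()

# Pause overnight to stop compute billing. Remember the trade-off: while
# suspended, OneLake data is unreachable from every engine, Fabric or not.
set_capacity_state("suspend")
```

Note that nothing resumes the capacity automatically when someone needs data at 2 a.m.; that remains a manual (or scheduled) step.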
It makes sense that a capacity must be running for Direct Lake's in-memory access to work, but it is yet another feature that relies on capacities running constantly.
Power BI in Microsoft Fabric
Next, let's look at data access patterns with Power BI in Microsoft Fabric and how pausing capacities affects them.
We've heard from customers who work with Power BI that they often have excess Power BI capacity and hope to use Direct Lake to support additional workloads. However, this can be problematic. If you carve that extra headroom out of a production capacity, such as one running executive dashboards, you risk overloading it. That can trigger throttling, making those executive dashboards inaccessible until the throttling period ends or until you double the capacity.
While scaling out is an option, it involves nearly doubling your resources, leaving you with an over-provisioned capacity that you pay for whether or not it is fully used.
Paying $5,000 a month for a "medium-tier gas subscription" might make sense if you knew you would predictably consume more than that every month before hitting the theoretical limit of the tier. But this consumption-based activity is sporadic and unpredictable, with peaks and troughs, much like your data processing.
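To put rough numbers on the analogy, here is a back-of-the-envelope sketch. The monthly price and the usage pattern are illustrative assumptions, not Microsoft list prices; the takeaway is how the cost per hour of real work balloons when most paid hours sit idle.

```python
# Back-of-the-envelope math on what "always-on" costs when usage is spiky.
# All figures are illustrative assumptions, not Microsoft list prices.
ALWAYS_ON_MONTHLY = 5_000.00  # the "medium-tier subscription" from the analogy
HOURS_PER_MONTH = 730         # average hours in a month

# Assume the capacity does real work 8 hours a day, 22 business days a month.
busy_hours = 8 * 22  # 176 hours

billed_rate = ALWAYS_ON_MONTHLY / HOURS_PER_MONTH      # ~$6.85 per billed hour
cost_per_busy_hour = ALWAYS_ON_MONTHLY / busy_hours    # ~$28.41 per hour of use
idle_share = 1 - busy_hours / HOURS_PER_MONTH          # ~76% of paid hours idle

print(f"Billed rate:        ${billed_rate:,.2f}/hour")
print(f"Cost per busy hour: ${cost_per_busy_hour:,.2f}/hour")
print(f"Idle share:         {idle_share:.0%}")
```

Under those assumptions, roughly three out of every four dollars pay for hours when nobody is querying anything.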
The Impact of Always-On Microsoft Fabric Capacity
Lastly, let's consider the impact of maintaining a minimum "always-on" F2 capacity for users interacting with OneLake and Fabric Workspaces. In large enterprises, where multiple teams manage their own workspaces, scaling up in the shared capacity model often requires creating new capacities or scaling existing ones. Since a workspace can only be assigned one capacity, and each capacity has resource limits, isolating workloads becomes necessary to avoid throttling, especially for critical tasks like executive dashboards.
Microsoft recommends having multiple capacities to support chargebacks and separate staging environments (Development, Testing, and Production). However, even when not performing heavy queries, developers and analysts still need F2 capacities to explore data and code, leading to a proliferation of under-utilized capacities. For example, organizations might require hundreds of F2 capacities to allow for this basic interaction. Databricks, by contrast, simplifies this with serverless exploration in its workspace and Unity Catalog, reducing the need for always-on resources.
This setup is challenging for most customers, as it's difficult to determine the minimum capacity needed. F2 also has limitations: you don't get access to Copilot, just your Fabric items and OneLake data. If you need Copilot, you'll have to scale up to F64 quickly. Deciding between managing numerous F2 capacities or fewer, larger F64 capacities, while also keeping Power BI items in mind, can be quite the conundrum; the quick math below shows why.
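Here is that conundrum in numbers. The per-capacity-unit rate is an illustrative assumption (pay-as-you-go rates vary by region); the capacity-unit counts follow Fabric's SKU naming, where F2 provides 2 CUs and F64 provides 64.

```python
# Sizing conundrum: many small F2 capacities vs. fewer large F64s.
# The $/CU-hour rate is an illustrative assumption; CU counts follow the
# Fabric SKU names (F2 = 2 CUs, F64 = 64 CUs).
RATE_PER_CU_HOUR = 0.18  # assumed pay-as-you-go rate, varies by region
HOURS_PER_MONTH = 730

def monthly_cost(capacity_units: int, count: int = 1) -> float:
    """Always-on monthly cost for `count` capacities of a given CU size."""
    return capacity_units * RATE_PER_CU_HOUR * HOURS_PER_MONTH * count

print(f"One F2, always on:  ${monthly_cost(2):,.0f}/month")        # ~$263
print(f"100 F2s, always on: ${monthly_cost(2, 100):,.0f}/month")   # ~$26,280
print(f"One F64, always on: ${monthly_cost(64):,.0f}/month")       # ~$8,410
```

Because the cost scales linearly with capacity units, a fleet of one hundred always-on F2s runs several times the price of a single F64, yet neither arrangement stops billing during idle hours.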
Microsoft Fabric Capacity: Challenges and Takeaways
As we've explored, Microsoft Fabric has some great features, but its always-on capacity model introduces new challenges for organizations accustomed to more flexible cloud consumption models. Understanding these challenges is crucial when determining how to leverage Fabric's capabilities.
- Fabric Capacities Must Be Active: A paused capacity blocks access to data in OneLake and to every Fabric experience.
- Restricted Data Browsing: Users cannot browse data in Lakehouse Explorer when capacities are paused, which impacts all Fabric services.
- Non-Fabric Engine Access: Tools like Databricks or Tableau cannot access OneLake data via shortcuts if Fabric capacities are inactive.
- Capacity Throttling Risk: Scaling a single capacity across multiple workloads is risky and can lead to throttling, disrupting services like executive dashboards.
- Capacity Management Challenges: Identifying which users still need a paused capacity can be difficult, and pausing creates an admin burden, since a suspended capacity cannot restart automatically the way serverless compute can and must be manually re-enabled.
- Separate Capacity for Direct Lake: Keep Direct Lake workloads separate from existing Power BI production capacities to avoid throttling, as exploratory queries can impact performance.
While Fabric offers powerful tools like OneLake and Direct Lake, managing always-on capacities effectively is essential to avoid performance bottlenecks and unnecessary costs.
Is Always-On Capacity a Necessity or an Unnecessary Expense?
Microsoft Fabric's always-on capacity requirement adds complexity and costs to your data strategy. While Fabric offers integrated solutions, its rigid capacity model can lead to high costs for minimal flexibility. For businesses that need agile, workload-based scaling, Fabric's model creates inefficiencies that disrupt performance and waste resources.
We recommend evaluating whether your business can afford the limitations of Fabric's always-on model. Solutions like Databricks offer a scalable, serverless approach to data access, allowing the freedom to adapt without constant capacity expenses.
Aimpoint Digital has a proven track record of guiding organizations toward efficient, flexible data solutions. Whether it's enhancing data infrastructure, onboarding to Databricks, or ensuring seamless integrations, our team of seasoned data experts is here to support your data journey. Visit our "Meet an Expert" page to connect with our team and get started today.