In this post, I will cover some of the lessons learned from implementing large-scale xP&A (extended planning & analytics) projects that involve two-way data handling (read and write operations) within Power BI, beyond the typical reporting and visualization aspects.
In particular, I will focus on analytical applications that include data collection, workflows, and more advanced calculation logic. These are often part of xP&A initiatives such as budgeting and financial forecasting in global groups with many subsidiaries, or S&OP (sales & operations planning) aspects that combine the sales function with production or service provision.
These projects typically involve three aspects: a central dimensional data model, the calculation logic, and managed workflows.
For these xP&A-type projects, a properly maintained central dimensional data model is almost always required. In the words of Marco Russo (doyen of the Power BI world) in a recent interview (a very worthwhile watch): “If you think a company can get rid of a curated data model: no, it’s impossible”.
This data model includes the data in one or more fact tables and master data (additional details for the coordinates in the fact records) in related dimension tables:
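To make the structure concrete, here is a minimal star-schema sketch (using SQLite via Python purely for illustration; in practice this would live in the corporate database, e.g. SQL Server). All table and column names are assumptions, not a prescribed schema:

```python
import sqlite3

# Minimal star schema: dimension tables carry the master data,
# the fact table references them via permanent surrogate keys.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (
    product_id   INTEGER PRIMARY KEY,  -- permanent surrogate key
    product_name TEXT,
    category     TEXT                  -- hierarchy level above product
);
CREATE TABLE dim_customer (
    customer_id   INTEGER PRIMARY KEY,
    customer_name TEXT,
    region        TEXT
);
CREATE TABLE fact_sales (
    product_id  INTEGER REFERENCES dim_product(product_id),
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    period      TEXT,                  -- e.g. '2025-01'
    amount      REAL
);
""")
```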
There is a wide range of research and literature available that provides the basis for this approach, covering among others data warehousing principles (Inmon, Kimball, et al.) and data modelling basics in Power BI (Russo/Ferrari, et al.).
The key reasons include, among others:
A “quick and dirty” import of a few Excel files into Power BI will almost never suffice in the context covered here.
A proper central data model, independent of Power BI, is even more relevant if the application involves write-back, as the data should then preferably be stored in the same existing database, following the “single version of the truth” principle.
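As a sketch of what that means in practice, write-back from the front end can land in a plan fact table right next to the actuals in the same database. The snippet below is an illustration with assumed table and column names, again using SQLite as a stand-in for the central database:

```python
import sqlite3

# "central_model.db" stands in for the existing central database
# (illustrative; in reality e.g. a SQL Server instance).
con = sqlite3.connect("central_model.db")
con.execute("""CREATE TABLE IF NOT EXISTS fact_plan (
    product_id INTEGER, customer_id INTEGER,
    period TEXT, scenario TEXT, amount REAL)""")

def write_back(product_id, customer_id, period, amount, scenario="Budget"):
    # Planned figures are stored alongside the actuals, not in a
    # visual-local store: one version of the truth for all front ends.
    con.execute(
        "INSERT INTO fact_plan VALUES (?, ?, ?, ?, ?)",
        (product_id, customer_id, period, scenario, amount),
    )
    con.commit()

write_back(101, 5001, "2025-01", 12500.0)
```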
As an alternative, some Power BI visuals allow users to enter data that is then stored within the visual itself, but these are typically unsuitable in the scenarios discussed here, as the required data volumes will always exceed the limited storage of a visual (at most a few thousand records). It is not uncommon in the projects we work on for customers to have dimension master data tables (e.g. customers, products, etc.) with millions of rows, and the associated fact tables exceed these counts by far.
Apart from the serious limitations from a data storage perspective, decentral (visual-based) approaches are equally problematic for collaboration: users can only work in the one visual in the one Power BI file. In some cases, vendors address this with synchronization capabilities that allow exporting denormalized tables to Excel or another data store.
This will, without fail, lead to the dreaded spreadsheet chaos of numerous disparate tables “flying around”, not to mention the other issues a denormalized format brings. As opposed to a properly defined dimensional data model, where fact tables are connected to dimension tables via permanent IDs, all hierarchy information is stored statically in one table using clear-text names:
The problem with that approach is that a single dimensional change, e.g. when a product, an account name, or a related hierarchy changes, makes the whole “denormalized house of cards (tables)” crumble, as the sketch below illustrates.
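A minimal sketch of that maintenance contrast, with assumed names: in the dimensional model a rename touches a single dimension row, while in a denormalized export every row repeating the clear-text name must be rewritten, in every copy in circulation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, product_name TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
CREATE TABLE flat_export (product_name TEXT, amount REAL);
INSERT INTO dim_product VALUES (1, 'Widget A');
INSERT INTO fact_sales  VALUES (1, 100.0), (1, 250.0);
INSERT INTO flat_export VALUES ('Widget A', 100.0), ('Widget A', 250.0);
""")

# Dimensional model: one dimension row changes, the facts stay untouched.
con.execute("UPDATE dim_product SET product_name = 'Widget A+' WHERE product_id = 1")

# Denormalized export: every affected row in every copy must be rewritten,
# and any copy that is missed silently diverges.
con.execute("UPDATE flat_export SET product_name = 'Widget A+' "
            "WHERE product_name = 'Widget A'")
```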
All this is exacerbated further when several of these tables from different users or visuals have to be merged.
In addition, the other aspects we have already covered on what constitutes a productive integration in Power BI apply here as well.
The logic component in FP&A applications revolves predominantly around calculations: for example, metrics, allocations, and drivers. The principle to follow here is that the logic should, as far as possible, be part of the central data model, or at least openly accessible so it can be used with different front ends. It can be useful to support extensible models, e.g. a relational or multi-dimensional (OLAP) database holding the base calculations, which can then be extended with DAX calculations in a Power BI model. These Power BI models support an openly accessible standard that allows access from, for example, Excel or standalone applications, where the added logic remains available.
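One way to keep such logic central and openly accessible is to define it once in the database itself, e.g. as a view, so that Power BI, Excel, or any standalone application reads the same calculation. A minimal sketch with assumed names (SQLite again stands in for the central database):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_sales (product_id INTEGER, period TEXT,
                         revenue REAL, cost REAL);
INSERT INTO fact_sales VALUES (1, '2025-01', 1000.0, 600.0);

-- The margin metric is defined once, centrally, instead of being
-- re-implemented in each front end or visual.
CREATE VIEW v_margin AS
    SELECT product_id, period,
           revenue - cost             AS margin,
           (revenue - cost) / revenue AS margin_pct
    FROM fact_sales;
""")
print(con.execute("SELECT * FROM v_margin").fetchall())
```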
Handling logic solely in a Power BI visual contravenes the above principle, as the logic “only lives there”: access from the outside is, apart from the data export crutches already mentioned, unlikely to impossible.
xP&A applications typically require managed sequential steps: for example, submission and approval processes that involve notifications (e.g. email) and locking of the submitted data. Following the above principles, it is equally imperative here to store the relevant workflow data points, e.g. process status flags, in the same central data model and not in a disparate, inaccessible source like a Power BI visual.
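A sketch of what storing workflow state centrally can look like, with assumed table and column names: the status flags sit in the same database as the plan data, so any front end can read them and the server side can lock submitted slices:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE workflow_status (
    entity     TEXT,   -- e.g. a subsidiary submitting its budget
    period     TEXT,
    status     TEXT CHECK (status IN ('open', 'submitted', 'approved')),
    changed_by TEXT,
    changed_at TEXT DEFAULT (datetime('now')))""")

def submit(entity, period, user):
    # Flag the slice as submitted; the server side would then reject
    # further write-back against it and notify the approver (e.g. by email).
    con.execute(
        "INSERT INTO workflow_status (entity, period, status, changed_by) "
        "VALUES (?, ?, 'submitted', ?)",
        (entity, period, user),
    )
    con.commit()

submit("Subsidiary DE", "2025-01", "jane.doe")
```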
When the above principles are considered, we have seen tremendous success realizing xP&A processes with Power BI as a front end extended with write-back visuals AND a proper client-server infrastructure: typically a web server plus the central data model, alongside the appropriate governance architecture, in particular centrally defined row-level access rights and audit trails (“who did what when”).
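For the audit-trail part (“who did what when”), a minimal illustration with assumed names is a trigger that records every write-back centrally; row-level access rights would be enforced server-side in the same spirit:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_plan   (product_id INTEGER, period TEXT,
                          amount REAL, changed_by TEXT);
CREATE TABLE audit_trail (action TEXT, product_id INTEGER, period TEXT,
                          amount REAL, changed_by TEXT,
                          changed_at TEXT DEFAULT (datetime('now')));

-- Every insert into the plan table leaves an audit record.
CREATE TRIGGER trg_plan_audit AFTER INSERT ON fact_plan
BEGIN
    INSERT INTO audit_trail (action, product_id, period, amount, changed_by)
    VALUES ('insert', NEW.product_id, NEW.period, NEW.amount, NEW.changed_by);
END;
""")

con.execute("INSERT INTO fact_plan VALUES (1, '2025-01', 9000.0, 'jane.doe')")
print(con.execute("SELECT * FROM audit_trail").fetchall())
```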
A common outcome is that business functions realize very significant benefits, saving substantially on licensing fees previously spent on specialized silo solutions, particularly when those rely on their own “exotic data stores” as opposed to existing corporate standards like SQL Server.
Where we see a big trend for the future is the incorporation of server-side standards that apply to decentral components like Power BI visuals. For example, there is no point in repeating the configuration of a financial report every time the visual is used; the visual should automatically configure itself to the central standard. This applies equally to formatting and other data model aspects, and potentially to a “smart semantic layer” that treats data points according to their specific nature, e.g. scenarios, measure types (flow / point in time), and others.
Please feel free to contact us to discuss your project requirements or share your experiences and success factors for xP&A implementations.