PowerPivot and Data Warehousing

A consultant I know has recently been through the process of reviewing his company’s management information requirements, and of evaluating which technology platforms can deliver those requirements. Naturally, a data warehouse was one of the options. In my opinion, a data warehouse provides the enterprise-level platform for an organisation to deliver its management information.

In this case, though, due diligence flagged up some alternative approaches to a data warehouse, namely the raft of self-service BI tools that are out there, including PowerPivot. Other tools that can take in data from a variety of data sources include Tableau (which is also a good front end for Analysis Services) and Spotfire.

I think that all of these tools have their merits. PowerPivot is incredibly fast and easy to use, whereas Tableau and Spotfire, for example, have some handy data visualisation capabilities. Some vendors actually claim that self-service tools can replace a data warehouse. Can this really be the case? While these tools do a good job of presenting data in a dimensional model, my personal view is that a data warehouse also does this, and in addition delivers the following benefits, to name a few:

  • Data Cleansing – The power of SQL in the backend, plus features like Fuzzy Matching in SSIS, means that you can address the huge problem of data quality. The bonus here is that all data quality issues can be logged, meaning that you can go back to the owners of the source systems and let them know that their data is in a mess (a simple logging sketch follows this list).
  • Control – If you do encounter data quality issues, you may decide that the affected data is simply too bad to load into the data warehouse and present to your users.
  • Transparency – If you want, you can choose to expose the data quality level of your facts to users via a Kimball-style audit dimension. E.g. we may have noticed that, within our retail data, one store has abnormally low sales. We can therefore assign that fact a lower data quality score than the other sales-by-store rows for the day, which lets users know that the data quality steward is aware of the issue and is looking into it (see the audit dimension sketch after this list).
  • Slowly Changing Dimensions – A lot of source systems don’t behave. E.g. if we’re tracking a customer’s Marital Status of ‘single’ or ‘married’, you often find that source systems only hold the latest value of a given field. Without handling slowly changing dimensions in a data warehouse, we may never know that Customer X is now single, but once was married (a Type 2 sketch follows this list).
  • Conformed Dimensions – When we have multiple data sources for a single dimension, the goal is to ensure that each dimension available to end users represents a single, consistent view of that dimension. Given the often complex matching and de-duplication that’s required, it’s difficult to see how this would be possible without a data warehouse (a simple mapping-table sketch follows this list).
  • Late Arriving Dimension Rows – Going back to the ‘Marital Status’ attribute, we can get the situation where a source system only updates a particular field a long time after the event actually occurred. E.g. we think that a customer called John Doe is single, but today we’re told that he’s married – and has in fact been married for the last two years. If Marital Status is an important piece of management information to the business, then we need to update all data associated with John Doe over that period, so that it reflects his correct Marital Status throughout (sketched below).
  • Complex Transformations – I recently encountered a source system where one of the columns in a SQL table held data in a varying-length format of ‘#Product Type#Currency#Value, #Product Type#Currency#Value’. So one row in the table might hold, say, 25 records packed into a single column, whereas another row might hold just 7 – and rows could also be badly formed! We ended up using a bit of C# within SSIS to split this data out so that the sub-elements could be analysed correctly (a T-SQL sketch of the same parsing idea follows this list).
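
To make the data cleansing point a little more concrete, here’s a minimal T-SQL sketch of logging quality issues during a load. All of the names here (dbo.DataQualityLog, staging.Customer, the postcode rule) are assumptions for illustration, not anything from a real system:

```sql
-- Hypothetical log table for data quality issues found during ETL.
CREATE TABLE dbo.DataQualityLog (
    LogId        INT IDENTITY PRIMARY KEY,
    SourceSystem VARCHAR(20)  NOT NULL,
    SourceTable  VARCHAR(50)  NOT NULL,
    SourceKey    VARCHAR(50)  NOT NULL,
    Issue        VARCHAR(200) NOT NULL,
    LoggedAt     DATETIME2    NOT NULL
);

-- Divert rows that fail a simple rule into the log, so the issue can be
-- reported back to the owners of the source system.
INSERT INTO dbo.DataQualityLog (SourceSystem, SourceTable, SourceKey, Issue, LoggedAt)
SELECT 'CRM', 'Customer', CAST(c.CustomerId AS VARCHAR(50)),
       'Missing or malformed postcode', SYSDATETIME()
FROM   staging.Customer AS c
WHERE  c.Postcode IS NULL
   OR  c.Postcode NOT LIKE '[A-Z]%[0-9]%';
```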
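
Here’s a sketch of the audit dimension idea. Again, the schema (dbo.DimAudit, dbo.FactSales, dbo.DimStore) is assumed purely for illustration:

```sql
-- Minimal Kimball-style audit dimension; each fact row carries an AuditKey.
CREATE TABLE dbo.DimAudit (
    AuditKey         INT IDENTITY PRIMARY KEY,
    BatchId          INT          NOT NULL,  -- the ETL load that produced the row
    DataQualityScore TINYINT      NOT NULL,  -- e.g. 100 = clean, lower = suspect
    QualityComment   VARCHAR(200) NULL       -- e.g. 'Abnormally low sales - under review'
);

-- Expose the score alongside the facts, so users can see which rows the
-- data quality steward has flagged.
SELECT s.StoreName, f.SalesAmount, a.DataQualityScore, a.QualityComment
FROM   dbo.FactSales AS f
JOIN   dbo.DimAudit  AS a ON a.AuditKey = f.AuditKey
JOIN   dbo.DimStore  AS s ON s.StoreKey = f.StoreKey
WHERE  a.DataQualityScore < 100;  -- just the suspect facts
```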
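
For the slowly changing dimension point, here’s a minimal Type 2 sketch against an assumed DimCustomer schema. The expire-and-insert pattern is the standard Kimball approach; the names and dates are made up:

```sql
-- Type 2 dimension: one row per version of each customer, with validity dates.
CREATE TABLE dbo.DimCustomer (
    CustomerKey   INT IDENTITY PRIMARY KEY,  -- surrogate key
    CustomerId    INT         NOT NULL,      -- business key from the source system
    MaritalStatus VARCHAR(10) NOT NULL,
    EffectiveDate DATE        NOT NULL,
    ExpiryDate    DATE        NULL,          -- NULL = still current
    IsCurrent     BIT         NOT NULL
);

-- The source now says customer 42 is 'Single': expire the current row...
DECLARE @ChangeDate DATE = '2011-06-01';

UPDATE dbo.DimCustomer
SET    ExpiryDate = DATEADD(DAY, -1, @ChangeDate),
       IsCurrent  = 0
WHERE  CustomerId = 42 AND IsCurrent = 1;

-- ...and insert the new version, preserving the fact that he was once married.
INSERT dbo.DimCustomer (CustomerId, MaritalStatus, EffectiveDate, ExpiryDate, IsCurrent)
VALUES (42, 'Single', @ChangeDate, NULL, 1);
```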
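
For conformed dimensions the heavy lifting is in the matching itself, but a sketch of the end result might look like this: a mapping table records the de-duplication decisions, so that every source system resolves to one conformed customer (all names assumed):

```sql
-- One conformed id per real-world customer, however many systems they appear in.
CREATE TABLE dbo.CustomerSourceMap (
    ConformedCustomerId INT         NOT NULL,  -- the single id exposed to users
    SourceSystem        VARCHAR(20) NOT NULL,  -- e.g. 'CRM' or 'Billing'
    SourceCustomerId    VARCHAR(50) NOT NULL,
    PRIMARY KEY (SourceSystem, SourceCustomerId)
);

-- Facts from any system join through the map to the same conformed customer,
-- so 'Customer' means exactly one thing wherever it appears.
SELECT m.ConformedCustomerId, SUM(t.Amount) AS TotalAmount
FROM   staging.BillingTransactions AS t
JOIN   dbo.CustomerSourceMap AS m
  ON   m.SourceSystem = 'Billing'
 AND   m.SourceCustomerId = t.CustomerRef
GROUP BY m.ConformedCustomerId;
```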
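
Handling the late-arriving John Doe example builds on the Type 2 sketch above: we rewrite history so the ‘Married’ version covers the last two years, then re-point any facts that landed on the wrong version. The FactSales table and its OrderDate column are assumptions:

```sql
-- We learn today that customer 42 has actually been married for two years.
DECLARE @ActualDate DATE = DATEADD(YEAR, -2, CAST(GETDATE() AS DATE));

-- 1. Close off the version that wrongly covers that period.
UPDATE dbo.DimCustomer
SET    ExpiryDate = DATEADD(DAY, -1, @ActualDate),
       IsCurrent  = 0
WHERE  CustomerId = 42
AND    EffectiveDate < @ActualDate
AND    (ExpiryDate IS NULL OR ExpiryDate >= @ActualDate);

-- 2. Insert the corrected version, effective from the real change date.
INSERT dbo.DimCustomer (CustomerId, MaritalStatus, EffectiveDate, ExpiryDate, IsCurrent)
VALUES (42, 'Married', @ActualDate, NULL, 1);

-- 3. Re-point facts in the affected window at the corrected version,
--    so two years of history now shows the right Marital Status.
UPDATE f
SET    f.CustomerKey = d.CustomerKey
FROM   dbo.FactSales AS f
JOIN   dbo.DimCustomer AS old ON old.CustomerKey = f.CustomerKey
JOIN   dbo.DimCustomer AS d
  ON   d.CustomerId = old.CustomerId AND d.IsCurrent = 1
WHERE  old.CustomerId = 42
AND    f.OrderDate >= @ActualDate;
```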
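
Finally, for the packed-record transformation, we used C# inside SSIS at the time; purely as an illustration of the parsing logic, here’s how the split could be sketched in T-SQL on SQL Server 2016 or later (STRING_SPLIT didn’t exist back then, and the table and column names are made up):

```sql
-- One source row packs many '#Product Type#Currency#Value' records into a column.
CREATE TABLE dbo.RawPositions (
    RawPositionId INT IDENTITY PRIMARY KEY,
    PackedData    VARCHAR(MAX)  -- e.g. '#Bond#GBP#100.50, #Equity#USD#75.25'
);

WITH Records AS (
    -- Split the packed column into one row per record (SQL Server 2016+).
    SELECT r.RawPositionId,
           LTRIM(s.value) AS Rec            -- e.g. '#Bond#GBP#100.50'
    FROM   dbo.RawPositions AS r
    CROSS APPLY STRING_SPLIT(r.PackedData, ',') AS s
    WHERE  LTRIM(s.value) LIKE '#%#%#%'     -- discard badly formed fragments
)
SELECT RawPositionId,
       SUBSTRING(Rec, 2, p2 - 2)                         AS ProductType,
       SUBSTRING(Rec, p2 + 1, p3 - p2 - 1)               AS Currency,
       CAST(SUBSTRING(Rec, p3 + 1, 50) AS DECIMAL(18,2)) AS Value
FROM   Records
CROSS APPLY (SELECT CHARINDEX('#', Rec, 2))      AS a(p2)   -- second '#'
CROSS APPLY (SELECT CHARINDEX('#', Rec, p2 + 1)) AS b(p3);  -- third '#'
```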

I could go on, but I think the above will do as a few simple examples.

Whilst I’m an advocate of data warehousing, I also think that self-service tools such as PowerPivot can be complementary to a data warehouse. You’ll often find, for example, that there’s some data that’s not in the data warehouse yet – after all, some organisations have a lot of data, and it takes time to carry out all the business analysis needed to get that data into the warehouse. Another example is prototyping: analysts can very quickly build a prototype model, which can then be productionised at a later stage. I think this video from Chris Webb is really worth watching – it covers these examples and a few more.

In summary, there’s always going to be a need for users to go digging for data and producing their own reports – the data warehouse can’t always deliver everything immediately. PowerPivot is going to be great in this situation, but it will work best when it’s used alongside cleaned and conformed data, which is exactly what a data warehouse provides.