
PowerPivot Tips and Tricks – First Findings

November 22nd, 2010

I just finished a little project with PowerPivot together with James Beresford, and I feel compelled to share some of the lessons I learned during these exciting, if a little stressful, few days. There are a few things I have heard in the past in regards to good PowerPivot model design, but I’d like to reiterate some and maybe add a few more to what other bloggers have already mentioned. As usual with my posts, the following is not an exhaustive list, but it may resonate well with newbie PowerPivot developers like me (which at this stage is probably 99% of the total).

Integer Keys

I found out the importance of having integer dimension keys. The sample model I was given to work with had floats as primary keys for all dimensions. This has always been a pretty bad choice, but with PowerPivot avoiding it becomes even more important: the keys get loaded in memory, and unless your workstation has heaps of it, keeping the size of the solution down is critical. Therefore, even though PowerPivot should work just fine with other data types, I would strongly recommend choosing integers for your keys.
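
As an illustration, here is a minimal sketch of the sort of conversion I mean – the table and column names are made up, and the view (rather than the raw table) becomes the PowerPivot source:

-- Hypothetical example: expose the float key as an integer before import
CREATE VIEW dbo.vDimCustomer
AS
SELECT
    CAST(CustomerKey AS int) AS CustomerKey,  -- integer surrogate key instead of float
    CustomerName,
    City
FROM dbo.DimCustomer;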

Previewing and Filtering

Always have a look at the table you are importing into PowerPivot and make sure you deselect all the columns you do not actually need – they can always be selected later if required. As with SSAS, starting from a small model and extending it later is much better than pulling in every possible column and then removing the excess. This is even more valid with PowerPivot, as retrieving everything from the back-end database uses up a lot of memory, and unless you really don’t care about memory, that is not a good thing.
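
Alternatively, instead of importing the whole table and deselecting columns, you can point PowerPivot at a query (or a view) that returns only the columns you need. A rough sketch against Adventure Works-style names:

-- Import only the columns the model actually needs,
-- rather than pulling the whole source table
SELECT
    SalesOrderNumber,
    OrderDateKey,
    CustomerKey,
    SalesAmount
FROM dbo.FactInternetSales;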

Co-existence with SQL Server

I know for a fact that PowerPivot and SQL Server can co-exist peacefully. However, since SQL Server was running on the same box, I had to limit its memory usage to 150 MB so that it did not compete with PowerPivot for memory.
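
For reference, capping the instance is done through sp_configure – the exact value is up to you, 150 MB is simply what worked for me on that box:

-- Cap SQL Server's memory so it leaves room for PowerPivot
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 150;
RECONFIGURE;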

Moving logic to the database

If you are designing the solution and/or building the model, move join logic and similar transformations to the back-end database instead of trying to do more in PowerPivot. There is no point in importing tables into PowerPivot when the relationships between them are one-to-one. Instead, you can join them in a view and import them as one piece, thus avoiding some relationships in Vertipaq, which improves performance and makes the model easier to work with. For example, I had a 1-1 relationship through a large intermediate table (3+ million rows), which I managed to get rid of completely in the database back-end by applying a simple join.
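
As a rough sketch (the table and column names below are made up), the 1-1 pair can be collapsed into a single view and that view imported instead of the two tables:

-- Collapse a one-to-one relationship into a single object,
-- so PowerPivot imports one table and one less relationship
CREATE VIEW dbo.vOrderDetail
AS
SELECT
    o.OrderKey,
    o.OrderDateKey,
    o.SalesAmount,
    x.ShippingMethod,  -- columns from the 1-1 intermediate table
    x.PaymentType
FROM dbo.FactOrder AS o
JOIN dbo.FactOrderExtension AS x
    ON x.OrderKey = o.OrderKey;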

Many To Many Relationships

If you can avoid them – do it. However, if you can’t, you can read Marco Russo’s and Alberto Ferrari’s posts, as well as this PowerPivot FAQ post, to get more info on how to implement them. Unfortunately, we have to create multiple DAX calculated measures to work around the problem, and if we have many measures this can become a bit tedious.

Saving Regularly

Yes, you should save your work, especially when you notice that memory is getting scarce. Maybe not every 5 minutes, but at least every 30 minutes should be the norm. Once memory gets very low, PowerPivot becomes a bit unstable and you can easily lose some work. It does take some time to save a large model to disk, but it may well be worth it. This may sound like a no-brainer, but it is easy to stop saving your work because of the time required to do so. It is therefore possible (as it was in my case) to get a funny message asking you to Continue or Cancel, with no idea which is the better choice after a couple of hours of work…

PowerPivot

 

Ordering Dimensions in PowerPivot

November 22nd, 2010

UPDATE (2011-07-20): Indeed, with the release of CTP3 of SQL Server Denali we can order dimension members by another column, effectively making this post obsolete. Please ignore the advice below if you are working with a version of PowerPivot and BISM >= Denali CTP3.

I am sure that this post will get outdated very soon, but until that happens it is a neat little trick that PowerPivot users may need to know.

While in Analysis Services we can order our attribute hierarchies any way we want, in PowerPivot we are stuck with the default sort order, which can be annoying in some cases. The other day, while building a PowerPivot model (which performed GREAT, by the way), I had to order a band dimension with members like “Band 1-4”, “Band 5-10”, “Band 11-14”, “Band 15-19”, …, “Band 100-109”, etc. Alphabetically this got ordered very wrong, as all bands starting with “Band 1…” got bunched up at the beginning, followed by “Band 2…” The users were very unhappy, and after searching online I got a bit disheartened because I could not find a solution anywhere.

The following is Adventure Works 2008 in Excel 2010 (through PowerPivot) and the default ordering of the English Month Name attribute in the Date dimension:

What I noticed was that the actual order I wanted was reflected by the primary key of the table, which went 1, 2, 3, 4, … So I un-hid the key so that it shows up in the Pivot Table, renamed it to Band Order and showed the users how to nest it in front of the Band attribute. Because I had one key per band, I did not have to worry about having multiple unordered bands under the order attribute.

Furthermore, if placed on columns, the Order attribute row can be hidden for sheets which remain static (it’s ok if they have slicers):

Otherwise, if it is on rows, we can use the Classic Pivot Table view and then hide the order column, which gives a different, but equally useful result:

As a side note, when ordering date attributes like in my example above, you can use the Sort Options -> More Options functionality in Excel, as described in a PowerPivot Team post. I used the month as an example; the technique described here is for when you have trouble with other attributes. You can also sort members manually by dragging and dropping them in the pivot table, as Dan English explains here.

PowerPivot

 

Thoughts on BISM, SSAS and MDX

November 12th, 2010

I missed PASS this year, but I am addicted to all the blog posts coming from the attendees. I am reading, sending links to colleagues and mostly thinking about the future – both with disappointment (rarely) and excitement (99% of the time). There were two very significant blog posts by Chris Webb and Teo Lachev in the last couple of days, and I would like to share my thoughts on the future of SSAS.

Chris and Teo are both moderately unhappy (that is the mood I got from what they wrote) with the future of SSAS because of the shift of focus from “traditional” M/H/R OLAP to the in-memory Vertipaq engine, as well as the switch from MDX to DAX. My initial feeling was similar – I like MDX and the way SSAS works. After all, SSAS is the best OLAP tool on the market right now, and it got there after a long evolution. As Donald Farmer mentioned at Tech Ed in Australia this year, SSAS is the best-selling OLAP tool for Oracle. It is also the best-selling OLAP tool in general. Leaving a tool like that and moving on to a completely new technology is a stunning move in my opinion.

There are a few things, though, which I think are important to consider before declaring it a wrong move. First of all, in today’s affordable 64-bit world, there are few limits left for in-memory OLAP. Also, MDX – however powerful, different, interesting and sometimes elegant – is HARD. I know (personally) literally 3-4 developers in Australia who can write good MDX and grasp the concepts behind it. The vast majority of feedback on MDX is that it is “too hard”. Therefore, the two main selling points of SSAS – ease of building analytics and performance – are hard to improve on without radical action. The only way to make Microsoft OLAP easier to use is to move on from MDX, and the easiest way to improve performance is to go in-memory. This is why I am very optimistic about the future of SSAS and BISM. If small competitors like QlikView can build a good product (technologically), which can in some cases outperform SSAS, I am very confident that the SSAS team, with its vast expertise in building the most successful OLAP tool in the world, can beat them over a few years. It is better to start earlier rather than later.

On the flipside, the handling of the whole matter is puzzling and disorienting. Chris had a few really good points about the problems that may arise from such a swift transition. How do we sell traditional pre-Vertipaq SSAS when it is getting replaced with a new technology, which will require a couple of releases (at least) to beat everyone else and become Enterprise-ready? This is a question I would like to get an answer to, as I am an advocate of Microsoft BI, and apart from the minor inconvenience to my base of arguments with advocates of other BI suites/products, it will no doubt cause a major headache when talking to prospective clients. This is in fact my only concern.

To recap – I am very happy about the ongoing evolution of SSAS, and I will be very happy when Vertipaq becomes the SSAS of in-memory BI, when we have an easier language to query it, and when the stack gets better integration between its components. However, the transition is too abrupt for my taste. The “security through obscurity” approach to marketing I am observing may not be able to convince enterprise customers with complex needs and strategic plans to adopt a tool in transition. I hope that Microsoft puts more effort into a seamless migration path from SSAS to Vertipaq, so that the #1 spot of Microsoft OLAP technologies gets preserved through this drastic change.

Jumping straight to Amazon now to buy a book or two on DAX – a great opportunity for me to get a head start in DAX and PowerPivot before we get the technology in SSAS :) My previous concerns about PowerPivot adoption rates because of a few little things are now replaced with excitement over the fact that we will get this new technology on a full client-server architecture (if not fully featured in Denali, then soon after that).

UPDATE: Please note Amir Netz’s response to Chris’ post in the comments – it explains the intent behind BISM and paints a brighter future for MOLAP and MDX.

UPDATE #2: TK Anand has a blog post on this topic here.

UPDATE #3: Chris Webb has a follow-up with a more optimistic tone here.

SSAS

 

Avoiding Multiple Role-Playing Date Dimensions

October 29th, 2010

If we investigate the Adventure Works SSAS solution, we’ll notice that whoever built it has chosen to use a role-playing Date dimension for various dates (e.g. Date, Ship Date, Delivery Date, etc.). I am not saying that instead, we should have multiple identical date dimensions – no, this would be a very bad idea. However, we could simplify things a bit by providing one Date dimension and then build a separate “Event” dimension, which represents the various events during the lifetime of an order. Possible dimension members would be Ordered, Shipped, Delivered, etc.

Let’s see how this can benefit us as developers, as well as our end-users. Firstly, often enough we want our cube to be able to answer a question like: “For this date, I want to see how many orders got ordered, shipped and delivered”. If we use role-playing date dimensions like in Adventure Works, this is not very easy to answer. Assuming our users use an ad-hoc query tool like an Excel pivot table, this sort of functionality is not possible out of the box – they have to pick each date dimension separately and sequentially replace it with the others to get the different counts of orders. There is no way to get the data in one pivot table at the same time, as one order will usually get Ordered and then Shipped on different dates; therefore, if we slice by both dimensions and choose the same date in both, we will get no results. To allow this, we can build a specific measure – for example “Orders Shipped” – like this:

CREATE MEMBER CURRENTCUBE.[Measures].[Orders Shipped]
AS
    -- Translate the currently selected Date member to the matching member
    -- of the role-playing Ship Date dimension, then return the order count
    (LinkMember([Date].[Calendar].CurrentMember, [Ship Date].[Calendar]),
     [Measures].[Internet Order Count]);

This way our users can slice only by Date, and the Orders Shipped measure will show the count of internet orders shipped on the selected Date. This is, however, a bit hard on the users, as they would need a similar measure for each role-playing date dimension they want to query like this, plus they would need to know which instance of the dimension to use for slicing. Furthermore, LinkMember is slow and better avoided. I consider this sort of measure a “hack”, which should not be in a final implementation unless absolutely necessary. I have, so far, found that in most cases multiple Date dimensions lead to some sort of scenario where someone asks for something which leads to LinkMember.

Another point against using multiple Date dimensions is the fact that an average date dimension would have approximately 2000-5000 (I will use 3000 for convenience) members. If we have 3 such dimensions, the complexity of our cube grows and the theoretical cube space expands to 3000^3 = 27,000,000,000, or 27 billion cells. You can see how it would grow exponentially with each incarnation of our Date dimensions. Of course, this is mostly sparse, however it still impacts performance. In general, adding dimensions increases complexity, which reduces performance.

There is another shortcoming of this approach – the case when an order is Ordered but not yet Shipped or Delivered. More often than not, developers choose to leave the dimension key in the fact table for the Shipped and Delivered dates as NULL, which in SSAS either gets converted to Unknown or gets excluded altogether. I hate NULL dimension keys. Therefore, I always prefer to replace the NULL with something meaningful – in most cases with -1, but in the case of a Date dimension this becomes a bit different. The common approaches are to replace the date either with an outlier value (e.g. 19000101 or 99991231), or with the first/last values in the dimension (e.g. 20000101 or 20200101). Querying this type of fact table can also cause headaches. Asking “How many orders are Ordered but not yet Shipped?” in MDX has to be replaced with asking “How many orders are Ordered but Shipped on 99991231?” For tricky developers like us this is ok, but I would hate to explain it to Excel users.
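
For illustration, the key replacement described above typically happens during the ETL load – a minimal sketch with made-up table and column names:

-- Replace missing Ship/Delivery date keys with an outlier key (99991231)
-- instead of leaving NULL dimension keys in the fact table
INSERT INTO dbo.FactOrder (OrderKey, OrderDateKey, ShipDateKey, DeliveryDateKey, SalesAmount)
SELECT
    s.OrderKey,
    s.OrderDateKey,
    ISNULL(s.ShipDateKey, 99991231),
    ISNULL(s.DeliveryDateKey, 99991231),
    s.SalesAmount
FROM staging.Orders AS s;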

Lastly, I want to discuss the issue of adding an additional “event” for the orders. For example, if we now want to record when an order gets Delivered, we have to add one more date key to our fact table, change the cube to include another role-playing dimension, and potentially change some of our MDX code. This is a major inconvenience which is better avoided.

I mentioned an alternative. This is one Date dimension and an Event dimension (it could be better to call it Status in many cases). The Event dimension can contain only a few members – in our case Ordered, Shipped and Delivered. In the fact table we record each event. Therefore, when an order gets placed, we insert one row with the applicable Date and Event (in my example – Ordered). Then, when an order gets Shipped, we insert another row for the same order but with a different date and with an Event of “Shipped”.
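
A minimal sketch of what such an event-based fact table could look like (all names below are hypothetical):

-- One row per order per event, instead of one date key column per event
CREATE TABLE dbo.FactOrderEvent
(
    OrderKey int NOT NULL,
    DateKey  int NOT NULL,  -- the single, shared Date dimension
    EventKey int NOT NULL   -- Ordered, Shipped, Delivered, ...
);

-- When the order is placed:
INSERT INTO dbo.FactOrderEvent (OrderKey, DateKey, EventKey)
VALUES (1001, 20101029, 1);  -- 1 = Ordered

-- Later, when the same order gets shipped:
INSERT INTO dbo.FactOrderEvent (OrderKey, DateKey, EventKey)
VALUES (1001, 20101103, 2);  -- 2 = Shipped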

Let’s see if this eliminates the problems above (hint: yes, it does):

Now users can simply drop our count of orders measure in the data area in a pivot table, then filter by a date and place the Event dimension on rows – the pivot table displays the relevant counts for the selected date. Thus, we eliminated the ambiguous bit.

There is also no need to use unusual dimension members for events which have not happened – we simply have no fact rows for them. Therefore, when we want to query the cube for orders which are ordered but not yet shipped, we can just ask for ones which are empty when sliced by the Shipped member of the Event dimension:

CREATE MEMBER CURRENTCUBE.[Measures].[Ordered but not Shipped]
AS
    -- Count the orders which have an Ordered event but no Shipped event
    FILTER([Order].[Order].[Order],
           ([Event].[Event].[Event].[Ordered], [Measures].[Order Count]) > 0 AND
           ([Event].[Event].[Event].[Shipped], [Measures].[Order Count]) = 0).COUNT;

We also completely avoid using LinkMember and the performance issue around the cube complexity, since we have only one Date dimension and a few Event dimension members. Instead of exponentially, the potential cube space grows linearly with each Event (instead of 3000^3, we get 3000*3 for three Events).

As for the last issue – when we want to add a new event, we can do this by simply adding one more Event dimension member and start inserting rows in the fact table with its key. As simple as that.

Therefore, yes, we have eliminated the problems I discussed.

Potential drawbacks (there are always some):

  1. The fact table will grow faster, as we are storing multiple rows per order. This should not normally be a huge issue, but in some cases it could be. From a pure SSAS point of view it is a non-issue, but it should be considered in some scenarios. A mitigating factor is the decrease in row size resulting from the elimination of the multiple Date keys, although this is not likely to offset the extra rows in most cases.

  2. Storing amounts against orders becomes a bit more difficult, because we may have to duplicate the amounts for each event and use something like LastNonEmpty so the amounts do not get summed over time, as well as make the Event dimension non-aggregatable so that amounts do not get aggregated across different events. Alternatively, we could move the amounts to their own fact table.

  3. If we query the fact table in T-SQL as well, this becomes a rather inconvenient model, as a task as simple as finding the number of days between an order being Ordered and being Shipped can no longer be done with a single DATEDIFF() call (see the sketch after this list). This is not really a problem in SSAS, and we are talking about SSAS in this post, but you should keep it in mind.
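
To illustrate point 3 (with the same hypothetical FactOrderEvent table as above, plus a hypothetical DimDate table), the event dates now have to be pivoted back into columns before DATEDIFF() can be applied – doable, but no longer a one-liner:

-- Days between the Ordered and Shipped events for each order
SELECT
    e.OrderKey,
    DATEDIFF(day,
             MAX(CASE WHEN e.EventKey = 1 THEN d.FullDate END),  -- Ordered
             MAX(CASE WHEN e.EventKey = 2 THEN d.FullDate END))  -- Shipped
        AS DaysToShip
FROM dbo.FactOrderEvent AS e
JOIN dbo.DimDate AS d ON d.DateKey = e.DateKey
GROUP BY e.OrderKey;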

Stefan Riedel, a colleague of mine, helped me identify some of these problems – thanks to him this post is not as incomplete as it could have been. There could be more, and I welcome comments on this post with possible problems and solutions, which could help other readers implement this sort of scenario better.

SSAS

 

Importing Azure DataMarket feeds in PowerPivot

October 29th, 2010

I just had a quick play with the new PowerPivot add-in for Excel and tested the import with a few data feeds from the new Azure Marketplace. Straight away, I have a few lessons-learned points which may be worth sharing:

  1. The instructions for importing the feeds through PowerPivot are slightly incorrect. Instead of “Getting Started with this Dataset”, you should click on “Explore this Dataset”. Then, on the bottom-left there is a drop-down which allows you to select PowerPivot. From there onwards it’s straightforward.

  2. Wait for a feed to finish before importing the next one. It may take a while with a large feed, and I was impatient (on the 64-bit version, for reference), which crashed my Excel.

  3. Importing the dataset through the Service Root URI (pasting it directly into PowerPivot) will result in a 400 Bad Request error even though the connection test is successful. Marco Russo blogged about the fix – adding a slash at the end of the URI – here.

And a quick suggestion – it would be good to see the size of the feed we are about to import on the Explore Dataset page. Since it displays only the top 100 items, there is no way of telling whether there are 101 or 1,000,000,000 feed items, which in turn doesn’t allow us to prepare for a long wait.

Apart from these, I had no other issues and I am happily browsing the datasets in PowerPivot to the amazement of my .NET colleagues.

PowerPivot