Source Control for Database Development with SSDT

December 6th, 2013

I had never had the chance to use source control for database development before I started working on my current project. I have used TFS and Subversion for SSAS, SSRS and SSIS projects in the past, but since it is very difficult to manually edit the .rdl, .dtsx and SSAS-related files, TFS/Subversion played the role of a glorified storage location with some version control capabilities.

With SSDT we can now enjoy the full set of capabilities of a source control system. This is by far the most compelling reason for me to use SSDT. Sure, there are other very nice features which make SSDT a far better IDE than the plain old SQL Server Management Studio so many SQL Server developers use: the top three would be dependency/reference checking, better IntelliSense and schema compare (which I used to do with RedGate’s SQL Schema Compare in the past). But source control is not just about convenience – it makes you and your team more productive and reduces stress levels (yeah, it does!). T-SQL is much like application code – it’s easy for a developer to modify, it’s readable and merge-able. So, it is a prime candidate for being managed with a source control system.

SSDT is free. There are absolutely no drawbacks to using it in a team db development environment and I would strongly encourage anyone who has not tried it to do so. The initial investment in learning how to use it will pay off handsomely in the longer term. Things like _old suffixes on tables and stored procedures become unnecessary; we no longer have to worry about deleting a piece of code and losing it forever, or about taking daily backups just so we don’t lose code – in fact, we stop worrying about losing code in general. It will be there forever and we can track its progress (we can also see who breaks it). I’m sure most db developers are well aware of the benefits of source control, but looking back, my projects always went ahead without SSDT because its perceived complexity outweighed the potential benefits.

It is not hard. All you do is create a db project and start adding code in the form of .sql CREATE scripts. And that’s about it. Your project is configured to deploy to a SQL Server instance. When you build the solution, the output is a dacpac. When you deploy it to an instance, the dacpac gets compared to what is already there and an incremental update is applied to the target environment. If you add a non-nullable column to a table which already has some data in it, you will get a deployment failure. But try doing it on a live db – you’ll get the same issue, right? And since you are in SSDT, if you change the name of a table you’ll get errors on all objects referencing it (stored procedures, for example) and your solution won’t build. In other words, SSDT prevents you from breaking your solution’s consistency. You can also refactor code – a right-click on a function/stored proc reference allows you to change its name across the whole solution. All this plus source control. Still not convinced? Well, you may need to go through a few more db development projects (especially in a team environment), get pissed off with your teammates, lose some code and spend a few hours chasing code references to realise how important and empowering SSDT can be.
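Deployment does not have to happen from Visual Studio either. As a rough sketch only (the SqlPackage.exe path, the dacpac path and the server/database names below are made up for illustration), the dacpac produced by the build can also be published from PowerShell:

# Sketch only: publish a dacpac produced by an SSDT build.
# The SqlPackage.exe path and the server/database names are assumptions - adjust to your environment.
$sqlPackage = "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe"

$arguments = @(
    "/Action:Publish",
    "/SourceFile:C:\Builds\MyDatabase.dacpac",   # the output of building the SSDT project
    "/TargetServerName:MYSERVER",
    "/TargetDatabaseName:MyDatabase"
)

& $sqlPackage $arguments

SqlPackage does the same compare-and-apply-incremental-changes step as deploying from within SSDT, which makes it handy for automated builds.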

If your whole team is not on the same page, you can use SSDT solo. Import the db into SSDT and off you go – now you can tell everyone how they break your project all the time with changes which are not properly propagated across the solution. You can also import all new changes applied to the live db through the handy schema compare (SQL toolbar item). Just compare the live db to your project and it will show you all new objects, as well as all changes to existing objects in your solution. Applying these will effectively synchronise your solution with the new stuff other developers apply to the live db.

But don’t just take my word for it – download SSDT for free from: http://msdn.microsoft.com/en-gb/data/hh297027 and give it a good go before you decide that you are too old for it!

Oh, and if you do think it’s too much to handle, or if you would like to get your SSDT techniques honed a little and you live in London, I’d recommend either coming to work on my current project, or letting your boss know how great SSDT is and how he/she can empower your team by splashing a little on:

http://www.technitrain.com/coursedetail.php?c=28&trackingcode=CWB

T-SQL

 

Running PowerShell Scripts in SQL Agent Using CmdExec

August 3rd, 2012

If you have tried to execute PowerShell scripts through SQL Server Agent you may have noticed that the “version” (technically it’s not a version – see the links below for more info) of PowerShell run by the agent is not the same as the command-line PowerShell you may have tested your scripts with. The differences are explained in quite some depth by Chad Miller in his “The Truth about SQLPS and PowerShell V2” post and in his shorter answer to a question on StackOverflow. In brief, in all versions of SQL Server (including 2012), SQL Server Agent runs PowerShell scripts through a sqlps.exe process instead of the powershell.exe we get on the command prompt. Most annoyingly, there are core syntax differences which do cause issues when running scripts with SQL Agent over sqlps. For example, if you have a construct like $(), or a Get-Date call, SQL Server Agent will not recognise the syntax and you’ll get a syntax error. It looks like SQL Agent’s PowerShell step doesn’t like statements which can be interpreted as sqlcmd ones. Luckily, there is a very simple workaround. All we have to do is save the script somewhere on the filesystem (somewhere the SQL Server Agent service account can access, of course) and execute it through an Operating System (CmdExec) step, instead of a PowerShell one:

There we can simply call our script by:

powershell <script>

For example:

powershell "C:\test.ps1"

This will execute the test.ps1 script. All we have to do is make sure that the SQL Server Agent account can see the script (permissions) and also that it can read/write to the locations touched by the PowerShell script. This way, if you test scripts in the powershell.exe PowerShell environment, you will get the same results in SQL Server Agent. And if you want to have the SQL Server cmdlets available, you can load them as a module as described here.
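For example, a minimal sketch of the top of such a script (the server name is made up; SQLPS is the module name that ships with SQL Server 2012, and -DisableNameChecking just suppresses the warning about non-standard cmdlet verbs):

# Load the SQL Server cmdlets into a plain powershell.exe session
Import-Module SQLPS -DisableNameChecking

# Constructs that trip up the sqlps subsystem, like Get-Date, behave as expected here
$today = Get-Date -Format "yyyyMMdd"
Invoke-Sqlcmd -ServerInstance "MYSERVER" -Query "SELECT GETDATE() AS ServerTime"

One thing to watch: importing the module changes the current location to the SQLSERVER: provider, so use full paths in the rest of the script.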

Other

 

Cleaning up Hanging Traces with PowerShell

July 26th, 2012

This post is a brief follow-up to my previous post Counting Number of Queries Executed in SSAS. There we saw how we can create and save server-side traces in PowerShell. If you have played with those scripts, chances are you have a few hanging traces on your servers. Other possible sources of hanging traces are Profiler crashes (not normal closes, as Profiler usually cleans up after itself), or other applications which do not drop the traces they create. To check what’s running on your servers, have a look at the XMLA scripts in this blog post by Karen Gulati. It would be nice to have a PowerShell script which drops all traces other than the ones you need. The following script does just that. As it loops over a number of servers, you don’t need to install it on multiple machines (provided you have a suitable account). Note that if you are an ASTrace user you may not want to use this, as the traces ASTrace creates do not have distinctive names which you can add to the $ExcludeTraces array, so the script would just clean them up as well.

[Reflection.Assembly]::LoadWithPartialName(
    "Microsoft.AnalysisServices") > $null

# servers to clean up, and traces which should be left running
$SSASServers = @("myserver1", "myserver2")
$ExcludeTraces = @("FlightRecorder", "Important Trace")

foreach($srv in $SSASServers) {
    $conn = New-Object Microsoft.AnalysisServices.Server
    $conn.Connect($srv)

    # all traces on the instance except the ones we want to keep
    $traces = @($conn.Traces |
        where {!($ExcludeTraces -contains $_.Name)})
    $cnt = $traces.Count-1

    # drop them from the end of the collection backwards
    while($cnt -ge 0) {
        $traces[$cnt].Drop()
        $cnt--
    }

    $conn.Disconnect()
}

Thanks to Darren Gosbell for his assistance with all things PowerShell :)

SSAS

 

Is Excel 2013 Power View hurting SSAS?

July 21st, 2012

Since the release of the Office 2013 Preview, there has been an awful lot of commentary in the BI blogosphere and the wider media, mostly praising the new look and features Microsoft are giving us. I agree that the suite is getting a much-needed face-lift (after all, 2007 -> 2010 did not deliver much in this regard), along with the “touch-enabled” interface that seems mandatory for post-2010 releases. As a BI guy, the most interesting thing for me in the new Office is Excel, and within Excel, Power View (not completely ignoring the new Pivot Table options like adding new calcs natively, without using the OLAP PivotTable Extensions plug-in).

There are deficiencies in Power View as it stands now (maybe that is why it is an add-in, not enabled by default), but it is a nice data exploration tool which even Stephen Few told me was going in the right direction. It still doesn’t allow us to do things like Top 10, % of Total, etc., which I love in Excel Pivot Tables, but I am hoping that we’ll get all these and more at some point. Together with maps, small multiples and highlighting, Power View does seem like a potential killer application, putting other tools like Tableau within reach.

However (here we go), in my opinion it has one drawback which really spoils the good things: it doesn’t work on SSAS Multidimensional cubes. I am sure that if someone from the Microsoft SSAS or SSRS teams reads this they won’t be surprised. After all, they have heard this numerous times since the release of Power View (even when it was Crescent). Chris Webb wrote about it and caused some stir; it was discussed and the issue was parked as “will be fixed at some point”. I don’t want to resurrect it in its previous form since we now know that a fix is coming (as publicly stated by Microsoft at PASS and other forums), but I would like to point out that last week, after demoing Excel 2013 in front of some decision-makers in the organisation I currently work at, I had to answer questions like “So, we can’t use Excel with SSAS at all?” and “Do we have to upgrade all our cubes to in-memory models now?”. This made me think: we can’t cover the whole feature set of SSAS MD in SSAS Tabular, yet we can’t use the new end-user tools on SSAS MD either. Basically, I am left with only one option – to recommend SSAS MD and Excel Pivot Tables as the only corporate BI strategy competitive with other vendors like IBM and SAP. Furthermore, since we can’t use DirectQuery on Teradata, I also can’t say that SSAS Tabular is better than SSAS MD as an analytical platform on top of Teradata. Yes, one day when we get it working it will trump all other options, and the Teradata guys I work with are genuinely excited about the future, but it is unfortunately the future, while other options are here now. And the future in IT always looks bright, while the present is a different story altogether.

So, is Excel 2013 Power View hurting SSAS MD? Maybe not directly, but the more we promote Power View (hint: as it’s getting deployed on every workstation, we hardly need to promote it), the more we also highlight the deficiencies in SSAS Tabular and make SSAS MD look like the neglected sibling. As for my demonstrations of Microsoft BI, I would probably treat the two components of SSAS – MD and Tabular – as two separate stacks: MD, the best out there for corporate BI, and Tabular, the best for team/personal BI, not to be mixed together. To those who can’t wait to get their hands on a powerful, modern analytical tool working on the iPad and use it against a powerful, robust, widely used analytical engine which works on less than terabytes of memory (let me spell it out: Power View + mobile on SSAS MD), we have to say: “at some point in the future, hopefully”.

PS: Yes, it does sound like yet another whiny post, but we have been waiting for a very long time (in IT terms) since the inception of Crescent to get this bloody issue fixed, and it is hard to imagine mighty Microsoft struggling to catch up with Cognos (yes, they have mobile – a terrible implementation, but it works), Tableau (last time I checked they were a tiny company with overpriced products) and QlikView (ugly and slow from what I’ve seen). It’s not like Microsoft weren’t aware of this, and it’s not like it’s happening for the first time – remember how well PerformancePoint did, being unable to connect to anything but SSAS MD and having a lacking feature set; plus, SSRS to SSAS integration has always been a pain point. The new, quicker release cycle at Microsoft seems to be delivering the same amount of features in less-complete releases, which doesn’t help with inspiring confidence in its vision and ability to execute. However great these new ingredients are, the dish doesn’t taste that good when some are missing… and it tends to get cold quickly too.

PowerPivot, SSAS, Visualisation

 

Counting Number of Queries Executed in SSAS

February 23rd, 2012

When monitoring SSAS we can use a few tools to gain insight into what is happening on the instance. Perfmon is especially interesting as it shows us information about various aspects of the server – we can see memory, disk, CPU and MDX counters out of the box. However, perfmon cannot show us how many queries were run on SSAS – neither per period (e.g. per second), nor in total since the service started. There is a counter which looks similar to a count of queries – Storage Engine Query : Total Queries Answered. This is not the same as the number of MDX queries answered: if a query is answered from the Formula Engine (FE) cache and never hits the Storage Engine (SE), it won’t increment the counter; and if a query needs more than one Storage Engine request to be answered, the counter gets incremented by more than 1.
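As an aside, the SE counter itself is easy to read programmatically. A sketch only – the performance object name depends on the SSAS version and instance (something like MSAS11:Storage Engine Query for a default 2012 instance, or MSOLAP$InstanceName:Storage Engine Query for a named one), so check the exact name in perfmon first:

# Sketch: read the Total queries answered counter from a remote server.
# The server name and the counter object name are assumptions - adjust to your environment.
Get-Counter -Counter "\MSAS11:Storage Engine Query\Total queries answered" -ComputerName "myserver1"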

There are alternatives. We can try using the Query Log server settings and save queries to a database table automatically:

Unfortunately (for us in this case), since the Query Log is typically used for Usage Based Optimisations (UBO), it also shows SE requests, not the actual MDX queries issued. Hence, it suffers from the same drawback as the perfmon Total Queries Answered counter.

The other alternative which comes to mind is SQL Server Profiler. After all, it allows us to see what queries get sent to the server, along with lots of other useful information. If we are interested in the count of queries only, we can capture a trace which includes only the Query End event. Why not Query Begin? Well, before a query has been executed we can’t see some possibly interesting numbers, for example its Duration. If we capture only Query End we get exactly what we need – one event per query. We can also capture Error events to check if things go wrong.

A typical Query End-only trace looks like this (all columns included):

As we can see above, we get the actual query in the TextData column, the User Name, Application Name, Duration, Database Name – all very useful properties if we want to analyse the load and the performance of our server and various solutions. While the information looks great on screen, having it in a database table would enable us to do what we (BI devs) like doing – play with the data. Only this time it’s our data, about our toys, so it (at least for me) is twice as fun.

An SSAS trace can be run on the server without Profiler – the term for it is a “server-side trace”. Unlike SQL Server RDBMS traces, an SSAS server-side trace cannot log its output directly to a SQL Server table. Luckily, we can write the results to a “.trc” file, which in turn can be imported into a SQL Server table with a few lines of code.

To create a server-side trace we can first export the trace definition to an XMLA file from profiler:

The XMLA file we get looks like this (all scripts are attached to the end of this post):

If we remove the first line (?xml version…), we can run this and it will create a trace on the server. This trace is useless to us because it does not write its output anywhere. To fix this we can add a few settings like this:

Note that I have changed the Name and ID of the trace to be more user-friendly, and I have removed the Filters section, as well as the first line. Now the script can be used to create a trace. To delete the trace we can use:

And to list all running traces on an SSAS instance:
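The XMLA scripts for all of the above are attached at the end of the post. As an alternative, here is a rough sketch (the server name is made up) of listing the running traces from PowerShell through AMO:

# Sketch: list all traces currently defined on an SSAS instance via AMO
[Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") > $null

$conn = New-Object Microsoft.AnalysisServices.Server
$conn.Connect("myserver1")

# show each trace with its ID, name and log file (if it writes to one)
$conn.Traces | Select-Object ID, Name, LogFileName

$conn.Disconnect()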

Note that by default SSAS is configured to run Flight Recorder, which uses a trace to capture events happening on the server (thanks to Dan English for suggesting this clarification). That trace will appear with an ID and Name of FlightRecorder. You should not tamper with it manually. If you want to stop it, which is a performance recommendation from the SSAS Performance Guide, you should disable Flight Recorder from the SSAS server properties:

Now, armed with all these scripts, we can effectively manage traces on our instances. The only problem we have not resolved so far is exporting the .trc file created by our create-trace XMLA command to a database table. This can easily be done in a .NET app, or in PowerShell. The ps_trace_copy_to_table.ps1 PowerShell script does the following (thanks to Darren Gosbell for creating the initial version):

  1. Copy the trc file with another name for further processing
  2. Delete the trace from the server
  3. Delete the trace file
  4. Start the trace on the server again
  5. Export the copied trc file into a SQL Server table
  6. Run a stored procedure moving the data from the SQL Server table to another SQL Server table which contains all trace data

We delete and then re-create the trace because otherwise we cannot delete the trace file. The last step runs a stored procedure which moves the data from the table we imported into to another table, because the first (landing) table gets re-created on every import and the trace data cannot persist in it.
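Step 5 itself only takes a few lines. Here is a rough sketch of how the import could look – the server, database, table, file and stored procedure names are all made up for illustration, and it assumes the copied SSAS .trc file can be read with fn_trace_gettable on the SQL Server side (the attached script is the authoritative version):

# Sketch of steps 5 and 6: import the copied .trc file into a landing table and move the rows on.
# All names below are illustrative; dbo.usp_MoveTraceData stands in for your own "move" stored procedure.
$copiedTrc = "C:\Traces\QueryTrace_copy.trc"
$connStr   = "Server=MYSQLSERVER;Database=SSASMonitoring;Integrated Security=SSPI"

$sql = @"
IF OBJECT_ID('dbo.TraceLanding') IS NOT NULL DROP TABLE dbo.TraceLanding;
SELECT * INTO dbo.TraceLanding FROM fn_trace_gettable(N'$copiedTrc', DEFAULT);
EXEC dbo.usp_MoveTraceData;
"@

$sqlConn = New-Object System.Data.SqlClient.SqlConnection $connStr
$sqlConn.Open()
$cmd = $sqlConn.CreateCommand()
$cmd.CommandText = $sql
$cmd.ExecuteNonQuery() > $null
$sqlConn.Close()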

Note that there is some exception handling implemented in ps_trace_copy_to_table.ps1 (try/catch/finally blocks). If you are using an old version of PowerShell (prior to 2.0), you may have to remove the exception handling statements as they are not supported there. I have attached a sample script which excludes them (ps_trace_copy_to_table_no_exception_handling.ps1).

Finally, we can run this script through SQL Server Agent at regular intervals. SQL Server Agent allows direct PowerShell script execution, so this is very easy to set up and can be scheduled to run every X minutes/hours/days.
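If the Agent’s own PowerShell subsystem (sqlps) trips over the script’s syntax, an Operating System (CmdExec) step calling powershell.exe directly works just as well – a sketch, with an illustrative path:

powershell "C:\Scripts\ps_trace_copy_to_table.ps1"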

A small note: you may notice that some rows in the database table contain events with an Event Class <> 10 (10 is Query End) and NULLs in most columns. These are trace-related events and can be ignored (e.g. your stored procedure moving the data from one table to another can skip such rows).

There is also another way to achieve the same outcome. As a part of the Microsoft SQL Server samples you can download a utility called ASTrace. You will have to download and compile it, but after that you can run it as a Windows service and it will capture the trace events as they happen and write them directly to a SQL table. The flow changes from Server -> File -> Table to simply Server -> Table, and you don’t need to move the data from one table to another. The drawback is that you have an additional service on your server. You may or may not be able to run it (depending on your organisation’s security policies and admin practices), and while it is open source, it is not officially supported by Microsoft.

Either way, collecting trace data allows you to create reports like the following, which can be a really nice way to track what’s happening on your servers:

Scripts:

SSAS