
Posts Tagged ‘T-SQL’

Source Control for Database Development with SSDT

December 6th, 2013

I had never had the chance to use source control for database development before I started working on my current project. I have used TFS and Subversion for SSAS, SSRS and SSIS projects in the past, but since it is very difficult to manually edit rdl, dtsx and SSAS-related files, TFS/Subversion took the role of a glorified storage location with some version control capabilities.

With SSDT we can now enjoy the full set of capabilities of a source control system. This is by far the most compelling reason for me to use SSDT. Sure, there are other very nice features which make SSDT a much better IDE than the plain old SQL Server Management Studio so many SQL Server developers use: my top three would be dependency/reference checking, better IntelliSense and schema compare (which I used to do with RedGate’s SQL Schema Compare in the past). But source control is not just about convenience – it makes you and your team more productive and reduces stress levels (yeah, it does!). T-SQL is much like application code – it is easy for a developer to modify, readable and mergeable. So, it is a prime candidate for being managed with a source control system.

SSDT is free. There are absolutely no drawbacks to using it in a team db development environment, and I would strongly encourage anyone who has not tried it to do so. The initial investment in learning how to use it will pay off handsomely in the longer term. Things like _old suffixes on tables and stored procedures become unnecessary; we no longer worry about deleting a piece of code and losing it forever, or about taking daily backups just so code is not lost – in fact, we stop worrying about losing code altogether. It will be there forever and we can track its progress (we can also see who breaks it). I’m sure most db developers are well aware of the benefits of source control, but looking back, my projects always went ahead without SSDT because the perceived complexity of using it outweighed the potential benefits.

It is not hard. All you do is create a db project and start adding code in the form of .sql CREATE scripts. And that’s about it. Your project is configured to deploy to a SQL Server instance. When you build the solution, the output is a dacpac. When you deploy it to an instance, the dacpac gets compared to what is already there and an incremental update is applied to the target environment. If you add a non-nullable column to a table which already has some data in it, you will get a deployment failure. But try doing that on a live db – you’ll get the same issue, right? And since you are in SSDT, if you change the name of a table you’ll get errors on all objects referencing it (like stored procedures) and your solution won’t build. Using SSDT therefore prevents you from breaking your solution’s consistency. You can also refactor code – a right-click on a function/stored proc reference allows you to change its name across the whole solution. All this plus source control. Still not convinced? Well, you may need to go through a few more db development projects (especially in a team environment), get pissed off with your teammates, lose some code and spend a few hours chasing code references to realise how important and empowering SSDT can be.
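To give a flavour of what lives in such a project, here is a minimal sketch of a CREATE script as it would sit in an SSDT db project – the table and column names are made up for illustration:

-- Customer.sql in the db project; SSDT compares this declared state
-- with the target instance and generates the incremental update.
CREATE TABLE dbo.Customer
(
    CustomerId   INT           NOT NULL,
    CustomerName NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY (CustomerId)
);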

If your whole team is not on the same page, you can use SSDT solo. Import the db into SSDT and off you go – now you can tell everyone how they break your project all the time with changes which are not properly propagated across the solution. You can also import any new changes applied to the live db through the handy schema compare (SQL toolbar item). Just compare the live db to your project and it will show you all new objects, as well as all changes to existing objects in your solution. Applying these will effectively synchronise your solution with the new stuff other developers apply to the live db.

But don’t just take my word for it – download SSDT for free from: http://msdn.microsoft.com/en-gb/data/hh297027 and give it a good go before you decide that you are too old for it!

Oh, and if you do think it’s too much to handle, or if you would like to get your SSDT techniques honed a little and you live in London, I’d recommend either coming to work on my current project, or letting your boss know how great SSDT is and how he/she can empower your team and splash a little on:

http://www.technitrain.com/coursedetail.php?c=28&trackingcode=CWB

T-SQL

 

Number of Weekdays Between Two Dates

August 4th, 2010

There used to be an old post here, written by me, describing some T-SQL code for finding the number of weekdays between two dates. It worked fine, so if you have implemented it you have not done anything wrong. However, Jeff Moden from SQL Server Central wrote a post a while ago about the same problem, and his implementation is a bit cleaner, so I would consider it better than mine. Here is the link:

http://www.sqlservercentral.com/articles/Advanced+Querying/calculatingworkdays/1660/

T-SQL

 

Using the DSV to its Full Potential

July 22nd, 2010

The Data Source View in Analysis Services is a very powerful abstraction of the data source, and it can help us handle some scenarios in an easy and clean way. Many times we look for MDX or programmatic solutions to problems which are best tackled in our data. While for complex tasks we would be better off extending the ETL process, some simple ones can and should be implemented in the DSV.

 

As an introduction to the topic, I would like to briefly explain what the DSV actually is. It can be conceptualised as a database view on top of the data source. By default, all tables which we need for building the Analysis Services database (typically dimensions and facts) appear in the DSV as table bindings (exactly as if we did a SELECT * FROM Table). If we have no foreign keys defined in our database, SSAS will not show us the relationships in the DSV. However, we can define logical relationships in the DSV, thus connecting the tables on related columns, which are then used for automatically determining dimension relationships to the measure groups.

 

There are two important ways to modify the DSV, which allow us to add more columns to the existing tables and to modify the way the existing columns are shown:

Named Queries


If we right-click on a table in the DSV, we can choose to replace the table with a Named Query. A Named Query is essentially a T-SQL statement, equivalent to a database view definition. By utilising Named Queries we can alter the way we see the tables and their columns in SSAS. For example, we could concatenate columns, implement CASE logic, etc.
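As a quick illustration, a Named Query could take the following shape – the table and column names here are hypothetical:

-- A Named Query replacing the DimProduct table binding; SSAS sees
-- the result set exactly as if it were the underlying table.
SELECT
    ProductKey,
    ProductCode + ' - ' + ProductName AS ProductLongName,
    CASE WHEN DiscontinuedFlag = 1 THEN 'Discontinued' ELSE 'Active' END AS ProductStatus
FROM dbo.DimProduct;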


Named Calculations


A Named Calculation is a SQL expression which adds a column to a table without modifying the table binding. It gives us an easy way to define a new column without changing the whole query. The expression defining the column is written in T-SQL and behaves the same way as a computed column in a Named Query (or a SELECT statement). If we just want to add one more column (e.g. Display Order, Code+Description concatenation, etc.), we can simply define a Named Calculation. Also, as the name suggests, Named Calculations are commonly used for defining a leaf-level calculation without replacing a large fact table’s binding with a Named Query.
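To sketch the idea, a Named Calculation is just the column expression, not a full SELECT – for a hypothetical DimAccount table with AccountCode and AccountDescription columns it could be:

-- Named Calculation 'AccountLongDescription' added to DimAccount:
AccountCode + ' - ' + AccountDescription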


The column we define here appears in both the DSV table and in the Dimension Designer window.

These two DSV functions can be used in many scenarios. Most importantly, there are a few where they yield better performance, faster development and easier maintenance:

Leaf-level calculations

If we have the common requirement to perform leaf-level calculations and then aggregate the result up the hierarchy, as opposed to aggregating and then calculating, the best way to do this is in a SQL statement on the fact table. Alternatively, we can do this in an MDX statement:

 SUM(DESCENDANTS(Dim.CurrentMember,,LEAVES), MeasureCalc)

However, it comes at a price. Since SSAS would have to do the calculation for each leaf and then sum the results up the hierarchy, this could take a long time to perform. Also, SSAS would not be able to use pre-processed aggregations, and the calculation would be done at execution time. To avoid this we can add a new column to the fact table and do the calculation there (in SQL), using the column as a new measure in the cube, which SSAS can then aggregate like any other measure. The performance gain is usually substantial, and using a Named Query or a Named Calculation should always be the preferred option.
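For instance, the leaf-level calculation could be moved into the fact table’s Named Query along these lines (FactSales and its columns are purely illustrative):

-- The calculation happens once per row in SQL; SalesAmount then becomes
-- an ordinary measure which SSAS can aggregate and pre-aggregate.
SELECT
    DateKey,
    ProductKey,
    Quantity * UnitPrice AS SalesAmount
FROM dbo.FactSales;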

Description Attributes

Often we need to concatenate different dimension attributes into a Description attribute, which we can use while slicing the cube or when providing reports from the SSAS database. A very easy way to achieve such a requirement is to concatenate the columns we need into a new column in the dimension table in the DSV, which we can then expose as a new attribute of the dimension. A task such as concatenating an Account Code and an Account Description into an Account Long Description (i.e. [Account Code] + ‘-‘ + [Account Description]) becomes very easy to implement within the DSV without modifying the ETL or any tables.

Composite Keys

Sometimes we need to build unique keys for attribute columns in a dimension. A good example is a Date dimension which does not have unique keys for non-leaf levels such as Month. Often developers have a Month key of 1, 2, 3 … 12. This does not make a good Month key in SSAS, as it is not unique across higher levels such as Year, Quarter, etc. There are a number of ways to tackle this common scenario. While the recommended approach would be to build a Year-Quarter-Month concatenation as the Month key in the dimension table, we can also achieve this by selecting all of the relevant columns as key columns for the attribute in the dimension attribute properties. However, that gives us a composite key in MDX, which can sometimes be undesirable. A simpler and cleaner solution is to concatenate the relevant columns in the DSV by using a Named Query. Instead of the typical

SELECT col1, col2, …, MonthKey, colx, coly, colz FROM DimDate

we can write

SELECT col1, col2, …, YearKey+QuarterKey+MonthKey AS MonthKey, colx, coly, colz FROM DimDate

This way we can use the MonthKey column directly as a key for our Month attribute.

While this is useful for a Date dimension, it can also be useful for any other composite key definition in our dimensions.

Other possible applications of DSV Named Queries and Named Calculations include:

  • A Sort Order attribute, in cases when we need a custom sort of the dimension attributes
  • Restricting the data which comes into the cube dynamically, based on a certain condition (think of a Date dimension which includes only relevant periods)
  • Combining tables with a SQL join
  • Replacing 0s with NULLs for our measures (the opposite can be done automatically in SSAS) – see the sketch after this list
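A minimal sketch combining two of these ideas in one hypothetical Named Query (the fact table and its columns are made up):

-- Zeros become NULLs so they behave like missing values in the cube,
-- and the WHERE clause lets only relevant periods in.
SELECT
    DateKey,
    AccountKey,
    NULLIF(Amount, 0) AS Amount
FROM dbo.FactTransactions
WHERE DateKey >= 20080101;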

Basically, in a DSV we can “correct” our data to make it suitable for our cube without changing the ETL.

Last but not least, we can also transform tables to conform to a star-schema-like design. If we want to show a proof of concept on top of a normalised OLTP database, we can avoid the ETL complexities, as well as building a data mart, and use SQL to join/split tables into dimension and fact tables suitable for cube development. While this could also work in post-POC scenarios, it is better to take a cautious approach, as there are many cases where it would either not work or would be too slow.
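For example, a Named Query could flatten two hypothetical OLTP tables into a single dimension-shaped result:

-- Joining normalised Customer and CustomerCategory tables into one
-- denormalised, dimension-like row set for a quick POC cube.
SELECT
    c.CustomerId AS CustomerKey,
    c.CustomerName,
    cc.CategoryName AS CustomerCategory
FROM dbo.Customer AS c
JOIN dbo.CustomerCategory AS cc
    ON cc.CategoryId = c.CategoryId;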

And a word of warning – your DSV could become slow through over-use of complex Named Queries. This can be painful when minimising cube processing time is crucial, or when the DSV starts timing out and queries take hours to execute. Luckily, in most cases we can simply move these large queries upstream – to the ETL, where we have more time and better tools (e.g. SSIS).

SSAS

 

Passing database names to SSIS stored procedures

June 29th, 2010

In the rare cases when we use dynamic SQL and need a database name in our code, we are better off avoiding hard-coding it. Unfortunately, I could not find an easy way to access a connection manager’s database name, and on my current project the catalog name is not in the SSIS configurations XML file. Therefore, I had to resort to a little trick to pull the database name out and pass it to a stored procedure. In brief, we can do the following:

1. Create a user variable database_name.

2. Create an Execute SQL Task using the connection manager we want to get the database name from, which executes:

 SELECT db_name() AS database_name

3. Map the Single Row result set to our database_name variable.

4. Place the task created in step 2 before any components which will use the variable.

5. Pass the variable to our dynamic SQL stored procedure.
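On the T-SQL side, the receiving stored procedure might look something like this sketch – the procedure and table names are purely illustrative:

-- The SSIS variable is mapped to @database_name when the package calls
-- the procedure; QUOTENAME guards the dynamically built identifier.
CREATE PROCEDURE dbo.usp_CountRows
    @database_name SYSNAME
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT COUNT(*) FROM ' + QUOTENAME(@database_name) + N'.dbo.SomeTable;';
    EXEC sys.sp_executesql @sql;
END;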


There we go – the database name now flows from the SSIS package configurations to the stored procedure – a bit better than just hard-coding it.

SSIS

 

SQL Server DBMS Top 1 Wish List

December 8th, 2009

As an addition to Teo Lachev’s Top 10 Wishlists (SSAS and SSRS), I would like to contribute only 1 item to a possible SQL Server DBMS wishlist:

1. Source Control.

Not SourceSafe-style source control, but an automated version out of the box, one which does not rely on developers checking code in and out. It should track changes to the code as they are made, and a full version history should be available directly in the DBMS. It should not be too hard – after all, there is a nice database available, which can store code with its version numbers just like anything else.

This would make a lot of developers’ lives a bit less frustrating.

OK, a SQL code “beautifier” would also be nice, but it is not all that important…

Vote on Connect

T-SQL