
MS SQL Server Technology Discussions



[b] SSAS: Clearing the MDX Script for a session in SSMS[/b]

Sometimes when troubleshooting performance issues you may want to comment out the MDX Script in your cube in order to quickly isolate whether the MDX Script is a significant contributor to the issue. If you can reproduce the issue in your dev environment you can open up your project in BIDS / SSDT, comment out the script and re-deploy. But what happens if you can't reproduce the issue against dev? You may not have the same server specs or the same data volumes, or you may even have design changes in dev that have not yet been deployed that would cloud the issue.

Fortunately there is a command that effectively clears the MDX script for your current session only. What I usually do is run the following in an MDX query window in SSMS:

[CODE]
CLEAR CALCULATIONS;
GO
CALCULATE;
[/CODE]
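CLEAR CALCULATIONS wipes the session's MDX script, and CALCULATE restores the default aggregation of leaf-level data so the measures still return values. As a minimal sketch of how this might be used (the cube and measure names below are hypothetical):

[CODE]
-- Run in an MDX query window in SSMS, connected to the cube:
CLEAR CALCULATIONS;
GO
CALCULATE;
GO
-- Then, in the same session, re-run the slow query and compare timings, e.g.:
SELECT [Measures].[Sales Amount] ON 0
FROM [MyCube];
[/CODE]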


[b] Semi Additive Measures using SQL Server Standard[/b]


One of the most frustrating limitations of SQL Server 2005 Standard edition is that it doesn't support semi additive measures in SQL Server Analysis Services (SSAS) cubes. This post explains a workaround that provides similar functionality without having to shell out for the Enterprise Edition.

[b]What Are Semi Additive Measures?[/b]
Semi Additive measures are values that you can summarise across any related dimension except time.

For example, Sales and costs are fully additive; if you sell 100 yesterday and 50 today then you’ve sold 150 in total. You can add them up over time.

Stock levels however are semi additive; if you had 100 in stock yesterday and 50 in stock today, your total stock is 50, not 150. It doesn't make sense to add up the measure over time; you need to find the most recent value.

[b]Why are they important?[/b]
Whether they are important to you or not depends entirely on what you are trying to do with your cube. If all of your required measures are fully additive then you really don't need to worry about anything. However, as soon as you want to include measures such as stock levels, salaries, share prices or test results, they become pretty much essential.

[b]Why are they not available in SQL Standard edition?[/b]
Microsoft has to have some way of persuading us to pay for the Enterprise edition!

[b]How can I get this functionality within SQL Standard?[/b]
Firstly we need to understand what semi additive measures do. By far the most common aggregation used is the LastNonEmpty function, so we'll stick with that as an example. This basically says that whatever time frame you are looking at, find the most recent value for each tuple. This really is a fantastically powerful function, which only really becomes apparent when you don't have it!

Let's say that you perform a stock take of different products on different days of the week. You will have a stock entry for product A on a Thursday and product B on a Friday. The LastNonEmpty function takes care of this for you: if you look at the stock level on Saturday it will give you the correct values for both A and B, even though you didn't perform a physical stock take on the Saturday.

If you then add the time dimension into the query, SSAS will perform this function for each and every time attribute shown, and then aggregate the results up to any other dimensions used. i.e. Each month will then display the sum of all LastNonEmpty values for all products within that month, essentially the closing stock level for each and every month.

To replicate this in Standard Edition, we need to split the work up into two stages.
1) Create daily values in the data warehouse
2) Use MDX to select a single value from the time dimension.

Think of this as splitting up the LastNonEmpty function into two, ‘Last’ and ‘Non Empty’. The ‘Non Empty’ bit essentially fills in the blanks for us. If a value doesn’t exist for that particular day, it looks at the previous day’s value. The ‘Last’ bit says that if we are looking at months in our query, find the value for the last day in that month. The same goes for years, or indeed any other time attribute.

Coding up a full LastNonEmpty function ourselves in MDX would be too slow to query as soon as you get a cube of any reasonable size. One of the key benefits of a cube is speed of querying data and we don't want to impact this too much, therefore we move some of the donkey work into the ETL process populating the data warehouse. This leaves the cube to perform a simple enough calculation so as to not cause any problems.

[b]1) The ‘Non Empty’ bit[/b]

Let's say that we have a table called tblStock, containing the following data:

[center][img]http://www.purplefrogsystems.com/images/blog/tblStock.png[/img][/center]

We need to expand this into a new fact table that contains one record per day per product.

[center][img]http://www.purplefrogsystems.com/images/blog/FactStock.png[/img][/center]
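The table structures are only shown as screenshots above, so here is a minimal sketch of what tblStock and the new FactStock might look like; the column names and types are assumptions inferred from the queries that follow:

[CODE]
-- Hedged sketch: columns inferred from the INSERT statement below, not from the original DDL
CREATE TABLE tblStock
(ProductID INT NOT NULL,
StockTakeDate DATETIME NOT NULL,
StockLevel INT NOT NULL)
GO
CREATE TABLE FactStock
(StockTakeDate DATETIME NOT NULL,
ProductID INT NOT NULL,
StockLevel INT NOT NULL)
GO
[/CODE]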

There are a number of ways of doing this, I’ll describe one here that should suit most situations, although you may need to customise it to your own situation, and limit it to only updating changed/new records rather than re-populating the entire table, but you get the idea. I should point out that you would be much better off populating this as part of your ETL process, but I’m showing this method as it’s more generic.
You need a list of all available dates relevant to your data warehouse or cube. If you already have a time dimension table then use that; otherwise create a SQL function that returns a list of dates, such as this one:
[CODE]
-- Table-valued function returning one row per day between @DateFrom and @DateTo
CREATE FUNCTION [dbo].[FN_ReturnAllDates](
@DateFrom DateTime, @DateTo DateTime)
RETURNS @List TABLE (Date DateTime)
BEGIN
DECLARE @tmpDate DateTime
SET @tmpDate = @DateFrom
WHILE @tmpDate<=@DateTo
BEGIN
-- Strip the time portion so each row represents a whole day
INSERT INTO @List
SELECT Convert(datetime,
Convert(Nvarchar,@tmpDate, 102), 102)
SET @tmpDate = Dateadd(d,1,@tmpDate)
END
RETURN
END
[/CODE]
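As a quick sanity check, the function can be called like this (the dates are just illustrative):

[CODE]
-- Returns one row per day from 1 Jan 2012 to 7 Jan 2012 inclusive
SELECT [Date]
FROM dbo.FN_ReturnAllDates('20120101', '20120107')
GO
[/CODE]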
We need to perform a full outer join between the date dimension and any other relevant dimensions, in this case product. This will generate one record per product per date. We can then perform a sub query for each combination to find the stock level appropriate for that day. (Yes, this will be a slow query to run – I did say you should do it in your ETL process!)
[CODE]
-- One row per product per day; the correlated sub-query picks the most recent
-- stock take on or before that day (0 if none exists yet)
INSERT INTO FactStock
(StockTakeDate, ProductID, StockLevel)
SELECT D.Date, P.ProductID,
ISNULL((SELECT TOP 1 StockLevel
FROM tblStock
WHERE ProductID = P.ProductID
AND StockTakeDate<=D.Date
ORDER BY StockTakeDate DESC),0)
FROM FN_ReturnAllDates((SELECT Min(StockTakeDate)
FROM tblStock),GetDate()) D
FULL OUTER JOIN
(SELECT ProductID FROM tblProduct) P ON 1=1
[/CODE]
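As mentioned above, in a real ETL process you would normally only top up the missing days rather than rebuild the whole table. A hedged sketch of that incremental variant (same table names as above; a CROSS JOIN is used here, which is equivalent to the FULL OUTER JOIN ... ON 1=1 above when both sides return rows):

[CODE]
-- Only generate days that are not yet present in FactStock
DECLARE @From DATETIME, @To DATETIME
SELECT @From = ISNULL(DATEADD(d, 1, MAX(StockTakeDate)),
                      (SELECT MIN(StockTakeDate) FROM tblStock))
FROM FactStock
SET @To = GETDATE()

INSERT INTO FactStock
(StockTakeDate, ProductID, StockLevel)
SELECT D.Date, P.ProductID,
ISNULL((SELECT TOP 1 StockLevel
FROM tblStock
WHERE ProductID = P.ProductID
AND StockTakeDate <= D.Date
ORDER BY StockTakeDate DESC), 0)
FROM dbo.FN_ReturnAllDates(@From, @To) D
CROSS JOIN (SELECT ProductID FROM tblProduct) P
[/CODE]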
[b]2) The ‘Last’ bit[/b]

Now that we have a large fact table consisting of one record per product/date, we can load this into the cube.

If you just add the StockLevel field as a measure and browse the results, you'll quickly see that when you view it by month, each day's stock level is added together, giving you a nonsensical value. To fix this we need to tell Analysis Services to show only one day's value.

To do this we first need to find all descendants of the current time member at the day level, using something like this:
[CODE]
DESCENDANTS([Time].[Year Month Day].CurrentMember,
[Time].[Year Month Day].[Day])
--Please modify to suit your own date hierarchy!
[/CODE]
We can then find the last member (giving us the closing stock level) by using TAIL():
[CODE]
TAIL(DESCENDANTS([Time].[Year Month Day].CurrentMember,
[Time].[Year Month Day].[Day]))
[/CODE]
You could also use HEAD() if you wanted to find the opening stock instead of the closing stock.

You should hide the actual StockLevel measure to prevent users from selecting it. I usually prefix these with an underscore as well as making them invisible, just for clarity. You can then add a calculated member with the following MDX:
[CODE]
CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Close]
AS SUM(TAIL(DESCENDANTS([Time].[Year Month Day].currentmember,
[Time].[Year Month Day].[Day])),
[Measures].[_Stock Level]),
FORMAT_STRING = "#,#",
VISIBLE = 1 ;
[/CODE]
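Following the HEAD() idea mentioned above, an opening-stock equivalent might look like this (my sketch, not code from the original post):

[CODE]
CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Open]
AS SUM(HEAD(DESCENDANTS([Time].[Year Month Day].currentmember,
[Time].[Year Month Day].[Day])),
[Measures].[_Stock Level]),
FORMAT_STRING = "#,#",
VISIBLE = 1 ;
[/CODE]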
Or you can calculate the average stock over the selected period:
[CODE]
CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Avg]
AS AVG(DESCENDANTS([Time].[Year Month Day].currentmember,
[Time].[Year Month Day].[Day]),
[Measures].[_Stock Level]),
FORMAT_STRING = "#,#",
VISIBLE = 1 ;
[/CODE]
Or the maximum value:
[CODE]
CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Max]
AS MAX(DESCENDANTS([Time].[Year Month Day].currentmember,
[Time].[Year Month Day].[Day]),
[Measures].[_Stock Level]),
FORMAT_STRING = "#,#",
VISIBLE = 1 ;
[/CODE]
Or the minimum value:
[CODE]
CREATE MEMBER CURRENTCUBE.[MEASURES].[Stock Level Min]
AS MIN(DESCENDANTS([Time].[Year Month Day].currentmember,
[Time].[Year Month Day].[Day]),
[Measures].[_Stock Level]),
FORMAT_STRING = "#,#",
VISIBLE = 1 ;
[/CODE]
And there you have it, [b]semi additive measures in SQL Server 2005 Standard Edition![/b]
Even though this method works well, it is still not as good as having the Enterprise edition. The built-in functions of Enterprise will perform significantly better than this method, and they save you from having to create the large (potentially huge) fact table. This process will also only work on a single date hierarchy; if you have multiple hierarchies (e.g. fiscal and calendar) you will need to enhance it somewhat.


Semi additive measures are measures that have a different way of aggregation over time (or other specific dimensions). For example, with stock values you want to sum them up by product, location, country, etc., but not by time (there you may want to see the last value, the average, the minimum/maximum or whatever).

For SQL Server SSAS (multidimensional), semi additive measures are a feature that is only available in the Enterprise Edition of the server. The following post shows how to implement such calculations on the Standard Edition. However, the built-in implementation in the Enterprise Edition performs better than the custom calculations shown here.

This is how the sample data looks when using the measure StockLevel:

[url="http://lh4.ggpht.com/-tNRxgdUJuNk/UB5fS31WSFI/AAAAAAAADNo/r8FjWs0C3DU/s1600-h/t13.png"][img]http://lh4.ggpht.com/-SfDWH9orwTs/UB5fTRCgL2I/AAAAAAAADNw/vrK0JQyyPDQ/t1_thumb1.png?imgmax=800[/img][/url]

The time dimension is at day granularity. The stock values are associated with the last day of the month (in order to properly show the effect of the ‘Last Value’ semi additive function).

Since we use Sum as the aggregation function here, the grand total for each row is correct, while the sub totals and the grand total for the columns make no sense.

This is a job for semi additive measures! In order to see different effects, I created three of them named[list]
[*]EP Average
[*]EP LastValue
[*]EP LastNonEmpty
[/list]
The prefix EP stands for Enterprise Edition. For example, this is the definition of “EP LastNonEmpty”:

[url="http://lh3.ggpht.com/-yOomylKWhBQ/UB5fTutytYI/AAAAAAAADN4/T9sNr8TkKTI/s1600-h/t24.png"][img]http://lh3.ggpht.com/-iv9nU9Fr4oc/UB5fUJpk5LI/AAAAAAAADOA/MVWC2FKh2nc/t2_thumb2.png?imgmax=800[/img][/url]

The following screen shot shows all of those measures side by side (for all products):

[url="http://lh5.ggpht.com/-S8OkbiNG5EY/UB5fUt3AyTI/AAAAAAAADOI/OO78Qs43O_k/s1600-h/t33.png"][img]http://lh4.ggpht.com/-7_Ec2CcoHIQ/UB5fU6jKXSI/AAAAAAAADOQ/ahd5adV4N70/t3_thumb1.png?imgmax=800[/img][/url]

The difference between ‘last value’ and ‘last non empty’ is about the total. Last non empty really takes the last non empty value [i]within the current context[/i] – not within the full date dimension. So for the measure “EP LastNonEmpty” we see a total for 2011 but no total for April (as there is no value for April at all). The measure “EP LastValue” on the other hand gives no total for 2011 as there is no value for December 31, 2011.

Now let’s try to rebuild those measures.

[b][u]Average over time (version 1)[/u][/b]
First we start with the average over time. If you have (or can create) a distinct count measure for the date key of the fact table it is quite easy to compute the average over time. Let’s assume our distinct count measure is named [Date Key Distinct Count], then the calculation for the average looks like this:

[CODE]
CREATE MEMBER CURRENTCUBE.[Measures].[STD Average]
AS Measures.StockLevel / [Measures].[Date Key Distinct Count];
[/CODE]

We can check the results by putting both measures side by side:

[url="http://lh6.ggpht.com/-WbBwjgdQBi4/UB5fVZYOkTI/AAAAAAAADOY/uLvblrQo3K0/s1600-h/t43.png"][img]http://lh3.ggpht.com/-DpZ1lW8O6dA/UB5fV0rp4iI/AAAAAAAADOg/ktD0Avofj_s/t4_thumb1.png?imgmax=800[/img][/url]

Of course, the calculation also works if you filter for a specific product.

[b][u]Average over time (version 2)[/u][/b]
I just wanted to present a different approach in case you do not want to create a distinct count measure (which requires a separate measure group). We still need to find out the number of days in a given date range, and we can use a dynamic set in order to do so:

[CODE]
CREATE DYNAMIC SET CURRENTCUBE.[CurrentDateRange]
AS [Date].[Day].[Day], DISPLAY_FOLDER = 'Calculations' ;

CREATE MEMBER CURRENTCUBE.[Measures].[STD Average 2]
AS [Measures].[Stocklevel]
/count(nonempty(existing CurrentDateRange, [Measures].[Stocklevel])) ;
[/CODE]

The set needs to be dynamic in order to reflect a sub cube context (for example when Microsoft Excel 2007 or later is used to query the cube). Also, we have to filter out the dates without values, as the average needs to count only the date rows that actually exist.

Comparing the three measures (Enterprise method and the two methods for the average) side by side shows that they are the same:

[url="http://lh4.ggpht.com/-loI3qyezEKc/UB5fWDSdyfI/AAAAAAAADOo/-EoajQ-iBjU/s1600-h/t55.png"][img]http://lh6.ggpht.com/-AceMYVj5TfE/UB5fWrwVeCI/AAAAAAAADOw/rkJ9xlREQzs/t5_thumb3.png?imgmax=800[/img][/url]

[b][u]Last value[/u][/b]
For the last value, I’m also using the approach with the dynamic set here. We use the same set, so the calculation is simply:

[CODE]
CREATE MEMBER CURRENTCUBE.[Measures].[STD LastValue]
AS ([Measures].[Stocklevel], tail(existing CurrentDateRange,1).item(0)) ;
[/CODE]

The expression tail (…, 1) gives the last item of the set. We evaluate our measure on this last item.
Before comparing the results with the enterprise measure, let’s quickly add the last one:

[b][u]Last Non Empty[/u][/b]
This formula is almost identical to the one I used before, except that we wrap the set in a nonempty(…) function. Again we take the last item (in this case this is the last item with data), so the function looks like this:

[CODE]
CREATE MEMBER CURRENTCUBE.[Measures].[STD LastNonEmpty]
AS ([Measures].[Stocklevel],
tail(nonempty(existing CurrentDateRange, [Measures].[Stocklevel]),1).item(0)) ;
[/CODE]

Here are our two last measures together with their corresponding Enterprise measures side by side:

[url="http://lh4.ggpht.com/-ItSBmuYcFpo/UB5fXAg9PPI/AAAAAAAADO4/SPbnIRKeGhA/s1600-h/t63.png"][img]http://lh4.ggpht.com/-uTZv2LzGJDA/UB5fXXv4BWI/AAAAAAAADPA/ObW7FJjxkyc/t6_thumb1.png?imgmax=800[/img][/url]

Of course you can also use the calculation above for other kinds of aggregates. For example, a minimum over time (sum over all other dimensions) would look like this:

[CODE]
CREATE MEMBER CURRENTCUBE.[Measures].[STD Min]
AS Min(existing CurrentDateRange, [Measures].[Stocklevel]);
[/CODE]
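By the same pattern, a maximum over time would look like this (my sketch, following the naming convention used above):

[CODE]
CREATE MEMBER CURRENTCUBE.[Measures].[STD Max]
AS Max(existing CurrentDateRange, [Measures].[Stocklevel]);
[/CODE]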

And you can also use this kind of calculation for dimensions other than the time dimension.
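As a rough sketch of that idea (the [Warehouse] dimension and its attribute hierarchy are hypothetical and not part of the sample model), you would define an analogous dynamic set over the leaf level of the other dimension:

[CODE]
CREATE DYNAMIC SET CURRENTCUBE.[CurrentWarehouseRange]
AS [Warehouse].[Warehouse].[Warehouse], DISPLAY_FOLDER = 'Calculations' ;

CREATE MEMBER CURRENTCUBE.[Measures].[STD Min By Warehouse]
AS Min(existing CurrentWarehouseRange, [Measures].[Stocklevel]);
[/CODE]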


[b] Semi additive measures in DAX / BISM Tabular[/b]

Semi additive measures, i.e. measures that have to be aggregated differently over different dimensions, are commonly used in BI solutions. One example could be stock levels. Of course we don’t want to sum them up over time, but only over product, location etc. For the time, a different aggregation is used, for example average or last value.

The following example shows how to implement some of the most commonly used semi additive measures in DAX.

In my example I’m using PowerPivot (Denali edition), but the same calculations can be used in a BISM Tabular model in Visual Studio.

In order to keep things simple, I’m using just a short table of test data:

[url="http://lh5.ggpht.com/-p6xSwMNQ6Lc/TkaVS4hnepI/AAAAAAAACx8/4gyZcI7xkyw/s1600-h/p18.png"][img]http://lh4.ggpht.com/-YpKkqqU90pU/TkaVUYg4m8I/AAAAAAAACyA/b_vxhgnvI9U/p1_thumb4.png?imgmax=800[/img][/url]
As you see, we only have two products with monthly stock levels in 2010 and 2011.

Although not needed for my semi additive measures, I created additional columns in my PowerPivot sheet for convenience: Year, Month, Day (using the corresponding DAX functions with the same names). I also set the newly created columns, as well as the Stocklevel column, to hidden (it makes no sense to sum up the stock level). Although the date information is kept in the same table as the data to keep things simple for this example, I encourage you to build a separate date dimension table (the same idea as a date dimension in a multidimensional model).

Finally, I created a hierarchy named ‘Calendar’ on my newly created date columns:

[url="http://lh3.ggpht.com/-z7cQdU_c8KY/TkaVVOhMgjI/AAAAAAAACyE/Fivz2yqvkqA/s1600-h/p113.png"][img]http://lh5.ggpht.com/-TcwRnTxy19U/TkaVWI7exeI/AAAAAAAACyI/SxAxVsxwuu8/p1_thumb7.png?imgmax=800[/img][/url]

Now we’re ready for the semi additive measures.

[b][u]Average (over time)[/u][/b]
Let’s start with an easy one, the average over time. Since we can easily compute the distinct count of our date values, we can simply add up the stock level and divide it by the distinct count. In my example the formula looks like this:

[CODE]
Avg Stock:=Sum([Stocklevel])/DISTINCTCOUNT([Date])
[/CODE]
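If you prefer not to rely on a distinct count, an alternative sketch uses the same SUMMARIZE pattern shown below for min/max (the "SumByDate" column name is just a label chosen for this example):

[CODE]
Avg Stock 2:=AVERAGEX(SUMMARIZE('Stock','Stock'[Date],"SumByDate",SUM('Stock'[Stocklevel])),[SumByDate])
[/CODE]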

[b][u]Last value (over time)[/u][/b]
In order to compute the last value, the DAX function LASTDATE comes in handy. Here is the formula:

[CODE]
Last Stock:=CALCULATE(SUM([Stocklevel]),LASTDATE('Stock'[Date]))
[/CODE]
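For completeness, an opening-value counterpart could use FIRSTDATE instead (a sketch, not from the original post):

[CODE]
First Stock:=CALCULATE(SUM([Stocklevel]),FIRSTDATE('Stock'[Date]))
[/CODE]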

[b][u]Min/Max value (over time)[/u][/b]
For min/max we have to be a little trickier. In the approach I'm showing here, I group the table by date using the SUMMARIZE function and the SUM aggregation. Then I use the MINX or MAXX function to find the minimal or maximal value.

Here are the two formulas:

[CODE]
Min Stock:=MINX(SUMMARIZE('Stock','Stock'[Date],"SumByDate",SUM('Stock'[Stocklevel])),[SumByDate])
Max Stock:=MAXX(SUMMARIZE('Stock','Stock'[Date],"SumByDate",SUM('Stock'[Stocklevel])),[SumByDate])
[/CODE]

To understand these formulas, you can see the effect pretty well after restoring the PowerPivot workbook to an SSAS server in tabular mode.

After doing so, we can create a query to show the result of the inner SUMMARIZE function using this DAX query:

[CODE]
evaluate(
SUMMARIZE('Stock','Stock'[Date],"SumByDate",SUM('Stock'[Stocklevel]))
)
[/CODE]

Here’s the result:

[url="http://lh4.ggpht.com/-fE4dCP_728M/TkaVYNShqSI/AAAAAAAACyM/_j9bHsI7F9I/s1600-h/t45.png"][img]http://lh6.ggpht.com/-B_54HmEr7d0/TkaVY9EUl1I/AAAAAAAACyQ/MUamjPyME3I/t4_thumb1.png?imgmax=800[/img][/url]

The MinX or MaxX function simply takes the lowest/highest value from this table.

Now let's see how this looks in Excel. The following screenshot shows the calculations in my PowerPivot sheet:

[url="http://lh5.ggpht.com/-JdAjgShwksk/TkaVaNJygKI/AAAAAAAACyU/CBFZYw8fUKs/s1600-h/t24.png"][img]http://lh6.ggpht.com/-sA4g8UUY49A/TkaVcQTmNdI/AAAAAAAACyY/0pY2ILmkTWo/t2_thumb2.png?imgmax=800[/img][/url]

Here’s the result in Excel

[url="http://lh4.ggpht.com/-mQ6QHHgNzZE/TkaVddCkahI/AAAAAAAACyc/rsBrteDp2GM/s1600-h/t18.png"][img]http://lh6.ggpht.com/-jlmJwG7LKAw/TkaVerIlU-I/AAAAAAAACyg/CBvvna6YKIA/t1_thumb4.png?imgmax=800[/img][/url]

And of course, the aggregations also work correctly when filtering the data as shown below (single select on product and multi select on months):

[url="http://lh5.ggpht.com/-4NOjBTMDdFY/TkaVfQPBpzI/AAAAAAAACyk/JdiydwprEkk/s1600-h/t32.png"][img]http://lh5.ggpht.com/-79Q9yZ6oX9c/TkaVge9N5iI/AAAAAAAACyo/lLE_2Zjnyzc/t3_thumb.png?imgmax=800[/img][/url]

Another cool feature is that besides DAX we can still use standard MDX to query our SSAS tabular model, for example:

[CODE]
select
{[Measures].[Avg Stock],[Measures].[Last Stock],
[Measures].[Min Stock],[Measures].[Max Stock]} on 0,
[Stock].[Calendar].[Year] on 1
from [Model]
[/CODE]

[url="http://lh5.ggpht.com/-VDL2M7dvEfo/TkaVhKJelhI/AAAAAAAACys/qcbaqajE82M/s1600-h/t54.png"][img]http://lh3.ggpht.com/-wJbFOzq8-ic/TkaVh0IFgzI/AAAAAAAACyw/pHcDMVXYYqs/t5_thumb2.png?imgmax=800[/img][/url]

A final word about empty (missing) rows: the above calculations need an explicit value of zero to represent "no stock" in a given month. If the value is left blank (no source data row at all), the month itself is treated as missing (the interpretation is more like we didn't carry this product in our portfolio at all).


[b] Primary Key and NonClustered Index in Simple Words[/b]


Here is a common misconception prevailing in the industry.

[i][b]Primary Key has to be Clustered Index. [/b][/i]

In reality the statement should be corrected as follows:

[i][b]Primary Key can be Clustered or Non-clustered, but it is a common best practice to create a Primary Key as a Clustered Index. [/b][/i]

Well, now that we have corrected the statement, let us understand it in a bit more detail. A Primary Key should uniquely identify a row in the table and must be NOT NULL. A good candidate for the clustered index key (most of the time) also uniquely identifies rows and is NOT NULL. That means it is usually a good idea to create the Primary Key as a clustered index, solving both problems together. Keeping these facts in mind, SQL Server automatically creates a Clustered Index on the Primary Key when the table is created. Developers often do not specify which column should have the clustered index, so by default the Primary Key becomes the Clustered Index. This practice is now extremely common, and lots of people have forgotten that the Primary Key and the Clustered Index are two different things. They can be the same column, but they do not have to be.
[size=3]Here are four scenarios we will walk through to learn the behavior of SQL Server with regard to the Primary Key and the Clustered Index.[/size][list]
[*][size=3]Scenario 1 : Primary Key will default to Clustered Index[/size]
[*][size=3]Scenario 2: Primary Key is defined as a Non-clustered Index[/size]
[*][size=3]Scenario 3: Primary Key defaults to Non-Clustered Index with another column defined as a Clustered Index[/size]
[*][size=3]Scenario 4: Primary Key defaults to Clustered Index with other index defaults to Non-clustered index[/size]
[/list]
Now let us see each of the scenarios in detail.
[b] Scenario 1 : Primary Key will default to Clustered Index[/b]

In this case we create only a Primary Key, and when we check the kind of index created on the table we will notice that SQL Server has automatically created a clustered index on it.

[CODE]
-- Case 1 Primary Key Defaults to Clustered Index
USE TempDB
GO
-- Create table
CREATE TABLE TestTable
(ID INT NOT NULL PRIMARY KEY,
Col1 INT NOT NULL)
GO
-- Check Indexes
SELECT OBJECT_NAME(OBJECT_ID) TableObject,
[name] IndexName,
[Type_Desc]
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TestTable'
GO
-- Clean up
DROP TABLE TestTable
GO
[/CODE]
[b] [img]http://www.pinaldave.com/bimg/clpk1.png[/img][/b]

[b] Scenario 2: Primary Key is defined as a Non-clustered Index[/b]

In this case we explicitly define the Primary Key as a non-clustered index, and SQL Server creates it as a non-clustered index. This proves that a Primary Key can be a non-clustered index.

[CODE]
-- Case 2 Primary Key Non-clustered Index
USE TempDB
GO
-- Create table
CREATE TABLE TestTable
(ID INT NOT NULL PRIMARY KEY NONCLUSTERED,
Col1 INT NOT NULL)
GO
-- Check Indexes
SELECT OBJECT_NAME(OBJECT_ID) TableObject,
[name] IndexName,
[Type_Desc]
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TestTable'
GO
-- Clean up
DROP TABLE TestTable
GO
[/CODE]
[b] [img]http://www.pinaldave.com/bimg/clpk2.png[/img][/b]


[b] Scenario 3: Primary Key defaults to Non-Clustered Index with another column defined as a Clustered Index[/b]

In this case we create the clustered index on another column; SQL Server will automatically create the Primary Key as a non-clustered index because the clustered index is specified elsewhere.

[CODE]
-- Case 3 Primary Key Defaults to Non-clustered Index
USE TempDB
GO
-- Create table
CREATE TABLE TestTable
(ID INT NOT NULL PRIMARY KEY,
Col1 INT NOT NULL UNIQUE CLUSTERED)
GO
-- Check Indexes
SELECT OBJECT_NAME(OBJECT_ID) TableObject,
[name] IndexName,
[Type_Desc]
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TestTable'
GO
-- Clean up
DROP TABLE TestTable
GO
[/CODE]
[b] [img]http://www.pinaldave.com/bimg/clpk3.png[/img][/b]


[b] Scenario 4: Primary Key defaults to Clustered Index with other index defaults to Non-clustered index[/b]

In this case we create two indexes on the table but do not specify the index type for either column. When we check the results, we will notice that the Primary Key automatically defaults to a Clustered Index and the other (unique) column to a Non-clustered index.

[CODE]
-- Case 4 Primary Key and Defaults
USE TempDB
GO
-- Create table
CREATE TABLE TestTable
(ID INT NOT NULL PRIMARY KEY,
Col1 INT NOT NULL UNIQUE)
GO
-- Check Indexes
SELECT OBJECT_NAME(OBJECT_ID) TableObject,
[name] IndexName,
[Type_Desc]
FROM sys.indexes
WHERE OBJECT_NAME(OBJECT_ID) = 'TestTable'
GO
-- Clean up
DROP TABLE TestTable
GO
[/CODE]

[img]http://www.pinaldave.com/bimg/clpk4.png[/img]

I think the above examples clear up any confusion about Primary Keys and Clustered Indexes.

Now, here is the question I often get asked: [b]what can be the reason for creating the Primary Key and the Clustered Index on different columns?[/b] Well, there are many scenarios where this makes sense. It is possible that you have an SSN column which you want to use as the Primary Key but do not want as the clustered index key, because you have a monotonically increasing identity column which better suits your needs for that table (again, this is just an example – you can argue exactly the opposite way as well).
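As a hedged sketch of that scenario (the table and column names are made up for illustration), the Primary Key goes on the SSN column as non-clustered while the clustered index sits on the identity column:

[CODE]
-- Illustrative only: hypothetical table and column names
CREATE TABLE Person
(PersonID INT IDENTITY(1,1) NOT NULL,
SSN CHAR(9) NOT NULL PRIMARY KEY NONCLUSTERED,
Name NVARCHAR(100) NOT NULL)
GO
-- Clustered index on the ever-increasing identity column
CREATE UNIQUE CLUSTERED INDEX IX_Person_PersonID ON Person (PersonID)
GO
[/CODE]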


Can anyone answer the questions below?

1) Rule discovery searches for patterns and correlations in large data sets. True/False?

2) Scalable technology is critical to a data mart. True/False?

3)

Which of the following data-mining techniques identifies clusters of observations with similar characteristics?
Answer
Case reasoning
Rule discovery
Clustering and signal processing
Neural nets


4)

Which of the following data-mining techniques searches for patterns and correlations in large data sets?
Answer
Case reasoning
Rule discovery
Signal processing
Neural nets


5)

OLAP tools that use the database as a traditional relational database are called:
Answer
ROLAP tools.
MOLAP tools.
slice and dice.
None of these.


6)

A class of database technology used to store textual and other unstructured data is called:
Answer
mySQL.
NoSQL.
KnowSQL.
PHP.


7)

Data transformation is not an important part of the data reconciliation process. True/False?

8)

Which of the following are key steps in a data quality program?
Answer
Conduct a data quality audit.
Apply TQM principles and practices.
Estimate return on investment.
All of these.


9)

Data quality problems can cascade when:
Answer
data are not deleted properly.
data are copied from legacy systems.
there is redundant data storage and inconsistent metadata.
there are data entry problems.


[quote name='ghazni' timestamp='1360708769' post='1303268612']
well, it was a silly question anyway... I asked the doubt without checking first...
[/quote]

It may seem silly to us, but it will help those who are learning SQL Server now, right? If it feels silly to you, PM it to me... I'll post it here :)


[quote name='GatisKandis' timestamp='1360711251' post='1303268765']
Kaarthikeya, have you done any Microsoft certifications... you're killing it, aren't you?
[/quote]
Nothing like that, my friend [img]http://www.manadb.com/Smileys/default/Brahmi.gif[/img] actually I was planning to take one this month, but not a single person is telling me which certification is worth taking sFun_duh2


The flow chart is easy to find, it's right there on MSDN, but I don't think you need to start from the very first exam.... I have a plan; I'll find out and let you know by the weekend, and we'll take it together and score 100% [img]http://www.manadb.com/Smileys/default/Brahmi.gif[/img]
[quote name='Kaarthikeya' timestamp='1360715507' post='1303269037']
Nothing like that, my friend [img]http://www.manadb.com/Smileys/default/Brahmi.gif[/img] actually I was planning to take one this month, but not a single person is telling me which certification is worth taking sFun_duh2
[/quote]


[quote name='Kaarthikeya' timestamp='1360715507' post='1303269037']
Nothing like that, my friend [img]http://www.manadb.com/Smileys/default/Brahmi.gif[/img] actually I was planning to take one this month, but not a single person is telling me which certification is worth taking sFun_duh2
[/quote]
[quote name='GatisKandis' timestamp='1360717751' post='1303269132']
The flow chart is easy to find, it's right there on MSDN, but I don't think you need to start from the very first exam.... I have a plan; I'll find out and let you know by the weekend, and we'll take it together and score 100% [img]http://www.manadb.com/Smileys/default/Brahmi.gif[/img]
[/quote]


MCTS 70-448 (SQL Server 2008 Business Intelligence Development & Maintenance)

