Tuesday, October 30, 2012

SQL SERVER – UNION ALL and ORDER BY – How to Order Table Separately While Using UNION ALL

I often see developers trying the following syntax while using ORDER BY.
SELECT Columns
FROM TABLE1
ORDER BY Columns
UNION ALL
SELECT Columns
FROM TABLE2
ORDER BY Columns

However, the above query will return the following error.
Msg 156, Level 15, State 1, Line 5
Incorrect syntax near the keyword ‘ORDER’.
It is not possible to use two different ORDER BY clauses in a UNION statement. UNION returns a single resultset, as per the Logical Query Processing Phases.
However, if your requirement is that the top and bottom queries of the UNION resultset be independently sorted, but within the same resultset, you can add an additional static column and order by that column. Let us re-create the same scenario.
First, create two tables and populate them with sample data.
USE tempdb
GO
-- Create table
CREATE TABLE t1 (ID INT, Col1 VARCHAR(100));
CREATE TABLE t2 (ID INT, Col1 VARCHAR(100));
GO
-- Sample Data Build
INSERT INTO t1 (ID, Col1)
SELECT 1, 'Col1-t1'
UNION ALL
SELECT 2, 'Col2-t1'
UNION ALL
SELECT 3, 'Col3-t1';
INSERT INTO t2 (ID, Col1)
SELECT 3, 'Col1-t2'
UNION ALL
SELECT 2, 'Col2-t2'
UNION ALL
SELECT 1, 'Col3-t2';
GO

Let us SELECT the data from both tables using UNION ALL.
-- SELECT without ORDER BY
SELECT ID, Col1
FROM t1
UNION ALL
SELECT ID, Col1
FROM t2
GO

We will get the data in the following order.
However, our requirement is to get the data in the following order.
If we need the data ordered by the first column (ID), we can ORDER the resultset by ID.
-- SELECT with ORDER BY
SELECT ID, Col1
FROM t1
UNION ALL
SELECT ID, Col1
FROM t2
ORDER BY ID
GO

Now, to get the data independently sorted within the UNION ALL, let us add an additional column OrderKey and use ORDER BY on that column. I think the description alone does not do it justice, so let us see the example here.
-- SELECT with ORDER BY - with ORDER KEY
SELECT ID, Col1, 'id1' OrderKey
FROM t1
UNION ALL
SELECT ID, Col1, 'id2' OrderKey
FROM t2
ORDER BY OrderKey, ID
GO

The above query will give the desired result.
Now do not forget to clean up the database by running the following script.
-- Clean up
DROP TABLE t1;
DROP TABLE t2;
GO
Here is the complete script used in this example.
USE tempdb
GO
-- Create table
CREATE TABLE t1 (ID INT, Col1 VARCHAR(100));
CREATE TABLE t2 (ID INT, Col1 VARCHAR(100));
GO
-- Sample Data Build
INSERT INTO t1 (ID, Col1)
SELECT 1, 'Col1-t1'
UNION ALL
SELECT 2, 'Col2-t1'
UNION ALL
SELECT 3, 'Col3-t1';
INSERT INTO t2 (ID, Col1)
SELECT 3, 'Col1-t2'
UNION ALL
SELECT 2, 'Col2-t2'
UNION ALL
SELECT 1, 'Col3-t2';
GO
-- SELECT without ORDER BY
SELECT ID, Col1
FROM t1
UNION ALL
SELECT ID, Col1
FROM t2
GO
-- SELECT with ORDER BY
SELECT ID, Col1
FROM t1
UNION ALL
SELECT ID, Col1
FROM t2
ORDER BY ID
GO
-- SELECT with ORDER BY - with ORDER KEY
SELECT ID, Col1, 'id1' OrderKey
FROM t1
UNION ALL
SELECT ID, Col1, 'id2' OrderKey
FROM t2
ORDER BY OrderKey, ID
GO
-- Clean up
DROP TABLE t1;
DROP TABLE t2;
GO

I am sure there are many more ways to achieve this. What method would you use if you had to face a similar situation?
Reference: Pinal Dave (http://blog.sqlauthority.com)

Monday, October 29, 2012

Windows Efficiency Tricks and Tips – Personal Technology Tip

This is the second post in my series about my favorite Technology Tips, and I wanted to focus on my favorite Microsoft product.  Choosing just one topic to cover was too hard, though.  There are so many interesting things I have to share that I am forced to turn this second installment into a five-part post: my five favorite Windows tips and tricks.

1) You can open multiple applications using the task bar.

With the new Windows 7 taskbar, you can start navigating with just one click.  For example, you can launch Word by clicking on the icon on your taskbar, and if you are using multiple different programs at the same time, you can simply click on the icon to return to Word.  However, what if you need to open another Word document, or begin a new one?  Clicking on the Word icon is just going to bring you back to your original program.  Just click on the Word icon again while holding down the shift key, and you’ll open up a new document.

2) Navigate the screen with the touch of a button – and not your mouse button.

Yes, we live in a pampered age.  We have access to amazing technology, and it just gets better every year.  But have you ever found yourself wishing, right when you were in the middle of something, that you didn't have to interrupt your work flow by reaching for your mouse to navigate through the screen?  Yes, we have all been guilty of this pampered wish.  But Windows has delivered!  Now you can move your application window using your arrow keys.
  • Snap the window to the left or right half of the screen: Win+Left Arrow and Win+Right Arrow
  • Maximize & minimize: Win+Up Arrow and Win+Down Arrow
  • Minimize all items on screen: Win+M
  • Return to your original folder, or browse through all open windows: Alt+Up Arrow, Alt+Left Arrow, or Alt+Right Arrow
  • Close down or reopen all windows: Win+Home

3) Are you one of the few people who still uses Command Prompt?

You know who you are, and you aren't ashamed to still use this option that so many people have forgotten about.  You can easily access it by holding down the Shift key while right-clicking on any folder.

4) Quickly select multiple files without using your mouse.

We all know how to select multiple files or folders by Ctrl-clicking or Shift-clicking multiple items.  But all of us have tried this, and then accidentally released Ctrl, only to lose all our precious work.  Now there is a way to select only the files you want through a check box system.  First, go to Windows Explorer, click Organize, and then “Folder and Search Options.”  Go to the View tab, and under advanced settings, you can find a box that says “Use check boxes to select items.”  Once this has been selected, you will be able to hover your mouse over any file and a check box will appear.  This makes selecting multiple, random files quick and easy.

5) Make more out of remote access.

If you work anywhere in the tech field, you are probably the go-to for computer help with friends and family, and you know the usefulness of remote access (ok, some of us use this extensively at work, as well, but we all have friends and family who rely on our skills!).  Often it is necessary to restart a computer, which is impossible in remote access as the computer will not show the shutdown menu.  To force the computer to do your wishes, we return to Command Prompt.  Open Command Prompt and type “shutdown /s” for shutdown, or “shutdown /r” for restart.
I hope you find the above five tricks, which I use daily, as useful as I do.
Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Storing Variable Values in Temporary Array or Temporary List

SQL Server does not support arrays or a dynamic-length storage mechanism like a list. Certainly there are some clever workarounds and a few extraordinary solutions, but not everybody can come up with such a solution. Additionally, sometimes the requirements are so simple that extraordinary coding is not required. Here is one simple case.
Let us say here are the values: a, 10, 20, c, 30, d. Now the requirement is to store them in an array or a list. It is very easy to do this in C# or C. However, there is no quick way to do the same in SQL Server. Every single time I get such a requirement, I create a table variable and store the values in it. Here is the example:
For SQL Server 2012:
DECLARE @ListofIDs TABLE(IDs VARCHAR(100));
INSERT INTO @ListofIDs
VALUES('a'),('10'),('20'),('c'),('30'),('d');
SELECT IDs FROM @ListofIDs;
GO
When the above script is executed it will give the following resultset.
The above script will work in SQL Server 2012 only. For SQL Server 2008 and earlier versions, run the following code.
DECLARE @ListofIDs TABLE(IDs VARCHAR(100), ID INT IDENTITY(1,1));
INSERT INTO @ListofIDs
SELECT 'a'
UNION ALL
SELECT '10'
UNION ALL
SELECT '20'
UNION ALL
SELECT 'c'
UNION ALL
SELECT '30'
UNION ALL
SELECT 'd';
SELECT IDs FROM @ListofIDs;
GO
Now in this case, I have to convert the numbers to varchars because I have to store mixed datatypes in a single column. Additionally, this quick solution does not give the full features of arrays (like inserting values in between, or accessing values using an array index).
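That said, the IDENTITY column in the SQL Server 2008 variant above can stand in for a crude array index, since it records insert order. Here is a minimal sketch of that idea (for a handful of constant rows like this the identity values follow the listed order in practice; an ORDER BY on the INSERT ... SELECT would make it guaranteed):
DECLARE @ListofIDs TABLE(IDs VARCHAR(100), ID INT IDENTITY(1,1));
INSERT INTO @ListofIDs
SELECT 'a' UNION ALL SELECT '10' UNION ALL SELECT '20'
UNION ALL SELECT 'c' UNION ALL SELECT '30' UNION ALL SELECT 'd';
-- "Array index" style access via the IDENTITY column
SELECT IDs FROM @ListofIDs WHERE ID = 3; -- third element: '20'
GO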
Well, do you ever have to store multiple temporary values in SQL Server? If the count of the values is dynamic and the datatype is not specified up front, how would you go about storing values which can be used later in the programming?
Reference: Pinal Dave (http://blog.SQLAuthority.com)

SQL SERVER – Move Database Files MDF and LDF to Another Location

When a novice DBA or Developer creates a database, they use SQL Server Management Studio to create the new database. The T-SQL script to create a database is very easy as well: you can just write CREATE DATABASE DatabaseName and it will create a new database for you. The point to remember here is that it will create the database at the default location specified for the SQL Server instance (this default location can be changed, and we will see that in future blog posts). Now, once the database goes into production it will start to grow.
It is not common to keep the database at the same location where the OS is installed. Usually database files are on a SAN, a separate disk array, or on SSDs. This is usually done for performance reasons and from a manageability perspective. The challenge comes up when a database which was installed at the default (not preferred) location needs to move to a different location. Here is a quick tutorial on how you can do it.
Let us assume we have two folders loc1 and loc2. We want to move database files from loc1 to loc2.
USE MASTER;
GO
-- Take database into single user mode -- if you are facing errors
-- This may terminate your active transactions for the database
ALTER DATABASE TestDB
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO
-- Detach DB
EXEC MASTER.dbo.sp_detach_db @dbname = N'TestDB'
GO
Now move the files from loc1 to loc2. You can then reattach the files from their new locations.
-- Move MDF and LDF files from loc1 to loc2
-- Re-Attach DB
CREATE DATABASE [TestDB] ON
( FILENAME = N'F:\loc2\TestDB.mdf' ),
( FILENAME = N'F:\loc2\TestDB_log.ldf' )
FOR ATTACH
GO
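Before handing the database back to the application, it is worth confirming that SQL Server now sees the files at the new location. A quick check against sys.master_files (a sketch; this catalog view is available on SQL Server 2005 and later):
-- Verify the new file locations
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'TestDB');
GO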
Well, we are done. There is a little warning here for you: ROLLBACK IMMEDIATE may terminate your active transactions, so do not use it casually. Use it only if you are confident the transactions are not needed, or when for some reason there is a connection to the database which you are not able to kill manually after review.
Reference : Pinal Dave (http://blog.SQLAuthority.com)

Thursday, October 25, 2012

SQL SERVER – Importance of User Without Login


Some questions are very open ended and it is very hard to come up with exact requirements. Here is one question I was asked at a recent User Group Meeting.
Question: “In recent version of SQL Server we can create user without login. What is the use of it?”
A great question indeed. Let me first attempt to answer it, but after reading my answer I need your help: I want you to help him as well by adding more value to it.
Answer:
Let us visualize a scenario. An application has lots of different operations, and many of them are very sensitive. The common practice was to give the application a specific role with elevated permissions and access. When a regular user logged in (not a system admin), he/she might have very restrictive permissions. The application itself had a username and password, which means the application could log in directly into the database and perform its operations. Developers were well aware of the username and password, as it was embedded in the application. When a developer left the organization, or when the password was changed, every part of the application where the same username and password were used had to be changed. Additionally, developers were able to use the same username and password to log in to the database directly.
In earlier versions of SQL Server there were application roles; this role was later taken over by "User without Login". Now let us recreate the above scenario using this new "User without Login". In this case, users have to log into SQL Server using their own credentials. This means that the user who is logged in has his/her own username and password. Once logged into SQL Server, the user is able to use the application. The database should then have a separate User without Login which has all the necessary permissions and rights to execute the various operations. The application executes its scripts by impersonating this "user without login – with more permissions".
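Here is a minimal sketch of that pattern (the user name AppProxy and the schema-level permission are hypothetical placeholders, not a prescription):
-- Create a user that has no associated server login
CREATE USER AppProxy WITHOUT LOGIN;
GO
-- Grant it the elevated rights the application needs (example permission)
GRANT EXECUTE ON SCHEMA::dbo TO AppProxy;
GO
-- The application switches to the privileged context...
EXECUTE AS USER = 'AppProxy';
SELECT USER_NAME(); -- returns AppProxy
-- ...performs its sensitive operations, then switches back
REVERT;
GO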
Here it is assumed that the user's own login does not have enough permissions and that the other user (without login) has more rights. If a user knows how the application is using the database and its various operations, he can switch the context to the user without login, enabling him to make further modifications. Make sure to explicitly DENY the VIEW DEFINITION permission on the database. This will make things more difficult for the user, as he will have to know the exact details to get additional permissions.
If a user is a System Admin, the details I just mentioned in the above three paragraphs do not apply, as an admin always has access to everything. Additionally, the method described above is just one possible architecture, and if someone is attempting to damage the system, they will still be able to figure out a workaround. You will have to put further auditing and policy-based management in place to prevent such incidents and accidents.
I guess this is my answer. I have read it multiple times, but I still feel that I am missing something. There should be more to this concept than what I have just described. I have merely described one scenario, but there will be many more scenarios where this feature is useful. Now it is your turn to help – please leave a comment with additional suggestions on where exactly "User without Login" will be useful, as well as anything I missed in the scenario described above.
Reference: Pinal Dave (http://blog.sqlauthority.com)

Tuesday, October 16, 2012

List All The Column With Specific Data Types in Database


Five years ago I wrote the script SQL SERVER – 2005 – List All The Column With Specific Data Types. When I read it again, it is still very much relevant, and I liked it. This is one of those scripts every developer would like to keep handy. I have upgraded the script a bit and included some additional information which I believe I should have added from the beginning. It is difficult to visualize the final script when we are writing it the first time. I use every script which I write on this blog; as a matter of fact, I write only those scripts here which I am using at the time. It is quite possible that as time passes my needs change and I change my scripts. Here is the updated script on this subject. If there are any user-defined data types, it will list those as well.
SELECT s.name AS 'schema', ts.name AS TableName,
c.name AS column_name, c.column_id,
SCHEMA_NAME(t.schema_id) AS DatatypeSchema,
t.name AS Datatypename,
t.is_user_defined, t.is_assembly_type,
c.is_nullable, c.max_length, c.PRECISION, c.scale
FROM sys.columns AS c
INNER JOIN sys.types AS t ON c.user_type_id = t.user_type_id
INNER JOIN sys.tables ts ON ts.OBJECT_ID = c.OBJECT_ID
INNER JOIN sys.schemas s ON s.schema_id = ts.schema_id
ORDER BY s.name, ts.name, c.column_id
I would be very interested to see your script which lists all the columns of the database with data types. If I am missing something in my script, I will modify it based on your comment. This way this page will be a good bookmark for the future for all of us.
Reference : Pinal Dave (http://blog.SQLAuthority.com)

Why Do We Need Data Quality Services – Importance and Significance of Data Quality Services (DQS)


Databases are awesome.  I’m sure my readers know my opinion about this – I have made SQL Server my life’s work after all!  I love technology and all things computer-related.  Of course, even with my love for technology, I have to admit that it has its limits.  For example, it takes a human brain to notice that data has been input incorrectly.  Computer “brains” might be faster than humans, but human brains are still better at pattern recognition.  For example, a human brain will notice that “300” is a ridiculous age for a human to be, but to a computer it is just a number.  A human will also notice similarities between “P. Dave” and “Pinal Dave,” but this would stump most computers.
In a database, these sorts of anomalies are incredibly important.  Databases are often used by multiple people who rely on the data to be true and accurate, so data quality is key.  That is why SQL Server's improved Master Data Management story includes Data Quality Services.  This service has the ability to recognize and flag anomalies like out-of-range numbers and similarities between data.  This allows a human brain, with its pattern recognition abilities, to double-check and ensure that P. Dave is the same as Pinal Dave.
A nice feature of Data Quality Services is that once you set the rules for the program to follow, it will not only keep your data organized in the future, but go back and "fix up" any data that has already been entered.  It also allows you to combine data from multiple places, and it will apply these rules across the board, so that you don't have any weird issues that crop up when trying to fit a round peg into a square hole.
There are two parts of Data Quality Services that help you accomplish all these neat things.  The first part is the DQS Server, which you can think of as the engine of the system.  It is installed alongside SQL Server (it needs to be installed separately, after SQL Server is installed) and runs quietly in the background, performing all its cleanup services.
DQS Client is the user interface that you can interact with to set the rules and check over your data.  There are three main aspects of Client: knowledge base management, data quality projects and administration.  Knowledge base management is the part of the system that allows you to set the rules, or program the “knowledge base,” so that your database is clean and consistent.
Data Quality projects are what run in the background and clean up the data that is already present.  The administration allows you to check out what DQS Client is doing, change rules, and generally oversee the entire process.  The whole process is user-friendly and a pleasure to use.  I highly recommend implementing Data Quality Services in your database.
Here are a few of my blog posts related to Data Quality Services, and I encourage you to try it out.
Reference: Pinal Dave (http://blog.SQLAuthority.com)

Cursor to Kill All Process in Database

When you run the script, please make sure that you run it in a different database than the one whose processes you want to kill.
CREATE TABLE #TmpWho
(spid INT, ecid INT, status VARCHAR(150), loginame VARCHAR(150),
hostname VARCHAR(150), blk INT, dbname VARCHAR(150), cmd VARCHAR(150))
INSERT INTO #TmpWho
EXEC sp_who
DECLARE @spid INT
DECLARE @tString VARCHAR(15)
DECLARE @getspid CURSOR
SET @getspid = CURSOR FOR
SELECT spid
FROM #TmpWho
WHERE dbname = 'mydb'
OPEN @getspid
FETCH NEXT FROM @getspid INTO @spid
WHILE @@FETCH_STATUS = 0
BEGIN
SET @tString = 'KILL ' + CAST(@spid AS VARCHAR(5))
EXEC(@tString)
FETCH NEXT FROM @getspid INTO @spid
END
CLOSE @getspid
DEALLOCATE @getspid
DROP TABLE #TmpWho
GO
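As a side note, on SQL Server 2012 and later (where sys.dm_exec_sessions exposes a database_id column), a set-based sketch can build the same KILL commands without sp_who or a cursor:
-- Build KILL statements for all sessions in the target database
DECLARE @kill VARCHAR(MAX) = '';
SELECT @kill = @kill + 'KILL ' + CAST(session_id AS VARCHAR(5)) + '; '
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID('mydb');
PRINT @kill;  -- review the generated commands first
-- EXEC (@kill); -- then execute them
GO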

Reference : Pinal Dave (http://blog.SQLAuthority.com)

DELETE, TRUNCATE and RESEED Identity

Yesterday I had a headache answering questions from one of the DBAs on the subject of resetting identity values for all tables. After talking to the DBA, I realized that he had no clue about how the identity column behaves when DELETE, TRUNCATE, or RESEED Identity is used.

Let us run a small T-SQL Script.

Create a table with an identity column beginning with the value 11. The seed value is 11.
USE [TempDB]
GO
-- Create Table
CREATE TABLE [dbo].[TestTable](
[ID] [int] IDENTITY(11,1) NOT NULL,
[var] [nchar](10) NULL
) ON [PRIMARY]
GO
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO

When the seed value is 11, the first row inserted gets an identity column value of 11.
-- Select Data
SELECT * FROM [TestTable]
GO


Effect of DELETE statement

-- Delete Data
DELETE FROM [TestTable]
GO

When the DELETE statement is executed without a WHERE clause, it deletes all the rows. However, when a new record is inserted, the identity value increases from 11 to 12. DELETE does not reset the identity value; it keeps on increasing.
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO
-- Select Data
SELECT * FROM [TestTable]

Effect of TRUNCATE statement

-- Truncate table
TRUNCATE TABLE [TestTable]
GO

When the TRUNCATE statement is executed, it removes all the rows, and when a new record is inserted, the identity value starts again from 11 (the original value). TRUNCATE resets the identity value to the original seed value of the table.
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO
-- Select Data
SELECT * FROM [TestTable]
GO

Effect of RESEED statement

Notice that I am using a reseed value of 1. The original seed value when I created the table was 11; however, I am reseeding it with the value 1.
-- Reseed
DBCC CHECKIDENT ('TestTable', RESEED, 1)
GO
When we insert one more row and check the value, the new value generated is 2. The logic for this new value is Reseed Value + Increment Value – in this case 1 + 1 = 2.
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO
-- Select Data
SELECT * FROM [TestTable]
GO

Here is the clean up act.
-- Clean up
DROP TABLE [TestTable]
GO

Question for you:

If I reseed with some random value and then run TRUNCATE on the table, what will the seed value of the table be? (For example, the original seed value is 11 and I reseed to 1. If I follow up with TRUNCATE TABLE, what is the seed value now?)
Here is the complete script, put together from the snippets above. You can modify it and find the answer to the above question. Please leave a comment with your answer.
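USE [TempDB]
GO
-- Create Table
CREATE TABLE [dbo].[TestTable](
[ID] [int] IDENTITY(11,1) NOT NULL,
[var] [nchar](10) NULL
) ON [PRIMARY]
GO
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO
-- Select Data
SELECT * FROM [TestTable]
GO
-- Delete Data
DELETE FROM [TestTable]
GO
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO
-- Select Data
SELECT * FROM [TestTable]
GO
-- Truncate table
TRUNCATE TABLE [TestTable]
GO
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO
-- Select Data
SELECT * FROM [TestTable]
GO
-- Reseed
DBCC CHECKIDENT ('TestTable', RESEED, 1)
GO
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO
-- Select Data
SELECT * FROM [TestTable]
GO
-- Clean up
DROP TABLE [TestTable]
GO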
Reference: Pinal Dave (http://blog.sqlauthority.com)

Friday, October 12, 2012

Advanced Data Quality Services with Melissa Data – Azure Data Market

There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third party providers that are hosted in the Azure Datamarket marketplace.  In this review, I leverage SQL Server 2012 Data Quality Services, and proceed to subscribe to a third party data cleansing product through the Datamarket to showcase this unique capability.

Crucial Questions

For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third party knowledge base from the acknowledged USPS-certified address accuracy experts at Melissa Data.

Reference Data Services

Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization.  These capabilities extend the data quality that DQS has natively by quite a bit.
For my current address correction review, I needed to first sign up to a reference data provider on the Azure Data Market site. For this example, I used Melissa Data’s Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them.
Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area.

Step by Step Guide

That was all it took to hook the subscribed provider (Melissa Data) directly into my DQS Client. The next step was for me to attach and map in my Reference Data from the newly acquired reference data provider to a domain in my knowledge base.
On the DQS Client home screen, I selected “New Knowledge Base” under Knowledge Base Management on the left-hand side of the home screen.
Under New Knowledge Base, I typed a Name and description of my new knowledge base, then proceeded to the Domain Management screen.
Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data:
  1. Name
  2. Address
  3. Suite
  4. City
  5. State
  6. Zip
I added a Suite column in my domain because Melissa Data has the ability to return missing Suites based on last name or company. And that’s a great benefit of using these third party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third party data providers, I can actually improve my data.
Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform.
I then selected my newly created composite domain and, under the Reference Data tab, added my third party reference data provider (Melissa Data's Address Check) and mapped each of my domains to the provider's schema.
Now that my composite domain has been married to the Reference Data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data.
My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes with some progress indicators indicating that it was working.
When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested, new, invalid, corrected (automatically), and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simple for any IT professional.

Final Note

As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it’s easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data.
Reference: Pinal Dave (http://blog.sqlauthority.com)

Thursday, October 11, 2012

Identify Numbers of Non Clustered Index on Tables for Entire Database

Here is a script which will give you the number of nonclustered indexes on every table in the entire database.
SELECT COUNT(i.TYPE) NoOfIndex,
[schema_name] = s.name, table_name = o.name
FROM sys.indexes i
INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.TYPE IN ('U')
AND i.TYPE = 2
GROUP BY s.name, o.name
ORDER BY schema_name, table_name
Here is the small story behind why this script was needed.
I recently went to meet my friend at his office, and he introduced me to a colleague as someone who is an expert in SQL Server indexing. I politely said that I am still learning about indexing and have a long way to go. My friend's colleague right away had a suggestion for me related to indexes. According to him, he was looking for a script which would count all the nonclustered indexes on all the tables in a database, and he was not able to find it on SQLAuthority.com.
I was a bit surprised, as I really do not remember all the details of what I have written so far. I quickly pulled up my phone and looked for the script on my custom search engine, and he was correct: I had never written a script to count all the nonclustered indexes on the tables in a whole database. Excessive indexing is not recommended in general; if you have too many indexes, it will definitely affect your performance negatively. The above query will quickly give you the number of indexes on each table in your entire database. You can glance at the numbers and use them as a reference.
Please note that the number of indexes is not by itself an indication of bad indexing. There is a lot of wisdom I could write here, but that is not the scope of this blog post. There are many different rules with indexes and many different scenarios. For example – a table which is a heap (no clustered index) is often not recommended for an OLTP workload (here is the blog post to identify them); drop unused indexes after careful observation (here is the script for it); identify missing indexes and, after careful testing, add them (here is the script for it). Even though I have given a few links here, it is just the tip of the iceberg. If you follow only the above four pieces of advice, your ship may still sink. Those who want to learn the subject in depth can watch the videos here after logging in.
Note: Change the WHERE condition to i.TYPE = 6 for nonclustered columnstore indexes.
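For example, a minimal variant for nonclustered columnstore indexes (available from SQL Server 2012) changes only that filter:
-- Count nonclustered columnstore indexes per table
SELECT COUNT(i.TYPE) NoOfIndex,
[schema_name] = s.name, table_name = o.name
FROM sys.indexes i
INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
WHERE o.TYPE IN ('U')
AND i.TYPE = 6 -- 6 = nonclustered columnstore
GROUP BY s.name, o.name
ORDER BY schema_name, table_name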
Reference: Pinal Dave (http://blog.sqlauthority.com)

INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1

I personally use the sys schema and DMVs to retrieve most of this information. However, I am not surprised to see usage of INFORMATION_SCHEMA. It has been very popular and works most of the time. Just because I do not use a particular feature does not mean everybody else should stop using it. As a matter of fact, when I receive questions about features which I have not used frequently, I feel refreshed to come across new concepts.
Just a few days ago, I received a simple question about INFORMATION_SCHEMA.COLUMNS table. The question was as follows:
Question: I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of a column can be anywhere between 0 and a large number, but I do not get it when I see a negative value (i.e. -1). Any insight on this subject?
Answer: Of course, I love this kind of simple question, to which we often know the answer or assume that we know the answer. Whenever we use the data type VARCHAR(MAX) for a column, it is represented by -1 in the INFORMATION_SCHEMA.COLUMNS table. Let us see a quick demonstration of the same.
Let us create a table which has a column of type VARCHAR(MAX) and see the result returned for it.
-- Create Sample Table
CREATE TABLE t(
id INT,
name VARCHAR(200),
address VARCHAR(MAX))
GO
-- select from columns
SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = OBJECT_NAME(OBJECT_ID('t'))
GO
-- drop table
DROP TABLE t
GO
Let us check the resultset.
You will see that the column address, which is of datatype VARCHAR(MAX), has a Character Maximum Length value of -1. You will see the same behavior with NVARCHAR(MAX) and VARBINARY(MAX).
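As I mentioned, I usually reach for the sys catalog views instead; here is a minimal sketch of the equivalent check there (run it before the DROP TABLE above), where max_length is likewise -1 for the (MAX) types:
-- sys.columns also reports -1 in max_length for (MAX) data types
SELECT c.name AS column_name, c.max_length
FROM sys.columns c
WHERE c.[object_id] = OBJECT_ID('t');
GO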
I personally believe in simple learning – if we learn a thing a day we will learn 365 new things every year!
Reference: Pinal Dave (http://blog.sqlauthority.com)

Why Do We Need Master Data Management – Importance and Significance of Master Data Management (MDM)

Let me paint a picture of everyday life for you.  Let’s say you and your wife both have address books for your groups of friends.  There is definitely overlap between them, so that you both have the addresses for your mutual friends, and there are addresses that only you know, and some only she knows.  They also might be organized differently.  You might list your friend under “J” for “Joe” or even under “W” for “Work,” while she might list him under “S” for “Joe Smith” or under your name because he is your friend.  If you happened to trade, neither of you would be able to find anything!
This is where data management would be very important.  If you were to consolidate into one address book, you would have to set rules about how to organize the book, and both of you would have to follow them.  You would also make sure that poor Joe doesn’t get entered twice under “J” and under “S.”
This might be a familiar situation to you, whether you are thinking about address books, record collections, books, or even shopping lists.  Wherever there is a lot of data to consolidate, you are going to run into problems unless everyone is following the same rules.
I’m sure that my readers can figure out where I am going with this.  What is SQL Server but a computerized way to organize data?  And Microsoft is making it easier and easier to get all your “addresses” into one place.  In the 2008 version of SQL Server they introduced a new tool called Master Data Services (MDS) for Master Data Management, and they have improved it for the new 2012 version.
MDM was hailed as a major improvement for business intelligence.  You might not think that an organizational system is terribly exciting, but think about the kind of “address books” a company might have.  Many companies have lots of important information, like addresses, credit card numbers, purchase history, and so much more.  To organize all this efficiently so that customers are well cared for and properly billed (only once, not never or multiple times!) is a major part of business intelligence.
MDM comes into play because it will comb through these mountains of data and make sure that all the information is consistent, accurate, and all placed in one database so that employees don’t have to search high and low and waste their time. MDM also has operational MDM functions.  This is not a redundancy.  Operational MDM means that when one employee updates one bit of information in the database, for example – updating a new address for a customer, operational MDM ensures that this address is updated throughout the system so that all departments will have the correct information.
Another cool thing about MDM is that it features Master Data Services Configuration Manager, which is exactly what it sounds like.  It has a built-in “helper” that lets you set up your database quickly, easily, and with the correct configurations.  While talking about cool features, I can’t skip over the add-in for Excel.  This allows you to link certain data to Excel files for easier sharing and uploading.
In summary, I want to emphasize that the scariest part of the database is slowly disappearing.  Everyone knows that a database – one consolidated area for all your data – is a good idea, but the idea of setting one up is daunting.  But SQL Server is making data management easier and easier with features like Master Data Services (MDS).
Reference: Pinal Dave (http://blog.SQLAuthority.com)


Simple Cursor to Select Tables in Database with Static Prefix and Date Created

The following cursor query runs through the database and finds all the tables with certain prefixes ('b_', 'delete_'). It also checks whether the table is more than a certain number of days old (created before a certain date), and if so, it can delete it. We can run any other operation on that table, like delete, print, or reindex.
SET NOCOUNT ON
DECLARE @lcl_name VARCHAR(100)
DECLARE cur_name CURSOR FOR
SELECT name
FROM sysobjects
WHERE type = 'U'
AND crdate <= DATEADD(m,-1,GETDATE())
AND name LIKE 'b_%'
OPEN cur_name
FETCH NEXT FROM cur_name INTO @lcl_name
WHILE @@Fetch_status = 0
BEGIN
SELECT @lcl_name = 'sp_depends ' + @lcl_name
PRINT @lcl_name
--  EXEC (@lcl_name)
FETCH NEXT FROM cur_name INTO @lcl_name
END
CLOSE cur_name
DEALLOCATE cur_name
SET NOCOUNT OFF

Reference: Pinal Dave (http://www.SQLAuthority.com)

Monday, October 8, 2012

Manage Help Settings – CTRL + ALT + F1

It is a miracle that curiosity survives formal education. ~ Albert Einstein
I have a 3-year-old daughter and she never misses a chance to play with my computers. I have multiple computers, and I always make sure that if I am working with a production server I never leave it open, but when I am doing some experiment I often leave my computer open. My daughter loves it when I have left the computer open and I am not observing her. Recently the same scenario occurred: I got an urgent call and moved away from my computer, and when I returned she was playing with SSMS, left open on my computer. Here is the screen which was visible.
For a moment, I could not figure out what this screen was and what was about to get updated. When I asked her what keys she pressed, the reaction was “I wanted – eya eya o”. Well, what more can I expect from a 3-year-old? She is no computer genius – she has just learned to use Notepad and Paint on my machine.
Finally, when I looked at the above screen in detail, I realized that this screen was from the Help system and something had been updated. I have been using SQL Server for a long time, but I had never updated Help on screen. When I need to search for something, if I remember that I have written about it earlier I go to http://search.sqlauthority.com and search there, or I search on Google.
As this computer was already updated, I fired up a Virtual Machine and tried to recreate how my daughter had reached the above screen. Here are the steps which I had to do to reach it.
Go to SSMS >> Toolbar >> Help >> Manage Help Settings (or type CTRL+ALT+F1) and click it.
That click brought up the following screen.
Clicking Check for updates online brought up the following screen.
When I clicked on Update, it brought me back to the original screen which my daughter had brought up earlier.
I found it so interesting: a screen which took me 2-3 minutes to figure out, and which I had never come across in my career, I learned about through curiosity just like my daughter.
Reference: Pinal Dave (http://blog.sqlauthority.com)

Getting Columns Headers without Result Data – SET FMTONLY ON


I was recently watching videos online from TechEd 2011 USA (link) and I learned that SET FMTONLY ON is going to be replaced with enhanced DMVs in future versions of SQL Server. I really like the new direction of the product. However, SET FMTONLY ON has really done its job so far. I have used it many times and always find it useful.
SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON, the resultset has only the headers of the results but no data. If the resultset would include Spatial Results, it will have the spatial results tab as well, but no spatial data.
USE AdventureWorks2008R2
GO
SET FMTONLY ON;
SELECT * FROM HumanResources.Department;
SELECT * FROM Person.Address;
SET FMTONLY OFF;
GO

If you have the execution plan turned on (CTRL+M) while executing with this setting, it will not return an execution plan either.

Reference: Pinal Dave (http://blog.SQLAuthority.com)