Thursday, September 27, 2012

Introduction to SQL Server Security


Take the workplace example where “Tom” needs “Read” access to the “Financial Folder”. What are the Securable, Principal, and Permissions from that last sentence?
  • A Securable is a resource that someone might want to access (like the Financial Folder).
  • A Principal is anything that might want to gain access to the securable (like Tom).
  • A Permission is the level of access a principal has to a securable (like Read).
Let’s re-word the story above using the right terms. The Principal, named Tom, needs the Permission of Read, to the Securable item Financial Folder.
In this lesson, you will learn how to use three powerful keywords (GRANT, DENY and REVOKE) of the Data Control Language (DCL). All of the examples shown below will presume the following:
  • The Securable is the entire SQL Server itself.
  • The Principal is named either Murray or Sara.
  • The Permission(s) will either be ‘Control Server’ or ‘Alter Any Database’.

Granting Permissions

The GRANT keyword is a DCL statement that creates permissions on a securable and grants these permissions to a principal. OK, that sounds a lot like ‘Geek Speak’, so let’s use a simple example to demonstrate how this works.
With the GRANT keyword, you can tell the SQL Server (securable) to allow the ‘Alter Any Database’ (permission level) to Murray (principal), while also allowing the ‘Control Server’ (permission level) to Murray and Sara (principal).
GRANT ALTER ANY DATABASE TO Murray
GRANT CONTROL SERVER TO Murray
GRANT CONTROL SERVER TO Sara
GO
In the code above, the GRANT(s) allow Murray two explicit permissions and Sara a single explicit permission to the SQL Server. Despite this, they have the same level of resource access to the SQL Server securable. That’s because ‘Control Server’ can do everything including ‘Alter Any Database’. In this scenario, Sara can effectively do everything at the same level as Murray.
Principal | Securable | Permissions
Murray | SQL Server | Control Server = Granted; Alter Any Database = Granted
Sara | SQL Server | Control Server = Granted

Denying Permissions

Currently, Murray and Sara both have full control of the SQL Server securable. Even though Murray has more explicit permissions listed, they both have the same effective permissions.
With the DENY keyword, you can tell the SQL Server to explicitly deny the ‘Alter Any Database’ permission to Sara, without affecting any of the permissions already given to Murray. This action will prevent Sara from modifying or creating any databases on the SQL Server securable.
You can keep the ‘Control Server’ permission in place, while denying Sara the ‘Alter Any Database’ permission, by simply writing the following DCL statement:
DENY ALTER ANY DATABASE TO Sara
GO
Sara has now been granted control of the SQL Server, but explicitly denied the permission to ‘Alter Any Database’. As you can see in the screenshot below, this restriction will cause a problem when Sara tries to create a database.
 
To summarize what has been done to the principals, Murray and Sara, review the table below. (The changed permission is in the Sara row.)
Principal | Securable | Permissions
Murray | SQL Server | Control Server = Granted; Alter Any Database = Granted
Sara | SQL Server | Control Server = Granted; Alter Any Database = Denied

Revoking Permissions

Sara currently has limited control of the SQL Server, since there is a DENY statement on the ‘Alter Any Database’ permission for her. Murray effectively has full control of the SQL Server, as no permissions have been denied to him. In fact, the GRANT statement for him to ‘Alter Any Database’ appears redundant. Indeed, Murray would be unaffected if his ‘Alter Any Database’ permission no longer existed.  Let’s see what it takes to make this change.
Your goal is to leave the ‘Alter Any Database’ permission for Murray as unspecified, or revoked, in the SQL Server’s access list. Neither the GRANT nor the DENY keyword can accomplish this task.
With the REVOKE keyword, you can instruct the SQL Server securable to revoke the ‘Alter Any Database’ permission for Murray, without affecting his current ‘Control Server’ permission. You can now achieve your goal by writing the following DCL statement:
REVOKE ALTER ANY DATABASE TO Murray
GO
The REVOKE keyword will simply remove an existing GRANT or DENY permission from the SQL Server access list. Review the results of the previous code in the table below.
Principal | Securable | Permissions
Murray | SQL Server | Control Server = Granted
Sara | SQL Server | Control Server = Granted; Alter Any Database = Denied
Revoke sounds like a penalty or a roadblock to someone’s permissions. That can indeed be the case, as REVOKE removes both GRANT and DENY permissions. In reality, you can use REVOKE to easily restore Sara’s permissions to the same level as Murray’s (full control) by writing the following DCL statement:
REVOKE ALTER ANY DATABASE TO Sara
GO
What is the end result? Sara’s restriction on ‘Alter Any Database’ has been removed. The Murray and Sara principals now share the ‘Control Server’ permission on the SQL Server securable. You can review the final results of this coding exercise in the table below.
Principal | Securable | Permissions
Murray | SQL Server | Control Server = Granted
Sara | SQL Server | Control Server = Granted

This blog post is inspired by Beginning SQL Joes 2 Pros: The SQL Hands-On Guide for Beginners – SQL Exam Prep Series 70-433 – Volume 1.
Reference: Pinal Dave (http://blog.sqlauthority.com)

How to replace a column name in multiple stored procedures efficiently and quickly?

The answer is here, assuming you have regular stored procedures and you are working on a development (NON-production) server.
Go to Server Node >> Databases >> DatabaseName >> Programmability >> Stored Procedures
Now make sure that Object Explorer Details is open (if not, open it by pressing F7). You will see a list of all the stored procedures on the right side. Select either all of them or the ones which you believe are relevant to your query. Now…
Right click on the stored procedures >> select DROP and CREATE to >> then select New Query Editor Window or Clipboard.
Paste the complete script into a new window if you selected the Clipboard option. Now press Ctrl+H, which will bring up the Find and Replace screen. In this screen, insert the column name to be replaced in the “Find What” box and the new column name into the “Replace With” box.
Now execute the whole script. As we selected DROP and CREATE to, it will drop the old procedure and create the new one.
Another method does all of the same steps, but instead of DROP and CREATE, you manually replace the CREATE word with the ALTER word. There is a small advantage in doing this: if for any reason an error comes up which prevents the new stored procedure from being created, you will still have your old stored procedure in the system as it is.
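For illustration, if a hypothetical procedure dbo.GetEmployee referenced a column named OldColumnName, the script generated by the DROP and CREATE option would, after the find-and-replace, look something like this (all names here are made up for the example):
DROP PROCEDURE [dbo].[GetEmployee]
GO
CREATE PROCEDURE [dbo].[GetEmployee]
AS
SELECT EmployeeID, NewColumnName
FROM dbo.Employee
GO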
Well, this was my answer to the question which I have received. Do you see any other workaround or solution?
Reference : Pinal Dave (http://blog.SQLAuthority.com)

Beginning of SQL Server Security

Security is a very important concept, and no matter how many times we discuss it, it is never enough. I have requested my friend Bharti, who is an expert in the SQL domain, to help me write a simple post about beginning SQL Server security. I personally found it a very easy read and a good beginning on this subject.


Today all organizations have important data that is confidential and mission-critical, and should not be disclosed to unauthorized persons. Hence the security of data is one of the prime aspects of a DBA’s job.
We will discuss the SQL Server security model and the different features that SQL Server provides for data security.
Security means that only relevant data should be available to authenticated persons. It additionally includes appropriate restrictions on the use of data, as well as controls to prohibit data theft. In case data is stolen by unauthorized users, security systems do not allow them to make use of it.
However, it is not sufficient to just secure the SQL Server. The operating system and network security are also important. In addition, physical access to the server must be restricted, and the application which accesses SQL Server must be authenticated.
The security model has five layers, which are Physical security, OS and Network security, Database security, Data security and Application Security. Below mentioned are the specifications for each one of them.
Physical security: Physical access to the server should be restricted. This can be done with guards outside the server room and biometric locks, such as a retina scanner or fingerprint reader.
OS and Network Security: Provide only restricted or no access to the server over the network. Use a firewall to restrict unauthorized access from the network. Use least-privileged service accounts. SQL Server data files and logs are stored on the Windows file system, so restrict access to those file locations. Always keep your system up to date with service packs and patches.
SQL Server Security: Access to SQL Server must be restricted to authenticated users. SQL Server supports two authentication mechanisms: Windows authentication and SQL Server authentication.
Windows authentication is secure for most environments. With this, we can grant login rights to a Windows login or a Windows group.
In SQL Server authentication, the user has an SQL login that is not mapped to any Windows login.
There are some important terms in SQL server security, which include:
Principals are entities that require access to SQL Server or a database and to which we grant permissions, for example: logins, roles etc.
Securables are the items we secure, or the ones on which we grant permissions to principals, for example: databases, schemas, tables etc. The figure below gives you details about the principals and securables.
Login: At the server level we have logins, which give a user access to SQL Server. Server-level permissions are assigned to logins. A login can be a Windows login or a SQL Server login, depending upon the authentication mechanism you are using.
Example: Creating a login
To create an SQL login named SqlLogin with password ‘Password@123’ by T-SQL, the command is:
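-- (sketch; the login name and password are the ones named above)
CREATE LOGIN SqlLogin WITH PASSWORD = 'Password@123'
GO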
To create a Windows authenticated login, the T-SQL command is:
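-- (sketch; DOMAIN\WindowsUser is a hypothetical Windows account)
CREATE LOGIN [DOMAIN\WindowsUser] FROM WINDOWS
GO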
To create a login from Management studio
Open Object Explorer, go to the Security folder at the instance level, and then right-click Logins to create a new login
Fill the form to create a new login
Users: Logins are mapped to users in a database, and database-level permissions are assigned to users. There are some predefined users present in every database, such as dbo and guest. ‘dbo’ is the default user for a sysadmin login.
Example: Creating a user
Create a user by T-SQL command:
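-- (sketch; SqlUser is a hypothetical user, created inside the target database
-- and mapped to the SqlLogin created above)
CREATE USER SqlUser FOR LOGIN SqlLogin
GO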
Create a user by Management studio
In Object Explorer, go to the Security folder inside the database folder and right-click Users to create a new user
Fill the form
Roles: There are some built-in roles present at the server and database level. These roles contain predefined sets of permissions, so if you want to assign a specific set of permissions to a login or user, make them a member of a role. There are two types of roles:
Fixed server roles: These server roles are predefined and you cannot create a new one.
To make a login a member of a server role, the T-SQL command is:
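-- (sketch; adds the SqlLogin created above to the sysadmin fixed server role)
EXEC sp_addsrvrolemember 'SqlLogin', 'sysadmin'
GO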
Database Level Roles: At the database level, we have some predefined database-level roles. However, users can also create their own database roles.
To make a user a member of a database role, the T-SQL command is:
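-- (sketch; adds the SqlUser created above to the db_datareader fixed database role)
EXEC sp_addrolemember 'db_datareader', 'SqlUser'
GO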
By these means we can restrict access to our database; however, placing restrictions is not enough, because data can also be stolen. Critical data must be encrypted for security so that even if it is stolen, it cannot be used. SQL Server provides an encryption mechanism to encrypt the data by using certificates and keys.
Encryption:
The SQL server provides hierarchical encryption structure, as shown in the image below:
The top key, i.e. the Service Master Key, is encrypted with the Windows Data Protection API (DPAPI). At the database level we have the Database Master Key, which is encrypted with the Service Master Key. So encryption in SQL Server is layered, where each upper layer encrypts the layer below it. Data is encrypted by keys, either Symmetric (a single shared key) or Asymmetric (a public/private key pair), and these keys are encrypted by certificates, which in turn are encrypted by the Database Master Key. Data can also be encrypted directly by certificates, which carry more information than keys, such as the name of the issuing authority, the subject and the expiry date.
Data can be encrypted, and the keys and certificates are in the database. But what if the backup file or mdf file gets stolen?
For that we have Transparent Data Encryption (TDE), which is real-time I/O encryption using a database encryption key protected by a certificate. This encrypts the database as a whole, so in case the backup or mdf file is stolen, it cannot be used without the certificate.
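A minimal TDE setup sketch (the database name, password and certificate name here are illustrative):
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword@123'
GO
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE Certificate'
GO
USE SampleDb
GO
-- The database encryption key is protected by the server certificate
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TdeCert
GO
ALTER DATABASE SampleDb SET ENCRYPTION ON
GO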
There are other important considerations too. For example, just applying the security is not enough. Auditing also holds huge significance. SQL server provides SQL Server Audit for auditing purpose. SQL Server Audit can track and log events that occur at the server level or the database level automatically. With SQL server audits, several server and database level actions can be tracked. These include failed login attempts, modification in structure of database or table etc. These can then be logged in Windows event log or a file system file.
Guest Author 
The author of the article is Bharti Batra, Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc.
Reference: Pinal Dave (http://blog.sqlauthority.com)

Wednesday, September 26, 2012

Excel Losing Decimal Values When Value Pasted from SSMS ResultSet

No!
It is not a SQL Server Issue or SSMS issue. It is how things work. There is a simple trick to resolve this issue.
It is very common that when users copy a result set to Excel, the floating point or decimal values are lost.
The solution is very simple and requires a small adjustment in Excel. By default Excel is very smart: when it detects that the value being pasted is numeric, it changes the column format to accommodate that. Since trailing zeros after the decimal point have no numeric value, Excel automatically hides them. To prevent this from happening, the user has to convert the columns to text format so Excel preserves the formatting.
Here is how you can do it.
Select the corner between A and 1 and Right Click on it.
It will select the complete spreadsheet. If you want to change the format of only certain columns, you can select an individual column the same way. In the menu, click on Format Cells…
It will bring up the following menu. By default the selected format will be General; change it to Text. It will change the format of all the cells to Text.
Now once again paste the values from SSMS to the Excel. This time it will preserve the decimal values from SSMS.
Solved!
Do you know any other trick to preserve the decimal values? Please leave a comment.
Reference: Pinal Dave (http://blog.SQLAuthority.com)

Tuesday, September 25, 2012

Demystifying SQL Server Differential Database Backups

SQL Server has three backup types.

1. Full Database Backup

When you request a full backup, SQL Server dumps all the data pages from your database, metadata about how your database is stored on disk and finally enough of the transaction log to bring the database back into a consistent state.  There are a few things you need to know about the full backup semantics. When you take a full backup it makes a few changes. Those changes are tracked in two places in the database and one in MSDB. The changes tracked in the database allow us to then use transaction log backups and differential database backups. The data logged to MSDB isn’t critical for restoring your backups, but it does make it much easier to do so. Full backups are considered our “base” backup type. Every other backup type can use a full database backup as its base. Even though a full backup does capture some of the transaction log, it doesn’t clear the log. If you are in simple recovery mode, the normal checkpoint process will clear the log. If you are in bulk-logged or full recovery mode, you will need to take a transaction log backup to clear the log.

2. Transaction Log Backup

Transaction log backups are a critical part of any recovery plan. Without them you can’t restore up to the minute. If your database is in anything other than simple recovery mode, your only supported option to clear the log is a transaction log backup. Transaction log backups are serial by nature. The log being restored depends on either a full or differential backup for its base, plus any log backups taken before the current log you wish to apply.

3. Differential Database Backup

Like a full database backup, the differential backs up data pages and enough of the transaction log to bring the database back into a consistent state. Unlike full or transaction log backups, the differential backup captures all changes since the last full backup occurred. The information on changed data pages is stored internally in the database and doesn’t require any information from MSDB. The map of changed data pages only gets reset on the next full backup. Transaction log backups or other differential database backups will not reset the changed data map. You can think of transaction log backups as incremental backups. People coming from a systems administration background can get tripped up and treat differential backups like incremental backups. This can cause a significant waste of time when restoring your database since you only need to apply the full backup and the most current differential, or the differential you are interested in to get your database back into a recovered state.

Understanding Differential Database Backups

Most people are put off by the nature of differential backups mainly due to the amount of space they can grow to and the extra complexity they can add to your recovery plans. If you don’t manage them, you can quickly run into a differential that is larger than the full it is based on. Also, any data page alterations are tracked. For example, if you take a full backup then perform full index reorganization on a heavily fragmented index you can end up with very large differential backups. File shrinks with full reorganizations also have the same effect. Even though the actual data hasn’t changed, you end up with differential database backups that are unwieldy and difficult to manage. If you miss a full backup in your schedule, your differentials again may grow larger than your full backup.
There are several cases where differential database backups are a pivotal key to recovering your database quickly and with as little data loss as possible. Let’s take a look at a few scenarios.

Shortening Recovery Time

This alone should be good enough reason for you to investigate differential backups. Every restore operation has a cost-in-time associated to it. Remember, even if a transaction log backup is virtually empty, there is a cost-in-time to spool up and tear down the restore session for each log backup you apply. Not to mention replaying the transactions in the logs. In many cases, it can be much faster to apply a differential backup than applying multiple transaction log backups. By skipping all the data manipulation and just replacing the altered pages you are reducing the amount of IO required to restore.
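For example, a minimal restore sequence using a differential might look like this (the database name and file paths are illustrative):
RESTORE DATABASE SampleDb FROM DISK = N'D:\Backup\SampleDb_Full.bak' WITH NORECOVERY
RESTORE DATABASE SampleDb FROM DISK = N'D:\Backup\SampleDb_Diff.bak' WITH NORECOVERY
RESTORE LOG SampleDb FROM DISK = N'D:\Backup\SampleDb_Log.trn' WITH NORECOVERY
RESTORE DATABASE SampleDb WITH RECOVERY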

Database in Simple Recovery Mode

There may be situations where you aren’t concerned with up-to-the-minute recovery but still need something better than weekly full backups to meet your recovery goals. Differentials fit in well here. By leveraging differential backups, you can take a single full once a week and daily differentials to cut down on the space needed to store your backups. Also, since differential backups contain all the changes since the last full, to recover you only need the full backup and the differential backup of the time interval you want to restore to. I recommend keeping your differentials just like you would your transaction log backups so if you need to recover your database into another environment or if you suffer corruption in one of your differentials, you still have as much data as possible to restore.
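A sketch of that schedule in T-SQL (the database name and paths are illustrative):
-- Sunday: weekly full backup
BACKUP DATABASE SampleDb TO DISK = N'D:\Backup\SampleDb_Full.bak'
-- Monday through Saturday: one differential per day
BACKUP DATABASE SampleDb TO DISK = N'D:\Backup\SampleDb_Diff_Mon.bak' WITH DIFFERENTIAL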

Large Database with Minimal Data Change

With today’s large disk capacities, it isn’t unusual to see multi-terabyte databases with years of data in them. Moving our full backup schedule out to every two weeks or every month and supplementing with differentials is an excellent way to conserve backup space and shorten time to recovery. Again, we only need the last full, the last differential and any transaction logs after the differential was taken to get us back up to the minute.

Increasing Recoverability

If you only take a full database backup once a week and transaction log backups every 15 minutes, you could end up applying over 670 logs to get your database back online if you have a failure at the end of the week. If you have any errors in one of the transaction log backups, everything after that is pretty much useless to you. If it dies at backup 599, it may not be the end of your business. If it is log 38, it could mean a week’s worth of data gone. Since differential backups don’t break the LSN chain and transaction log backups don’t reset the changed data map, you can use either backup type even when one or the other may have had an error. Differentials allow us to bridge gaps in our transaction log since we can apply any transaction logs taken after the differential backup. This is one of the real strengths of differential backups. So, if you are doing weekly full backups, daily differential backups and transaction log backups every 15 minutes, you are covered in two ways. Normally, you would restore the full then the latest differential backup followed by any additional transaction logs. If you had a differential backup corrupted but your transaction logs were fine, you could still restore fully.

Repairing Log Shipping

Another great use of differentials is to repair your log shipped databases. If anything happens to the LSN chain, in most cases the only way to repair your log ship target is to start over again from a full and then apply all the logs to get it back up to current. If this is a large database or if there are a lot of transaction log backups to recover this could leave you exposed for quite a while. You can always take a differential backup, apply that to the log ship target then restart your log shipping from that point. I have used this technique successfully over the years when there have been network outages causing our log ship targets to fall way behind cutting catch up time from hours to minutes.

Final Thoughts

Incorporating differential backups will add complexity to your backup strategy but the benefits can be staggering. Between the storage savings and reduction in recovery time it’s clear that differential backups should be in your tool belt. I would also encourage you to practice restoring using your differential backups. Try out different failure scenarios like failed transaction logs or differential backups. Make sure you understand how to restore up to the minute and stop at a specific time now that you have differentials in the mix.

Monday, September 24, 2012

Validating Unique Column Name Across Whole Database

Option 1: Check if Column Exists in Current Database
IF EXISTS
(
    SELECT * FROM sys.columns
    WHERE Name = N'NameofColumn'
)
BEGIN
    SELECT 'Column Exists'
    -- add other logic
END
ELSE
BEGIN
    SELECT 'Column Does NOT Exist'
    -- add other logic
END
Option 2: Check if Column Exists in Current Database in Specific Table
IF EXISTS
(
    SELECT * FROM sys.columns
    WHERE Name = N'NameofColumn'
    AND OBJECT_ID = OBJECT_ID(N'tableName')
)
BEGIN
    SELECT 'Column Exists'
    -- add other logic
END
ELSE
BEGIN
    SELECT 'Column Does NOT Exist'
    -- add other logic
END

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function

In simple words, SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The first character of the code is the first character of character_expression, and the second through fourth characters of the code are numbers that represent the letters in the expression. Vowels in character_expression are ignored unless they are the first letter of the string. The DIFFERENCE function returns an integer value. The integer returned is the number of characters in the SOUNDEX values that are the same. The return value ranges from 0 through 4: 0 indicates weak or no similarity, and 4 indicates strong similarity or the same values.
Learning Puzzle 1:
Now let us run following four queries and observe its output.
SELECT SOUNDEX('SQLAuthority') SdxValue
SELECT SOUNDEX('SLTR') SdxValue
SELECT SOUNDEX('SaLaTaRa') SdxValue
SELECT SOUNDEX('SaLaTaRaM') SdxValue
When you look at the result set all the four values are same.
The reason all four values are the same is that, to the SQL Server SOUNDEX function, all four strings are similar-sounding strings.
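As a quick worked example of the coding rules: for 'SLTR', SOUNDEX keeps the first letter S and then codes L→4, T→3, R→6, giving S436. 'SQLAuthority' should produce the same code: the Q is dropped because it shares a code (2) with the adjacent first letter S, the vowels are ignored, and the remaining L, T, R again give S436. 'SaLaTaRaM' also yields S436, because the code is truncated after three digits.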
Learning Puzzle 2:
Now let us run following five queries and observe its output.
SELECT DIFFERENCE (SOUNDEX('SLTR'), SOUNDEX('SQLAuthority'))
SELECT DIFFERENCE (SOUNDEX('TH'), SOUNDEX('SQLAuthority'))
SELECT DIFFERENCE ('SQLAuthority', SOUNDEX('SQLAuthority'))
SELECT DIFFERENCE ('SLTR', SOUNDEX('SQLAuthority'))
SELECT DIFFERENCE ('SLTR', 'SQLAuthority')
When you look at the result set, you will get results in the range from 1 to 4.
Here is how it works: a result of 0 means the two values are absolutely not relevant to each other, and the closer the result is to 4, the more relevant (similar) the values are.
Have you ever used above two functions in your business need or on production server? If yes, would you please leave a comment with use cases. I believe it will be beneficial to everyone.
Reference: Pinal Dave (http://blog.SQLAuthority.com)

Introduction to PERCENT_RANK() – Analytic Functions Introduced in SQL Server 2012

SQL Server 2012 introduces the new analytic function PERCENT_RANK(). This function returns the relative standing of a value within a query result set or partition. It is difficult to explain this in words, so I’d like to explain its function through a brief example. Instead of creating a new table, I will use the AdventureWorks sample database, as most developers use that for experimenting.
Now let us run the following query:
USE AdventureWorks
GO
SELECT SalesOrderID, OrderQty,
RANK() OVER (ORDER BY SalesOrderID) Rnk,
PERCENT_RANK() OVER (ORDER BY SalesOrderID) AS PctDist
FROM Sales.SalesOrderDetail
WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
ORDER BY PctDist DESC
GO
The above query will give us the following result:
Now let us understand the result set. You will notice that I have also included the RANK() function in this query. The reason for including the RANK() function is that PERCENT_RANK() in fact uses RANK() to find the relative standing of each row.
The formula to find PERCENT_RANK() is as following:
PERCENT_RANK() = (RANK() – 1) / (Total Rows – 1)
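As a quick worked example: in a result set of 10 rows, the row with RANK() = 4 gets PERCENT_RANK() = (4 – 1) / (10 – 1) ≈ 0.333, and the first row always gets (1 – 1) / (10 – 1) = 0.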
Now let us attempt the same example with the PARTITION BY clause:
USE AdventureWorks
GO
SELECT SalesOrderID, OrderQty, ProductID,
RANK() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) Rnk,
PERCENT_RANK() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS PctDist
FROM Sales.SalesOrderDetail s
WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
ORDER BY PctDist DESC
GO
Now you will notice that the same logic is followed in the following result set.
Now I have a quick question for you: how many of you knew the logic/formula of PERCENT_RANK() before this blog post?
Reference: Pinal Dave (http://blog.SQLAuthority.com)

How to rebuild the Master Database?



The master database is a system database and it contains information about the running server’s configuration. When SQL Server 2005 is installed, it usually creates the master, model, msdb, tempdb, resource and distribution system databases by default. The master database is the one database that is absolutely required; without it, SQL Server cannot be started. This is the reason it is extremely important to back up the master database.

To rebuild the master database, run Setup.exe, which can verify and repair a SQL Server instance and rebuild the system databases. This procedure is most often used to rebuild the master database for a corrupted installation of SQL Server.
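For SQL Server 2005, the rebuild command run from the installation media looks roughly like this (the instance name and SA password are placeholders):
start /wait setup.exe /qn INSTANCENAME=MSSQLSERVER REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD=StrongPassword@123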

Why does a Stored Procedure take a long time to run the first time?

The reason is that stored procedures are compiled when they run for the first time. There is a misconception that stored procedures are pre-compiled; they are not. They are compiled only during their first execution, and for every subsequent run the compiled plan is reused from the cache.
If you create any SP, you will find that there is no cache entry for the execution of that SP.
After running the SP for the first time, an entry is made in the plan cache.
In the following script, we can notice the difference in the cache between when the SP was created and when the SP was executed.
/* Exercise to verify if stored procedure is pre-compiled */
USE AdventureWorks
GO
-- Clean Cache
DBCC FREEPROCCACHE
GO
IF EXISTS (SELECT * FROM sys.objects
WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[CompSP]') AND type IN (N'P', N'PC'))
DROP PROCEDURE [dbo].[CompSP]
GO
-- Create New Stored Procedure
CREATE PROCEDURE CompSP
AS
SELECT * FROM HumanResources.Department
GO
-- Check the Query Plan for SQL Batch
-- You will find that there is no ObjectName with CompSP
SELECT cp.objtype AS PlanType,
OBJECT_NAME(st.objectid, st.dbid) AS ObjectName,
cp.refcounts AS ReferenceCounts,
cp.usecounts AS UseCounts,
st.TEXT AS SQLBatch,
qp.query_plan AS QueryPlan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st;
GO
/* Execute Stored Procedure */
EXEC CompSP
GO
-- Check the Query Plan for SQL Batch
-- You will find that there is one entry with ObjectName CompSP
SELECT cp.objtype AS PlanType,
OBJECT_NAME(st.objectid, st.dbid) AS ObjectName,
cp.refcounts AS ReferenceCounts,
cp.usecounts AS UseCounts,
st.TEXT AS SQLBatch,
qp.query_plan AS QueryPlan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st;
GO
The result set of the above query is as follows.
The above script to find out the cache is taken from the white paper SQL SERVER – Plan Caching in SQL Server 2008 by Greg Low. You can also read my follow up article SQL SERVER – Plan Caching and Schema Change – An Interesting Observation, where I have given an interesting conversation with Greg Low.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Tuesday, September 18, 2012

What are the Different Types of Join?




Cross Join

A cross join that does not have a WHERE clause produces the Cartesian product of the tables involved in the join. The size of a Cartesian product result set is the number of rows in the first table multiplied by the number of rows in the second table. A common example is when a company wants to combine each product with a pricing table to analyze each product at each price.
...

Inner Join

A join that displays only the rows that have a match in both joined tables is known as an inner join. This is the default type of join in the Query and View Designer.

Outer Join

A join that includes rows even if they do not have related rows in the other joined table is an outer join. You can create three different outer joins to specify the unmatched rows to be included:

Left Outer Join: In a Left Outer Join, all rows in the first-named table (the “left” table, which appears leftmost in the JOIN clause) are included. Unmatched rows in the right table do not appear.

Right Outer Join: In a Right Outer Join, all rows in the second-named table (the “right” table, which appears rightmost in the JOIN clause) are included. Unmatched rows in the left table are not included.

Full Outer Join: In a Full Outer Join, all rows in all joined tables are included, whether they are matched or not.
Self Join

This is a particular case where one table joins to itself, with one or two aliases to avoid confusion. A self join can be of any type, as long as the joined tables are the same. A self join is rather unique in that it involves a relationship with only one table. A common example is when a company has a hierarchical reporting structure whereby one member of staff reports to another. A self join can be an outer join or an inner join.
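To make these concrete, here is a small sketch using two hypothetical tables, Employee(EmployeeID, Name, DeptID, ManagerID) and Department(DeptID, DeptName):
-- Cross join: every employee paired with every department (Cartesian product)
SELECT e.Name, d.DeptName FROM Employee e CROSS JOIN Department d
-- Inner join: only employees whose DeptID matches a department
SELECT e.Name, d.DeptName FROM Employee e INNER JOIN Department d ON e.DeptID = d.DeptID
-- Left outer join: all employees, even those with no matching department
SELECT e.Name, d.DeptName FROM Employee e LEFT OUTER JOIN Department d ON e.DeptID = d.DeptID
-- Self join: each employee paired with his or her manager from the same table
SELECT e.Name AS Employee, m.Name AS Manager
FROM Employee e INNER JOIN Employee m ON e.ManagerID = m.EmployeeID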

SQL SERVER – Performance Tuning – Part 2 of 2 – Analysis, Detection, Tuning and Optimizing

This is the second part of SQL SERVER – Performance Tuning – Part 1 of 2 – Getting Started and Configuration. I suggest you read the first part before continuing with this second part.

Analysis and Detection

As you may have noticed, configuration of the data source and profile is a very easy task, and if you are familiar with the tool, it can be done in less than 2 minutes. However, while configuration is an important aspect, appropriate analysis of the data is more important, since that is what leads us to appropriate results.
Once configuration is over, the screen shows the results of the profiling session. Now the screen will start returning information from various sources in detail. After a few minutes it will display the exact information regarding what is going on under the hood of SQL Server.
Let us see each of the sections in detail. The leftmost part shows the SQL statement. The SQL statement section is extremely important; once expanded, it shows exactly what is going on with respect to the SQL Server load on the system.

Identifying the Most Critical Query

A frequently-asked question by developers is: in real systems there are millions of queries executing in a given time period. If I have to begin tuning queries, where should I begin, and which query should I tune to give me maximum return with respect to the performance? Well, this is the precise question the SQL Statement area attempts to answer. It displays the queries in terms of the percentage of DB Activity. One can easily narrow down the query which is most expensive and focus on tuning the query.

Identifying Wait Statistics

This is another very popular aspect covered in the tool. If you know me, I am a big believer in SQL Wait Stats. When SQL Server is executing any task, and if for any reason it has to wait for resources to execute the task, this wait is recorded by SQL Server with the reason for the delay. Later on we can analyze these wait stats to understand the reason the task was delayed and maybe we can eliminate the wait for SQL Server. It is not always possible to remove the wait type 100%, but there are a few suggestions that can help. Here is the SQL Server Wait Stats (http://wp.me/p2NUQ-38d) series which I have written which explains all the major wait stats in detail.

Profiling Active Sessions with Resources

Well, personally I found this section the most useful and insightful. The biggest challenge is to identify which resource is bottlenecked in SQL Server. It is quite possible that all the queries are running at their best and there is nothing left to tune with respect to the query, but still the performance is very slow. The reason can be that resources on the system are bottlenecked. The scarcity of resources is creating the performance degradation scenario. The top horizontal section of the window clearly maps which resource was utilized at exactly what time. The same colors are also used in the SQL Statement, Events and Sessions sections, which helps visually map queries, resources and events.
I believe once the query profiling is done, keeping resources in mind, the performance tuning task is much easier. Here are the steps a developer should be taking when they are trying to optimize queries using DB Optimizer:
  1. Identify slow query or offending query
  2. Identify the resource bottleneck using active sessions and events
  3. Tune the offending query
It is equally important to tune the offending query as it is to identify it. In the next section we will see how DB Optimizer can help optimize the queries in a few clicks.

Tuning and Optimizing

When we identify the offending query, our next intention is to optimize it. To tune any query we need to understand all the details surrounding the query. What impressed me the most about this tool was its instant database-level analysis. Once we select the query to tune, we can also see various deep-down profiling details for individual queries as well.
First go to SQL statement and click on SQL Text. It will give the exact statement which ran, with all the parameters. The next tabs explain events, sessions and procedures which were associated with SQL Statement.
Once the profiling details associated with any query are understood, the next important task is to tune the query and understand the execution plan. If you right click a query in the SQL Statement section, there are two options visible: 1) Explain Plan and 2) Tune

Explain Plan

You can see that now we are entering into specific details related to the query.
In this section you can see a very clear, tabular-formatted explanation of the execution plan. There is a variety of information associated with the execution plan. Personally I prefer to see the execution plan in XML format as it explains various details. However, when in a hurry it is not always possible to spend time on XML. There are cases when an organization does not have the time and bandwidth to do this.
The Execution Plan section is not only important for understanding queries but it is a great learning tool as well. I personally spend quite a lot of time on the execution plan. I spend time even on good queries as it is important to know what made the query so good. I think the Explain Plan can aid in learning this technique. Pay attention to I/O Cost, CPU Cost, Row size, Rows and Execution count.

Tuning Execution Plan

I think this is the most interesting part of the DB Optimizer tool. It is called TUNE! I think in one simple word it says what it does and when I saw this for the first time, I just felt like clicking on it. There are cases when a developer or DBA does not want to dig deeper in the scenario, or there are cases when they just want to try alternate scenarios as well. I believe this is a good place to start. Just hit Tune and let DBOptimizer suggest the best plan.
Though I was intrigued to hit Tune, I was a bit wary to click it, as I was not sure if the tool would then execute any T-SQL statements or tuning script against the database. I am the guy who trusts anybody and everybody, but even then, I always like to at least know what is going to happen when it concerns my most critical database. Fortunately, when I clicked on Tune, it showed me an overview screen where two options were provided: 1) Generate Cases and 2) Perform Detail Analysis.
This was a very clean and easy to understand interface. The part I liked was generating cases. Often when I am developing T-SQL I want to try out various options, but there is always a deadline to finish the project, and it makes sense to spend a limited amount of time on any query, as return on time invested is very important as well. I think this tool can automate this mundane task and help figure out what the optimal solution is.
Clicking on the analysis tab also provides various index options. Additionally, it provides the cost for each index so you know which will be most efficient for a specific query.
If you click on Create Index it also generates the index for you, which you can apply to your database. I think there are many other features which we could discuss here if I had the time. Additionally, please note that creating an index may or may not be the optimal solution. It is quite possible that a query re-write may solve your problem, or even that dropping unused indexes is the solution. Anyway, this tool gives lots of suggestions along with their impact, which makes decisions much easier.

Caution

I have always believed in the philosophy that there is no replacement for the human brain. No matter the advancements in artificial intelligence, there are places and cases when the human brain makes better decisions than any machine or tool. Though I have found DB Optimizer very well balanced and organized, I will also watch its decisions. It is a great tool which so far has proved itself, but just like any other tool, it should be used with human intelligence. Just as the Missing Index DMV suggests plenty of indexes to create but the DBA usually selects and creates only the most useful ones, this tool will suggest many solutions along with their impact. It is our responsibility to select the most powerful solution, keeping overall server health in focus.

Top 5 things I like the most

There are quite a few things which I really liked, but I decided to list the top 5 here to keep it sharp and short.
  1. Clear and easy to navigate interface
  2. Index analysis is fairly accurate and very informative
  3. Execution plan layout and arrangement is very well organized
  4. Painless, simple installation
  5. Well documented help and community support

Top 3 things for wish list for next version

Nobody is perfect. After using this tool for a long time, I have a wish list for this product. I am listing the top 3 items from this list.
  1. Further explanation of the wait stats
  2. Do not generate cases – currently the tool generates various cases, and there are times when I do not want cases to be generated. I think this feature is a bit over-ambitious; the basic users this tool targets may never need it
  3. Additional reference materials for the execution plans

Summary

Overall, DB-Optimizer-XE is indeed a flagship performance tuning product from Embarcadero. As I said earlier, the right tool can make life easier for experts as well as remove all the uncertainty from a performance tuning project. I also found this product extremely easy to use, with a very clear user interface and even some fun options (like Tune). I think anyone who is interested in performance tuning should at least give DB Optimizer a try to see if they like it.
DB-Optimizer-XE has a lot of great features, but it’s not perfect.  Keep in mind that you have to be smarter than the tools you use, so you can’t rely on one program alone to solve all your problems.  However, with that said, I highly recommend this program to anyone who is facing performance tuning now and in the future.
Reference: Pinal Dave (http://blog.sqlauthority.com)

SQL SERVER – Performance Tuning – Part 1 of 2 – Getting Started and Configuration

Performance tuning is always a complex subject whenever one has to deal with it. When I was beginning with SQL Server, this was the most difficult area for me. However, there is a saying that if one has to overcome their fear one has to face the fear first. So I did exactly this. I started to practice performance tuning.
Early in my career I often failed when I had to deal with performance tuning tasks. However, each failure taught me something. It took quite a while and about 100+ various projects before I started to consider myself a guy who knows something about performance tuning.
Every performance tuning exercise gave me new experience. I firmly believe in “Practice makes perfect” when I am involved in performance tuning projects. There were times when I could figure out problems by just looking at few configurations and settings. When asked how I knew this so quickly my usual answer was “experience.”
Quite often I get requests to share my expertise in performance tuning. This is when I wish it were easier. If I had to go back and do one thing during my whole career of performance tuning, I would document every single performance tuning rule that I have come across in my life. Once everything is documented, the expertise of performance tuning can be measured, and the results of these rules can give additional confidence while doing performance tuning.
Performance tuning used to be the responsibility of the DBA and developers who built the code. However, recently I have seen in many organizations that there are dedicated experts who are responsible for performance tuning of the server. Often they have to manage multiple servers and heterogeneous RDBMS. The job is more stressful and more challenging than we know.
I constantly try to test out various aspects of performance tuning as well as all available tools which can help me select the optimal solution for performance tuning. I recently tested out DB Optimizer from Embarcadero and here is very short tutorial of how this tool can be used, as well few of the pros and cons.
Let us divide this article into the following subjects.
  1. Getting Started
  2. Analysis and Detection
  3. Tuning and Optimizing

Getting Started

In this section we will go over the basic steps to install DB Optimizer XE2 Version 3.0. First of all go to DB-Optimizer-XE and download the trial version of DB Optimizer. The installation is very quick and simple: it is pretty much clicking NEXT … NEXT and Finish!

After you install and load DB Optimizer, right after the splash screen it will ask for the workspace where all the files of this optimization project will be saved. I think the most difficult part is over by now. From here the tool is very easy to use and self-explanatory. The very first screen has three sections: overview, tutorial and workbench.

I strongly suggest that if you are new to the tool then go over the tutorial. The tutorial is well organized and very detailed. It not only explains how one can do various performance tuning tasks but it often explains why they are done a certain way.

Once the tutorial is over or if you are an experienced user, head to the workbench directly. Workbench is the area where we will do various performance tuning related tasks.

Configuration


Here is one thing I have to point out: even though there is no wizard or step-by-step setup guide – the interface is very intuitive. For example, as soon as you start workbench, the very first screen is data source. Once data source is configured a simple right click on the data source is all one needs to work with performance tuning aspects.

This tool lets you connect to multiple data sources. We will select SQL Server as the data source and give it an easy name so we can refer to it easily in the future. One specific thing worth noting is that while connecting to the data source, it lets you specify what kind of data source you are connecting to. Are you connecting to Development, Test, QA or Production? This is very essential so that we do not deploy changes to the incorrect server while we are performing various tasks with DB Optimizer.
The next step is to provide details of the connection string and check if the connection is successful by clicking on the Test Connection button at the bottom. If the host address along with username and password is correct, the connection will pop up with a “Connection was successful” message.

Click on Finish after the connection is successfully made to the data source. It will register the data source and bring up the same screen where you started.

Expand the data source which we have registered. Once you expand you will see every single object listed under the data source.

After configuring the data source, the next task is to configure the profile.

Profile configuration is a very easy process: we define where the various captured data will be stored in the database, as well as what data source will be captured.

Once the data profile is configured go ahead and click on Profile. I think we are now 100% done in configuring the setup. The next step is to analyze our profiled data.
In the next blog post we will continue with the second part: Analysis, Detection, Tuning and Optimizing.
Till then, you can download and configure DB-Optimizer-XE.
Reference: Pinal Dave (http://blog.sqlauthority.com)

What is a sub-query? Explain the properties of a sub-query.



Sub-queries are often referred to as sub-selects, as they allow a SELECT statement to be executed arbitrarily within the body of another SQL statement. A sub-query is executed by enclosing it in a set of parentheses. Sub-queries are generally used to return a single row as an atomic value, though they may be used to compare values against multiple rows with the IN keyword.

A subquery is a SELECT statement that is nested within another T-SQL statement. A subquery SELECT statement, if executed independently of the T-SQL statement in which it is nested, will return a result set; meaning a subquery SELECT statement can stand alone and does not depend on the statement in which it is nested. A subquery SELECT statement can return any number of values and can be found in the column list of a SELECT statement, or in the FROM, GROUP BY, HAVING, and/or ORDER BY clauses of a T-SQL statement. A subquery can also be used as a parameter to a function call. Basically, a subquery can be used anywhere an expression can be used.

Properties of Sub-Query

A subquery must be enclosed in parentheses.
A subquery must be placed on the right-hand side of the comparison operator.
A subquery cannot contain an ORDER BY clause.
A query can contain more than one sub-query.
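For example, a simple sketch against the AdventureWorks sample database used elsewhere on this blog:
-- The parenthesized subquery returns a single atomic value (the average price)
SELECT ProductID, Name, ListPrice
FROM Production.Product
WHERE ListPrice > (SELECT AVG(ListPrice) FROM Production.Product)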

What is the use of DBCC commands?




DBCC stands for Database Consistency Checker. We use these commands to check the consistency of databases, i.e., for maintenance, validation tasks and status checks.
E.g. DBCC CHECKDB – ensures that tables in the database and their indexes are correctly linked.
DBCC CHECKALLOC – checks that all pages in a database are correctly allocated.
...
DBCC CHECKFILEGROUP – checks all tables in the filegroup for any damage.
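For instance (the database name is illustrative):
-- Check the logical and physical integrity of all objects in a database
DBCC CHECKDB ('AdventureWorks')
-- Check the allocation of all pages in the current database
DBCC CHECKALLOC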

SQL SERVER – SSMS 2012 Reset Keyboard Shortcuts to Default

As a technologist, I love my laptop very much and I do not lend it to anyone as I am usually worried that my settings would be messed up when I get it back from its borrower. Honestly, I love how I have set up my laptop and I enjoy the settings and programs I have placed on my computer. If someone changes things there – it will surely be annoying for me.
Recently at one of the conferences I was attending, a small accident happened – one of the speakers’ hard drives failed. The owner immediately panicked and had no clue what to do. I suggested that he might not be able to show the demos but could still show a few things with the product. He accepted my suggestion and finished the presentation. Good thing everybody looked satisfied. Little did I know that a small accident was waiting for me later that day.
When I opened my SQL Server Denali, all my shortcuts were totally out of order. When I hit F5, instead of executing a query, it was opening up some debugging screen. I was very annoyed, so I decided to surf the Internet to find out how I could reset the shortcuts to the SQL Server 2012 defaults. There was so much different advice that it further confused me. After reading 4-5 different methods, I used the following method to reset my SQL Server 2012 keyboard shortcuts:
Go to SSMS >> Tools >> Import and Export Settings…
As I wanted to reset all the settings I selected the “Reset all settings” option, but you can select other options as per your need.
I suggest you save your current settings; however, in my case, I did not like the current settings, so I just decided to overwrite them.
Here I have two options – I can either set my shortcuts to the SQL Server 2012 settings or to Visual Studio 2010. I went ahead with the SQL Server 2012 option.
Success!
Reference: Pinal Dave (http://blog.SQLAuthority.com)

Thursday, September 13, 2012

Why is Data Modeling Important?


Data modeling is probably the most labor-intensive and time-consuming part of the development process. The goal of the data model is to make sure that all data objects required by the database are completely and accurately represented. Because the data model uses easily understood notations and natural language, it can be reviewed and verified as correct by the end-users.
In computer science, data modeling is the process of creating a data model by applying a data model theory to create a data model instance. A data model theory is a formal data model description. When data modeling, we are structuring and organizing data. These data structures are then typically implemented in a database management system. In addition to defining and organizing the data, data modeling will impose (implicitly or explicitly) constraints or limitations on the data placed within the structure.

...
Managing large quantities of structured and unstructured data is a primary function of information systems. Data models describe structured data for storage in data management systems such as relational databases. They typically do not describe unstructured data, such as word processing documents, email messages, pictures, digital audio, and video. (Reference : Wikipedia)

Wednesday, September 12, 2012

What are the Advantages of Using Stored Procedures?



Stored procedures can reduce network traffic and latency, boosting application performance.

Stored procedure execution plans can be reused; they stay cached in SQL Server’s memory, reducing server overhead.

Stored procedures help promote code reuse.

Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients.

Stored procedures provide better security to your data.

Monday, September 10, 2012

What does it mean to have quoted_identifier on? What are the implications of having it off?


When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and literals must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be quoted and must follow all Transact-SQL rules for identifiers.
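A minimal sketch of the behavior (the table and column names are illustrative):
SET QUOTED_IDENTIFIER ON
GO
-- With the setting ON, double quotes delimit identifiers, so names with spaces work
CREATE TABLE "Order Details" ("Order ID" INT, Note VARCHAR(50))
GO
INSERT INTO "Order Details" VALUES (1, 'literals still use single quotes')
GO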

What are the sp_configure command and SET commands?


Use sp_configure to display or change server-level settings. To change database-level settings, use ALTER DATABASE. To change settings that affect only the current user session, use the SET statement.
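A quick sketch of each level (the option names and values shown are illustrative of commonly used settings):
-- Server level: enable advanced options, then change a server setting
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE
-- Database level
ALTER DATABASE AdventureWorks SET RECOVERY SIMPLE
-- Session level
SET NOCOUNT ON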

What is the difference between the DELETE & TRUNCATE commands?




The DELETE command removes rows from a table based on the condition that we provide in a WHERE clause. TRUNCATE will actually remove all the rows from a table, and there will be no data in the table after we run the TRUNCATE command.

TRUNCATE
...

TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table’s data, and only the page deallocations are recorded in the transaction log.
TRUNCATE removes all rows from a table, but the table structure, its columns, constraints, indexes and so on, remains. The counter used by an identity for new rows is reset to the seed for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint. Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a DELETE trigger.
TRUNCATE can be rolled back only if it is executed inside an explicit transaction that has not yet been committed.
TRUNCATE is a DDL command.
TRUNCATE resets the identity of the table.
DELETE

DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
If you want to retain the identity counter, use DELETE instead. If you want to remove the table definition as well as its data, use the DROP TABLE statement.
DELETE can be used with or without a WHERE clause.
DELETE activates triggers.
DELETE can be rolled back.
DELETE is a DML command.
DELETE does not reset the identity of the table.
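A small sketch contrasting the two (the table is hypothetical):
CREATE TABLE dbo.SampleTable (ID INT IDENTITY(1,1), Val VARCHAR(10))
GO
-- DELETE: logged per row, fires DELETE triggers, identity counter keeps its current value
DELETE FROM dbo.SampleTable WHERE ID > 100
-- TRUNCATE: deallocates pages, resets the identity counter back to its seed
TRUNCATE TABLE dbo.SampleTable
GO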