Sunday, January 31, 2010

Lifetime (age) of Shared, Exclusive and Update locks

SQL Server uses locks to maintain consistency while multiple transactions execute simultaneously. We know the various types of locks at various levels. But what is the lifetime, or age, of a lock? Let me answer it one by one for three types of locks:

Shared Locks (S): The age of a shared lock is decided by the isolation level of the session. The following table describes the isolation levels and the age of the shared lock:

 

Isolation Level                 Age of Shared Lock
READ UNCOMMITTED                No lock held
READ COMMITTED                  Execution time of the SELECT statement
READ COMMITTED WITH SNAPSHOT    No lock held
REPEATABLE READ                 Execution time of the outermost user-defined transaction
SNAPSHOT                        No lock held
SERIALIZABLE                    Execution time of the outermost user-defined transaction

 

Exclusive Locks (X): Execution time of the outermost user-defined transaction.

Update Locks (U): Execution time of the outermost user-defined transaction.

Yes, the age of exclusive locks and update locks is always the length of the outermost user-defined transaction. Isolation levels have no impact on the lifetime of these locks.

This behavior is the same at the row (key), page and table levels. The age of an intent lock is the same as the age of its base lock; for example, an Intent Shared (IS) lock lives as long as its corresponding Shared (S) lock.
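One way to observe these lifetimes yourself is the sys.dm_tran_locks DMV. Below is a sketch (the table name dbo.TestLock is illustrative; any small table with an ID column will do):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM dbo.TestLock WHERE ID = 1
-- Under REPEATABLE READ the shared (S) lock is still held at this point:
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
COMMIT TRAN

If you repeat the same experiment under READ COMMITTED, the sys.dm_tran_locks query after the SELECT no longer shows the S key lock, because it was released as soon as the SELECT statement finished.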

Indexes reduce blocking

We know that indexes improve query performance by SEEKING records instead of SCANNING the table: in an index seek the maximum number of comparisons equals the number of index levels, while in a scan it equals the total number of records in the table. But the reduced number of comparisons is not the only reason for improved performance. There is another reason, and because of it an index not only improves query performance but also reduces blocking. Let us understand the behavior of table scanning and index seeking with the following scenario.

Suppose a table has 8 records with IDs 1 to 8 and no index on the ID column. We want to update the record with ID 6, while the record with ID 3 is already locked by another process. Ideally our query to update record ID 6 should complete without waiting for the release of record ID 3. But in fact our query waits for record ID 3 to be released, because the table is scanned and the scan blocks when it reaches a locked record.

image
Now let us see what changes if the table has an index on the ID column. The record with ID 6 is now located by traversing the index levels, so record ID 3 never comes in the way and the query completes without waiting for its release.
image

Let us test it. In SSMS execute the below batch:
 
CREATE TABLE dbo.TestDeadlock(ID INT IDENTITY, Name VARCHAR(25))
GO
INSERT INTO dbo.TestDeadlock VALUES('Ajay')
INSERT INTO dbo.TestDeadlock VALUES('Vijay')
 
BEGIN TRAN
UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Rachit' WHERE ID = 1
WAITFOR DELAY '00:00:20'
COMMIT TRAN



Now in another query window (to start another session) execute the below query:

UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Mahesh' WHERE ID = 2

Notice that the second query does not complete until the first query completes.

Now create an index on ID column as below:

CREATE CLUSTERED INDEX ci_ID ON dbo.TestDeadlock (ID)

Execute the queries in both sessions again. Notice that the second query now completes instantly.

So this is another way an index improves query performance: it also reduces blocking.

Getting Deadlock details using Traces

A deadlock is circular locking among two or more processes. SQL Server identifies it automatically and resolves it by killing one of the processes. Whenever our process is killed because of a deadlock we get the following message:
Msg 1205, Level 13, State 51, Line 4
Transaction (Process ID 53) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Here we do not know the details of the deadlock, like the blocking resource and the queries of our process and the other processes. To get these details we can use SQL Server Profiler. But Profiler is a bit complex to configure and consumes resources heavily, which may significantly degrade the performance of the server.
We have another method that is easy to implement and very lightweight. On the server execute the below statement:
DBCC TRACEON (1204,3605, -1)
That’s all it takes to track and log the complete details of every deadlock in any database on that server. Whenever a deadlock occurs, the details are logged in the SQL Server error log file located in the Log folder (for me this is C:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Log). The details look like this:
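To confirm that the flags are active, and to turn them off once you have captured the deadlock you were chasing, you can use:

DBCC TRACESTATUS (1204, 3605, -1)   -- shows whether the flags are enabled globally
DBCC TRACEOFF (1204, 3605, -1)      -- disables them when tracing is no longer needed

Note that trace flags enabled with -1 apply globally but do not survive a server restart; to make them permanent, add -T1204 and -T3605 as startup parameters for the SQL Server service.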

Deadlock encountered .... Printing deadlock information
Wait-for graph
Node:1
KEY: 7:72057594048413696 (0300146b1a34) CleanCnt:2 Mode:X Flags: 0x1
       Grant List 1:
         Owner:0x040E8740 Mode: X        Flg:0x40 Ref:0 Life:02000000 SPID:54 ECID:0 XactLockInfo: 0x043BCC38
         SPID: 54 ECID: 0 Statement Type: UPDATE Line #: 3
         Input Buf: Language Event: BEGIN TRAN
                UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Mahesh' WHERE ID = 2
                UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Tarun' WHERE ID = 1
COMMIT TRAN
      Requested by:
        ResType:LockOwner Stype:'OR'Xdes:0x0428F9F8 Mode: X SPID:53 BatchID:0 ECID:0 TaskProxy:(0x0491C354) Value:0x40e8cc0 Cost:(0/0)
     
      Node:2
      KEY: 7:72057594048413696 (0200c411ba73) CleanCnt:2 Mode:X Flags: 0x1
       Grant List 0:
         Owner:0x040E8D00 Mode: X        Flg:0x40 Ref:0 Life:02000000 SPID:53 ECID:0 XactLockInfo: 0x0428FA20
         SPID: 53 ECID: 0 Statement Type: UPDATE Line #: 4
         Input Buf: Language Event: BEGIN TRAN
                UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Rachit' WHERE ID = 1
                WAITFOR DELAY '00:00:10'
                UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Karan' WHERE ID = 2
COMMIT TRAN
      Requested by:
        ResType:LockOwner Stype:'OR'Xdes:0x043BCC10 Mode: X SPID:54 BatchID:0 ECID:0 TaskProxy:(0x0495A354) Value:0x40ea500 Cost:(0/256)
     
      Victim Resource Owner:
       ResType:LockOwner Stype:'OR'Xdes:0x0428F9F8 Mode: X SPID:53 BatchID:0 ECID:0 TaskProxy:(0x0491C354) Value:0x40e8cc0 Cost:(0/0)

The useful information I extract from the above details is as follows:

Deadlock was between two nodes
Node: 1
SPID: 54
Blocking Resource: Key (clustered index row)
Mode of lock: X (exclusive)
T-SQL Batch that caused blocking:
BEGIN TRAN
                UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Mahesh' WHERE ID = 2
                UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Tarun' WHERE ID = 1
COMMIT TRAN

Node: 2
SPID: 53
Blocking Resource: Key (clustered index row)
Mode of lock: X (exclusive)
T-SQL Batch that caused blocking:
BEGIN TRAN
                UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Rachit' WHERE ID = 1
                WAITFOR DELAY '00:00:10'
                UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Karan' WHERE ID = 2
COMMIT TRAN

Because of this deadlock SPID 53 was killed and SPID 54 completed successfully.

To test this scenario execute the below script in SSMS:
 
DBCC TRACEON (1204,3605, -1)
GO
 
CREATE TABLE dbo.TestDeadlock(ID INT IDENTITY, Name VARCHAR(25))
GO
CREATE CLUSTERED INDEX ci_ID ON dbo.TestDeadlock (ID) 
GO
INSERT INTO dbo.TestDeadlock VALUES('Ajay')
INSERT INTO dbo.TestDeadlock VALUES('Vijay')


BEGIN TRAN
UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Rachit' WHERE ID = 1
WAITFOR DELAY '00:00:10'
UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Karan' WHERE ID = 2
COMMIT TRAN

Now start another session (by clicking on New Query button) and execute the below query within 10 seconds:

BEGIN TRAN
UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Mahesh' WHERE ID = 2
UPDATE dbo.TestDeadlock WITH (ROWLOCK) SET Name = 'Tarun' WHERE ID = 1
COMMIT TRAN


One of these sessions will fail with the deadlock error. Now open the SQL Server ERRORLOG file; you will find the deadlock details shown above.

Know transaction log file size and free space for all databases

The best way to get each transaction log file’s total size and percentage of space used for all databases is the old DBCC SQLPERF(LOGSPACE) command.

To know the total size and used space of all files (data files and log files) of the current database, use the below query:

SELECT name, size*8 AS total_kb,
       FILEPROPERTY(name, 'SpaceUsed')*8 AS used_kb,
       ((size*8) - (FILEPROPERTY(name, 'SpaceUsed')*8))*100/(size*8) AS [free space percentage],
       [filename]
FROM sys.sysfiles
-- size is in 8 KB pages, so size*8 gives KB
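DBCC SQLPERF(LOGSPACE) returns one row per database but cannot be filtered or sorted directly. One common sketch is to capture its output into a temporary table first (the table and column names below are my own choice; the four columns match the command's resultset):

CREATE TABLE #LogSpace
(
    DatabaseName sysname,
    LogSizeMB float,
    LogSpaceUsedPct float,
    Status int
)
INSERT INTO #LogSpace
EXEC ('DBCC SQLPERF(LOGSPACE)')

-- For example, list the fullest logs first:
SELECT * FROM #LogSpace
ORDER BY LogSpaceUsedPct DESC

DROP TABLE #LogSpace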

Friday, January 29, 2010

Know failed and slow running jobs

The below script returns two resultsets:
1. Jobs that failed today.
2. Jobs that are running slower than their average execution time. The second resultset has a column Execution_Time_Normal with values “Yes” or “No”. The value is Yes for jobs that are running within their normal execution time and No for jobs that are taking 25% more than their average execution time. You can choose the percentage value for your jobs by replacing 25 with your own value in the line “when ((today_execution_time-avg_execution_time)*100)/avg_execution_time > 25” in the last SELECT statement.
 
use msdb
go
--jobs that failed today:

select distinct name from sysjobs inner join sysjobhistory on sysjobs.job_id = sysjobhistory.job_id
where sysjobhistory.run_date = year(getdate())*10000 + month(getdate())*100 + datepart("dd",getdate()) 
and sysjobhistory.run_status = 0

--currently  running jobs:

IF OBJECT_ID('tempdb..#tblResults') IS NOT NULL
drop table #tblResults
CREATE TABLE #tblResults 
(
job_id uniqueidentifier NOT NULL,
last_run_date int NOT NULL,
last_run_time int NOT NULL,
next_run_date int NOT NULL,
next_run_time int NOT NULL,
next_run_schedule_id int NOT NULL,
requested_to_run int NOT NULL, -- BOOL
request_source int NOT NULL,
request_source_id sysname COLLATE database_default NULL,
running int NOT NULL, -- BOOL
current_step int NOT NULL,
current_retry_attempt int NOT NULL,
job_state int NOT NULL
)
INSERT INTO #tblResults
EXEC master.dbo.xp_sqlagent_enum_jobs 1, ''

alter table #tblResults add      Avg_Execution_Time int null
alter table #tblResults add       Today_execution_time int null
go

update ttbl set ttbl.Avg_Execution_Time = ttmp.avg_time
from #tblResults ttbl inner join (select sysjobs.job_id , avg (case LEN(run_duration) 
when 3 then run_duration/100
when 4 then run_duration/100
when 5 then ((LEFT(convert(varchar(5),run_duration),1)*60) + (RIGHT(convert(varchar(5),run_duration),4)/100))
when 6 then (LEFT(run_duration,2)*60) + (RIGHT(run_duration,4)/100) end) avg_time
from sysjobs inner join sysjobhistory on sysjobs.job_id = sysjobhistory.job_id
inner join (select job_id from #tblResults where running = 1) tt on sysjobs.job_id = tt.job_id
where sysjobhistory.run_date > year(getdate())*10000 + (month(getdate())-1)*100 + datepart("dd",getdate()) 
and sysjobhistory.step_id =0
group by sysjobs.job_id ) ttmp on ttbl.job_id = ttmp.job_id

update ttbl set ttbl.Today_execution_time = 
((datepart("hh",getdate())* 60) + (datepart("n",getdate()))) -
case LEN(next_run_time) 
when 3 then next_run_time/100
when 4 then next_run_time/100
when 5 then ((LEFT(convert(varchar(5),next_run_time),1)*60) + (RIGHT(convert(varchar(5),next_run_time),4)/100))
when 6 then (LEFT(next_run_time,2)*60) + (RIGHT(next_run_time,4)/100) end
from #tblResults ttbl
where running = 1

 
SELECT tj.name, case 
when ((today_execution_time-avg_execution_time)*100)/avg_execution_time > 25 then 'No'
else 'Yes' end as Execution_Time_Normal,
Today_execution_time,Avg_Execution_Time,tr.next_run_date as current_run_date,tr.next_run_time as current_run_time, tr.last_run_date, tr.last_run_time,*
FROM #tblResults tr inner join msdb.dbo.sysjobs tj on tr.job_id = tj.job_id
where running = 1
order by tr.last_run_date
 

In SQL Server 2008 SSMS run T-SQL code in debug mode

SQL Server 2008 reintroduced the T-SQL debugging feature in SSMS. Near the Execute button a green Debug button is placed. In debug mode we can break the process at any line using breakpoints, watch variables and expressions using Add Watch, and monitor Locals, the Call Stack and Threads as in Visual Studio.

image

How to Add Watch in SQL Server 2008 SSMS debugging

While debugging my code in SSMS I found the Watch window at Debug menu > Windows > Watch > Watch 1, 2, 3, 4.

Now to monitor a variable’s value I want to add that variable in Watch window.

To add a watch on a variable, break in debug mode, then simply right-click the variable and select Add Watch. To add an expression, just select it, right-click and choose Add Watch.

image

Debugging asks to Configure Firewall for Remote Debugging

I wrote some code in an SSMS new query window, and when I clicked the green Debug button to debug it, a window appeared asking to configure the firewall for remote debugging, as in the screenshot below:

image

The options are self-explanatory. To enable debugging from computers in my network I selected the 2nd option (Unblock remote debugging from computers on the local network), and I am able to run my code with debugging as expected.

In SQL Server the shortcut for Step Into is F11 and for Step Over is F10.

Fixing error: The database principal owns a schema in the database, and cannot be dropped

A user can be dropped using the DROP USER command. But if that user owns any schema, we get the error “The database principal owns a schema in the database, and cannot be dropped.”

To resolve this issue, first we have to know which objects are owned by that particular user. Below is a query to retrieve the list of all objects and schemas owned by a user (in the script, “UserA”).

SELECT o.name ObjectName, o.type_desc as ObjectType
FROM sys.objects o INNER JOIN sys.database_principals p
ON p.principal_id = o.principal_id
WHERE p.name = 'UserA'
UNION
SELECT s.name, 'Schema'
FROM sys.schemas s INNER JOIN sys.database_principals p
ON p.principal_id = s.principal_id
WHERE p.name = 'UserA'

This script returns a result as in following screenshot:

image

Here we got the list of objects that are owned by user UserA.

Now we will transfer ownership of all these objects to a different owner using the ALTER AUTHORIZATION command. After that, the DROP USER command will complete successfully.
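For example, assuming the query above returned a schema named SchemaA (the name here is illustrative; use whatever names your resultset shows), ownership can be transferred to dbo and the user dropped like this:

ALTER AUTHORIZATION ON SCHEMA::SchemaA TO dbo
GO
DROP USER UserA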

User – Schema relation

We know that database objects are contained in schemas, and schemas are owned by database principals. But this is not all there is to the user-schema relation.

Let us understand this relation with below drawing.

image

This drawing demonstrates the following 5 facts:
1. A user can own multiple schemas.

2. A schema can be owned by only a single user.

3. A user can have only a single default schema.

4. A user may own no schema at all, but must still have one default schema.

5. A schema can be the default schema for many users.
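These facts can be seen directly in DDL. Below is a sketch (the login, user and schema names are illustrative):

CREATE LOGIN DemoLogin WITH PASSWORD = 'demo', CHECK_POLICY = OFF
GO
-- Fact 4: the user owns nothing yet, but gets a default schema
-- (DEFAULT_SCHEMA may even name a schema that does not exist yet)
CREATE USER DemoUser FOR LOGIN DemoLogin WITH DEFAULT_SCHEMA = Sales
GO
-- Facts 1 and 2: each schema has exactly one owner,
-- but one user can own several schemas
CREATE SCHEMA Sales AUTHORIZATION DemoUser
GO
CREATE SCHEMA Marketing AUTHORIZATION DemoUser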

Why qualified naming is important

When we call an object without specifying the schema (an unqualified name), SQL Server searches for it in the following sequence:
1. In the sys schema
2. In the user's default schema
3. In the dbo schema
Inside procedural objects (like stored procedures, functions and views), a called object is searched for in the following sequence:
1. In the sys schema
2. In the default schema of the procedural object's owner. This behavior can be changed using the EXECUTE AS clause.
3. In the dbo schema
So the qualified name (schema.object) is important in two respects:
1. It gets the right object.
2. It improves performance by avoiding the search for the object.
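A quick illustration of the difference (a sketch, assuming a table Sales.Orders exists and that Sales is the current user's default schema):

-- Unqualified: SQL Server must resolve the name through the
-- search sequence described above
SELECT * FROM Orders

-- Qualified: resolved directly, no search needed, and it still
-- finds the right table even if another schema also has an
-- Orders table
SELECT * FROM Sales.Orders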

Owner does not have permission

In the below script I create a user “UserA”. UserA then creates a table and executes a SELECT statement on his own table. The script to perform this task is the following:

CREATE LOGIN UserA WITH PASSWORD = 'usera', CHECK_POLICY=OFF
GO
USE Test
GO
CREATE USER UserA
GO
sp_addrolemember 'db_ddladmin', N'UserA'
GO
EXECUTE AS USER = 'UserA'
GO
CREATE TABLE tblUserA (col1 INT)
GO
SELECT * FROM tblUserA
GO
Here we expect the SELECT statement to complete successfully, because UserA is the owner and the owner has all rights on an object.

But the result is an error:
The SELECT permission was denied on the object 'tblUserA',
database 'Test', schema 'dbo'.

Screenshot is following:

image

To understand this problem, we have to know two facts of name resolution:
1. When a user is created without specifying a default schema, dbo is assigned as that user's default schema, even though the new user has no permission on the dbo schema.

2. When an object is created without specifying a schema, the object is created in the default schema of the user.
Now we can understand our problem. We didn't specify a schema for UserA, so the default schema is dbo. UserA then created a table without specifying a schema, so the table was created in the dbo schema. But UserA does not have any permission on the dbo schema, hence the error.
Resolution: Below is the updated script that uses schema and this script executes successfully:
 
CREATE LOGIN UserA WITH PASSWORD = 'usera', CHECK_POLICY=OFF
GO
USE Test
GO
CREATE USER UserA
GO
sp_addrolemember 'db_ddladmin', N'UserA'
GO
CREATE SCHEMA SchemaA AUTHORIZATION UserA
GO
EXECUTE AS USER = 'UserA'
GO
CREATE TABLE SchemaA.tblUserA (col1 INT)
GO
SELECT * FROM SchemaA.tblUserA

Ownership chaining

When a user executes a stored procedure, is permission checked for each object (table, view or function) referenced inside the stored procedure?

The answer is no. SQL Server uses a concept called ownership chaining to decide whether permission on a referenced object will be checked. Permission is checked wherever the ownership chain breaks. Let's understand it with an example.

Here we take three users: UserA, UserB and UserC.
UserA owns a table tblUserA.
UserB owns a table tblUserB.
UserA grants SELECT permission on table tblUserA to UserB.
UserB creates a stored procedure usp_UserB in which both tables are used.
UserB grants EXECUTE permission on the stored procedure to UserC.


Now take the different execution scenarios and their status:

                                    |     UserB executes      |     UserC executes
SP statement                        | Perm check | Result     | Perm check | Result
------------------------------------|------------|------------|------------|--------
Execute usp_UserB                   | No         | Success    | Yes        | Success
SELECT * FROM tblUserB              | No         | Success    | No         | Success
SELECT * FROM tblUserA              | Yes        | Success    | Yes        | Failed
------------------------------------|------------|------------|------------|--------
Stored procedure execution status   | Successfully completed  | Failed

From the above result we can conclude that permission is checked at every statement where the owner of the called object changes. If all referenced objects are owned by the owner of the stored procedure or view, the scenario is called an unbroken ownership chain. But if the owner of any referenced object differs, it is called a broken ownership chain.

The script to verify the above explanation is as below:

CREATE LOGIN UserA WITH PASSWORD = 'usera', CHECK_POLICY=OFF 
CREATE LOGIN UserB WITH PASSWORD = 'userb', CHECK_POLICY=OFF
CREATE LOGIN UserC WITH PASSWORD = 'userc', CHECK_POLICY=OFF
GO
USE Test
GO
CREATE USER UserA
CREATE USER UserB
CREATE USER UserC
GO
sp_addrolemember 'db_ddladmin', N'UserA'
GO
sp_addrolemember 'db_ddladmin', 'UserB'
GO
sp_addrolemember 'db_ddladmin', 'UserC'
GO
CREATE SCHEMA SchemaA AUTHORIZATION UserA
GO
CREATE SCHEMA SchemaB AUTHORIZATION UserB
GO
CREATE SCHEMA SchemaC AUTHORIZATION UserC
GO
EXECUTE AS USER = 'UserA' 
GO
CREATE TABLE SchemaA.tblUserA (col1 INT)
GO
GRANT SELECT ON SchemaA.tblUserA TO UserB
GO
REVERT
GO
EXECUTE AS USER = 'UserB' 
GO
CREATE TABLE SchemaB.tblUserB (col1 INT)
GO
CREATE PROC SchemaB.usp_UserB 
AS
SELECT * FROM SchemaB.tblUserB
SELECT * FROM SchemaA.tblUserA
GO
GRANT EXECUTE ON SchemaB.usp_UserB TO UserC
GO
EXEC SchemaB.usp_UserB            -- completes successfully
GO
REVERT
GO
EXECUTE AS LOGIN = 'UserC' 
GO
EXEC SchemaB.usp_UserB    -- Failing with error: The SELECT 
-- permission was denied on the 
-- object 'tblUserA', schema 'SchemaA'.


If you executed the above script in your database for testing, then to undo all the changes execute the below script:

REVERT
GO
REVERT
GO
drop table SchemaA.tblUserA
drop table SchemaB.tblUserB
DROP PROC SchemaB.usp_UserB
DROP SCHEMA SchemaA
DROP SCHEMA SchemaB
DROP SCHEMA SchemaC
GO
drop user  UserA
drop USER UserB
drop USER UserC
GO
drop LOGIN UserA
drop LOGIN UserB
drop LOGIN UserC

Troubleshooting the full transaction log problem

One of my databases' log files is increasing continuously. The database size is 5 GB while the log file has exceeded 50 GB.

To resolve the issue, the first thing I need to know is: why is the log file growing?

SQL Server provides exactly this information in the sys.databases catalog view. Execute the below statement and you will get the answer in the log_reuse_wait_desc column.


SELECT log_reuse_wait_desc, * FROM sys.databases

image

All the possible reasons and their corrective actions are as follows (the number is the log_reuse_wait value, the name is the log_reuse_wait_desc value):

1. CHECKPOINT: In the database, execute the CHECKPOINT command.

2. LOG_BACKUP: Take a log backup (it can take two log backups before the log can actually be reused). TRUNCATE_ONLY can be used if the log is not needed later for restore purposes.

3. ACTIVE_BACKUP_OR_RESTORE: A data backup or a restore is in progress (all recovery models). Wait until the backup or restore completes.

4. ACTIVE_TRANSACTION: A transaction is active. Wait until the active transaction completes. If the transaction is deferred, remove it.

5. DATABASE_MIRRORING: Database mirroring is paused, or under high-performance mode the mirror database is significantly behind the principal database (full recovery model only). Resolve the mirroring issue.

6. REPLICATION: Transactions relevant to the publications are still undelivered to the distribution database. Resolve the replication issue.

7. DATABASE_SNAPSHOT_CREATION: A database snapshot is being created (all recovery models). Wait until the snapshot completes.

8. LOG_SCAN: A log scan is occurring. This is routine; wait until the scan completes.
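For the most common case, LOG_BACKUP, a sketch of the corrective action looks like this (the database name, logical log file name and backup path are illustrative; substitute your own):

BACKUP LOG MyDatabase TO DISK = 'D:\Backup\MyDatabase_log.bak'
GO
-- After the log backup, the space inside the log file is marked
-- reusable. If the physical file itself must be reduced:
USE MyDatabase
GO
DBCC SHRINKFILE (MyDatabase_log, 1024)   -- target size in MB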