Thursday, June 30, 2011

Monitoring Database Mirroring

Database Mirroring Monitor is a graphical user interface tool that enables system administrators to view and update status and to configure warning thresholds on several key performance metrics. You can monitor a mirrored database during a mirroring session to verify whether and how well data is flowing. 
There are different ways to monitor database mirroring, based on what information you want to track.
For basic information about the database mirroring state, safety level, and witness status, you can use catalog views.

Specifically, monitoring a mirrored database allows us to:
·         Verify that mirroring is functioning.
·         Basic status includes knowing if the two server instances are up, that the servers are connected, and that the log is being moved from the principal to the mirror.
·         Determine whether the mirror database is keeping up with the principal database.
·         During high-performance mode, a principal server can develop a backlog of unsent log records that still need to be sent from the principal server to the mirror server. Furthermore, in any operating mode, the mirror server can develop a backlog of unrestored log records that have been written to the log file but still need to be restored on the mirror database.
·         Determine how much data was lost when the principal server instance becomes unavailable during high-performance mode.
·         You can determine data loss by looking at the amount of unsent transaction log (if any) and the time interval in which the lost transactions were committed at the principal.
·         Compare current performance with past performance.
·         When problems are occurring, a database administrator can view a history of the mirroring performance to help in understanding the current state. Looking at the history can allow the user to detect trends in performance, identify patterns of performance problems (such as times of day when the network is slow or the number of commands entering the log is very large).
·         Troubleshoot the cause of reduced data flow between mirroring partners.
·         Set warning thresholds on key performance metrics.
·         If a new status row contains a value that exceeds a threshold, an informational event is sent to the Windows event log. A system administrator can then manually configure alerts based on these events. For more information, see Using Warning Thresholds and Alerts on Mirroring Performance Metrics.

To launch the Database Mirroring Monitor, right-click the database in SSMS, select "Tasks", then "Launch Database Mirroring Monitor".   You then need to register your mirrored database by connecting to either the principal or the mirror, and selecting the mirrored database.  You can then view the current statistics of the mirror, including previous snapshots of the mirror’s status.
The primary drawback to the Database Mirroring Monitor is that you must be watching it to ensure there are no problems.  The Perfmon counters provided by the SQLServer:Database Mirroring object expose much of the same information found in the Database Mirroring Monitor, but with all the benefits of Perfmon counters, such as registering alerts and maintaining a performance baseline.  

Monitor Database Mirroring Status Using T-SQL Statements / Queries
We can also verify/check database mirroring status using the system stored procedure sp_dbmmonitorresults.
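A minimal call might look like this sketch (the database name is just a placeholder; the second argument asks for only the most recent status row, and the third asks the procedure to refresh the status table first):

```sql
-- Return the latest mirroring status row for one database.
-- N'AdventureWorks' is a placeholder: substitute your mirrored database.
EXEC msdb.dbo.sp_dbmmonitorresults
    N'AdventureWorks',  -- database name
    0,                  -- rows to return: 0 = most recent row only
    1;                  -- 1 = update the status table before returning
```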
Catalog Views
There are four views (three catalog views and one dynamic management view) related to mirroring.  They are:
1.   sys.database_mirroring
2.   sys.database_mirroring_endpoints
3.   sys.database_mirroring_witnesses
4.   sys.dm_db_mirroring_connections
These views show information about the mirroring setup, but not a lot about the current status of how the mirror is operating. 
sys.database_mirroring contains a row for every database in the instance; for mirrored databases it shows the current state of the mirroring session (the mirroring columns are NULL for databases that are not mirrored).
sys.dm_db_mirroring_connections has information about how much traffic the connection has experienced, and the last time that it had traffic, but viewing this is more appropriate for checking the configuration of the mirror, not constant monitoring.
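For example, a quick health check against sys.database_mirroring might look like this sketch:

```sql
-- Show role, state, safety level and witness status for mirrored databases.
SELECT DB_NAME(database_id)          AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc,
       mirroring_witness_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;    -- NULL means the database is not mirrored
```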
Perfmon Counters
The counters available are:
Bytes Received/sec
The number of bytes received from the other server, per second
Bytes Sent/sec
The number of bytes sent to the other server, per second
Log Bytes Received/sec
The number of bytes of log received from the principal, per second
Log Bytes Sent/sec
The number of bytes of log sent to the mirror, per second
Log Send Queue KB
The number of kilobytes of log that have not yet been sent to the mirror
Pages Sent/sec
The number of database pages sent per second
Receives/sec
The number of mirroring messages received per second
Redo Bytes/sec
The number of bytes of log rolled forward per second on the mirror
Redo Queue KB
The number of kilobytes of transaction log that remain to be applied to the mirror to roll it forward
Send/Receive Ack Time
The number of milliseconds that messages waited for acknowledgment from the partner
Sends/sec
The number of mirroring messages sent per second
Transaction Delay
The delay, in milliseconds, while waiting for the mirror to commit a transaction
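These counters can also be read from T-SQL through sys.dm_os_performance_counters, which makes them easy to poll from a job. A sketch that pulls a few of the queue and delay counters (the instance_name column holds the database name):

```sql
SELECT instance_name AS database_name,
       counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name IN ('Log Send Queue KB', 'Redo Queue KB', 'Transaction Delay');
```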

Wednesday, June 29, 2011

Outer and Inner Joins

An inner join is a join where the only rows returned are those whose keys exist in both tables. It is the typical join operation, using a comparison operator such as = or <>, and includes equi-joins and natural joins.
Inner joins use a comparison operator to match rows from two tables based on the values in common columns from each table. For example, retrieving all rows where the student identification number is the same in both the students and courses tables.
An outer join displays the rows for all keys in one or both tables, whether or not they have a match: all rows from the first table for a left join, and all rows from the second table for a right join. Outer joins can be a left, a right, or a full outer join.
Outer joins are specified with the LEFT OUTER JOIN, RIGHT OUTER JOIN, or FULL OUTER JOIN keywords in the FROM clause.
The result set of a left outer join includes all the rows from the left table specified in the LEFT OUTER clause, not just the ones in which the joined columns match. When a row in the left table has no matching rows in the right table, the associated result set row contains null values for all select list columns coming from the right table.

A right outer join is the reverse of a left outer join. All rows from the right table are returned. Null values are returned for the left table any time a right table row has no matching row in the left table.
A full outer join returns all rows in both the left and right tables. Any time a row has no match in the other table, the select list columns from the other table contain null values. When there is a match between the tables, the entire result set row contains data values from the base tables.
Cross joins. 
Cross joins combine each row from the left table with every row from the right table, so joining an m-row table to an n-row table returns m × n rows. Cross joins are also called Cartesian products.
For example:
Let's say table1 has the following primary key and data pairs: (1, a), (2, b), (3, c)
Let's also say that table2 has the following primary key and data pairs: (1, fun), (3, can), (4, happen)
So an inner join of table1 to table2 on the primary keys would yield the following resulting triplets (with the common primary key first, the first table's second item second and the second table's second item third): (1, a, fun), (3, c, can)
A left outer join of table1 to table2 on the primary keys would yield the following resulting triplets (same format as above): (1, a, fun), (2, b, NULL), (3, c, can)
A right outer join of table1 to table2 on the primary keys would yield the following resulting triplets (same format as above): (1, a, fun), (3, c, can), (4, NULL, happen)
As a second illustration, consider a users table joined to a telephones table: the inner join shows only the users who have one or more telephones, together with their telephone number(s).
The left outer join additionally lists those users who have no telephone.
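The table1/table2 walkthrough above can be run directly. This sketch uses temp tables, with an illustrative second column named val:

```sql
CREATE TABLE #table1 (id INT PRIMARY KEY, val CHAR(1));
CREATE TABLE #table2 (id INT PRIMARY KEY, val VARCHAR(10));
INSERT INTO #table1 VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO #table2 VALUES (1, 'fun'), (3, 'can'), (4, 'happen');

-- Inner join: (1, a, fun), (3, c, can)
SELECT t1.id, t1.val, t2.val
FROM #table1 t1 INNER JOIN #table2 t2 ON t1.id = t2.id;

-- Left outer join: also returns (2, b, NULL)
SELECT t1.id, t1.val, t2.val
FROM #table1 t1 LEFT OUTER JOIN #table2 t2 ON t1.id = t2.id;

-- Right outer join: also returns (4, NULL, happen)
SELECT t2.id, t1.val, t2.val
FROM #table1 t1 RIGHT OUTER JOIN #table2 t2 ON t1.id = t2.id;
```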

Tuesday, June 28, 2011

DateTime DataTypes

Before SQL Server 2008 there were two datetime data types for storing date and time:
  1.   DateTime
  2.   SmallDateTime

You could not store the date and time separately; you had to split the date and time apart using the CONVERT function.

In SQL Server 2008 four new datetime data types are introduced:


1 – DATE data type
• Allows you to store a date without a time
• By default the date is displayed in YYYY-MM-DD format
• The DATE datatype will accept DATETIME values and implicitly convert them to a DATE by removing the time portion
• Each date variable requires 3 bytes of storage and has a precision of 10 digits
• Range for the DATE datatype is from 0001-01-01 through 9999-12-31.
• The minimum date in SQL Server 2005 (with DATETIME) is 1 Jan 1753; in SQL Server 2008 (with DATE) it is 1 Jan 0001.
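A small illustration of the behavior described above:

```sql
DECLARE @d DATE;
SET @d = GETDATE();                   -- time portion is silently removed
PRINT @d;                             -- displays as YYYY-MM-DD
SELECT DATALENGTH(@d) AS bytes_used;  -- 3 bytes of storage
```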

2- Time data type
• To store only the time
This will display only the time portion of the current date and time:
DECLARE @dt TIME
SET @dt=GETDATE()
PRINT @dt 
• Is based on 24 hour clock
• Range for the TIME datatype is 00:00:00.0000000 through 23:59:59.9999999

3 – DATETIME2 data type
• Is a date/time datatype with a larger fractional-seconds precision and year range than the existing DATETIME datatype 
• The fractional-seconds precision can be specified
• The maximum precision that can be specified is 7 digits, while the minimum is 0
DECLARE @dt7 DATETIME2(7)
SET @dt7=GETDATE()
PRINT @dt7

4 – DATETIMEOFFSET data type
• DATETIMEOFFSET defines a date that is combined with a time of day that has time zone awareness and is based on a 24-hour clock.
• Previously (in SQL Server 2005), when saving the date and time in a column, there was no way to indicate what time zone that date and time belonged to.
DECLARE @dt DATETIMEOFFSET, @dt1 DATETIMEOFFSET
SET @dt='2007-10-29 22:50:55 -01:00'
SET @dt1='2007-10-29 22:50:55 +05:00'
SELECT DATEDIFF (hh,@dt,@dt1)
This will give the difference in hours between the two dates specified.
The pattern followed will be @dt1-@dt. 

SSRS 2008- Underlying connection was closed – could not establish trust relationship for SSL/TLS

This error appeared when I tried to access Report Manager at http://localhost/Reports_SQL.
Open Reporting Services Configuration Manager and click on the Report Manager URL section. It showed that https:// had been configured, but I had not done that; it was presumably the default configuration. Clicking Advanced showed that a default SSL certificate had been configured for both IPv4 and IPv6, which was causing the problem. This default configuration was done by the SSRS 2008 installation. I removed both configurations and clicked OK, but that alone did not fix it: the same configuration also has to be removed from the “Web Service URL” section of the SSRS configuration, in exactly the same way.

Monday, June 27, 2011

What's causing my tempdb to grow?

The script below identifies which currently executing SQL statements are consuming the most tempdb space.

SELECT trans1.session_id, trans1.request_id,
        trans1.task_alloc, trans1.task_dealloc,
        trans2.sql_handle, trans2.statement_start_offset,
        trans2.statement_end_offset, trans2.plan_handle
FROM (Select session_id, request_id,
        SUM(internal_objects_alloc_page_count) AS task_alloc,
        SUM (internal_objects_dealloc_page_count) AS task_dealloc
  FROM sys.dm_db_task_space_usage
  GROUP BY session_id, request_id) AS trans1,
  sys.dm_exec_requests AS trans2
WHERE trans1.session_id = trans2.session_id  AND
        (trans1.request_id = trans2.request_id) and
        (trans1.task_alloc + trans1.task_dealloc > 0)
ORDER BY trans1.task_alloc DESC

Count Rows in Tables and its Size

Run the query below to count the rows in each user table and report its size.

SELECT
TableName =,
TotalRows = prt.rows,
[SpaceUsed(KB)] = SUM(alloc.used_pages)*8
FROM sys.objects obj
JOIN sys.indexes idx on obj.object_id = idx.object_id
JOIN sys.partitions prt on obj.object_id = prt.object_id
    AND idx.index_id = prt.index_id
JOIN sys.allocation_units alloc on alloc.container_id = prt.partition_id
WHERE obj.type = 'U' AND idx.index_id IN (0, 1)
GROUP BY, prt.rows
ORDER BY TableName

Saturday, June 25, 2011

Divide tempdb Into Multiple Files

By default, when tempdb is created, the MDF and LDF are created as single physical files.

While the LDF should always remain as a single physical file, often dividing the tempdb MDF into multiple physical files can help performance.

Multiple files can reduce contention on various global allocation structures (PFS, GAM, and SGAM pages) by spreading activity over multiple physical files. This is particularly useful for tempdb workloads that spend a large percentage of their time allocating and de-allocating tables. If you don’t have contention issues, then you probably don’t need multiple files.
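One rough way to look for this kind of allocation contention is to check for latch waits on tempdb pages. This is a sketch (database_id 2 is always tempdb):

```sql
-- Sessions currently waiting on tempdb pages; frequent waits on PFS/GAM/SGAM
-- pages suggest allocation contention that more data files could relieve.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH_%'
  AND resource_description LIKE '2:%';   -- format is database:file:page
```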

Keep in mind that using more physical disk files for tempdb can increase switching costs and file management overhead, which could potentially hurt performance.

Thus, a balance must be maintained. How is this done?

Microsoft recommends (very generally) that tempdb should have one physical file per CPU core.
This often makes sense with 8 or fewer cores.

But with more than 8 cores, you may be reaching the point where the overhead of maintaining multiple tempdb physical files overcomes the benefits.

The design of your disk subsystem will affect this.

Only through formal testing will you be able to determine the optimum number of physical files for your particular instance’s tempdb.

If you choose to use multiple tempdb files, keep the following in mind:
  • Each physical file must be identical in size, and should be large enough to meet the tempdb needs of your instance.
  • The autogrowth setting for each physical file must be identical, using the same fixed autogrowth amount.

The above is important because SQL Server uses a proportional fill strategy to fill the physical files, and if the files become different sizes, then tempdb becomes less efficient.
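You can verify that the data files are still identical in size and growth settings with a query like:

```sql
SELECT name,
       size * 8 / 1024  AS size_mb,     -- size is stored in 8 KB pages
       growth,
       is_percent_growth
FROM sys.master_files
WHERE database_id = DB_ID('tempdb')
  AND type_desc = 'ROWS';               -- data files only
```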

Example for Creating Multiple Files:

Determine where the current tempdb file is located and its size. Determine what file(s) have to be added.
Run ALTER DATABASE command, or use SSMS.

USE master;
GO
ALTER DATABASE tempdb
ADD FILE ( NAME = 'tempdev1', 
FILENAME = 'f:\tempdb1.ndf' , 
SIZE = 8192KB , FILEGROWTH = 256MB ) -- use a fixed autogrowth amount, per the guidance above
GO

Note: Restarting SQL Server recreates tempdb, which resets all files to their configured initial sizes.

Friday, June 24, 2011

Failover Clustering

This provides server-level redundancy on a certified Microsoft Windows Cluster Services configuration. A failover cluster is set up with a minimum of two servers. In the cluster setup, each server is called a node. All the nodes are connected to shared disk resources. Only one node is active at a time for a SQL Server instance and serves all the requests for that instance of SQL Server. When the active node fails, failover takes place and one of the other available nodes will become the active node. More information can be read from this MSDN library article.
  • This is an appropriate option for mission critical applications where automatic failover is needed for the entire instance of SQL Server.
  • Recovery Time Objective (RTO) - Almost immediately, because processing is handed over to another node in the cluster.
  • Recovery Point Objective (RPO) - If there is no data corruption or data loss (due to data deletion), there is minimal to no data loss during the failover.
  • It provides automatic failover.
  • It protects an entire SQL Server instance.
  • Can apply service packs to one node at a time to minimize downtime.
  • If the secondary server is used in a passive mode (only for failover), additional SQL Server licensing is not needed.
  • It is more expensive, since it requires special hardware for the setup as well as redundant hardware.
  • It is not supported with all SQL Server editions.
  • The shared disk is a single data resource, and therefore a single point of failure.

Run DBCC UPDATEUSAGE for all Databases at one go

declare @ServerName varchar(50), --variable to hold the ServerName
    @DatabaseName varchar(100), --variable to hold the database name
    @Command varchar(1000) --variable to hold the sql Command

-- set variables
select @ServerName=@@ServerName

-- declare the cursor
declare Cursor1 cursor for
select name from master.dbo.sysdatabases
    where  (status & 32 <> 32 and status & 128 <> 128 and status & 512 <> 512 and status & 1024 <> 1024 and status & 4096 <> 4096 and status & 2048 <> 2048)  and (name not in ('Northwind', 'Pubs'))

-- open the cursor
open Cursor1

-- fetch the first record into the cursor
fetch Cursor1 into @DatabaseName

-- while the fetch was successful
while @@fetch_status=0
begin
    -- print the header for each database
    print ''
    print '***************************'
    print 'DBCC UPDATEUSAGE Report For '+@DatabaseName
    print '***************************'
    print ''

    -- set the Command to execute
    set @Command='dbcc updateusage(['+@DatabaseName+'])'

    -- execute the Command
    exec (@Command)

    -- fetch the next record into the cursor
    fetch Cursor1 into @DatabaseName
end

-- close the cursor
close Cursor1

-- deallocate the cursor
deallocate Cursor1

-- tell user when the script was last run
select 'This script was executed on ' + cast(getdate() as varchar(50))

Thursday, June 23, 2011

SQL Server Concurrency and Locking Interview Questions

  • Question 1: What is the size of a lock structure?
    • Answer:  96 bytes
  • Question 2: Can locks ever be de-escalated?
    • Answer: No. Lock escalation only converts finer-grained locks into coarser ones; SQL Server never de-escalates a lock.
  • Question 3: What is the least restrictive type of lock? What is the most restrictive?
    • Answer: The least restrictive type of lock is a shared lock. The most restrictive type of lock is a schema-modification (Sch-M) lock.
  • Question 4: Within the REPEATABLE_READ and SERIALIZABLE isolation levels, during a read operation and assuming row-level locking, how long are locks held/retained for?
    • Answer: Within either of these isolation levels, locks are held for the duration of the transaction, unlike within the READ_COMMITTED isolation level as noted above.
  • Question 5: Name as many of the lockable resources as possible in SQL Server 2005?
    • Answer:
      • RID (single row on a heap)
      • KEY (single row (or range) on an index)
      • PAGE
      • EXTENT
      • HOBT (heap or b-tree)
      • TABLE (entire table, all data and indexes)
      • FILE
      • METADATA
      • DATABASE
  • Question 6: Which 2 isolation levels support optimistic/row-versioned-based concurrency control?
    • Answer: First is the READ COMMITTED isolation level.  This is the only level that supports both a pessimistic (locking-based) and optimistic (version-based) concurrency control model. Second is the SNAPSHOT isolation level, which supports only an optimistic concurrency control model.
    • Question 7: What database options must be set to allow the use of optimistic models?
      • Answer: READ_COMMITTED_SNAPSHOT option for the read committed optimistic model. ALLOW_SNAPSHOT_ISOLATION option for the snapshot isolation level
    • Question 8: Can deadlocks occur on resources other than database object?
      • Answer: YES.
    • Question 9: What is a deadlock and how is it different from a standard block situation?
      • Answer: A deadlock occurs when two or more tasks permanently block each other by each task having a lock on a resource which the other tasks are trying to lock. In a deadlock situation, both transactions in the deadlock will wait forever unless the deadlock is broken by an external process – in a standard blocking scenario, the blocked task will simply wait until the blocking task releases the conflicting lock.
    • Question 10: Explain how the database engine manages the memory footprint for the lock pool when running in a dynamic lock management mode.
      • Answer (SQL Server 2000): When the server is started with locks set to 0, the lock manager allocates two percent of the memory allocated to SQL Server to an initial pool of lock structures. As the pool of locks is exhausted, additional locks are allocated. The dynamic lock pool does not allocate more than 40 percent of the memory allocated to SQL Server.
        • Generally, if more memory is required for locks than is available in current memory, and more server memory is available (the max server memory threshold has not been reached), SQL Server allocates memory dynamically to satisfy the request for locks. However, if allocating that memory would cause paging at the operating system level (for example, if another application was running on the same computer as an instance of SQL Server and using that memory), more lock space is not allocated.
      • Answer (SQL Server 2005): When running in dynamic management mode (i.e. if the the server is started with locks configuration option set to 0), the lock manager acquires sufficient memory from the Database Engine for an initial pool of 2,500 lock structures. As the lock pool is exhausted, additional memory is acquired for the pool.
        • Generally, if more memory is required for the lock pool than is available in the Database Engine memory pool, and more computer memory is available (the max server memory threshold has not been reached), the Database Engine allocates memory dynamically to satisfy the request for locks. However, if allocating that memory would cause paging at the operating system level (for example, if another application is running on the same computer as an instance of SQL Server and using that memory), more lock space is not allocated. The dynamic lock pool does not acquire more than 60 percent of the memory allocated to the Database Engine. After the lock pool has reached 60 percent of the memory acquired by an instance of the Database Engine, or no more memory is available on the computer, further requests for locks generate an error.
    • Question 11: What are the different types of resources that can deadlock?
      • Answer: Deadlock is a condition that can occur on any system with multiple threads, not just on a relational database management system, and can occur for resources other than locks on database objects. Here are the resources:
        • Locks - Waiting to acquire locks on resources, such as objects, pages, rows, metadata, and applications can cause deadlock.
        • Worker threads - A queued task waiting for an available worker thread can cause deadlock. If the queued task owns resources that are blocking all worker threads, a deadlock will result
        • Memory - When concurrent requests are waiting for memory grants that cannot be satisfied with the available memory, a deadlock can occur.
        • Parallel query execution-related resources - Coordinator, producer, or consumer threads associated with an exchange port may block each other causing a deadlock usually when including at least one other process that is not a part of the parallel query. Also, when a parallel query starts execution, SQL Server determines the degree of parallelism, or the number of worker threads, based upon the current workload. If the system workload unexpectedly changes, for example, where new queries start running on the server or the system runs out of worker threads, then a deadlock could occur.
        • Multiple Active Result Sets (MARS) resources - Resources used to control interleaving of multiple active requests under MARS, including:
          • User resource - when a thread is waiting for a resource that is potentially controlled by a user application, the resource is considered to be an external or user resource and is treated like a lock
          • Session mutex - The tasks running in one session are interleaved, meaning that only one task can run under the session at a given time. Before the task can run, it must have exclusive access to the session mutex.
          • Transaction mutex - All tasks running in one transaction are interleaved, meaning that only one task can run under the transaction at a given time. Before the task can run, it must have exclusive access to the transaction mutex.
    • Question 12: Within the READ_COMMITTED isolation level, during a read operation how long are locks held/retained for?
      • Answer: When SQL Server executes a statement at the read committed isolation level, it acquires short lived share locks on a row by row basis. The duration of these share locks is just long enough to read and process each row; the server generally releases each lock before proceeding to the next row. Thus, if you run a simple select statement under read committed and check for locks, you will typically see at most a single row lock at a time. The sole purpose of these locks is to ensure that the statement only reads and returns committed data. The locks work because updates always acquire an exclusive lock which blocks any readers trying to acquire a share lock.
    • Question 13: What are the different types of lock modes in SQL Server 2000 and 2005?
      • Answer:
        • Shared
        • Update
        • Exclusive
        • Schema (modification and stability)
        • Bulk Update
        • Intent (shared, update, exclusive)
        • Key Range (shared, insert, exclusive)
    • Question 14: What isolation levels will provide completely read-consistent views of a database to all transactions?
      • Answer (SQL Server 2000): Only the SERIALIZABLE isolation level will provide a completely read-consistent view of a database to a given transaction. In any of the other isolation levels, you could perceive some/all of the following, depending on the isolation level running in:
        • Uncommitted dependency/dirty reads
        • Inconsistent Analysis/non-repeatable reads
        • Phantom reads (via insert/delete)
      • Answer (SQL Server 2005): Both the SERIALIZABLE and SNAPSHOT isolation levels will provide a completely read-consistent view of a database to a given transaction. In any of the other isolation levels, you could perceive some/all of the following, depending on the isolation level running in:
        • Uncommitted dependency/dirty reads
        • Inconsistent Analysis/non-repeatable reads
        • Phantom reads (via insert/delete)
    • Question 15: What is lock escalation and what triggers it?
      • Answer: Lock escalation is when the system converts many fine-grained locks into a single coarser lock, to recover the resources taken by the finer-grained locks. The system does this automatically, and the threshold for escalation is determined dynamically by the server. Escalation:
        • reduces system overhead by recovering lock memory
        • maximizes the efficiency of queries
        • helps minimize the memory required to keep track of locks
      • Question 16: Can you explain scenarios where each type of lock would be taken:
      • Answer:
        • SHARED - Used for read operations that do not change or update data, such as a SELECT statement.
        • UPDATE - Used on resources that can be updated. Prevents a common form of deadlock that occurs when multiple sessions are reading, locking, and potentially updating resources later. In a repeatable read or serializable transaction, the transaction reads data, acquiring a shared (S) lock on the resource (page or row), and then modifies the data, which requires lock conversion to an exclusive (X) lock. If two transactions acquire shared-mode locks on a resource and then attempt to update data concurrently, one transaction attempts the lock conversion to an exclusive (X) lock. The shared-mode-to-exclusive lock conversion must wait because the exclusive lock for one transaction is not compatible with the shared-mode lock of the other transaction; a lock wait occurs. The second transaction attempts to acquire an exclusive (X) lock for its update. Because both transactions are converting to exclusive (X) locks, and they are each waiting for the other transaction to release its shared-mode lock, a deadlock occurs. To avoid this potential deadlock problem, update (U) locks are used. Only one transaction can obtain an update (U) lock to a resource at a time. If a transaction modifies a resource, the update (U) lock is converted to an exclusive (X) lock.
        • EXCLUSIVE - Used for data-modification operations, such as INSERT, UPDATE, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.
        • INTENT - Used to establish a lock hierarchy. The types of intent locks are: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX). (Another question in the Difficult level section expands on this)
        • SCHEMA - Used when an operation dependent on the schema of a table is executing. The types of schema locks are: schema modification (Sch-M) and schema stability (Sch-S).
        • BULK UPDATE - Used when bulk copying data into a table and the TABLOCK hint is specified.
        • KEY RANGE - Protects the range of rows read by a query when using the serializable transaction isolation level. Ensures that other transactions cannot insert rows that would qualify for the queries of the serializable transaction if the queries were run again.
    • Question 17: Explain the purpose of INTENT locks?
      • Answer: The Database Engine uses intent locks to protect placing a shared (S) lock or exclusive (X) lock on a resource lower in the lock hierarchy. Intent locks are named intent locks because they are acquired before a lock at the lower level, and therefore signal intent to place locks at a lower level. Intent locks serve two purposes:
        • To prevent other transactions from modifying the higher-level resource in a way that would invalidate the lock at the lower level.
        • To improve the efficiency of the Database Engine in detecting lock conflicts at the higher level of granularity.
    • Question 18: In what circumstances will you see key-range locks, and what are they meant to protect against?
      • Answer: You will only see key-range locks when operating in the SERIALIZABLE isolation level.
      • Key-range locks protect a range of rows implicitly included in a record set being read by a Transact-SQL statement. The serializable isolation level requires that any query executed during a transaction must obtain the same set of rows every time it is executed during the transaction. A key range lock protects this requirement by preventing other transactions from inserting new rows whose keys would fall in the range of keys read by the serializable transaction.
      • Key-range locking prevents phantom reads. By protecting the ranges of keys between rows, it also prevents phantom insertions into a set of records accessed by a transaction.
    • Question 19: What requirements must be met for a BULK-UPDATE lock to be granted, and what benefit do they serve?
      • Answer: The Database Engine uses bulk update (BU) locks when bulk copying data into a table, and either the TABLOCK hint is specified or the table lock on bulk load table option is set using sp_tableoption. Bulk update (BU) locks allow multiple threads to bulk load data concurrently into the same table while preventing other processes that are not bulk loading data from accessing the table.
    • Question 20: Describe the differences between the pessimistic SERIALIZABLE model and the optimistic SNAPSHOT model in terms of transactional isolation (i.e., not the concurrency differences, but instead how the exact same transactional modifications may result in different final outcomes).
      • Answer: (See Craig Freedman’s blog post on this topic.)
      • It is typically relatively simple to understand SERIALIZABLE. For the outcome of two transactions to be considered SERIALIZABLE, it must be possible to achieve this outcome by running one transaction at a time in some order.
      • Snapshot does not guarantee this level of transactional isolation.
      • Imagine the following sample scenario:
        • There is a bag containing a mixture of white and black marbles. Suppose that we want to run two transactions. One transaction turns each of the white marbles into black marbles. The second transaction turns each of the black marbles into white marbles. If we run these transactions under SERIALIZABLE isolation, we must run them one at a time. The first transaction will leave a bag with marbles of only one color. After that, the second transaction will change all of these marbles to the other color. There are only two possible outcomes: a bag with only white marbles or a bag with only black marbles.
        • If we run these transactions under snapshot isolation, there is a third outcome that is not possible under SERIALIZABLE isolation. Each transaction can simultaneously take a snapshot of the bag of marbles as it exists before we make any changes. Now one transaction finds the white marbles and turns them into black marbles. At the same time, the other transaction finds the black marbles - but only those marbles that were black when we took the snapshot, not those that the first transaction changed to black - and turns them into white marbles. In the end, we still have a mixed bag of marbles, some white and some black. In fact, we have precisely switched the color of each marble.
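        The marble scenario above can be sketched in T-SQL (the database and table names are hypothetical, and snapshot isolation must first be enabled on the database). Note that no update conflict is raised here because the two transactions update disjoint sets of rows:

```sql
-- One-time setup (hypothetical names)
ALTER DATABASE MarbleDB SET ALLOW_SNAPSHOT_ISOLATION ON;
CREATE TABLE dbo.Marbles (Id INT PRIMARY KEY, Color VARCHAR(5));

-- Session 1: turn white marbles black
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
UPDATE dbo.Marbles SET Color = 'Black' WHERE Color = 'White';
COMMIT;

-- Session 2, running concurrently: turn black marbles white.
-- Its snapshot predates Session 1's changes, so it flips only the
-- marbles that were black at snapshot time, leaving a mixed bag.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
UPDATE dbo.Marbles SET Color = 'White' WHERE Color = 'Black';
COMMIT;
```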
    ·         Question 21: Describe optimistic and pessimistic concurrency.
    o    Answer: Optimistic concurrency assumes that resource conflicts between multiple users are unlikely, so transactions execute without locking the resources they use. Only when the data is changed is a check made for conflicts on those resources; if a conflict is found, the application must read the data again and retry the change.
    o    Pessimistic concurrency assumes that resource conflicts between multiple users are likely, so it locks resources as transactions use them, for the duration of the transaction. A transaction is assured of successful completion unless a deadlock occurs.
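    A common optimistic pattern uses a ROWVERSION column to detect conflicts at update time. This is a sketch; the table, column, and the version value read earlier are hypothetical:

```sql
-- Hypothetical table with a rowversion column for conflict detection
CREATE TABLE dbo.Products (
    Id     INT PRIMARY KEY,
    Price  MONEY,
    RowVer ROWVERSION);

-- Optimistic update: succeed only if the row is unchanged since it was read
DECLARE @Id INT = 1,
        @OriginalVer BINARY(8) = 0x00000000000007D1;  -- value captured at read time
UPDATE dbo.Products
SET Price = Price * 1.10
WHERE Id = @Id AND RowVer = @OriginalVer;

IF @@ROWCOUNT = 0
    PRINT 'Conflict: row changed since it was read; re-read and retry.';
```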

    ·         Question 22: What are the differences between lost updates and uncommitted dependencies?

    o    Answer: Lost updates: two transactions select the same row and then update it based on the values originally selected; the last update overwrites updates made by the other transactions, so data is lost.
    o    Uncommitted dependencies (dirty reads): a transaction reads a row that is being updated by another transaction before that transaction commits; if the update is rolled back, the reader has reported data that never logically existed.
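    A dirty read can be sketched with two sessions (the table name is hypothetical):

```sql
-- Session 1: updates a row but does not commit yet
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE Id = 1;
-- ... transaction still open ...

-- Session 2: READ UNCOMMITTED allows reading the in-flight change
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM dbo.Accounts WHERE Id = 1;  -- sees the uncommitted value

-- Session 1: rolls back; Session 2 has reported data that never existed
ROLLBACK TRANSACTION;
```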

    ·         Question 23: What is a live lock?

    Answer: A livelock occurs when a request for an exclusive lock is denied again and again because a series of overlapping shared locks keeps interfering with it; the exclusive request keeps waiting while new shared locks continue to be granted, so it never makes progress.

    ·         Question 24: What is a dead lock?

    Answer: A deadlock occurs when two user processes each hold a lock on a separate object and each tries to acquire a lock on the object held by the other. SQL Server detects deadlocks and resolves them by choosing one process as the victim and terminating it automatically. The victim's transaction is rolled back and an error message is sent to the user application. Usually, the transaction that requires the least amount of overhead to roll back is chosen as the victim.
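    A classic deadlock can be reproduced with two sessions that acquire locks in opposite order (the table names are hypothetical):

```sql
-- Session 1
BEGIN TRANSACTION;
UPDATE dbo.TableA SET Col = 1 WHERE Id = 1;  -- exclusive lock on a TableA row
-- (some time passes) then:
UPDATE dbo.TableB SET Col = 1 WHERE Id = 1;  -- waits on Session 2

-- Session 2, meanwhile
BEGIN TRANSACTION;
UPDATE dbo.TableB SET Col = 2 WHERE Id = 1;  -- exclusive lock on a TableB row
UPDATE dbo.TableA SET Col = 2 WHERE Id = 1;  -- waits on Session 1: deadlock
-- SQL Server picks one session as the victim, rolls it back,
-- and returns error 1205 to that session's application
```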
    ·         Question 25: Describe in brief SQL Server locking.
    Answer: SQL Server has three main lock types:
    ·         Shared: compatible with other shared locks and with update locks.
    ·         Update: compatible with shared locks, but only one update lock can be held on a resource at a time.
    ·         Exclusive: not compatible with any other lock.
    Apart from lock types, there are transaction isolation levels for controlling how concurrent transactions interact:
    ·         READ UNCOMMITTED
    ·         READ COMMITTED
    ·         REPEATABLE READ
    ·         SERIALIZABLE
    SQL Server also provides locking hints that can be specified per table in a query to influence the lock types taken:
    ·         NOLOCK
    ·         HOLDLOCK
    ·         UPDLOCK
    ·         TABLOCK
    ·         PAGLOCK
    ·         TABLOCKX
    ·         READCOMMITTED
    ·         REPEATABLEREAD
    ·         SERIALIZABLE
    ·         READPAST
    ·         ROWLOCK
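    Two of the hints above in use, as a sketch (the table and column names are hypothetical):

```sql
-- UPDLOCK + HOLDLOCK: read a row and keep it reserved for a later update,
-- preventing another session from taking an update lock on it in between
BEGIN TRANSACTION;
SELECT Quantity FROM dbo.Inventory WITH (UPDLOCK, HOLDLOCK) WHERE ItemId = 42;
UPDATE dbo.Inventory SET Quantity = Quantity - 1 WHERE ItemId = 42;
COMMIT;

-- NOLOCK: read without taking shared locks; may return uncommitted data
SELECT COUNT(*) FROM dbo.Orders WITH (NOLOCK);
```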

    ·         Question 26: What guidelines should be followed to help minimize deadlocks? 

    Answer: A deadlock results when two processes sharing resources each prevent the other from accessing a resource it needs, so that neither process can finish. For example:
    P1 requests R1 and receives it. P2 requests R2 and receives it. P1 then requests R2 and is queued up, pending its release. P2 then requests R1 and is queued up, pending its release.
    Here, P and R denote a process and a resource, respectively.
    Guidelines to minimize deadlocks:-
    ·         Avoid user interaction in transactions. A transaction must not rely on any input from the user while it is open.
    ·         Concurrent transactions must access objects in the same order. There should be consistency in the order in which operations occur.
    ·         Keep transactions short and simple. Long transactions may block other necessary activities.
    ·         Use a lower isolation level, such as READ COMMITTED, where possible. Lower isolation levels reduce the time shared locks are held.
    ·         Use bound connections, where two or more connections opened by the same application can assist each other without blocking.
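    Even with these guidelines, deadlocks can still occur, so applications should be prepared to retry the victim's transaction. A sketch of a retry loop (the table names are hypothetical; THROW requires SQL Server 2012 or later, use RAISERROR on earlier versions):

```sql
-- Error 1205 is raised in the session chosen as the deadlock victim
DECLARE @Retries INT = 3;
WHILE @Retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE Id = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE Id = 2;
        COMMIT TRANSACTION;
        BREAK;  -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205        -- deadlock victim: retry
            SET @Retries = @Retries - 1;
        ELSE
            THROW;                      -- re-raise anything else
    END CATCH
END
```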