SQL

1.What is RDBMS?
Relational Database Management Systems (RDBMS) are database management systems that maintain data records and indices in tables. Relationships may be created and maintained across and among the data and tables. In a relational database, relationships between data items are expressed by means of tables. Interdependencies among these tables are expressed by data values rather than by pointers. This allows a high degree of data independence. An RDBMS has the capability to recombine the data items from different files, providing powerful tools for data usage.

2.What is normalization?
Database normalization is a data design and organization process applied to data structures based on rules that help build relational databases. In relational database design, it is the process of organizing data to minimize redundancy. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.

3.What are different normalization forms?
1NF: Eliminate Repeating Groups
Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.
2NF: Eliminate Redundant Data
If an attribute depends on only part of a multi-valued key, remove it to a separate table.
3NF: Eliminate Columns Not Dependent On Key
If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key
BCNF: Boyce-Codd Normal Form
If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.
4NF: Isolate Independent Multiple Relationships
No table may contain two or more 1:n or n:m relationships that are not directly related.
5NF: Isolate Semantically Related Multiple Relationships
There may be practical constraints on information that justify separating logically related many-to-many relationships.
ONF: Optimal Normal Form
A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
DKNF: Domain-Key Normal Form
A model free from all modification anomalies.
Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first fulfill all the criteria of a 2NF and 1NF database.

4.What is Stored Procedure?

A stored procedure is a named group of SQL statements that have been previously created and stored in the server database. Stored procedures accept input parameters so that a single procedure can be used over the network by several clients using different input data. And when the procedure is modified, all clients automatically get the new version. Stored procedures reduce network traffic and improve performance. Stored procedures can be used to help ensure the integrity of the database.
e.g. sp_helpdb, sp_renamedb, sp_depends etc.
The advantages of stored procedures are listed below:
Stored procedures can reduce network traffic and latency, boosting application performance.
Stored procedure execution plans can be reused, staying cached in SQL Server's memory, reducing server overhead.
Stored procedures help promote code reuse.
Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients.
Stored procedures provide better security to your data.
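For illustration, a minimal sketch of creating and calling a stored procedure (the Employees table, its columns, and the procedure name are hypothetical):

CREATE PROCEDURE usp_GetEmployeesByDepartment
    @DepartmentId INT
AS
BEGIN
    SET NOCOUNT ON;
    -- Return the employees for the requested department
    SELECT Name
    FROM Employees
    WHERE DepartmentId = @DepartmentId;
END
GO

-- Each client passes its own input data to the same procedure
EXEC usp_GetEmployeesByDepartment @DepartmentId = 10;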


5.What is Trigger?
A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE) occurs. Triggers are used to maintain the referential integrity of data by changing the data in a systematic fashion. A trigger cannot be called or executed; the DBMS automatically fires the trigger as a result of a data modification to the associated table. Triggers can be viewed as similar to stored procedures in that both consist of procedural logic that is stored at the database level. Stored procedures, however, are not event-driven and are not attached to a specific table as triggers are. Stored procedures are explicitly executed by invoking a CALL to the procedure while triggers are implicitly executed. In addition, triggers can also execute stored procedures.
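As a hedged illustration, an AFTER trigger that audits inserts might look like this (the Orders and OrdersAudit tables are hypothetical):

CREATE TRIGGER trg_Orders_AuditInsert
ON Orders
AFTER INSERT
AS
BEGIN
    -- The inserted virtual table exposes the new rows to the trigger
    INSERT INTO OrdersAudit (OrderId, AuditDate)
    SELECT OrderId, GETDATE()
    FROM inserted;
END
GO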

6. What is a Nested Trigger?

A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so when the trigger is fired because of data modification it can also cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger.

What are the different types of triggers?

There are three types of triggers:
1. DML Triggers
a. INSTEAD OF triggers
INSTEAD OF triggers are fired in place of the triggering action, such as an insert, update, or delete.
b. AFTER triggers
AFTER triggers execute following the triggering action, such as an insert, update, or delete.
2. DDL Triggers
This type of trigger is fired in response to DDL statements such as DROP TABLE, CREATE TABLE, or ALTER TABLE. DDL triggers are always AFTER triggers (a sketch follows this list).
3. Logon Triggers
This type of trigger is fired in response to a LOGON event, before a user session is established to the SQL Server instance.
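As a sketch of the second type, a DDL trigger that blocks accidental table drops might look like this (the trigger name is illustrative):

CREATE TRIGGER trg_BlockTableDrop
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    PRINT 'Tables cannot be dropped in this database.';
    ROLLBACK;  -- undoes the DROP TABLE statement
END
GO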

7.What is View?
A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as updating or deleting rows. Rows updated or deleted in the view are updated or deleted in the table the view was created with. It should also be noted that as data in the original table changes, so does data in the view, as views are the way to look at part of the original table. The results of using a view are not permanently stored in the database. The data accessed through a view is actually constructed using standard T-SQL select command and can come from one to many different base tables or even other views.
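A minimal sketch, assuming a hypothetical Employees table:

CREATE VIEW vw_ActiveEmployees
AS
SELECT EmployeeId, Name
FROM Employees
WHERE IsActive = 1;
GO

-- The view is queried like a table; the SELECT runs against the base table
SELECT * FROM vw_ActiveEmployees;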

8. What is Index?

An index is a physical structure containing pointers to the data. Indices are created on an existing table to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of a table, and each index is given a name. Users cannot see the indexes; they are just used to speed up queries. Effective indexes are one of the best ways to improve performance in a database application. A table scan happens when there is no index available to help a query. In a table scan SQL Server examines every row in the table to satisfy the query results. Table scans are sometimes unavoidable, but on large tables, scans have a severe impact on performance.
Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this reason, each database table may have only one clustered index.
Non-clustered indexes are created outside of the database table and contain a sorted list of references to the table itself.
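For example (table and column names are hypothetical):

-- Only one clustered index per table, typically on the key column
CREATE CLUSTERED INDEX IX_Employees_EmployeeId
ON Employees (EmployeeId);

-- Nonclustered indexes hold sorted references back to the table
CREATE NONCLUSTERED INDEX IX_Employees_LastName
ON Employees (LastName);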

9. What is the difference between clustered and a non-clustered index?

You can create as many as 249 nonclustered indexes on a single table. Nonclustered indexes, just like clustered indexes, create a B-tree structure. However, unlike a clustered index, in a nonclustered index the leaf level of the index contains a pointer to the data instead of the actual data. This pointer can reference one of two items. If the table has a clustered index, the pointer points to the clustering key. If the table does not have a clustered index, the pointer points at a row identifier (RID), which is a reference to the physical location of the data within a data page. When a query uses a nonclustered index, it traverses the B-tree structure of the index. When the query reaches the leaf level, it uses the pointer to find the clustering key. The query then traverses the clustered index to reach the actual row of data. If a clustered index does not exist on the table, the pointer returns a RID, which causes SQL Server to scan an internal allocation map to locate the page referenced by the RID so that it can return the requested data.

10.What are the different index configurations a table can have?
A table can have one of the following index configurations:
No indexes
A clustered index
A clustered index and many nonclustered indexes
A nonclustered index
Many nonclustered indexes

11. What is a covering index?

An index that SQL Server can use to satisfy a query without having to access the table is called a covering index.
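For instance, if a query selects only LastName and filters on DepartmentId, a nonclustered index on both columns can satisfy it without touching the table (names hypothetical; the INCLUDE clause is available from SQL Server 2005 onward):

CREATE NONCLUSTERED INDEX IX_Employees_Dept_Covering
ON Employees (DepartmentId)
INCLUDE (LastName);

-- This query can be answered entirely from the index
SELECT LastName FROM Employees WHERE DepartmentId = 10;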

12. What is cursor?
Cursor is a database object used by applications to manipulate data in a set on a row-by-row basis, instead of the typical SQL commands that operate on all the rows in the set at one time. In order to work with a cursor we need to perform some steps in the following order (a minimal sketch follows the list):
Declare cursor
Open cursor
Fetch row from the cursor
Process fetched row
Close cursor
Deallocate cursor
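Following those steps, a minimal sketch looks like this (the Employees table is hypothetical):

DECLARE @Name VARCHAR(100);

DECLARE emp_cursor CURSOR FOR           -- 1. Declare cursor
    SELECT Name FROM Employees;

OPEN emp_cursor;                        -- 2. Open cursor

FETCH NEXT FROM emp_cursor INTO @Name;  -- 3. Fetch row from the cursor

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Name;                        -- 4. Process fetched row
    FETCH NEXT FROM emp_cursor INTO @Name;
END

CLOSE emp_cursor;                       -- 5. Close cursor
DEALLOCATE emp_cursor;                  -- 6. Deallocate cursor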

13.What is a Linked Server?

Linked servers are a concept in SQL Server by which we can add other SQL Server instances to a group and query both SQL Server databases using T-SQL statements. With a linked server, you can create very clean, easy-to-follow SQL statements that allow remote data to be retrieved, joined, and combined with local data.
The stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to add a new linked server.
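A hedged sketch of registering another SQL Server instance as a linked server (the server name RemoteSrv and the database names are hypothetical):

EXEC sp_addlinkedserver
    @server = 'RemoteSrv',
    @srvproduct = 'SQL Server';

-- The remote data is then queried with a four-part name
SELECT * FROM RemoteSrv.AdventureWorks.HumanResources.Employee;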

14. What is Collation?
Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case sensitivity, accent marks, kana character types, and character width.

15. What are different type of Collation Sensitivity?
Case sensitivity
A and a, B and b, etc.
Accent sensitivity
a and á, o and ó, etc.
Kana Sensitivity
When Japanese kana characters Hiragana and Katakana are treated differently, it is called Kana sensitive.
Width sensitivity
When a single-byte character (half-width) and the same character when represented as a double-byte character (full-width) are treated differently then it is width sensitive.

16.What's the difference between a primary key and a unique key?

Both primary key and unique key enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.

17.How to implement one-to-one, one-to-many and many-to-many relationships while
designing tables?
One-to-One relationship can be implemented as a single table and rarely as two tables with primary and foreign key relationships.
One-to-Many relationships are implemented by splitting the data into two tables with primary key and foreign key relationships.
Many-to-Many relationships are implemented using a junction table with the keys from both the tables forming the composite primary key of the junction table.

18.What is a NOLOCK?
Using the NOLOCK query optimizer hint is generally considered good practice in order to improve concurrency on a busy system. When the NOLOCK hint is included in a SELECT statement, no locks are taken when data is read. The result is a dirty read, which means that another process could be updating the data at the exact time you are reading it. There are no guarantees that your query will retrieve the most recent data. The advantage to performance is that your reading of data will not block updates from taking place, and updates will not block your reading of data. SELECT statements take shared (read) locks. This means that multiple SELECT statements are allowed simultaneous access, but other processes are blocked from modifying the data. The updates will queue until all the reads have completed, and reads requested after the update will wait for the updates to complete. The result to your system is delay (blocking).
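For example (the Orders table is hypothetical):

-- Reads uncommitted data; no shared locks are taken
SELECT * FROM Orders WITH (NOLOCK);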

19.What is the difference between the DELETE & TRUNCATE commands?
Delete command removes the rows from a table based on the condition that we provide with a WHERE clause. Truncate will actually remove all the rows from a table and there will be no data in the table after we run the truncate command.
TRUNCATE
TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log. TRUNCATE removes all rows from a table, but the table structure and its columns, constraints, indexes and so on remain. The counter used by an identity column for new rows is reset to the seed for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a trigger.
TRUNCATE cannot be rolled back unless it is issued inside an explicit transaction.
TRUNCATE is a DDL command.
TRUNCATE resets the identity of the table.
DELETE
DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
If you want to retain the identity counter, use DELETE instead. If you want to remove table definition and its data, use the DROP TABLE statement.
DELETE can be used with or without a WHERE clause.
DELETE activates triggers.
DELETE can be rolled back.
DELETE is a DML command.
DELETE does not reset the identity of the table.

20.Difference between Function and Stored Procedure?

A UDF can be used in SQL statements anywhere in the WHERE/HAVING/SELECT section, whereas stored procedures cannot be.
UDFs that return tables can be treated as another rowset. This can be used in JOINs with other tables.
Inline UDFs can be thought of as views that take parameters and can be used in JOINs and other rowset operations.

21.When is the UPDATE_STATISTICS command used?
This command is basically used after a large amount of data processing has occurred. If a large number of deletions, modifications, or bulk copies into the tables has taken place, the statistics must be updated to take these changes into account. UPDATE STATISTICS refreshes the statistics on the tables and indexes accordingly.
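For example (table name hypothetical):

-- Refresh all statistics on a table after heavy data changes
UPDATE STATISTICS Employees;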

22.What types of Joins are possible with Sql Server?
Joins are used in queries to explain how different tables are related. Joins also let you select data from a table depending upon data from another table.
Types of joins: INNER JOINs, OUTER JOINs, CROSS JOINs. OUTER JOINs are further classified as LEFT OUTER JOINS, RIGHT OUTER JOINS and FULL OUTER JOINS.

23.What is the difference between a HAVING CLAUSE and a WHERE CLAUSE?

The HAVING clause specifies a search condition for a group or an aggregate. HAVING can be used only with the SELECT statement and is typically used with a GROUP BY clause. When GROUP BY is not used, HAVING behaves like a WHERE clause. The HAVING clause is basically used only with the GROUP BY function in a query, whereas the WHERE clause is applied to each row before the rows take part in the GROUP BY function.
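A short sketch (the Orders table is hypothetical); WHERE filters rows before grouping, HAVING filters the groups afterwards:

SELECT CustomerId, SUM(Amount) AS Total
FROM Orders
WHERE Status = 'Shipped'          -- row filter, applied before grouping
GROUP BY CustomerId
HAVING SUM(Amount) > 1000;        -- group filter, applied after aggregation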

24.What is SQL Profiler?
SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of Microsoft SQL Server. You can capture and save data about each event to a file or SQL Server table to analyze later. For example, you can monitor a production environment to see which stored procedures are hampering performance by executing too slowly. Use SQL Profiler to monitor only the events in which you are interested. If traces are becoming too large, you can filter them based on the information you want, so that only a subset of the event data is collected. Monitoring too many events adds overhead to the server and the monitoring process and can cause the trace file or trace table to grow very large, especially when the monitoring process takes place over a long period of time.

25.What are User-Defined Functions?

User-Defined Functions allow users to define their own T-SQL functions that can accept zero or more parameters and return a single scalar data value or a table data type.

26.What kind of User-Defined Functions can be created?
There are three types of User-Defined functions in SQL Server 2000 and they are Scalar, Inline Table-Valued and Multi-statement Table-valued.

Scalar User-Defined Function
A Scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages. You pass in 0 to many parameters and you get a return value.

Inline Table-Value User-Defined Function
An Inline Table-Value user-defined function returns a table data type and is an exceptional alternative to a view as the user-defined function can pass parameters into a T-SQL select command and in essence provide us with a parameterized, non-updateable view of the underlying tables.

Multi-statement Table-Value User-Defined Function
A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result, whereas the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL select command, or a group of them, gives us the capability to in essence create a parameterized, non-updateable view of the data in the underlying tables. Within the create function command you must define the table structure that is being returned. After creating this type of user-defined function, it can be used in the FROM clause of a T-SQL command, unlike the behavior found when using a stored procedure, which can also return record sets.
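As a brief hedged sketch, an inline table-valued function used like a parameterized view (table and column names hypothetical):

CREATE FUNCTION dbo.fn_EmployeesByDept (@DeptId INT)
RETURNS TABLE
AS
RETURN
(
    SELECT EmployeeId, Name
    FROM Employees
    WHERE DepartmentId = @DeptId
);
GO

-- Unlike a stored procedure, the function can appear in the FROM clause
SELECT * FROM dbo.fn_EmployeesByDept(10);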

27.Which TCP/IP port does SQL Server run on? How can it be changed?
SQL Server runs on port 1433 by default. It can be changed from the Network Utility TCP/IP properties -> Port number, on both the client and the server.

28.Where are SQL Server user names and passwords stored?
They get stored in the master database, in the sysxlogins table.

29.Which command using Query Analyzer will give you the version of SQL server and operating system?
SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY('edition')

30.What is SQL server agent?
SQL Server agent plays an important role in the day-to-day tasks of a database administrator (DBA). It is often overlooked as one of the main tools for SQL Server management. Its purpose is to ease the implementation of tasks for the DBA, with its full-function scheduling engine, which allows you to schedule your own jobs and scripts.

31.Can a stored procedure call itself (a recursive stored procedure)? How many levels of SP nesting are possible?

Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves. Recursion can be defined as a method of problem solving wherein the solution is arrived at by repetitively applying it to subsets of the problem. A common application of recursive logic is to perform numeric computations that lend themselves to repetitive evaluation by the same processing steps.

Stored procedures are nested when one stored procedure calls another or executes managed code by referencing a CLR routine, type, or aggregate. You can nest stored procedures and managed code references up to 32 levels.

32.What is @@ERROR?
The @@ERROR automatic variable returns the error code of the last Transact-SQL statement. If there was no error, @@ERROR returns zero. Because @@ERROR is reset after each Transact-SQL statement, it must be saved to a variable if it is needed for further processing after checking it.
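For example (the UPDATE target is hypothetical):

DECLARE @err INT;

UPDATE Employees SET Salary = Salary * 1.1 WHERE DepartmentId = 10;
SET @err = @@ERROR;   -- capture immediately; the next statement resets @@ERROR

IF @err <> 0
    PRINT 'Update failed with error ' + CAST(@err AS VARCHAR(10));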

33.What is RAISERROR?
Stored procedures report errors to client applications via the RAISERROR command. RAISERROR doesn't change the flow of a procedure; it merely displays an error message, sets the @@ERROR automatic variable, and optionally writes the message to the SQL Server error log and the NT application event log.
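A minimal example (severity 16 indicates a user-correctable error; the state value is arbitrary):

RAISERROR ('Invalid customer ID: %d', 16, 1, 42);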

34.What is log shipping?
Log shipping is the process of automating the backup of database and transaction log files on a production SQL server, and then restoring them onto a standby server. Only Enterprise Edition supports log shipping. In log shipping the transaction log file from one server is automatically applied to the backup database on the other server. If one server fails, the other server will have the same database and can be used as the disaster recovery plan. The key feature of log shipping is that it will automatically back up transaction logs throughout the day and automatically restore them on the standby server at a defined interval.

35.What is the difference between a local and a global temporary table?
A local temporary table exists only for the duration of a connection or, if defined inside a compound statement, for the duration of the compound statement.
A global temporary table is visible to all connections. When the connection that created it is closed and all other connections stop referencing it, the data in the global temporary table disappears along with the table itself.

36.What command do we use to rename a db?
sp_renamedb 'oldname', 'newname'
If someone is using the db it will not accept sp_renamedb. In that case, first bring the db to single-user mode using sp_dboption, use sp_renamedb to rename the database, and then use sp_dboption to bring the database back to multi-user mode.
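On SQL Server 2005 and later, the same sequence can be expressed with ALTER DATABASE (a hedged sketch; the database names are hypothetical):

ALTER DATABASE OldName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_renamedb 'OldName', 'NewName';
ALTER DATABASE NewName SET MULTI_USER;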

37.What are the sp_configure and SET commands?
Use sp_configure to display or change server-level settings. To change database-level settings, use ALTER DATABASE. To change settings that affect only the current user session, use the SET statement.
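For example:

-- Server-level setting
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Session-level setting
SET NOCOUNT ON;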

39.What are the OS services that the SQL Server installation adds?
MS SQL SERVER SERVICE, SQL AGENT SERVICE, DTC (Distributed Transaction Coordinator)

40.What are three SQL keywords used to change or set someone’s permissions?
GRANT, DENY, and REVOKE.
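For example (the table and user names are hypothetical):

GRANT SELECT ON Employees TO SomeUser;
DENY DELETE ON Employees TO SomeUser;
REVOKE SELECT ON Employees FROM SomeUser;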

43.Using query analyzer, name 3 ways to get an accurate count of the number of records in a table?

SELECT * FROM table1
SELECT COUNT(*) FROM table1
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table1') AND indid < 2

44.How to rebuild Master Database?
Shutdown Microsoft SQL Server 2000, and then run Rebuildm.exe. This is located in the ProgramFiles\Microsoft SQL Server\80\Tools\Binn directory.
In the Rebuild Master dialog box, click Browse.
In the Browse for Folder dialog box, select the \Data folder on the SQL Server 2000 compact disc or in the shared network directory from which SQL Server 2000 was installed, and then click OK.
Click Settings. In the Collation Settings dialog box, verify or change settings used for the master database and all other databases.
Initially, the default collation settings are shown, but these may not match the collation selected during setup. You can select the same settings used during setup or select new collation settings. When done,click OK.
In the Rebuild Master dialog box, click Rebuild to start the process.
The Rebuild Master utility reinstalls the master database.To continue, you may need to stop a server that is running.
Source: http://msdn2.microsoft.com/en-us/library/aa197950(SQL.80).aspx

45.What are the basic functions of the master, msdb, model, and tempdb databases?
The master database holds information for all databases located on the SQL Server instance and is the glue that holds the engine together. It holds logon accounts, system configuration settings, etc. Because SQL Server cannot start without a functioning master database, you must administer this database with care.
The msdb database stores information regarding database backups, SQL Agent information, DTS packages, SQL Server jobs, and some replication information such as for log shipping.
The tempdb holds temporary objects such as global and local temporary tables and stored procedures.
The model is essentially a template database used in the creation of any new user database created in the instance.

Resource database: A read-only database that contains all the system objects that are included with SQL Server. SQL Server system objects, such as sys.objects, are physically persisted in the resource database, but they logically appear in the sys schema of every database. The resource database does not contain user data or user metadata.

Maximum number of indexes on a table

SQL Server 2005 - 1 clustered + 249 nonclustered
SQL Server 2008 - 1 clustered + 999 nonclustered

46.What are primary keys and foreign keys?
Primary keys are the unique identifiers for each row. They must contain unique values and cannot be null. Due to their importance in relational databases, Primary keys are the most fundamental of all keys and constraints. A table can have only one Primary key.
Foreign keys are both a method of ensuring data integrity and a manifestation of the relationship between tables.

47.What is data integrity? Explain constraints?

Constraints provide a second level of business-rule implementation by preventing users from entering data into tables that is outside the allowed boundaries.

Check Constraints

You use check constraints to limit the range of possible values in a column or to enforce specific patterns for data. All check constraints must evaluate to a Boolean True/False and cannot reference columns in another table.
Eg : Require an e-mail address to contain, in order, any number of characters or digits, an @ symbol, a number of characters or digits, a period (.), and then either three characters or two characters with a period (.) plus two more characters.
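A hedged sketch of such a constraint, using a deliberately simplified pattern (the Customers table is hypothetical):

ALTER TABLE Customers
ADD CONSTRAINT CK_Customers_Email
CHECK (Email LIKE '%_@_%._%');  -- requires characters, an @, and a dot-separated domain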

Rules

Rules provide the same functionality as check constraints, except that you create them as a separate object; rules are not associated with a specific table or column when you create them.
Eg : CREATE RULE EmployeeIDValidator
AS
@column like '[A-Z][0-9][0-9][0-9][A-Z][A-Z][A-Z][A-Z][A-Z][A-Z]';
After defining a rule, you then bind it to columns or user-defined data types by using the sp_bindrule system stored procedure

Default Constraints
Another mechanism for enforcing a business rule in a table is a default constraint, which enables SQL Server to write a value to a column when the user doesn't specify a value.

Unique Constraints

A unique constraint prohibits a column or combination of columns from allowing duplicate values. You might use a unique constraint to enforce a business rule stating that each customer name must be unique.
Primary Key Constraints

Your choice of primary key constraint is critical in creating a sound structure for a table. A primary key defines the column or combination of columns that allow a row to be uniquely identified.
Foreign Key Constraints

You use foreign key constraints to implement a concept called referential integrity. Foreign keys ensure that the values that can be entered in a particular column exist in a specified table. Users cannot enter values in this column that do not exist in the specified table.
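The following hedged sketch shows several of these constraints together, assuming a Customers table with a CustomerId primary key already exists (all other names are hypothetical):

CREATE TABLE Orders
(
    OrderId     INT         NOT NULL CONSTRAINT PK_Orders PRIMARY KEY,
    CustomerId  INT         NOT NULL
        CONSTRAINT FK_Orders_Customers REFERENCES Customers (CustomerId),
    OrderNumber VARCHAR(20) NOT NULL CONSTRAINT UQ_Orders_Number UNIQUE,
    OrderDate   DATETIME    NOT NULL CONSTRAINT DF_Orders_Date DEFAULT (GETDATE()),
    Quantity    INT         NOT NULL CONSTRAINT CK_Orders_Qty CHECK (Quantity > 0)
);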

48.What are the properties of the Relational tables?
Relational tables have six properties:
Values are atomic.
Column values are of the same kind.
Each row is unique.
The sequence of columns is insignificant.
The sequence of rows is insignificant.
Each column must have a unique name.

49.What is De-normalization?
De-normalization is the process of attempting to optimize the performance of a database by adding redundant data. It is sometimes necessary because current DBMSs implement the relational model poorly. A true relational DBMS would allow for a fully normalized database at the logical level, while providing physical storage of data that is tuned for high performance. De-normalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access.

50. What is a striped backup?
A striped backup is a useful option when you run out of space: the backup is written in chunks across multiple files. If we don't have enough space on one drive, we can split the backup into multiple files on different drives.
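For example (paths are hypothetical):

BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\aw_part1.bak',
   DISK = 'E:\Backups\aw_part2.bak';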

51.What is Identity?
Identity (or AutoNumber) is a column that automatically generates numeric values. A start and increment value can be set, but most DBAs leave these at 1. A GUID column also generates unique values, but these cannot be controlled in the same way. Identity/GUID columns do not need to be indexed.
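For example:

CREATE TABLE Customers
(
    CustomerId INT IDENTITY(1, 1) PRIMARY KEY,  -- seed 1, increment 1
    Name       VARCHAR(100) NOT NULL
);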

52.What is a Scheduled Jobs or What is a Scheduled Tasks?
Scheduled tasks let users automate processes that run on regular or predictable cycles. Users can schedule administrative tasks, such as cube processing, to run during times of slow business activity. Users can also determine the order in which tasks run by creating job steps within a SQL Server Agent job, e.g. back up database, update statistics of tables. Job steps give the user control over the flow of execution.
If one job fails, the user can configure SQL Server Agent to continue to run the remaining tasks or to stop execution.

53.What is a table called if it has neither a clustered nor a nonclustered index? What is it used for?
Unindexed table or heap. Microsoft Press books and Books Online (BOL) refer to it as a heap. A heap is a table that does not have a clustered index and, therefore, the pages are not linked by pointers. The IAM pages are the only structures that link the pages in a table together. Unindexed tables are good for fast storing of data. Many times it is better to drop all indexes from a table, do the bulk inserts, and then restore those indexes afterwards.

54.What is BCP? When is it used?
BulkCopy (BCP) is a tool used to copy a huge amount of data from tables and views. BCP does not copy the structures from source to destination.

55.How do you load large data to the SQL server database?
BulkCopy is a tool used to copy huge amounts of data from tables. The BULK INSERT command imports a data file into a database table or view in a user-specified format.
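A hedged sketch (the table, file path, and format are hypothetical):

BULK INSERT Employees
FROM 'C:\data\employees.csv'
WITH
(
    FIELDTERMINATOR = ',',  -- column delimiter in the data file
    ROWTERMINATOR = '\n'    -- row delimiter
);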

56.Can SQL Server be linked to other servers like Oracle?
SQL Server can be linked to any server provided it has an OLE DB provider from Microsoft to allow the link. E.g., Oracle has an OLE DB provider that Microsoft supplies to add it as a linked server to the SQL Server group.

57.How to know which index a table is using?
In SQL Server, the indexes on a table can be listed from the sys.indexes catalog view:
SELECT OBJECT_NAME(object_id) AS table_name, name AS index_name
FROM sys.indexes
WHERE object_id = OBJECT_ID('TableName');

58.How to copy the tables, schema and views from one SQL server to another?
Microsoft SQL Server 2000 Data Transformation Services (DTS) is a set of graphical tools and programmable objects that lets user extract, transform, and consolidate data from disparate sources into single or multiple destinations.

59.What is Self Join?
This is a particular case where one table joins to itself, with one or two aliases to avoid confusion. A self join can be of any type, as long as the joined tables are the same. A self join is rather unique in that it involves a relationship with only one table. The common example is when a company has a hierarchical reporting structure whereby one member of staff reports to another.
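For example, assuming an Employees table with a ManagerId column that references EmployeeId:

SELECT e.Name AS Employee, m.Name AS Manager
FROM Employees AS e
INNER JOIN Employees AS m   -- the same table, joined to itself via an alias
    ON e.ManagerId = m.EmployeeId;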

60.What is Cross Join?
A cross join that does not have a WHERE clause produces the Cartesian product of the tables involved in the join. The size of a Cartesian product result set is the number of rows in the first table multiplied by the number of rows in the second table. The common example is when a company wants to combine each product with a pricing table to analyze each product at each price.

61.Which virtual table does a trigger use?
Inserted and Deleted.

66.What is an execution plan? When would you use it? How would you view the execution plan?
An execution plan is basically a road map that graphically or textually shows the data retrieval methods chosen by the SQL Server query optimizer for a stored procedure or ad-hoc query, and it is a very useful tool for a developer to understand the performance characteristics of a query or stored procedure, since the plan is what SQL Server places in its cache and uses to execute the stored procedure or query. Within Query Analyzer is an option called "Show Execution Plan" (located on the Query drop-down menu). If this option is turned on, it will display the query execution plan in a separate window when the query is run.

67.What is locking?

To manage multi user access to data while maintaining data consistency, SQL Server uses a locking mechanism for data. Locks occur at three different levels and can be of three different types. A lock can be applied at a row, page, or table level.

The main decision threshold occurs at approximately three percent to five percent. If SQL Server determines that a query requires locks on three percent to five percent of the rows on a given page, it acquires a page-level lock. Similarly, if SQL Server determines that a query requires locks on three percent to five percent of the pages in a given table, it acquires a table-level lock. Because it is not always possible to accurately predict the percentage of rows or pages that require a lock, SQL Server can automatically promote from fine-grained locks to a coarser level of lock. This process is called lock escalation.

68. Lock escalation paths
It is a common misconception that SQL Server escalates locks from a row level to a page level and finally to a table level. However, lock escalation has exactly two paths. SQL Server escalates row level locks to table-level locks, and it escalates page-level locks to table-level locks.

SQL Server has three types of locks: shared, exclusive, and update.

A shared lock, as its name implies, allows shared access to the data. An unlimited number of  connections are allowed to read the data. However, any piece of data that has a shared lock on it cannot be modified until all shared locks are released.

An exclusive lock, as its name implies, allows only a single connection to access the locked data. SQL Server uses this type of lock during data modification to ensure that other users cannot view the data until it has been committed to the database.

An update lock is a special case. This lock begins as a shared lock while SQL Server locates the rows it must modify within the table. After SQL Server locates the rows, it promotes the lock to an exclusive lock just before it performs the actual modification of the data. This lock promotion during an update is the most common cause of deadlock issues.

Intent locks: Used to establish a lock hierarchy. The types of intent locks are intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).

Schema locks: Used when an operation dependent on the schema of a table is executed. The types of schema locks are schema modification and schema stability.

Bulk update locks: Used when bulk copying data into a table and the TABLOCK hint is specified.


sys.dm_tran_locks: a DMV that returns information about the lock resources currently granted or waiting to be granted in the instance.


When does lock escalation happen?

Lock escalation happens in these circumstances:

1) When a single Transact-SQL statement acquires 5,000 locks on a single table or index.

2) When the number of locks in an instance of the Database Engine exceeds memory thresholds (60% of max server memory) or configuration thresholds (set using the sp_configure 'locks' option).

69. Understanding Isolation Levels

SQL Server 2005 specifies five different isolation levels that affect the way transactions are handled and the duration of locks.

The isolation levels and their descriptions:

READ UNCOMMITTED

This isolation level lets other connections read data that has not yet been committed.

READ COMMITTED

This isolation level prevents other connections from reading data that is being modified until the transaction has been committed.

REPEATABLE READ

Connection 1 is not allowed to read data that has been modified but not yet committed by Connection 2. Additionally, no other connection is allowed to modify any data that has been read by Connection 1 until the transaction completes. This causes shared locks to be placed on all data that is read, and the locks are held until the transaction completes.

SERIALIZABLE

This isolation level places all of the restrictions of REPEATABLE READ and additionally prevents new rows from being inserted within the keyset range that is locked by a transaction.

SNAPSHOT

Commonly known as “readers do not block writers and writers do not block readers,” this isolation level uses row versioning and ensures that a read operation will return the image of the data as it existed prior to the start of a modification.

Phantom Read
REPEATABLE READ isolation level can cause Phantom Reads. Under REPEATABLE READ, SQL Server will lock the rows it reads. But it does not prevent from inserting new rows. So, it can happen that when you run the same query for second time, under REPEATABLE READ, you might find new rows in the second query. Such a row is called 'Phantom Row' and a read that returns a Phantom Row is called a Phantom Read.
SERIALIZABLE is very close to REPEATABLE READ isolation level, except that it prevents phantom rows. The principal difference between SERIALIZABLE and REPEATABLE READ is that SERIALIZABLE applies a range lock so that you cannot insert new rows within the range locked by the transaction. This behavior prevents Phantom rows.
SNAPSHOT isolation level does not apply a lock while reading rows. But it does not read uncommitted data. When you read rows from a table, if a row is being updated by another transaction, it will return the last committed data.
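Setting the isolation level for a session is straightforward (the Orders table is hypothetical):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    SELECT * FROM Orders WHERE CustomerId = 42;  -- shared locks held until commit
COMMIT TRANSACTION;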

What are DMVs and DMFs used for?

DMVs and DMFs were introduced in SQL Server 2005. They give the database administrator information about the current state of the SQL Server machine on various aspects. From the basic definition, these dynamic management views and functions replace many of the DBCC command outputs and the pseudo-table outputs, making it far easier to check the health of SQL Server.

70. Understanding Blocking

Because read operations place shared locks on rows, pages, or tables, and update operations need to place exclusive locks on rows, pages, or tables, conflicts can occur between locks—an exclusive lock cannot be acquired against a resource that has a shared lock. This condition is called blocking and is a normal operation in multiuser environments to ensure integrity of data and of query results.

To determine whether processes are being blocked and to identify the process that is creating the blocking, you would use the sys.dm_exec_requests DMV. If a value greater than 0 exists in the blocking_session_id column, the process is being blocked by the SPID logged in this column.

SELECT * FROM sys.dm_exec_requests -- to identify the process that is creating the blocking

SELECT session_id, sql_handle, plan_handle FROM sys.dm_exec_requests WHERE blocking_session_id > 0

71.Understanding Deadlocking

It is possible to have a combination of blocks that can never be resolved. This situation is called a deadlock.
A deadlock always requires at least two processes, and each of those processes must be making a modification to data. If Process 1 attempts to acquire a shared lock on the row that is exclusively locked by Process 2, and Process 2 at the same time attempts to acquire a shared lock on the row that is exclusively locked by Process 1, an impossible scenario is created. Neither process can ever complete, because each process relies on the other process completing first. This causes a deadlock. Because neither process has the capability to complete a transaction, the locks would be held forever unless there were a way to detect and resolve the deadlock. SQL Server can detect a deadlock and, in response, it applies an algorithm (deadlock detection) that selects one of the processes as a deadlock victim. SQL Server terminates the victim process, rolls back any open transactions, releases the locks, and returns error 1205 to the application.

SQL Server Profiler has three events related to deadlocks. The Locks\Lock:Deadlock Chain and Locks\ Lock:Deadlock events contain little information that is useful for resolving the cause of a deadlock. You should only ever need to use the Locks\Deadlock Graph event, which provides all the information required to resolve the cause of a deadlock.

72. Say some DBCC Commands?

DBCC OPENTRAN - Shows the oldest active transaction
DBCC CHECKDB - Consistency check and allocation errors
DBCC FREEPROCCACHE - Clears the procedure cache, removing all execution plans; all procedures are recompiled
DBCC DROPCLEANBUFFERS - Drops all the data that was cached in memory
DBCC SHOWCONTIG - Shows fragmentation within tables / indexes
DBCC DBREINDEX - Performs a complete rebuild of the index
DBCC SQLPERF (LOGSPACE)
DBCC IND
DBCC PAGE
DBCC CHECKTABLE
DBCC CHECKCATALOG
DBCC CHECKALLOC
DBCC SHRINKDATABASE
DBCC SHRINKFILE
DBCC LOGINFO
DBCC LOGTRUNCATEONLY
DBCC INPUTBUFFER
DBCC CHECKCONSTRAINTS - Run after REPAIR_ALLOW_DATA_LOSS to find referential integrity issues
DBCC MEMORYSTATUS - Shows the VM committed, reserved, AWE allocated, etc.
DBCC OUTPUTBUFFER - Returns the current output buffer in hexadecimal and ASCII format
DBCC CHECKDB WITH ESTIMATEONLY - Shows the estimated tempdb space required for CHECKTABLE and CHECKALLOC
DBCC USEROPTIONS - Run on any database to get details about the date format, datefirst, as well as the isolation level
73. What happens during checkdb?
DBCC CHECKDB
Checks the logical and physical integrity of all the objects in the specified database by performing the following operations:
Runs DBCC CHECKALLOC on the database.
Runs DBCC CHECKTABLE on every table and view in the database.
Runs DBCC CHECKCATALOG on the database.
Validates the contents of every indexed view in the database.
dbcc checkdb (AdventureWorks)

dbcc checkdb (AdventureWorks,repair_allow_data_loss)

3 OPTIONS - (REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD )

DBCC CHECKTABLE
A. Checking a specific table
The following example checks the data page integrity of the HumanResources.Employee table in the AdventureWorks2008 database
USE AdventureWorks2008;
GO
DBCC CHECKTABLE ("HumanResources.Employee");
GO
B. Performing a low-overhead check of the table
The following example performs a low-overhead check of the Employee table in the AdventureWorks2008R2 database.
USE AdventureWorks2008R2;
GO
DBCC CHECKTABLE ("HumanResources.Employee") WITH PHYSICAL_ONLY;
GO
C. Checking a specific index
The following example checks a specific index, obtained by accessing sys.indexes.
USE AdventureWorks2008;
GO
DECLARE @indid int;
SET @indid = (SELECT index_id
              FROM sys.indexes
              WHERE object_id = OBJECT_ID('Production.Product')
                    AND name = 'AK_Product_Name');
DBCC CHECKTABLE ("Production.Product", @indid);

DBCC CHECKALLOC - Checks the consistency of disk space allocation structures for a specified database.
DBCC CHECKCATALOG - Checks for catalog consistency within the specified database. The database must be online.

-- Check the current database.
DBCC CHECKCATALOG;
GO
-- Check the AdventureWorks2008R2 database.
DBCC CHECKCATALOG (AdventureWorks2008R2);
GO
The basic syntax is:
DBCC CHECKDB ('DB Name') WITH NO_INFOMSGS

NO_INFOMSGS prevents an excessive number of informational messages from being generated. There are several other options, but this is the syntax you should aim to use as it performs all integrity checks.

This may take a long time on large databases and you may want to specify the PHYSICAL_ONLY option. This checks physical on-disk structures, but omits the internal logical checks. The syntax is:
DBCC CHECKDB ('DB Name') WITH PHYSICAL_ONLY
74. It Has Found A Problem - What Do I Do?
You do have backups, don't you? You might be lucky; a non-clustered index can be dropped and rebuilt, but actual data, such as a clustered index, cannot.

Look at the output from DBCC CHECKDB. You may see something like this:
Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data): Page (1:94299) could not be processed. See other errors for details.
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data), page (1:94299). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed.
CHECKDB found 0 allocation errors and 2 consistency errors in table 'yourtable' (object ID 2088535921).
CHECKDB found 0 allocation errors and 2 consistency errors in database 'yourdb'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (yourdb).
From this you can see what page is corrupted (1:94299)
The first thing to do is check if it is data in a heap, in a clustered index, or in a non-clustered index. In the above text you can see it is index ID 0. You could also examine the page (1:94299 in database 'yourdb') as follows:
DBCC TRACEON (3604, -1)
GO
DBCC PAGE('yourdb', 1, 94299, 3)
GO
In the output you will see something like:

Metadata: IndexId = n

If n is greater than 1 it is a non-clustered index and can safely be dropped and recreated. If n is 0 or 1 you have data corruption and need to perform one of the options described below.
Restoring from a backup
If the recovery model is FULL (or BULK_LOGGED, with some limitations), you can backup the tail of the log, perform a restore (with norecovery) from the last clean full backup, followed by subsequent log backups and finally the tail of the log.

If only a few pages are affected you have the option of selectively restoring only the bad pages, as follows:
RESTORE DATABASE yourdb PAGE = '1:94299'
FROM DISK = 'C:\yourdb.bak'
WITH NORECOVERY

If the recovery model is simple you don't have that option, and have to accept that a restore from the last full backup will result in subsequent transactions being lost. In this case, or if you have no backups at all, you may decide that an automatic repair is the only option.

Automatic Repair Options
First let me emphasise the importance of running a backup BEFORE you go any further.

Have a look at the output of the original CHECKDB. It will specify the minimum repair level.

REPAIR_REBUILD
If the minimum repair level is REPAIR_REBUILD you have been lucky.
The syntax is
DBCC CHECKDB('DB Name', REPAIR_REBUILD)

REPAIR_ALLOW_DATA_LOSS
This attempts to repair all errors. Sometimes the only way to repair an error is to deallocate the affected page and modify page links so that it looks like the page never existed. This has the desired effect of restoring the database's structural integrity but means that something has been deleted (hence the ALLOW_DATA_LOSS). There are likely to be issues with referential integrity, not to mention the important data that may now be missing.

The syntax is
DBCC CHECKDB('DB Name', REPAIR_ALLOW_DATA_LOSS)

Make sure you run DBCC CHECKCONSTRAINTS afterwards so you are aware of referential integrity issues and can take the appropriate action.
75.Primary key & Foreign Key?

Primary Key: A primary key is a combination of UNIQUE and NOT NULL.

Foreign Key: A foreign key always references the primary key and allows only values that are present in the primary key column to be inserted.

76. Creating Local and Global Temporary Tables
When working with complex queries, it is often helpful to break up the logic into smaller, more manageable chunks. Breaking up the logic can help simplify queries and stored procedures, especially when iterative logic is necessary. It can also help performance in many cases. If you need to apply the results of a complex query to other queries, it is often cheaper to cache the results of the query in a temporary table and reuse them than to re-execute the complex query each time. SQL Server has two types of temporary tables: local and global. Local temporary tables are visible only to the connection that created them. Global temporary tables, on the other hand, are visible to all connections.
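For example:

-- Local temporary table: visible only to the connection that created it
CREATE TABLE #RecentOrders (OrderId INT, OrderDate DATETIME);

-- Global temporary table: visible to all connections
CREATE TABLE ##SharedOrders (OrderId INT, OrderDate DATETIME);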
77. Truncating the Transaction Log
If log records were never deleted from the transaction log, the logical log would grow until it filled all the available space on the disks holding the physical log files. At some point in time, old log records no longer necessary for recovering or restoring a database must be deleted to make way for new log records. The process of deleting these log records to reduce the size of the logical log is called truncating the log.
The active portion of the transaction log can never be truncated. The active portion of the log is the part of the log needed to recover the database at any time, so it must have the log images needed to roll back all incomplete transactions. It must always be present in the database in case the server fails, because it will be required to recover the database when the server is restarted. The record at the start of the active portion of the log is identified by the minimum recovery log sequence number (MinLSN).
The tempdb database always uses the simple recovery model; it cannot be switched to another recovery model. Log truncation always occurs on a checkpoint in tempdb.
The log records before the MinLSN are only needed to maintain a sequence of transaction log backups.
In the simple recovery model, a sequence of transaction logs is not being maintained. All log records before the MinLSN can be truncated at any time, except while a BACKUP statement is being processed. NO_LOG and TRUNCATE_ONLY are the only BACKUP LOG options that are valid for a database that is using the simple recovery model.
78. Checkpoints occur:
1. When a CHECKPOINT statement is executed. The current database for the connection is checkpointed.
2. When ALTER DATABASE is used to change a database option. ALTER DATABASE checkpoints the database when database options are changed.
3. When an instance of SQL Server is stopped by:
•Executing a SHUTDOWN statement.
•Using the SQL Server Service Control Manager to stop the service running an instance of the database engine.
Either of these methods checkpoints each database in the instance of SQL Server.
4. When an instance of SQL Server periodically generates automatic checkpoints in each database to reduce the amount of time the instance would take to recover the database.
If the database is using either the full or bulk-logged recovery model, an automatic checkpoint is generated whenever the number of log records reaches the number SQL Server estimates it can process during the time specified in the recovery interval option.
If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:
•The log becomes 70 percent full.
•The number of log records reaches the number SQL Server estimates it can process during the time specified in the recovery interval option.
If autogrow is not enabled, or the disk holding the log file has less free space than the amount specified in growth_increment, an 1105 error is generated.


81.Define candidate key, alternate key, composite key.
A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys.

A key formed by combining at least two or more columns is called composite key.
82.What's the maximum size of a row?
8060 bytes.
84.What are the steps you will take to improve performance of a poor performing query?
This is a very open ended question and there could be a lot of reasons behind the poor performance of a query. But some general issues that you could talk about would be: No indexes, table scans, missing or out of date statistics, blocking, excess recompilations of stored procedures, procedures and triggers without SET NOCOUNT ON, poorly written query with unnecessarily complicated joins, too much normalization, excess usage of cursors and temporary tables.

Some of the tools/ways that help you troubleshooting performance problems are: SET SHOWPLAN_ALL ON, SET SHOWPLAN_TEXT ON, SET STATISTICS IO ON, SQL Server Profiler, Windows NT /2000 Performance monitor, Graphical execution plan in Query Analyzer.

86.What is a deadlock and what is a live lock? How will you go about resolving deadlocks?
Deadlock is a situation when two processes, each having a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process  would wait indefinitely for the other to release the lock, unless one of the user processes is terminated. SQL Server detects deadlocks and terminates one user's process.

A livelock is one, where a  request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.

Check out SET DEADLOCK_PRIORITY and "Minimizing Deadlocks"  in SQL Server books online. Also check out the article Q169960 from Microsoft knowledge base.
87.What are statistics? Under what circumstances do they go out of date, and how do you update them?
Statistics determine the selectivity of the indexes. If an indexed column has unique values then the selectivity of that index is more, as opposed to an index with non-unique values. Query optimizer uses these indexes in determining whether to choose an index or not while executing a query.

Some situations under which you should update statistics:
1) If there is significant change in the key values in the index
2) If a large amount of data in an indexed column has been added, changed, or removed (that is, if the distribution of key values has changed), or the table has been truncated using the TRUNCATE TABLE statement and then repopulated
3) Database is upgraded from a previous version

Look up SQL Server books online for the following commands: UPDATE STATISTICS, STATS_DATE, DBCC SHOW_STATISTICS, CREATE STATISTICS, DROP STATISTICS, sp_autostats, sp_createstats, sp_updatestats
88.What are the different ways of moving data/databases between servers and databases in SQL Server?
There are lots of options available; you have to choose your option depending upon your requirements. Some of the options you have are: BACKUP/RESTORE, detaching and attaching databases, replication, DTS, BCP, log shipping, INSERT...SELECT, SELECT...INTO, and creating INSERT scripts to generate data.
89. What is DAC?
The Dedicated Administrator Connection (DAC) allows only a single connection at a time. If the connection is already being used, any subsequent connections are refused. Additionally, you can create a connection to the DAC in only two ways:
■Via SQLCMD
■Through the Query window in SSMS
To connect to the DAC, specify ADMIN: and the name of your instance, for example ADMIN:MyMachine\Instance1. The port number used for the DAC is 1434.

90. Give an example of an inner join query
Inner join queries

SELECT *
FROM HumanResources.Employee AS E
INNER JOIN HumanResources.EmployeeAddress AS EA ON
E.EmployeeId = EA.EmployeeId
Only rows that exist in both tables with the same value for the EmployeeId column are returned. Outer joins return rows with matching data as well as rows with nonmatching data.
91. What are the default startup options in SQL Server?
-d master_file_path
-e error_log_path
-l master_log_path
Other startup options
Description
-c
Shortens startup time when starting SQL Server from the command prompt. Typically, the SQL Server Database Engine starts as a service by calling the Service Control Manager. Because the SQL Server Database Engine does not start as a service when starting from the command prompt, use -c to skip this step.
-f
Starts an instance of SQL Server with minimal configuration. This is useful if the setting of a configuration value (for example, over-committing memory) has prevented the server from starting. Starting SQL Server in minimal configuration mode places SQL Server in single-user mode. For more information, see the description for -m that follows.
-m
Starting SQL Server in Single-User Mode
Under certain circumstances, you may need to start Microsoft SQL Server in single-user mode. This is the startup option -m. For example, you may want to change server configuration options or recover a damaged master or other system database; both require starting SQL Server in single-user mode.
When you start SQL Server in single-user mode:
Only one user can connect to the server.
The CHECKPOINT process is not executed. By default, it is executed automatically at startup.
The sp_configure system stored procedure allow updates option is enabled. By default, the allow updates option is disabled.
Starts an instance of SQL Server in single-user mode. When you start an instance of SQL Server in single-user mode, only a single user can connect, and the CHECKPOINT process is not started. CHECKPOINT guarantees that completed transactions are regularly written from the disk cache to the database device. (Typically, this option is used if you experience problems with system databases that should be repaired.) Enables the sp_configure allow updates option. By default, allow updates is disabled. Starting SQL Server in single-user mode enables any member of the computer's local Administrators group to connect to the instance of SQL Server as a member of the sysadmin fixed server role. For more information, see Troubleshooting: Connecting to SQL Server When System Administrators Are Locked Out. For more information about single-user mode, see Starting SQL Server in Single-User Mode.
-m"Client Application Name"
When you use the -m option with SQLCMD or SQL Server Management Studio, you can limit the connections to a specified client application. For example, -m"SQLCMD" limits connections to a single connection and that connection must identify itself as the SQLCMD client program. Use this option when you are starting SQL Server in single-user mode and an unknown client application is taking the only available connection. To connect through the Query Editor in Management Studio, use -m"Microsoft SQL Server Management Studio - Query".
Client Application Name is case sensitive.
Important
Do not use this option as a security feature. The client application provides the client application name, and can provide a false name as part of the connection string.
-s
Allows you to start a named instance of SQL Server. Without the -s parameter set, the default instance will try to start. You must switch to the appropriate BINN directory for the instance at a command prompt before starting sqlservr.exe. For example, if Instance1 were to use \mssql$Instance1 for its binaries, the user must be in the \mssql$Instance1\binn directory to start sqlservr.exe -s instance1.
-T trace#
Indicates that an instance of SQL Server should be started with a specified trace flag (trace#) in effect. Trace flags are used to start the server with nonstandard behavior. For more information, see Trace Flags (Transact-SQL).
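For example, a sketch of starting a named instance in single-user mode from that instance's Binn directory, per the -s description above (the instance name is illustrative):

sqlservr.exe -m -s Instance1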

92. What is MTL?
MTL is the “Memory To Leave” area.
SQL Server divides the total memory dedicated to it into two regions, referred to as Buffer Pool (BPool) and Memory to Leave (MemToLeave). BPool is the larger of the two and is used for most memory operations: stored procedure and query plan generation, caching data, and creating indexes. MemToLeave is dedicated to executing linked server queries, extended stored procedure calls, and OLE automation procedure calls. By default, MemToLeave is 384 MB, and the balance of the total memory available to SQL Server determines the size of BPool. It is important to realize that although MemToLeave is predominantly used for in-process COM objects, such as extended procedures and linked server queries, memory allocations from this area are also used for large stored procedure or query execution plans. Any memory allocation by SQL Server that is larger than 8 KB comes from MemToLeave. Furthermore, memory allocations smaller than 8 KB can, in some cases, come from the MemToLeave region as well.
Why should you care about how SQL Server splits the memory between BPool and MemToLeave? Well, if you're working with applications that heavily use extended stored procedures, linked server queries, and OLE automation procedures, you might need to allow additional memory for these objects by increasing the size of the MemToLeave area. On the other hand, if your usage of MemToLeave objects is minimal, then you can shrink it.
When you start SQL Server, it will compute the maximum size for BPool first; then it reserves the memory for MemToLeave. After MemToLeave is reserved, SQL Server will reserve memory for BPool and subsequently release the MemToLeave memory so that it is available for external processes. This is why even if you set maximum server memory configuration option to 6GB you will only see SQL Server use approximately 5.7GB for buffer pool.
93. What is VAS?
The virtual address space is the set/range of virtual memory addresses a process can use. The address space for each process is private and cannot be accessed by other processes unless it is shared.
A virtual address does not represent the actual physical location of an object in memory. Each time a thread references an address, the system translates the virtual address to a physical address.
 The virtual address space for 32-bit Windows is 4 gigabytes (GB) in size and divided into two partitions: one for use by the process and the other reserved for use by the system.
94. What is PAE?
Some facts about Physical Address Extension (PAE).
By default, 32-bit systems cannot access more than 4 GB of RAM.
The PAE feature enables 32-bit systems to recognize more than 4 GB of installed RAM.
However, the PAE switch does not increase the Virtual Address Space (VAS) available to a Windows process. Any Windows process has a VAS of 4 GB, divided into User Mode space and Kernel Mode space of 2 GB each. The /3GB switch increases the User Mode space to 3 GB, thereby decreasing the Kernel Mode space to 1 GB.
The /PAE switch has to be entered against each operating system listed in boot.ini; a sketch of what it looks like is shown below. After making this change, rebooting the server ensures that the server recognizes memory greater than 4 GB.
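A minimal illustrative boot.ini (the disk paths and OS name are placeholders; the point is the /PAE switch appended to the OS entry):

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /PAE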
SQL Server 2005 memory configuration management
Some notes on memory configuration management for SQL Server 2005 (32-bit and 64-bit). These are guidelines; always perform an exhaustive analysis prior to making configuration changes.
32 BIT
Switches can be added to the boot.ini file - first, some terminology.
/PAE - this switch allows the OS to access physical memory beyond 4 GB. Available on Windows Enterprise and Datacenter Editions.
/3GB - forces the OS to reserve only 1 GB for itself and allows applications (e.g., SQL Server) to use the remainder.
Some guidelines on which switches to use, and under what circumstances:
1) If memory <= 4 GB - there is nothing to configure (except the /3GB switch in boot.ini).
2) If there is 4 GB - 16 GB of physical memory - use the /PAE and /3GB switches.
95. What is AWE?
Address Windowing Extensions (AWE) allow 32-bit operating systems to access large amounts of memory. This option allows the Enterprise and Developer editions of SQL Server to take advantage of up to 64 gigabytes of memory. By default this option is turned off (has a value of 0).

By default SQL Server can only use up to 2 Gigabytes of memory. If SQL Server has more memory available it can store more data and stored procedure execution plans in cache. So normally giving additional memory to SQL Server provides better performance.

In order to take advantage of AWE you need to add the /PAE and /3GB flags to the boot.ini file of your server and enable the awe enabled option in SQL Server.
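A minimal sketch of enabling the option with sp_configure (awe enabled is an advanced option and takes effect after a restart of the service):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;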
96.What is lazy writer and check point?
Two processes essential to SQL Server's memory management are lazy writer and CHECKPOINT. Both of these processes scan the buffer cache.

The buffer cache contains pages, each of which has a reference counter and an indicator of whether the page contains data modifications not yet written to disk. The reference counter shows how frequently the page has been accessed. The buffer cache is scanned periodically by worker threads, lazy writer, and checkpoint. The reference counter on each buffer page is decremented. When the reference counter on a page is equal to zero at the end of the buffer cache scan, SQL Server checks whether the page is dirty. If so, data changes are written to disk and the page is freed. If the page isn't dirty at the end of the buffer cache scan, the page is freed. Once the page is freed it is placed on the free page list. Using this method, pages that are accessed frequently remain in memory, whereas those that aren't accessed frequently are freed up for use by new queries.

Lazy writer serves two purposes:
1. Ensure that a specified number of buffers are free in the Buffer Pool so they can be allocated for use by the server.

2. Monitor the usage of committed memory by the Buffer Pool and adjust it as necessary so that enough physical memory remains free to prevent Windows from paging.

Lazy writer can adjust the number of buffers in the buffer pool if dynamic memory management is enabled. SQL Server estimates the number of necessary Buffer Pool buffers based on system activity and based on the number of stalls. A stall occurs when a request for memory has to wait on a free buffer page.

The checkpoint process flushes dirty pages to disk. Its job is to minimize the amount of time required to recover the system by keeping the number of dirty pages to a minimum. The frequency with which checkpoint is executed depends on the recovery interval server configuration option. A checkpoint also occurs when a database option is changed using the ALTER DATABASE statement, when the SQL Server service is stopped, or when the CHECKPOINT Transact-SQL statement is executed.
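For example, a sketch of forcing a checkpoint and adjusting the recovery interval (an advanced option; the 5-minute target is illustrative):

CHECKPOINT;  -- flush dirty pages of the current database to disk now
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval', 5;  -- target recovery time, in minutes
RECONFIGURE;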
97. What is DBCC Page command?
DBCC PAGE is an undocumented DBCC command that you can use to look at the contents of database pages.
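A sketch of its commonly cited usage, assuming the AdventureWorks database (the file and page IDs are illustrative, and being undocumented the command's behavior is not guaranteed):

DBCC TRACEON (3604);  -- route DBCC output to the client instead of the error log
DBCC PAGE ('AdventureWorks', 1, 100, 3);  -- database, file id, page id, print option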
98. Running a query gives the error "insufficient memory to run query" (error 701). How to troubleshoot?
If you look at the output of DBCC MEMORYSTATUS, which is automatically logged to the error log on 701 error messages, it will show which memory caches are consuming memory.
Additionally, if you query the sys.dm_os_memory_clerks dynamic management view (DMV) while the batch is executing, the single_pages_kb column for the USERSTORE_SXC cache shows continuous growth over a period of time, leading to the 701 error.
Cause
The amount of memory allocated to store a request in SQL Server depends on:
   The batch size (number of RPCs per request)
   The type of parameters
For certain types of parameters (for example sql_variant), SQL Server can save the requests in memory in a potentially inefficient manner. When a client sends a large batch of requests that use these types of parameters, multiple RPCs can be sent in one request. In this scenario, the server accumulates the whole request in memory before it is executed. This can potentially lead to the 701 error discussed in the symptoms section.
Resolution:
 You can use one of the following workarounds:
Reduce batch sizes.
Change parameter types, for example, replace sql_variants with other types.
99. While starting SQL Server you see "Phase 1 of 3" in the recovery messages. What does it mean?
A restore is a multiphase process. The possible phases of a restore include the data copy, redo (roll forward), and undo (roll back) phases:
•The data copy phase involves copying all the data, log, and index pages from the backup media of a database to the database files.
•The redo phase applies the logged transactions to the data copied from the backup to roll forward that data to the recovery point. At this point, a database typically has uncommitted transactions and is in an unusable state. In that case, an undo phase is required as part of recovering the database.
•The undo phase, which is the first part of recovery, rolls back any uncommitted transactions and makes the database available to users. After the roll back phase, subsequent backups cannot be restored.
100. During bulk operations what will be logged in the transaction log?
Compared to the full recovery model, which fully logs all transactions, the bulk-logged recovery model minimally logs bulk operations while fully logging other transactions. The bulk-logged recovery model protects against media failure and, for bulk operations, provides the best performance and least log space usage.
Under the bulk-logged recovery model, if a log backup covers any bulk operations, the log backup contains both log records and the data pages that were changed by bulk operations. This is necessary to capture the results of the bulk-logged operations. The incorporated data extents can make a log backup very large. Additionally, backing up the log requires access to the data files that contain the bulk-logged transactions
In short, using the bulk-logged recovery model for bulk operations reduces the growth of the transaction log itself, but because log backups under BULK_LOGGED include both the log records and the data pages changed by bulk operations, the log backups themselves can be very large.
101.To determine the version number of the Resource database, use:
SELECT SERVERPROPERTY('ResourceVersion');
102.How to recover a corrupted resource database
The best thing you can do is apply the service pack on another instance and copy over the physical mdf and ldf files to the corrupted database path.
103.How to take a tail log backup?
BACKUP LOG database_name TO <backup_device> WITH NORECOVERY
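For example, a sketch assuming a database named AdventureWorks and an illustrative backup path (NO_TRUNCATE allows the backup even if the database is damaged; NORECOVERY leaves it in the restoring state):

BACKUP LOG AdventureWorks
TO DISK = 'g:\backups\AdventureWorks_tail.trn'
WITH NO_TRUNCATE, NORECOVERY;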
104. Resource Database
The Resource database is a read-only database that contains all the system objects that are included with SQL Server. SQL Server system objects, such as sys.objects, are physically persisted in the Resource database, but they logically appear in the sys schema of every database. The Resource database does not contain user data or user metadata.
The Resource database makes upgrading to a new version of SQL Server an easier and faster procedure. In earlier versions of SQL Server, upgrading required dropping and creating system objects. Because the Resource database file contains all system objects, an upgrade is now accomplished simply by copying the single Resource database file to the local server.
The physical file names of the Resource database are mssqlsystemresource.mdf and mssqlsystemresource.ldf. These files are located in <drive>:\Program Files\Microsoft SQL Server\MSSQL10_50.<instance_name>\MSSQL\Binn\. Each instance of SQL Server has one and only one associated mssqlsystemresource.mdf file, and instances do not share this file.
SQL Server cannot back up the Resource database. You can perform your own file-based or disk-based backup by treating the mssqlsystemresource.mdf file as if it were a binary (.EXE) file rather than a database file, but you cannot use SQL Server to restore your backups. Restoring a backup copy of mssqlsystemresource.mdf can only be done manually. The ID of the Resource database is always 32767. Other important values associated with the Resource database are the version number and the last time that the database was updated.
106. How to enable Mirroring in SQL2005 without SP1?

DBCC TRACEON(1400)
107. How to capture the deadlock graph?
There are 2 ways:

- Use trace flag 1204 (SQL Server 2000) or 1222 (SQL Server 2005 and above) to write deadlock information to the error log.
- Capture the Deadlock Graph event with SQL Server Profiler (or a server-side trace).
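A sketch of enabling the trace flag globally on SQL Server 2005 and above:

DBCC TRACEON (1222, -1);  -- write deadlock graph details to the error log for all sessions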
108. What is in MTL?
COM objects
Linked server OLE DB providers
Extended stored procedures
Network packets
Memory consumed by memory clerks when the memory request is larger than 8 KB
109. What is in BPOOL?
Data pages/index pages, plus memory consumed by memory clerks when the memory request is 8 KB or smaller.
110.What is a dirty Page?
A dirty page is a page that has been modified in memory but not yet written to disk. You can (and most often will) have many pages that are different in memory as opposed to the copy on disk. They are called dirty because the application has "smudged" them with new data. Once a page is written to disk and both the disk copy and the memory copy agree, it is no longer dirty.
111.Explain about buffer pool.

Remember, SQL Server has two memory settings that you can control using sp_configure: max and min server memory. These two settings really control the size of the buffer pool. They do not control the overall amount of physical memory consumed by SQL Server. In reality we can't control the amount of memory consumed by SQL Server, because there could be external components loaded into the server's process.

When SQL Server starts, during initialization, the Buffer Pool first decides how much VAS it needs to reserve for its usage. It bases its decision on the amount of physical memory (RAM) present on the box. If the amount of physical memory is equal to or larger than the amount of VAS it can use (remember that VAS is a limited resource, especially on x86), it will leave 256MB of VAS for external components, plus the number of threads SQL Server is configured to use multiplied by 512KB. You might remember that 512KB is SQL Server's thread stack size. In the default configuration with physical memory larger than 2GB, the Buffer Pool will leave 256MB + 256*512KB = 384MB of VAS space. Some people name this region MemToLeave, but in reality that is incorrect: SQL Server might end up using this part of VAS itself, and I will show you how that can happen later on. You might also remember the -g parameter that some people recommend using when SQL Server starts outputting "Can't Reserve Virtual Address Space" errors. The first 256MB is exactly what the -g parameter controls. If you specify -g 512MB, the amount of VAS that BP won't use is 512MB + 256*512KB = 640MB. There is no point in specifying -g 256MB; this input parameter is the same as the default value.

Once BP decides the amount of VAS it will use, it reserves all of it right away.
The size of a SQL Server database page is 8KB. The Buffer Pool is a cache of data pages. Consequently the Buffer Pool operates on pages of 8KB in size. It commits and decommits memory blocks of 8KB granularity only. If external components decide to borrow memory out of the Buffer Pool, they can only get blocks of 8KB in size. These blocks are not contiguous in memory. Interesting, right? It means that the Buffer Pool can be used as an underlying memory manager for SQL Server components as long as they allocate buffers of 8KB. (Sometimes pages allocated from BP are referred to as stolen.)

Here is where SQLOS and Buffer Pool meet. See Fig.3

                -----------------
                | Memory Node   |
                -----------------
                          |
                          |
                          V
           ------------------------
           | Single Page Allocator  |
           ------------------------
                          |
                          |
                          V
                -----------------
                |   Buffer Pool    |
                -----------------


SQLOS's memory manager can be dynamically configured to use a specific single page allocator. This is exactly what SQL Server does during startup: it configures the Buffer Pool to be SQLOS's single page allocator. From that point on, all dynamic single page allocations are provided by the Buffer Pool. For example, remember that a memory object's payload is 8KB. When a component creates a memory object, the allocation is served by SQLOS's single page allocator, which is the BP.

When describing the memory manager, I mentioned that every large component has its own memory clerk. It means that the Buffer Pool has its own memory clerk as well. How is that possible? BP leverages a SQLOS memory clerk, but SQLOS's memory manager relies on BP. This is the common chicken-and-egg problem that you often observe in operating systems. The key here is that the Buffer Pool never uses any type of page allocator from SQLOS. It only leverages SQLOS's Virtual and AWE interfaces.
           -----------------
          |    Buffer Pool    |
          -----------------
                    |
                    |
                    V
     --------------------------
     | Memory Clerk (VM/AWE) |
     --------------------------
                    |
                    |
                    V
          -----------------
          | Memory Node   |
          -----------------
All SQL Server components are optimized for 8KB allocations so that they can allocate memory through SQLOS's single page allocator and consequently through the Buffer Pool. However, there are cases when a component requires large buffers. If that happens, the allocation will be satisfied either by the memory node's multi page allocator or by the virtual allocator. As you might guess, that memory will be allocated outside of the Buffer Pool. This is exactly why I don't like the term MemToLeave: SQL Server does allocate memory out of that area!

Buffer Pool and AWE mechanism
When describing the SQLOS memory manager and the Buffer Pool, the discussion would be incomplete without a description of how AWE fits into all of this. It is really important to understand how the Buffer Pool allocates its memory when SQL Server is configured to use the AWE mechanism. First, please remember, BP leverages SQLOS's memory clerk interfaces to allocate both VAS and physical pages through AWE. Second, there are several differences that you need to keep in mind. First, BP reserves VAS in 4MB chunks instead of a "single" large region. This enables SQL Server to release VAS when the process is under VAS pressure. (We didn't have all the bits and pieces to do this when the server is not configured to use AWE mechanisms.) Then it allocates all of its memory using the AWE mechanism on demand. This is a very big difference between SQL 2000 and Yukon. In SQL Server 2000, BP would allocate all of its memory right away when using the AWE mechanism.

The Buffer Pool is the preferred memory allocator for the whole server. In AWE mode it allocates its memory leveraging the AWE mechanism. It means that all allocations made through SQLOS's single page allocator come from pages allocated through AWE. This is what many people are really missing. Let me make the point again: when the server is configured for AWE mode, most of its allocations are made through the AWE mechanism. This is exactly the reason why you won't see private bytes and memory usage growing for SQL Server in this mode.

Since data pages use relative addressing, i.e. they are self-contained, the Buffer Pool can map and unmap them into and out of the process's VAS. Other components could have done the same if they were not relying on the actual allocation address. Unfortunately there are no components right now, other than BP, that can take advantage of the AWE mechanism.

112. What is meant by virtual memory?

One way to extend the amount of memory accessible to a program is to use disk. The area of the hard disk that stores the RAM image is called a page file. In short, virtual memory is a technique to execute processes that may not be completely in main memory. It abstracts main memory into an extremely large storage array, which is mapped onto main memory and its backing store. The backing store is usually an area of the disk outside of the file system.

113. What is meant by atomicity?

In database systems, atomicity (or atomicness; from Greek a-tomos, undividable) is one of the ACID transaction properties. In an atomic transaction, a series of database operations either all occur, or nothing occurs. A guarantee of atomicity prevents updates to the database occurring only partially, which can cause greater problems than rejecting the whole series. For example: if one wants to transfer some amount of money from one account to another, a procedure is started to do it. If a failure occurs, then due to atomicity the amount is either transferred completely or not at all. Thus atomicity protects the user from losing money due to a failed transaction.

114. What info will be provided by sp_spaceused?

It displays the number of rows and the disk space reserved and used by a table or indexed view, or the disk space reserved and used by the whole database.
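For example (the table name is from the AdventureWorks sample used earlier and is illustrative):

EXEC sp_spaceused;                             -- space summary for the current database
EXEC sp_spaceused 'HumanResources.Employee';   -- space used by a single table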

115.sp_helpdb

It shows the database name, size, owner, database ID, creation date, status, and compatibility level.

116. How to get the current date? - SELECT GETDATE()

117. What info will you get from msconfig?

Startup options and boot.ini information.

118.Index rebuild and reorganize.

This topic describes how to reorganize or rebuild a fragmented index in SQL Server 2012 by using SQL Server Management Studio or Transact-SQL. The SQL Server Database Engine automatically maintains indexes whenever insert, update, or delete operations are made to the underlying data. Over time these modifications can cause the information in the index to become scattered in the database (fragmented). Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does not match the physical ordering inside the data file. Heavily fragmented indexes can degrade query performance and cause your application to respond slowly.
You can remedy index fragmentation by reorganizing or rebuilding an index. For partitioned indexes built on a partition scheme, you can use either of these methods on a complete index or a single partition of an index. Rebuilding an index drops and re-creates the index. This removes fragmentation, reclaims disk space by compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in contiguous pages. When ALL is specified, all indexes on the table are dropped and rebuilt in a single transaction. Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and nonclustered indexes on tables and views by physically reordering the leaf-level pages to match the logical, left to right, order of the leaf nodes. Reorganizing also compacts the index pages. Compaction is based on the existing fill factor value.
The first step in deciding which defragmentation method to use is to analyze the index to determine the degree of fragmentation. By using the system function sys.dm_db_index_physical_stats, you can detect fragmentation in a specific index, all indexes on a table or indexed view, all indexes in a database, or all indexes in all databases. For partitioned indexes, sys.dm_db_index_physical_stats also provides fragmentation information for each partition.
The result set returned by the sys.dm_db_index_physical_stats function includes the following columns:
avg_fragmentation_in_percent - The percent of logical fragmentation (out-of-order pages in the index).
fragment_count - The number of fragments (physically consecutive leaf pages) in the index.
avg_fragment_size_in_pages - Average number of pages in one fragment in an index.
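For example, a sketch of checking fragmentation for all indexes in the current database (LIMITED is the fastest scan mode):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.fragment_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON ips.object_id = i.object_id AND ips.index_id = i.index_id;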
After the degree of fragmentation is known, use the following thresholds to determine the best method to correct the fragmentation:
> 5% and <= 30% - ALTER INDEX REORGANIZE
> 30% - ALTER INDEX REBUILD WITH (ONLINE = ON)*
* Rebuilding an index can be executed online or offline. Reorganizing an index is always executed online. To achieve availability similar to the reorganize option, you should rebuild indexes online.
ALTER INDEX <index_name> ON <table_name> REBUILD;
ALTER INDEX <index_name> ON <table_name> REORGANIZE;
119. Explain briefly about RAID Systems
RAID systems are arrays of disk drives that provide fault tolerance, more storage capacity, and better performance for the disk subsystem, depending on the configuration. Although RAID hardware systems are not part of the SQL Server configuration, they directly affect SQL Server's performance. There are a variety of RAID levels, each of which uses a different algorithm for fault tolerance. The most common RAID levels used with SQL Server are 0, 1, 5, and 10.

RAID 0 is also known as disk striping because it creates a disk file system called a stripe set. RAID 0 gives the best performance for read and write operations because it spreads these operations across all the disks in the set. However, RAID 0 does not provide fault tolerance; if one disk fails, you lose access to all the data on the stripe set.
RAID 1, also known as disk mirroring, provides a redundant copy of the selected disk. RAID 1 improves read performance but can degrade the performance of write operations.

RAID 5, the most popular RAID level, stripes the data across the disks of the RAID set as does RAID 0, but it also adds parity information to provide fault tolerance. Parity information is distributed among all the disks. RAID 5 provides better performance than RAID 1. However, when a disk fails, read performance decreases.

RAID 10, or RAID 1+0, includes both striping without parity and mirroring. RAID 10 offers better availability and performance than RAID 5, especially for write-intensive applications.

120.What are the differences between Full and Simple recovery and Bulk log models?

A recovery model is a database configuration option that controls how transactions are logged, whether the transaction log is backed up, and what restore options are available for the database

In the Full recovery model, the database engine logs all operations onto the transaction log, and the database engine never truncates the log. The Full recovery model lets you restore a database to the point of failure.

In the Simple recovery model, the database engine minimally logs most operations and truncates the transaction log after each checkpoint. In the Simple recovery model, you cannot back up or restore the transaction log. Furthermore, you cannot restore individual data pages.

In the Bulk-Logged recovery model, the database engine minimally logs bulk operations such as SELECT INTO and BULK INSERT. In this recovery model, if a log backup contains any bulk operation, you can restore the database to the end of the log backup, not to a point in time. The Bulk-Logged recovery model is intended to be used only during large bulk operations.
121.What are the protocols available in sql server?

Shared memory
Named Pipes
TCP/IP
VIA
122.What is VLF? How to get information reg VLF?
Each transaction log file is divided logically into smaller segments called virtual log files. Virtual log files are the unit of truncation for the transaction log. When a virtual log file no longer contains log records for active transactions, it can be truncated and the space becomes available to log new transactions.
The smallest size for a virtual log file is 256 kilobytes (KB). The minimum size for a transaction log is 512 KB, which provides two 256-KB virtual log files. The number and size of the virtual log files in a transaction log increase as the size of the log file increases.
The following DBCC command can be used to determine how many virtual log files (VLFs) are present in your log file.
DBCC LOGINFO
123. How to recycle a SQL server log?

USE master
GO
DBCC ERRORLOG
GO
(EXEC sp_cycle_errorlog is the documented equivalent.)
124. What is meant by the ACID properties?
ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction. For example, a transfer of funds from one bank account to another, even though that might involve multiple changes (such as debiting one account and crediting another), is a single transaction.
Atomicity
Atomicity requires that each transaction is "all or nothing": if one part of the transaction fails, the entire transaction fails, and the database state is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors, and crashes.
Consistency
The consistency property ensures that any transaction will bring the database from one valid state to another. Any data written to the database must be valid according to all defined rules, including but not limited to constraints, cascades, triggers, and any combination thereof.
Isolation
Isolation refers to the requirement that no transaction should be able to interfere with another transaction. One way of achieving this is to ensure that no transactions that affect the same rows can run concurrently, since their sequence, and hence the outcome, might be unpredictable. This property of ACID is often partly relaxed due to the huge speed decrease this type of concurrency management entails.
Durability
Durability means that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements execute, the results need to be stored permanently.
125.How to hide an instance?
In SQL Server Configuration Manager, expand SQL Server Network Configuration, right-click Protocols for <server instance>, and then select Properties.
On the Flags tab, in the HideInstance box, select Yes, and then click OK to close the dialog box. The change takes effect immediately for new connections.
126. What is error 18456? How to troubleshoot it?

Error 18456 indicates a login failure. Based on the error state, we troubleshoot as follows:

Error state - Description
2 and 5 - Invalid user ID
6 - Attempt to use a Windows login name with SQL authentication
7 - Login disabled and password mismatch
8 - Password mismatch
9 - Invalid password
11 and 12 - Valid login but server access failure
13 - SQL Server service paused
18 - Change password required


127. How to get the OS free space from SQL Server

EXEC master..xp_fixeddrives

128.What is verbose log mode and how to enable it?
Verbose logging is a computer logging mode that records more information than the usual logging mode. (Verbose means "using more words than necessary".) Verbose logging options are usually enabled specifically for troubleshooting because they create large log files and can slow down performance.
129. How to enable and disable trace flag?

DBCC TRACEON (3205);  This flag is switched on only for the current connection.

The following example switches on trace flag 3205 globally
DBCC TRACEON (3205, -1);
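To disable a trace flag, or to see which flags are active, the companion commands are:

DBCC TRACEOFF (3205);        -- switch off for the current connection
DBCC TRACEOFF (3205, -1);    -- switch off globally
DBCC TRACESTATUS (-1);       -- list all globally enabled trace flags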

130.Time zone of Cluster logs?   GMT

131. Path of cluster log - C:\Windows\Cluster\Cluster.log. On a Windows Server 2008 cluster, open a command prompt and run "cluster log /g" to generate the cluster logs in %windir%\Cluster\Reports.

132. Difference between sp_who and sp_who2?

sp_who2 provides more information than sp_who, such as CPU time, disk I/O, and last batch time.

133.Difference between LooksAlive and IsAlive checks by Microsoft Cluster Service.

When an instance of SQL Server is running in a Clustered Environment, Microsoft Cluster Service performs two checks to confirm if the SQL Server resource is still available. One is LooksAlive and the other one is IsAlive check.

LooksAlive is a basic check in which the Cluster service queries the Windows Service Control Manager to check if the SQL Server service is still running. By default this check happens every 5 seconds. During IsAlive check the Cluster Service connects to the SQL Server instance with the help of c:\windows\system32\sqsrvres.dll and runs SELECT @@SERVERNAME against the instance. This check does not check if the user databases are online or not. It just checks the SQL Server instance availability. This check happens every 60 seconds by default.

During the IsAlive check the Cluster service connects to the SQL Server instance. What privileges does the Cluster service account need to have on the SQL Server instance?

The Cluster service account needs privileges sufficient to execute the SELECT @@SERVERNAME command against the SQL Server instance. Any user who has public rights on the instance can execute this query. By default, the Cluster service startup account is part of the local Administrators group on the cluster nodes. Until SQL Server 2005, the Builtin\Administrators group was granted sysadmin privileges on the instance during SQL Server installation, so the Cluster service startup account had system administrator privileges on the instance. During the hardening process it is a common practice to revoke access for the Builtin\Administrators login on the instance. If this is done on an instance running in a clustered environment, the Cluster service account will no longer have access to the instance, and as a result the SQL Server resource will not come online after hardening! A similar issue occurred in another team in my organization: they tightened the instance too hard, and the issue was resolved only after the Builtin\Administrators login was granted public or higher privileges on the instance.

134.What is the SQL Server Buffer Cache?
The Buffer Cache (sometimes called the Data Cache) is an area of the SQL Server Buffer Pool (which in turn is part of the overall SQL Server Process space) and in simple terms is the amount of your database data (data pages) that are stored in memory.

135.What is Buffer Cache Hit Ratio?
The performance measure buffer cache hit ratio expresses, as a percentage, how often a request for a database data page can be served from the Buffer Pool. The alternative is for SQL Server to fetch the data page from disk.

So for example, if you have a 500MB database and a server with 4GB of RAM, and SQL Server is configured to use all available memory, you will have a very good buffer cache hit ratio of 99% or above because your entire database can easily fit into memory.

How can I determine the Buffer Cache Hit Ratio of my server?
To determine your Buffer Cache Hit Ratio you can use the Windows Performance Monitor tool:
1. Go to: Start - Programs - Administrative Tools - Performance.
2. Right-click the graph and choose: Add Counters.
3. Select the performance object: SQLServer:Buffer Manager.
4. Add: Buffer Cache Hit Ratio.
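Alternatively, a sketch of reading the same counter from T-SQL via the sys.dm_os_performance_counters DMV (the ratio must be computed against its base counter):

SELECT 100.0 * a.cntr_value / b.cntr_value AS buffer_cache_hit_ratio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.object_name = b.object_name
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';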

135.CXPACKET wait type.

CXPACKET waits aren’t necessarily bad, or even a sign of a problem, they are normal and expected if queries are executing using parallelism. I wouldn’t recommend that CXPACKET alone ever be considered a reason to reduce or change the configuration of the ‘max degree of parallelism’ configuration option.

136.What is NUMA
Non-Uniform Memory Access (NUMA) is a computer memory design used in Multiprocessing, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors.
137. What is the Write-Ahead Log?
A write-ahead log ensures that no data modifications are written to disk before the associated log record.
SQL Server maintains a buffer cache into which it reads data pages when data must be retrieved. Data modifications are not made directly to disk, but are instead made to the copy of the page in the buffer cache. The modification is not written to disk until either the database is checkpointed, or the modifications must be written to disk so the buffer can be used to hold a new page. Writing a modified data page from the buffer cache to disk is called flushing the page. A page modified in the cache but not yet written to disk is called a dirty page.
At the time a modification is made to a page in the buffer, a log record is built in the log cache recording the modification. This log record must be written to disk before the associated dirty page is flushed from the buffer cache to disk. If the dirty page were flushed before the log record, it would create a modification on disk that could not be rolled back if the server failed before the log record were written to disk. SQL Server has logic that prevents a dirty page from being flushed before the associated log record. Because log records are always written ahead of the associated data pages, the log is called a write-ahead log.
138. How to set recovery model by using T-SQL?
ALTER DATABASE <database_name> SET RECOVERY FULL | SIMPLE | BULK_LOGGED
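For example, assuming a database named AdventureWorks:

ALTER DATABASE AdventureWorks SET RECOVERY SIMPLE;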

139. What is Live Lock?

A livelock is one where a request for an exclusive lock is denied continuously because a series of overlapping shared locks keeps interfering; the transactions keep adapting to each other, which prevents either from completing its task. In SQL Server, a livelock occurs when read transactions applied to a table force a write transaction to wait indefinitely. This is different from a deadlock, where both processes wait on each other.

A human example of livelock would be two people who meet face-to-face in a corridor: each moves aside to let the other pass, but they end up swaying from side to side without making any progress because they both move the same way at the same time and never get past each other.

140. What is the purpose of the log file?

The transaction log file keeps a record of all modifications made to the database.

141. Error 9002 - The transaction log file is full.
142. Error 1105 - Could not allocate space; the data file (filegroup) is full.
143. What is Copy-only Backup?

A copy-only backup is a SQL Server backup that is independent of the sequence of conventional SQL Server backups. Usually, taking a backup changes the database and affects how later backups are restored. However, occasionally it is useful to take a backup for a special purpose without affecting the overall backup and restore procedures for the database. Copy-only backups serve this purpose. The transaction log is never truncated after a copy-only backup.

The types of copy-only backups are as follows:

Copy-only full backups (all recovery models)

A copy-only full backup cannot serve as a differential base or differential backup and does not affect the differential base.

Copy-only log backups (full recovery model and bulk-logged recovery model only)

A copy-only log backup preserves the existing log archive point and, therefore, does not affect the sequencing of regular log backups. A copy-only log backup can sometimes be useful for performing an online restore.
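For example, a sketch assuming a database named AdventureWorks and an illustrative backup path:

BACKUP DATABASE AdventureWorks
TO DISK = 'g:\backups\AdventureWorks_copyonly.bak'
WITH COPY_ONLY;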

144. How to change owner of a database?

sp_changedbowner 'sa'
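On SQL Server 2005 and later, ALTER AUTHORIZATION is the newer equivalent (the database name is illustrative):

ALTER AUTHORIZATION ON DATABASE::AdventureWorks TO sa;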

145. How to verify the last LSN of a backup?

RESTORE HEADERONLY FROM DISK = 'g:\backups\adventureworks.bak'

146. How to verify the backup file?

RESTORE VERIFYONLY FROM DISK = 'g:\backups\adventureworks.bak' (but take the backup with BACKUP DATABASE ... WITH CHECKSUM so there is a checksum to verify).

147. What is meant by piecemeal restore?

Piecemeal restore, which was introduced in SQL Server 2005, allows databases that contain multiple filegroups to be restored and recovered in stages. Piecemeal restore involves a series of restore sequences, starting with the primary filegroup and, in some cases, one or more secondary filegroups. Piecemeal restore maintains checks to ensure that the database will be consistent in the end. After the restore sequence is completed, recovered files, if they are valid and consistent with the database, can be brought online directly.
Piecemeal restore works with all recovery models, but is more flexible for the full and bulk-logged models than for the simple model.
Every piecemeal restore starts with an initial restore sequence called the partial-restore sequence. Minimally, the partial-restore sequence restores and recovers the primary filegroup and, under the simple recovery model, all read/write filegroups. During the piecemeal-restore sequence, the whole database must go offline. Thereafter, the database is online and restored filegroups are available. However, any unrestored filegroups remain offline and are not accessible. Any offline filegroups, however, can be restored and brought online later by a file restore.
Regardless of the recovery model that is used by the database, the partial-restore sequence starts with a RESTORE DATABASE statement that restores a full backup and specifies the PARTIAL option. The PARTIAL option always starts a new piecemeal restore; therefore, you must specify PARTIAL only one time in the initial statement of the partial-restore sequence. When the partial restore sequence finishes and the database is brought online, the state of the remaining files becomes "recovery pending" because their recovery has been postponed.
Subsequently, a piecemeal restore typically includes one or more restore sequences, which are called filegroup-restore sequences. You can wait to perform a specific filegroup-restore sequence for as long as you want. Each filegroup-restore sequence restores and recovers one or more offline filegroups to a point consistent with the database. The timing and number of filegroup-restore sequences depends on your recovery goal, the number of offline filegroups you want to restore, and on how many of them you restore per filegroup-restore sequence.
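A sketch of a partial-restore sequence under the full recovery model, assuming a database named AdventureWorks and illustrative backup paths:

RESTORE DATABASE AdventureWorks FILEGROUP = 'PRIMARY'
FROM DISK = 'g:\backups\AdventureWorks_full.bak'
WITH PARTIAL, NORECOVERY;
RESTORE LOG AdventureWorks
FROM DISK = 'g:\backups\AdventureWorks_log.trn'
WITH RECOVERY;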
148. What is the default collation for SQL Server? What is collation?
A collation is the set of rules that determine how character data is sorted and compared, including case and accent sensitivity. The default collation is chosen during setup based on the Windows system locale. Typical defaults:
SQL 2000: SQL_Latin1_General_CP1_CI_AS
SQL 2005: Latin1_General_CI_AS
SQL 2008 R2: SQL_Latin1_General_CP1_CI_AS

149.What is Full table scan?

When figuring out how best to access data, the SQL Server query optimizer takes the query that you submit and performs some analysis of the objects involved. It tries to determine what indexes (just like the numbering of pages in a book) are available and whether they are useful to your query, and whether the criteria in the WHERE clause of the query allow it to use any of the indexes in an efficient manner. When there is no other efficient way to access the data in the tables involved in the query, the query processor processes all of the rows in the table one at a time to give you the results. This is referred to as a full table scan.
150. What are the instance aware services and instance unaware services ?
Instance-aware services are associated with a specific instance of SQL Server, and have their own registry hives. We can install multiple copies of instance-aware services by running SQL Server Setup for each component or service.
1.SQL Server 
2.SQL Server Agent
3.Analysis Services 
4.Reporting Services
5.Full-text search
Instance-unaware services are shared among all installed SQL Server instances. They are not associated with a specific instance, are installed only once, and cannot be installed side-by-side.
1.Integration Services 
2.SQL Server Browser
3.SQL Server Active Directory Helper
4.SQL Writer

151. How to check how many users are connected to instance?
   SELECT * FROM sys.dm_exec_connections
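To get just the count rather than the full detail, a simple variation:

SELECT COUNT(*) AS connection_count FROM sys.dm_exec_connections;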

152.Max no of databases per instance -32,767

Maximum no of files in a database? - 32,767
Maximum no of file groups in a database - 32,767
Maximum size of row – Limited by available storage.
Maximum no of connections - 32,767
153. Compare the row counts in these tables by using the COUNT(*) function
SELECT COUNT(*) FROM tb1
SELECT COUNT(*) FROM tb2
154. How deadlock victim is selected?

Instance of SQL Server chooses the session that is less expensive to roll back as the deadlock victim

155.Default size of system databases.

 master - 11.0 MB
tempdb -8.0 MB
model -0.75 MB
msdb -12.0 MB

156. Default recovery model of system databases
Master – Simple
Model – Full
MSDB – Simple
Resource - --
Tempdb – Simple.

157 . What is suspended state?
It means that the request is currently not active because it is waiting on a resource. The resource can be an I/O for reading a page, communication on the network, or a lock or a latch. The request becomes active again once the resource it is waiting for becomes available.

158. Where will you find the SQL 2000 setup log files
To locate any problems during setup, check the sqlstp.log, setup.log and errorlog files.

The following files, found on your server, can provide valuable error messages if your SQL Server installation fails:

sqlstp.log (located in your \Windows or \WinNT directory)
errorlog file (located in the \Log directory of the target installation directory)
setup.log file (located in your \Windows or \WinNT directory)

159. What is a thread and a process? Differentiate between them.

Processes and Threads

An instance of a running application is known as a process. Every process has at least one thread (the main thread) but can have many. Each process is allotted its own virtual memory address space, and all threads within the process share this virtual memory space. Each thread in a process gets its own set of volatile registers; a volatile register is the software equivalent of a CPU register. Processes can be initiated by many different types of applications: console apps, graphical user interface (GUI) apps, Windows services, out-of-process COM servers, and so on are examples of EXEs that can be executed to instantiate a process. SQL Server can run as both a console app and a Windows service.

160.Differentiate between user mode and kernel mode

User Mode vs. Kernel Mode

Windows uses two processor modes: user mode and kernel mode. User application code runs in user mode; OS code and device drivers run in kernel mode.
The Intel x86 family of processors actually supports four operating modes (also known as rings). These are numbered 0 through 3, and each is isolated from the others by the hardware, so a crash in a less-privileged mode will not destabilize more-privileged modes.

161. SQL 2000 setup fails with "Pending file rename operation". What to do?

Open the registry, navigate to HKLM\SYSTEM\CurrentControlSet\Control\Session Manager, and delete the PendingFileRenameOperations value.
162. What is an execution plan?
An execution plan, simply put, is the result of the query optimizer's attempt to calculate the most efficient way to implement the request represented by the T-SQL query you submitted.
Execution plans can tell you how a query will be executed, or how a query was executed. They are, therefore, the DBA's primary means of troubleshooting a poorly performing query. Rather than guess at why a given query is performing thousands of scans, putting your I/O through the roof, you can use the execution plan to identify the exact piece of SQL code that is causing the problem. For example, it may be scanning an entire table's worth of data when, with the proper index, it could simply pull out only the rows you need. All this and more is displayed in the execution plan.
163. If you bring a database online from offline, it goes through some phases. What are they?
A restore is a multiphase process. The possible phases of a restore include the data copy, redo (roll forward), and undo (roll back) phases:

•The data copy phase involves copying all the data, log, and index pages from the backup media of a database to the database files.

•The redo phase applies the logged transactions to the data copied from the backup to roll forward that data to the recovery point. At this point, a database typically has uncommitted transactions and is in an unusable state. In that case, an undo phase is required as part of recovering the database.

•The undo phase, which is the first part of recovery, rolls back any uncommitted transactions and makes the database available to users. After the roll back phase, subsequent backups cannot be restored.
163.What is affinity mask option?
Use the affinity mask option to increase performance on symmetric multiprocessor (SMP) systems (with more than four microprocessors) operating under heavy load. You can associate a thread with a specific processor and specify which processors Microsoft SQL Server™ will use. You can exclude SQL Server activity from processors given specific workload assignments by the Windows NT 4.0 or Windows 2000 operating system.
If you set a bit representing a processor to 1, that processor is selected for thread assignment. When you set affinity mask to 0 (the default), the Windows NT 4.0 or Windows 2000 scheduling algorithms set the thread's affinity. When you set affinity mask to any nonzero value, SQL Server affinity interprets the value as a bit mask that specifies those processors eligible for selection. Excluding SQL Server threads from running on particular processors helps evaluate the system's handling of processes specific to Windows NT 4.0 or Windows 2000. For example, you can use affinity mask to evaluate whether an additional network interface card (NIC) increases performance or assess NIC performance with increasing loads.
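A sketch of setting the mask with sp_configure (affinity mask is an advanced option; the value 3 is binary 11, binding SQL Server to CPUs 0 and 1):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 3;  -- use CPUs 0 and 1
RECONFIGURE;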
164. What is “Native client”?
SQL Server Native Client is a data access library that contains the SQL Server ODBC driver and the OLE DB provider for SQL Server, supporting native connectivity to Microsoft SQL Server.
165. What is SQL server Browser?
The SQL Server Browser service introduced in SQL Server 2005, runs as a Windows service and it is an extension of SQL Server Resolution Protocol (SSRP) of SQL Server 2000. Like SSRP it plays an important role while working with more than one SQL Server instance installed on the same machine. The basic purpose of the SQL Server Browser service is to provide instance and port information to incoming connection requests.
166. What is default count of SQL server logs?
  1 current log & 6 Archive
167. What is HOTFIX, CU, Service pack?
Service Pack

Definition: A service pack is a tested, cumulative set of all hotfixes, security updates, critical updates, and updates. Service packs may also contain additional fixes for problems that are found internally since the release of the product and a limited number of customer-requested design changes or features.
Cumulative update (CU)
Definition: A CU is a roll-up update that contains all previous critical on-demand hotfixes to date. Additionally, a CU contains fixes for issues that meet the hotfix acceptance criteria. These criteria may include the availability of a workaround, the effect on the customer, the reproducibility of the problem, the complexity of the code that must be changed, or other reasons.
Hotfix
Definition: A hotfix package does not replace a service pack. A hotfix package is optional and can be installed or uninstalled at any time. Additionally, hotfix packages are cumulative; therefore, the latest on-demand hotfix package or cumulative update hotfix package includes all previously released hotfixes.
168. How to recover from an msdb crash?
1. Copy the msdb database files (mdf and ldf) from another instance of the same version to the corrupted msdb database location.
2. Restart SQL Server.
3. The msdb database is now visible on the instance whose msdb was corrupted.
4. Restore the msdb database from a backup taken before the corruption.
5. You can now see all jobs.
169. What is the purpose of the Surface Area Configuration Tool in SQL Server 2005?
Surface Area Configuration tool is used to selectively install and start key services and features to reduce attackable surface of the system. A system Admin can use this tool to enable/disable features after the installation has happened. This tool is a set of dialog boxes which allows a user to enable/disable various stored procedures, windows/web services, remote client connectivity etc.
170. What is “RTM”?
Short for release to manufacturing, the version of a software product that is given to manufacturers to bundle into future versions of their hardware products. RTM versions are typically released to manufacturers before they are released to the general public so that the manufacturers can work out any bugs the software may encounter with hardware devices. The release of an RTM version does not necessarily mean that the creators have worked out all the problems with the software; there still may be more versions of the product before it is released to the general public.
171.What is MSDTC?
MSDTC is used by SQL Server and other applications when they want to make a distributed transaction spanning more than one machine. A distributed transaction is simply a transaction that spans two or more machines. The basic concept is that machine 1 starts a transaction and does some work; it then connects to machine 2 and does some work there. If the work on machine 2 fails and is canceled, the work on machine 1 needs to be rolled back as well.
172.What is Heap structure?
In SQL Server 2005, data is stored within tables. Data within a table is grouped into allocation units based on column data types, meaning data of one kind of data type is stored together in allocation units. Data within an allocation unit is stored in pages, each 8 KB in size. A group of eight pages stored together is referred to as an extent. Pages within a table store the data rows in a structure that helps to search and locate data faster. If the data of a table is not logically sorted (in other words, no order of data is specified in the table), it is called a heap structure.
173.What is MDAC?
Microsoft Data Access Components (commonly abbreviated MDAC; also known as Windows DAC) is a framework of interrelated Microsoft technologies that allows programmers a uniform and comprehensive way of developing applications that can access almost any data store.
174. How to take a back-up with a password?
BACKUP DATABASE Northwind TO DISK='C:\Northwind.BAK' WITH MEDIAPASSWORD='mssqltips'
The GUI does not support this option, and we need to provide the password while restoring.
RESTORE DATABASE Northwind FROM DISK='C:\Northwind.BAK' WITH MEDIAPASSWORD='mssqltips'
176. What operations are minimally logged in the bulk-logged recovery model?
1) Bulk import operations (bcp, BULK INSERT, and INSERT ... SELECT), when the bulk import into the table qualifies as minimally logged.
2)SELECT INTO operations.
3)WRITETEXT and UPDATETEXT statements when inserting or appending new data into the text, ntext, and image data type columns.
4)The minimally logged index operations are as follows:
CREATE INDEX operations (including indexed views) & ALTER INDEX REBUILD or DBCC DBREINDEX operations.
176. What is an endpoint?
Endpoints control the capability to connect to an instance. In SQL Server 2005, you can create two types of endpoints: TCP or HTTP. Database mirroring uses TCP endpoints for communications. HTTP endpoints, on the other hand, service SOAP requests.
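For example, a sketch of the TCP endpoint database mirroring uses (the endpoint name and port 5022 are conventional but illustrative):

CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);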
177. What are the jobs created while configuring Log shipping?
Primary Server – Backup Transaction log
Secondary Server – Copy job & Restore job. If a monitor instance is configured, an alert job is also created.

178. SQL Server Architecture

Components of the SQL Server Engine


The general architecture of SQL Server has four major components: the relational engine (also called the query processor), the storage engine, the SQLOS, and the protocol layer. Every batch submitted to SQL Server for execution, from any client application, must interact with these four components.

The protocol layer receives the request and translates it into a form that the relational engine can work with; it also takes the final results of any queries, status messages, or error messages and translates them into a form the client can understand before sending them back to the client. The relational engine layer accepts T-SQL batches and determines what to do with them. For T-SQL queries and programming constructs, it parses, compiles, and optimizes the request and oversees the process of executing the batch. As the batch is executed, if data is needed, a request for that data is passed to the storage engine. The storage engine manages all data access, both through transaction-based commands and bulk operations such as backup, bulk insert, and certain DBCC commands. The SQLOS layer handles activities that are normally considered to be operating system responsibilities, such as thread management (scheduling), synchronization primitives, deadlock detection, and memory management, including the buffer pool.



The SQLOS
A single application layer has been designed to manage all operating system resources that are specific to SQL Server. The two main functions of SQLOS are scheduling and memory management, both of which we'll talk about in detail later in this section.

Synchronization
Memory Brokers
SQL Server Exception Handling
Deadlock Detection


Memory

SQL Server organizes the memory it allocates into two regions:
1.Buffer Pool
2.Mem To Leave

The buffer pool is the primary allocation pool in SQL Server. It serves primarily as a data and index page cache, and is also used for memory allocations smaller than 8 KB.
MemToLeave consists of the virtual memory space within the user-mode address space that is not used by the buffer pool.
If AWE is enabled, the AWE-mapped memory becomes an extension of the buffer pool.

How are memory allocations calculated?

On startup, SQL Server looks at the physical memory in the machine. It reserves the MemToLeave region, then allocates the entire buffer pool region, and finally comes back and frees the MemToLeave region; this ensures that MemToLeave is a single contiguous region.
MemToLeave serves extended stored procedures, COM objects, linked servers, SQLCLR, and allocations larger than 8 KB, which are called multi-page allocations.

MemToLeave is calculated as follows:

Max worker threads x stack size + default reservation size = 384 MB by default.
(On 32-bit SQL Server the defaults work out to 255 worker threads x 0.5 MB of stack each, roughly 128 MB, plus a 256 MB default reservation, for approximately 384 MB.)

The buffer pool is calculated as follows:
If AWE is not enabled, the buffer pool is sized from the memory remaining in the user address space after subtracting MemToLeave (384 MB by default).
Once the user address space is calculated, the previously calculated MemToLeave is subtracted from it. The remainder is then compared with the physical memory available in the server, and the lower of the two is taken; that value is in turn compared with the max server memory setting, and the lower of the two becomes the buffer pool.

If AWE is enabled, then the upper limit of the buffer pool is the smaller of the physical RAM and the max server memory setting.

Explain about AWE

If you are using SQL Server 2000 Standard Edition under Windows NT 4.0, Windows 2000 (any version), or Windows 2003 (any version), or are running SQL Server 2000 Enterprise Edition under the Standard Edition of Windows NT 4.0, Windows 2000, or Windows 2003, or if your server has 4GB or less of RAM, the "awe enabled" option should always be left at the default value of 0, which means that AWE memory is not being used.
The AWE (Advanced Windowing Extensions) API allows applications (that are written to use the AWE API) to run under Windows 2000 Advanced Server or Windows 2000 Datacenter Server (or Windows 2003 Enterprise and Datacenter Editions) to access more than 4GB of RAM.
SQL Server 2000 and SQL Server 2005 Enterprise Edition (not SQL Server 2000 or 2005 Standard Edition) are AWE-enabled and can take advantage of RAM in a server of 4GB or more.
If the operating system is Windows 2000 Advanced Server, SQL Server 2000 Enterprise Edition can use up to 8GB of RAM. If the operating system is Windows 2000 Datacenter Server, SQL Server 2000 Enterprise can use up to 64GB of RAM.
SQL Server 2005 Enterprise Edition supports AWE memory, allowing the use of physical memory over 4 gigabytes (GB) on 32-bit versions of Microsoft Windows operating systems. Up to 64 GB of physical memory is supported.
In order for the operating system and SQL Server 2000 or SQL Server 2005 Enterprise Edition to take advantage of the additional RAM, two steps must be completed.
Exactly how you configure AWE memory support depends on how much RAM your server has. To configure Windows 2000 or 2003, you must enter one of the following switches in the boot line of the boot.ini file, and reboot the server:
         4GB RAM:  /3GB (AWE support is not used)
         8GB RAM:  /3GB /PAE
         16GB RAM:  /3GB /PAE
         16GB + RAM:  /PAE
The /3GB switch is used to tell SQL Server to take advantage of 3GB out of the base 4GB of RAM that Windows 2000 supports natively. If you don't specify this option, then SQL Server will only take advantage of 2GB of the first 4GB of RAM in the server, essentially wasting 1GB of RAM.
AWE memory technology is used only for the RAM that exceeds the base 4GB of RAM; that's why the /3GB switch is needed to use as much of the RAM in your server as possible. If your server has 16GB or less of RAM, then using the /3GB switch is important. But if your server has more than 16GB of RAM, then you must not use the /3GB switch. The reason for this is that the 1GB of additional RAM provided by adding the /3GB switch is needed by the operating system in order to take advantage of all of the extra AWE memory. In other words, the operating system needs 2GB of RAM itself to manage the AWE memory if your server has more than 16GB of RAM. If 16GB or less of RAM is in a server, then the operating system only needs 1GB of RAM, leaving the other 1GB of RAM for use by SQL Server.
Once this step is done, the next step is to set the "awe enabled" option to 1 within SQL Server 2000 or 2005 Enterprise Edition, and then restart the SQL Server service. Only at this point will SQL Server be able to use the additional RAM in the server.
One caution about using the "awe enabled" setting is that after turning it on, SQL Server no longer dynamically manages memory. Instead, it takes all of the available RAM (except about 128MB, which is left for the operating system). If you want to prevent SQL Server from taking all of the RAM, you must set the "max server memory" option (described in more detail later in this article) to a figure that limits SQL Server to the amount of RAM you specify.
If you find that you are running into a memory bottleneck, and assuming you have the money to spend, SQL Server 2000 and SQL Server 2005 Enterprise Edition can support up to 64GB of RAM. How much RAM SQL Server 2000 or SQL Server 2005 Enterprise Edition can use depends on which version of Windows 2000 or Windows 2003 you are using and how much RAM your server can support. Assuming your server can handle it, SQL Server 2000 Enterprise Edition supports up to 8GB under Windows Advanced Server 2000 and Windows 2003 Enterprise, and up to 64GB under Windows Data Center for both Windows 2000 and Windows 2003.
Normally, 32-bit CPUs, such as the Pentium family of processors, can only support up to 4GB of RAM because of its limited address space. To get around this limitation, SQL Server 2000 and SQL Server 2005 Enterprise Edition supports a feature called AWE (Address Windowing Extensions) that allows up to 64GB of RAM to be addressed.
Assuming you have configured the appropriate hardware and software, AWE support is not turned on automatically; you have to do this step manually. To turn AWE support on, you must change the "awe enabled" advanced SQL Server 2000 or 2005 option from 0 to 1. For example, to turn on AWE support:
SP_CONFIGURE 'show advanced options', 1
RECONFIGURE                             
GO
SP_CONFIGURE 'awe enabled', 1
RECONFIGURE
GO
AWE memory cannot be dynamically managed, like memory is normally managed in SQL Server. This means that SQL Server will automatically grab all the RAM it can when it starts (except for about 128MB, which is reserved for the operating system), but it will not release any of this RAM until SQL Server is shut down. If your server is a dedicated SQL Server, then this might be OK. But if you are running other software on the server, or are running multiple instances of SQL Server, then you must specify the maximum amount of RAM that SQL Server can grab when it is started. This can be done using the "max server memory" configuration option. If you change this setting, you will have to stop and start the mssqlserver service in order for the new setting to take effect.
To set the maximum amount of memory that AWE memory can access, you can use SQL Server's "max server memory" configuration option. For example:
SP_CONFIGURE 'max server memory', 4096
RECONFIGURE
GO
In the above example, we are telling SQL Server to only use 4GB of RAM, leaving any other RAM available in the server free for other applications.
While multiple instances of SQL Server can be used with AWE memory, you probably won't want to do so, as it can be a headache to manage. In fact, running multiple instances of SQL Server in AWE memory defeats the purpose of more RAM in the first place. Generally, your goal of using AWE memory should be to support a single, very large instance of SQL Server, not lots of smaller instances running on a single server.


Explain about checkpoint

CHECKPOINT
Forces all dirty pages for the current database to be written to disk. Dirty pages are data or log pages that were modified after being read into the buffer cache, where the modifications have not yet been written to disk. For more information about log truncation, see Truncating the Transaction Log.
Syntax
CHECKPOINT
Remarks
The CHECKPOINT statement saves time in a subsequent recovery by creating a point at which all modifications to data and log pages are guaranteed to have been written to disk.
Checkpoints also occur:
When a database option is changed with ALTER DATABASE. A checkpoint is executed in the database in which the option is changed.
When a server is stopped, a checkpoint is executed in each database on the server. These methods of stopping Microsoft® SQL Server™ 2000 checkpoint each database:
Using SQL Server Service Manager.
Using SQL Server Enterprise Manager.
Using the SHUTDOWN statement.
Using the Windows NT command net stop mssqlserver on the command prompt.
Using the services icon in the Windows NT control panel, selecting the mssqlserver service, and clicking the stop button.
The SHUTDOWN WITH NOWAIT statement shuts down SQL Server without executing a checkpoint in each database. This may cause the subsequent restart to take a longer time than usual to recover the databases on the server.
SQL Server 2000 also automatically checkpoints any database when the lesser of these conditions occurs:
The active portion of the log exceeds the size that the server could recover in the amount of time specified in the recovery interval server configuration option.
The database is in log truncate mode and the log becomes 70 percent full.
A database is in log truncate mode when both these conditions are TRUE:
The database is using the simple recovery model.
One of these events has occurred after the last BACKUP DATABASE statement referencing the database was executed:
A BACKUP LOG statement referencing the database is executed with either the NO_LOG or TRUNCATE_ONLY clauses.
A nonlogged operation is performed in the database, such as a nonlogged bulk copy operation or a nonlogged WRITETEXT statement is executed.
An ALTER DATABASE statement that adds or deletes a file in the database is executed.
Permissions
CHECKPOINT permissions default to members of the sysadmin fixed server role and the db_owner and db_backupoperator fixed database roles, and are not transferable.
Checkpoints and the Active Portion of the Log
Checkpoints minimize the portion of the log that must be processed during a full recovery of a database. During a full recovery, two types of actions must be performed:
The log may contain records of modifications not flushed to disk before the system stopped. These modifications must be rolled forward.
All the modifications associated with incomplete transactions (transactions for which there is no COMMIT or ROLLBACK log record) must be rolled back.
Checkpoints flush dirty data and log pages from the buffer cache of the current database, minimizing the number of modifications that have to be rolled forward during a recovery.
A SQL Server 2000 checkpoint performs these processes in the current database:
Writes to the log file a record marking the start of the checkpoint.
Stores information recorded for the checkpoint in a chain of checkpoint log records. The LSN of the start of this chain is written to the database boot page.
One piece of information recorded in the checkpoint records is the LSN of the first log image that must be present for a successful database-wide rollback. This LSN is called the Minimum Recovery LSN (MinLSN) and is the minimum of:
The LSN of the start of the checkpoint.
The LSN of the start of the oldest active transaction.
The LSN of the start of the oldest replication transaction that has not yet replicated to all subscribers.
Another piece of information recorded in the checkpoint records is a list of all outstanding, active transactions.
Deletes all log records before the new MinLSN, if the database is using the simple recovery model.
Writes to disk all dirty log and data pages.
Writes to the log file a record marking the end of the checkpoint.
The portion of the log file from the MinLSN to the last-written log record is called the active portion of the log. This is the portion of the log required to do a full recovery of the database. No part of the active log can ever be truncated. All log truncation must be done from the parts of the log before the MinLSN.
This is a simplified version of the end of a transaction log with two active transactions. Checkpoint records have been compacted to a single record.

LSN 148 is the last record in the transaction log. At the time the checkpoint recorded at LSN 147 was processed, Tran 1 had been committed and Tran 2 was the only active transaction. That makes the first log record for Tran 2 the oldest log record for a transaction active at the time of the last checkpoint. This makes LSN 142, the begin transaction record for Tran 2, the MinLSN.
Checkpoints occur:
When a CHECKPOINT statement is executed. The current database for the connection is checkpointed.
When ALTER DATABASE is used to change a database option. ALTER DATABASE checkpoints the database when database options are changed.
When an instance of SQL Server is stopped by:
Executing a SHUTDOWN statement.
Using the SQL Server Service Control Manager to stop the service running an instance of the database engine.
Either of these methods checkpoints each database in the instance of SQL Server.
When an instance of SQL Server periodically generates automatic checkpoints in each database to reduce the amount of time the instance would take to recover the database.
Automatic Checkpoints
SQL Server 2000 always generates automatic checkpoints. The interval between automatic checkpoints is based on the number of records in the log, not time. The time interval between automatic checkpoints can be highly variable. The time interval between automatic checkpoints is long if few modifications are made in the database. Automatic checkpoints occur frequently if a lot of data is modified.
The interval between automatic checkpoints is calculated from the recovery interval server configuration option. This option specifies the maximum time SQL Server should use to recover a database during a system restart. SQL Server estimates how many log records it can process in the recovery interval during a recovery operation. The interval between automatic checkpoints also depends on whether or not the database is using the simple recovery model.
If the database is using either the full or bulk-logged recovery model, an automatic checkpoint is generated whenever the number of log records reaches the number SQL Server estimates it can process during the time specified in the recovery interval option.
If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:
The log becomes 70 percent full.
The number of log records reaches the number SQL Server estimates it can process during the time specified in the recovery interval option.
Automatic checkpoints truncate the unused portion of the transaction log if the database is using the simple recovery model. The log is not truncated by automatic checkpoints if the database is using the full or bulk-logged recovery models. For more information, see Truncating the Transaction Log.
Long-Running Transactions
The active portion of the log must include every part of all uncommitted transactions. An application that starts a transaction and does not commit it or roll it back prevents SQL Server from advancing the MinLSN. This can cause two types of problems:
If the system is shut down after the transaction has performed many uncommitted modifications, the recovery phase of the subsequent restart can take considerably longer than the amount of time specified in the recovery interval option.
The log may grow very large because the log cannot be truncated past the MinLSN. This happens even if the database is using the simple recovery model, in which the transaction log is normally truncated on each automatic checkpoint.
Replication Transactions
The active portion of the log must also contain all transactions marked for replication, but that have not yet been replicated to a subscriber. If these transactions are not replicated in a timely manner, they can also prevent truncation of the log.


How to recover a database from suspect mode
Step 1:
Open SQL Server Management Studio and connect to the SQL Server. Make sure that your current database is set to master
Step 2:
Clear the suspect mode of the database using sp_resetstatus DatabaseName. This will clear the suspect flag and make the database available online
Step 3:
Change the database status to Emergency using the following command. Emergency mode allows you to access the databases as normal but with no consistency guarantee. This option also allows us to export the table data so that we can minimize the damage.
ALTER DATABASE DatabaseName SET EMERGENCY;
Step 4:
Restrict database to single user by changing the access mode as mentioned below
ALTER DATABASE DatabaseName SET SINGLE_USER;
Step 5:
Run the CHECKDB command with the REPAIR_ALLOW_DATA_LOSS option. This should be tried as a last option, as it always behaves the way it is named: we cannot be sure what data it removes.
DBCC CHECKDB (DatabaseName, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS, ALL_ERRORMSGS;
Even though step 5 is expected to result in data loss while repairing the database, I was lucky to restore the database without any loss. My job would have been much easier if the admin of the database server had followed a few general guidelines that allow recovery of a database in case of unexpected failures. Here are a few of them:
Back up your data frequently (once every two days, daily, or even more frequently depending on your needs).
Have multiple backups. Move the backups to external drives or tapes frequently.
Validate that your backups are good by performing trial restores to an alternate server.
Run CHECKDB regularly, if possible in your environment, so you know how long it normally takes.

A guide to recover a database out from Suspect mode
The first step is always to inform your customer about the outage before they come back to you. I find that most of the time this proactive step becomes a life saver. This way, you will not be questioned about the time you spend bringing the database online.
Then refer to the SQL Server error log to find the root cause of the issue. It will give you a clear reason why the database is in Suspect mode. From SQL Server 2005 onwards, I recommend using a filter while viewing the log, with "Database name" as the filter criterion. That way you will only see the logs related to that particular database, and you can then refer to the latest log(s) for the root cause. Up until SQL Server 2000, the only option is reading the log from the latest entry backwards to find the root cause. Assuming you have found the reason why the database is in Suspect mode, you now need to take the appropriate step to fix the issue.
Here I will discuss some of the possible issues which can put a database in Suspect mode and recommend appropriate resolution.
Possibility 1: Someone deleted or misplaced a data/log file when SQL Server was offline, and the database cannot start because of the missing file. The solution is to place the missing data/log file in the proper location. The SQL Server error log message will give you the exact name and path of the missing file. Once you place the file, execute the command below to bring your database online with no data loss:
RESTORE DATABASE DatabaseName WITH RECOVERY
Possibility 2: SQL Server could not access or place an exclusive lock on the data or log file while coming online. Typically I experience this when SQL Server is shared with some other tool (like an antivirus scanner) which puts an exclusive lock on the data/log file. To resolve it, use Process Explorer to kill the file handle that placed the lock on the file. You may want to involve your system admins to get this step executed. Then execute the command below and you will have your database online with no data loss:
RESTORE DATABASE DatabaseName WITH RECOVERY
Possibility 3: This is the worst-case scenario. The database is in Suspect mode because of a corrupted transaction. This is bad news, as you may have to lose data at this point unless you have a good backup! This is also the most common case I have seen for putting an OLTP database in Suspect mode.
The root cause of this issue stems from a guarantee SQL Server makes to ensure transaction consistency under the fundamental ACID properties of an RDBMS. Most likely, SQL Server abruptly went down or was restarted in the middle of a transaction, and while coming back up it could not complete (commit or roll back) the transaction.
At this point, I recommend you make a decision. If you have a good backup and can restore the database in an acceptable time up to an acceptable point, then go ahead with the restore. But if a restore is not an option at this point, then you have to execute the steps below.
Caution! The steps below will cause you to lose data and hence are extremely dangerous to execute. I recommend trying all other possible options, including calling Microsoft Support, before executing them.
1. Switch the Emergency mode on for the database using the command below:
ALTER DATABASE DatabaseName SET EMERGENCY;
2. Then execute the command below:
DBCC CHECKDB ('DatabaseName', REPAIR_ALLOW_DATA_LOSS)
Please note that, as stated above, this is an extremely dangerous command to execute. It is a one-way command (that is, you cannot roll back this execution once you have started it) which can cause loss of data or database integrity. Technically, by executing this command you are actually authorizing SQL Server to force transactional recovery to run and skip the errors. If it faces errors, this operation scavenges as much out of the transaction log as it can and then rebuilds the transaction log from scratch. So taking this step is really the last resort you should try after every other attempt fails.
After this operation is complete you will have your database back online. However, you'll most likely have lost a bunch of data, broken constraints and inherent business logic in the database but at least you haven't lost everything. You may want to involve your customers to run a sanity check on the data quality at this point.
Possibility 4: If you find out that your data file is corrupted, then most likely you have an OS- or hardware-level failure. For this type of failure, or anything really weird which you cannot fix easily within an acceptable amount of time, your best bet is to restore from backup without wasting time in a "Code Red" situation.
So in this article, I covered some possible reasons which can put a database in "Suspect" mode and then the options to get back in business in the shortest possible time. To conclude, I would like to add the basic homework tips below so that you can be in good shape when handling this situation:
1. Always ensure that you have a good backup/DR strategy and that your customer is in agreement with the risk of possible data/time loss in case of disaster.
2. Do not ever attempt to "detach" a database which is in Suspect mode. This will do nothing, and you will probably lose your ability to repair it.
3. Read and understand the error log before taking any action and do not panic. Nothing in computer science is "unexplainable", so if you keep calm and apply intelligence, you will come out of any situation with honor.
http://www.sql-server-performance.com/2012/recovery-sql-server-suspect-mode/
So, you're out of options and you're going to have to recreate msdb. Here's what to do (change the directory paths to suit your installation):
1. Detach the damaged msdb. You can't just detach msdb because you're not allowed to detach system databases. However, you can if you start the server with trace flag 3608. I did this by shutting down the server, navigating to the directory 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn' and doing the following: start sqlservr.exe -c -T3608
2. Move or rename the damaged msdb files (msdbdata.mdf and msdblog.ldf in the 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data' directory)
3. Run the instmsdb.sql script in the 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Install' directory
4. Shut down and restart the server without the 3608 trace flag
This works on SQL Server 2000 as well.
http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/06/06/619304.aspx



High CPU Issues
Problem
Someone has reported a performance issue for your SQL Server application. When you look into the database server you see CPU utilization is very high and the SQL Server process is consuming most of the CPU. You launch SSMS and run sp_who2 and notice that there are a few SPIDs taking a long time to complete and these queries may be causing the high CPU pressure. At the server level you can only see the overall SQL Server process, but within SQL Server you can see each individual query that is running. Is there a way to tell how much CPU each SQL Server process is consuming? In this article I explain how this can be done.
Solution
Normally we monitor this type of issue using sp_who2, Activity Monitor or DMVs. If there are a lot of processes running on your instance and the CPU is very high, then it's hard to find the exact process eating up your CPU using just the SQL Server tools. One way to correlate the data between what is running within SQL Server and at the Windows level is to use SPID and KPID values to get the exact process.
Before discussing how to do this, we should discuss these terms:
SPID is the SQL Server process ID number and is assigned by SQL Server to each new connection. It starts at one and is globally unique. SPIDs 1 through 50 are reserved for system use and are not used for any user connections.
KPID is the kernel-process ID. Under SQL Server for Windows this is the thread ID number, also known as "ID Thread," and is assigned by Windows when the thread is created. The Thread ID number is a system-wide identifier that uniquely identifies the thread. KPID is visible by querying the KPID column of master..sysprocesses. It is only filled in for spid numbers four and higher. You can also get KPID/ID Thread from Windows Perfmon using the "Thread" object and the "ID Thread" counter.
I am sure you are familiar with the SPID value, but have you ever noticed KPID? Probably not, unless you queried the sysprocesses view directly. Let's look at an example of how this works.

Example
Step 1
If we look at Task Manager on the server we can see the CPU usage. So the first step is to verify that the SQL Server process (sqlservr), and not some other process, is responsible for the excessive CPU use. Remember, the “sqlservr” process is made up of many different threads, each one generally (but not always) representing a specific user connection. In the screenshot below we see the sqlservr process is taking approximately 70 percent of CPU.

Now that we know SQL Server is the main culprit and responsible for the high CPU, the next step is to find out which SQL Server process is causing the high CPU.
Step 2
Now we need to find out what SQL Server thread is causing the high CPU. Once we have this ID Thread we can correlate that ID thread (KPID) to the SPID in SQL Server.
We will use Performance Monitor to get this info. Type perfmon in a Windows CMD prompt or launch from Control Panel.
Click on Add counters and select the "Thread" object in the drop down.
Select these counters at the same time:
% Processor Time
ID Process
ID Thread
Thread State
Thread Wait Reason
In the right pane, you will see multiple instances from multiple applications, which are running on that server. Since we are looking for “sqlservr” select all of the instances that begin with “sqlservr” from the list box as shown below and click Add. You can add other counters as needed and will see below that I added a few additional counters.

Step 3
Press (Ctrl+R) or click on the view Report tab to change from graphical to report view as shown below. Here you should be able to find which ID Thread is eating up your CPU resources by using the scroll bar to see all instances.
Below we have identified which thread is causing the problem. Here you can see ID Thread 30 is taking 46.841% of the total CPU. You can also find the ID Thread which is 872. Now that we know the ID Thread, we can run a query in SSMS to find the actual query.

Step 4
Our last step is to correlate the Thread ID (KPID) identified in the last step to the SPID. To do this, run the following query in SSMS:
SELECT spid, kpid, dbid, cpu, memusage FROM sysprocesses WHERE kpid=872


Step 5
From the query above we can see SPID 71 is causing the issue. To find how many threads and open transactions it is running, we can run this query.
SELECT spid, kpid, status, cpu, memusage, open_tran, dbid FROM sysprocesses WHERE spid=71


Step 6
To get the exact query that is running, we can run DBCC INPUTBUFFER using the SPID. The output below shows us that a backup job is causing the CPU issues on our server.
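For example, using the SPID identified above:
DBCC INPUTBUFFER(71)   -- returns the last statement sent by SPID 71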

After identifying the SPID, we need to troubleshoot further.


Profiler Setup

To find what activity is associated with the cursor operation you will use SQL Server Profiler, a performance analysis tool that comes with SQL Server and can be used to trace events and activities on your SQL server.
Since Profiler tracing adds overhead, we must very narrowly define our criteria for analysis in Profiler in order to not overburden the server being monitored.  Since we are still operating from the assumption that we are searching for the root cause of an activity causing high CPU on SQL Server, it would not be helpful to add to the load unnecessarily.
Open Profiler and select New/Trace from the File menu.  Select the affected server from the dropdown list.
On the “General” tab enter the information about your trace.  Name your trace, select Template Name “Blank”, select a location to save the trace and set the maximum rows.  Since we have the option to save the trace information to a table we shall do so. 


Figure 1 Configuring Profiler trace

Select the “Events” tab and add the following events:
Cursors: CursorClose, CursorExecute, CursorOpen
Performance: Showplan Text
Stored Procedures: RPC:Completed, RPC:Starting, SP:StmtCompleted, SP:StmtStarting


Figure 2 Configuring Profiler events

These are the events that we will trace in Profiler.
Select the “Data Columns” tab and add the following to the “Columns” section:

Start Time, BinaryData, TextData. These columns will join EventClass and SPID, which are already present.


Figure 3 Configuring Profiler Columns

These are the data items that will be visible in the trace results.
Select the “Filters” tab to set up Profiler to only show the events from the SPID that we discovered in the earlier sections to be the offending process.  In the example picture we will presume that the offending SPID was found to be 111.  Enter 111 in the SPID/Equals filter to see only events associated with SPID 111. 


Figure 4 Configuring Profiler filter

Profiler Execution

Click the Run button and you should see data scrolling by.  If the operation you are trying to trace has already gone into the use of cursors you may have difficulties telling exactly what is causing the high CPU.  If you are fortunate, a quick examination of the trace results will point to the exact cause.
The trace below shows several select statements interspersed with the cursor executions.


Figure 5 Profile trace results

You should be able to analyze the trace results to determine your high CPU culprit, even if it involves a cursor action.  Some detective work may be required but it is still much simpler than guessing what SQL Server process is causing your problem.

System Monitor is generally the best means to determine if the server is CPU bound. You should look to see if the Processor:% Processor Time counter is high; values in excess of 80% processor time per CPU are generally deemed to be a bottleneck. You can also monitor the SQL Server schedulers using the sys.dm_os_schedulers view to see if the number of runnable tasks is typically nonzero.
1.Excessive compilation and recompilation.
You can use System Monitor (PerfMon) or SQL Trace (SQL Server Profiler) to detect excessive compiles and recompiles.

System Monitor (Perfmon)
SQL Server: SQL Statistics: Batch Requests/sec
SQL Server: SQL Statistics: SQL Compilations/sec
SQL Server: SQL Statistics: SQL Recompilations/sec
SQL Trace
If the PerfMon counters indicate a high number of recompiles, the recompiles could be contributing to the high CPU consumed by SQL Server. We would then need to look at the profiler trace to find the stored procedures that were being recompiled. The SQL Server Profiler trace gives us that information along with the reason for the recompilation. You can use the following events to get this information.
SP:Recompile / SQL:StmtRecompile
EventSubClass data column
It is more useful to monitor SQL:StmtRecompile, as this event class is fired when any type of batch, ad hoc query, stored procedure, or trigger is recompiled.

SELECT * FROM fn_trace_gettable('e:\recompiletrace.trc', 1)   -- load the saved trace file for analysis

Showplan XML For Query Compile

Using DMVs
When you use the sys.dm_exec_query_optimizer_info DMV, you can get a good idea of the time SQL Server spends optimizing.
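A minimal sketch of such a query, reading the cumulative optimization counters exposed by this DMV:
SELECT counter, occurrence, value
FROM sys.dm_exec_query_optimizer_info
WHERE counter IN ('optimizations', 'elapsed time')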

In particular, look at the elapsed time, which is the time elapsed due to optimizations. Since the elapsed time during optimization is generally close to the CPU time that is used for the optimization (since the optimization process is very CPU bound), you can get a good measure of the extent to which the compile time is contributing to the high CPU use.
Another DMV that is useful for capturing this information is sys.dm_exec_query_stats. The relevant columns are:
sql_handle
total_worker_time
plan_generation_num – indicates how many times the query has been recompiled.
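A sketch of a query over these columns, surfacing the most-recompiled statements (sys.dm_exec_sql_text resolves the handle to the query text):
SELECT TOP 25 st.text AS query_text, qs.plan_generation_num, qs.total_worker_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.plan_generation_num > 1
ORDER BY qs.plan_generation_num DESC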

Resolution
To avoid recompilations that are due to changes in statistics (for example, when the plan becomes suboptimal due to a change in the data statistics), specify the KEEPFIXED PLAN query hint.
Consider turning off the automatic updates of statistics for indexes.
Check to see if the stored procedure was created with the WITH RECOMPILE option or if the RECOMPILE query hint was used. If a procedure was created with the WITH RECOMPILE option, in SQL Server 2005 we may be able to take advantage of the statement-level RECOMPILE hint if a particular statement within that procedure needs to be recompiled.

2. Inefficient query plan
An inefficient query plan may cause increased CPU consumption.
A query against sys.dm_exec_query_stats is an efficient way to determine which query is using the most cumulative CPU; a sketch follows. You can also query sys.dm_exec_cached_plans, using filters for the various operators that may be CPU intensive.
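A minimal sketch (total_worker_time is reported in microseconds):
SELECT TOP 10 qs.total_worker_time, qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_worker_time,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC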

Resolution :

Tune the query with the Database Engine Tuning Advisor to see if it produces any index recommendations.
Run UPDATE STATISTICS on the tables involved in the query and check to see if the problem persists.

3. Intra-query parallelism
When generating an execution plan for a query, the SQL Server optimizer attempts to choose the plan that provides the fastest response time for that query. If the query’s cost exceeds the value specified in the cost threshold for parallelism option and parallelism has not been disabled, then the optimizer attempts to generate a plan that can be run in parallel. A parallel query plan uses multiple threads to process the query, with each thread distributed across the available CPUs and concurrently utilizing CPU time from each processor
Detection
Intra-query parallelism problems can be detected using the following methods.
System Monitor (Perfmon)
Look at the SQL Server:SQL Statistics – Batch Requests/sec counter and see “SQL Statistics Object” in SQL Server Books Online for more information.
Resolution
Any query that runs with a parallel plan is one that the optimizer believes is expensive enough that it would exceed the cost threshold of parallelism, which defaults to five (roughly 5-second execution time on a reference machine). Any queries identified through the methods above are candidates for further tuning.
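If tuning does not help, the threshold itself can be raised so that only genuinely expensive queries go parallel; the value 20 below is only an illustration, not a recommendation:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'cost threshold for parallelism', 20
RECONFIGURE
GO
For a single problem query, an OPTION (MAXDOP 1) hint limits parallelism for just that query instead of the whole instance.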
Poor Cursor Usage

Applications that open cursors and fetch one row (or a small number of rows) at a time can easily become bottlenecked by the network latency, especially on a wide area network (WAN)
System Monitor (Perfmon)
By looking at the SQL Server:Cursor Manager By Type – Cursor Requests/Sec counter, you can get a general feel for how many cursors are being used on the system. Systems that have high CPU utilization because of small fetch sizes typically have hundreds of cursor requests per second.

Use a trace that includes the RPC:Completed event class, and search for sp_cursorfetch statements. The value of the fourth parameter is the number of rows returned by the fetch. The maximum number of rows requested is specified as an input parameter in the corresponding RPC:Starting event class.


I/O Bottlenecks
PhysicalDisk Object: Avg. Disk Queue Length represents the average number of physical read and write requests that were queued on the selected physical disk during the sampling period. If your I/O system is overloaded, more read/write operations will be waiting. If your disk queue length frequently exceeds a value of 2 during peak usage of SQL Server, then you might have an I/O bottleneck.
Avg. Disk Sec/Read is the average time, in seconds, of a read of data from the disk. As a general guideline:
Less than 10 ms - very good
Between 10 - 20 ms - okay
Between 20 - 50 ms - slow, needs attention
Greater than 50 ms - serious I/O bottleneck
Avg. Disk Sec/Write is the average time, in seconds, of a write of data to the disk.
Physical Disk: %Disk Time is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests. A general guideline is that if this value is greater than 50 percent, it represents an I/O bottleneck.
The sys.dm_io_pending_io_requests DMV can be used to find the pending I/O requests.
Check the memory configuration of SQL Server. If SQL Server has been configured with insufficient memory, it will incur more I/O overhead. You can examine the following counters to identify memory pressure:
Buffer Cache hit ratio
Page Life Expectancy
Checkpoint pages/sec
Lazywrites/sec
Increase I/O bandwidth.
Add more physical drives to the current disk arrays and/or replace your current disks with faster drives. This helps to boost both read and write access times. But don't add more drives to the array than your I/O controller can support.
Add faster or additional I/O controllers. Consider adding more cache (if possible) to your current controllers.
Examine execution plans and see which plans lead to more I/O being consumed. It is possible that a better plan (for example, a better index) can minimize I/O. If there are missing indexes, you may want to run the Database Engine Tuning Advisor to find them.

Memory Issues

Memory pressure can be external or internal, and physical or virtual:

Physical – External: physical RAM is low; SQL Server detects this and reduces the commit charge of the buffer pool.
Physical – Internal: SQL Server responds to external memory pressure, to changed memory settings, or to changes in its internal memory distribution.
Virtual – External: the system is running low on page file space.
Virtual – Internal: SQL Server is running low on VAS due to fragmentation or allocations.
External physical memory pressure

Open Task Manager in Performance view and check the Physical Memory section, Available value. If the available memory amount is low, external memory pressure may be present. The exact value depends on many factors, however you can start looking into this when the value drops below 50-100 MB. External memory pressure is clearly present when this amount is less than 10 MB.
Memory: Available Bytes counter in System Monitor.

If external memory pressure exists and you are seeing memory-related errors, you will need to identify major consumers of the physical memory on the system. To do this, look at  Process: Working Set performance counters or the Mem Usage column on the Processes tab of Task Manager and identify the largest consumers.


You can query the sys.dm_os_memory_clerks DMV, as sketched below, to find out how much memory SQL Server has allocated through the AWE mechanism.
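A minimal sketch, using the awe_allocated_kb column this DMV exposes in SQL Server 2005:
SELECT SUM(awe_allocated_kb) / 1024 AS awe_allocated_mb
FROM sys.dm_os_memory_clerks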

External virtual memory pressure
Check the Commit Charge section. If Total is close to the Limit, then there exists the potential that page file space may be running low.
Paging File: %Usage, Paging File: %Usage Peak.
If Paging File: %Usage Peak (or Peak Commit Charge) is high, check the System Event Log for events indicating page file growth or notifications of “running low on virtual memory”.

Internal physical memory pressure
To determine the amount of memory that belongs to the buffer pool, we can take a look at the DBCC MEMORYSTATUS output. In the Buffer Counts section, look for the Target value

System Monitor (Perfmon)
You can also check the following counters for signs of memory pressure (see SQL Server Books Online for detailed description):
SQL Server: Buffer Manager object
Low Buffer cache hit ratio
Low Page life expectancy
High number of Checkpoint pages/sec
High number of Lazy writes/sec

Internal virtual memory pressure
VAS consumption can be tracked by using the sys.dm_os_virtual_address_dump DMV. A VAS summary can be queried as sketched below.
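A simplified sketch that groups regions by state (the column names are those of the SQL Server 2005 DMV; the full VAS summary view in the troubleshooting whitepaper is more elaborate):
SELECT region_state,
    COUNT(*) AS region_count,
    SUM(region_size_in_bytes) / 1048576 AS total_mb,
    MAX(region_size_in_bytes) / 1048576 AS largest_region_mb
FROM sys.dm_os_virtual_address_dump
GROUP BY region_state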

The following list outlines general steps that will help you troubleshoot memory errors.
1. Verify if the server is operating under external memory pressure. If external pressure is present, try resolving it first, and then see if the problem/errors still exist.
2. Start collecting performance monitor counters for SQL Server: Buffer Manager, SQL Server: Memory Manager.
3. Verify the memory configuration parameters (sp_configure), min memory per query, min/max server memory, awe enabled, and the Lock Pages in Memory privilege. Watch for unusual settings. Correct them as necessary. Account for increased memory requirements for SQL Server 2005.
4. Check for any nondefault sp_configure parameters that might indirectly affect the server.
5. Check for internal memory pressures.
6. Observe DBCC MEMORYSTATUS output and the way it changes when you see memory error messages.
7. Check the workload (number of concurrent sessions, currently executing queries).

DBCC FREESYSTEMCACHE - releases all unused cache entries from all caches.
DBCC FREESESSIONCACHE - flushes the distributed query connection cache.
DBCC FREEPROCCACHE - removes all elements from the plan cache; a specific plan can be removed by specifying a plan handle or SQL handle.
701 - There is insufficient system memory to run this query.
802 - There is insufficient memory available in the buffer pool.
8628 - A time out occurred while waiting to optimize the query. Rerun the query.
8645 - A time out occurred while waiting for memory resources to execute the query. Rerun the query.

Slow running queries

The cause of the blocking can be poor application design, bad query plans, a lack of useful indexes, or a SQL Server instance that is improperly configured for the workload.

Blocking

Use sys.dm_os_wait_stats for overall, cumulative waits for the SQL Server instance, and the session-specific sys.dm_os_waiting_tasks, which breaks down waits by session.
To find sessions that have been granted locks or are waiting for locks, you can use the sys.dm_tran_locks DMV.
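A minimal sketch that lists blocked sessions and their blockers:
SELECT wt.session_id, wt.blocking_session_id,
    wt.wait_type, wt.wait_duration_ms, wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.blocking_session_id IS NOT NULL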

sp_block.

For example, a 200-second blocked process threshold can be configured in SQL Server Management Studio as follows:
1. Execute sp_configure 'blocked process threshold', 200
2. RECONFIGURE WITH OVERRIDE
Once the blocked process threshold is established, the next step is to capture the trace event. Blocking events that exceed the user-configured threshold can be captured with SQL Trace or Profiler.
3. If using SQL Trace, use sp_trace_setevent and event_id = 137.
4. If using SQL Server Profiler, select the Blocked Process Report event class (under the Errors and Warnings object). See Figure 2.

Explain about Transaction Log architecture
Transaction Log Logical Architecture

The SQL Server transaction log operates logically as if the transaction log is a string of log records. Each log record is identified by a log sequence number (LSN). Each new log record is written to the logical end of the log with an LSN that is higher than the LSN of the record before it.
Log records are stored in a serial sequence as they are created. Each log record contains the ID of the transaction that it belongs to. For each transaction, all log records associated with the transaction are individually linked in a chain using backward pointers that speed the rollback of the transaction.
Log records for data modifications record either the logical operation performed or they record the before and after images of the modified data. The before image is a copy of the data before the operation is performed; the after image is a copy of the data after the operation has been performed.
The steps to recover an operation depend on the type of log record:
Logical operation logged
To roll the logical operation forward, the operation is performed again.
To roll the logical operation back, the reverse logical operation is performed.
Before and after image logged
To roll the operation forward, the after image is applied.
To roll the operation back, the before image is applied.
Many types of operations are recorded in the transaction log. These operations include:
The start and end of each transaction.
Every data modification (insert, update, or delete). This includes changes by system stored procedures or data definition language (DDL) statements to any table, including system tables.
Every extent and page allocation or deallocation.
Creating or dropping a table or index.
Rollback operations are also logged. Each transaction reserves space on the transaction log to make sure that enough log space exists to support a rollback that is caused by either an explicit rollback statement or if an error is encountered. The amount of space reserved depends on the operations performed in the transaction, but generally is equal to the amount of space used to log each operation. This reserved space is freed when the transaction is completed.
The section of the log file from the first log record that must be present for a successful database-wide rollback to the last-written log record is called the active part of the log, or the active log. This is the section of the log required to do a full recovery of the database. No part of the active log can ever be truncated.
Transaction Log Physical Architecture
The transaction log is used to guarantee the data integrity of the database and for data recovery. The topics in this section provide the information about the physical architecture of the transaction log. Understanding the physical architecture can improve your effectiveness in managing transaction logs.
The transaction log in a database maps over one or more physical files. Conceptually, the log file is a string of log records. Physically, the sequence of log records is stored efficiently in the set of physical files that implement the transaction log.
The SQL Server Database Engine divides each physical log file internally into a number of virtual log files. Virtual log files have no fixed size, and there is no fixed number of virtual log files for a physical log file. The Database Engine chooses the size of the virtual log files dynamically while it is creating or extending log files. The Database Engine tries to maintain a small number of virtual files. The size of the virtual files after a log file has been extended is the sum of the size of the existing log and the size of the new file increment. The size or number of virtual log files cannot be configured or set by administrators.
The only time virtual log files affect system performance is if the log files are defined by small size and growth_increment values. If these log files grow to a large size because of many small increments, they will have lots of virtual log files. This can slow down database startup and also log backup and restore operations. We recommend that you assign log files a size value close to the final size required, and also have a relatively large growth_increment value.
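The VLF count of a database can be checked with the undocumented (but widely used) DBCC LOGINFO command, which returns one row per virtual log file; the database name here is illustrative:
USE Northwind
DBCC LOGINFO   -- row count = number of virtual log files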
The transaction log is a wrap-around file. For example, consider a database with one physical log file divided into four virtual log files. When the database is created, the logical log file begins at the start of the physical log file. New log records are added at the end of the logical log and expand toward the end of the physical log. Log truncation frees any virtual logs whose records all appear in front of the minimum recovery log sequence number (MinLSN). The MinLSN is the log sequence number of the oldest log record that is required for a successful database-wide rollback. The transaction log in the example database would look similar to the one in the following illustration.

When the end of the logical log reaches the end of the physical log file, the new log records wrap around to the start of the physical log file.

This cycle repeats endlessly, as long as the end of the logical log never reaches the beginning of the logical log. If the old log records are truncated frequently enough to always leave sufficient room for all the new log records created through the next checkpoint, the log never fills. However, if the end of the logical log does reach the start of the logical log, one of two things occurs:
If the FILEGROWTH setting is enabled for the log and space is available on the disk, the file is extended by the amount specified in growth_increment and the new log records are added to the extension. For more information about the FILEGROWTH setting, see ALTER DATABASE (Transact-SQL).
If the FILEGROWTH setting is not enabled, or the disk that is holding the log file has less free space than the amount specified in growth_increment, a 9002 error is generated.
If the log contains multiple physical log files, the logical log will move through all the physical log files before it wraps back to the start of the first physical log file.
Checkpoints and the Active Portion of the Log
Checkpoints flush dirty data pages from the buffer cache of the current database to disk. This minimizes the active portion of the log that must be processed during a full recovery of a database. During a full recovery, the following types of actions are performed:
The log records of modifications not flushed to disk before the system stopped are rolled forward.
All modifications associated with incomplete transactions, such as transactions for which there is no COMMIT or ROLLBACK log record, are rolled back.
Checkpoint Operation

A checkpoint performs the following processes in the database:
Writes a record to the log file, marking the start of the checkpoint.
Stores information recorded for the checkpoint in a chain of checkpoint log records.
One piece of information recorded in the checkpoint is the log sequence number (LSN) of the first log record that must be present for a successful database-wide rollback. This LSN is called the Minimum Recovery LSN (MinLSN). The MinLSN is the minimum of the:
LSN of the start of the checkpoint.
LSN of the start of the oldest active transaction.
LSN of the start of the oldest replication transaction that has not yet been delivered to the distribution database.
The checkpoint records also contain a list of all the active transactions that have modified the database.
If the database uses the simple recovery model, marks for reuse the space that precedes the MinLSN.
Writes all dirty log and data pages to disk.
Writes a record marking the end of the checkpoint to the log file.
Writes the LSN of the start of this chain to the database boot page.
Activities That Cause a Checkpoint
Checkpoints occur in the following situations:
A CHECKPOINT statement is explicitly executed. A checkpoint occurs in the current database for the connection.
A minimally logged operation is performed in the database; for example, a bulk-copy operation is performed on a database that is using the Bulk-Logged recovery model.
Database files have been added or removed by using ALTER DATABASE.
An instance of SQL Server is stopped by a SHUTDOWN statement or by stopping the SQL Server (MSSQLSERVER) service. Either action causes a checkpoint in each database in the instance of SQL Server.
An instance of SQL Server periodically generates automatic checkpoints in each database to reduce the time that the instance would take to recover the database.
A database backup is taken.
An activity requiring a database shutdown is performed. For example, AUTO_CLOSE is ON and the last user connection to the database is closed, or a database option change is made that requires a restart of the database.
Automatic Checkpoints

The SQL Server Database Engine generates automatic checkpoints. The interval between automatic checkpoints is based on the amount of log space used and the time elapsed since the last checkpoint. The time interval between automatic checkpoints can be highly variable and long, if few modifications are made in the database. Automatic checkpoints can also occur frequently if lots of data is modified.
Use the recovery interval server configuration option to calculate the interval between automatic checkpoints for all the databases on a server instance. This option specifies the maximum time the Database Engine should use to recover a database during a system restart. The Database Engine estimates how many log records it can process in the recovery interval during a recovery operation.
The interval between automatic checkpoints also depends on the recovery model:
If the database is using either the full or bulk-logged recovery model, an automatic checkpoint is generated whenever the number of log records reaches the number the Database Engine estimates it can process during the time specified in the recovery interval option.
If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:
The log becomes 70 percent full.
The number of log records reaches the number the Database Engine estimates it can process during the time specified in the recovery interval option.
For information about setting the recovery interval, see How to: Set the Recovery Interval (SQL Server Management Studio).
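The recovery interval can also be set in T-SQL; the 5-minute value below is only an illustration:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'recovery interval', 5
RECONFIGURE
GO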
Automatic checkpoints truncate the unused section of the transaction log if the database is using the simple recovery model. However, if the database is using the full or bulk-logged recovery models, the log is not truncated by automatic checkpoints. For more information, see Transaction Log Truncation.
The CHECKPOINT statement now provides an optional checkpoint_duration argument that specifies the requested period of time, in seconds, for checkpoints to finish. For more information, see CHECKPOINT (Transact-SQL).
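For example, the recovery interval option can be changed with sp_configure, and a manual checkpoint can be given a target duration (the values here are illustrative):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval', 5;  -- target recovery time of about 5 minutes
RECONFIGURE;

-- Request that this manual checkpoint complete within roughly 10 seconds
CHECKPOINT 10;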
Active Log

The section of the log file from the MinLSN to the last-written log record is called the active portion of the log, or the active log. This is the section of the log required to do a full recovery of the database. No part of the active log can ever be truncated; log truncation can only remove the parts of the log that precede the MinLSN.
The following illustration shows a simplified version of the end of a transaction log with two active transactions. Checkpoint records have been compacted to a single record.

LSN 148 is the last record in the transaction log. At the time that the recorded checkpoint at LSN 147 was processed, Tran 1 had been committed and Tran 2 was the only active transaction. That makes the first log record for Tran 2 the oldest log record for a transaction active at the time of the last checkpoint. This makes LSN 142, the Begin transaction record for Tran 2, the MinLSN.
Long-Running Transactions

The active log must include every part of all uncommitted transactions. An application that starts a transaction and does not commit it or roll it back prevents the Database Engine from advancing the MinLSN. This can cause two types of problems:
If the system is shut down after the transaction has performed many uncommitted modifications, the recovery phase of the subsequent restart can take much longer than the time specified in the recovery interval option.
The log might grow very large, because the log cannot be truncated past the MinLSN. This occurs even if the database is using the simple recovery model, in which the transaction log is generally truncated on each automatic checkpoint.
Replication Transactions

The Log Reader Agent monitors the transaction log of each database configured for transactional replication, and it copies the transactions marked for replication from the transaction log into the distribution database. The active log must contain all transactions that are marked for replication, but that have not yet been delivered to the distribution database. If these transactions are not replicated in a timely manner, they can prevent the truncation of the log. For more information, see How Transactional Replication Works.
Write-Ahead Transaction Log
This topic describes the role of the write-ahead transaction log in recording data modifications to disk. For basic information about transaction logs, see Transaction Log Overview.
SQL Server uses a write-ahead log (WAL), which guarantees that no data modifications are written to disk before the associated log record is written to disk. This maintains the ACID properties for a transaction. For more information about transactions and ACID properties, see Transactions (Database Engine).
To understand how the write-ahead log works, it is important for you to know how modified data is written to disk. SQL Server maintains a buffer cache into which it reads data pages when data must be retrieved. Data modifications are not made directly to disk, but are made to the copy of the page in the buffer cache. The modification is not written to disk until a checkpoint occurs in the database, or the modification must be written to disk so the buffer can be used to hold a new page. Writing a modified data page from the buffer cache to disk is called flushing the page. A page modified in the cache, but not yet written to disk, is called a dirty page.
At the time a modification is made to a page in the buffer, a log record is built in the log cache that records the modification. This log record must be written to disk before the associated dirty page is flushed from the buffer cache to disk. If the dirty page is flushed before the log record is written, the dirty page creates a modification on the disk that cannot be rolled back if the server fails before the log record is written to disk. SQL Server has logic that prevents a dirty page from being flushed before the associated log record is written. Log records are written to disk when the transactions are committed.
How to rebuild Master database

How to rebuild master database in SQL 2000

1. Shutdown Microsoft® SQL Server™ 2000, and then run Rebuildm.exe. This is located in the Program Files\Microsoft SQL Server\80\Tools\Binn directory.
2. In the Rebuild Master dialog box, click Browse.
3. In the Browse for Folder dialog box, select the \Data folder on the SQL Server 2000 compact disc or in the shared network directory from which SQL Server 2000 was installed, and then click OK.
4. Click Settings. In the Collation Settings dialog box, verify or change settings used for the master database and all other databases.
Initially, the default collation settings are shown, but these may not match the collation selected during setup. You can select the same settings used during setup or select new collation settings. When done, click OK.
5. In the Rebuild Master dialog box, click Rebuild to start the process.
The Rebuild Master utility reinstalls the master database.
Note  To continue, you may need to stop a server that is running.

How to rebuild master database in SQL 2005

When system databases are rebuilt, all database objects and data in master, model and msdb system databases are removed. Rebuilding the master database installs all system databases to their initial location.

To rebuild the system databases, you need to run the setup command from the command prompt, using the following procedure:

1. Click Start, click Run, type cmd, and then click OK.
2. Run the following command to rebuild the system databases:

start /wait <CD or DVD Drive>\setup.exe /qn INSTANCENAME=Instance_Name REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD=StrongPassword
E.g.: start /wait C:\Prasad\Softwares\sql2005\sql2005\Servers\setup.exe /qn INSTANCENAME=SANKARPRASAD\test1 REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD=P@ssw0rd


In SQL 2008
setup /ACTION=REBUILDDATABASE /QUIET /INSTANCENAME=<instance name> /SQLSYSADMINACCOUNTS=<accounts>
where
<instance name> is either the name of your named instance or MSSQLSERVER for the default instance
<accounts> are Windows groups or individual accounts to provision as sysadmin
If you have SQL configured for Mixed Authentication Mode use the same syntax except you must also provide the /SAPWD parameter to specify the SA password. If you don't, you will get an error.
You can go to maintenance in the setup.exe and run repair also.


How to rename a database?
sp_renamedb 'oldname', 'newname'

How to rename a table
sp_rename 'table_first', 'table_last'
GO

What is sp_configure command
Use sp_configure to display or change server-level settings. To change database-level settings, use ALTER DATABASE.
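For example (the option value and database name are illustrative):

-- Server-level setting via sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;

-- Database-level setting via ALTER DATABASE
ALTER DATABASE AdventureWorks SET RECOVERY SIMPLE;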

What is the difference between a CHAR and VARCHAR datatypes?
VARCHARs are variable-length strings with a specified maximum length. If a string is shorter than the maximum length, it is stored verbatim without any extra characters, e.g. names and e-mail addresses. CHARs are fixed-length strings with a specified set length. If a string is shorter than the set length, it is padded with trailing spaces, e.g. phone numbers and ZIP codes.
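A quick way to see the difference is to compare storage sizes with DATALENGTH (a small sketch; the inline variable initialization requires SQL Server 2008):

DECLARE @c CHAR(10) = 'abc', @v VARCHAR(10) = 'abc';
SELECT DATALENGTH(@c) AS char_bytes,    -- 10: padded with trailing spaces
       DATALENGTH(@v) AS varchar_bytes; -- 3: stored as entered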

How do you find the list of fixed hard drives and free space on the server?
We can use the following stored procedure to figure out the number of fixed drives a system has along with the free space on each of those drives.

EXEC master.dbo.xp_fixeddrives

How to optimize stored procedures?
1. Include the SET NOCOUNT ON statement
2. Use the schema name with the object name
3. Do not prefix stored procedure names with sp_
4. Try to avoid cursors
5. Use TRY/CATCH for error handling
6. Optimize queries and fine-tune indexes
7. Use variables and temp tables appropriately
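A minimal template illustrating several of these points (object and parameter names are hypothetical):

CREATE PROCEDURE dbo.usp_GetOrdersByCustomer  -- schema-qualified, no sp_ prefix
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;  -- suppress "rows affected" messages to cut network chatter
    BEGIN TRY
        SELECT OrderID, OrderDate
        FROM dbo.Orders                       -- schema name with object name
        WHERE CustomerID = @CustomerID;
    END TRY
    BEGIN CATCH
        SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
    END CATCH
END
GO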


How to delete duplicate rows?

We can delete duplicate rows using a CTE with the ROW_NUMBER() function in SQL Server 2005 and 2008.
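A common pattern looks like this (table and column names are hypothetical; the PARTITION BY list defines what counts as a duplicate):

;WITH Dedup AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY FirstName, LastName, Email
                              ORDER BY CustomerID) AS rn
    FROM dbo.Customers
)
DELETE FROM Dedup
WHERE rn > 1;  -- keeps the row with the lowest CustomerID in each duplicate group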

What are aggregate functions?
Aggregate functions perform a calculation on a set of values and return a single value. Aggregate functions ignore NULL values (with the exception of COUNT(*)). Some of the aggregate functions are AVG, SUM, MIN, MAX, and COUNT.
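For example (table and column names are hypothetical):

SELECT COUNT(*)     AS total_rows,      -- counts every row, including NULLs
       COUNT(Bonus) AS rows_with_bonus, -- NULLs in Bonus are ignored
       AVG(Salary)  AS avg_salary,
       MIN(Salary)  AS min_salary,
       MAX(Salary)  AS max_salary,
       SUM(Salary)  AS total_salary
FROM dbo.Employees;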

What is the use of @@SPID in SQL Server?
@@SPID returns the server process ID (session ID) of the current connection. You can use it to identify the current session in the output of tools such as sp_who and the session-related DMVs.
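For example:

SELECT @@SPID AS current_session_id;

-- Inspect the current session via the sessions DMV (SQL Server 2005 and later)
SELECT session_id, login_name, status, last_request_end_time
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;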

What is the difference between an index seek and an index scan

A scan is the scenario where we look at every item before selecting. This is not always bad, especially with small lists you intend to analyze thoroughly. An index scan means that SQL Server reads all the rows in the leaf level of the index and then returns only the rows that satisfy the search criteria. This essentially means that all the rows of the index are examined instead of the table directly; it is sometimes compared to a table scan, in which all the table data is read directly.
An index seek touches only the rows that satisfy the query condition and the pages that contain those qualifying rows. In terms of performance, this is highly beneficial when a table has a very large number of rows and the query returns a very small percentage of them.

How do you recompile a stored procedure at run time?
We can use a RECOMPILE hint with a query and recompile only that particular query. However, if the parameters are used in many statements in the stored procedure and we want to recompile all the statements, then instead of using the RECOMPILE option with a single query, a better option is to use the WITH RECOMPILE clause during stored procedure creation or execution.
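The options look like this (procedure and table names are hypothetical):

-- Statement-level recompile (SQL Server 2005 and later)
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42
OPTION (RECOMPILE);

-- Recompile the whole procedure on every execution
CREATE PROCEDURE dbo.usp_GetOrders
    @CustomerID INT
WITH RECOMPILE
AS
    SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID;
GO

-- Recompile for a single execution only
EXEC dbo.usp_GetOrders @CustomerID = 42 WITH RECOMPILE;

-- Mark an existing procedure so it recompiles on its next run
EXEC sp_recompile 'dbo.usp_GetOrders';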
SQL Profiler
SQL Server Profiler provides a graphical interface to the SQL Trace API that enables you to capture data for events within SQL Server. You can then analyze the trace data to determine causes of performance degradation, blocking, deadlocking, or other error events. Inside the database engine, SQL Server provides an event subsystem called SQL Trace that is based on an external application programming interface (API). This external API enables you to call SQL Trace by using a variety of parameters that define the events and columns of data to capture. The SQL Trace subsystem also enables you to specify optional filters on the data being captured so that you can focus your analysis.

We can store the information collected using Profiler in a table or a file; storing it to a file is preferred.

Selecting the trace events and data columns to capture is the most important step in defining a trace.

You should specify the SP:StmtCompleted or SP:StmtStarting event only after you have narrowed the focus of your trace. These events capture every statement executed within a stored procedure. On high-volume systems, capturing every statement can quickly generate extremely large trace logs.

To save the trace data to a table, you would first have to open each file and save each to a separate table. After you save each file as a separate table, you would have to manually combine all 15 tables into a single table for analysis.

fn_trace_gettable is a built-in function that returns the contents of a trace file in tabular format. There is even an option to iteratively walk through all of the rollover files. Therefore, by creating a statement that performs a SELECT * INTO <table> FROM fn_trace_gettable('<filename>') operation, you can have all 15 trace files loaded into a single table.
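For example (the path and table name are illustrative; DEFAULT tells the function to read all rollover files):

SELECT *
INTO dbo.TraceResults
FROM sys.fn_trace_gettable('C:\Traces\MyTrace.trc', DEFAULT);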

One of the main causes of not being able to reproduce the problem on a test system is having a different query plan generated on the test system.

You can gather query plan information, called showplans, in a trace. A showplan provides a record of the query that was executed along with the plan that was generated for that execution. By capturing the showplan, you can compare the plans generated for multiple permutations of the same query to determine whether the plan changes over time. You can also use this information to analyze where the performance issues are occurring within the query.
To include showplan information in a trace, you would look under the Performance event class, which provides a variety of options.

SQL Server provides a special type of trace called a replay trace, which enables you to capture a workload that you can then replay on a test system.

Commonly used Events

For deadlock troubleshooting

1.Lock

a) Lock:Deadlock graph – XML description of the deadlock
b) Lock:Deadlock – Indicates that two concurrent transactions deadlocked each other
c) Lock:Deadlock chain – Each of the events leading up to the deadlock

2.Objects:

a) Objects:Altered – Alter table, Alter index, etc.
b) Objects:Created – Create index, Create table, etc.
c) Objects:Deleted – Drop index, Drop table, etc.

3.Performance

a) Showplan All – Displays the query plan with full compile-time details.

4.Security Audit

a) Audit Backup/Restore event
b) Audit DBCC event
c) Audit Database Management event

5.Stored procedures

a) SP:Completed – Indicates when a stored procedure has completed
b) SP:Starting – Indicates when a stored procedure has started
c) SP:StmtCompleted – Indicates that a Transact-SQL statement within a stored procedure has completed
d) SP:StmtStarting – Indicates that a Transact-SQL statement within a stored procedure has started
e) SP:Recompile – Indicates that a stored procedure has recompiled

6.TSQL

a) SQL:StmtStarting – Indicates a Transact-SQL statement is starting
b) SQL:StmtCompleted – Indicates a Transact-SQL statement has completed
c) SQL:BatchStarting – Indicates a Transact-SQL batch is starting
d) SQL:BatchCompleted – Indicates a Transact-SQL batch has completed

DTA

The Database Engine Tuning Advisor (DTA) is the greatly enhanced replacement for the Index Tuning Wizard tool that shipped with previous versions of SQL Server. DTA plays an important role in an overall performance solution, letting you leverage the query optimizer to receive recommendations on indexes, indexed views, or partitions that could improve performance.

DTA applies the same concept, taking a workload file as an input and then exhaustively testing each query against all possible permutations of indexes, indexed views, and partitions to come up with the best possible solution.
DTA requires you to provide it with a workload that it can analyze. You can provide the workload in a variety of formats, including a trace file, a trace table, or a Transact-SQL script.
The most common workload used within DTA is a trace file. You can generate this trace by using SQL Server Profiler, which ships with a template designed to capture the data DTA needs to perform its analysis. To generate the trace file, launch Profiler, select the Tuning trace template, and save the results to a file. Alternatively, you can load the trace into a table that DTA uses to perform its analysis.

You also have to specify the databases and tables that you want to tune within the workload. DTA uses the database you specify for the workload analysis as the basis for making tuning decisions.
After a DTA analysis session is complete, you can save DTA’s recommendations from the Actions menu. When you save recommendations, DTA creates a script file that contains the Transact-SQL code required to implement all the recommendations.

What is the /3GB switch?
Operating systems based on Microsoft Windows NT technologies have always provided applications with a flat 32-bit virtual address space that describes 4 gigabytes (GB) of virtual memory. The address space is usually split so that 2 GB of address space is directly accessible to the application and the other 2 GB is only accessible to the OS. The /3GB switch allocates 3 GB of virtual address space to an application and 1 GB to Kernel mode.
What happens if a diff backup runs?
A differential backup is based on the most recent, previous full data backup. A differential backup captures only the data that has changed since that full backup.
Creating a differential backup can be very fast compared to creating a full backup. A differential backup records only the data that has changed since the full backup upon which it is based. This facilitates taking frequent data backups, which decreases the risk of data loss. However, before you restore a differential backup, you must restore its base. Therefore, restoring from a differential backup necessarily takes more steps and time than restoring from a full backup, because two backup files are required.
A differential backup captures the state of any extents (collections of eight physically contiguous pages) that have changed between when the differential base was created and when the differential backup is created. This means that the size of a given differential backup depends on the amount of data that has changed since the base. Generally, the older the base is, the larger a new differential backup will be. In a series of differential backups, a frequently updated extent is likely to contain different data in each differential backup.
The following illustration shows how a differential backup works. The figure shows 24 data extents, 6 of which have changed. The differential backup contains only these 6 data extents. The differential backup operation relies on a bitmap page that contains a bit for every extent. For each extent updated since the base, the bit is set to 1 in the bitmap.

The required syntax is:
BACKUP DATABASE database_name TO <backup_device> WITH DIFFERENTIAL
What is the minimum RAM required for SQL Server?
512 MB, but 1 GB is recommended.
How to prevent writing backup information entries in SQL Server error logs?
Enable trace flag 3226: DBCC TRACEON (3226, -1)
What are the different backup and restore commands in SQL Server?

FULL
BACKUP DATABASE <database name> TO DISK = '<directory>\<filename>' WITH INIT

The INIT parameter, which is the most common, tells SQL Server to overwrite anything in the backup device that might already exist before starting the backup operation.

DIFF
BACKUP DATABASE <database name> TO DISK = '<directory>\<filename>' WITH DIFFERENTIAL

Translog
BACKUP LOG <database name> TO DISK = '<directory>\<filename>' WITH INIT

File group

BACKUP DATABASE <database name> FILEGROUP = '<filegroup name>' TO DISK = '<directory>\<filename>'

Mirror to  example

BACKUP DATABASE PUBS TO DISK='C:\DEMO\BACKUP\PUBS1A.BAK', DISK='C:\DEMO\BACKUP\PUBS1B.BAK'
MIRROR TO DISK='\\BAKSERVER1\BACKUP\PUBSMIRROR1A.BAK', DISK='\\BAKSERVER1\BACKUP\PUBSMIRROR1B.BAK'
MIRROR TO DISK='\\BAKSERVER2\BACKUP\PUBSMIRROR2A.BAK', DISK='\\BAKSERVER2\BACKUP\PUBSMIRROR2B.BAK'
MIRROR TO DISK='\\BAKSERVER3\BACKUP\PUBSMIRROR3A.BAK', DISK='\\BAKSERVER3\BACKUP\PUBSMIRROR3B.BAK'

Restore

RESTORE DATABASE PUBS FROM DISK = 'C:\DEMO\BACKUP\PUBSFULL.BAK' WITH REPLACE, STANDBY = 'C:\DEMO\BACKUP\PUBSSTANDBY.STN'

This command uses the contents of the PUBSFULL.BAK file for the restore operation. The REPLACE option tells SQL Server to overwrite the existing database named PUBS. The STANDBY option leaves the database in a restoring state: writes are not allowed to occur in the database, but users can connect to the database and issue SELECT statements.
The other important clauses in any restore command are WITH RECOVERY and WITH NORECOVERY.

Restoring a Differential Backup

RESTORE DATABASE PUBS FROM DISK = 'C:\DEMO\BACKUP\PUBSFULL.BAK' WITH NORECOVERY
RESTORE DATABASE PUBS FROM DISK = 'C:\DEMO\BACKUP\PUBSDIFF.BAK' WITH RECOVERY

Restoring a Transaction Log Backup

--Full
RESTORE DATABASE AdventureWorks FILEGROUP = 'FG1' FROM DISK = 'C:\TEST\AWFG1.BAK' WITH NORECOVERY
--Differential
RESTORE DATABASE AdventureWorks FROM DISK = 'C:\TEST\FG1DIFF1.BAK' WITH NORECOVERY
--Transaction log
RESTORE LOG AdventureWorks FROM DISK = 'C:\TEST\AW2.TRN' WITH RECOVERY

Recovering to a point in time following a disaster
Recovering databases without any data loss would be much easier if problems always occurred just after you completed a backup and before your application issued any additional transactions. Alas, we are never that lucky. So in any disaster scenario, you always have transactions in the log that have not yet been backed up.
For this reason, your first step in any recovery operation is to issue one final BACKUP LOG command. This process captures all remaining committed transactions that have not been backed up and is commonly referred to as backing up the tail of the log. Because you can issue a BACKUP LOG command against a database even if every data file, including the primary data file, is no longer available, the only excuse for not backing up the tail of the log is when the transaction log itself no longer exists. The backup of the tail of the log then becomes the final transaction log backup that you restore, bringing the database up to the point of failure.
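A tail-log backup under these circumstances might look like this (database name and path are illustrative; NO_TRUNCATE allows the backup even when the data files are unavailable):

BACKUP LOG AdventureWorks
TO DISK = 'C:\Backups\AW_tail.trn'
WITH NO_TRUNCATE;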

Restore a corrupted page
USE MASTER
GO
RESTORE DATABASE PUBS PAGE = '1:88'
FROM DISK = 'C:\HA\DEMO\BACKUP\PUBSMIRROR1.BAK'
WITH RECOVERY

Restore with media errors
By using the WITH CONTINUE_AFTER_ERROR option, damaged media sectors are skipped, and any readable parts of the media will be restored.

What is guest user account?
The guest user account allows a login without a user account to access a database. A login assumes the identity of the guest user when both of the following conditions are met:
The login has access to an instance of Microsoft® SQL Server™ but does not have access to the database through his or her own user account.
The database contains a guest user account.
Permissions can be applied to the guest user as if it were any other user account. The guest user can be deleted and added to all databases except master and tempdb, where it must always exist. By default, a guest user account does not exist in newly created databases.
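In SQL Server 2005 and later, the guest user is enabled or disabled by granting or revoking CONNECT (the database name is illustrative):

USE AdventureWorks;
GO
GRANT CONNECT TO guest;     -- enable the guest user in this database
REVOKE CONNECT FROM guest;  -- disable it again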
Pages and Extents

The fundamental unit of data storage in SQL Server is the page. The disk space allocated to a data file (.mdf or .ndf) in a database is logically divided into pages numbered contiguously from 0 to n. Disk I/O operations are performed at the page level. That is, SQL Server reads or writes whole data pages.
Extents are a collection of eight physically contiguous pages and are used to efficiently manage the pages. All pages are stored in extents.

Page type – Contents

Data – Data rows with all data, except text, ntext, image, nvarchar(max), varchar(max), varbinary(max), and xml data, when text in row is set to ON.
Index – Index entries.
Text/Image – Large object data types (text, ntext, image, nvarchar(max), varchar(max), varbinary(max), and xml data); variable-length columns when the data row exceeds 8 KB (varchar, nvarchar, varbinary, and sql_variant).
Global Allocation Map, Shared Global Allocation Map – Information about whether extents are allocated.
Page Free Space – Information about page allocation and free space available on pages.
Index Allocation Map – Information about extents used by a table or index per allocation unit.
Bulk Changed Map – Information about extents modified by bulk operations since the last BACKUP LOG statement per allocation unit.
Differential Changed Map – Information about extents that have changed since the last BACKUP DATABASE statement per allocation unit.


Extents
Extents are the basic unit in which space is managed. An extent is eight physically contiguous pages, or 64 KB. This means SQL Server databases have 16 extents per megabyte.
To make its space allocation efficient, SQL Server does not allocate whole extents to tables with small amounts of data. SQL Server has two types of extents:
Uniform extents are owned by a single object; all eight pages in the extent can only be used by the owning object.
Mixed extents are shared by up to eight objects. Each of the eight pages in the extent can be owned by a different object.
A new table or index is generally allocated pages from mixed extents. When the table or index grows to the point that it has eight pages, it then switches to use uniform extents for subsequent allocations. If you create an index on an existing table that has enough rows to generate eight pages in the index, all allocations to the index are in uniform extents.


What is a page header?
Each page begins with a 96-byte header that is used to store system information about the page. This information includes the page number, page type, the amount of free space on the page, and the allocation unit ID of the object that owns the page.

What is a boot page?

Every database has a single page that stores critical information about the database itself. It's always page 9 in file 1 (the first file in the PRIMARY filegroup). You can examine the page using DBCC PAGE and it will interpret all the fields for you, but there's another command, DBCC DBINFO, that also dumps all this info. If the boot page is corrupt, you can't run DBCC CHECKDB so you can't possibly run repair, and you can't put the database into EMERGENCY mode so you can't extract data into a new database. It means that there's NO WAY to recover from a corrupt boot page EXCEPT to restore from backups.
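For example, assuming a database named pubs (DBCC PAGE and DBCC DBINFO are undocumented, so treat the output format as subject to change):

DBCC TRACEON (3604);          -- send DBCC output to the client instead of the error log
DBCC PAGE ('pubs', 1, 9, 3);  -- file 1, page 9 (the boot page), full detail
DBCC DBINFO ('pubs');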

How to rename an instance?

We cannot rename a named instance. We can rename a default instance as follows:
sp_dropserver "oldservername"
sp_addserver "newservername" , local

The sp_dropserver / sp_addserver method works only for the DEFAULT instance. On the default instance, using sp_dropserver / sp_addserver changes the server name reported by @@SERVERNAME. You would also have to change your machine name if you wanted remote clients to be able to connect to that instance using the new name.

What are the default logins present after an instance is installed?

2005 – BUILTIN\Administrators, sa, NT AUTHORITY\SYSTEM
When an update statistics job is running and blocking occurs, how do you resolve it?

UPDATE STATISTICS with FULLSCAN does not cause blocking. It is an online operation, i.e. the table remains available for reads while statistics are updated.

The X lock that is being held by the statistics operation is on a statistics resource – a resource that only statistics operations care about. It exists to limit the number of statistics operations in progress on a single table to one at a time. Auto-statistics operations do not wait for it but just abort if they cannot get it; manual statistics operations do wait for the resource.
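For reference, a manual full-scan statistics update looks like this (table name hypothetical):

UPDATE STATISTICS dbo.Orders WITH FULLSCAN;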

LDF got deleted what to do?
USE master;
GO
EXEC sp_detach_db @dbname = 'test1';
EXEC sp_attach_single_file_db @dbname = 'test1',
    @physname = N'C:\Prasad\MSSQL\Test1\MSSQL.1\MSSQL\Data\test1.mdf';
How to find out missing and unused indexes?

sys.dm_db_missing_index_group_stats
Returns summary information about missing index groups, for example, the performance improvements that could be gained by implementing a specific group of missing indexes.
sys.dm_db_missing_index_groups
Returns information about a specific group of missing indexes, such as the group identifier and the identifiers of all missing indexes that are contained in that group.
sys.dm_db_missing_index_details
Returns detailed information about a missing index; for example, it returns the name and identifier of the table where the index is missing, and the columns and column types that should make up the missing index.
sys.dm_db_missing_index_columns
Returns information about the database table columns that are missing an index.
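A sketch joining these DMVs to rank missing indexes, plus a query for indexes that are maintained but never read (interpret the output with care; these are suggestions, not commands):

-- Missing indexes, ranked by a rough estimate of benefit
SELECT d.statement AS table_name,
       d.equality_columns, d.inequality_columns, d.included_columns,
       s.user_seeks, s.avg_total_user_cost, s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
  ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
  ON g.index_group_handle = s.group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;

-- Indexes in the current database that are written to but never read
SELECT OBJECT_NAME(u.object_id) AS table_name, i.name AS index_name,
       u.user_seeks, u.user_scans, u.user_lookups, u.user_updates
FROM sys.dm_db_index_usage_stats AS u
JOIN sys.indexes AS i
  ON u.object_id = i.object_id AND u.index_id = i.index_id
WHERE u.database_id = DB_ID()
  AND u.user_seeks + u.user_scans + u.user_lookups = 0;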

How to move system databases?

Tempdb

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev',
FILENAME = 'E:\Microsoft SQL Server\MSQL\Data\tempdev.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = 'templog',
FILENAME = 'E:\Microsoft SQL Server\MSQL\Log\templog.ldf');

MSDB

1.Open SQL Server Configuration Manager
2.Stop the SQL Server Service and make sure SQL Server Agent is stopped as well
3. Open the properties of SQL Server, and in the Advanced tab add the parameters -c;-m;-T3608 to the startup parameter values, then click OK. This makes SQL Server start in single-user mode.
4. Restart the SQL Server service and open SQL Server Management Studio without connecting to any server. Then click New Query and run:
use master
go
sp_detach_db 'msdb'
go

5. Move msdbdata.mdf and msdblog.ldf from drive C (usually in C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data) to another drive
6.Delete the added parameter and restart service of SQL Server and SQL Server Agent
7.Attach MSDB database using query:
use master
go
sp_attach_db 'msdb', 'D:\Database Files\msdbdata.mdf', 'D:\Database Files\msdblog.ldf'
go

Master

Open SQL Server Configuration Manager from Program Files. In the SQL Server Services node, right-click on SQL Server (i.e. MSSQLSERVER), choose Properties, go to the Advanced tab, and edit the Startup Parameters values to point to the planned location for the master database data and log files, then click OK.

The parameter value for the data file must follow the -d parameter and the value for the log file must follow the -l parameter. The following example shows the parameter values for the default location of the master data and log files.

-dC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\LOG\ERRORLOG;-lC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\mastlog.ldf


If the planned relocation for the master data and log files is E:\SQLData, the parameter values would be changed as follows:

-dE:\SQLData\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\LOG\ERRORLOG;-lE:\SQLData\mastlog.ldf

Stop the instance (right-click and choose Stop), move master.mdf and mastlog.ldf to the new location, and restart the instance.

Verify the file change for the master database by running the following query.

SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('master');
GO




New Features in SQL Server 2012

1. SQL Server multi-subnet clustering: You can now configure a SQL Server failover cluster using clustered nodes on different subnets.
2. BUILTIN\administrators and Local System (NT AUTHORITY\SYSTEM) are not automatically provisioned in the sysadmin fixed server role
3. Flexible failover policy for cluster health detection
4. Indirect checkpoints: The indirect checkpoints feature provides a database-specific alternative to automatic checkpoints, which are configured by a server property. Indirect checkpoints implements a new checkpointing algorithm for the Database Engine. This algorithm provides a more accurate guarantee of database recovery time in the event of a crash or a failover than is provided by automatic checkpoints. To ensure that database recovery does not exceed allowable downtime for a given database, you can specify the maximum allowable downtime for that database.
5. Columnstore indexes

Mention some of the important Perfmon counters

Processor: % Processor Time
PhysicalDisk: Avg. Disk Queue Length
PhysicalDisk: Avg. Disk Reads/sec
PhysicalDisk: Avg. Disk Writes/sec

SQLServer:Buffer Manager: Buffer cache hit ratio
SQLServer:Buffer Manager: Page life expectancy
SQLServer:Buffer Manager: Lazy writes/sec
Memory: Available Bytes
SQLServer:Memory Manager: Total Server Memory (KB)
SQLServer:Memory Manager: Target Server Memory (KB)

Benefits of backup compression

1. Smaller backups require less disk space
2. Less I/O, so backups and restores run faster (at the cost of increased CPU usage during the operation)
3. Less time to back up the databases



Name 10 Dynamic management Views

1.sys.dm_db_index_physical_stats: This dynamic management function returns size and fragmentation information for the data and indexes of tables and views.
2.sys.dm_exec_requests: This view shows information regarding each request occurring in the SQL Server instance.
3.sys.dm_exec_sessions: This returns one row per authenticated session on SQL Server. This view is useful when you want to find out all of the users who are connected to the database server.
4.sys.dm_os_performance_counters: The information returned by this view lists performance statistics directly related to SQL Server. The information returned represents the internal performance counters as displayed through the Windows Performance Monitor.
5.sys.dm_exec_sql_text: This is a dynamic management function that returns the text of the SQL Server statement identified by a SQL handle.
6.sys.dm_db_file_space_usage: Returns space usage information for each file in the database.
7.sys.dm_db_mirroring_connections - Returns a row for each connection established for database mirroring.
8.sys.dm_db_mirroring_auto_page_repair - Returns a row for every automatic page-repair attempt on any mirrored database on the server instance. This view contains rows for the latest automatic page-repair attempts on a given mirrored database, with a maximum of 100 rows per database
9.sys.dm_db_index_operational_stats- Returns current low-level I/O, locking, latching, and access method activity for each partition of a table or index in the database.
10.sys.dm_db_index_usage_stats- Returns counts of different types of index operations and the time each type of operation was last performed.
11.sys.dm_db_missing_index_details- Returns detailed information about missing indexes, excluding spatial indexes.
12.sys.dm_db_missing_index_groups- Returns information about what missing indexes are contained in a specific missing index group, excluding spatial indexes.
13.sys.dm_exec_cached_plans- Returns a row for each query plan that is cached by SQL Server for faster query execution. You can use this dynamic management view to find cached query plans, cached query text, the amount of memory taken by cached plans, and the reuse count of the cached plans.
14.sys.dm_io_pending_io_requests- Returns a row for each pending I/O request in SQL Server.
15.sys.dm_io_virtual_file_stats- Returns I/O statistics for data and log files.
16.sys.dm_repl_articles- Returns information about database objects published as articles in a replication topology.
What is a covering index?
A covering index is a non-clustered index which includes all columns referenced in the query; therefore, the optimizer does not have to perform an additional lookup against the table to retrieve the requested data. Because all the requested data is present in the index itself, it is a faster operation.
To be considered a covering index, all columns referenced by the query must be contained in the index: all columns in the SELECT, JOIN, WHERE, GROUP BY, and HAVING clauses, and so on.
What is a key lookup?
A key lookup is an extra read which the query has to make in the absence of a suitable covering index.
In a query, the number of columns involved in the SELECT statement may exceed the number of columns in the non-clustered index, and when this happens a key lookup is performed for each row in the result set. So 500 rows in the result set equates to 500 extra key lookups.
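For example, a covering index for a hypothetical query; because OrderDate and TotalDue are carried in the index leaf level via INCLUDE, no key lookup is needed:

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);

SELECT CustomerID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42;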

What is a database snapshot? Explain sparse files. Can we restore using a snapshot?


A Database Snapshot provides a point-in-time, read-only copy of the source database. It is implemented using NTFS sparse files: the snapshot files are essentially empty when the snapshot is created, and a page is copied into the snapshot (copy-on-write) only the first time that page is modified in the source database. The only information contained within a Database Snapshot is therefore the pages that have changed in the source database since the Database Snapshot was created.

CREATE DATABASE database_snapshot_name
ON
(
NAME = logical_file_name,
FILENAME = 'os_file_name'
) [ ,...n ]
AS SNAPSHOT OF source_database_name

■ You cannot back up, restore, or detach the Database Snapshot.
■ It must exist on the same SQL Server instance as the source database.
■ You cannot drop, detach, or restore the source database while a Database Snapshot is present.
■ You cannot create Database Snapshots against system databases.

Database Snapshot provides a point-in-time copy of a source database. Because it provides a copy of the data in the database, you can use it to recover in a variety of situations. In the event of data being accidentally damaged, or if an administrative process makes unwanted changes, you can extract the original version of the data from the Database Snapshot and move it back into the source database using either an INSERT or an UPDATE statement.

The RESTORE DATABASE command contains a special clause that enables a DBA to revert a database from a Database Snapshot. This operation would invalidate any other Database Snapshots created against the source database, so you must drop all other Database Snapshots before you can perform a revert. Additionally, any operation that relies on a contiguous transaction log chain will be interrupted because the restore process will rebuild the transaction log.
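For example (names and paths are illustrative; the NAME value must be the logical name of the source data file):

-- Create the snapshot
CREATE DATABASE pubs_snapshot
ON (NAME = 'pubs', FILENAME = 'C:\Snapshots\pubs_snapshot.ss')
AS SNAPSHOT OF pubs;

-- Revert the source database to the snapshot (drop any other snapshots first)
RESTORE DATABASE pubs FROM DATABASE_SNAPSHOT = 'pubs_snapshot';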

Slow running queries
Mainly two reasons

1.Blocking

SQL Server 2005 provides more detailed and consistent wait information, reporting approximately 125 wait types compared with the 76 wait types available in SQL Server 2000. The DMVs that provide this information range from sys.dm_os_wait_stats, for overall and cumulative waits for SQL Server, to the session-specific sys.dm_os_waiting_tasks, which breaks down waits by session.
To find sessions that have been granted locks or are waiting for locks, you can use the sys.dm_tran_locks DMV.

A blocking report (for example, one produced by a custom sp_block procedure) lists the sessions that are blocked and the sessions that are blocking them.

2.Index problems

sys.dm_db_index_operational_stats provides comprehensive index usage statistics, including blocking. In terms of blocking, it provides a detailed accounting of locking statistics per table, index, and partition.
Tempdb full
Use sys.dm_db_task_space_usage, which returns the session_id and request_id.
Then use sys.dm_exec_requests to get the sql_handle and plan_handle.
Get the statement text and query plan from sys.dm_exec_sql_text and sys.dm_exec_query_plan, as in the sketch below.
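A sketch tying these DMVs together to find the statements consuming the most tempdb space:

SELECT t.session_id, t.request_id,
       t.user_objects_alloc_page_count,
       t.internal_objects_alloc_page_count,
       st.text AS statement_text,
       qp.query_plan
FROM sys.dm_db_task_space_usage AS t
JOIN sys.dm_exec_requests AS r
  ON t.session_id = r.session_id AND t.request_id = r.request_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS qp
ORDER BY t.internal_objects_alloc_page_count DESC;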
What are the steps to be done to upgrade SQL from one version to another

Log file got deleted in SQL 2000 and 2005.What you will do to recover?

There's an undocumented and unsupported command called DBCC REBUILD_LOG. It deletes the transaction log file(s) and creates a new one. It completely disregards any uncommitted transactions that may exist – it just deletes them. This means these uncommitted transactions never get a chance to roll back.
What does this mean? Well, in the best case, the only in-flight transactions were altering user data – so your business logic, the inherent and constraint-enforced relationships between tables, and the basic logical integrity of your user data are all broken. In the worst case, the in-flight transactions were altering the structure of the database (e.g. doing a page split), so the fact that they didn't get a chance to roll back means the database may be structurally corrupt!
Implications

1: Logical data loss
Imagine you’re at an ATM transferring $1000 from your checking account to your savings account. On the SQL Server in the bank’s data-center, the transaction happens in two parts – update the record in the checking accounts table with the balance minus $1000 and then update the record in the savings account table with the balance plus $1000. The process is half-way through – $1000 has been debited from your checking account, but not yet credited to your savings account, when disaster strikes! A work crew outside the data-center accidentally cuts the power and the machine hosting SQL Server powers down. The ATM displays one of those infuriating “We’re sorry, our computers are unavailable at present” and you walk away grumbling but think nothing more about it.
Meanwhile, the power's been restored to the data-center and SQL Server is going through crash recovery. The partially-completed transaction on your account should roll back and credit the $1000 back to your checking account. But the new DBA at the bank decides that it's taking too long for the system to come back up. He deletes the transaction log and rebuilds it to get the system up faster.
Unfortunately, the portion of the transaction log that had not had a chance to recover included the transaction involving your bank account. Even more unfortunately, a checkpoint occurred right before the power loss, and the database page containing the updated checking account balance was flushed to disk. When the transaction log is deleted and rebuilt, your transaction can’t rollback – because it’s simply gone. So the $1000 debit from your checking account is not rolled back – you’ve lost $1000!!
2: Physical database corruption.
Imagine an insert into a table with a single non-clustered index. Under the covers, the insert happens in two parts – insert the record into the table and then insert the corresponding non-clustered index record. Imagine a similar disaster-recovery situation as I described above occurring after the table insert has been done but not the non-clustered index insert. If the log is rebuilt, then the table and its index are not in sync – actual physical corruption!
How Does EMERGENCY-Mode Repair Work?
When the transaction log is damaged and no backups are available, the last resort is emergency-mode repair: set the database to EMERGENCY and SINGLE_USER mode, then run DBCC CHECKDB with the REPAIR_ALLOW_DATA_LOSS option. CHECKDB rebuilds the transaction log (losing any transactional consistency the old log would have provided) and then runs a full repair of the database. Because data loss and inconsistency are likely, it should be attempted only when restoring from backups is not possible.
C2 Audit
Microsoft® SQL Server includes a C2 Audit Mode that can be configured through SQL Server Management Studio or with the stored procedure sp_configure. Selecting this option configures the server to record both successful and unsuccessful attempts to access statements and objects. C2 Audit Mode always enables all event classes in the Security Audit category. For most production sites, auditing all event classes results in an unacceptable performance degradation of the SQL Server. If you use C2 Audit Mode, do not enable all event classes.
For MS SQL 2005, C2 Audit Mode data is saved in a file in the \MSSQL\Data directory of default instances, and the \MSSQL$instancename\Data directory of named instances. When the audit log file reaches its size limit of 200 MB, the SQL Server creates a new file, closes the old file, and writes all new audit records to the new file. This auditing process continues until the audit data directory fills up or auditing is turned off.
Important:
C2 Audit Mode saves a large amount of event information to the log file, which can grow quickly. If the data directory in which logs are being saved runs out of space, the SQL Server shuts itself down. If auditing is set to start automatically, you must either restart the instance with the -f flag (which bypasses auditing), or free up additional disk space for the audit log.
Enabling C2 Audit Mode
You must be a member of the sysadmin fixed server role to perform this procedure.
Run the stored procedure sp_configure with Advanced Options enabled followed by the RECONFIGURE command:
sp_configure 'show advanced options', 1;
RECONFIGURE
To enable auditing, run sp_configure with the c2 audit mode parameter set to 1:
EXEC sp_configure 'c2 audit mode', 1;
RECONFIGURE;

After changing the C2 audit mode setting, restart the server.
If you have enabled C2 auditing, you might want to disable login auditing on the Security tab of the SQL Server Properties window in SQL Server Enterprise Manager (described previously). If you do not disable this feature, the auditing functions record the same type of event twice, unnecessarily degrading server performance.
Disaster Recovery: What to do when the SA account password is lost in SQL Server 2005
1. First start SQL Server in single-user mode.
2. Then connect to SQL Server using SQLCMD.
3. Type SQLCMD -S <machinename>\<instancename> -E
4. Type EXEC sp_addsrvrolemember 'domain or computername\username', 'sysadmin';
GO

Myths

30-24) you can back up a single table
No. You can effectively back up a single table if it happens to be wholly contained on a single filegroup, but there's no way to say BACKUP TABLE.
30-23) you can back up a mirror database
No. A mirror database is not accessible except through a database snapshot. And you can't back up that either.
30-20) log backups always clear the log
No.
If there's no concurrent data backup running, a log backup will always try to clear the log, and only succeed in clearing the inactive portion of the log - the log that's only considered 'required' by SQL Server because it hasn't yet been backed up. If anything else is holding the log 'required', it cannot be cleared, even though it has been backed up. Subsequent log backups will check again and again until the time comes when that portion of the log can be cleared.
30-18) you cannot backup a corrupt database
No. In most cases you can use the WITH CONTINUE_AFTER_ERROR option to back up the corrupt database. If that fails (maybe because of a damaged boot page or file-header page), there are no other options apart from OS-level file backups.
30-19) you cannot stop someone doing a BACKUP LOG .. WITH NO_LOG or TRUNCATE_ONLY operation
No. In SQL Server 2008 it's not possible any more (yay!) and in 2005 and before, use trace flag 3231 which turns the operation into a no-op.
30-17) log backups will be the size of the log
No. The log has to accommodate the space necessary to roll back active transactions, and the amount of space returned by DBCC SQLPERF (LOGSPACE) on a busy system does not accurately reflect the amount of log records in the log.
30-15) you can back up a database snapshot
No. It's not implemented, but would be great if you could.
30-16) you can use database snapshots instead of log backups
No. A database snapshot is only usable while the database on which it is based is usable and online. If the source database is corrupted, the database snapshot most likely is too. If the source database goes suspect, so does the database snapshot.
30-13) a tail-of-the-log backup is always possible
No. A tail-of-the-log backup is one that backs up all the log generated since the last log backup, in an exceptional situation. If the data files are damaged, you can still do a tail-of-the-log backup EXCEPT if the un-backed-up log contains a minimally-logged operation. That would require reading data extents - which cannot be done if the data files are damaged. For this reason, the BULK_LOGGED recovery model should not be used on databases that have 24x7 user transactions.
30-07) full and differential backups only contain the log generated while the backup was running
No. A full or differential backup contains enough log to be able to recover the database to a transactionally-consistent view of the database at the time the data-reading portion of the backup finished (or as far back as the oldest log record that transactional replication has not yet processed - to ensure that replication works properly after a restore).
30-04) concurrent log backups are not possible while a full or differential backup is in progress
No, this changed in SQL Server 2005.
30-02) switching from the FULL recovery model to the BULK_LOGGED recovery model and back again breaks the log backup chain
No. It just doesn't. Switching from either FULL or BULK_LOGGED to SIMPLE *does* break the log backup chain however.
30-01) backup operations cause blocking
No. Backup operations do not take locks on user objects. Backups do cause a really heavy read load on the I/O subsystem so it might look like the workload is being blocked, but it isn't really. It's just being slowed down. There's a special case where a backup that has to pick up bulk-logged extents may briefly block a concurrent checkpoint operation, but user transactions are never blocked.

More on how much transaction log a full backup includes
By: Paul Randal
Posted on: January 31, 2008 8:01 am
In a previous post (see here) I debunked a myth about how much transaction log a full backup would include. I had a question in the blog post comments that asked (paraphrasing):
The full backup has to include all the transaction log from the begin LSN of the oldest active transaction at the time the data read portion of the backup ends, until the LSN at which the data read portion ends. If that begin LSN is later in time than the LSN of the checkpoint the backup does initially, why does the full backup need to include all the transaction log between the checkpoint and the begin LSN? What is it used for?
I replied in the comments with a quip that it would be easier to reply with a whiteboard and a timeline – so I got all enthusiastic and created a picture in Powerpoint to help explain better.

Consider the timeline in the picture above for a full backup (the red numbers match the list below):
1. The backup operation takes a checkpoint to force all dirty pages in the buffer pool to disk – both those containing changes from transactions that have committed and those containing changes from transactions that are still in-flight. The backup operation then starts reading the allocated pages in the database.
2. The read operation reads page X
3. Transaction A starts
4. Transaction A makes a change to page X. The copy in the backup is now out-of-date. Note that the backup will not read page X again – it's already passed that point in the database.
5. Transaction B starts. It won't complete before the data read operation completes, so its begin LSN is the oldest active transaction begin LSN.
6. Transaction A commits. This commits the changes to page X.
7. The backup data read operation completes and transaction log reading starts.
Now, the reason that the transaction log is read is so that the restore operation can recover the database so it is transactionally consistent as of the point in time when the read data operation completed.
If the transaction log was only included from the oldest active transaction begin LSN (point 5), then the copy of page X that was restored from the backup (read at point 2) would not be updated with the changes from transaction A (that happened at point 4). This means that it would not be transactionally consistent with the rest of the database as of the time the read data operation completed (point 7).
So, (ignoring replication) the minimum LSN of the transaction log that's included in the full backup is MIN (LSN of last checkpoint, LSN of oldest active transaction). This ensures that recovery can REDO log records to bring pages up-to-date and UNDO log records for transactions that had not committed.

Fix heap fragmentation by creating and dropping a clustered index.
If you create a clustered index, all the linkages to the heap records are no longer valid and so all the nonclustered indexes must be rebuilt automatically to pick up the new clustering key links. If you drop the clustered index again, all the clustering key links are now invalid so all the nonclustered indexes must be rebuilt automatically to pick up the new heap physical location links.
In other words, if you create and then drop a clustered index, all the nonclustered indexes are rebuilt twice. Nasty.
If you think you can use ALTER TABLE ... REBUILD in SQL Server 2008 to fix heap fragmentation, you can, but it causes all the nonclustered indexes to be rebuilt as the heap record locations obviously change.
Now, what about if you rebuild a clustered index? Well, that depends on what version you're on and whether you're doing a simple rebuild or changing the definition of the index. One major point of misconception is that moving a clustered index or partitioning it changes the cluster keys - it doesn't.
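In SQL Server 2008 the direct heap rebuild looks like this (table name hypothetical; remember it still rebuilds all nonclustered indexes once, though not twice):

ALTER TABLE dbo.MyHeap REBUILD;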

What are the steps needs to be followed while upgrading an instance?
■ Create a series of checklists
■ Back up all important files
■ Ensure database consistency
■ Reserve enough disk space
■ Transfer login information
■ Disable all startup stored procedures
■ Stop replication
■ Quit all applications
■ Register your servers after the upgrade
■ Repopulate full-text catalogs
■ Update statistics
■ Update usage counters – DBCC UPDATEUSAGE
■ Configure your new SQL Server installation

New Features in SQL Server 2008

1.Hot Add CPU
2.Transparent data encryption
Enable encryption of entire database,data files and log files to secure data from unauthorised users
3.Automatic Recovery of Data Pages
SQL Server 2008 enables the principal and mirror machines to transparently recover from 823/824 types of data page errors by requesting a fresh copy of the suspect page from the mirroring partner transparently to end users and applications.
4. Data Compression
Enable data to be stored more effectively, and reduce the storage requirements for your data. Data compression also provides significant performance improvements for large I/O bound workloads, like data warehousing.
5. Backup Compression
Keeping disk-based backups online is expensive and time-consuming. With SQL Server 2008 backup compression, less storage is required to keep backups online, and backups run significantly faster since less disk I/O is required.
❑ SQL Server Extended Events — SQL Server 2005 introduced the ability to associate SQL Profiler traces with Windows Performance Log counters. This was extremely helpful in identifying poorly performing queries or the lack of sufficient resources in the system to handle certain events. SQL Server 2008 takes this a step further by introducing SQL Server Extended Events. Extended Events allow database administrators to get a better understanding of system behavior by correlating SQL Server data to the operating system or database applications. This is handled by directing output from Extended Events to Event Tracing for Windows (ETW).
❑ Resource Governor — The Resource Governor is a new feature that allows administrators to specify configuration options that limit the amount of CPU and memory available to incoming requests. This can help prevent applications or queries from consuming 100 percent of the CPU or all available memory. The Resource Governor uses configurable workload groups, which define what the CPU and memory limits are for any session that is classified as being a member of that group. Classification can be performed based on a number of system functions or user-defined functions.
❑ Policy-Based Management— SQL Server 2008 includes features that allow administrators greater control over their server environments by enforcing behaviors or constraints through a policy-based mechanism. In addition to using the included policies, administrators can create their own policies to configure servers to meet compliance requirements and standardize naming conventions, thereby simplifying administration.
❑ Centralized Management — Central Management servers are SQL Servers that can be configured to manage multiple servers as part of a group. You can also execute queries against a SQL Server group that can return results to either a combined set or a separate pane per server. A Central Management server can also be used to enforce management policies against multiple target servers simultaneously.
❑ Query Editor IntelliSense — SQL Server Management Studio now provides IntelliSense functionality in the Query Editor. The IntelliSense functionality provides auto-completion ability, error underlining, quick info help, syntax pair matching, and parameter help.
❑ PowerShell Provider — SQL Server 2008 includes new features that integrate with Windows PowerShell to help administrators automate many SQL Server 2008 tasks. PowerShell is an administrative command-line shell and scripting language that can make it easier to perform many common tasks through automation. The PowerShell provider in SQL Server 2008 exposes SQL Server Management Objects (SMO) in a structure similar to file system paths. SQL Server PowerShell also includes several SQL Server cmdlets for running scripts and other common tasks.
❑ Compressed Indexes and Tables — Compression is now supported for tables, indexes, and indexed views on either rows or pages. Compression operations have an effect on performance. Because of this, row and page compression can be configured on a per-partition basis. For example, you could choose to compress a read-only partition, but leave a write-intensive partition uncompressed to minimize impact on the CPU.
❑ FILESTREAM — FILESTREAM is a new storage mechanism for storing data on the file system, rather than in the database itself. SQL Server 2008 applications can use FILESTREAM to take advantage of the storage and performance benefits of the NTFS file system while maintaining transactional consistency with the files themselves. Developers can leverage FILESTREAM as a mechanism for allowing large files to be maintained by the application database, without causing the database to become unnecessarily bloated.
❑ Partition Switching — Simply put, Partition Switching enables you to move data between partitions for a table or index. Data can be transferred between partitions without disrupting the integrity of the table or index.
❑ MERGE Statement — Transact-SQL includes a new MERGE statement that, based on the results of a join with a source table, can perform INSERT, UPDATE, or DELETE operations against a target table. For example, you can use MERGE to incrementally update a destination table by comparing the differences from a source table.

How SQL Server Browser service will work?


On start-up, the SQLBrowser Service queries the registry to discover all the names and port numbers of installed servers and reserves UDP port 1434. It then listens on UDP port 1434 for SQL Server Resolution Protocol (SSRP) requests and responds to the requests with the list of instances and their respective port assignments so that clients can connect without knowing the port number assignment. There are definite security considerations to this arrangement, so it is very important that no unauthenticated traffic on UDP port 1434 be allowed on the network, because the service will respond to any request on that port. This creates the potential of exposing more information about the server instances than some organizations find acceptable.
If the SQLBrowser Service is disabled, it will be necessary to specify a static port number for all named instances of the SQL Server Service and to configure all client applications that connect to those instances with the appropriate connection information.

Virtualization Considerations
SQL Server 2008 is the first version of SQL Server that is supported in virtual environments; however, there are some limitations. Microsoft will officially only support installations of SQL Server in Hyper-V environments on Windows Server 2008, and clustering of virtual machines is not supported. Because of the continued improvement in virtualization technology, it is becoming a much more attractive option to companies that want to either consolidate hardware or take advantage of some of the recovery and portability options available. It's been my experience that the biggest bottleneck that occurs when running SQL Server in a virtual machine is I/O performance. For this reason, I strongly recommend using SAN storage for the database and transaction log files, to avoid storing database information in a virtual hard drive file.

Database options
Autoclose – When AUTO_CLOSE is ON, SQL Server closes the database files once the last user disconnects; the database is reopened on the next connection. This adds a small overhead, but because the files are not held open by the operating system, a file-level backup is possible.

Autocreate statistics – A best-practice option; enabled by default.

Autoshrink – Shrinks the database data file whenever possible.

Autoupdate statistics – Automatically updates statistics.

Cursor options – Normally used by developers.
Other options are recovery model, collation, and compatibility level.

Database read-only
Encryption enabled
Restrict access – can be set to single-user mode.

Relying solely on BACKUP ... WITH CHECKSUM leaves you susceptible to in-memory corruptions. If a bad memory chip, a badly written extended procedure (XP), or another rogue Windows process corrupts a SQL Server data file page in memory, and that page then gets written to disk, you've got a corrupt page with a valid checksum – and nothing will catch that except DBCC CHECKDB.

Page restore is possible only in the Full and Bulk-Logged recovery models.