
Update Job owner for all SQL Agent jobs


-- Change the owner of every SQL Agent job to sa
DECLARE @name_holder sysname

DECLARE My_Cursor CURSOR FOR
    SELECT [name] FROM msdb.dbo.sysjobs

OPEN My_Cursor
FETCH NEXT FROM My_Cursor INTO @name_holder

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_update_job
        @job_name = @name_holder,
        @owner_login_name = 'sa'

    FETCH NEXT FROM My_Cursor INTO @name_holder
END

CLOSE My_Cursor
DEALLOCATE My_Cursor

Change Maintenance Plan Owner, Change SSIS Package Owner


If you're used to administering your SQL Servers using Windows Authentication, one side effect is that you'll end up owning a lot of databases, jobs, plans and packages. Sometimes this has undesirable effects, such as a job or package not working properly. But it's not always obvious how to change the owner of a package once it's created. Note that the method is slightly different for each version of SQL Server.

SQL Server 2008 Maintenance Plan or SSIS Package

--change the owner of a SQL Server 2008 Maintenance Plan or SSIS Package
UPDATE [msdb].[dbo].[sysssispackages]
SET [ownersid] = 0x01 --sa user
WHERE [name] = 'YOUR_MAINT_PLAN_OR_PACKAGE'

SQL Server 2005 Maintenance Plan or SSIS Package

--change the owner of a SQL Server 2005 Maintenance Plan or SSIS Package
UPDATE [msdb].[dbo].[sysdtspackages90]
SET [ownersid] = 0x01 --sa user
WHERE [name] = 'YOUR_MAINT_PLAN_OR_PACKAGE'

SQL Server 2000 Maintenance Plan

--change the owner of a SQL Server 2000 Maintenance Plan
UPDATE [msdb].[dbo].[sysdbmaintplans]
SET [owner] = 'sa'
WHERE [plan_name] = 'YOUR_MAINT_PLAN'

SQL Server 2000 DTS package

--change the owner of a SQL Server 2000 DTS package
--note you need to update the owner column as well
UPDATE [msdb].[dbo].[sysdtspackages]
SET [owner] = 'sa',
[owner_sid] = 0x01 --sa user
WHERE [name] = 'YOUR_DTS_PACKAGE'
Before I learned this trick, I would literally delete and recreate the package while logged in as sa just to get the package under the proper owner. Otherwise, every time you edit the plan, the job owner changes to match the plan owner, which can cause the job to fail.

SQL Job owner issue

-- list each SQL Agent job with its owner login
SELECT s.name AS job_name, l.name AS owner_login
FROM msdb..sysjobs s
LEFT JOIN master.sys.syslogins l ON s.owner_sid = l.sid

-- list each SSIS package / maintenance plan with its owner login
SELECT s.name AS package_name, l.name AS owner_login
FROM msdb..sysssispackages s
LEFT JOIN master.sys.syslogins l ON s.ownersid = l.sid

-- the same job list using SUSER_SNAME instead of a join
SELECT s.name,
       SUSER_SNAME(s.owner_sid) AS owner
FROM msdb..sysjobs s
ORDER BY s.name

Common Language Runtime (CLR)

The Common Language Runtime (CLR) is the heart of the Microsoft .NET Framework and provides the execution environment for all .NET Framework code. Code that runs within the CLR is referred to as managed code. The CLR provides various functions and services required for program execution, including just-in-time (JIT) compilation, allocating and managing memory, enforcing type safety, exception handling, thread management, and security.
With the CLR hosted in Microsoft SQL Server (called CLR integration), you can author stored procedures, triggers, user-defined functions, user-defined types, and user-defined aggregates in managed code. Developers use compiled OO languages like C# or Visual Basic .NET to write code and to have the code executed as if it were a T-SQL procedure, function, or trigger. Because managed code compiles to native code prior to execution, you can achieve significant performance increases in some scenarios.
Managed code uses Code Access Security (CAS), code links, and application domains to prevent assemblies from performing certain operations. SQL Server 2005, SQL Server 2008, and SQL Server 2012 use CAS to help secure the managed code and prevent compromise of the operating system or database server.
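For illustration, once CLR integration is enabled (see the next section), a compiled managed assembly is registered and exposed roughly as follows. This is a minimal sketch: the file path, assembly name, class name, and method name (MyClrLib, RegexFunctions, IsMatch) are hypothetical placeholders, not part of the original text.

--register a compiled .NET assembly inside the database (names and path are placeholders)
CREATE ASSEMBLY MyClrLib
FROM 'C:\clr\MyClrLib.dll'
WITH PERMISSION_SET = SAFE
GO

--expose a static method from that assembly as a T-SQL scalar function
CREATE FUNCTION dbo.fn_RegexIsMatch (@input NVARCHAR(MAX), @pattern NVARCHAR(400))
RETURNS BIT
AS EXTERNAL NAME MyClrLib.[MyClrLib.RegexFunctions].IsMatch
GO

--call it like any other T-SQL function
SELECT dbo.fn_RegexIsMatch(N'abc123', N'^[a-z]+[0-9]+$')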

Enable CLR in SQL Server

In SQL Server 2005, SQL Server 2008, and SQL Server 2012, the Common Language Runtime (CLR) is off by default. In an effort to improve security, Microsoft has turned many features "off by default". This is a big change from the old policy of turning every feature on so that developers weren't discouraged from using the feature due to difficulties in getting the feature to work.

EXEC sp_configure 'show advanced options', '1'
Go

Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.


Reconfigure
Go

Command(s) completed successfully.


EXEC sp_configure 'clr enabled', '1'
Go

Configuration option 'clr enabled' changed from 0 to 1. Run the RECONFIGURE statement to install.


Reconfigure
Go

Command(s) completed successfully.


EXEC sp_configure 'show advanced options', '0'
Go

Configuration option 'show advanced options' changed from 1 to 0. Run the RECONFIGURE statement to install.


Reconfigure
Go

Command(s) completed successfully.

Monitoring and Tuning for Performance



The goal of monitoring databases is to assess how a server is performing. Effective monitoring involves taking periodic snapshots of current performance to isolate processes that are causing problems, and gathering data continuously over time to track performance trends. Microsoft SQL Server and the Microsoft Windows operating system provide utilities that let you view the current condition of the database and track performance as conditions change.

Monitoring SQL Server lets you do the following:
  • Determine whether you can improve performance. For example, by monitoring the response times for frequently used queries, you can determine whether changes to the query or indexes on the tables are required.
  • Evaluate user activity. For example, by monitoring users trying to connect to an instance of SQL Server, you can determine whether security is set up adequately and test applications or development systems. Or, by monitoring SQL queries as they are executed, you can determine whether they are written correctly and are producing the expected results.
  • Troubleshoot any problems or debug application components, such as stored procedures.
Monitoring is important because SQL Server provides a service in a dynamic environment: the data in the application changes, the type of access that users require changes, and the way that users connect changes. The types of applications accessing SQL Server may even change. SQL Server automatically manages system-level resources such as memory and disk space, so the need for extensive manual system-level tuning is minimized, but monitoring lets administrators identify performance trends and determine whether changes are necessary.
To monitor any component of SQL Server effectively, follow these steps:
  1. Determine your monitoring goals.
  2. Select the appropriate tool.
  3. Identify components to monitor.
  4. Select metrics for those components.
  5. Monitor the server.
  6. Analyze the data.
These steps are discussed in turn below.
To monitor SQL Server effectively you should clearly identify your reason for monitoring. Reasons can include the following:
  • Establish a baseline for performance.
  • Identify performance changes over time.
  • Diagnose specific performance problems.
  • Identify components or processes to optimize.
  • Compare the effect of different client applications on performance.
  • Audit user activity.
  • Test a server under different loads.
  • Test database architecture.
  • Test maintenance schedules.
  • Test backup and restore plans.
  • Determine when to modify your hardware configuration.
After determining why you are monitoring, you should select the appropriate tools. The Windows operating system and SQL Server provide a complete set of tools to monitor servers in transaction-intensive environments. These tools clearly reveal the condition of an instance of the SQL Server Database Engine or an instance of SQL Server Analysis Services.
Windows provides the following tools for monitoring applications that are running on a server:
  • System Monitor, which lets you collect and view real-time data about activities such as memory, disk, and processor usage.
  • Performance logs and alerts.
  • Task Manager.
For more information about Windows Server or Windows tools, see the Windows documentation.
SQL Server provides the following tools for monitoring components of SQL Server:
  • SQL Trace
  • SQL Server Profiler 
  • SQL Server Management Studio Activity Monitor
  • SQL Server Management Studio Graphical Showplan
  • Stored procedures
  • Database Console Commands (DBCC)
  • Built-in functions
  • Trace flags
For more information about SQL Server monitoring tools, see Tools for Performance Monitoring and Tuning.
The third step to monitoring an instance of SQL Server is to identify the components that you monitor. For example, if you are using SQL Server Profiler to trace a server you can define the trace to collect data about specific events. You can also exclude events that do not apply to your situation.
After identifying the components to monitor, determine the metrics for components you monitor. For example, after selecting the events to include in a trace, you can choose to include only specific data about the events. Limiting the trace to data that is relevant to the trace minimizes the system resources required to perform the tracing.
To monitor the server, run the monitoring tool that you have configured to gather data. For example, after a trace is defined, you can run the trace to gather data about events raised in the server.
After the trace has finished, analyze the data to see if you have achieved your monitoring goal. If you have not, modify the components or metrics that you used to monitor the server.
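As a concrete example of steps 5 and 6, the following sketch uses the sys.dm_exec_query_stats and sys.dm_exec_sql_text DMVs to list cached statements by average elapsed time; the TOP value and ordering are arbitrary choices added here, not from the original text.

--top cached statements by average elapsed time (microseconds)
SELECT TOP (10)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_time DESC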
The following outlines the process for capturing event data and putting it to use.
  1. Apply filters to limit the event data collected.
    Limiting the event data allows the system to focus on the events pertinent to the monitoring scenario. For example, if you want to monitor slow queries, you can use a filter to monitor only those queries issued by the application that take more than 30 seconds to run against a particular database; a scripted example of such a filter appears after this list. For more information, see How to: Set a Trace Filter (Transact-SQL) and How to: Filter Events in a Trace (SQL Server Profiler).
  2. Monitor (capture) events.
    As soon as it is enabled, active monitoring captures data from the specified application, instance of SQL Server, or operating system. For example, when disk activity is monitored using System Monitor, monitoring captures event data such as disk reads and writes and displays it to the screen. For more information, see Monitoring Resource Usage (System Monitor).
  3. Save captured event data.
    Saving captured event data lets you analyze it later or even replay it using SQL Server Profiler. Captured event data is saved to a file that can be loaded back into the tool that originally created it for analysis. SQL Server Profiler permits event data to be saved to a SQL Server table. Saving captured event data is important when you are creating a performance baseline. The performance baseline data is saved and used when comparing recently captured event data to determine whether performance is optimal. For more information, see Using SQL Server Profiler.
  4. Create trace templates that contain the settings specified to capture the events.
    Trace templates include specifications about the events themselves, event data, and filters that are used to capture data. These templates can be used to monitor a specific set of events later without redefining the events, event data, and filters. For example, if you want to frequently monitor the number of deadlocks and the users involved in those deadlocks, you can create a template defining those events, event data, and event filters; save the template; and reapply the filter the next time that you want to monitor deadlocks. SQL Server Profiler uses trace templates for this purpose. For more information, see How to: Set Trace Definition Defaults (SQL Server Profiler) and How to: Create a Trace Template (SQL Server Profiler).
  5. Analyze captured event data.
    To be analyzed, the captured, saved event data is loaded into the application that captured the data. For example, a captured trace from SQL Server Profiler can be reloaded into SQL Server Profiler for viewing and analysis. For more information, see Viewing and Analyzing Traces with SQL Server Profiler.
    Analyzing event data involves determining what is occurring and why. This information lets you make changes that can improve performance, such as adding more memory, changing indexes, correcting coding problems with Transact-SQL statements or stored procedures, and so on, depending on the type of analysis performed. For example, you can use the Database Engine Tuning Advisor to analyze a captured trace from SQL Server Profiler and make index recommendations based on the results. For more information, see Tuning the Physical Database Design.
  6. Replay captured event data.
    Event replay lets you establish a test copy of the database environment from which the data was captured and repeat the captured events as they occurred originally on the real system. This capability is only available in SQL Server Profiler. You can replay them at the same speed as they originally occurred, as fast as possible (to stress the system), or more likely, one step at a time (to analyze the system after each event has occurred). By analyzing the exact events in a test environment, you can prevent harm to the production system. For more information, see Replaying Traces.
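To make steps 1 and 2 concrete, a Profiler-style trace can also be scripted server-side with the sp_trace_* system procedures. The sketch below is illustrative only: the file path is a placeholder, and the event ID (12 = SQL:BatchCompleted), column IDs (1 = TextData, 13 = Duration) and the microsecond duration unit are the commonly documented values, so verify them against your version's documentation before relying on them.

--create a server-side trace that captures batches taking 30 seconds or more
DECLARE @TraceID INT, @maxfilesize BIGINT, @on BIT, @duration BIGINT
SET @maxfilesize = 50          -- MB per trace file
SET @on = 1
SET @duration = 30000000       -- 30 seconds, expressed in microseconds

EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\traces\slow_batches', @maxfilesize, NULL

--capture TextData (1) and Duration (13) for SQL:BatchCompleted (event 12)
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on

--filter: Duration >= 30 seconds (logical operator 0 = AND, comparison 4 = >=)
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration

--start the trace
EXEC sp_trace_setstatus @TraceID, 1

SELECT @TraceID AS TraceID     -- keep this ID so you can stop and close the trace later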

SQL Query Performance Tuning

Creating useful indexes is one of the most important ways to achieve better query performance. Useful indexes help you find data with fewer disk I/O operations and less system resource usage.
To create useful indexes, you must understand how the data is used, the types of queries and how frequently they are run, and how the query processor can use indexes to find your data quickly.
When you choose what indexes to create, examine your critical queries, the performance of which will affect user experience most. Create indexes to specifically aid these queries. After adding an index, rerun the query to see if performance is improved. If it is not, remove the index.
As with most performance optimization techniques, there are tradeoffs. For example, with more indexes, SELECT queries will potentially run faster. However, DML (INSERT, UPDATE, and DELETE) operations will slow down significantly because more indexes must be maintained with each operation. Therefore, if your queries are mostly SELECT statements, more indexes can be helpful. If your application performs many DML operations, you should be conservative with the number of indexes you create.
SQL Server Compact includes support for showplans, which help assess and optimize queries. SQL Server Compact uses the same showplan schema as SQL Server 2008 R2 except SQL Server Compact uses a subset of the operators. For more information, see the Microsoft Showplan Schema at http://schemas.microsoft.com/sqlserver/2004/07/showplan/.
The next few sections provide additional information about creating useful indexes.

Create Highly-Selective Indexes

Indexing on columns used in the WHERE clause of your critical queries frequently improves performance. However, this depends on how selective the index is. Selectivity is the ratio of qualifying rows to total rows. If the ratio is low, the index is highly selective. It can get rid of most of the rows and greatly reduce the size of the result set. It is therefore a useful index to create. By contrast, an index that is not selective is not as useful.
A unique index has the greatest selectivity. Only one row can match, which makes it most helpful for queries that intend to return exactly one row. For example, an index on a unique ID column will help you find a particular row quickly.
You can evaluate the selectivity of an index by running the sp_show_statistics stored procedures on SQL Server Compact tables. For example, if you are evaluating the selectivity of two columns, "Customer ID" and "Ship Via", you can run the following stored procedures:
sp_show_statistics_steps 'orders', 'customer id';

RANGE_HI_KEY RANGE_ROWS EQ_ROWS DISTINCT_RANGE_ROWS
------------------------------------------------------------
ALFKI               0            7                   0
ANATR               0            4                   0
ANTON               0           13                   0
AROUT               0           14                   0
BERGS               0           23                   0
BLAUS               0            8                   0
BLONP               0           14                   0
BOLID               0            7                   0
BONAP               0           19                   0
BOTTM               0           20                   0
BSBEV               0           12                   0
CACTU               0            6                   0
CENTC               0            3                   0
CHOPS               0           12                   0
COMMI               0            5                   0
CONSH               0            4                   0
DRACD               0            9                   0
DUMON               0            8                   0
EASTC               0           13                   0
ERNSH               0           33                   0
(90 rows affected)
And
sp_show_statistics_steps 'orders', 'ship via';

RANGE_HI_KEY RANGE_ROWS EQ_ROWS DISTINCT_RANGE_ROWS
------------------------------------------------------------
1               0            320                   0
2               0            425                   0
3               0            333                   0
(3 rows affected)

The results show that the "Customer ID" column has a much lower degree of duplication. This means an index on it will be more selective than an index on the "Ship Via" column.
For more information about using these stored procedures, see sp_show_statistics (SQL Server Compact), sp_show_statistics_steps (SQL Server Compact), and sp_show_statistics_columns (SQL Server Compact).

Create Multiple-Column Indexes

Multiple-column indexes are natural extensions of single-column indexes. Multiple-column indexes are useful for evaluating filter expressions that match a prefix set of key columns. For example, the composite index CREATE INDEX Idx_Emp_Name ON Employees ("Last Name" ASC, "First Name" ASC) helps evaluate the following queries:
  • ... WHERE "Last Name" = 'Doe'
  • ... WHERE "Last Name" = 'Doe' AND "First Name" = 'John'
  • ... WHERE "First Name" = 'John' AND "Last Name" = 'Doe'
However, it is not useful for this query:
  • ... WHERE "First Name" = 'John'
When you create a multiple-column index, you should put the most selective columns leftmost in the key. This makes the index more selective when matching several expressions.

Avoid Indexing Small Tables

A small table is one whose contents fit in one or just a few data pages. Avoid indexing very small tables because it is typically more efficient to do a table scan. This saves the cost of loading and processing index pages. By not creating an index on very small tables, you remove the chance of the optimizer selecting one.
SQL Server Compact stores data in 4-KB pages. The page count can be approximated by using the following formula, although the actual count might be slightly larger because of storage engine overhead.

<# of pages> = (<sum of sizes of columns in bytes> * <# of rows>) / 4096
For example, suppose a table has the following schema:

Column Name     Type (size)
Order ID        INTEGER (4 bytes)
Product ID      INTEGER (4 bytes)
Unit Price      MONEY (8 bytes)
Quantity        SMALLINT (2 bytes)
Discount        REAL (4 bytes)
The table has 2820 rows. According to the formula, it takes about 16 pages to store its data:
<# of pages> = ((4 + 4 + 8 + 2 + 4) * 2820) / 4096 = 15.15 pages

We recommend that you always create indexes on primary keys. It is frequently useful to also create indexes on foreign keys, because primary keys and foreign keys are frequently used to join tables. Indexes on these keys let the optimizer consider more efficient index join algorithms. If your query joins tables by using other columns, it is frequently helpful to create indexes on those columns for the same reason.
When primary key and foreign key constraints are created, SQL Server Compact automatically creates indexes for them and takes advantage of them when optimizing queries. Remember to keep primary keys and foreign keys small. Joins run faster this way.
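If a join column is not already covered by a primary key or foreign key constraint, you can index it explicitly. A minimal sketch, with a hypothetical index name:

--index a join column that is not backed by a PK/FK constraint
CREATE INDEX IX_Orders_CustomerID ON Orders ("Customer ID")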

Use Indexes with Filter Clauses

Indexes can be used to speed up the evaluation of certain types of filter clauses. Although all filter clauses reduce the final result set of a query, some can also help reduce the amount of data that must be scanned.
A search argument (SARG) limits a search because it specifies an exact match, a range of values, or a conjunction of two or more items joined by AND. It has one of the following forms:
  • Column operator <constant or variable>
  • <constant or variable> operator Column
SARG operators include =, >, <, >=, <=, IN, BETWEEN, and sometimes LIKE (in cases of prefix matching, such as LIKE 'John%'). A SARG can include multiple conditions joined with an AND. SARGs can be queries that match a specific value, such as:
  • "Customer ID" = 'ANTON'
  • 'Doe' = "Last Name"
SARGs can also be queries that match a range of values, such as:
  • "Order Date" > '1/1/2002'
  • "Customer ID" > 'ABCDE' AND "Customer ID" < 'EDCBA'
  • "Customer ID" IN ('ANTON', 'AROUT')
An expression that does not use SARG operators cannot benefit from an index, because the SQL Server Compact query processor has to evaluate every row to determine whether it meets the filter clause. Therefore, an index is not useful for expressions that do not use SARG operators. Non-SARG operators include NOT, <>, NOT EXISTS, NOT IN, NOT LIKE, and intrinsic functions.
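For example (this rewrite is an illustration added here, not from the original text): wrapping the column in a function prevents the predicate from being a SARG, while an equivalent range predicate on the bare column can use an index on "Order Date".

--not SARGable: the intrinsic function hides the column from the index
SELECT "Order ID" FROM Orders WHERE DATEPART(yyyy, "Order Date") = 2002

--SARGable: a range on the bare column can seek an index on "Order Date"
SELECT "Order ID" FROM Orders
WHERE "Order Date" >= '1/1/2002' AND "Order Date" < '1/1/2003'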

When determining the access methods for base tables, the SQL Server Compact optimizer determines whether an index exists for a SARG clause. If an index exists, the optimizer evaluates the index by calculating how many rows are returned. It then estimates the cost of finding the qualifying rows by using the index, and chooses indexed access if it has a lower cost than a table scan. An index is potentially useful if its first column or prefix set of columns is used in the SARG, and the SARG establishes a lower bound, upper bound, or both, to limit the search.

Response time is the time it takes for a query to return the first record. Total time is the time it takes for the query to return all records. For an interactive application, response time is important because it is the perceived time for the user to receive visual affirmation that a query is being processed. For a batch application, total time reflects the overall throughput. You have to determine what the performance criteria are for your application and queries, and then design accordingly.
For example, suppose the query returns 100 records and is used to populate a list with the first five records. In this case, you are not concerned with how long it takes to return all 100 records. Instead, you want the query to return the first few records quickly, so that you can populate the list.
Many query operations can be performed without having to store intermediate results. These operations are said to be pipelined. Examples of pipelined operations are projections, selections, and joins. Queries implemented with these operations can return results immediately. Other operations, such as SORT and GROUP-BY, require using all their input before returning results to their parent operations. These operations are said to require materialization. Queries implemented with these operations typically have an initial delay because of materialization. After this initial delay, they typically return records very quickly.
Queries with response time requirements should avoid materialization. For example, using an index to implement ORDER-BY yields better response time than does using sorting. The following section describes this in more detail.

Index the ORDER-BY / GROUP-BY / DISTINCT Columns for Better Response Time

The ORDER-BY, GROUP-BY, and DISTINCT operations are all types of sorting. The SQL Server Compact query processor implements sorting in two ways. If records are already sorted by an index, the processor needs to use only the index. Otherwise, the processor has to use a temporary work table to sort the records first. Such preliminary sorting can cause significant initial delays on devices with lower power CPUs and limited memory, and should be avoided if response time is important.
In the context of multiple-column indexes, for ORDER-BY or GROUP-BY to consider a particular index, the ORDER-BY or GROUP-BY columns must match the prefix set of index columns with the exact order. For example, the index CREATE INDEX Emp_Name ON Employees ("Last Name" ASC, "First Name" ASC) can help optimize the following queries:
  • ... ORDER BY / GROUP BY "Last Name" ...
  • ... ORDER BY / GROUP BY "Last Name", "First Name" ...
It will not help optimize:
  • ... ORDER BY / GROUP BY "First Name" ...
  • ... ORDER BY / GROUP BY "First Name", "Last Name" ...
For a DISTINCT operation to consider a multiple-column index, the projection list must match all index columns, although they do not have to be in the exact order. The previous index can help optimize the following queries:
  • ... DISTINCT "Last Name", "First Name" ...
  • ... DISTINCT "First Name", "Last Name" ...
It will not help optimize:
  • ... DISTINCT "First Name" ...
  • ... DISTINCT "Last Name" ...
Note
If your query always returns unique rows on its own, avoid specifying the DISTINCT keyword, because it only adds overhead.

Sometimes you can rewrite a subquery to use JOIN and achieve better performance. The advantage of creating a JOIN is that you can evaluate tables in a different order from that defined by the query. The advantage of using a subquery is that it is frequently not necessary to scan all rows from the subquery to evaluate the subquery expression. For example, an EXISTS subquery can return TRUE upon seeing the first qualifying row.

Note
The SQL Server Compact query processor always rewrites the IN subquery to use JOIN. You do not have to try this approach with queries that contain the IN subquery clause.

For example, to determine all the orders that have at least one item with a 25 percent discount or more, you can use the following EXISTS subquery:
SELECT "Order ID" FROM Orders O
WHERE EXISTS (SELECT "Order ID"
FROM "Order Details" OD
WHERE O."Order ID" = OD."Order ID"
AND Discount >= 0.25)
You can also rewrite this by using JOIN:
SELECT DISTINCT O."Order ID" FROM Orders O INNER JOIN "Order Details"
OD ON O."Order ID" = OD."Order ID" WHERE Discount >= 0.25

Limit Using Outer JOINs

OUTER JOINs are treated differently from INNER JOINs in that the optimizer does not try to rearrange the join order of OUTER JOIN tables as it does to INNER JOIN tables. The outer table (the left table in LEFT OUTER JOIN and the right table in RIGHT OUTER JOIN) is accessed first, followed by the inner table. This fixed join order could lead to execution plans that are less than optimal.
For more information about a query that contains INNER JOIN, see Microsoft Knowledge Base.

If your application runs a series of queries that are only different in some constants, you can improve performance by using a parameterized query. For example, to return orders by different customers, you can run the following query:

SELECT "Customer ID" FROM Orders WHERE "Order ID" = ?

Parameterized queries yield better performance by compiling the query only once and executing the compiled plan multiple times. Programmatically, you must hold on to the command object that contains the cached query plan. Destroying the previous command object and creating a new one destroys the cached plan, which requires the query to be recompiled. If you must run several parameterized queries in an interleaved manner, you can create several command objects, each caching the execution plan for a parameterized query. This way, you effectively avoid recompilation for all of the parameterized queries.
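The description above is from the client-side (command object) perspective. On full SQL Server, the analogous server-side way to parameterize an ad hoc statement is sp_executesql, which compiles the statement once and reuses the plan across executions. A minimal sketch, reusing the Orders query from above (the Order ID values are arbitrary examples):

--the parameterized statement is compiled once and the plan is reused for each execution
EXEC sp_executesql
     N'SELECT "Customer ID" FROM Orders WHERE "Order ID" = @OrderID',
     N'@OrderID INT',
     @OrderID = 10248

EXEC sp_executesql
     N'SELECT "Customer ID" FROM Orders WHERE "Order ID" = @OrderID',
     N'@OrderID INT',
     @OrderID = 10249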

The SQL Server Compact query processor is a powerful tool for querying data stored in your relational database. However, there is an intrinsic cost associated with any query processor. It must compile, optimize, and generate an execution plan before it starts doing the real work of performing the plan. This is particularly true with simple queries that finish quickly. Therefore, implementing the query yourself can sometimes provide vast performance improvement. If every millisecond counts in your critical component, we recommend that you consider the alternative of implementing the simple queries yourself. For large and complex queries, the job is still best left to the query processor.
For example, suppose you want to look up the customer ID for a series of orders arranged by their order IDs. There are two ways to accomplish this. First, you could follow these steps for each lookup:
  1. Open the Orders base table
  2. Find the row, using the specific "Order ID"
  3. Retrieve the "Customer ID"
Or you could issue the following query for each lookup:
SELECT "Customer ID" FROM Orders WHERE "Order ID" = <the specific order id>

The query-based solution is simpler but slower than the manual solution, because the SQL Server Compact query processor translates the declarative SQL statement into the same three operations that you could implement manually, and those three steps are then performed in sequence. Your choice of which method to use will depend on whether simplicity or performance is more important in your application.


Differentiate between user mode and kernel mode?


User Mode vs. Kernel Mode

Windows uses two processor modes: user mode and kernel mode. User application code runs in user mode; OS code and device drivers run in kernel mode.
The Intel x86 family of processors actually supports four operating modes (also known as rings), numbered 0 through 3. Each ring is isolated from the others by the hardware, so a crash in a less privileged ring does not destabilize the more privileged rings.

What is Copy-only Backup in SQL ?

A copy-only backup is a SQL Server backup that is independent of the sequence of conventional SQL Server backups. Usually, taking a backup changes the database and affects how later backups are restored. However, occasionally it is useful to take a backup for a special purpose without affecting the overall backup and restore procedures for the database. Copy-only backups serve this purpose; the transaction log is never truncated after a copy-only backup.

The types of copy-only backups are as follows:

Copy-only full backups (all recovery models)

A copy-only full backup cannot serve as a differential base or differential backup and does not affect the differential base.

Copy-only log backups (full recovery model and bulk-logged recovery model only)

A copy-only log backup preserves the existing log archive point and, therefore, does not affect the sequencing of regular log backups. A copy-only log backup can sometimes be useful for performing an online restore.
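As a quick illustration (the database name and file paths are placeholders, not from the original text):

--copy-only full backup: does not reset the differential base
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks_copyonly.bak'
WITH COPY_ONLY

--copy-only log backup: does not break the regular log backup chain
BACKUP LOG AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks_log_copyonly.trn'
WITH COPY_ONLY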

What is verbose log mode and how to enable it?

Verbose logging is a computer logging mode that records more information than the usual logging mode. (Verbose means "using more words than necessary".) Verbose logging options are usually enabled specifically for troubleshooting because they create large log files and can slow down performance.

How to hide an instance in SQL Server ?

In SQL Server Configuration Manager, expand SQL Server Network Configuration, right-click Protocols for <server instance>, and then select Properties.
On the Flags tab, in the HideInstance box, select Yes, and then click OK to close the dialog box. The change takes effect immediately for new connections.

How to recycle a SQL server log?


Use Master
go
 DBCC ERRORLOG
go
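DBCC ERRORLOG is undocumented; the documented alternative is the sp_cycle_errorlog procedure (and sp_cycle_agent_errorlog in msdb for the SQL Server Agent log):

--cycle the SQL Server error log without a service restart
EXEC master.dbo.sp_cycle_errorlog

--cycle the SQL Server Agent error log
EXEC msdb.dbo.sp_cycle_agent_errorlog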

What is VLF? How to get information reg VLF?

Each transaction log file is divided logically into smaller segments called virtual log files. Virtual log files are the unit of truncation for the transaction log. When a virtual log file no longer contains log records for active transactions, it can be truncated and the space becomes available to log new transactions.

The smallest size for a virtual log file is 256 kilobytes (KB). The minimum size for a transaction log is 512 KB, which provides two 256-KB virtual log files. The number and size of the virtual log files in a transaction log increase as the size of the log file increases.

The following DBCC command can be used to find out how many virtual log files (VLFs) are present in your log file.

DBCC LOGINFO

Explain Index rebuild and reorganize in SQL Server

This topic describes how to reorganize or rebuild a fragmented index in SQL Server 2012 by using SQL Server Management Studio or Transact-SQL. The SQL Server Database Engine automatically maintains indexes whenever insert, update, or delete operations are made to the underlying data. Over time these modifications can cause the information in the index to become scattered in the database (fragmented). Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does not match the physical ordering inside the data file. Heavily fragmented indexes can degrade query performance and cause your application to respond slowly.

You can remedy index fragmentation by reorganizing or rebuilding an index. For partitioned indexes built on a partition scheme, you can use either of these methods on a complete index or a single partition of an index. Rebuilding an index drops and re-creates the index. This removes fragmentation, reclaims disk space by compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in contiguous pages. When ALL is specified, all indexes on the table are dropped and rebuilt in a single transaction. Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and nonclustered indexes on tables and views by physically reordering the leaf-level pages to match the logical, left to right, order of the leaf nodes. Reorganizing also compacts the index pages. Compaction is based on the existing fill factor value.

The first step in deciding which defragmentation method to use is to analyze the index to determine the degree of fragmentation. By using the system function sys.dm_db_index_physical_stats, you can detect fragmentation in a specific index, all indexes on a table or indexed view, all indexes in a database, or all indexes in all databases.

For partitioned indexes, sys.dm_db_index_physical_stats also provides fragmentation information for each partition.
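A typical fragmentation check for the current database looks like the following sketch (the LIMITED scan mode is the fastest of the documented modes; SAMPLED and DETAILED return more detailed page-level data):

--fragmentation of all indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.fragment_count,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC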

The result set returned by the sys.dm_db_index_physical_stats function includes the following columns:

Column                            Description
avg_fragmentation_in_percent      The percent of logical fragmentation (out-of-order pages in the index).
fragment_count                    The number of fragments (physically consecutive leaf pages) in the index.
avg_fragment_size_in_pages        Average number of pages in one fragment in an index.
After the degree of fragmentation is known, use the following table to determine the best method to correct the fragmentation:

avg_fragmentation_in_percent value      Corrective statement
> 5% and <= 30%                         ALTER INDEX REORGANIZE
> 30%                                   ALTER INDEX REBUILD WITH (ONLINE = ON)*
* Rebuilding an index can be executed online or offline. Reorganizing an index is always executed online. To achieve availability similar to the reorganize option, you should rebuild indexes online.

ALTER INDEX <index name> ON <table name> REORGANIZE
ALTER INDEX <index name> ON <table name> REBUILD
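For example (the index, schema, and table names are placeholders):

--reorganize a moderately fragmented index (always an online operation)
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE

--rebuild a heavily fragmented index online (Enterprise Edition feature)
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (ONLINE = ON)

--rebuild every index on the table
ALTER INDEX ALL ON dbo.Orders REBUILD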

What is mean by Virtual memory?

One way to extend the amount of memory accessible to a program is to use disk. The area of the hard disk that stores the RAM image is called a page file. In short, virtual memory is a technique that lets processes execute even when they are not completely in main memory. It abstracts main memory into an extremely large, uniform array of storage, which is mapped onto physical memory and its backing store. The backing store is usually an area of the disk outside of the file system.

During bulk operations what will be logged in the transaction log?

Compared to the full recovery model, which fully logs all transactions, the bulk-logged recovery model minimally logs bulk operations while fully logging other transactions. The bulk-logged recovery model protects against media failure and, for bulk operations, provides the best performance and least log space usage.

Under the bulk-logged recovery model, if a log backup covers any bulk operations, the log backup contains both log records and the data pages that were changed by bulk operations. This is necessary to capture the results of the bulk-logged operations. The incorporated data extents can make a log backup very large. Additionally, backing up the log requires access to the data files that contain the bulk-logged transactions.

Using the bulk-logged recovery model for bulk operations reduces the amount of log space those operations consume, but it does not shrink log backups: a log backup taken under BULK_LOGGED includes both the log records and the data pages changed by bulk operations, which can actually make the log backup very large.
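A common pattern is to switch to bulk-logged only for the duration of the load and then switch back and take a log backup. A minimal sketch, with a placeholder database name and backup path:

--switch to bulk-logged for the duration of the bulk load
ALTER DATABASE Sales SET RECOVERY BULK_LOGGED

-- ... perform the bulk operations (BULK INSERT, SELECT INTO, index rebuilds, etc.) ...

--switch back and immediately back up the log to restore point-in-time capability
ALTER DATABASE Sales SET RECOVERY FULL
BACKUP LOG Sales TO DISK = N'D:\Backups\Sales_after_bulk.trn'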

Running a query and getting error" insufficient memory to run query" .How to troubleshoot?

If you look at the output of DBCC MEMORYSTATUS that is automatically logged to the error log when a 701 error is raised, it shows how much memory each memory clerk is consuming.
Additionally, if you query the sys.dm_os_memory_clerks dynamic management view (DMV) while the batch is executing, the single_pages_kb column for the USERSTORE_SXC cache shows continuous growth over a period of time, which eventually leads to the 701 error.
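A sketch of that check (the single_pages_kb column exists on SQL Server 2005/2008; on SQL Server 2012 and later the equivalent column is pages_kb):

--memory consumed by each memory clerk; watch USERSTORE_SXC grow while the batch runs
SELECT type,
       SUM(single_pages_kb) AS total_single_pages_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY total_single_pages_kb DESC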
Cause

The amount of memory allocated to store a request in SQL Server depends on:
  • The batch size (the number of RPCs per request)
  • The type of the parameters

For certain types of parameters (for example, sql_variant), SQL Server can save the requests in memory in a potentially inefficient manner. When a client sends a large batch of requests that use these types of parameters, multiple RPCs can be sent in one request. In this scenario, the server accumulates the whole request in memory before it is executed. This can lead to the 701 error discussed in the symptoms section.
Resolution

You can use one of the following workarounds:
  • Reduce batch sizes.
  • Change parameter types; for example, replace sql_variant parameters with other types.

RESTORE DATABASE WITH STOPAT IN SQL

We are using SQL Server 2005. We have a database, say "TESTSTOPAT", and this database is in the Full recovery model. Unfortunately, we don't have any backup for this database. Today someone deleted records from one of the tables.
Now we want to recover from those changes. What we did was take a full backup of the database and then try to restore it with a new name using the STOPAT option, but it fails with the errors below:

Msg 4338, Level 16, State 1, Line 1
The STOPAT clause specifies a point too early to allow this backup set to be restored. Choose a different stop point or use RESTORE DATABASE WITH RECOVERY to recover at the current point.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.


To be frank, until now I also hadn't looked at this option, i.e. the STOPAT clause with a full database restore, as previously this option was available for LOG restores only (in SQL 2000). I did some research, and based on that, below are my findings.


**************************************************
RESTORE DATABASE WITH STOPAT
**************************************************

Previously, in SQL 2000, the STOPAT clause was provided for RESTORE LOG, wherein you can "halt" the recovery process at a particular point in time. This feature is very useful for recovering from accidental user errors and the like.

Now, in SQL Server 2005, there is a STOPAT clause provided in the RESTORE DATABASE command as well. RESTORE DATABASE WITH STOPAT: wow, great, let's test it.

STEP :1
************

--Created a new database.

CREATE DATABASE TESTSTOPAT
GO


--Created a table in this newly created database and inserted values. Time when this query was executed was 14-Oct-2010 20:10:43.827.

USE TESTSTOPAT
GO
CREATE TABLE TESTSTOPAT (I INT, II INT)
GO
INSERT INTO TESTSTOPAT VALUES (1,1)
INSERT INTO TESTSTOPAT VALUES (2,2)
INSERT INTO TESTSTOPAT VALUES (3,3)
INSERT INTO TESTSTOPAT VALUES (4,4)
INSERT INTO TESTSTOPAT VALUES (5,5)
INSERT INTO TESTSTOPAT VALUES (6,6)
INSERT INTO TESTSTOPAT VALUES (7,7)
INSERT INTO TESTSTOPAT VALUES (8,8)
INSERT INTO TESTSTOPAT VALUES (9,9)
INSERT INTO TESTSTOPAT VALUES (10,10)
INSERT INTO TESTSTOPAT VALUES (11,11)
GO
SELECT GETDATE()
GO

Time : 2010-10-14 20:10:43.827


STEP :2
***********

--Then we execute a DELETE statement. Time when this query was executed was 14-Oct-2010 20:11:40.437.

DELETE FROM TESTSTOPAT WHERE II>9
GO
SELECT GETDATE()
GO

Time : 2010-10-14 20:11:40.437


STEP :3
***********

--Then we took a full database backup. Time when this backup occurs was 14-Oct-2010 20:13:14.640.

BACKUP DATABASE TESTSTOPAT TO DISK='C:\TESTSTOPATFULL.BAK'
GO
SELECT GETDATE()
GO

Time : 2010-10-14 20:13:14.640

STEP :4
***********

--Now lets try to restore this database, I want to restore it before execution of STEP 2 i.e. Delete statement as I want to recover from that DELETE operation.

RESTORE DATABASE [TESTSTOPAT2]
FROM DISK = N'C:\TESTSTOPATFULL.BAK'
WITH FILE = 1, MOVE N'TESTSTOPAT' TO N'C:\TESTSTOPAT2.mdf', MOVE N'TESTSTOPAT_LOG' TO N'C:\TESTSTOPAT2_LOG.LDF', NOUNLOAD, REPLACE, STATS = 10,STOPAT='OCT 14, 2010 20:11:00 PM'
GO

Output

Msg 4338, Level 16, State 1, Line 1
The STOPAT clause specifies a point too early to allow this backup set to be restored. Choose a different stop point or use RESTORE DATABASE WITH RECOVERY to recover at the current point.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.


Looking at the output, what we see is that the RESTORE is failing. But why? MS has provided the option of restoring a database with STOPAT, so why is this failing?

Here comes the reason
*****************************

As per Books Online:

A full backup (formerly known as a database backup) backs up the entire database, including part of the transaction log (so that the full backup can be recovered). Full backups represent the database at the time the backup completed. The transaction log included in the full backup allows it to be used to recover the database to the point in time at which the backup was completed.

That means a full database backup can only be restored to the time at which the backup completed, not to a point prior to that. So if this is the case, then:


• Without log backups, there is simply no way to achieve a true point-in-time restore.

• What SQL Server does when you specify STOPAT in a RESTORE DATABASE is test whether the full database backup is already ahead of the point in time you want to stop at. If it is, the command errors out (as it does above).

• It errors out to give you a clear signal that even if you restored this full backup and then restored subsequent transaction log backups on top (specifying STOPAT), those transaction log backups would fail with errors, since the database has already been rolled forward to a point in time ahead of the STOPAT.

• In that case, you would have wasted a lot of time and effort first restoring the database and then the transaction log backup, before finding out that you should have started with an earlier full backup.
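For reference, the sequence that does work, assuming an earlier full backup plus the subsequent log backups exist (the backup file names below are placeholders), is to restore the full backup WITH NORECOVERY and then stop the log restore at the desired time:

--restore an earlier full backup without recovering the database
RESTORE DATABASE TESTSTOPAT2
FROM DISK = N'C:\TESTSTOPAT_EARLIER_FULL.BAK'
WITH MOVE N'TESTSTOPAT' TO N'C:\TESTSTOPAT2.mdf',
     MOVE N'TESTSTOPAT_LOG' TO N'C:\TESTSTOPAT2_LOG.LDF',
     NORECOVERY, REPLACE
GO

--roll forward the log and stop just before the DELETE was executed
RESTORE LOG TESTSTOPAT2
FROM DISK = N'C:\TESTSTOPAT_LOG.TRN'
WITH STOPAT = '2010-10-14 20:11:00', RECOVERY
GO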

Capacity Management Script using SQL Query



How to Find Table Size of Database in SQL 2000,2005

/* This script will list the size of each table in a database.
   Works on SQL Server 2000 and SQL Server 2005. */

declare @id     int
declare @type   character(2)
declare @pages  int
declare @dbname sysname
declare @dbsize dec(15,0)
declare @bytesperpage dec(15,0)
declare @pagesperMB   dec(15,0)

create table #spt_space
(
    objid    int null,
    rows     int null,
    reserved dec(15) null,
    data     dec(15) null,
    indexp   dec(15) null,
    unused   dec(15) null
)

set nocount on

-- Create a cursor to loop through the user tables
declare c_tables cursor for
select    id
from      sysobjects
where   xtype = 'U'

open c_tables

fetch next from c_tables
into @id

while @@fetch_status = 0
begin

    /* Code from sp_spaceused */
    insert into #spt_space (objid, reserved)
        select objid = @id, sum(reserved)
        from sysindexes
        where indid in (0, 1, 255)
          and id = @id

    /* data: data pages plus text/image pages */
    select @pages = sum(dpages)
        from sysindexes
        where indid < 2
          and id = @id
    select @pages = @pages + isnull(sum(used), 0)
        from sysindexes
        where indid = 255
          and id = @id
    update #spt_space
        set data = @pages
        where objid = @id

    /* index: sum(used) where indid in (0, 1, 255) - data */
    update #spt_space
        set indexp = (select sum(used)
                      from sysindexes
                      where indid in (0, 1, 255)
                        and id = @id)
                     - data
        where objid = @id

    /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */
    update #spt_space
        set unused = reserved
                     - (select sum(used)
                        from sysindexes
                        where indid in (0, 1, 255)
                          and id = @id)
        where objid = @id

    /* row count comes from the clustered index or heap entry */
    update #spt_space
        set rows = i.rows
        from sysindexes i
        where i.indid < 2
          and i.id = @id
          and objid = @id

    fetch next from c_tables
    into @id
end

select TableName   = (select left(name, 60) from sysobjects where id = objid),
       Rows        = convert(char(11), rows),
       ReservedKB  = ltrim(str(reserved * d.low / 1024., 15, 0) + ' ' + 'KB'),
       DataKB      = ltrim(str(data * d.low / 1024., 15, 0) + ' ' + 'KB'),
       IndexSizeKB = ltrim(str(indexp * d.low / 1024., 15, 0) + ' ' + 'KB'),
       UnusedKB    = ltrim(str(unused * d.low / 1024., 15, 0) + ' ' + 'KB')
from #spt_space, master.dbo.spt_values d
where d.number = 1
  and d.type = 'E'
order by reserved desc

drop table #spt_space
close c_tables
deallocate c_tables