August 2015

Differentiate between user mode and kernel mode?


User Mode vs. Kernel Mode

Windows uses two processor modes: user mode and kernel mode. User application code runs in user mode; OS code and device drivers run in kernel mode.
The Intel x86 family of processors actually supports four operating modes (also known as rings), numbered 0 through 3, with ring 0 the most privileged. Each ring is isolated from the others by the hardware, so a crash in a less-privileged ring cannot destabilize code running in a more-privileged ring. Windows uses only two of these rings: ring 0 for kernel mode and ring 3 for user mode.

What is a Copy-only Backup in SQL Server?

A copy-only backup is a SQL Server backup that is independent of the sequence of conventional SQL Server backups. Usually, taking a backup changes the database and affects how later backups are restored. However, occasionally it is useful to take a backup for a special purpose without affecting the overall backup and restore procedures for the database. Copy-only backups serve this purpose. The transaction log is never truncated after a copy-only backup.

The types of copy-only backups are as follows:

Copy-only full backups (all recovery models)

A copy-only full backup cannot serve as a differential base or differential backup and does not affect the differential base.

Copy-only log backups (full recovery model and bulk-logged recovery model only)

A copy-only log backup preserves the existing log archive point and, therefore, does not affect the sequencing of regular log backups. A copy-only log backup can sometimes be useful for performing an online restore.
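As an illustration of both types described above, the following commands take copy-only backups (the database name and file paths are placeholders):

```sql
-- Copy-only full backup: does not reset the differential base
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_CopyOnly.bak'
WITH COPY_ONLY;

-- Copy-only log backup: does not truncate the log or break the log backup chain
BACKUP LOG AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_CopyOnly.trn'
WITH COPY_ONLY;
```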

What is verbose log mode and how to enable it?

Verbose logging is a logging mode that records more information than the usual logging mode. (Verbose means "using more words than necessary".) Verbose logging options are usually enabled only for troubleshooting, because they create large log files and can slow performance. In SQL Server, verbose logging is enabled per component; for example, for SQL Server Agent you can right-click SQL Server Agent in SSMS, select Properties, and on the General page enable "Include execution trace messages" to capture verbose Agent output in its error log.

How to hide an instance in SQL Server?

In SQL Server Configuration Manager, expand SQL Server Network Configuration, right-click Protocols for <server instance>, and then select Properties.
On the Flags tab, in the HideInstance box, select Yes, and then click OK to close the dialog box. The change takes effect immediately for new connections.
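The same flag can also be set from T-SQL by writing the HideInstance registry value with the undocumented xp_instance_regwrite procedure. Treat this as a sketch only, and prefer Configuration Manager where possible:

```sql
-- Hide the instance by setting HideInstance = 1 in the instance's network registry key
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\Microsoft SQL Server\MSSQLServer\SuperSocketNetLib',
    N'HideInstance',
    REG_DWORD,
    1;
```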

How to recycle a SQL server log?


USE master
GO
DBCC ERRORLOG  -- undocumented; the documented equivalent is EXEC sp_cycle_errorlog
GO

What is a VLF? How to get information regarding VLFs?

Each transaction log file is divided logically into smaller segments called virtual log files. Virtual log files are the unit of truncation for the transaction log. When a virtual log file no longer contains log records for active transactions, it can be truncated and the space becomes available to log new transactions.

The smallest size for a virtual log file is 256 kilobytes (KB). The minimum size for a transaction log is 512 KB, which provides two 256-KB virtual log files. The number and size of the virtual log files in a transaction log increase as the size of the log file increases.

The following DBCC command reports how many virtual log files (VLFs) are present in your log file.

DBCC LOGINFO
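On SQL Server 2016 SP2 and later, the sys.dm_db_log_info dynamic management function offers a documented alternative to DBCC LOGINFO. A sketch that counts VLFs for every database:

```sql
-- Count virtual log files per database (SQL Server 2016 SP2 and later)
SELECT d.name AS database_name,
       COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC;
```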

Explain Index rebuild and reorganize in SQL Server

This topic describes how to reorganize or rebuild a fragmented index in SQL Server 2012 by using SQL Server Management Studio or Transact-SQL. The SQL Server Database Engine automatically maintains indexes whenever insert, update, or delete operations are made to the underlying data. Over time these modifications can cause the information in the index to become scattered in the database (fragmented). Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does not match the physical ordering inside the data file. Heavily fragmented indexes can degrade query performance and cause your application to respond slowly.

You can remedy index fragmentation by reorganizing or rebuilding an index. For partitioned indexes built on a partition scheme, you can use either of these methods on a complete index or a single partition of an index. Rebuilding an index drops and re-creates the index. This removes fragmentation, reclaims disk space by compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in contiguous pages. When ALL is specified, all indexes on the table are dropped and rebuilt in a single transaction. Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and nonclustered indexes on tables and views by physically reordering the leaf-level pages to match the logical, left to right, order of the leaf nodes. Reorganizing also compacts the index pages. Compaction is based on the existing fill factor value.

The first step in deciding which defragmentation method to use is to analyze the index to determine the degree of fragmentation. By using the system function sys.dm_db_index_physical_stats, you can detect fragmentation in a specific index, all indexes on a table or indexed view, all indexes in a database, or all indexes in all databases.

For partitioned indexes, sys.dm_db_index_physical_stats also provides fragmentation information for each partition.

The result set returned by the sys.dm_db_index_physical_stats function includes the following columns:

Column                          Description
avg_fragmentation_in_percent    The percent of logical fragmentation (out-of-order pages in the index).
fragment_count                  The number of fragments (physically consecutive leaf pages) in the index.
avg_fragment_size_in_pages      Average number of pages in one fragment of an index.
After the degree of fragmentation is known, use the following table to determine the best method to correct the fragmentation:

avg_fragmentation_in_percent value    Corrective statement
> 5% and <= 30%                       ALTER INDEX REORGANIZE
> 30%                                 ALTER INDEX REBUILD WITH (ONLINE = ON)*

* Rebuilding an index can be executed online or offline. Reorganizing an index is always executed online. To achieve availability similar to the reorganize option, you should rebuild indexes online.

ALTER INDEX <index name> ON <table name> REBUILD;
ALTER INDEX <index name> ON <table name> REORGANIZE;
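Putting the guidance above together, here is a sketch that inspects fragmentation for one table's indexes and prints the suggested statement based on the 5%/30% thresholds (the schema and table name dbo.MyTable are placeholders):

```sql
-- Check fragmentation and suggest REORGANIZE vs. REBUILD per the thresholds above
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       CASE
           WHEN ps.avg_fragmentation_in_percent > 30
               THEN 'ALTER INDEX ' + QUOTENAME(i.name)
                    + ' ON dbo.MyTable REBUILD WITH (ONLINE = ON);'
           WHEN ps.avg_fragmentation_in_percent > 5
               THEN 'ALTER INDEX ' + QUOTENAME(i.name)
                    + ' ON dbo.MyTable REORGANIZE;'
           ELSE '-- no action needed'
       END AS suggested_statement
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'),
                                    NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id = ps.index_id
WHERE ps.index_id > 0;  -- skip the heap entry
```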

What is meant by virtual memory?

One way to extend the amount of memory accessible by a program is to use disk. The area of the hard disk that stores the RAM image is called a page file. In short, virtual memory is a technique for executing processes that may not be completely in main memory. It abstracts main memory into an extremely large, uniform array of storage, which is mapped onto physical memory and its backing store. The backing store is usually an area of the disk outside of the file system.

During bulk operations what will be logged in the transaction log?

Compared to the full recovery model, which fully logs all transactions, the bulk-logged recovery model minimally logs bulk operations while fully logging other transactions. The bulk-logged recovery model protects against media failure and, for bulk operations, provides the best performance and least log space usage.

Under the bulk-logged recovery model, if a log backup covers any bulk operations, the log backup contains both log records and the data pages that were changed by bulk operations. This is necessary to capture the results of the bulk-logged operations. The incorporated data extents can make a log backup very large. Additionally, backing up the log requires access to the data files that contain the bulk-logged transactions.

Using the bulk-logged recovery model for bulk operations reduces what is written to the transaction log itself, so the log file grows less. It does not shrink log backups: a log backup taken under BULK_LOGGED includes both the log records and the data pages changed by bulk operations, which can actually make the log backup very large.
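A typical workflow uses the bulk-logged model only around the bulk operation itself, then resumes the log backup chain. A sketch (database, table, and file names are placeholders):

```sql
-- Switch to bulk-logged just for the bulk load, then return to full
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

BULK INSERT dbo.SalesStaging
FROM 'C:\Data\sales.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Back up the log immediately so point-in-time recovery is re-established
BACKUP LOG SalesDB TO DISK = 'C:\Backups\SalesDB_postbulk.trn';
```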

Running a query fails with the error "insufficient memory to run query". How to troubleshoot?

If you look at the output of DBCC MEMORYSTATUS, which is automatically logged to the error log on 701 error messages, it will show which memory clerks are consuming memory. Additionally, if you query the sys.dm_os_memory_clerks dynamic management view (DMV) while the batch is executing, the single_pages_kb column for the USERSTORE_SXC cache shows continuous growth over the period leading up to the 701 error.

Cause:
The amount of memory allocated to store a request in SQL Server depends on:
    The batch size (number of RPCs per request)
    Type of parameters.
For certain types of parameters (for example sql_variant), SQL Server can save the requests in memory in a potentially inefficient manner. When a client sends a large batch of requests that use these types of parameters, multiple RPCs can be sent in one request. In this scenario, the server accumulates the whole request in memory before it is executed. This could potentially lead to 701 error discussed in symptoms section.
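A sketch of the memory-clerk check described above, assuming a pre-2012 instance where the column is still named single_pages_kb (SQL Server 2012 and later renamed it pages_kb):

```sql
-- Monitor the USERSTORE_SXC cache while the suspect batch runs;
-- continuous growth here points at the 701 scenario described above
SELECT [type],
       SUM(single_pages_kb) AS single_pages_kb
FROM sys.dm_os_memory_clerks
WHERE [type] = 'USERSTORE_SXC'
GROUP BY [type];
```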
Resolution:
 You can use one of the following workarounds:
Reduce batch sizes.
Change parameter types, for example, replace sql_variants with other types.

RESTORE DATABASE WITH STOPAT IN SQL

We are using SQL Server 2005. We have a database, say "TESTSTOPAT", which is in the full recovery model. Unfortunately we don't have any backups for this database. Today someone deleted records from one of the tables.
Now we want to recover from those changes. What we did was take a full backup of the database and then try to restore it under a new name with the STOPAT option, but it fails with the errors below:

Msg 4338, Level 16, State 1, Line 1
The STOPAT clause specifies a point too early to allow this backup set to be restored. Choose a different stop point or use RESTORE DATABASE WITH RECOVERY to recover at the current point.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.


To be frank, until now I also hadn't looked at this option, i.e. the STOPAT clause with a full database restore, as previously this option was available for log restores only (in SQL 2000). I did some research, and my findings are below.


**************************************************
RESTORE DATABASE WITH STOPAT
**************************************************

Previously, in SQL 2000, the STOPAT clause was provided for RESTORE LOG, wherein you can "halt" the recovery process at a particular point in time. This feature is very useful for recovering from accidental user errors and the like.

Now in SQL Server 2005, there is a STOPAT clause provided in the RESTORE DATABASE command as well. RESTORE DATABASE WITH STOPAT sounds great, so let's test it.

STEP :1
************

--Created a new database.

CREATE DATABASE TESTSTOPAT
GO


--Created a table in this newly created database and inserted values. The time when this query was executed was 14-Oct-2010 20:10:43.827.

USE TESTSTOPAT
GO
CREATE TABLE TESTSTOPAT (I INT, II INT)
GO
INSERT INTO TESTSTOPAT VALUES (1,1)
INSERT INTO TESTSTOPAT VALUES (2,2)
INSERT INTO TESTSTOPAT VALUES (3,3)
INSERT INTO TESTSTOPAT VALUES (4,4)
INSERT INTO TESTSTOPAT VALUES (5,5)
INSERT INTO TESTSTOPAT VALUES (6,6)
INSERT INTO TESTSTOPAT VALUES (7,7)
INSERT INTO TESTSTOPAT VALUES (8,8)
INSERT INTO TESTSTOPAT VALUES (9,9)
INSERT INTO TESTSTOPAT VALUES (10,10)
INSERT INTO TESTSTOPAT VALUES (11,11)
GO
SELECT GETDATE()
GO

Time : 2010-10-14 20:10:43.827


STEP :2
***********

--Then we execute a DELETE statement. Time when this query was executed was 14-Oct-2010 20:11:40.437.

DELETE FROM TESTSTOPAT WHERE II>9
GO
SELECT GETDATE()
GO

Time : 2010-10-14 20:11:40.437


STEP :3
***********

--Then we took a full database backup. The time when this backup was taken was 14-Oct-2010 20:13:14.640.

BACKUP DATABASE TESTSTOPAT TO DISK='C:\TESTSTOPATFULL.BAK'
GO
SELECT GETDATE()
GO

Time : 2010-10-14 20:13:14.640

STEP :4
***********

--Now let's try to restore this database. I want to restore it to before the execution of STEP 2, i.e. the DELETE statement, as I want to recover from that DELETE operation.

RESTORE DATABASE [TESTSTOPAT2]
FROM DISK = N'C:\TESTSTOPATFULL.BAK'
WITH FILE = 1, MOVE N'TESTSTOPAT' TO N'C:\TESTSTOPAT2.mdf', MOVE N'TESTSTOPAT_LOG' TO N'C:\TESTSTOPAT2_LOG.LDF', NOUNLOAD, REPLACE, STATS = 10,STOPAT='OCT 14, 2010 20:11:00 PM'
GO

Output

Msg 4338, Level 16, State 1, Line 1
The STOPAT clause specifies a point too early to allow this backup set to be restored. Choose a different stop point or use RESTORE DATABASE WITH RECOVERY to recover at the current point.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.


Looking at the output, we see that the RESTORE is failing. But why? Microsoft has provided the option of restoring a database with STOPAT, so why is this failing?

Here comes the reason
*****************************

As per Books Online:

A full backup (formerly known as a database backup) backs up the entire database, including part of the transaction log (so that the full backup can be recovered). Full backups represent the database at the time the backup completed. The transaction log included in the full backup allows it to be used to recover the database to the point in time at which the backup was completed.

That means a full database backup can only be restored to the time at which the backup completed, not prior to that. So if this is the case, then:


• Without log backups, there is simply no way to achieve a true point-in-time restore.

• What specifying STOPAT in RESTORE DATABASE does is test whether the full database backup is already ahead of the point in time you want to stop at. If it is, the command errors out (as it does above).

• It errors out to give you a clear signal that even if you restored this full backup and then restored subsequent transaction log backups on top (specifying STOPAT), those transaction log backups would fail with errors, since the database has already rolled forward to a point in time ahead of the STOPAT.

• In the above case, you would have wasted a lot of time and effort restoring the database and then the transaction log backup, only to find out that you should have started with an earlier full backup.
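For contrast, the sequence that does work, assuming the full recovery model and an unbroken log backup chain, restores a full backup taken before the mistake WITH NORECOVERY and applies STOPAT to the log restore. A sketch with placeholder file names and times:

```sql
-- Restore the last full backup taken BEFORE the mistake, leaving the database restoring
RESTORE DATABASE TESTSTOPAT2
FROM DISK = N'C:\TESTSTOPAT_earlier_full.bak'
WITH NORECOVERY, REPLACE;

-- Roll the log forward, stopping just before the DELETE was executed
RESTORE LOG TESTSTOPAT2
FROM DISK = N'C:\TESTSTOPAT_log.trn'
WITH STOPAT = 'Oct 14, 2010 20:11:00', RECOVERY;
```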

Capacity Management Script using SQL Query



How to Find Table Size of Database in SQL 2000,2005

/* This script will list table sizes in a database.
   Works on SQL Server 2005 and SQL Server 2000. */

declare @id int
declare @pages int

create table #spt_space
(
    objid    int null,
    rows     int null,
    reserved dec(15) null,
    data     dec(15) null,
    indexp   dec(15) null,
    unused   dec(15) null
)

set nocount on

-- Create a cursor to loop through the user tables
declare c_tables cursor for
select id
from sysobjects
where xtype = 'U'

open c_tables

fetch next from c_tables into @id

while @@fetch_status = 0
begin

    /* Code from sp_spaceused */
    insert into #spt_space (objid, reserved)
    select objid = @id, sum(reserved)
    from sysindexes
    where indid in (0, 1, 255)
      and id = @id

    /* data: dpages for the heap or clustered index, plus text/image pages */
    select @pages = sum(dpages)
    from sysindexes
    where indid < 2
      and id = @id

    select @pages = @pages + isnull(sum(used), 0)
    from sysindexes
    where indid = 255
      and id = @id

    update #spt_space
    set data = @pages
    where objid = @id

    /* index: sum(used) where indid in (0, 1, 255) - data */
    update #spt_space
    set indexp = (select sum(used)
                  from sysindexes
                  where indid in (0, 1, 255)
                    and id = @id) - data
    where objid = @id

    /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */
    update #spt_space
    set unused = reserved - (select sum(used)
                             from sysindexes
                             where indid in (0, 1, 255)
                               and id = @id)
    where objid = @id

    update #spt_space
    set rows = i.rows
    from sysindexes i
    where i.indid < 2
      and i.id = @id
      and objid = @id

    fetch next from c_tables into @id
end

select TableName   = (select left(name, 60) from sysobjects where id = objid),
       Rows        = convert(char(11), rows),
       ReservedKB  = ltrim(str(reserved * d.low / 1024., 15, 0) + ' KB'),
       DataKB      = ltrim(str(data * d.low / 1024., 15, 0) + ' KB'),
       IndexSizeKB = ltrim(str(indexp * d.low / 1024., 15, 0) + ' KB'),
       UnusedKB    = ltrim(str(unused * d.low / 1024., 15, 0) + ' KB')
from #spt_space, master.dbo.spt_values d
where d.number = 1
  and d.type = 'E'
order by reserved desc

drop table #spt_space
close c_tables
deallocate c_tables

Resource Governor

The Resource Governor in SQL Server 2008 allows workloads to be differentiated and prioritized. This can be incredibly useful when you need to guarantee that certain critical processes have enough resources to operate efficiently, while still sharing resources with other less important jobs. It can also be used to insulate key processes from runaway queries and to prioritize queries. Prior to the Resource Governor, you could use multiple instances and a careful array of configuration values (such as affinity masks) to divide up resources. This solution was less than ideal because, in most cases, the resources used by one instance were reserved and could not be shared with other instances. The Resource Governor solves this problem by sharing resources when they are available and only kicking in to enforce limits when there is resource contention.

At this time, the Resource Governor is only available in the Enterprise, Developer, and Evaluation versions of SQL Server 2008.

The Resource Governor comprises three key components: a classifier function that differentiates workloads, resource pools that describe limits for shared server resources, and workload groups that are used to enforce policies for similar workloads. The figure shows how these components work together.



Configuring the Resource Governor

The Resource Governor must be configured before you can use it in a meaningful way. Configuration consists of the following four steps:
1. Configure resource pools.
2. Configure workload groups.
3. Create a classifier function.
4. Assign a classifier function and reconfigure.

1 Configure Resource Pools
Resource pools are used to define logical boundaries for physical resources. It is helpful to think of a resource pool as a kind of virtual server instance. Each pool allows you to specify minimum and maximum utilization percentages for CPU and memory usage. The minimum values represent the minimum guaranteed resource availability, and the maximum values represent the shared resource limit.
It is normal to see resource usage for a particular pool exceed the defined maximum value. If a resource is not required by another pool, then whatever is available to SQL Server as a whole is free to be used. The maximums are only enforced when the server has to divvy up resources among competing pools. Two resource pools are automatically created by SQL Server: the internal pool and the default pool.


The internal pool is used by SQL Server itself for critical system tasks and cannot be altered in any way. It is an unrestricted pool and will consume resources as necessary, even if it means violating the limits established for the other pools. The default pool is used for everything else that has not been explicitly placed into a custom pool. The default pool cannot be created or deleted, but it can be reconfigured to limit resource use.
Additional resource pools can be created as needed, but the total of all minimum values for a resource cannot exceed 100 percent, and the total of all maximum values for a resource must be between the total minimum and 100 percent. Resource pools can be created, altered, or dropped from SSMS, or by using DDL statements. For example, the following command will create a new resource pool that is limited to 50 percent of the total system memory and 25 percent of the total CPU resources:
 

CREATE RESOURCE POOL poolFinance
WITH
(
MAX_CPU_PERCENT=25,
MAX_MEMORY_PERCENT=50
);

2 Configure Workload Groups
Workload groups are used to define policies that apply to similar requests as determined by the classifier function. There are two built-in workload groups, the internal group and the default group, which are assigned to the internal pool and the default pool, respectively. The internal group is used by SQL Server itself for critical tasks and cannot be changed. The default group is used for everything else that has not been explicitly assigned to a user-defined group. The default group cannot be moved or deleted; however, you can configure it. Additional user-defined workload groups can be created as needed and assigned to either the default pool or a user-defined pool. If necessary, user-defined workload groups can even be moved between pools.
The following policies (parameters) are available when creating a workload group:
a) IMPORTANCE — This indicates the relative importance of tasks within a given resource pool. Valid values are HIGH, MEDIUM, and LOW. The default value is MEDIUM.
b) REQUEST_MAX_MEMORY_GRANT_PERCENT — This is the maximum memory that a single request can use from the pool. If insufficient memory is available, then the request will be held until a memory grant becomes available or the request times out. The default value is 25 percent.
c) REQUEST_MAX_CPU_TIME_SEC — This setting specifies the maximum CPU time that a request can use. Exceeding this value will not stop a request from processing; instead, a CPU Threshold Exceeded event will be raised, and the request will continue. The default is 0, which means unlimited.
d) REQUEST_MEMORY_GRANT_TIMEOUT_SEC — This parameter sets the number of seconds that a request will wait for a memory grant before failing. The default is 0, which uses an internal calculation to determine the value based on the query cost.
e) MAX_DOP — This sets the maximum degree of parallelism for requests. This setting overrides the server "max degree of parallelism" setting and sets an upper limit on the MAXDOP query hint. The default is 0, which uses the system default setting.
f) GROUP_MAX_REQUESTS — Sets the maximum number of simultaneous requests for the group. The default is 0, which means unlimited.

The following example will create a workload group and assign it to the resource pool poolFinance:

CREATE WORKLOAD GROUP wrkgroupFinance
WITH
(
IMPORTANCE = MEDIUM,
REQUEST_MAX_MEMORY_GRANT_PERCENT = 50,
REQUEST_MAX_CPU_TIME_SEC = 0,
REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 0,
MAX_DOP = 4,
GROUP_MAX_REQUESTS = 25
)
USING poolFinance;

3) Create a Classifier Function

The classifier function is used by the Resource Governor to decide which workload group to use for an incoming session. When the Resource Governor is enabled, this function is executed after authentication and any logon triggers. It must return the name of the workload group to assign the incoming session to. If the classifier function fails for any reason or returns an invalid group name, then the session will be assigned to the default group.

Make sure to test your classifier function before putting it into production. A poorly written classifier function can render the system unusable by causing all new sessions to time out. In addition, make sure to enable the Dedicated Administrator Connection (DAC). The DAC bypasses the classifier function and can be used to gain access to the server if there is a problem.

Classifier functions are subject to a few special conditions:

a) They should always be created with server scope, meaning they should reside in the master database.
b) Only one classifier function can be active at any point in time, and the active function cannot be dropped from the database.
c) The classifier function must finish quickly to avoid causing connection time-outs.

The following table includes several functions that are useful in classifying workload groups. Included are a few functions that are new to SQL Server 2008 and are intended specifically for this task:

Function              Description
HOST_NAME             Returns the name of the workstation.
APP_NAME              Returns the name of the application; however, not every application sets this value.
SUSER_NAME            Returns the login name of the user in the syslogins table.
SUSER_SNAME           Returns the login name of the user based on their security identifier.
IS_SRVROLEMEMBER      Determines if the login is a member of a fixed server role.
IS_MEMBER             Determines if the login is a member of a Windows group or database role.
LOGINPROPERTY         Returns information about the login, including the default database.
CONNECTIONPROPERTY    Returns information about the connection that the request originated from, including the IP address and authentication mode.
ORIGINAL_DB_NAME      Returns the name of the database that was specified in the connection string.
The following example demonstrates how to create a classifier function:

USE master
GO
CREATE FUNCTION fnTestClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grpName SYSNAME
    IF (SUSER_SNAME() = 'sa')
        SET @grpName = 'wrkgroupAdmin'
    ELSE IF (APP_NAME() LIKE '%Logistics%')
        SET @grpName = 'wrkgroupLogDep'
    ELSE IF (APP_NAME() LIKE '%REPORT SERVER%')
        SET @grpName = 'wrkgroupReports'
    ELSE
        SET @grpName = 'default'
    RETURN @grpName
END;

4) Assign the Classifier Function and Reconfigure

The final step in configuring the Resource Governor is to assign the classifier function. Before completing the configuration, make sure that you have thoroughly tested the function, and also make sure to enable the Dedicated Administrator Connection just in case something goes wrong.
The following example shows how to complete the configuration:

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnTestClassifier)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
The new configuration should take effect immediately.
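Once the Resource Governor is active, its runtime state can be checked with the Resource Governor DMVs. A quick sketch:

```sql
-- Current pools with their configured CPU and memory limits
SELECT pool_id, name,
       min_cpu_percent, max_cpu_percent,
       min_memory_percent, max_memory_percent
FROM sys.dm_resource_governor_resource_pools;

-- Workload groups, the pools they belong to, and their relative importance
SELECT group_id, name, pool_id, importance
FROM sys.dm_resource_governor_workload_groups;
```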

Migrating Logins & Users in SQL Server 2005, 2008, 2008 R2, 2012, 2014

Moving a database to a new server
When you move a database between SQL Server instances the database-level Users and their permissions will also be moved as this information is stored in database-level system tables.
What isn’t automatically transferred is information about the Users’ parent Logins, so you will need to check whether:
  1. The necessary Logins exist on the new server
  2. The Logins have the correct server-level permissions
  3. The User and Login Security Identifiers (SIDs) match (issue for SQL Authenticated Logins only)
  4. The passwords are correct (issue for SQL Authenticated Logins only)
Scripting Login information
Prior to a database move or a complete server migration/upgrade you’ll want to script out statements to 1) create the logins, 2) add them to any server level roles and 3) grant (or deny) them any additional permissions.
This can be accomplished with the script logins_script_20150813.sql. You can adjust the WHERE clause of the query that populates the @Logins table (see below) to filter for Logins of particular interest. You must execute the entire script as one batch as it uses table variables which have a batch-level scope.
Update this portion of the larger script to build statements for specific Logins:
INSERT INTO @Logins (PrincipalId, Name, SID, IsDisabled, Type, PasswordHash)
SELECT p.principal_id, p.name, p.sid, p.is_disabled, p.type, l.password_hash
FROM sys.server_principals p
LEFT JOIN sys.sql_logins l ON l.principal_id = p.principal_id
WHERE p.type IN ('U','G','S') AND
      p.name <> N'sa' AND
      p.name NOT LIKE N'##%' AND
      p.name NOT LIKE N'distributor_admin' AND
      p.name NOT LIKE N'NT [SA]%';

The script only generates the commands you’ll need to recreate the logins on another server. It does not execute any of the commands. It’s very lightweight and will run quickly.
For SQL Authenticated accounts the CREATE LOGIN scripts will include the hashed passwords and SIDs. It’s not possible to un-hash passwords, but this mechanism will transfer the passwords correctly to a new server.
For Windows Authenticated accounts passwords and SIDs are controlled outside of SQL Server. A given account will automatically use the same SID and password across all servers where it exists as a Login.
Applying the Login Scripts
On the destination server only execute the scripted commands that you need. It’s a security risk to grant server-level access to users who do not need it.
If some of the logins already exist on the destination server script them out (using logins_script_2015051.sql) then compare the password hashes, SIDs and permissions with those from the source server. You’ll want to be careful not to break existing functionality on the destination server to accommodate a newly restored database.
Check for Orphaned Users
After adding and updating the Logins, execute the script users_orphaned.sql against the newly restored database to check for database Users that don't have a corresponding Login on the destination server.
-- Check for database users without parent logins on the server
SELECT name,type_desc,create_date,modify_date,sid
FROM sys.database_principals
WHERE typeIN('U','G','S')AND
name NOTIN('dbo','guest','INFORMATION_SCHEMA','sys')AND
name NOTIN(SELECT name
FROM sys.server_principals);

You’ll need to create Logins for any orphaned users that are needed for your applications. Consider dropping any Users you determine are no longer needed in the database.
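If an orphaned SQL-authenticated user is still needed, one option is to recreate its login reusing the user's SID so the two match immediately. A sketch in which the login name, password, and SID value are all placeholders:

```sql
-- Recreate the missing login with the orphaned user's SID
-- (take the SID value from the orphaned-users query above)
CREATE LOGIN AppUser
WITH PASSWORD = 'StrongP@ssw0rd!',
     SID = 0x1A2B3C4D5E6F70801A2B3C4D5E6F7080;
```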
Synchronizing SIDs
As a last step, execute the script users_logins_sync_SIDs.sql against the newly restored database to check for database users having SIDs that differ from those of their corresponding Logins. The script assumes the Logins and Users will have matching names. This is typical, but not required.
SELECT l.name AS LoginUserName, l.sid AS LoginSID, u.sid AS UserSID,
       'ALTER USER ' + QUOTENAME(u.name) + ' WITH LOGIN = ' +
       QUOTENAME(l.name) + ';' AS SyncCmd
FROM sys.server_principals l
INNER JOIN sys.database_principals u ON u.name = l.name
WHERE l.type = 'S' AND
      u.sid <> l.sid;

The script will generate the ALTER USER statements necessary to update the Users’ SIDs to match those of their corresponding Logins. Review the statements for correctness before executing them.