Monday, July 27, 2015

SQL Max Memory Limit Too Low: Increase SQL Server Maximum Memory from the Command Prompt

SQL Server is well known for being a memory hog. When installed, the default configuration allocates a ridiculous amount of memory (2,147,483,647 MB, effectively unlimited) to the instance. In effect, this grants SQL Server access to all of the memory on the server, which it will try to use, at times at the expense of operating system tasks. Best practice is therefore to reduce the memory allocated to the instance and reserve some for the OS, but be careful! If you accidentally enter 10 thinking you’re allocating 10 GB to SQL Server, or simply hit Return too early, you’re in for a rude awakening.

The Maximum Server Memory setting is measured in megabytes. If you allocate too little memory (such as 10 MB), you will bring the SQL instance and its databases to their knees and won’t be able to get back into SQL Server Management Studio to correct it. If you try, you’ll get the following error message:

A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 – No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)

I ran into this recently on one of my production SQL Server 2012 boxes; it took a few minutes for the users to realize that they were being kicked out of the database. One of my coworkers had set it to 28 MB instead of 28 GB. We tried to stop SQL Server and start it from the command prompt; the OS showed a pending task, but the service couldn't be started. After waiting for 10 minutes, we decided to reboot the computer. In a nutshell, these were the steps we followed:

  1. Stop all SQL Server services through services.msc; if you set the memory this low, SQL Server Configuration Manager won't work.
  2. Run CMD as administrator and run sqlservr -f -m"SQLCMD" (the -f switch starts the instance with a minimal configuration, and -m"SQLCMD" allows connections only from the SQLCMD client).
  3. If you see the same connection error, follow steps 4 and 5; otherwise go to step 6.
  4. Disable automatic start for the SQL Server service (again through services.msc) and restart the server.
  5. Once the system is back online, run step 2 again; you should now be able to connect to your SQL instance.


   6.  Now leave the above command prompt open, open another command prompt as admin, and run the following statements; press Ctrl+C to close the connection when you are done.

SQLCMD
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'max server memory', 4096;
GO
RECONFIGURE;
GO


7. Close the command prompts

8.  Set the services back to automatic start, and start them from services.msc

9. Connect to the server using SSMS and verify the memory settings.
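You can also verify the new value straight from a query window; a quick check against sys.configurations looks like this:

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';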

Thursday, July 16, 2015

What's the best PAGE_VERIFY setting?

I always set this value to CHECKSUM. When CHECKSUM is enabled for the PAGE_VERIFY database option, the SQL Server Database Engine calculates a checksum over the contents of the whole page, and stores the value in the page header when a page is written to disk. When the page is read from disk, the checksum is recomputed and compared to the checksum value that is stored in the page header. This helps provide a high level of data-file integrity.
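To see what each database is currently using, sys.databases exposes the setting directly:

SELECT name, page_verify_option_desc
FROM sys.databases
ORDER BY name;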

The following query generates a script to change the PAGE_VERIFY option to CHECKSUM for every database where it is currently set to something else.

-- Generate ALTER DATABASE statements to change Page Verify option to CHECKSUM
SELECT N'ALTER DATABASE [' + db.name + N'] SET PAGE_VERIFY CHECKSUM WITH NO_WAIT;'
FROM sys.databases AS db
WHERE db.page_verify_option_desc <> N'CHECKSUM';

Note: For an existing database, the change won't take effect on every page immediately. The database engine only writes the checksum when a page is next written to disk (for example, after it is modified by a DML statement), so pages that are never rewritten keep their old protection.
For more info check this post  http://www.littlekendra.com/2011/01/25/pageverify/

Tuesday, March 24, 2015

What causes Multi-Page allocations?

In SQL Server versions before SQL 2012, single-page and multi-page allocations are handled by different components: the Single Page Allocator (which is responsible for buffer pool allocations and is governed by 'max server memory') and the Multi-Page Allocator (MPA), which handles allocations larger than an 8 KB page. If there are many multi-page allocations, this affects how much memory needs to be reserved outside 'max server memory', which may in turn require setting the -g (memory_to_reserve) startup parameter.

So what kinds of queries result in MPA activity? Here are a few scenarios.

1. A workload that has stored procedures with a large number of parameters (say more than 100, or even 500), invoked via large ad hoc batches where each call passes a different set of parameter values. A plan is cached for each such “exec proc” batch, and that plan results in MPA.

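A minimal sketch of the pattern (hypothetical names; in the problem scenario the parameter list runs into the hundreds):

-- Hypothetical procedure; imagine hundreds of parameters instead of five
CREATE PROCEDURE dbo.usp_ManyParams
    @p1 INT, @p2 INT, @p3 INT, @p4 INT, @p5 INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT @p1 + @p2 + @p3 + @p4 + @p5 AS Total;
END;
GO

-- Ad hoc batch invoking it, with a different value list on every call
EXEC dbo.usp_ManyParams @p1 = 1, @p2 = 2, @p3 = 3, @p4 = 4, @p5 = 5;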

2. Another workload would be large ad hoc batches of the form:

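For example (a hypothetical sketch; the real batches would contain hundreds of statements sent to the server as a single batch):

INSERT INTO dbo.OrderStaging (OrderID, Amount) VALUES (1, 10.00);
INSERT INTO dbo.OrderStaging (OrderID, Amount) VALUES (2, 20.00);
INSERT INTO dbo.OrderStaging (OrderID, Amount) VALUES (3, 30.00);
-- ... hundreds more statements in the same batch ...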

In SQL 2012, all page allocations are handled by an "any size" page allocator and are included in 'max server memory'. The buffer pool effectively becomes a client of the any-size page allocator, which in turn relies on the memory manager.

SQL Maintenance Plan error “Could not generate mail report.An exception occurred while executing a Transact-SQL statement or batch.No global profile is configured. Specify a profile name in the @profile_name parameter.”

Somebody ran into this error when executing their maintenance plan: "Could not generate mail report. An exception occurred while executing a Transact-SQL statement or batch. No global profile is configured. Specify a profile name in the @profile_name parameter."

Resolution

The issue is related to the default email profile: you have to set a default email profile in order for the maintenance plan to send the email alert.

Follow these steps to resolve the problem:

1. In SSMS, expand Management and go to Database Mail.

2. Right-click and choose Configure Database Mail.

3. When the wizard opens, click Next.

4. Select Manage profile security.

5. Click Next.

6. Next to the profile name, the Default Profile column is set to No by default; change it to Yes.

7. Click Next, then Finish.
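Alternatively, you can mark a profile as the default public profile from T-SQL. This is a sketch that assumes an existing profile called 'DBA Mail Profile'; substitute your own profile name:

-- Make an existing Database Mail profile the default (public) profile
EXEC msdb.dbo.sysmail_add_principalprofile_sp
     @principal_name = N'public',
     @profile_name   = N'DBA Mail Profile',   -- placeholder name
     @is_default     = 1;

If the profile is already associated with the public principal, use msdb.dbo.sysmail_update_principalprofile_sp with the same parameters instead.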

Wednesday, February 25, 2015

Difference between DateTime2, DateTime and SmallDateTime

Here are the main differences you should know about these three date types:

smalldatetime
  Range of dates: January 1, 1900 - June 6, 2079
  Accuracy: one minute. Values with 29.998 seconds or lower are rounded down to the nearest minute; values with 29.999 seconds or higher are rounded up to the nearest minute.
  Size: 4 bytes
  Usage: DECLARE @myDate smalldatetime
  Applies to: SQL Server 2000 and later

datetime
  Range of dates: January 1, 1753 - December 31, 9999
  Accuracy: three-hundredths of a second (3.33 milliseconds, or 0.00333 seconds). Values are rounded to increments of .000, .003, or .007 seconds.
  Size: 8 bytes
  Usage: DECLARE @myDate datetime
  Applies to: SQL Server 2000 and later

datetime2
  Range of dates: January 1, 0001 - December 31, 9999
  Accuracy: 100 nanoseconds
  Size: 6 to 8 bytes depending on precision (8 bytes at the default precision of 7): 6 bytes for precisions less than 3, 7 bytes for precisions 3 and 4, 8 bytes for all other precisions.
  Usage: DECLARE @myDate datetime2(7)
  Applies to: SQL Server 2008 and later

Armed with this knowledge, you may want to use smalldatetime instead of datetime if you only need to represent dates between January 1, 1900 and June 6, 2079 and you do not need accuracy below one minute. Why? Simple: each smalldatetime value takes 4 bytes instead of 8, so rows are a bit smaller and your queries pull back less data.
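A quick way to see the precision differences side by side (any SQL Server 2008 or later instance will run this):

-- The same timestamp assigned to each type shows the rounding behaviour
DECLARE @sd  smalldatetime = '2015-02-25 10:30:29.998',
        @dt  datetime      = '2015-02-25 10:30:29.998',
        @dt2 datetime2(7)  = '2015-02-25 10:30:29.9984567';

SELECT @sd  AS SmallDateTimeValue,  -- rounds down to 10:30:00
       @dt  AS DateTimeValue,       -- rounds to 10:30:29.997
       @dt2 AS DateTime2Value;      -- keeps 10:30:29.9984567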

Thursday, February 5, 2015

Identifying Unused Databases

This is one of the issues a consultant faces when the client has little documentation on their systems and the databases they use. The first step in this process is simply looking at the date and time stamps of the .MDF and .LDF files, which can help in some cases. To evaluate whether it was worth keeping the databases online, I had some investigating to do.
Step 1: Who is connected?
The following T-SQL summarizes who is connected. It ignores the currently connected session (@@SPID) and system processes (SPIDs 50 and below).

SELECT loginame , nt_username, COUNT(*) AS Connections
FROM sys.sysprocesses
WHERE spid > 50 and spid != @@SPID
GROUP BY loginame , nt_username
ORDER BY COUNT(*) DESC


Step 2: Check the data

Although not related to recent use, this gives an idea of the size and history of the data, and the table names might help you decide whether it is worth keeping. You never know, it might even be empty!



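A quick way to check is to pull approximate row counts and reserved space for every user table; here is a minimal sketch using sys.dm_db_partition_stats:

-- Approximate row counts and reserved space per user table in the current database
SELECT s.name AS SchemaName,
       o.name AS TableName,
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS ApproxRows,
       SUM(ps.reserved_page_count) * 8 AS ReservedKB
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.objects AS o ON o.object_id = ps.object_id
INNER JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
GROUP BY s.name, o.name
ORDER BY ReservedKB DESC;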


Step 3: When were the tables last accessed?

Table usage can be gathered from the index usage statistics. This query shows the usage for all user tables in the current database and excludes the system tables.



SELECT i.[database_id],
       DB_NAME(i.[database_id]) AS [Database],
       s.name AS [SchemaName],
       o.name AS [TableName],
       MAX(i.[last_user_lookup]) AS [last_user_lookup],
       MAX(i.[last_user_scan]) AS [last_user_scan],
       MAX(i.[last_user_seek]) AS [last_user_seek]
FROM sys.dm_db_index_usage_stats AS i
INNER JOIN sys.objects o ON o.object_id = i.object_id
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE i.[database_id] = DB_ID()
  AND s.name <> 'sys'
GROUP BY i.[database_id], s.name, o.name
ORDER BY i.[database_id], s.name, o.name;


Step 4: Check the latest date values in the database

Assuming there is a datetime column that is populated periodically, this can give a better picture of whether the data has been altered recently.
DECLARE @cols TABLE
(
    r INT IDENTITY,
    TableName VARCHAR(300),
    ColumnName VARCHAR(200)
);
DECLARE @i INT,
        @nSQL NVARCHAR(1000);
DECLARE @MaxData TABLE
(
    TableName VARCHAR(300),
    MaxDate DATETIME2
);

-- Collect every date/datetime/datetime2 column from user tables
INSERT INTO @cols
SELECT QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME),
       QUOTENAME(COLUMN_NAME)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ( 'date', 'datetime', 'datetime2' )
  AND OBJECTPROPERTY(OBJECT_ID(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)), 'IsSystemTable') = 0;

SELECT @i = @@ROWCOUNT;

-- For each of those columns, capture the most recent value
WHILE @i > 0
BEGIN
    SELECT @nSQL = 'SELECT ''' + TableName + ''', MAX(' + ColumnName + ') FROM ' + TableName,
           @i -= 1
    FROM @cols
    WHERE r = @i;

    INSERT INTO @MaxData ( TableName, MaxDate )
    EXECUTE ( @nSQL );
END;

SELECT *
FROM @MaxData
ORDER BY MaxDate DESC;

Step 5: Audit / run a trace

Audit the database logins associated with the database, or run a trace against it; some applications only touch a database at specific times, such as year end or the end of a financial period. Even if all the steps above point towards deletion, I wouldn't suggest you go straight ahead and delete the database. You could take it offline for a few months before you finally drop it.
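Taking it offline is a one-liner; this sketch assumes a database named SalesArchive:

-- Take the database offline (disconnects current users); use SET ONLINE to bring it back if anyone complains
ALTER DATABASE [SalesArchive] SET OFFLINE WITH ROLLBACK IMMEDIATE;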

Wednesday, January 28, 2015

Backup database to disk = ‘Nul’

To understand what this is doing, first we need to understand what Nul is, in the context of a file. It is not just a misspelling of NULL.

DOS, since the earliest days, has had a number of devices that can be treated like files, e.g. LPT1, COM1 and CON. NUL is another of those devices (virtual files, if you like) and is equivalent to the Unix /dev/null, a ‘file’ that discards any data written to it.

What is the impact of this backup location?

Anything that’s written to NUL is discarded. So when a transaction log backup is taken to ‘NUL’, you are actually breaking the log backup chain: that log backup doesn't exist anywhere, so you won't be able to restore through it. SQL Server, however, thinks a normal log backup was taken, so it discards the inactive log records just as it would after a regular log backup. It does not switch the DB into a pseudo-simple recovery model, inactive log records are not discarded at a checkpoint, and future log backups will succeed.

That’s the main difference from simply truncating the log: log records generated after a backup to NUL are still retained until the next log backup, and subsequent log backups will succeed. They’re useless for restoring the DB, because there’s a log backup missing from the chain, but they’ll succeed.
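For reference, the command itself looks like this (YourDb is a placeholder database name):

-- A full backup whose output is simply thrown away
BACKUP DATABASE [YourDb] TO DISK = 'NUL';

-- A log backup whose output is thrown away; this is what punches a hole in the restore chain
BACKUP LOG [YourDb] TO DISK = 'NUL';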

Warning!
This command runs great, except under the following conditions:

  1. If you run differential backups, don’t run this command! Differential backups pick up pages that have changed since the last full backup, and running a full backup resets the flag on those pages. Since a backup to the NUL: device reads the pages but does not write them to media, the flag is reset as if the pages HAD been backed up, so the database does NOT back them up during the next differential. If you have run a backup to NUL: and do differential backups, run a real FULL backup immediately and everything will be fine.
  2. If you already have heavy disk IO, a backup to the NUL: device will only add more stress and slow things down further.
  3. For reason #2, run this command in off hours.

How to recover space from a huge transaction log file

Many novice DBAs forget to take transaction log backups, and the log eventually ends up consuming a lot of disk space. This is one of the questions people ask me very often. Here are the steps I tell them to follow:

  1. Take a full backup of the database.
  2. Check to see if something is keeping SQL Server from reusing existing log space:
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = '<your_db_name>'
  3. Change the recovery model to Simple.
  4. Find the file ID of the log file with the following statement:
    USE <your_db_name>
    EXEC sp_helpfile
  5. Shrink the log file:

USE <your_db_name>
DBCC SHRINKFILE ( 2 )

     6.  Regrow the log file to a sensible working size now that the free space has been returned to the OS:

ALTER DATABASE <your_db_name> MODIFY FILE ( NAME = <logical_log_file_name>, SIZE = 4GB )

     7.  Change the recovery model back to Full and take another full backup.
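Put together, the steps above look something like this (a sketch assuming a database named SalesDb with a logical log file named SalesDb_log; adjust names, paths, and sizes to suit):

-- 1. Full backup first
BACKUP DATABASE [SalesDb] TO DISK = N'D:\Backups\SalesDb_full.bak';

-- 2. Check what is holding the log (ideally NOTHING or LOG_BACKUP)
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'SalesDb';

-- 3. Switch to simple recovery so the log can be shrunk
ALTER DATABASE [SalesDb] SET RECOVERY SIMPLE;

-- 4/5. Shrink the log file (target size in MB)
USE [SalesDb];
DBCC SHRINKFILE (SalesDb_log, 2048);

-- 6. Grow it back to a sensible working size in one step
ALTER DATABASE [SalesDb] MODIFY FILE ( NAME = SalesDb_log, SIZE = 4GB );

-- 7. Back to full recovery, then restart the log chain with a new full backup
ALTER DATABASE [SalesDb] SET RECOVERY FULL;
BACKUP DATABASE [SalesDb] TO DISK = N'D:\Backups\SalesDb_full_after.bak';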
