Moving A Database With Attach and Detach

I found myself doing several database migrations this weekend, and because these databases were generally small (less than 10 GB), I chose the route of detaching the database, copying the files to the new location, and re-attaching.  To remove the databases, I detach them from the current instance.  I am not worried about losing the backup history here, as the server will be decommissioned.

EXEC master.dbo.sp_detach_db @dbname = N'MyDatabase'
GO 
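
A word of caution: the detach will fail if anyone is still connected. A minimal sketch, assuming it is safe to drop connections, that puts the database in single-user mode before running the detach above:

USE master
GO
-- Roll back open transactions and drop other connections first
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO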

If you refresh your database list, you will no longer see the database listed; however, the files are still on the server.  You can now copy them to the new location and attach them.  You need only pass in the name of the database and the location of the files, and SQL Server will add the database and make it available to you.

EXEC sp_attach_db @dbname = N'MyDatabase',
    @filename1 = N'E:\MSSQL\MyDatabase.mdf',
    @filename2 = N'F:\MSSQL\MyDatabase_log.ldf'; 
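
As an aside, sp_attach_db is marked deprecated in more recent versions; the equivalent CREATE DATABASE ... FOR ATTACH syntax would look something like this:

CREATE DATABASE MyDatabase
ON (FILENAME = N'E:\MSSQL\MyDatabase.mdf'),
   (FILENAME = N'F:\MSSQL\MyDatabase_log.ldf')
FOR ATTACH;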

Don't forget to copy over your security (logins) and any jobs on the old server this database needs.

Backup History

As someone who has helped design systems, I know auditing can take a back seat to more 'pressing' requirements; however, as anyone responsible for maintaining a system knows, a good audit trail can be immensely helpful in tracking down a problem.  When it comes to backup information, SQL Server actually keeps a lot of good detail that can be very useful.  For example, SQL Server can tell you the very first time you took a backup of your database.  This script will give you that information by database.

SELECT MIN(backupset.backup_finish_date) AS first_backup_date,
    backupset.database_name, backupset.server_name
FROM msdb.dbo.backupset
GROUP BY backupset.database_name, backupset.server_name
ORDER BY backupset.database_name, backupset.server_name
The very first backup?

While it may be nostalgic to look at the very first backup you took, it is not practical to keep all that information around.  Why is this?  Because, as Books Online puts it–"When a restore [or backup] is performed, backup history tables and restore history tables are modified."  Just like any audit table, this information grows: more data is added with each operation, and over time those operations can actually slow down because of the amount of data.

History Cleanup

Just like any good audit table, we need a plan to trim the fat on a regular basis.  What to trim?  My thought is the retention should line up with your requirements for data longevity.  If the policy says we keep backups around (even on tape) for six months, then I would keep six months of history and run the cleanup process every week.  Your retention may vary by application, and in that case you can set it to the longest requirement in the environment.  Regular cleanup is the key–you never know how long those servers are going to be around.  Isn't that right, all you SQL 2000 boxes out there?   :)

Script It Out

Removing old records is straightforward with the stored procedure sp_delete_backuphistory.  You only have to provide a date, and SQL Server will remove ALL backup history older than that date.  I normally have a SQL Agent job, scheduled to run once a week, that looks something like this.

USE msdb
GO
-- Keep one year of backup history
DECLARE @BackupDate DATETIME;
SET @BackupDate = GETDATE() - 365;

EXEC sp_delete_backuphistory @BackupDate;
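
To verify what remains afterward, a quick variant of the earlier query shows the oldest history still in msdb:

SELECT MIN(backup_finish_date) AS oldest_history
FROM msdb.dbo.backupset;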

We Can Pivot If You Want To

So I get that I am a little late to the party here; however, I find myself having to pivot from time to time and am always looking back for code as a refresher.  While PIVOT is pretty cool, it always takes me a second to connect all the dots, so I hope this post helps do that.

So, with PIVOT, we want to take data where there are multiple rows for each ProductID and get each ProductID onto one line, with the quantity from each Shelf as its own column.

When we are finished, the data will look like this: one row for each ProductID.

In my example, I am using the [Production].[ProductInventory] table in the AdventureWorks2012 database.  To turn rows into columns, we must know ALL the possible values we want to become columns; we specify these in the FOR line of the PIVOT syntax.  This query will create a column for each value I specified from the Shelf column.

SELECT [ProductID],[A],[B],[C],[D],[E],[F],[G],[H],[I],[J],[K],[L]
	FROM [Production].[ProductInventory]
	 PIVOT
	 (
		SUM (Quantity)
		FOR SHELF IN ([A],[B],[C],[D],[E],[F],[G],[H],[I],[J],[K],[L])
	 ) AS PVT 

Running that, I should see something like this: we are getting closer–we have our columns, but we still have a line for each record, and we only want one line per ProductID.  This is where we put the columns we want from the table and the PIVOT syntax together, in an almost subquery-like format, and select the columns as we want them to display.

The syntax is something like this: the first line lists the columns I want to display, the subquery pulls the columns I need from the table, then comes the PIVOT syntax followed by an ORDER BY–and presto, you have your data the way you want it.

SELECT [ProductID],[A],[B],[C],[D],[E],[F],[G],[H],[I],[J],[K],[L]
	FROM
	(SELECT [ProductID], Shelf, Quantity
	FROM [Production].[ProductInventory]) p
	 PIVOT
	 (
		SUM (Quantity)
		FOR SHELF IN ([A],[B],[C],[D],[E],[F],[G],[H],[I],[J],[K],[L])
	 ) AS PVT
	 ORDER BY [ProductID]
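
One catch, noted above, is that the IN list must be hard-coded.  As a hedged sketch (not part of the original example), the column list can be built dynamically when the shelf values are not known ahead of time:

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build a comma-separated, quoted list of the distinct Shelf values
SELECT @cols = STUFF((
    SELECT DISTINCT ',' + QUOTENAME(Shelf)
    FROM [Production].[ProductInventory]
    FOR XML PATH('')), 1, 1, '');

SET @sql = N'SELECT [ProductID], ' + @cols + N'
    FROM (SELECT [ProductID], Shelf, Quantity
          FROM [Production].[ProductInventory]) p
    PIVOT (SUM(Quantity) FOR Shelf IN (' + @cols + N')) AS PVT
    ORDER BY [ProductID];';

EXEC sp_executesql @sql;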

All the World’s a Stage

I didn't realize Pinal Dave had published his site stats last year; I recently came upon them and was very impressed with those numbers.  Kudos to him for all his effort.  My blog, on the other hand, would need him to chop off about six digits and we might be getting a little closer.  My blog won't ever be that popular; however, it is interesting to see where people are coming from, and it makes me think the few posts I have up are helping someone.  The SQLFamily has always been about helping others, and I am glad to play a small part.

Thanks Mom for all the hits.  :)


Enqueueing A Message

After restoring a SharePoint database to the development environment, I came across the following error, which repeated every minute.

An exception occurred while enqueueing a message in the target queue. Error: 15404, State: 19. Could not obtain information about Windows NT group/user ‘Domain\User’, error code 0x6e.

I saw a post about ensuring the SharePoint service account had the appropriate access, and after verifying mine was good, I saw a suggestion to change the database owner.  While it seemed rather strange, I thought I would at least check the owners of all the databases.  I used this query:

SELECT name, suser_sname(owner_sid) AS owner_name FROM sys.databases

This lists all the databases and their owners, and they all seemed good–a NULL in the second column would indicate an unresolved user.  However, since there were various owners, I went ahead and changed them all to sa.

USE DBX
GO
-- Repeat for each database whose owner you want to change
EXEC sp_changedbowner 'sa'
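
For what it's worth, sp_changedbowner is deprecated in newer versions of SQL Server; the equivalent with ALTER AUTHORIZATION would be something like:

ALTER AUTHORIZATION ON DATABASE::DBX TO sa;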

The error stopped occurring in the log and life resumed. :)

Sizing the SQL Server Error Log

Sometimes SQL Server generates so many messages you would think it actually enjoys it.  While other events can also trigger a new log file, the most common one by default appears to be a SQL Server restart.  If your systems are anything like mine, you want SQL Server to stay up, and that can cause the logs to get very large, which makes finding a specific message more painful than it needs to be.

To me it makes sense to separate the logs by day.  EXEC sp_cycle_errorlog will create a new SQL Server error log.  I normally create a job that runs at midnight and creates a new error file.  By default, SQL Server only keeps 6 error log files; I normally increase that number to 30.

Files To Keep
If the server has already been set up, I normally use the GUI, by right-clicking on SQL Server Logs and choosing Configure.  If I am setting up the server, I have this piece of code in my scripts.

USE [master]
GO
-- Keep 30 error log files before recycling the oldest
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 30
GO
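
To read the setting back from T-SQL, the (undocumented) companion procedure can be used; treat this as a sketch:

DECLARE @NumLogs INT;
EXEC xp_instance_regread N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', @NumLogs OUTPUT;
SELECT @NumLogs AS NumErrorLogs;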

Cycling the Logs
This code will create a new job to run at midnight and cycle the logs.


USE [msdb]
GO

/****** Object:  Job [Cycle Error Log]    Script Date: 08/01/2014 13:31:39 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [[Uncategorized (Local)]]]    Script Date: 08/01/2014 13:31:39 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'Cycle Error Log',
  @enabled=1,
  @notify_level_eventlog=0,
  @notify_level_email=0,
  @notify_level_netsend=0,
  @notify_level_page=0,
  @delete_level=0,
  @description=N'No description available.',
  @category_name=N'[Uncategorized (Local)]',
  @owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [Cycle Error Log]    Script Date: 08/01/2014 13:31:40 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Cycle Error Log',
  @step_id=1,
  @cmdexec_success_code=0,
  @on_success_action=1,
  @on_success_step_id=0,
  @on_fail_action=2,
  @on_fail_step_id=0,
  @retry_attempts=0,
  @retry_interval=0,
  @os_run_priority=0, @subsystem=N'TSQL',
  @command=N'EXEC sp_cycle_errorlog',
  @database_name=N'master',
  @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Midnight Ride',
  @enabled=1,
  @freq_type=4,
  @freq_interval=1,
  @freq_subday_type=1,
  @freq_subday_interval=0,
  @freq_relative_interval=0,
  @freq_recurrence_factor=0,
  @active_start_date=20140801,
  @active_end_date=99991231,
  @active_start_time=0,
  @active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO
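
Once the logs cycle daily, finding a message means searching one small file instead of a giant one.  A hedged sketch using the undocumented (but widely used) xp_readerrorlog to search the current log:

-- Arguments: log number (0 = current), log type (1 = SQL Server error log),
-- and up to two strings to filter on
EXEC master.dbo.xp_readerrorlog 0, 1, N'error', NULL;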

Point In Time Recovery

It is nice to see features you would have liked in prior versions make it into the current one.  In SQL Server 2012, Point-In-Time recoveries were made available via SQL Server Management Studio (SSMS).  Point-In-Time recoveries had been available via T-SQL previously; however, there is now a way to visualize the process.  This post demonstrates how to do it with the 2014 version of SSMS.

Full Recovery Mode

Point-In-Time recoveries require the database to be in the full recovery model, as we are going to use the log backups to finalize the restore.  In this scenario, I take a full backup nightly and then transaction log backups regularly throughout the day.  We will need the full backup file and ALL the transaction log backups taken up to the point in time to which we want to recover.  In my case it is just one; however, there could be many–it just depends on your backup strategy and the time of the failure/error.

This post won't go over the backups themselves–I just realized I don't have a post on those, so I will have to link to one once I write it.
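
In the meantime, here is a minimal sketch of the kind of backups this scenario assumes; the file names match the restore examples below but are otherwise hypothetical:

-- Nightly full backup
BACKUP DATABASE AdventureWorks2012 TO DISK = N'C:\Temp\AW.bak' WITH INIT, STATS = 10;

-- Transaction log backups taken regularly throughout the day
BACKUP LOG AdventureWorks2012 TO DISK = N'C:\Temp\AW.trn' WITH INIT, STATS = 10;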

Restoring A Database

If the database doesn't already exist, you would right-click on Databases, then choose Restore Database…

If we are restoring a copy of the database on the same instance, we can choose Database as the source and then choose the files we wish to restore from the backup history.  In this example, I chose Device as the source and selected both the full backup file and the transaction log backup.  Clicking the Timeline button on the 'Restore to' line allows us to specify the time to which we want to restore.

I only need to choose 'Specific date and time' to see the options available to me.  I can either modify the time field or use what looks like a progress bar to slide to the time I want.  Because I chose the database name in the previous window, all the logic needed to restore my database has been filled in, and the database will restore to the time I put in the window.

If the database already exists on the server, the steps are similar; however, we will have to break the restore into two steps.  First, we restore the database with the recovery state set to NORECOVERY.  The option Tasks -> Restore -> Transaction Logs only becomes available when the database is in the restoring state.

There’s Always T-SQL

We also have the option of restoring the database via T-SQL, and while there are lots of options, these seem to work well for me.  This is also done in two steps.  First, the full restore–and I KNOW you don't have your databases on the C:\ drive, so please forgive my poor example.  I've got kids to feed and can't afford big servers with lots of disks.

RESTORE DATABASE RestoreTest
FROM DISK = 'C:\TEMP\AW.bak'
WITH MOVE 'AdventureWorks2012_Data' TO 'C:\MSSQL\DATA\RestoreTest_Data.mdf'
, MOVE 'AdventureWorks2012_Log' TO 'C:\MSSQL\DATA\RestoreTest_log.ldf'
, NORECOVERY, STATS = 10
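
If you don't know the logical file names to use with MOVE, RESTORE FILELISTONLY will read them from the backup file:

RESTORE FILELISTONLY FROM DISK = 'C:\TEMP\AW.bak';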

Then the transaction log.  The secret sauce is the STOPAT option.  That's it.  I hope you won't have to use it every day; however, when that error or issue comes your way, this process will get you back to the point you wanted.

RESTORE LOG RestoreTest
FROM DISK = 'C:\Temp\AW.trn'
WITH RECOVERY,
STOPAT = N'2014-07-28T21:48:47'

TempDB Configuration Matters

A re-post of an article originally on b2bsol.com.

For each SQL Server installation, there are several accompanying system databases that help keep SQL Server running and functional.  They store information about security, what the server is doing, and how the system should process certain functions.  The TempDB database doesn't keep permanent information–in fact, each time SQL Server restarts, the TempDB database gets purged and rebuilt as if it were a new database.  Why then should I care about the TempDB database?  Well, I am glad you asked.

Lots of Hats

The TempDB database wears several hats; its functions include sorting, local and global temporary tables, index reorganization, hash comparisons, XML variables, spooling, triggers, snapshot isolation, and other internal work.  In short, it is the workhorse of the environment, and ALL databases on the system will interact with it at some point, so its configuration becomes very important.

Transaction Dependent

Because local table variables and hash joins are stored in the TempDB database, queries that use these objects/functions have part of their processing done in TempDB.  When a query needs to move a process to TempDB, it creates a worktable and writes data out.  What happens when I have 1,000 transactions per second and all of them have a hash join?  I will have a line out the door of threads waiting for their turn to write to TempDB, and I can potentially have GAM (Global Allocation Map) and SGAM (Secondary Global Allocation Map) contention.  Also, because transactions depend on these processes completing, the speed with which the data can be written and read becomes important.

Providing some relief

There are two recommendations that can provide immediate relief to your TempDB environment.  The first is to create multiple TempDB files of the same size; this helps alleviate the SGAM contention.  The second is to move the TempDB data files to the fastest disks in the system and/or spread them across as many spindles as possible.  TempDB is the first candidate for flash drives should the entire system not be able to take advantage of those disks.

How large should my TempDB files be? Determining the size of the TempDB files may be a bit of trial and error; however, if you have databases already on the system, one way to inform the decision is to run DBCC CHECKDB WITH ESTIMATEONLY and see how much TempDB space the CHECKDB command estimates it needs.
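
For example, something like this (the database name is just for illustration) reports the TempDB space the check would need without actually running the full check:

-- Estimate TempDB space required by CHECKDB, without performing the checks
DBCC CHECKDB (N'AdventureWorks2012') WITH ESTIMATEONLY;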

How many TempDB files should I have? While there are varying ideas, the one I hold to is one file per CPU core, up to 8 files.  Monitoring will provide insight into whether more are needed in the future.
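
As a hedged sketch (the file path and sizes are hypothetical), sizing the existing data file and adding a second, equally sized file would look something like this:

USE master
GO
-- Resize the default data file and add a second file of the same size
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2,
    FILENAME = N'E:\MSSQL\DATA\tempdev2.ndf', SIZE = 2048MB, FILEGROWTH = 512MB);
GO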

The Importance of Backups

A re-post of a posting at b2bsol.com.

Too many organizations do not have adequate protection of their data and are susceptible to data loss.  While security is important, and you might think this post is about implementing policy to limit your exposure to hacking, I am talking about something much more basic than that–database backups.  A database backup is the first step in ensuring data availability and limiting exposure to corruption or human error.

What kind of backup strategy do you need?  Well, what is your tolerance for data loss?

To help you answer this question, there are two components you need to consider.

RPO – Recovery Point Objective

TechTarget defines RPO as "the age of files that must be recovered from backup storage for normal operations to resume if a computer, system, or network goes down as a result of a hardware, program, or communications failure."  My definition would be something like: the moment in time you want to be able to restore to.  If you experience corruption or disk failure, how close do you need to get to that point in time?  Defining this metric–which will vary from system to system–gives you your RPO.

RTO – Recovery Time Objective

TechTarget defines RTO as "the maximum tolerable length of time that a computer, system, network, or application can be down after a failure or disaster occurs."  My definition would be: the time needed to restore a system to usable functionality.  I should note this time includes the alerting and response of your team if user intervention is required.

It would be easy to say I want less than 1 second of data loss (RPO) and less than 5 minutes of downtime (RTO); however, don't expect to pay TGIF prices for Ruth's Chris service.  Microsoft has made great strides in giving SQL Server many options for High Availability and Disaster Recovery and for keeping the system up; however, none of these solutions removes the requirement to take backups.  The amount of history you keep will depend on your business requirements and the costs associated with that storage.

If your organization does not have the RPO and RTO points defined, it is definitely time to make it happen.

Unable To Login to SQL Server Error 18456

I installed SQL Server on a machine the other day, connected from my desktop, finished my normal server setup routine, and life went on. Later, I connected to the machine via Remote Desktop, fired up SSMS, and got a login failure for error 18456.

With an accompanying error in the log:
Error: 18456, Severity: 14, State: 11.
Login failed for user 'MyAccount'. [CLIENT: ]

Very strange, I thought to myself. It turns out I had added my account to the local Administrators group but had not created a Login on the SQL Server. (It also appears I missed a step on my server checklist, as I normally remove the Builtin\Administrators group, but who is keeping score?)

I created the Login for my account, added it to the sysadmin role, and I was good to go.

CREATE LOGIN [Domain\Account] FROM WINDOWS;
EXEC sp_addsrvrolemember N'Domain\Account', N'sysadmin';
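
As a side note, sp_addsrvrolemember is deprecated in newer versions of SQL Server; the ALTER SERVER ROLE equivalent (SQL Server 2012 and later) would be:

ALTER SERVER ROLE sysadmin ADD MEMBER [Domain\Account];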