A re-post of an article that originally appeared on b2bsol.com
Every SQL Server installation includes several system databases that keep the instance running and functional. They store information about security, what the server is doing, and how the system should process certain functions. The TempDB database doesn't keep permanent information; in fact, each time SQL Server restarts, TempDB is purged and rebuilt as if it were a new database. Why, then, should I care about the TempDB database? Well, I am glad you asked.
Lots of Hats
The TempDB database handles several functions, including sorting, local and global temporary tables, index rebuilds, hash comparisons, XML variables, spooling, triggers, snapshot isolation (row versioning), and other internal operations. In short, it is the workhorse of the environment, and ALL databases on the system will interact with it at some point, so its configuration becomes very important.
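To make a couple of those functions concrete, here is a small sketch. The table and column names are hypothetical, but both object types shown are allocated in TempDB even though the batch runs in a user database:

```sql
-- Both of these objects live in TempDB, even when created
-- from a session connected to a user database.
DECLARE @RecentOrders TABLE (OrderID int, OrderDate datetime);  -- table variable

CREATE TABLE #Staging (OrderID int, OrderDate datetime);        -- local temp table

-- A sort with no supporting index can also spill its work into TempDB.
SELECT OrderID
FROM   #Staging
ORDER BY OrderDate;

DROP TABLE #Staging;
```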
Because table variables and hash join worktables are stored in the TempDB database, queries that use these objects and operations do part of their processing there. When a query needs to move a process to TempDB, it creates a worktable and writes data out. What happens when I have 1,000 transactions per second and all of them include a hash join? I will have a line out the door of threads waiting for their turn to write to TempDB, and I can potentially have GAM (Global Allocation Map) and SGAM (Secondary Global Allocation Map) contention. Also, because transactions depend on these operations completing, the speed at which the data can be written and read becomes important.
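One way to check whether that line out the door has formed is to look for latch waits on TempDB allocation pages. This is a sketch using the `sys.dm_os_waiting_tasks` DMV; TempDB is always database_id 2, so resource descriptions of 2:1:1 (PFS), 2:1:2 (GAM), and 2:1:3 (SGAM) point at allocation contention rather than ordinary data-page latching:

```sql
-- Sessions currently waiting on page latches in TempDB (database_id = 2).
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM   sys.dm_os_waiting_tasks
WHERE  wait_type LIKE 'PAGELATCH%'
  AND  resource_description LIKE '2:%';
```

Run it while the workload is busy; an empty result during peak load suggests allocation contention is not your bottleneck.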
Providing some relief
There are two recommendations that can provide immediate relief to your TempDB environment. The first is to create multiple TempDB data files of the same size, which helps alleviate the SGAM contention. The second is to move the TempDB data files to the fastest disks in the system and/or spread them across as many spindles as possible. TempDB is the first candidate for flash drives if the entire system cannot take advantage of those disks.
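A sketch of the first recommendation, growing TempDB from one data file to four equally sized files. The file sizes, growth increments, and the `T:\TempDB\` path are assumptions; match them to your hardware and your dedicated TempDB volume:

```sql
USE master;

-- Resize the existing primary data file so all files match.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

-- Add three more data files of the same size so the round-robin
-- allocation across files stays balanced.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
```

Keeping the files the same size matters: SQL Server allocates proportionally to free space, so one oversized file would absorb most allocations and defeat the purpose.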
How large should my TempDB files be? Determining the size of the TempDB files may be a bit of trial and error; however, if you already have databases on the system, one way to help make the decision is to run DBCC CHECKDB WITH ESTIMATEONLY to see how much TempDB space the CHECKDB command expects to use.
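For example (`YourDatabase` is a placeholder for an actual database on the instance):

```sql
-- Report the estimated TempDB space CHECKDB would need,
-- without actually running the consistency checks.
DBCC CHECKDB ('YourDatabase') WITH ESTIMATEONLY;
```

Run this against your largest databases and size the TempDB files to comfortably cover the biggest estimate.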
How many TempDB files should I have? While opinions vary, the rule I hold to is one data file per CPU core, up to eight files. Monitoring will provide insight into whether more are needed in the future.
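A quick sketch for checking how your instance currently stands against that rule, using the `sys.dm_os_sys_info` and `tempdb.sys.database_files` catalog views:

```sql
-- Compare the logical CPU count with the current number of
-- TempDB data files (type_desc = 'ROWS' excludes the log file).
SELECT (SELECT cpu_count FROM sys.dm_os_sys_info) AS logical_cpus,
       (SELECT COUNT(*)
        FROM   tempdb.sys.database_files
        WHERE  type_desc = 'ROWS')                AS tempdb_data_files;
```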