We have all been there–next, next, next–let's just get this install done. While there are many ways to get SQL Server installed on a server, the real impact is in the way we administer the database and whether we get good performance. Listener James Youkhanis asked us about our install checklist, and Steve and I realized we hadn't made one public before. In this episode of the SQL Data Partners podcast, we talk about some of the settings we look for when we set up a server. We don't have time to reference every single item, but we hit some highlights. Your list might differ slightly, but our list has been built from the experiences we have seen over time from various installations. We'd love to hear other opinions, so give us your thoughts!
This page and the checklist available below will continue to evolve over time. This is the version we have now.
“One of those [settings] that can be a life saver, but you have to have it enabled before you actually need life saving.” Thoughts on enabling the DAC in SQL Server.
Listen to Learn
- The four areas of SQL Server installation – Preparation, Installation, Configuration, and Post Install
- The trace flags now considered defaults on most systems
- The cost threshold for parallelism setting we recommend
- Azure specific settings to consider
Transcription: SQL Server Install Checklist
Carlos L Chacon: Companeros, welcome to the SQL Data Partners podcast, the podcast dedicated to SQL Server related topics, designed to help you become familiar with what's out there, how you might use those features, or ideas on how you might apply them in your environments. This is Episode 69. I am Carlos L Chacon.
Steve Stedman: I am Steve Stedman, and this week's topic is SQL Server setup. This is something that was requested on Twitter by James, for us to talk about the things we do, and maybe compare them to the things he does, when setting up and configuring SQL Server.
Carlos L Chacon: Yes, exactly. I think this is something that we all have to do as administrators, and we all kind of have our own unique blend to it. As he proposed his idea, he actually sent us his checklist, or kind of a comparison of sorts, so we thought it would be interesting to put together our thoughts and experiences on it and then talk about some of the different sections.
Steve Stedman: Yes, exactly, and I think what's really interesting about this is every time I see one of these from somebody else, I usually see that there are many things in common with what I normally do, but then there are also one or two things that I hadn't thought about before, or maybe I do them but they're not actually on the checklist, so I think it's good to look at the different lists.
Carlos L Chacon: Sure, exactly, just how different people approach kind of skinning that cat if you will.
Steve Stedman: Yes. Then the phases we're going to go through with this are really the prep, the install, the config, and then sort of the post install, the things you're going to do a little bit later.
Carlos L Chacon: That's right. We will actually have our checklist available. We haven't really made it available to the public or on our blogs previously, and we will do that with this episode, at sqldatapartners.com/serversetup. There you'll have those downloads, you can get that access, and again, share and compare notes as to how you go about doing different things.
Steve Stedman: Yes.
Carlos L Chacon: I think we'll go ahead and get into that conversation. Companeros, welcome to the show. Okay, so the first step as we get into how we want to set up our SQL Servers is maybe the most obvious, but there are a couple of things I think we want to check off before we actually do the install of SQL Server.
Steve Stedman: Yes, and I think part of that is basically getting the server ready to go so that you'll be able to get SQL Server installed and it's going to be happy to run after that. One of the very first things I always do is install the BgInfo application. If you haven't used that, what it is, is it's great when you're dealing with virtual machines or when you're often remote desktopping to different machines. When you log in, it puts a bunch of information on your Windows desktop so that you can look and see immediately what server you're on, what drive letters are there, how much memory is there, and things like that. BgInfo is one of the very first things I'll put on there just so I know I'm on the right server.
Carlos L Chacon: Exactly, and it can tell you a little bit more information about that server.
Steve Stedman: Yes.
Carlos L Chacon: One of the interesting things that we think about is the disk configuration, and again we've had some conversations with Argenis, and he's going to be back on the show to talk about disk configuration, but I think again, each individual environment is going to differ as to what kinds of disks they have and what's made available to them. Whether we're talking about local on-board disks of a physical server, right? SAN, and then how that SAN is all made up.
Ultimately I think we generally want to try and separate some of the data, and we generally think about that in terms of data files, log files, and then Temp DB.
Steve Stedman: Yes. Really the place to start with that is your C Drive first. No matter how you’re going to configure things with your different drive letters, what you want to know is that you should put nothing, really nothing but the operating system on your C Drive.
Carlos L Chacon: Yes. Do we include, for example, I know a lot of times the default SQL Server stuff will get sent to, like even server logs and things, are we still sending that to a different drive as well?
Steve Stedman: I know when you do the default install it usually installs some stuff in program files and different sub-directories under there. My preference is, I like to install that at a higher level outside of the program files directory, oftentimes on a different drive letter, so that you can call it out as, here's the install of SQL Server 2016 for instance. That way you've got it there, and it's not on your C Drive. I guess the install could be on your C Drive, but I like to put it on a different drive letter than the C Drive itself.
Carlos L Chacon: It does make sense, I think, particularly when you start taking a peek at, if for whatever reason you wanted to keep around some system health extended event sessions or other logs for a longer time. Again, it just reduces the chance that you come in danger of filling up that C Drive for whatever reason.
Steve Stedman: Right, yes. Then after you’ve got your C Drive and you know you’re going to put your data and logs and everything somewhere else, that’s where you really need to figure out how you’re going to split that up. Usually, we’ll have a drive that’s where SQL Server installs to, and that’s typically where I leave the system databases.
Let's say that could be your drive letter E, and then you want to put your user databases, the data files for those, generally on a different drive, we can just call that F for now. Then you want to have your log files on yet another drive, different from those E and F drives we've just set up. The reason for that is the logs have very different IO behaviors than what you need for your data files, and generally you can get better performance by having those on different types of storage.
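As a rough sketch, a layout like that might look something like this at database creation time; the database name, drive letters, paths, and sizes here are only hypothetical placeholders:

```sql
-- Sketch: user database with data on F: and the log on L:
-- (hypothetical drive letters and sizes)
CREATE DATABASE SalesDemo
ON PRIMARY
(
    NAME = SalesDemo_data,
    FILENAME = 'F:\SQLData\SalesDemo.mdf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
)
LOG ON
(
    NAME = SalesDemo_log,
    FILENAME = 'L:\SQLLogs\SalesDemo_log.ldf',
    SIZE = 2GB,
    FILEGROWTH = 512MB
);
```

With mount points, the paths would change but the idea of keeping data and log IO on separate storage stays the same.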
Carlos L Chacon: That's right. Again, I think Flash is changing this dynamic slightly, so ultimately it's going to depend on your set of circumstances. These recommendations, I think, are geared towards the spinning rust if you will, so spinning disks, and kind of diversifying those IO differences.
Steve Stedman: I think the other thing is it's not just if you're using the spinning disks, or spinning rust as Argenis likes to call it, but if you have tiered storage, where you have some spinning disks and some SSDs, and they're on different drive letters, you've got a better opportunity to move those around or promote them to some faster storage.
Carlos L Chacon: That’s right, and that’s kind-of where, or this idea, or the suggestion of using mount points instead of drive letters comes into play, is that in the event that you need to move that data around, it can be done in the background and SQL server is none the wiser.
Steve Stedman: Right, exactly.
Carlos L Chacon: You don't have to start moving around your data files, like detach and re-attach, because you need to go from a slower disk to a higher tier.
Steve Stedman: Then going through this we've got our database files, the two sets of them, the system and the user, and we've got our database log files. The next thing that we want to put on fast storage as well will be our Temp DB data and log files. Generally I like to start out with Temp DB data and log files on the same disk, but depending on how things perform you might need to split those off onto different disks or different mount points. Finally you need to have a place where your backups are going to go.
Carlos L Chacon: Yes, exactly.
Steve Stedman: All right, so the other thing we want to do with that is we usually start out and format the disks with 64K allocation units for the databases and the log files. I know there are a lot of different takes on SAN storage with that, but generally it's a good practice to start with. Then benchmark that storage before you install SQL Server. There are different benchmarking tools you can get out there to see what kind of performance you're getting and what kinds of sequential reads and writes you're able to get on that storage.
I know working with Azure, one of the things we discovered is that with some of the Azure storage, the difference between the standard and the premium tiers is basically where SQL Server goes from a usable disk speed to not usable, or vice versa, depending on which package you're looking at.
Carlos L Chacon: Exactly. I think the other reason why you would want to consider taking these metrics before you go ahead with SQL Server is that at least you have a baseline. Should things change over time and you see that performance change, for whatever reason, usually I guess it would go down, I guess if it goes faster nobody's going to complain, at least you have those numbers to say, well, when I installed it, I was here and now I'm at this other number, can you help me understand what's going on.
It could just be that there's increased traffic on that SAN kind of behind the scenes, and you can provide that information to those administrators to help you get back to where you were.
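Once SQL Server is installed, one simple way to keep collecting that kind of baseline from inside the engine is the file stats DMV. A sketch of a per-file latency query:

```sql
-- Sketch: per-file read/write latency from SQL Server's own IO counters,
-- useful as a baseline to compare against later
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    vfs.num_of_reads,
    vfs.num_of_writes,
    -- average stall per IO in milliseconds (guard against divide-by-zero)
    vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id     = mf.file_id
ORDER BY avg_read_latency_ms DESC;
```

These counters accumulate since the instance started, so they complement, rather than replace, a pre-install benchmark of the raw storage.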
Steve Stedman: Yes, that's a really good point. Then the other thing you're going to want to do to prepare the server before installing SQL Server is to set up the service accounts and grant the right permissions for the accounts that SQL Server and the SQL Server Agent are going to run as.
Carlos L Chacon: Yes, that’s right.
Steve Stedman: You’re going to need those as you go through the install process.
Carlos L Chacon: Yes, exactly. That kind of rounds it out. We may also want to think about configuring the port number that we're going to be using. Of course, SQL Server uses 1433 as a default; your environment may or may not require you to change that. If you do have to change that, you may also then need to think about opening up ports or whatever other security mechanisms you need in order for SQL Server to be able to communicate with your applications.
Steve Stedman: Yes, good point. All right, so jumping into the install then, there are really a couple of different ways to look at the install. One is you've got a freshly provisioned machine and you're popping in the install media or an ISO, and you're going to run through all the setup steps and choose what you want to install.
The other side of it is, you might be working in something like an Azure VM environment where you just ask for a VM to be provisioned with SQL Server already installed on it. Some of the steps are going to be a little bit different depending on which one of those environments you're in, but either way, at the end of the process you're going to end up with SQL Server installed, and hopefully you've got all the right components that you need installed at that point and not too many of them.
Carlos L Chacon: Right. That's probably the biggest point, right, is that if you have somebody else install it, they're just going to go through and check everything. That becomes a little bit complicated, because if you don't need SSAS, or SSIS for that matter, don't put those on there unless you're going to be using them.
Steve Stedman: Some people argue and say, well, "I just might use them one day so why not install them." My response to that usually is, well, it's just extra overhead. Then you might say, well, but if I'm not using it, it's not extra overhead. Where you get that extra overhead is when you get into doing service packs and updates, in that if you've got SSRS or SSAS installed and you're installing that service pack or update, it's going to update it even if you're not using it. That's going to add a little bit of time when you do those service packs and updates. I like to not install the components that you're not going to use.
Carlos L Chacon: Exactly.
Steve Stedman: Okay, so I think then once you've gone through and you've got it installed, now if you're doing the install yourself, you will have picked all the disk locations where data files are going to go, default locations for logs, things like that. If you're using a pre-provisioned machine through Azure VM for instance, you might have to go through and manually set some of those when you're done, to set where the default location is, so that if you just create a database without specifying a path, you know where those databases are going to go. If you don't do that, they're going to end up in your default install location.
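As a quick sanity check, something like this shows where databases will land when created without explicit paths (these server properties are available on SQL Server 2012 and later; changing the defaults is done through SSMS server properties or the instance registry settings):

```sql
-- Where new data and log files go if you create a database without paths
SELECT
    SERVERPROPERTY('InstanceDefaultDataPath') AS default_data_path,
    SERVERPROPERTY('InstanceDefaultLogPath')  AS default_log_path;
```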
Carlos L Chacon: I think this is probably one of the benefits of having a configuration or an install checklist: keeping the environments consistent. Because one of the biggest pains, and we kind of get into the post-installation steps here, is that when they're not the same, it just makes administration that much more difficult, because paths change, and on this server I have to do this, on that server I have to do that. You have to start thinking about those differences, and it can be difficult.
I think that's one of the reasons that PowerShell installs have become much more popular. It just enables you to have that script. You maybe make a change for the network account or the login information, and other than that everything kind of stays the same. You know you're going to be getting the same install no matter what.
Steve Stedman: Yes, and that can save a lot of time if you know that things are in the same location and the same configuration on all of the SQL Servers that you're working with.
Okay, so you've finished the install. You now go into sort of a configuration mode, and this is where you want to go through and start setting up SQL Server to, I don't know, behave in the way you want it to behave.
Carlos L Chacon: Exactly.
Steve Stedman: Usually the first thing I'll start with there, once we know we've got all the right directories and things like that set up for the system, is the SQL Server service and the SQL Server Agent accounts, making sure that they're going to run as the desired domain or Active Directory user that we want them to run as.
Oftentimes as you're going through that, it will take a couple of restarts and maybe a reboot of the machine to make sure that they all come up and start correctly and have the right permissions to be able to run as a service.
Carlos L Chacon: Right, and I know that, again, our security-minded folks are going to have different recommendations there. However, I must admit that in the majority of the installs that I've done, the Active Directory account that I'm using for the service account and the agent account are the same.
Now a lot of people are going to slap my hands for that because you don’t want your service account for the engine to have the same access that your agent might.
Steve Stedman: Right. My experience on that usually has come down to your Active Directory administrator. How savvy are they on the specific permissions that you need for different services? Depending on what you can get from that Active Directory administrator, that may really control whether you have a different user account there or not. I would prefer to have a different user account, just from the security perspective.
Carlos L Chacon: Sure. I think some of that is culture, right? However you go about requesting those servers, it might be at that time that you start thinking about what those accounts are. If you have a culture of requesting both accounts, obviously it makes it easier to do that.
If instead you start getting into, you know, I need more accounts and yadda, yadda, yadda, that can make it more difficult.
Steve Stedman: Then once you've got the server running that way and with those users, you're going to want to make sure that people can actually connect to it and use it remotely, rather than just from the local desktop. That usually involves coming back and checking the configuration on the firewall, allowing ports 1433 and 1434, 1434 being for the dedicated admin connection, but allowing those ports through the firewall and allowing TCP connections.
There’s scenarios where you might not be using those, but generally that’s where you’re going to start. If you’re not using those, you probably know what ports you need to allow through.
Carlos L Chacon: Yes, that's another great point that I think we missed on the install step: the consideration of not installing SQL Server Management Studio on the server itself. This will help force you, or limit the number of times you actually have to log into the server. As you're doing that, you're testing that remote connectivity, and you know that you have to administer it from your own PC like anything else.
Steve Stedman: That's one thing that comes up a lot. Why do we want to avoid remote desktopping into the SQL Server and running SQL Server Management Studio from there?
Carlos L Chacon: No, exactly. It ultimately comes down to that memory reservation. Because you are then part of the operating system, the server is going to cede memory to you, potentially, if it needs to, for you to be able to get on there. That's just resources that SQL Server is not going to have access to.
Steve Stedman: That reminds me of a database that I worked on a few months back where it had eight GBs of RAM so it wasn’t a gigantic amount of memory, but it should’ve been more than enough, but what happened was you had all kinds of people who were remote desktopping in. This remote desktop connection took up a GB and another one took up a GB and a half, and another one took up two GBs because they were running development environments and things like that. Next thing you know all these remote desktop connections were taking up all the memory and SQL server was squeezed down to just a few GBs of memory out of what it should’ve had up to about eight.
Anytime you can sort of force that connection off the server and over to someone's desktop, you're just going to allow more memory for SQL Server to run in, rather than somebody's remote session.
Carlos L Chacon: That’s right.
Steve Stedman: Once you've got to that point, we kind of go through the rest of the configuration there. Usually I'll enable instant file initialization, which is one of those we talked about in a tuning review a few podcasts back. Then, if it's a dedicated SQL Server-only box, we're going to do things like enable lock pages in memory.
Carlos L Chacon: That's kind of an important point, and people, your mileage may vary, but if I know that server is dedicated only to SQL Server, then I feel much more comfortable with that setting. If it's going to be used for different things, or you know you just have a culture where people are going to be logging in to the server all the time, I might be a little bit more hesitant to pull that trigger.
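On recent versions you can confirm both of those settings from inside SQL Server. As a sketch (the instant file initialization column and the memory model column both appear in SQL Server 2016 SP1 and later):

```sql
-- Is instant file initialization in effect for the engine service?
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';

-- Memory model: LOCK_PAGES here means Lock Pages in Memory is in effect
SELECT sql_memory_model_desc
FROM sys.dm_os_sys_info;
```

On older versions you would check the Perform Volume Maintenance Tasks and Lock Pages in Memory privileges in the Windows local security policy instead.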
Steve Stedman: Then I'm going to go through and do a few more settings at this point. I'm going to do things like set up the default fill factor if it hasn't been specified. Set the cost threshold for parallelism; usually I'll start that out at 50 on a new server and then kind of tune it from there to see what it needs to be. Set the max degree of parallelism. Turn on optimize for ad hoc workloads. Set the backup compression default; this is one of those that will make it easier when you set up backups later, because you don't actually have to remember to turn on compression, it will just use the server default, and then all your backups will be compressed.
Then of course, you turn on the dedicated remote admin connection, which allows you to get into the server in specific scenarios where normal connections would not be allowed.
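Pulled together, those instance-level settings can all be set with sp_configure. The values here are just starting points to tune, not fixed recommendations:

```sql
-- Sketch of the instance-level settings discussed above
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism', 50;  -- starting point, tune from here
EXEC sp_configure 'optimize for ad hoc workloads', 1;
EXEC sp_configure 'backup compression default', 1;
EXEC sp_configure 'remote admin connections', 1;         -- DAC reachable remotely
-- max degree of parallelism depends on your core layout; 4 is only a placeholder
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```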
Carlos L Chacon: Exactly, that’s one of those that can be a life saver, but you have to have it enabled before you actually need life saving.
Steve Stedman: Then after that, we'll go through and also set up multiple Temp DB files, and there are a lot of different takes on how many, or how big they should be, and all that. I would usually start out with eight Temp DB data files and then modify from there as needed. I would also usually expand them to be large enough that they don't have to grow very often.
Carlos L Chacon: That's another one of those areas where, again, there are lots of different opinions, but what I have seen is that if I'm setting up my SQL Server, it has its own Temp DB drive, let's just say, or mount point, whatever, then I've chosen an allocation size based on how big I think my data files are going to be. There's usually some formula that I'm using to estimate that.
They've dedicated that space to me, so whatever that number is, if you decide you want to take up all of that space, or if you have more cores and you decide you want to use ten Temp DB files, for whatever reason, go ahead. I haven't seen too many people arguing that if you have too many you kind of get into trouble. I think on high-end systems, you have a million cores kind of a thing, yes, you might run into some problems, but again, let me use that space, it's dedicated to me, I want to just nip that in the bud to begin with.
Steve Stedman: That’s a really good point, I mean if you’ve got a dedicated drive that’s a fixed size that’s going to be used for Temp DB you may as well fill it up with, or almost fill it up, and I caution you, don’t fill it all the way to one hundred percent, leave a little bit of space at the end there, but almost fill it up with Temp DB because it will prevent the need to have auto growth on that or for the files to grow over time.
I think one of the things you might experience when you do that, say you fill it up to ninety-five percent, is that some of your SAN administrators panic a little bit, saying, "Ah, the disk is full." Well, it's as full as it's ever going to be, because we've just used up what we're going to use for Temp DB at that point.
Carlos L Chacon: Exactly, that's right. I guess I would caveat that and say, since eighty percent is normally the threshold number, I would probably go up to seventy-nine percent. That way you do know, once you cross that eighty percent threshold, that you are seeing some growth, and maybe, if your other monitoring hasn't alerted you that Temp DB needs some attention, that one would get my attention.
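A sketch of that approach, pre-sizing the Temp DB files so they rarely need to grow; the T: drive and sizes are hypothetical, and the logical names assume the installer defaults:

```sql
-- Pre-size the existing tempdb data file; FILEGROWTH = 0 disables autogrowth
-- (some shops prefer to leave a small growth increment on as a safety valve)
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 8GB, FILEGROWTH = 0);

-- Add equally sized additional data files on the dedicated tempdb drive
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 8GB, FILEGROWTH = 0);
-- ...repeat for tempdev3 through tempdev8
```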
Steve Stedman: Part of the configuration I'm going to do at this point is turning on some trace flags. The trace flags that I'm going to turn on will vary a little bit depending on whether it's SQL Server 2016 or 2012, 2014, 2008 or R2, but generally with everything prior to 2016, I'll turn on trace flags 1117 and 1118. I think those are now the default behavior on 2016, so you don't need to turn them on.
Eleven-seventeen, what that does is allows you to grow all the files in a file group equally. If you do have something like Temp DB, if you haven’t filled up that entire disk, it will make it so that when you grow that Temp DB, it will grow all the files at the same time.
Carlos L Chacon: Right, so if you had four files and one needs to grow, all four will grow at the same time.
Steve Stedman: Yes, exactly. That's important because if they grow at different rates and to different sizes, the bigger ones will end up taking more of the Temp DB load and therefore needing to grow more and more, so you end up with some really oddball-sized data files without this trace flag.
The other one was 1118, which allocates full extents only. This can help a little bit with overall performance when Temp DB is growing and being allocated, but it can also impact other databases.
Then a couple of others that I usually turn on: 3023, which enables checksums on backups by default. What this does is it makes it so when your backup files are written, a checksum gets written with each of those chunks that gets written out, and when you're doing the restore it will tell you if anything has changed or gone bad in that file from what was intended to be written to disk initially.
Then I'll also turn on trace flag 3226. What 3226 does is silence the backup messages going to the SQL Server error log. It's one of those things I find frustrating when you go in and look at your error log and all you see is a bunch of backup succeeded messages. I don't need to know that, I only need to know when something failed. Thirty-two twenty-six turns off all those backup succeeded messages.
Then depending on the specific SQL server or SQL server version or the environment, there may be a couple of other trace flags that would be considered at this point as well.
Carlos L Chacon: Exactly right. Again, your mileage may vary there with some of those other trace flags.
Steve Stedman: Whatever you’re doing, I would make sure that you test those trace flags before using them in a production environment.
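As a sketch, the flags discussed above can be enabled on a running instance like this; note that DBCC TRACEON does not survive a restart, so to make them permanent you would also add them as -T startup parameters in SQL Server Configuration Manager:

```sql
-- Enable the trace flags globally (-1) for the running instance;
-- 1117/1118 are only needed prior to SQL Server 2016
DBCC TRACEON (1117, 1118, 3023, 3226, -1);

-- Confirm which trace flags are currently enabled
DBCC TRACESTATUS (-1);
```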
Carlos L Chacon: Yes. There's one that we haven't talked about, and that is optimize for ad hoc workloads. I admit that's one I'm flipping on kind of automatically now as a default standard as well.
Steve Stedman: That's one that, on a new server configuration, I will almost always turn on, and I have never seen it cause any type of negative performance issue. It does help improve performance from what I've seen, but I've never seen it hurt any environment.
Then at this point, once we've gone through a few of those settings, this is the point where I'll usually install Database Health Monitor, point it at this newly built server, and then use the Quick Scan report in Database Health Monitor to go through and catch any other settings that I think may need to be set.
I think there's a handful of them that we've already covered here, but the Database Health Monitor Quick Scan report will catch a few more things that we might have overlooked somewhere along the way.
Carlos L Chacon: What's interesting is when James approached us with this topic, I thought, 'hmm, why haven't we put out some of that information?' I think some of it is because it's kind of baked into the Database Health Monitor tool now, and so while it's not necessarily scripted out, the script essentially says point the tool at your server and then go fix anything that needs to be fixed there.
Steve Stedman: Precisely. Once we've done that, one of the things it will catch right off is that DBCC CheckDB has never been run on your databases. I have a job that I install, generally a custom job, that goes through and does DBCC CheckDB. The one that I generally use checks for a certain amount of time, and I usually set it to something like five or ten minutes. If that amount of time passes, when it finishes the current check it doesn't go on to check any more databases.
Then I usually adjust that so that it doesn't run CheckDB twenty-four hours a day; it will only run for a certain amount of time, and the next day it will catch the oldest databases that haven't been checked recently. The other option there is the Ola Hallengren scripts for doing the checks as well.
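A minimal sketch of that time-boxed idea; a production job, like Steve's custom job or the Ola Hallengren scripts, would also track when each database was last checked so the oldest ones get priority the next day:

```sql
-- Sketch: run DBCC CHECKDB database by database until a time budget runs out
DECLARE @stop_at datetime = DATEADD(MINUTE, 10, GETDATE());
DECLARE @db sysname, @sql nvarchar(max);

DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE state_desc = 'ONLINE' AND name <> 'tempdb';

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0 AND GETDATE() < @stop_at
BEGIN
    -- a check in progress is allowed to finish; the budget only stops new ones
    SET @sql = N'DBCC CHECKDB (' + QUOTENAME(@db) + N') WITH NO_INFOMSGS';
    EXEC (@sql);
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;
```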
Carlos L Chacon: Now we're really kind of in the post-installation process. You've changed all of the SQL Server specific settings; now we're going to start installing things that are, in a sense, unique to each environment and the way they like to administer those SQL Servers.
We've kind of talked about the health monitor, we've mentioned scripts, and there are a handful of other community scripts out there.
Steve Stedman: Yes, sp_WhoIsActive…
Carlos L Chacon: Exactly, they're kind of bunched in together. I like to have it on all my servers, just because that's the way that I begin working or troubleshooting when I need to come back to that server.
Steve Stedman: I think part of it, like when we mentioned Database Health Monitor, is that you don't have to actually install the Database Health Monitor application on the server itself. You can put that on your desktop and connect it to the server, then have it create the monitoring database on that server or on another server if you want to.
Carlos L Chacon: Exactly. The other things that again, I think will be environment specific might be some certain auditing that you want to enable or that you need to enable. Ultimately extended events also come into play here. Do you have some default extended event templates that you want to have installed?
Steve Stedman: Definitely good points. Then the other thing, and maybe this is one that we missed when configuring the database, is creating a database owner user that you can use to switch the owner of most of your databases, so that the owner of the database is not sa. That's one of those things where you can get into some security risks if your database is owned by sa, and there are a couple of examples out there of how people can get escalated security by running scripts as the database owner. It's good practice to set the default database owner to be a user that does not have the ability to connect to the database. They're denied connect to SQL.
Carlos L Chacon: Sure, and I guess depending on your security method, I think we're assuming that most people still run, what's the name of it, mixed mode? Yes, mixed mode.
Steve Stedman: Yes.
Carlos L Chacon: You can still create a local login should you need to do that.
Steve Stedman: Right. In this case you could, if you were running in Active Directory mode only, create a domain user called DB Owner that's been denied the ability to connect, and set the database owner to be that user as well. It doesn't have to be an actual SQL user.
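A sketch of that setup using a SQL login; the login name, password placeholder, and database name are all hypothetical:

```sql
-- A dedicated owner login that is denied the ability to connect
CREATE LOGIN DbOwnerLogin WITH PASSWORD = '<strong password here>';
DENY CONNECT SQL TO DbOwnerLogin;

-- Make it the owner of a user database so the owner is no longer sa
ALTER AUTHORIZATION ON DATABASE::SalesDemo TO DbOwnerLogin;
```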
Okay, I think that kind of hits on most of the normal setup scripts that I go through. Oh, enabling the dedicated admin connection, did we mention that?
Carlos L Chacon: We did talk about that.
Steve Stedman: That's an important one. Okay, so yes, we've got that. I think that's it for my usual to-do list on setting up a server.
Carlos L Chacon: Another consideration that you might have is you might potentially be migrating from one server to the next, this could be in the case of a version upgrade. You’ve gone and you’ve installed all these other things and now you need to bring over the user information from the previous environment.
I admit that I had been using sp_help_revlogin for the longest time. You go to the old server, install the two procedures — including its hashing helper — then you run it and it will script out the logins, password hashes included, to put onto the new server.
Found a post by Chrissy LeMaire — PowerShell, who would’ve thought, MVP — and her argument is it’s 2016, why are you still using something that was created for SQL 2000? I thought, ‘Hmm, Chrissy, you’ve got a great point there.’ Through the dbatools.io project they have a way, and she actually points out some limitations which I hadn’t considered, but there is a tool set there now, again using PowerShell, where you can connect to the old instance, connect to the new instance, specify some parameters for how you want it handled, filter, all those kinds of things, and then it will go ahead and take care of migrating those logins over.
One of the more compelling reasons to do that is it will take care of the individual groups or roles. If you’ve assigned a role to a user, sp_help_revlogin won’t grab that information for you — it will just create the account. This will allow you to bring over those role memberships as well.
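The dbatools approach Carlos mentions is essentially a one-liner in PowerShell. The instance names below are placeholders:

```powershell
# Requires the dbatools module: Install-Module dbatools
# Copies logins with their passwords and server role memberships
# from the old instance to the new one.
Copy-DbaLogin -Source OldInstance -Destination NewInstance
```

The command also supports filtering to specific logins if you don’t want everything brought across.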
Steve Stedman: That looks like a great option because I know that’s one of those things that can be very painful if you don’t have a script or a tool in place to help move all those users and permissions over.
Carlos L Chacon: Exactly. I guess that’s kind-of our checklist, the way we go about doing things. Thanks to James, we will put this organized out there on the web and you’re welcome to download it. If you do these things differently, we’d love to get your feedback there as well. We know that everybody has a unique environment, unique considerations that maybe we haven’t addressed.
I know we brought up a couple of things Steve, but I wonder are there any other changes that we need to point out for the Azure installs?
Steve Stedman: Oh, yes, good point. On the Azure installs there are a couple of components that need to be installed so that the Azure management side of things can connect and know what’s going on with the health of the database. There are a couple of processes that usually get installed there.
Then, just…if it’s running as an Azure virtual machine, the rest of the settings are very similar to what you’re going to be running with normal SQL server. I guess one difference there is with backups. You’ve got the ability to do your backups to Azure blob storage rather than just normal disks.
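Backing up to Azure blob storage instead of local disk uses the BACKUP TO URL feature. The storage account, container, and database names below are made up, and the SAS token is left as a placeholder:

```sql
-- Credential named after the container URL, authenticated with a SAS token
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<your SAS token here>';

-- Back up straight to blob storage rather than a local drive
BACKUP DATABASE YourDatabase
TO URL = 'https://mystorageacct.blob.core.windows.net/sqlbackups/YourDatabase.bak';
```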
Carlos L Chacon: Right.
Steve Stedman: That’s something that you want to consider and take a look at too. There’s also some automated backups with Azure where just sort-of out of the box it does your SQL backups for you and that’s one of them that I have never been entirely comfortable with that, so I usually turn that off and then make sure that the backups are managed by whatever process I would normally use.
Carlos L Chacon: Yes, exactly, and I think with those automated processes it can sometimes get tricky with point-in-time restores and things like that. It can do that for you, but there are some limitations as well.
Steve Stedman: Just think of your Azure VMs in this case as just another VM running SQL server with just a few additional settings you have to add to it.
Carlos L Chacon: One more point that we should make with the Azure VMs, and that is the D Drive.
Steve Stedman: Oh, good point.
Carlos L Chacon: It can be very tempting, and we’ve done different things in the past. The D Drive has a little note there that says, ‘this is temporary flash storage, don’t put anything here,’ because when you bounce your box, those files are going to go away. Then you think to yourself, hmm, that sounds like a pretty interesting place to put Temp DB.
Steve Stedman: Yes, and there’s a lot of posts out there that say give it a shot.
Carlos L Chacon: Sure.
Steve Stedman: That’s one of those where I’ve given it a shot, I’ve tried it, and what I found is that the D Drive you get on your Azure VM is flash storage, however it doesn’t seem to perform as well as the flash storage you get through Azure premium storage. There are also a few steps you have to go through, because while you can put your Temp DB files there, you have to make sure that when your system restarts — and that could be a brand new formatted disk — the permissions get set right so that Temp DB can recreate its files.
Carlos L Chacon: Exactly. That could be kind-of a stain-your-pants kind of moment, particularly when it reboots and you’re not aware of it. For whatever reason, there is a possibility that you could get a new D Drive, basically. As Steve mentioned, the security won’t be set there and your SQL Server is going to stop.
Steve Stedman: Yes, because what it’s going to attempt to do is create those Temp DB files, and when that fails, SQL Server says, I can’t go any further. We can’t start the system at that point.
Carlos L Chacon: That will be just one of those that you’ll have to make sure that those who have access to the SQL server, your admins and whatnot, that if for whatever reason, they decided they want to bounce the box, patching, depending on how you’re patching goes, that they understand that’s something that they’ll have to check afterwards. It’s not every time that will happen, but it’s just one of those that you need to be aware of it.
Steve Stedman: You may test this, you may reboot your SQL Server twenty-five times and never have a problem, then Azure decides that machine is going to be moved somewhere different and re-provisioned, and it comes up and that D Drive is empty and doesn’t have the right permissions. I’ve learned that one from the school of hard knocks. It seemed like a good idea at the time, but I know better now.
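One common safeguard for this ephemeral-disk scenario is a small startup task that recreates the Temp DB folder and its permissions before SQL Server starts. The path, service name, and service account below are assumptions; adjust them for your instance:

```powershell
# Run as a scheduled task at system startup (with the SQL Server service
# set to start manually), so the folder exists before SQL Server comes up.
$tempDbPath = 'D:\TempDB'
if (-not (Test-Path $tempDbPath)) {
    New-Item -ItemType Directory -Path $tempDbPath | Out-Null
    # Grant the SQL Server service account rights to create its files there
    icacls $tempDbPath /grant 'NT SERVICE\MSSQLSERVER:(OI)(CI)F'
}
Start-Service -Name 'MSSQLSERVER'
```

The design choice here is to make the folder creation part of boot rather than relying on the drive surviving a re-provision.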
Carlos L Chacon: I guess I will say, again, use it with caution, but there was a situation where we had a classic model VM, we had allocated all of the disks that were allowed — the eight disks — striped them, and we were still short. We needed premium disks and we couldn’t get them because we were in the classic model.
Steve Stedman: Oh sure.
Carlos L Chacon: By moving Temp DB over there, at least we got some breathing room because we had additional disk in play until we were able to get premium disks in there.
Steve Stedman: I guess the way I would say it is in a perfect world, like in typical physics examples, in a perfect world I would not recommend using the D Drive for Temp DB if you have other options. However, if you’re in a pinch, read about it, understand what happens and make sure you get the permissions set right and you can use it.
Carlos L Chacon: That’s right. Okay, again, James, thank you for sharing your notes with us. Hopefully this is of some benefit to you. We will be posting this in the show notes for today’s episode at sqldatapartners.com/serversetup and we’d love to get your feedback. Did we miss something, do you not like it, do you disagree with us in some shape or fashion? You can let us know using the hashtag #sqlpodcast or reach out to us on Twitter. I’m at carloslchacon-
Steve Stedman: I’m at sqlemt.
Carlos L Chacon: We’d love to get that feedback and so I guess anything else before we end Steve?
Steve Stedman: I think that wraps it up for this week.
Carlos L Chacon: Awesome. Okay Companeros, we’ll see you on the SQL trail.