Compañeros, in episode 57 of the SQL Data Partners Podcast we take a page from the SQL Server Radio podcast and discuss some SQL Server features we think could be improved. We talk about the redundancy of default maintenance plans, the pain of autogrowth and autoshrink, why NOLOCK hints shouldn’t be used, what’s so bad about Activity Monitor, and more. It should be noted here that we LOVE SQL Server, we’re just being a bit picky with some of the tools.  🙂

Our Top SQL Server Offenders

  • Default file autogrowth settings
  • Default maintenance plans
  • Shrink database or Auto Shrink
  • NOLOCK Hints
  • Table Variables
  • Instance settings – Processors tab
  • SQL Server Log File Viewer
  • SQL Server Log file history options
  • Spatial Data
  • Database Engine Tuning Advisor
  • Activity Monitor

Tuning Minute
We also chat about the what, where, why, and how of the Dedicated Administrator Connection (DAC) in SQL Server.
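For reference, the remote DAC has to be enabled before you need it; here is a minimal sketch (the server name is a placeholder):

-- Enable the remote dedicated administrator connection ahead of time.
-- (The local DAC works without this, but remote connections need it.)
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
-- Then connect using the ADMIN: prefix in SSMS (ADMIN:YourServerName)
-- or from the command line with: sqlcmd -S YourServerName -A -E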

Transcript

Introduction

Steve: Compañeros, welcome to the SQL Data Partners Podcast! A podcast dedicated to SQL Server related topics which is designed to help you become familiar with what’s out there, how you might use those features or ideas, and how you might apply them in your environment. This is episode 57 and I’m Steve Stedman.

Carlos:  And I am Carlos L Chacon. Welcome, Compañeros, we’re glad to have you. We’re doing things a bit differently, if you can’t tell already. We’re shaking things up a little. We ask our guests one of the things they would change about SQL Server, and as Steve and I have gotten some of that input we’ve talked amongst ourselves, and we’re like, “Hey, let’s do a program dedicated to that.” Part of the impetus for this idea, truth be told, came from the SQL Server Radio program. So we’re friends with that show; Matan and Guy over in Israel do a fantastic job and have lots of fun with their program. And in episode 50, they did a list of the 50 worst features. Now we’re not going to recreate that whole list of 50, but we did pick out a handful that we thought we could talk about. We will post their list on the show notes today, and we’ll do that at SQLDataPartners.com/worstfeatures. Take a peek and provide some feedback. It will be interesting to get your take: what did you like, what did you not like about some of the things we talked about? In fact, we’ll invite you to leave a comment on social media, right? So share your thoughts with us using the hashtag #SQLPodcast and let us know either the feature that you’d change or what you thought about what we talked about and how you’d react differently.

Steve: We might be opening up a big can of worms there with some of our opinions on this one, Carlos.

Carlos: [laughs] Yes. Be nice. It’s just theory, just opining, right?  And we should say, “All hail to the Microsoft people.” They’re a lot smarter than we are and yes, while we might nitpick a little, we are extremely grateful for the product that they created.

Steve: Yes, absolutely. And then after we get through our list, we’ll follow up with a Tuning Minute.

Carlos: So today we’re going to be talking about the DAC, or the dedicated administrator connection, why you’d use it, and kind of how that came to my rescue not too long ago in a client engagement. I’ll tell you the story about it, why you’d want to turn it on, and what’s it for.

Steve: Here we go.

The Least Favorite Features

Carlos: Hey Compañeros, welcome to the show. So changing things up a little, I don’t have to welcome you to the show, Steve.

Steve: I’m already here.

Default Auto Growth

Carlos: You’re already here. You haven’t gone away. But for that we’re grateful, right? For all the rest of the Compañeros, seven episodes we’ve done together. So this is another great episode. So one of the first things that we want to talk about is features or functionality that we think we can change, and that is the default auto growth settings for databases. Which in fact has gotten quite a makeover in 2016.

Steve: Yep. Absolutely, so this is one that for many years it’s just frustrated me. Where you go into work on a database and you see that it’s set up with the default auto growth settings, which initially for many years for SQL Server it’s been 1 MB of growth on the data file and 10 percent growth on the log file. What you end up with on this is that over time you get a very fragmented database, because growing the database in small amounts leads to excessive fragmentation on disk and it leads to even more so when you’re shrinking your database. That’s one of those other naughty items, but we’ll come back and talk about that one a little bit later. The nice thing was, in SQL Server 2016, Microsoft updated this. So if you just create a database using all of the defaults you’ll get 64 MB autogrowth on both your log file and your data file.

Carlos: Yeah, I think that’s a logical change as things get bigger, but also things are moving faster too, so in the past maybe growing that much would cause some issues. You shouldn’t be noticing that so much now. Obviously that doesn’t remove your responsibility as a database administrator to be watching that, and taking care of those file growths, but at least the defaults give you a better starting point.

Steve: And the best practice is really to grow your files so they’re large enough that you don’t really have to use the autogrowth. However, it doesn’t always happen that way; people forget to grow the files and rely on autogrowth, and oftentimes that can lead to fragmentation. And this change in 2016, with the 64 MB autogrowth default, is definitely an improvement on where it’s been in the past.
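For reference, a minimal sketch of checking the current growth settings and switching a file to a fixed increment (database and file names are placeholders; pick a growth size appropriate to your workload):

-- See how each file is currently set to grow
-- (growth is in 8 KB pages unless is_percent_growth = 1).
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files;

-- Change a file to a fixed growth increment instead of a percentage.
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_log, FILEGROWTH = 256MB);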

Default Maintenance Plans

Steve: And next on the list, and this was one of my gripes, was default maintenance plans. Yes, specifically around shrink database, rebuilding indexes, and rebuilding stats. I’m not going to hit on shrink database much, we’ll hit on that a little bit later, but say someone who’s new to SQL Server realizes, “I want to set up some maintenance tasks. I want to make sure that my database is in good shape.” They go in and they right-click and get the maintenance plan wizard.

Carlos: Yep.

Steve: And when you do that maintenance plan wizard you get this nice set of checkboxes and you say, “I like this and I like that,” and you go through and check the integrity, and shrink the database, and I’ll reorganize the indexes, sure, why not? Rebuild them, update statistics, and clean up the history.

Carlos: Yeah, sounds like a catch-all, right? Like a one-size-fits-all.

Steve: I mean, those are all, well not all of them, but most of them are things that you’ll want to do at one time or another. But the problem you run into is that if you just choose all of the defaults without understanding what will happen, you get a maintenance plan that ends up in a default order where it first goes through and reorganizes all the indexes. And keep in mind what the reorganize is doing: it goes through and rearranges all of the pages so that the index can perform as well as it can without doing a full rebuild. And then in the next step it throws out that index and completely rebuilds it from scratch.

Carlos: [laughs] So you’re saying I may have duplicated effort there?

Steve: Yes. You could be putting in a bunch of effort that is completely thrown away. And then another thing: when you rebuild indexes, it also updates the statistics associated with those indexes.

Carlos: That’s right.

Steve: So once you’ve done an index rebuild, you have the absolutely best statistics you’re ever going to have on that index, because it scans the entire index when it gets rebuilt and the statistics are 100 percent scanned there. And then the maintenance goes and updates the statistics, probably using the default of 10 or 5 percent, and it builds you statistics that are not as good as the statistics you ahd when your index was rebuilt.

Carlos: And you’re doing that work all over anyway.

Steve: And you’re doing it all over. So you end up doing these with effectively moving and scanning all of your tables at least three times. This is one of those that I commonly run into when I’m doing performance tuning or performance analysis where I ask the question of, “Why are you reorganizing your indexes and then rebuilding them?” And they say, “I thought that was the way it should work.” As we can see, it causes a lot of performance trouble. A lot of excessive I/O.

Carlos: Sure, and then the other thing is, being part of the SQL community, you benefit from this. There are community scripts for this. The idea of the default rebuild index job is that it rebuilds all of those indexes; it’s no respecter of indexes, if you will. To back up for a second, if you want good care and feeding, in certain systems once the database gets to a certain size that job might take longer than your maintenance window. So then I’ve seen people just disable it because they’re like, “Oh, it takes too long, I can’t do it.” So now there’s no care and feeding at all, which is even worse. Having something that you can run regularly that only touches the indexes that need it and doesn’t sweat the small stuff, if you will, is probably better overall from a maintenance perspective.

Steve: Absolutely. And this is one of those things that I haven’t looked recently but I remember in one of the SQL 2016 previews there were some improvements on these default maintenance plans but you get to a point where you add them all in and you’re really doing more work than you need to be doing. So you need to understand that and not do it over and over and over again.
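As a rough illustration of the duplicated effort Steve describes, a rebuild already leaves you with fullscan statistics, so following it with a reorganize or a sampled statistics update just repeats or degrades that work. A minimal sketch with a hypothetical table and index name:

-- A rebuild recreates the index from scratch and updates its statistics with a full scan.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;

-- Running this afterwards with default sampling would replace those fullscan
-- statistics with sampled (lower quality) ones; it is redundant here.
-- UPDATE STATISTICS dbo.Orders IX_Orders_CustomerId;

-- A reorganize is the lighter-weight option for mildly fragmented indexes;
-- doing it right before a rebuild, as the default plan order does, is wasted work.
-- ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;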

Shrink Database and Autoshrink

Carlos: Sure. So the shrink database we actually talked about in episode 51 a little bit, let’s just hit that and tie it into the autoshrink as well.

Steve: This is one of those that I think was number one on the worst list at SQL Server Radio, shrink database or perhaps autoshrink, one of those two. But the thing is, shrink database and autoshrink are effectively doing the same thing: they go through and free up as much space as can possibly be freed from the database, and they pack everything in as tight as it can be, effectively without any real regard for performance. And the difference is, one of them you issue as a maintenance plan task or as a DBCC command, and you have control over when it happens. But with autoshrink, you don’t have control over that. SQL Server hits a point in time where it says, “I need to autoshrink.” Let’s say you have something going on that you want to have good performance with; autoshrink then runs and significantly impacts performance while it’s running, and then once it’s done perhaps impacts performance even longer, because the next time you need to insert any rows or change anything in that database, you then need to have an autogrow event. It’s slow. So you can get into a sort of battle back and forth between your autoshrink and autogrowth if you don’t have it configured well. So this is one of those where, if we could just completely get rid of shrink database and the autoshrink feature in SQL Server, I would probably be happy with that. However, I’d want to keep the ability to shrink a file, to go in and shrink a log file or a specific database file if you need to.

Carlos: Yeah, that’s true. Some of the log files do get out of control. Especially with bad care and feeding of your maintenance and your indexes, your logs are going to get big, potentially. I’ve often wondered where this came from because it kind of seems very Access-ish to me, because they had this idea of compaction, right, of the database and the Access people and my limited experience of them is, “You’re having problems? Compact it!” I think ti’s kind of the same idea, and I’ve told this story before, but I was sitting down with a SharePoint consultant at a client site. They were having some issues and I was the data guy to come in and talk with them. And the words that literally came off his lips were, “the reason your database is slow is because you’re not shrinking your database.”

Steve: Ooh, and that is so wrong.

Carlos: I was like, oh, okay, I’m not sure who you are but you’ve lost your credibility with me.

Steve: I think back to one of the very first SQL Servers I ever worked on; maybe it had a 50 to 100 MB hard drive. This was 25 years ago, and at that point in time shrinking your database might have been really important, because disk was incredibly expensive.

Carlos: Sure, that’s true.

Steve: But in today’s world it’s just not something that you really want to be doing.

NOLOCK Hint

Carlos: So the next one is the NOLOCK hint, and I think this is on the list, ultimately, because there’s a lot of misinformation out there.

Steve: Absolutely, yes. The NOLOCK hint is one of my peeves on SQL Server, actually. I see it used a lot, and really 99 percent of the time I see it used, people think that it’s doing something different than what it does. So, I’ve heard the statement, “But I want to run a query on the production system and not impact or block anyone else. Shouldn’t I just use NOLOCK?” And the answer there is no. The NOLOCK hint tells SQL Server to ignore other people’s locks, basically to read dirty or uncommitted data at that point, which can lead to missing rows, phantom rows, or uncommitted data showing up in results. But it doesn’t do anything to stop the locking or the blocking in any way on the query that’s calling it. So it’s the equivalent of saying READ UNCOMMITTED on the specific table it’s referencing.
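For illustration, a minimal sketch of the equivalence Steve describes (the table name is a placeholder); both forms read uncommitted data:

-- Table hint form: applies only to the hinted table.
SELECT COUNT(*) FROM dbo.Orders WITH (NOLOCK);

-- Isolation level form: applies to every table read in the session until changed back.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Orders;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;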

Carlos: Now, would that be your advice to people who want to move away from that, to transition the isolation level to READ UNCOMMITTED?

Steve: Well, effectively, whether you’re using NOLOCK on every table in your query or you’re just using the isolation level of READ UNCOMMITTED, the result is exactly the same. But what I would, if you’re running into this, and you can’t fix –

Carlos: Let me go back. Ultimately the idea is that it’s not giving you what you want. To your point, people think, “Oh, NOLOCK means no impact.”

Steve: Right.

Carlos: So that isolation level is not quite the same thing either, right? It’s not preventing impact.

Steve: Right. We’ll get to alternatives here in a second. But let’s take the example of your bank account and you have the bank account, probably wherever you bank is storing that in a database somewhere. Let’s say that it’s SQL Server and on payday you happen to get your paycheck deposited and you see a nice lump of money there. And then you run a report or somebody runs a query while an index is being defragmented or a page split is occurring, it could show that you have two paychecks on that query. Then a moment later you run that query again and you have no paycheck on that payday.

Carlos: [Laughs]

Steve: Now if it was me and that was my bank, and they’re telling me that I have twice my normal paycheck or no money in my account, I would be really concerned and perhaps consider changing banks at that point.

Carlos: Yeah, it wouldn’t take too many times for that to happen.

Steve: Yep. Exactly. So there’s some solution, some way around this. If you’re finding that you need to use NO LOCK, either with NO LOCK hint or READ UNCOMMITED, because you have queries that are running and not returning, which is one of the almost legitimate reasons that people do it quite often. The thing you really need to look at, at that point, is what’s the root of the locking or blocking and figure out what’s causing that. For instance, you might have a really big update statement that’s being run and it’s locking it for everyone. If that’s something you can break up and run smaller or figure out how to run faster, you can eliminate the locking so you don’t have to use NO LOCK when you’re running the queries.

Carlos: Sure

Steve: Another alternative you could take a look at is the READ COMMITTED SNAPSHOT isolation level. If you put your databases into read committed snapshot isolation, there’s less locking going on at that point, and it reduces the need for NOLOCK. Now, one of the things that people argue on this is, “You’re never going to see the phantom reads or the duplicate rows when you’re using NOLOCK.” Well, there is an example I did where I had a table with just a million rows in it, and it had a clustered index on a GUID, a uniqueidentifier, not your best practice, but it was a great way to show fragmentation and page splits and things. And I went through and I was just updating that uniqueidentifier to a new value, which caused rows to be shifted around. I had that running in one window, and I had another session open in SQL Server Management Studio, and all I did was count the rows in that table. And I hit refresh on it and refresh on it. Sometimes it would say I had a million rows in that table; sometimes it would say I had a million and ten. Other times it would say I had 990,000 rows in there, and sometimes it would say I had 995,000.

Carlos: And it was the same number of rows, you were just updating?

Steve: Exactly, because what happens is, when you’re updating a row and causing it to go from one page to another, and you’re using NOLOCK, you might scan that row on the first page and then it shifts to the second page before you get to it, and you count that row again. So it’s a very legitimate case of, if you’re having blocking issues and you turn NOLOCK on, you might get those missing rows or duplicate rows returned.

Carlos: Wow, very interesting. I was one of those who thought that, and it does run in a lot of circles; in fact, just yesterday somebody was like, “Yeah, I’m running these big queries but I’m using NOLOCK so it should be good.” And it’s just that misinformation about what the expected result is.

Steve: Yep. I used to work with a guy who always jokingly referred to NOLOCK as the “go faster switch.” Anytime you’ve got a slow query, you throw a NOLOCK on it and you get faster results. Well, you might not get the results you’re looking for.
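For anyone who wants to try this themselves, a rough sketch along the lines of Steve’s demo (names and sizes are made up, not his original script):

-- Session 1: a table clustered on a uniqueidentifier, so updates move rows between pages.
CREATE TABLE dbo.NoLockDemo
(
    Id uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
    Filler char(200) NOT NULL DEFAULT 'x'
);

INSERT INTO dbo.NoLockDemo (Id)
SELECT TOP (100000) NEWID()
FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;

-- Keep shuffling rows around by changing the clustering key.
WHILE 1 = 1
    UPDATE TOP (1000) dbo.NoLockDemo SET Id = NEWID();

-- Session 2: run this repeatedly; the count drifts above and below 100,000
-- as rows are read twice or missed while they move between pages.
SELECT COUNT(*) FROM dbo.NoLockDemo WITH (NOLOCK);

-- The alternative Steve mentions, row versioning at the database level:
-- ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;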

Table Variables

Carlos: So another kind of misinformation one, and this goes back to our previous episode with Wayne, is table variables. It’s just one of those where, hey, they’re in memory and everything is good. But upon further inspection, maybe not so good.

Steve: There is a lot of misinformation out there about this one. And there are people who will argue with you that if you use a table variable instead of a temp table, it’s not going to use tempDB. And that’s just not true.

Carlos: Right. So I think for our more expanded discussion on this you can go back to episode 48 with Wayne and take a peek at that where we talk about the differences there.

Steve: Or, if you’re going to the summit this year you can see him talk about it live. Which I’m looking forward to this year perhaps going to myself.

Carlos: Because he’ll go through and show you all the logic behind it. You’ll do more than talk about it.

Steve: So the general rule here is to stay away from table variables and use temp tables instead.

Instance Settings for the Processors Tab

Steve: Alright, the next one on the list, and this is one that a lot of people go into thinking it’s the go faster switch, and that’s the instance settings on the Processors tab.

Carlos: So you right-click on the instance, choose Properties, and on the Select a Page pane you click on Processors; that’s what we’re talking about there.

Steve: Yep, and this is one of those that unless you really, really know what you’re doing, in almost all cases whatever you change on this screen is probably going to make things worse performance-wise.

Carlos: Sure.

Steve: And there’s a lot of documentation out there on it but in some very few fringe cases that perhaps Boost SQL Server Priority might help, but they’re very, very, very rare. And things like, well the default is to automatically set the processor infinity mask for all processors. Yeah, that’s a good thing you should leave it at the default. Automatically set the IO affinity. Yeah, leave it at the default. Don’t mess with those. Don’t go in unless you’ve got a really good case that you’ve tested in a development environment and you know it’s going to work well, don’t mess with those. Don’t mess with the worker threads. Don’t boost your SQL Server Priority, because that’s not the magic “go faster” switch.

Carlos: Exactly, you’re just treating it like an operating system level operation and again, you get mixed results there. In one interesting case there that someone did tell me about his, about making a change there, they had a big server…24 CPUs… let’s say 2008 standard edition could only take 12 CPUs if that memory holds correct. So they just installed two instances and had one instance use the first twelve and the other use the second 12.

Steve: Oh, there you go.

Carlos: But again, a fringe case.

Steve: Yep, an extreme fringe case. The simple thing is, if you pop up the properties and look at the Processors tab, look but don’t touch.

Carlos: That’s right.

Steve: Don’t change anything there.

Carlos: Yes, because when you start doing that you may start sending a few more things to the SQL Server log, which is the next item on our list.

SQL Server Log File Viewer

Steve: There are a couple of things on this one that I think could be improved. Regularly monitoring your SQL Server logs is really a core DBA task, and the SQL Server Log File Viewer is where a lot of that happens, but there are some things in it that are just difficult and painful to use. One of those is when you bring up the log file viewer and you bring up the list of log items up top, you get this little tiny gray box at the bottom where the log message is going to be displayed. And oftentimes most of the actual message is hidden off the screen in that little gray box. So you need to take that and slide it up until it takes up like half your screen before you can see the log message. Now, the next thing that I think is really poorly done with it is that it truncates errors that come back from SQL Server Agent jobs, or other jobs, at 4,000 characters. And oftentimes I’ll be looking at a job that failed sometime

Carlos: And the first 3,000 characters are gobbledygook or tell you about the system.

Steve: Yep. And according to those 4000 characters, everything looks great. And somewhere after those 4000 characters there’s some useful error message that you can’t get to.

Carlos: Yeah, that’s the other side, sure.

Steve: The other one on this would be better support for carriage returns and line feeds. Basically, when you get the error output from a stored procedure or a query that’s run, oftentimes it just piles it into the log file viewer without any carriage returns or line feeds. And I end up copying it off into something else and adding carriage returns so I can see what’s going on.

Carlos: Formatting that a little bit.

Steve: Yep. So, really it’s the only way to view the SQL server log rather than writing T-SQL code to query it or reading it as a text file or using some kind of third party tool. So it’s one of things that is pretty ore to what people use every day and it would be nice if that was a little bit better.

Carlos: Sure, and if Mike Fal was here he’d be telling us to use PowerShell to read that.

Steve: Oh, good point. I forgot about that one, yeah. I’m sure Mike will razz me if he hears this.
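For reference, a minimal sketch of the “write T-SQL to query it” alternative Steve mentions, using sp_readerrorlog (arguments: log file number, 1 for the SQL error log or 2 for the Agent log, then two optional search strings):

-- Read the current SQL Server error log, filtered to backup-related messages.
EXEC sp_readerrorlog 0, 1, N'backup';

-- Read the previous log file with no filter.
EXEC sp_readerrorlog 1, 1;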

Spatial Data

Carlos: So another one, and with this feature, everything I’ve heard about it from people who are using it has been very positive, so I might be razzing a feature that doesn’t need to be razzed, but I’m bringing up spatial data. So spatial data, again, the people that like it like it a lot. In fact, I don’t have a ton of experience with it, so while I was at a past Summit I went to Hope Foley’s session on spatial data and I thought, okay, let me see what this is all about. And without question, there are some things that you can bring up and show in SQL Server Management Studio. My beef with it is, why would you be showing that kind of visualization in SQL Server Management Studio? Now, admittedly, the spatial data types, geometry and geography, have been around since SQL Server 2008, so that was before the reporting components had caught up, but particularly now we have PowerBI and all of the other visualizations in the reporting tools. In my mind, let’s store the data that we need and we can visualize it in the reporting layer.

Steve: Very good point there. I think that, a tool like PowerBI with the right visualization can use the spatial data in amazing ways. Far better than you’ll ever see in SQL Server Management Studio.

Carlos: Yeah, exactly. And with all of the things that come along with it, the spatial indexes and hints and compression, they even have different query plans for spatial data, I think again it’s one of those things where, if it makes sense for your organization, I would try to see if we can get that reporting done in a different layer.
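For anyone who hasn’t touched the spatial types, a minimal sketch of storing points and letting a reporting layer do the visualization (the coordinates are made up):

-- Store locations as geography points (latitude, longitude, SRID 4326).
DECLARE @store geography = geography::Point(37.7749, -122.4194, 4326);
DECLARE @customer geography = geography::Point(37.8044, -122.2712, 4326);

-- Distance in meters; results like this can feed PowerBI or SSRS maps
-- instead of being rendered in the Management Studio spatial results tab.
SELECT @store.STDistance(@customer) AS distance_in_meters;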

Steve: And this might be a good time to remind people of the hashtag #SQLPodcast. If someone doesn’t agree with us, and I kind of throw it out on this one because I don’t care about spatial data so much, but if someone doesn’t agree throw it out on the hashtag and tell us what you think.

Carlos: There you go. Again, those who have been using it are going to hate me for it. So if you have experiences with it let us know.

SQL Server Components

Carlos: Okay, so we are getting outside of SQL Server with these next ones. That is, they’re not pure SQL Server, but components that get installed with SQL Server. The first one is the data tuning advisor.

Steve: Yep. And the full name on it is the Database Engine Tuning Advisor. So even though we’re saying it isn’t core to SQL Server, it does sound pretty core if you look at the name.

Carlos: Sure, very good. I think you can only get it with the SQL Server install. I don’t think you can get it as a standalone install.

Steve: Exactly.

Carlos: So, again, I think the tool itself is very neat. I think it’s one of those where, the way you use it, you can kind of lead yourself on a bit, in the sense that you’re not providing the tuning advisor with enough information to give you good recommendations as a result; the workload is just not big enough. Or, if you do feed it large enough data sets, it kind of chokes and people’s patience runs out before it finishes processing.

Steve: I’ve seen the same thing with that.

Carlos: Now, previously you had to take a Profiler trace, or some other file or export of the workload, and feed it into the tuning advisor. At some point, I think it was in 2014, they added a plan cache option. That was the tool I used in preparation for this episode, and I saw, oh, they added the plan cache. It gives you a slightly larger view of the environment, or at least the things that are super important, assuming that your database has been up for a while, etc. There are caveats, but this is a step forward in my mind. You get a better view, potentially.

Steve: Yep.

Carlos: But then it spits it out and unfortunately again I think people don’t necessarily review that output and say, “does this make sense to me or do I understand the changes that are happening?”

Steve: I think the tuning advisor, for what it is, is an amazing piece of software engineering to be doing what it’s doing. I think it just doesn’t go all the way to do, I guess, what it needs to. One of the things I see is it will suggest indexes and statistics, indexed views, partitioning, and that’s about it. And there are a lot of other things you need to understand for the performance of a database. For example, what’s happening in your T-SQL code? Or is there something in the stored procedure that you’re doing? It’s completely isolated from that. What I see is people jump into it expecting that it’s going to give them everything that could possibly be wrong with performance, and it really doesn’t do that.

Carlos: Yeah, it has a more narrow view, if you will.

Steve: And another thing that’s prevented me from using it a few times is it has problems with temp tables. Specifically when you’re not specifically declaring a temp table and you’re doing INSERT INTO and the temp table name, it doesn’t work at all. It just crashes and throws extremely confusing syntax error problems. So if you’ve ever got a query that calls a stored procedure that’s possibly doing something with temp tables and it works great in SSMS but in the tuning advisor it says, “syntax error”, well it could be that it’s using the INSERT INTO syntax to insert into a temp table. And then the other thing that I don’t like about the tuning advisor but there’s an option in there that says, “Just apply all the recommendations.”

Carlos: [laughs] Yeah that’s true. Like, “Hey! Trust me! What could go wrong?”

Steve: And it’ll go build a bunch of statistics and indexes and it’s very dangerous and can be very wasteful. Just like when you have the, when you’re looking at the actual execution plan and it has recommended indexes. You don’t want to apply all of those. Same thing with this. You don’t want to apply them without reviewing them and looking for duplicates or getting a good understanding. Otherwise you’d get a lot of overhead there with duplicate indexes.

Carlos: Or you create those indexes and you’re going to be locking up your tables in the middle of the day. All of a sudden your users are complaining because they can’t get in. And they’re like, “hey, what’s going on?”

Activity Monitor

Steve: That brings us into the next one, which is Activity Monitor.

Carlos: Yes, Activity Monitor. This is another one of those that I think is actually pretty cool; it just may not get applied quite correctly. So Activity Monitor, it seems like it’s been around forever, right?

Steve: I think I stopped using it a while ago, so it does feel like it’s been around for a long time.

Carlos: Yes, that’s right. But it has the graphs at top so you can see the CPU percentage and kind of gives you some insight into your IO and into the batches. So that was kind of useful. I could just see how busy is my SQL Server at this moment. Then it kind of has these screens underneath that shows you different components. Where things are slow or intensive but it just kind of leaves you there and all of those columns are not super explanatory. They don’t really help you find the issue. The other reason I really fell out of favor with the activity monitor. I guess I should say that I talked with some of the Microsoft folks and they actually liked it. I guess it’s still their go-to, and that’s fine. If you know of components there and know what you’re getting back, yeah, it may still work for you. However my experience was that I worked in a shop, so the managers and the developers actually knew about Activity Monitor and so they would all have it open, just a tab in their SQL Server, and I don’t know if ti was random or if they were trying to run something, but it’d be like, it seems slow. So they’d look at it. And again, in most environments the CPU is not all that busy, or it’s a consistent state, generally. And then all of a sudden it would jump up 5 or 10 percent and they were like, “Oh my gosh! The CPU is at 50!” I was like, what do you want me to do about it?

Steve: And that could be completely normal for where it’s at every single day. But because they see the change, they look at it like that’s gotta be the cause of the problem, and they end up chasing the wrong thing. So I think part of the key to that is really understanding what the different graphs and reports in the Activity Monitor mean. And that’s one of those things, I’m just going to do a quick side plug on Database Health Monitor here, because I have a lot of reports in there and I had a similar problem. People would ask me, “What does this report mean?” or “What does this chart mean?” So what I ended up doing with Database Health Monitor is putting a help link on every single page that goes and describes the details around what that report is doing and how it works. And I think if the Activity Monitor had something like that, that would be a big improvement.

Carlos: That would be incredibly helpful. And to your point there, some of it is just knowing your system. In episode 28 with Erin Stellato we talked about baselines and how you can better know your environment. And admittedly, the Activity Monitor might be a place to get a feel for what those baselines might be. But if you are using Activity Monitor and it’s your monitor of choice, hashtag #SQLPodcast and we’d love to get that feedback.

Steve: Tell us what you love about it. Maybe we’re missing something here.

Carlos: There you go. Maybe there’s some feature we haven’t explored. So that kind of wraps up the features that we wanted to talk about.

Steve: So I think that’s one of the #SQLFamily questions that we ask: if you could change one thing about SQL Server what would it be? Well, here’s if we could change ten or eleven things. This is what it would be.

Carlos: That’s right.

Steve: And I hope we’ve done this in a constructive way. We’d love to see these things change but we’ll keep using the product even if they don’t.

Carlos: Yeah. That’s right. I am not interested in spinning up too many postgres instances all of a sudden as a result of this conversation.

Steve: [laughs] No thank you.

Carlos: Yes, well again, if you liked what we talked about, please let us know. I guess we do want to give a shout-out to Alberto S. Gonzales; on episode 54 he liked what we had to say. He said we were crushing it, Steve.

Steve: Crushing it, nice!

Carlos: I’m sure that was the CEOs that were crushing it. Yeah again, so give us that feedback and let us know. We’d love to hear from you. If there’s other topics you remember on the SQLDataParners.com/podcast there’s also a mechanism to leave us a voicemail, if you will. Through your computer you can leave us a voice message. Or leave us a comment, that’s another way to interact with us.