Episode 40: Squeezing Data With Compression

Compañeros! Did you know that according to an unofficial Twitter poll, only 60 percent of SQL Server Enterprise users said they took advantage of the compression features? In this episode of the SQL Data Partners Podcast I chat with Jason Brimhall about why he thinks that is. He also answers your SQL Server compression questions: Why is it a bad idea to compress everything in your database? How does compression work? Why doesn’t compression always help? He also reveals a case study where compression moved the performance needle the wrong way.

This episode is sponsored by DBMaestro

Episode 40 Quote:

“…Most of the time, because of the huge decrease in IO that is occurring, you see an improvement in performance. Queries improve dramatically in performance. I have seen, consistently, five to six times performance improvement when you compress tables.”  – Jason

Listen to learn…

  • The changes to compression in 2016
  • How compression affects CPU cycles and performance
  • The link between compression and indexes
  • When to use compression
  • How to tell if compression will work for your database
  • The deprecated SQL Server feature that compression relies on
  • The importance of test environments when using compression
  • The company motto that Jason says is the best advice he’d ever received
  • The mid-nineties movie that Jason would love to live out

As always, please join me on the trail by starting a conversation below or connecting on Twitter. If this podcast is helpful to you, please leave a review on iTunes so others can find it as well. Subscribe to the podcast via iTunes, Stitcher, and Podbean.

About Jason Brimhall
Jason is a SQL Server Microsoft Certified Master/Microsoft Certified Solutions Master and Microsoft SQL Server MVP. He’s currently employed by SQL Solutions Group where he consistently finds ways to maximize query performance. Jason is the author of two books.

Resources

Jason on Twitter
SQL Server Society of Las Vegas
SQL Server 2016 and Compression
Page Compression
Row Compression
Unicode Compression
SQL Server Data Types
SQL Server Extended Events

Carlos: So Jason, welcome to the show!

Jason: Hi! Glad to be here.

Carlos: So you’re quite the travelling man. You’re joining us all the way from Madison, Wisconsin!

Jason: Yeah, it’s clear down there at the bottom tip of South America. It’s the furthest south I’ve ever been.

Carlos: [laughing] Man, I’m going to have to go back on my geography there. Seeing as I spent two years in Argentina, I don’t remember that being at the southern tip of South America.

Jason: Well, it depends on which South America you’re talking about. [laughing]

Carlos: Well, there you go. So ultimately I’m hoping that your data compression knowledge is a little better than your geography, because what we really want to talk about today is database compression.

Jason: Well, actually, as you segue into it, it works very well, because we just compressed the entire globe into the northern hemisphere, into the United States.

Carlos: [laughing] There you go. So I guess there are some similarities then, right? We’re going to take all that and squeeze it down into only what’s needed. I guess it’ll be a lot of mountain ranges at that point.

Jason: Yeah, lots of peaks and valleys.

Carlos: So, database compression, right, is ultimately an Enterprise-only feature. Let’s get into that a little bit. Ultimately with compression we’re trying to shrink our database. Why wouldn’t we always want to compress everything? Or, what decisions or factors go into deciding what to compress?

Jason: So that’s actually a very good question, and it’s kind of a hard question to answer, to be honest, because ideally we’d want to compress everything, right?

Carlos: It makes sense. You’re talking about these huge systems, and sure, disk is getting cheaper, but with all this data being everywhere, being able to compact it seems like it would make sense.

Jason: Yeah, you know, everybody’s always trying to reduce their storage footprint no matter which way it is. I mean, at the SAN level you have all these de-duplication systems that de-duplicate at the block level to save on storage, right? And the same is true inside the database. So you sit there and go, “Well, why wouldn’t we want to de-duplicate or compress our data and remove all that extra space?” Well, one reason why we may not comes from the hard knocks of experience. I had a client where we were able to take a 700 GB table and reduce it down to 135 GB. We’ll get into some of the internals of compression in a little bit and explain how that’s possible and how it really works. But this table had a query that was executing against it, they had a lot of data in there, they were running low on disk space, and it was just this one table that was chewing up most of the disk space. The client had no actual test system. Their dev, test, stage, and production were all the same server: their production box. But that never happens in real-world scenarios, right?

Carlos: Yeah, I’m sure. None of my compañeros have instances like that.

Jason: In real-life situations, you do with what you can afford, and that’s one server to service up the production data. So in an effort to try and save them some money, we went and compressed that table. As a baseline prior to compressing, I tested the query that was hitting that table, and it would take 21 seconds or so to execute prior to the compression. 700 GB, mostly performing a clustered index scan the way the query was written, and we couldn’t rewrite the query for them because of application constraints. So we compress the table, get it down to 135 GB, and after that we run that same baseline query, and it’s now suddenly taking 10, 15, 20 minutes to execute.

Carlos: Oh wow.

Jason: And the only change was the compression. We only implemented page compression. So the rollback on that is, “Hey, this is the only change that we made, so let’s confirm that this is actually what we’re seeing.” As a test, we tried to rewrite the query and optimize it. Still running slow. And after that, I said, “OK, the change was to compress the table. We compressed the clustered index. I’m going to uncompress the clustered index, remove the page compression, and then rebuild the index to make sure that every page is touched to remove that compression.” And we reran the query and it was back down to 20 seconds. Immediately. So the moral of the story, besides test, test, test, test, is that not every system is conducive to compression, and the only way you can really see whether it’s going to get you the benefits you need is to test. I mean, this wasn’t an update. This was just a pure read query, and when you do reads, less data and less IO should equate to better performance because you’re reading fewer pages.
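
A sketch of the change and the rollback Jason describes, with hypothetical object names (the episode doesn’t name the client’s table or index):

```sql
-- Hypothetical names for illustration only.
-- The change: page-compress the clustered index.
ALTER INDEX CIX_BigTable ON dbo.BigTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- The rollback: rebuild with no compression so every page is
-- rewritten uncompressed, as described above.
ALTER INDEX CIX_BigTable ON dbo.BigTable
    REBUILD WITH (DATA_COMPRESSION = NONE);
```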

Carlos: Right, that was my other thought there. With the compression we’ve talked a little bit about the disk. But of course, SQL Server reads everything from memory, so if it pulls that smaller footprint into memory you could, in theory, fit more pages in there.

Jason: Yep, you can definitely fit more pages into memory, and I tested this particular case with both a cold test and a warm test. So everybody understands what that is: you run the same query multiple times to make sure the pages are in cache and we’re pulling from memory and not from disk. And no real difference in performance there. Now, you might say, “Well, obviously something else must have been happening that caused it to slow down.” Well, I monitored the CPU as well during this time, and there was no noticeable increase in CPU utilization during this particular query. I forget the exact metrics, but when this query was running before, the server was running at 50 percent CPU utilization. After the compression, the CPU utilization actually dropped to about 40 percent, and running the query it stayed at 45 percent. So no noticeable uptick in CPU utilization. That digresses into another talking point I have about compression, and that is CPU utilization.

Carlos: That seems to be the big hit, right?

Jason: So a lot of people will sit there and say, “Well, don’t compress, because you’re going to see a big hit on CPU.” And there’s a caveat to that, right? The caveat being that, yes, it does take a little bit of CPU to uncompress those pages and expand them out as it reads them. Understood. However, consider that in most cases those 20-second queries should have improved to 3 or 4 seconds or even faster. If I have decreased my CPU stress for 16 seconds, I have freed up CPU. Now, maybe I see an uptick for the 3 or 4 seconds of that duration, maybe I don’t. If a query has executed for 21 seconds, it has to have utilized X quantum of CPU, right? Well, if I can make it run in 3 to 4 seconds, now it’s using a different quantum of CPU. Technically fewer cycles of CPU, and even if it upticks, say, from 50 percent to 55 percent for those 3 to 4 seconds, then it’s only 3 to 4 seconds of an uptick in CPU usage as opposed to 20 seconds.

Carlos: You’re kind of getting back the rest of those cycles, in a sense.

Jason: Right. So it’s kind of a give and take. I have improved the performance of the query and removed those CPU cycles. Occasionally I may see an uptick in CPU cycles for the duration of that query, and occasionally I will see lower CPU cycles. So the actual CPU utilization is going to depend largely on the system you’re trying to compress, and every case is going to be different. Your mileage may vary. Let’s throw another SQLism out there: it depends. Okay? You have to test. So what do you do? You set a baseline, you run a known set of queries, you compress, you run that known set of queries again, and you compare the results before and after.
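
That baseline loop might look something like this sketch; the table and query here are stand-ins, not the client system from the story:

```sql
-- Show IO and timing numbers for each run.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- 1. Baseline: run a known query and record reads and duration.
SELECT COUNT(*)
FROM dbo.SalesHistory
WHERE OrderDate >= '20150101';

-- 2. Compress the table (page compression shown).
ALTER TABLE dbo.SalesHistory
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- 3. Re-run the same query and compare the before/after output.
SELECT COUNT(*)
FROM dbo.SalesHistory
WHERE OrderDate >= '20150101';
```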

Carlos: Measure those. Interesting. So that kind of sounds like it’s a little bit of a crapshoot. Kind of, “You try it in your environment and see what happens.” Are there any, maybe not hard and fast rules, but general guidelines where you can say, “I’ve seen favorable compression when X”?

Jason: I will say that it’s less of a crapshoot. That situation in the story I started with at the beginning is kind of the edge case scenario, and I like to tell it because it underscores the necessity to test in your environment. Now, if that’s an edge case, we’re talking about the 99.99 percent of the time where it’s not: where you don’t see that huge difference in performance where all of a sudden it all hits the fan and nothing performs like you expect it to. Most of the time, because of the huge decrease in IO that is occurring, you see an improvement in performance. Queries improve dramatically in performance. I have seen, consistently, five to six times performance improvement when you compress tables. And the five to six would typically be with page compression. You’re going to see different performance gains if you use a different compression algorithm, say row compression, or columnstore compression, which is kind of like page compression on steroids. And then, to throw the next generation in, there’s in-memory Hekaton, which is also built on kind of the same principles as columnstore, using that xVelocity technology and taking it to the next level, compressing the data super tiny so it can fit in memory better.
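
SQL Server also ships an estimator you can run before committing to anything; a minimal sketch, with a hypothetical table name:

```sql
-- Samples the table into tempdb and reports current vs. estimated size.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'SalesHistory',
    @index_id         = NULL,    -- NULL = all indexes
    @partition_number = NULL,    -- NULL = all partitions
    @data_compression = 'PAGE';  -- or 'ROW' or 'NONE'
```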

Carlos: Yeah, so you can put more in there.

Jason: So, as Microsoft has touted, you can see thirty to one hundred times improvement in query performance.

Carlos: But I think it’s important to point out that when we talk about compression as the SQL Server feature, we’re not talking about in-memory tables, correct? While that feature, or the columnstore index, uses compression, the way you would set those up and implement them are two different features.

Jason: Right, those aren’t using the compression syntax; they use a different syntax. But compression is a building block for columnstore, and columnstore is a building block for Hekaton, right? So they’re all kind of intertwined, and really, when you look right down at it, they all trace back to a deprecated feature that you can’t turn off. It’s still there. That’s the vardecimal storage format. That’s the original SQL Server compression that’s deprecated, but even in 2016 it is still enabled and you cannot turn it off. That’s because they all require that compression algorithm to be there for each of them to work. While the syntax is technically different for each of those, they are building blocks.
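
For the curious, the vardecimal format Jason mentions is still switchable per table through sp_tableoption; shown here purely for illustration with a hypothetical table name, since the feature is deprecated in favor of row and page compression:

```sql
-- Deprecated since SQL Server 2008; prefer row/page compression.
-- The database must also allow it via sp_db_vardecimal_storage_format.
EXEC sp_tableoption 'dbo.SalesHistory', 'vardecimal storage format', 1;
```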

Carlos: So talking about data types, are there certain kinds of data or datatypes that wouldn’t be available for compression?

Jason: So, the types of stuff that you want to avoid for compression are going to be any of the data types that are going to be stored off row. And when we say off row, that means…

Carlos: Varchar max…

Jason: Right. Varchar max, nvarchar max, varbinary max: the MAX types. And it’s not always necessarily MAX, but anything that we have to store, once we get up to 2000 or 4000 bytes depending on the kind of data being stored on that page, to the point where it has to overflow the data from that page to another page, where you just have a pointer and it’s filling up another page with all that data. That’s what we’re talking about.

Carlos: So maybe just to circle back, it might be helpful to explain that a little bit more. As SQL Server writes, we have these 8KB chunks called pages. When you have a column like an integer or a char(10), it’s going to write that into the page. But as you mentioned, with any of these MAX columns, what it’s going to do is say, “I don’t really know how big that thing’s gonna be, so I’m going to give it its own space.” For that single record you start writing, and it says, “Oh, varchar max, I’m going to give it another page.” And now you have two pages for that single record.

Jason: Yeah, and there are times, even with a MAX, where we can spill that data out into more than just a couple of pages, right? We can have many pages holding the data for just that one row, one column. That stuff can’t be compressed, and the reason goes back to how SAN de-duplication works. Without getting into the weeds too much, the way compression works in SQL Server is: first you have this thing called row compression. It’s one of the options for compression. What row compression does is remove the spaces, all the extra padding characters, from within the row, so it reduces the amount of space within the row. And then you have page compression, which first does row compression, removing all that extra space from the row and trimming it up that way, and then it looks for the same data appearing in multiple places on the page and does a data de-duplication. So it compresses the data by removing all that space, but then it de-duplicates it, leaving a token in each of the slots where it has replaced that data. So now it becomes a whole lot smaller and you have a reference pointer, much the same as with SAN technology where they de-duplicate the data: you have one known good value and the rest of them have a pointer back to that known good value. It removes a lot of space.

So with that explanation in there, when you have one row with one column that has spilled over into two or three pages, you have this data that’s all on those other pages. It can’t be compared to any other columns or any other rows on those other pages in order to de-duplicate the data, because as far as SQL Server is concerned, it’s unique data.
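
One way to see how much of a table lives off-row, beyond the reach of row and page compression, is to look at its allocation units; a sketch with a hypothetical table name:

```sql
-- LOB_DATA and ROW_OVERFLOW_DATA pages hold the off-row values that
-- row/page compression skips; IN_ROW_DATA is the compressible part.
SELECT p.index_id,
       au.type_desc,
       SUM(au.total_pages) AS total_pages
FROM sys.partitions AS p
JOIN sys.allocation_units AS au
    ON au.container_id = p.partition_id
WHERE p.[object_id] = OBJECT_ID('dbo.SalesHistory')
GROUP BY p.index_id, au.type_desc;
```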

Carlos: So we’ve talked a little bit about the CPU bump that you may hit. But I guess I’m thinking, now that it’s doing that, are there other side effects, with index rebuilding, for example? Any effects there? Is it uncompressing, resorting, and recompressing? Or how does that work?

Jason: That’s a good question. The way the indexing is going to work is that when you’re actually running a rebuild or a reorganize, that’s where you’re going to see some sort of uptick. It is going to require more resources, because it does have to look at each page. And understand what happens when you rebuild these pages, right? What’s happening is that it’s sorting the pages, not the data on the pages, unless it finds that the data has changed and data needs to move between pages. That’s where you start to have the uptick in resource utilization: when, because of splits and such, you have too much free space on a page and data gets moved between pages. There, just like in normal use, I’ve seen an uptick of maybe five percent in CPU utilization. Time-wise with index rebuilds, I can’t honestly say I know how much longer they take, because I set up intelligent rebuilds where I set a time limit for my index rebuilds on a nightly basis, and I just check to make sure they got rebuilt within the time limit. I haven’t dived terribly deep into seeing whether, say, this index took two minutes and now that it’s page compressed it’s taking two minutes and thirty seconds. When I’m looking at just a few seconds here or there, it would have to be something dramatic for me to notice that it’s taking considerably longer. Otherwise it’s statistically non-different.

Carlos: Right. So I think if you have good care and feeding of your indexes, and you’re running the rebuilds, maybe not off hours, but generally in the non-business hours when that works for your environment. Good care and feeding, and you’re watching that.

Jason: And along the same lines, this ties back to the principle discussed earlier about the MAX datatypes, where you can’t compress those pages. Just because you have that MAX datatype there doesn’t mean that no pages get compressed. Some pages are still going to get compressed within an index or a table, so the same token still applies. You’ll have pages at different stages of compression, none, row, or page, within an index. So not every single page is going to be completely compressed. Say you have 100 pages but only 5 of them meet your compression requirements. You’re not going to see a huge bump in CPU utilization because only 5 pages were affected. Hopefully it’s not that few if you’re going to page compression.
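
You can check how many pages actually came out page-compressed with sys.dm_db_index_physical_stats in DETAILED mode; it scans every page, so run a sketch like this (hypothetical table name) in a quiet window:

```sql
-- compressed_page_count vs. page_count shows how much of the
-- index really ended up compressed.
SELECT index_id,
       page_count,
       compressed_page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(),
         OBJECT_ID('dbo.SalesHistory'),
         NULL, NULL, 'DETAILED');
```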

Carlos: Yeah, we like to see that number a little higher. So, any differences in compression? In 2016 we’re talking about all kinds of new features and things, and compression’s not one that I’ve heard about. Any changes between the versions?

Jason: There haven’t been any real significant changes in the base compression feature since 2008 R2, when they added Unicode compression for nchar and nvarchar.

Carlos: So 2008 R2 was the last big change to compression.

Jason: Right, that was the last big change there. And you know, there’s justification there for Microsoft: this is a building block for the other big moneymakers, your columnstore indexes, which in 2016 can now be writable. It was more important to make changes along those lines under the covers, with that vardecimal format and compression as the base building blocks. I don’t see it changing too terribly much. That said, there are still some hidden things in 2016, so maybe there have been some changes.

Carlos: Yeah, that’s true. It’s still kind of a mixed bag as to what’s going to be available, and they continue to work hard to get that out the door. So, in an unofficial Twitter poll, Kevin Kline asked who among those with Enterprise Edition, and ultimately compression is an Enterprise-only feature, was using compression. Again, it’s an unscientific poll, but he found that 60 percent of those who had it were using it. He was thinking it was going to be closer to 20 or 30 percent, so it actually doubled his expectation. Even so, that number kind of seems a little bit low to me. What are you seeing as an adoption rate, and why do you think those numbers aren’t higher?

Jason: I think a large part of it is that, one, there are still a lot of people who don’t actually understand what compression is. That’s one part of it. Two, we have a lot of mom-and-pop shops that don’t have a lot of time to investigate. So, “if we haven’t heard about it, it may be this grand old smoking gun that we could possibly use, but how do we know that it’s going to do any good?” And then the other part of it is, being an Enterprise feature, it costs a little bit more money. A lot of shops try to get away with Standard Edition features.

Carlos: Sure, but I think he was asking about those who had Enterprise Edition. So 100 percent of his pool had Enterprise, and 60 percent were using it, when he thought it was only going to be 20 or 30. Of those Enterprise people, why do you think it’s so low?

Jason: It primarily comes down to education, first. And secondary is going to be time and money: a) time to investigate and test, and b) money to have a test environment and to spare the cycles of the employees to do that kind of stuff.

Carlos: Fair enough.

Jason: I was trying to look real quick, because I thought I had heard somewhere that in 2016 they dropped it from Enterprise to Standard. I can’t find anything in the documentation that says it is an Enterprise-only feature in 2016, but don’t quote me on that yet. I’ll have to research that some more so we can be absolutely certain.

Carlos: And if that information becomes available, we’ll try to put that in the show notes for today’s episode.

Jason: Here is something that did change in 2016: Unicode compression was added previously, but they have extended the Unicode support for compression. So that’s a hidden feature I hadn’t known about. It’s a change, but you have to be using Unicode for it to make that big of a difference. But let me research, because I see nothing in the documentation that says it’s Enterprise-only in 2016.

Carlos: Okay. Well, I think even with some of the features like Stretch, I don’t think they’ve come out and said one way or another if it’s going to be Standard or Enterprise, or which features are only available in which edition.

Jason: There are certain features that you can almost say with 98 percent certainty are going to be Enterprise level. Compression is one they’ve been talking about for a while; they may drop it down, kind of like what they’re doing with Availability Groups. They’re making it so that a limited version of Availability Groups is available in Standard Edition.

Carlos: Kind of to replace the mirroring idea?

Jason: Yeah. It seriously makes sense.

Carlos: Oh it does, it’s great. If you’re going to deprecate that feature you’ve got to give them something else to use, because you want to have a DR strategy even for the smaller shops. Very good.

Jason: Other questions?

Carlos: No, thank you for this compression discussion. Let’s do SQL Family.

Jason: Okay. You’re not pumpkinized yet?

Carlos: [laughing] No, I haven’t turned into a pumpkin just yet.

Jason: [laughing] That better make it onto the show.

Carlos: We’ll see. So one of the things I always like to talk to other folks about is the way they work. What is your favorite SQL tool? It can be a free tool or a paid tool. Why do you like it, and how do you use it?

Jason: Oh, that’s easy. Extended Events.

Carlos: Ah! You’ve been talking quite a bit about that. So why is Extended Events your favorite tool?

Jason: Well, Extended Events basically is the bomb. [laughing] If you want to do pretty much anything, from monitoring your server to blowing up your server, Extended Events can do it for you. And don’t quote me on that; I’m not going to show you how to blow up your server with Extended Events, but we all know we can do it with Profiler, and there are some special ways to do it with Extended Events. Any time you do anything really stupidly, right, you can blow up a server. In all seriousness, it gives you some fantastic insight into the database engine, into the optimizer, into SQL Server, and allows you to make much more educated decisions about what you’re doing in your job.
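
As a small taste of what Jason means, here is a minimal Extended Events session sketch; the event, threshold, and target are illustrative choices, not his setup:

```sql
-- Capture statements that run longer than one second.
CREATE EVENT SESSION LongRunningQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE duration > 1000000)   -- duration is in microseconds
ADD TARGET package0.event_file
    (SET filename = N'LongRunningQueries');

ALTER EVENT SESSION LongRunningQueries ON SERVER STATE = START;
```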

Carlos: So, you’ve just inherited one million dollars. What are you going to do with it?

Jason: Don’t ask me what I’m going to do with it. It doesn’t matter if I inherit it or not, it’s not mine to spend! [laughing]

Carlos: [laughing] Oh, should I be asking your wife?

Jason: [laughing] Ask my kids.

Carlos: Yes, that is one thing that you and I share, is lots of kids.

Jason: Yeah, we all know that it’s not really for yourself that you make money. You make the money for the kids so that they can go spend it. Might as well hand the paycheck over to the kids. Um, one million dollars. It’s actually a rather easy decision, and boring. Basically, it goes first into Uncle Sam’s hands.

Carlos: Yeah, there is that.

Jason: 50 percent of it is inheritance tax. So that leaves me with 500 thousand dollars, which basically pays off all my debt and gives me a good chunk of money to put into some sort of savings that would then be inherited by the children, for their college or for missions, weddings, houses. Basically, it goes to them.

Carlos: Very good. So you’ve been around a few times. You’re a consultant now. What’s the best piece of career advice you’ve ever received?

Jason: Best career advice I have ever received? That’s actually kind of a difficult one. I think it really just boils down to “Just do it.” Not specifically career advice, but lifetime advice. Don’t wait, just do it. Just get up and do it, if there’s something that you would be, I don’t want to say proud about, because it’s different than pride, but something that can be honorable.

Carlos: I saw a quote the other day that said, “Failure is not trying something and getting it wrong. Failure is not starting.”

Jason: Exactly. You know, it’s scary, to be honest, as a consultant, because I’ve got a family. That’s really the hardest part. And I think it was Stacia Varga, and she’s not the first person who said “just do it,” because I heard that in church talks a long, long time ago.

Carlos: There’s also that multi-billion dollar company in Oregon that’s been saying that for a while.

Jason: I’m not even thinking of that one, no. Besides, I’m an Asics man. [laughing] So, what Stacia said is that besides just doing it, you have to believe in and trust yourself. You know you can do it, and you know you have the skills; now just trust yourself. It was good, because she’s been a consultant for a very long time, and we’ve lived and worked together in the Las Vegas user group area and had some chats. And it actually made a lot of sense. It resonated, right? You just sit down and you do it and trust in yourself. When you trust in yourself and you do a good job, the work just keeps on coming. Sure, it might be a little slower sometimes than others, but a lot of times it just keeps coming. And seriously, the same is true of being a full-time employee. As a full-time employee, the work never ends. The more you do a job, and the more you do it well, the more gets entrusted to you by your boss or your employers. So do something, and do something that will be honorable in your eyes and honorable to the company. It doesn’t matter what the company is.

Carlos: Good stuff. Jason, our last question for the day. If you could have one superhero power for the day, what would it be and why would you want it?

Jason: [laughing] Would I one day be smart?

Carlos: I’m not even sure a superhero power could do that for you, buddy. [laughing]

Jason: [laughing] You know, a really cool superhero power some days would be replication.

Carlos: Replication? So kind of like duplicate yourself to be in two places at the same time?

Jason: You’ve seen that show Multiplicity, right?

Carlos: I don’t know if I remember that one.

Jason: Go look it up. I think it’s Multiplicity, where he basically duplicates himself time and time again. So I could become much more efficient, and should I forget something, maybe one of my replicants could remind me. The problem with that, and now we’re getting geeky, is you could get split-brain syndrome, much like mirroring could have in SQL Server. And we don’t want a split brain.

Carlos: There’s a potential downside there.

Jason: So, Multiplicity is with Michael Keaton. Go look it up. It’s from the mid-nineties.

Carlos: Well Jason, thanks so much for taking the time to chat with us today.

Jason: Yes, thank you.
