Compañeros! Welcome back to the SQL Trail. Have you ever wondered what a LUN is? Have you given some thought to getting more into new storage options, but aren’t sure where to start? You’re in luck, because in Episode 53 Steve Stedman and I chat with THE Argenis Fernandez, storage guru and PASS Director-at-Large. We talk storage options, LUNs, IOPS, and why he thinks we all need to approach storage a lot differently. Steve and I learn something new from him every time we talk to him, and this time is no exception.

This episode is sponsored by

COZYROC

Episode 53 Quote:

“We’ve been avoiding the topic of storage for a long time. And you know what? We actually have ITIL to blame. Do you know it’s ITIL that actually started splitting responsibilities between operations and engineering? And then even within operations there will be people in charge of certain things. So isolation in environments and segmentation of duties in companies, that’s actually brought us to where we are today.” – Argenis Fernandez

Listen to learn…

  • Why Argenis thinks that drive letters are outdated… and what you should use instead
  • How ITIL ruins storage in most companies
  • The “Argenis definition” of a LUN, IOPS, and more…
  • The information Argenis thinks all DBAs should know
  • The difference between LUNs, mount points, and data stores
  • How to partition your data files and logs in physical storage vs. in virtualized environments
  • How IOPS fits into the SQL Server storage environment
  • What you should worry about instead of IOPS
  • The one thing he’d change about SQL Server

Join the conversation by leaving a comment below about your experiences with storage. And if this podcast is helpful to you, please leave a review on iTunes so others can find it.

About Argenis Fernandez
Argenis is a Solutions Architect with Pure Storage, a Microsoft SQL Server MVP, a VMware vExpert, and a well-known face in the #SQLFamily community, especially since he’s now a Director-at-Large of PASS. He’s a regular speaker at PASS events, including PASS Summit. He also founded the PASS Security Virtual Chapter.

Resources
Follow Argenis on Twitter
Argenis’ blog
LUN Storage and its role in SAN Management
Storage performance: IOPS, latency and throughput
PASS Security Virtual Chapter
What is SQL Server IO Block Size?
Mount Points in SQL Server
What is the CXPACKET Wait Type, and How Do You Reduce It?
Episode 50: SQL Server Settings

Transcript

Carlos: So Argenis, welcome to the program!

Argenis: Thank you sir, thank you for having me.

Carlos: Yes, thanks for coming on. Ultimately today, we’re talking about storage. I think even if it’s something that’s not near and dear to an administrator’s heart yet, it will be once they start digging into what’s going on with their SQL Server environment.

Argenis: Yes, so it’s actually something that I wonder, actually wondered for a long time: why isn’t storage part of a DBA’s curriculum, right? We’ve been avoiding the topic of storage for a long time. And you know what? We actually have ITIL to blame. Do you know it’s ITIL that actually started splitting responsibilities between operations and engineering? And then even within operations there would be people in charge of certain things, blah blah blah. So isolation in environments and segmentation of duties in companies, that’s actually brought us to where we are today. So it’s kind of a hard problem right now, because you have the storage silos where people only know storage and they don’t know much about databases, and all they see is workload coming in. And on the other hand you have DBAs who are sending that workload out to storage and they only care about the RAID group or the RAID level that they’re using on that storage. And isn’t that a complete disconnect? This is where the problems arise, right?

Carlos: I think with each of those, depending on your environment particularly, you can have different needs for what your storage has to do for you.

Argenis: Oh yeah, so, I mean the good DBAs will be asking not just for a one-terabyte LUN at RAID 10, they will also say things like, “I want this LUN to be carved more for IOPS and this one to be carved more for bandwidth.” And so low latency is a thing and high throughput is a thing, and they might not be the same thing on every single volume, depending on how you configure your storage device anyway. But the world is changing really fast and I will tell you, there are a lot of things going on that are going to make a lot of those concepts go away, basically.

Carlos: So before we get into that, I think it would be helpful just to cover some basics and review some of this terminology. You threw out the word LUN. And IOPS. What is your definition? What is the Argenis definition of a LUN?

Argenis: So a LUN is a logical unit number. It’s basically a concept from SCSI, the SCSI protocol, right? For accessing storage devices. And so basically what it means is a target. You’re defining a storage target in your SCSI chain. You can have multiple devices on a single SCSI chain and the LUN will basically be the ID for any given target.

Carlos: Oh, okay.

Argenis: The way we see these on the database side is just drives. Actual, physical disks on Windows.

Carlos: Right, with a letter.

Argenis: So that’s actually something that I’m actively trying to change with people, in terms of the letters. They should think of each and every one in terms of mount points, right? You should really have a minimum number of drive letters on your server. I mean come on, it’s 2016. Why would you be using a drive letter for everything? So what I tell folks is, you should have your C drive. Although I still don’t understand how come 2016 is still using the C drive. And the other drive letter that you have should basically be an anchor: a very small volume that acts as a mount point for everything else. So you really do not need to have any other drive letters in your system. That actually does away with a lot of, you know, corporate standards and things like that. But it’s for the better, right? I mean really, how many drive letters can you actually fit into a system? 23? It’s not 26, because C is reserved, B is reserved, A is reserved. You can only use 23 of them. Um, so if you have more than 23 LUNs, what do you do? I get people thinking about those things and, you know, I basically tell them, “Catch up with the times! Get with the program. It’s 2016, you shouldn’t be using drive letters.”
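
To make the mount-point idea concrete, here is a minimal sketch of a database laid out on mount points instead of drive letters. It assumes the extra LUNs have already been mounted as empty NTFS folders under a small anchor volume using Disk Management or diskpart; the M:\ paths, file names, and database name are hypothetical. SQL Server simply treats the mount points as ordinary folder paths.

```sql
-- Hypothetical layout: M:\ is a tiny "anchor" volume; each LUN is mounted
-- as a folder beneath it (M:\Data01, M:\Log01) instead of taking a letter.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_data, FILENAME = 'M:\Data01\Sales.mdf')
LOG ON
    (NAME = Sales_log,  FILENAME = 'M:\Log01\Sales_log.ldf');
```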

Carlos: Sure.

Steve: Okay, so with that then, one of the things with SQL Server, typically in the past, is that you’d put your data files on one drive letter and your logs on another drive letter, and your tempDB and your backups off in different places as well. How does that all apply then when we’re talking about LUNs and not using so many drive letters?

Argenis: Well, each LUN could be a different mount point, and then you would be placing your logs into one mount point and then your tempDB into another and then your data files into another, and so forth and so on. But there’s actually a new school of thought where, you know, with the advent of all-flash arrays and things like that, the number of volumes shouldn’t be in the twenties. It should be in the tens. And it also depends on the throughput of your entire system. But I’ll give you an example. At my previous job we had a single 25 terabyte LUN that contained all database files and transaction logs sitting on the same LUN. I had a separate LUN for backups and a separate LUN for tempDB, but that’s because we were running on an all-flash array. And we were actually not seeing any issues from running that way. However, there’s another school of thought that says, “Yeah, even on a physical server you should split your logs and data files because you may run into queuing issues.” I personally have not run into those. There are some that say that under very high load systems you might run into those issues, which comes down to basically a maximum of 256 queued IOs per LUN. And that might actually affect you in very high performance environments. I have personally not run into those.

Something I tell my customers in particular is to go ahead and create fewer LUNs when it comes to physical servers, but when you’re talking about virtualized environments this is a completely different conversation, and you should definitely split your data files, your transaction logs, your tempDB and your backups onto multiple virtually managed devices. And those virtually managed devices don’t have to all be virtualized storage, VMDKs in the case of VMware or VHDXs in the case of Hyper-V. They can be RDMs or pass-through disks. But you should have more of those in a virtualized environment. And so that’s kind of interesting, right?

In the physical world we’re targeting fewer volumes, fewer LUNs, but in the virtual world we want more.
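
If you do decide to split things out that way, moving an existing file onto its own mount point or virtual disk is mostly a metadata change. Here is a rough sketch for tempDB; the M:\TempDB01 path is hypothetical, the logical file names assume the defaults, and tempDB is re-created at the new location on the next service restart.

```sql
-- Point tempDB's files at a dedicated mount point / virtual disk.
-- Takes effect the next time the SQL Server service restarts.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'M:\TempDB01\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'M:\TempDB01\templog.ldf');
```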

Carlos: So those mount points. We sort of talked about mount points as being a location for our disks, right? So does that equate to the same thing on the virtual side? I have these virtual disks, right? The terminology might be different but it’s still ultimately a mount point. True or false?

Argenis: So it’s not necessarily a mount point. What we call it on VMware is a data store. A data store will be a volume or a LUN on your SAN. On top of that data store we would go ahead and create multiple VMDKs. And so these VMDKs will be your actual virtual disks. And so to get that performance, what you would have on any given virtualized environment is just a ton of data stores spread across multiple aggregates or whatever on the SAN, RAID groups or what have you, to give you performance on those. And on top of that you would go ahead and create your VMDKs that would actually end up containing your file systems for your database files, your tempDB, your transaction log, et cetera, et cetera.

Carlos: Sure. So that is the idea of striping in the virtual environment, right?

Argenis: So striping doesn’t necessarily mean creating more data stores. Yeah, you definitely can create more data stores. You cannot have a VMDK split across multiple data stores. You could have a volume split across multiple VMDKs, but that’s different. And so you can have multiple files for a VMDK sitting on different data stores, but there will be all sorts of issues from that. The thing you don’t want to do in the end is rely on Windows striping for managing performance on those volumes, because in the end you’re basically letting the operating system do these things for you. And that’s basically a function that should be relegated to hardware as much as possible.

Carlos: It’s going to be much faster there than in the software.

Argenis: Definitely. The other concept for a data store, in the case of Hyper-V, is a CSV, or cluster shared volume. The logical unit number would basically map to that in the Hyper-V world.

Steve: Alright, so when we talk about performance and how much throughput we can get from storage, one of the things that generally comes up is the term “IOPS”. And I know that’s one of those where there are lots of different ways of measuring it. Can you maybe talk a little bit about how that applies to storage in the SQL Server environment?

Argenis: Yeah, absolutely. So IOPS stands for “IOs per second”: input and output operations per second. So imagine when you’re doing something as simple as writing a single record to a table. You have to write to the transaction log for that, and then obviously there’s an asynchronous process that takes that record you just wrote to the transaction log and writes it to the data file, right? That process of writing to the transaction log is actually an IO, because there’s no caching of IOs performed against the transaction log, and this is very important on a SQL Server compliant system. You cannot have any caching of IOs performed against the transaction log. Write IOs, that is. Read IOs against the transaction log are a different story.

Carlos: Sure.

Argenis: But when you’re writing to the transaction log, that IO has to be performed with write-through all the way. No caching, no software caching, no hardware caching. It actually has to hit stable media. So…

Carlos: That’s how it’s going to do the roll-back, right? That’s why it’s so important, because that’s your transaction information.

Argenis: That’s right. It maintains the ACID properties of your database, right? It keeps it transactionally consistent. So anyway, IOPS you can basically measure as, “How many IOs are you performing in a given period of time?” So calculate those per second, et cetera. There is a school of thought that cared a lot about IOPS. I want to say that that school of thought is losing relevance nowadays. The reason for that is that what you care about, really, when you perform an IO is latency. I want a very performant IO. I want that IO to be as fast as possible. And that is my regular description of a general IO: I just want that IO to go as fast as possible. Which means the entire round trip for that IO to complete should be short, which means the latency for that IO should be small.
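
One rough way to see how much of that per-IO latency an instance is actually paying on its write-through log flushes is the WRITELOG wait type. A minimal sketch follows; wait statistics are cumulative since the last restart or stats clear, so sample twice and compare for a current picture.

```sql
-- Average time sessions spend waiting on transaction log flushes.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms * 1.0 / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';
```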

However, there are situations where we don’t care about IOPS, like, “How fast am I performing that IO right now?” You care about throughput. For example, when you’re performing an index scan or a partition scan, you’re bringing a whole lot of pages into that buffer pool. What do you care about? How many IOPS you’re pushing into your storage device? No. You care about how many gigabytes per second you’re pushing, and hopefully in your SAN it’s measured in gigabytes per second, because if it’s not, I think it’s time for you to start looking for a new SAN.

Carlos: I think the point there being, the disk, while important, is not the finish line. The finish line is that the buffer pool has to be filled.

Argenis: I mean, of course, and in the case of a backup, for example, we’re not even talking about buffer pool concerns, because the pages that get read for a backup bypass the buffer pool. But they are actually kept in buffers, and those buffers get allocated and de-allocated and blah blah blah. It’s not about filling out the buffer pool, it’s about maintaining a level of throughput in the entire system when you’re reading from a bunch of database files and you’re writing to a bunch of database backup files. Like, that’s what you care about in the end for that particular operation. But when you’re doing an index scan or a partition scan, then you care about bringing those pages into memory as quickly as possible. Or if you’re performing a data load, right? If you’re performing a bulk insert or things like that, you care about throughput, where you’re writing a whole lot of data directly into the data files. Obviously, there are some markers that get placed into the transaction log, et cetera. But you care about throughput in those cases; you really don’t care about IOPS. However, when you are in a very transactional system and a very transactional environment, for example a banking institution, a financial services or insurance company, things like that, typically you’re going to find a lot more transactionality in your workload than you would, you know, otherwise. It’s not going to be more of a reporting workload, and in that case you probably care about IOPS. How many IOPS do you perform in a given period of time? If I have, I don’t know, a thousand clients hitting my database at any point in time concurrently, then my IO subsystem had better be able to handle those thousand customers and then some. So that’s also a measure of performance in a system, but it’s not the only measure and it’s not the only thing you should care about. As a matter of fact, I tend to tell folks, “You probably want to care about a system that can give you low latency and high bandwidth at any point in time.” And so, the number of IOPS that it can perform might actually be reduced as a function of having less latency and a lot more bandwidth. So do IOPS matter? It’s really more of a religious question nowadays, actually.
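
As a way of putting numbers on that latency-versus-throughput distinction, the per-file virtual file stats DMV breaks out both for every data and log file. A minimal sketch; the counters are cumulative since the last restart, so compare two samples for current rates.

```sql
-- Per-file IO counts, throughput, and average latency for the whole instance.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads + vfs.num_of_writes AS total_ios,
       (vfs.num_of_bytes_read + vfs.num_of_bytes_written) / 1048576.0 AS total_mb,
       vfs.io_stall_read_ms  * 1.0 / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms * 1.0 / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_write_latency_ms DESC;
```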

Carlos: So what advice do you have? You mentioned the difference, or the silos if you will: we have DBAs over here and the SAN folks over there. IOPS has been one of the things that we’ve been asked for, or we’re trying to find out and give to them. Like, “Hey guys, I’m expecting this many, I’m not seeing that.” How do you see that conversation evolving then?

Argenis: I don’t think that IOPS are a good measure of even remotely anything nowadays, quite honestly, because I will give you this example. If you have a system and it’s got a median latency of 10 milliseconds, let’s just pick a number, and you have a given amount of IOPS that you can perform on a target device, the moment that you switch to a device that can give you under one millisecond of latency, I guarantee you that you’re not going to need that same amount of IOPS.
