Are you giving yourself a HEAP of trouble by using tables with no clustered indexes? But it is faster you say . . . because you don’t have to worry about the sorting when inserting. Our guest, Rick Lowe, suggests everything is fast until you decide to SELECT, UPDATE, or DELETE. We talk about why you may or may not want to use HEAP tables. I know I was surprised to get his take on the use of HEAP tables and I think you will be, too. Do you agree with our assessment? Let us know in the comments below.
“A heap is a table that doesn’t have a clustered index. What that really means is a table that doesn’t have a primary key. There are times when the primary key and clustered index could be different, but more often than not, they’re the same.”
“The case I try to make when I talk about them is that there might not be as many use cases for heaps as we assume. The fact that they’re great for write-only workloads doesn’t mean that they’re great for workloads where you mostly write and sometimes do other things.”
“If it’s a situation where the table is going to just kind of be loaded and then just radically, completely and totally transformed and destroyed and written somewhere else, that could possibly be a use case for a heap.”
Listen to Learn
01:09 Compañero Shout-Outs
02:03 Intro to the guest and topic
05:09 Heaps are great if your workload is write-only
06:15 Deadlocks happen & Definition of “small table”
07:35 Heaps with ETL vs ELT
11:53 The SELECT INTO use case
12:58 Are we just skirting around heaps when we’re using an identity column?
14:19 Scenarios where you might use a heap
15:48 GUID as your primary key?
16:36 What about hot page scenarios?
19:25 It’s an awful idea to create a clustered index and then drop it
22:18 A heap may be better than a really poorly designed clustered index
24:32 It is okay to just stay away from heaps, especially if you’re new to execution plans
28:11 SQL Family Questions
31:22 Closing Thoughts
Rick is a Microsoft Certified Master with more than 15 years of experience working with SQL Server. He currently works as an independent performance DBA/Developer for clients across the USA and Canada. He started his career as a database developer, but over time became more and more interested in how the database engine operated … eventually becoming more focused on operational and performance issues than code.
Rick will work with all things relational, but most enjoys helping smaller companies get better performance from MSSQL, as well as smoothing over relationships between DBA and development teams.
Rick’s blog: https://dataflowe.wordpress.com/
Carlos: Compañeros, welcome to another edition of the SQL Data Partners Podcast. I am Carlos L Chacon, your host, and am joined today by Rick Lowe.
Carlos: And Eugene and Kevin are also here.
Carlos: So, we have a motley crew with you today. And our topic, we’re talking about heaps and we’ll get into the difference between what is a heap and how does that differ from other tables that you may have worked with. Before we get there, we do have a couple of Shout-outs we want to give. We want to give a shout-out to Peter from Australia. Select Top, so when you use your twitter names and you don’t include your real name there, that’s what I have to go with, so Select Top from Dumfriesshire, and I probably slaughtered that name, so I’ve offended everyone in Scotland, now. But he gave a shout-out on the episode on wait stats. Preston from Atlanta, gave a little love to Episode 131 on big data. And we also want to give a shout-out to Jason Vondersmith of Richmond, Virginia.
As everyone knows, a little shout-out to the SQL Trail conference that we’re putting together October 10th through the 12th in Richmond, Virginia. Find out more information at sqltrail.com. We hope to have you. Kevin is going to be with us again this year, so that will be a lot of fun.
And our show notes for today’s episode are going to be at sqldatapartners.com/heap. Now Rick, I guess I’m going to defer to you. Should it be heap, or should it be heaps, with an s, plural?
Rick: I would just go with heap, singular.
Carlos: Heap, okay. Heap, no s, or sqldatapartners.com/142. Okay, and so with that, let’s welcome Rick and get into our conversation. So Rick, thanks for joining us.
Rick: Thanks for having me.
Carlos: It is nice. You are in Colorado, and I should probably start keeping track of where our guests are from. But I feel like we don’t get enough folks from Colorado, so thanks for chatting with us today.
Rick: Well, then, it’s fortunate you caught me when you did. We’re in Colorado for maybe another 5 or 6 months and then heading elsewhere.
Carlos: Oh, is that right, okay. So is that a known destination just yet, or breaking news on the podcast?
Rick: Strangely no. Yeah, we know that we’re going to move some place with a lower cost of living, essentially, but yeah, haven’t completely zeroed in on a destination. Could be Washington State, could be Maine, could be somewhere in between.
Carlos: Wow, interesting. So only because my wife is from Washington State and you say lower cost of living, makes me go, hm, Colorado must be pretty expensive. Ultimately our topic today is heap tables.
Carlos: First I feel like we should define this. so when we talk about a heap table, what is a heap table?
Rick: Yeah, a heap is really just a table that doesn’t have a clustered index. Of course, that’s a really, really super geeky definition. The vast majority of the time, what that really means is a table that doesn’t have a primary key. It’s just, you know, there are times when the primary key and clustered index could be different, but more often than not, they’re the same.
Carlos: Sure, at least the 80/20 rule, if not the 99 out of 100 rule.
Rick: Yeah, exactly. Now to get even geekier on it, really the reason that matters is a heap is a table that does not have a B-tree sitting on top of it.
Carlos: Okay, which is what that clustered index would give it.
Rick: Yep, exactly.
Carlos: Now, you could have other non-clustered indexes on a heap.
Rick: You certainly could.
Carlos: But we’re still talking about a heap table.
Carlos: So, one of the benefits, or one of the common ideas around heaps, and I admit that this is a thought that I’ve had is, oh, okay, now we’ve just said it could have non-clustered indexes, so we know it doesn’t necessarily “not have any indexes”. But because it doesn’t have that clustered index, we oftentimes seem to categorize it as in “the table with no indexes”.
Carlos: And therefore, because it doesn’t have any indexes, it doesn’t have copies of data, therefore, when I write to that table, it should be faster, right? And then of course, the downside is that when I need to SELECT out of it, it’s going to be slower, because I don’t have those copies to help me, or that B-tree to help me find it, right?
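[Editor’s note: as an illustrative sketch of the definition discussed above — all table and column names here are hypothetical — the only structural difference between a heap and a clustered table is the clustered index:]

```sql
-- A heap: no PRIMARY KEY, no clustered index.
CREATE TABLE dbo.EventsHeap (
    EventId   int          NOT NULL,
    EventText varchar(200) NOT NULL
);

-- The same table as a clustered table.
CREATE TABLE dbo.EventsClustered (
    EventId   int          NOT NULL,
    EventText varchar(200) NOT NULL,
    CONSTRAINT PK_EventsClustered PRIMARY KEY CLUSTERED (EventId)
);

-- sys.indexes reports index_id = 0 (HEAP) for the first table
-- and index_id = 1 (CLUSTERED) for the second.
SELECT OBJECT_NAME(object_id) AS table_name, index_id, type_desc
FROM   sys.indexes
WHERE  object_id IN (OBJECT_ID('dbo.EventsHeap'),
                     OBJECT_ID('dbo.EventsClustered'));
```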
Carlos: So I guess let’s take those two pieces one at a time. First, let’s go through the writing. When do I get benefits by having a heap, or do I?
Rick: The way I look at heaps is there are definitely use cases for them. They’re great for writing. The case I try to make when I talk about them is that there might not be as many use cases for heaps as we assume, so the writing is a great example. If we’re going to dump a wad of data into a table that’s completely unordered in one shot and never actually look at that data again, heaps are fantastic, because they are great for writes. The hitch is, the fact that they’re great for these kind of write-only workloads doesn’t mean that they’re great for workloads where you mostly write and sometimes do other things.
Carlos: Oh, okay. Meaning like, I’m writing 100,000 rows an hour, just to throw out a timeframe, but I only do a SELECT statement once or twice a day.
Rick: Yeah, and so the problem you’re going to fall into sometimes, and sometimes this is a problem, sometimes it isn’t, depends on use case. The issue you can see is when you do do that one or two SELECTs per day, you might suddenly start to see some deadlocks around the heap that you would not notice if you had a clustered index on it. The biggest issue is just the lack of organization. By definition, if you don’t have any non-clustered indexes on it, a heap doesn’t really have any ordering. It doesn’t really have any way to quickly search for data inside it. And even if we’re looking for data and happen to find it quickly, there’s no way of knowing whether or not any given piece of data is unique inside a heap. So what that all boils down to is if you’re looking for something in a heap with a SELECT statement, you kind of have to search the entire table. That’s maybe fine for smaller tables, but as the table gets larger and larger, you run into this issue called lock escalation, where the fact that you’re looking through thousands of records eventually means the entire honking table gets locked, as opposed to just the individual records you’re reading at the time.
Carlos: Now I’m quite curious. What do you define as a small table?
Rick: For the purposes of this particular discussion, I would say 5000 rows, just because the lock escalation is a big issue, here.
Carlos: Sure. And then obviously column length and the number of columns might play into that a bit, as well. But yeah, for general ballpark numbers.
Rick: I suppose what I should really say is in terms of tables where I’d be really super happy about it being a heap, yeah, certainly smaller than 5000 rows, but I’d almost rather see a heap be a really super tiny table, like a page or two, max.
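[Editor’s note: to check whether a heap really is “a page or two, max,” one quick diagnostic — the table name below is hypothetical — is to read the heap’s page and row counts from `sys.dm_db_partition_stats`:]

```sql
-- How big is this heap, in pages and rows?
SELECT OBJECT_NAME(object_id)   AS table_name,
       in_row_used_page_count   AS used_pages,
       row_count
FROM   sys.dm_db_partition_stats
WHERE  object_id = OBJECT_ID('dbo.EventsHeap')
  AND  index_id  = 0;   -- index_id 0 means this partition is a heap
```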
Carlos: One of the new features or options or approaches to integration work is not ETL, but ELT, so this idea of I’m just going to create a staging table that mimics exactly what I had and bring all the data over. Ideally there’s a lot of writes there, so my question is, is that a good scenario for heaps?
Rick: Yeah, here’s my thinking on it. If it’s a situation where the table is going to just kind of be loaded and then just radically, completely and totally transformed and destroyed and written somewhere else, that could possibly be a use case for a heap. But if a clustered index is going to be created on the table eventually, I’d argue you’re typically better off having a clustered index there to begin with. In my mind, a lot of times people will say, “well, it’s faster to insert into a heap than into a table with a clustered index.” I’d actually argue that’s not always true, but that’s another conversation; it’s kind of a tangent to what we’re talking about right now. But in my mind, the critical question is not, is it faster to insert into a heap than into a clustered index, but is it so much faster to insert into a heap that you actually have time to insert all the data and then also still create the clustered index afterwards, as opposed to just starting out with it?
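[Editor’s note: the comparison Rick poses — load-then-index versus index-then-load — can be sketched like this, with hypothetical names and `sys.objects` standing in for the real data source; wrap each option in `SET STATISTICS TIME ON` to compare total elapsed time, including the index build:]

```sql
-- Option 1: bulk-load a heap, then build the clustered index afterwards.
CREATE TABLE dbo.StagingHeap (Id int NOT NULL, Payload sysname NOT NULL);
INSERT INTO dbo.StagingHeap (Id, Payload)
SELECT object_id, name FROM sys.objects;          -- stand-in for the real load
CREATE UNIQUE CLUSTERED INDEX CX_StagingHeap ON dbo.StagingHeap (Id);

-- Option 2: create the clustered index first and load directly into it.
CREATE TABLE dbo.StagingClustered (Id int NOT NULL, Payload sysname NOT NULL);
CREATE UNIQUE CLUSTERED INDEX CX_StagingClustered ON dbo.StagingClustered (Id);
INSERT INTO dbo.StagingClustered (Id, Payload)
SELECT object_id, name FROM sys.objects;
```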
Carlos: Right, okay. Kevin?
Kevin: Yeah, I come into this as a bit of a clustered index extremist. I like clustered indexes. I like them everywhere, every time, so my bias is very heavily skewed in that direction. I can see–
Rick: Sorry to interrupt. I just want to say, full disclosure, I actually have exactly the same bias.
Carlos: Oh, okay, here we go.
Kevin: And I agree with Rick that there are cases where yeah, I could see, you put a heap here and it’s not going to hurt. It’s probably not going to make that much of a difference and a scenario where I’m just dumping stuff into a staging table and SELECTing out of it right afterwards could be a fit for that. If you’re doing a more protracted ELT operation, like for example, I’m taking data from source A, then I’m going to integrate it with sources B and C. I’m going to JOIN to tables, I’m going to pull in stuff from some other system, and then I’m going to update this staging table repeatedly and then SELECT the results out of it, that’s a case where I think you’re past the point where a heap can be helpful, and I think you’re actively harming yourself by then.
Carlos: Sure, okay. Eugene?
Eugene: Yeah, it feels to me like diminishing returns. If you’re trying to squeeze out this extra little bit of performance for basically something that’s more of an unstructured format, worrying about heaps or primary keys is probably spending dollars to pick up pennies, as my boss used to say. I really feel like the whole idea with changing ETL to ELT is that you want to capture as much data as you can and then worry about it later, so really, you’re starting to talk more about an archival kind of situation. That might be data lakes, which is a lot of times a lot of flat files, or it might be something more like a document database, CosmosDB or MongoDB or something, where you can specify an arbitrary structure or a dynamic number of columns. Even if you were to do that in SQL, maybe you’re looking at (?) value, and you’re still probably going to have a key so you can tie it back to your main table, so yeah, I don’t think that there would be a lot of value in the speed there. Something that I could see is, there’s a lot of times that just to get some test data to play around with, I’ll do a SELECT INTO and make a copy of a table, and that’s a heap. But generally, it doesn’t matter, because usually I’m the only person working with it, or it’s one or two other people. The big thing you have to be careful about, though, is remembering to actually clean it up, so a lot of times I’ll literally have the date or a “delete me after whatever” in the name, just so it doesn’t hang around, because you can easily clutter up a database that way if you’re not careful.
Carlos: Sure. Or they’re a temp table, you know, a lot of times you can– I’ve seen them that way.
Eugene: Right, yeah, no, that makes sense.
Kevin: Yeah, to show just how much of an extremist I am, I actually create my temp tables, usually I have clustered indexes on them.
Carlos: Now that is extreme.
Rick: That SELECT INTO is actually an important use case for some developers. I’d actually argue that’s part of the reason there are so many heaps around is yeah, the convenience of not having to worry about the schema.
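[Editor’s note: the `SELECT INTO` behavior Rick describes, sketched with hypothetical table names — the new table inherits columns and types, but no indexes, keys, or constraints, so it is always created as a heap:]

```sql
-- Copy a table's data and structure; the copy is a heap.
SELECT *
INTO   dbo.Orders_Copy
FROM   dbo.Orders;

-- Confirm: a single row with index_id = 0 (HEAP) and no other indexes.
SELECT index_id, type_desc
FROM   sys.indexes
WHERE  object_id = OBJECT_ID('dbo.Orders_Copy');
```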
Carlos: Gotcha. So is that basically where we’re saying, “hey guys, that’s bad design”? Don’t forget the rest of your design. You’ve come up with a subset of data that you like, but if this thing is going to live on, it needs care and feeding. Clothe it well.
Eugene: To me, it’s a data modeling thing. Because you think about it and the whole heap versus regular table, really depends on the granularity of your table and identifying one or a couple columns to be your key. Whereas, the whole, “oh, this schema’s dynamic or changing.” Well, that’s just a matter of adding more columns or changing columns. I would think the two are kind of unrelated. Even if you plan on making changes later, I would hope you have an idea of what level of normalization you have and what a single row represents in general, in which case, it shouldn’t be that hard to say, “this is going to be my primary key or my clustered index.”
Carlos: Sure. Do you think that there’s ever a time when they’re having trouble identifying that? I mean, I know it’s fairly simple, and a lot of times we see identity values, so just the, quote unquote, random incremental numbers being chosen as the primary key. Are we kind of skirting around heaps when we do that? We talk about the way the data is sorted, but I’m just throwing in an identity key, and that’s really not helping me sort the data in any other way.
Rick: Yeah, I’d say the value of the identity column, I mean, you’re right of that to some extent. We’re not thinking through our data modeling when we’re saying, yeah, just throw an identity column on it and that’ll be my key. But I still think there’s some genuine value there in future performance and future deadlock avoidance, and that might matter, even if we think the table’s only going to be around for a couple months. It might be temporary, we’re frequently wrong about that. Sometimes things that are just kind of meant to be there for a little while end up staying for years and years and years because we never get around to refactoring and doing it right. Yeah, I’d still argue for the identity column, even though it feels arbitrary if we aren’t really sure what else to make the key.
Carlos: Okay, so you’ve kind of run through my ideas. I know you’ve already laid your flag in the ground, here, but under what circumstances then might it be a good idea or plausible scenarios for us to use a heap?
Rick: One where I’ve seen heaps used, I don’t think this is a good use for them, but one place where I’ve seen them used, we’ve talked about having non-clustered indexes on heaps. I typically wince when I hear that, mostly because if you’re going to be building indexes on top of a heap anyway, why wouldn’t you just pick one to make it the clustered index? But I guess one use case where I’ve seen them is if somebody has a GUID that they really insist on using as their primary key, which, I hope I’m not going into too much detail here, but GUIDs at a high level, we don’t like seeing them as keys, typically, because it causes a lot of fragmentation issues.
Carlos: Right, because they’re random, they’re so random. There’s no order to them, so you’re going to have this, it’s almost like a compression and encryption, these two opposite forces in play, here.
Rick: Yeah, the thing that makes identity a great primary key is that it’s small and it’s ever-increasing. It doesn’t jump around and cause page splits. So sometimes I see people put those GUIDs into a heap and then build a non-clustered index on that GUID column to try to help searching performance. I’m a little sympathetic to that, but I’d still argue if you’re doing that, you’re still going to have a really, really, really horribly fragmented non-clustered index, which has (?) GUIDs in it. So the problem is, it’s not that much better, and by having the heap and also this extra non-clustered index, you’re still kind of opening yourself up to a deadlock situation that you wouldn’t have to have if you’d just built a clustered index to begin with.
Carlos: Now what of, I guess, maybe thinking through that just a little bit, let’s say I have inherited a database or I’m working in an environment where the database is 5 years old, the company has customers, and I need to change in that GUID. So being pro-clustered index, in the case where your primary key is a GUID, do you think there is a rationale for removing that primary key, I mean, is that what you were saying? Is that what you were alluding to?
Rick: When I inherit stuff like that and when I don’t have the flexibility to refactor it, I actually like making it the primary key, build a clustered index on it and just have a lower fill factor to kind of keep the fragmentation at bay.
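[Editor’s note: Rick’s approach for an inherited GUID key, sketched with hypothetical names — keep the GUID as the clustered primary key, but use a lower fill factor so random inserts cause fewer page splits between index maintenance runs:]

```sql
CREATE TABLE dbo.Customers (
    CustomerId uniqueidentifier NOT NULL
        CONSTRAINT DF_Customers_Id DEFAULT NEWID(),
    CustomerName nvarchar(100)   NOT NULL,
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerId)
        WITH (FILLFACTOR = 70)   -- leave ~30% free space on each leaf page
);
```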
Carlos: Okay, there you go.
Kevin: So what about hot page scenarios, where I have something so many different writers attempting to write and we have that incrementing key as the clustered index? At what point does it make sense to have distributed writes taking the cost of higher fragmentation?
Rick: A heap could help with that. I’d say an alternative to that would just be a partitioned table, even if you’re partitioning on something kind of arbitrary, like a modulus of your identity column. You can still get multiple hot spots in the file with mechanisms other than heaps.
Kevin: Right, but at least you distribute it out by the factor of how ever many partitions you have?
Rick: Yeah, exactly. If you say, “I don’t want a single hot spot, but I’m okay with 4 or 5 hot spots.” You could partition the table in such a way that you get that.
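[Editor’s note: one way to sketch the “4 or 5 hot spots” idea — all names hypothetical — is to partition on a persisted computed column holding the identity value modulo 4, so concurrent inserts land at the tail of four separate B-trees instead of one:]

```sql
-- Four partitions for bucket values 0, 1, 2, 3.
CREATE PARTITION FUNCTION pf_Mod4 (tinyint)
    AS RANGE LEFT FOR VALUES (0, 1, 2);

CREATE PARTITION SCHEME ps_Mod4
    AS PARTITION pf_Mod4 ALL TO ([PRIMARY]);

CREATE TABLE dbo.HotLog (
    LogId   bigint IDENTITY(1,1) NOT NULL,
    Bucket  AS CAST(LogId % 4 AS tinyint) PERSISTED NOT NULL,
    Message nvarchar(400) NOT NULL,
    -- The partitioning column must lead or appear in the clustered key.
    CONSTRAINT PK_HotLog PRIMARY KEY CLUSTERED (Bucket, LogId)
) ON ps_Mod4 (Bucket);
```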
Kevin: So, in practice, I want to throw out this question, how often do you guys see legitimate hot spots on tables that are driven by an incrementing key or by some regularly increasing value where you’re hitting that one page?
Rick: I personally don’t see it anywhere as often as I used to, with modern storage.
Kevin: Yeah, I can think of fewer than 5 scenarios that I’ve ever run into where that seems like it had been the problem.
Eugene: Yeah, anywhere that they would have that, they wouldn’t be calling me, because they’d have a full-time DBA working on the issue. I get called in whenever there’s like a Windows admin who kind of has to maintain this one server and none of the alerts are set up and they haven’t checked their backups. So I’ve never run into that issue, personally.
Kevin: Yeah, I asked the question because I wanted to throw a shout-out to the people in the audience who are raging at their podcast device of choice, because I know that there’s this hot spot problem, because I have to deal with it at work. I’m thinking that percentage of people is probably not a huge percentage of the overall market, it’s probably a fairly large percentage of the really interesting problems, but yeah I’ve dealt with hundreds of databases across several different companies over my career of however long it’s been, and rarely do I see this as an issue.
Carlos: Sure, or if there are hot spots, I would tend to imagine that a lot of them have to do more with on the SELECTing side or that balance of trying to INSERT or UPDATE and SELECT. So, in our conversation, where we are thinking about a heap, faster though those INSERTs, I mean, I’m not saying they’re never a problem, but it has not been the case for me.
Kevin: Yeah, so I guess the big advice is that if you’re experiencing hot spots, the first step is to upgrade past SQL Server 6.5, then see if you still have that hot spot. If you still do, then you’ve got real writes. Another question then might be, “what are you doing on SQL Server 6.5?” There’s a historical piece of advice that says, if I have a heap that’s fragmented, I should create a clustered index and then drop that clustered index. So Rick, what are your thoughts on that, and why is it an awful idea?
Rick: Well, yeah, the answer is in the question. To do that, you actually create a clustered index, do all the work to put this nice candidate key on it, and then you drop it and restructure it back into a heap. My usual advice is to stop at the halfway mark. You’ve already created a clustered index on the heap, so just go ahead and leave it as a clustered table, ideally. I guess, being a little bit less tongue-in-cheek about it, those cases where you find yourself really, seriously wanting to defragment a heap are probably cases where you shouldn’t be using a heap. That’s another data point that the table needs a key of some kind. And of course, trivia time: there is actually a command, ALTER TABLE … REBUILD, I believe, that makes it look like you’re reorganizing a heap. As I understand it, behind the scenes it’s actually doing exactly what you say, creating a clustered index on the heap and then dropping it.
Kevin: Yeah, I agree, and I think that the scenario that I can think of where maybe it fits is if you have a log table where the only time you ever really read from the log table is that one day every three years that the auditor comes in and demands it. And at that point, yeah, okay, maybe I rebuild the heap every once in a while, using the ALTER INDEX REBUILD on the heap or ALTER TABLE REBUILD. But otherwise, I’m not going to worry too much about it. Even in that case, actually with the log table, I’d probably still create a clustered index on the date or on some identity column and take advantage of the fact that probably it’s a monotonically increasing with respect to the log insertion date and I got to drop in the term “monotonically increasing” in this podcast, so if that was on your bingo card, fill that spot in.
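[Editor’s note: the two options Kevin mentions, sketched with hypothetical names — rebuilding the heap in place (SQL Server 2008 and later), versus just clustering the log table on an ever-increasing key and not worrying about heap fragmentation at all:]

```sql
-- Option A: rebuild the heap in place, fixing forwarded records.
ALTER TABLE dbo.AuditLog REBUILD;

-- Option B (the usual better answer): cluster on a monotonically
-- increasing column so fragmentation stops being a concern.
CREATE CLUSTERED INDEX CX_AuditLog
    ON dbo.AuditLog (LoggedAt, AuditLogId);
```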
Rick: I meant to say monotonically increasing, earlier and just spaced it.
Carlos: Yeah, they beat it to you and you’re making the guest look bad, Kevin, come on.
Kevin: Isn’t that my job? I’m kind of the person of the podcast who, I’m the Howard Cosell of the podcast.
Rick: But in all seriousness, if somebody finds their heap is getting fragmented because they’re inserting and deleting data, the other characteristic of heaps is that space from rows that got deleted do get reused eventually, so to some extent, if you’re wanting to defragment a heap, you’re probably wasting your time anyway, because it’s already not really ordered and the space that’s been freed up will get reused. Yeah, you have the forwarded records to worry about a little bit, but if you’re updating data in a heap, again, you probably shouldn’t be using a heap.
Kevin: Okay, so are there cases where a heap might be better than a really poorly designed clustered index? So let’s say I did a clustered index on, like, a bit column, and now I have the uniquifier going crazy and I’m adding a four-byte uniquifier, and I guess if I have more than 2 billion true or false values, then I’m going to run into a different issue. But could you see a scenario where I’m just doing it so wrong that it might be better just to have it be a heap?
Rick: I would say a heap is better than a non-unique clustered index, for sure, because of the uniquifier problem you mentioned and that whole error 666 issue, which is a real SQL Server error. But I’m probably having a failure of imagination here. I’m having a hard time visualizing a scenario where you wouldn’t be better off just slapping an identity column on the table, unless you’re expecting so much data that you have an int overflow.
Carlos: Sure. If you were going to make a change, that’s kind of the question, right? If you’re going to make that change, what would you do? Would a heap be the best answer? It sounds like there are some other options.
Rick: Yeah, I suppose my go-to solution for when you have a heap and you don’t know what else to do with it, if there’s not an obvious key is still clustered index on an identity column.
Kevin: That’s a good go-to. It’s a good default starting point if you don’t have anything better. Frankly, oftentimes, if you do have something better.
Rick: Yeah, I don’t know that this is the best choice. I did recently have one of these log type tables that you mentioned, Kevin, where the client had a non-unique clustered index. I probably didn’t really especially have to worry about running out of uniquifiers, but just because you can’t really easily inspect the uniquifiers, I was concerned about it. So I think I ended up having a three-part clustered index on a date column, a time column, and also an identity that was thrown just as an explicit uniquifier. I don’t love composite primary keys, but it felt like a better solution at the time than a non-unique clustered index.
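[Editor’s note: a sketch of the three-part key Rick describes, with hypothetical names — the identity column on the end acts as an explicit, inspectable uniquifier, so the clustered index can be declared UNIQUE instead of relying on the hidden one:]

```sql
CREATE TABLE dbo.AppLog (
    LogDate date                 NOT NULL,
    LogTime time(3)              NOT NULL,
    LogId   bigint IDENTITY(1,1) NOT NULL,  -- explicit uniquifier
    Message nvarchar(max)        NULL
);

CREATE UNIQUE CLUSTERED INDEX CX_AppLog
    ON dbo.AppLog (LogDate, LogTime, LogId);
```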
Kevin: Fair enough.
Carlos: Okay, well, there you go. So last thoughts? Last call?
Kevin: Let Eugene go first, please because I have THE last question.
Eugene: Probably involves food or old movies. Yeah, so my question is, I present on just basic execution plans, and usually I give very broad advice that, unless you’re dealing with a table that’s 50 rows, any time you see RID lookups, or whatever it is, against a heap, it’s just bad. Heaps are bad, you’ve got a problem, stay away from heaps. And when you’re dealing with people who are just learning about execution plans, so clearly entry-level people, is it reasonable to make that broad brush of advice? Because obviously we talked about a couple of cases where it makes sense, so do you think it’s fine to kind of over-simplify things and say, “if you’ve got heaps, you’ve probably got problems”?
Rick: I actually like that advice. I think the way I look at it, so you said in the context of execution plans, especially in that context, the thing I like about it is, if you’re looking at an execution plan, you either know there’s a problem, or you suspect there’s a problem. Yeah, kind of my rule of thumb is that yeah, if it’s actually sitting on my desk and I see any indication that a heap’s involved, the heap’s usually got to go.
Eugene: Got it.
Rick: Yeah, I actually like that advice.
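[Editor’s note: a common diagnostic pattern (not from the episode itself) that pairs with this rule of thumb — inventory every user-table heap in the current database, largest first, so you know where that advice applies:]

```sql
SELECT s.name AS schema_name,
       t.name AS table_name,
       p.rows
FROM   sys.tables     t
JOIN   sys.schemas    s ON s.schema_id = t.schema_id
JOIN   sys.indexes    i ON i.object_id = t.object_id
                       AND i.index_id  = 0        -- 0 = heap
JOIN   sys.partitions p ON p.object_id = t.object_id
                       AND p.index_id  = 0
ORDER  BY p.rows DESC;
```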
Kevin: Yeah, and I would follow it up and say that there are times when you give the not-quite-always-correct-but-mostly-correct rule of thumb and then as somebody learns more about the topic, you can come back to it later and say, “oh by the way, here’s the nuance that we missed out earlier on.” So, it’s like saying, “can you turn left at a red light or at a stop sign without stopping?” We’re like, “oh, no, you can’t turn left at this stop here on a red light.” Later on, you come back and say, “well, in certain states, you actually are allowed to turn left on red from a one-way street into a one-way street.” But that scenario is so infrequent that I love being at that one street where I can do it, but it’s not something I’m going to waste a lot of time on when I’m teaching people the basics of driving. Just start with the general rule that says, “no, you can’t do that.” And then later on, come in and explain the nuance. I think that’s better than just trying to have the 1200-page dissertation, because I can’t possibly not explain everything and all of its assumptions and all of those underlying assumptions.
Eugene: After the podcast, you have to tell me what states you can do that in, because I didn’t know that you could do that.
Kevin: Ohio is actually one of those states.
Eugene: Oh nice, not too far.
Kevin: Yep, so you can drive to Columbus and it’s only like a 3 or 4-hour drive for you and then you can find that street in downtown where you can take that left turn on red.
Eugene: All right.
Carlos: So I don’t know if it’s legal or not, but they do, I mean I’ve done it in Richmond.
Kevin: Carlos admits to potentially breaking the law on podcast.
Carlos: I’m like, well, so hey, if it’s available, I’m like, eh, let’s do it. Okay, last question.
Kevin: The last question, Rick? Which would be the better title for this podcast, A Heap of Trouble or A Heap of Hurting?
Rick: I would say A Heap of Hurting.
Carlos: There you go, so now you make me change it. That’s the third change and only because I can’t do exactly what you said, it’s going to be called, “Do you have a heap of change?” What did you call it?
Kevin: Heap of Hurting.
Carlos: Heap of Hurting. There we go.
Rick: Sounds like an artifact from a D&D game.
Kevin: Or Professional Wrestling. I can see the Macho Man Randy Savage saying this. Carlos, when are you going to get him on the podcast?
Carlos: Yeah, I’m working on it. My people are working on it. Okay, should we do SQL Family?
Carlos: Okay, so Rick, these questions are for you. Your all-time favorite movie?
Rick: I’m pretty old-school. I’d say Ferris Bueller’s Day Off.
Carlos: There you go. I was going to say, not old enough for Kevin, but that’ll work.
Kevin: That’s alright. I did like Ben Stein’s role in that film.
Carlos: Yeah, that’s right.
Kevin: I thought he played it very well.
Eugene: I got to watch that movie in Sociology class, actually. Why it was related, I’m not entirely sure.
Carlos: Oh boy. City or place you’d most like to visit?
Rick: Oh, shoot, I forgot to think about this. I know, I shouldn’t have to think about it. I should just want to go. Oh heck, how about Melbourne, Australia.
Carlos: There you go. Food that reminds you of your childhood?
Rick: Pizza, which I still eat constantly, to this day.
Carlos: There you go. Any certain toppings or is this like a cheese pizza?
Rick: So if I’m trying to remind myself of childhood, I’d say just pepperoni, but yeah. I’ve moved way past that now, as an adult.
Kevin: Do you use a fork and knife to eat pizza?
Carlos: Ah, that’s a critical question.
Rick: Only when necessary.
Carlos: Or when the other people at the table are using forks.
Kevin: Avant garde, Carlos, you’ve got to be avant garde.
Rick: If I can pick it up, I do, but yeah, sometimes it’s just too loaded.
Carlos: What about Chicago-style?
Kevin: So funny thing is, when I would make Chicago-style pizza, it may not have been totally Chicago-style, I’d still eat it with my hands. I could still pick it up and lift it, eat it like a fun slice of pie.
Carlos: Okay, so now Rick, how did you first get started with SQL Server?
Rick: Originally, I got involved with databases just because I was at a new job, I wasn’t very busy yet and someone came by my office and said, “hey, do you have time for an Oracle install?” You know, cause that’s not hard.
Rick: So, I did well enough with that. I kind of got to be their database guy and ended up having to do Oracle and also SQL Server. Sybase SQL Server at the time, mostly in DEC Unix and MySQL, kind of all at once. Yeah, just over time I just decided I really liked SQL Server and drifted over to Microsoft SQL Server as a specialty.
Carlos: Now, if there was one thing you could change about SQL Server, what would it be?
Rick: I would turn on read committed snapshot isolation by default.
Carlos: Okay, there you go.
Rick: I still think that’s one of the few things Oracle got right.
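[Editor’s note: the change Rick wishes were the default, sketched against a hypothetical database name — RCSI needs exclusive access to the database, hence the rollback clause:]

```sql
ALTER DATABASE MyAppDb
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;   -- kick out other sessions so the change can apply

-- Verify the setting took effect.
SELECT name, is_read_committed_snapshot_on
FROM   sys.databases
WHERE  name = 'MyAppDb';
```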
Carlos: Yeah, okay, interesting. Now, the best piece of career advice that you have received?
Rick: Probably, and this wasn’t even really career advice. I think someone said it when I was still in school. In real life, when the tests come up, you open the books.
Carlos: Ah, there you go.
Rick: Yeah, it was a teacher trying to explain why he mostly gave open book exams, but I still find it’s appropriate today. Use your resources.
Carlos: Yeah, interesting. I like that. I like that, yeah. Our last question for you, see Kevin, you don’t actually get the last question. I get the last question. If you could have one superhero power what would it be and why do you want it?
Rick: This is a boring one. I’d like to have speed learning.
Carlos: Speed learning, okay.
Rick: So as a consultant so I could go even farther down the road of, “yeah, just tell me what you need and I’ll be ready tomorrow.”
Carlos: Yes, I’ll have that. There might be some speed typing involved with that, but yeah. Well, awesome. Rick, thanks so much for being on the program today.
Rick: Thanks for having me.
Carlos: So there you go, compañeros, that is going to do it for today’s episode. I admit, my take-away was I thought there would be a few more opportunities for heaps, and again, I don’t know if it’s a cultural thing or somehow the tribal knowledge has gotten in there that heaps will somehow help you with inserting. Of course, we didn’t get into specific examples, but Rick has some of those on his website, and if you go to the SQLSaturday site, his presentation on heaps includes some details and some scripts that you can use to do some testing on your own. Our thanks again to Rick for coming on the program today. As always, compañeros, you can reach out to us on social media. We’d love to hear your thoughts on the program or topics you’d like us to talk about. You can reach me on LinkedIn at Carlos L Chacon, and we’ll see you on the SQL Trail.