So it turns out an old dog can learn new tricks, or at least that is what happened to me when I chatted with Grant Fritchey from Red Gate Software about SQL Server statistics. Always taking an opportunity to teach, Grant talks about why statistics are important and we go over some fundamental items you may just want to know about. As always, compañeros, have fun on the SQL trail.

Show Notes

Grant on Twitter
Grant’s blog
Grant talks about Minion ReIndex
SQL Server Statistics Questions We Were Too Shy to Ask
Trace flag 2371
DBCC SHOW_STATISTICS

Transcription: SQL Server Statistics

Carlos L Chacon: Welcome to the SQL Data Partners Podcast. My name is Carlos L Chacon, your host, and this is episode 11. Today we’re talking about statistics with Grant Fritchey from Red Gate Software. Grant is a Microsoft SQL Server MVP and product evangelist for Red Gate.

Now, you can teach an old dog new tricks, and I found out in my conversation with Grant that there are a few things about statistics that I still didn’t know, even after my preparation for our conversation. I hope that you will find it enjoyable. Grant is always a funny guy to chat with.

We’ll have show notes available for you at sqldatapartners.com/podcast. We are going a little bit longer than normal. We’re actually going to push about 40 minutes today, so hang in there till the end. It will be well worth it.

You may notice that the sound quality is a little bit better here in this opening, giving you a little bit of preview if you will for the new microphone that I’m using. This will be going full time in the next episode, episode number 12.

Hang in there with us for one more episode and the quality will get better in episode number 12. Again, this is episode number 11. Compañeros, as always, welcome to the show.

Children: SQL Data Partners [music]

Carlos: Grant, welcome to the show.

Grant Fritchey: Hey, thank you. Thanks for having me.

Carlos: I remember a job interview that I had several years ago. One of the things they asked me about was statistics. I don’t know that I remember my exact comment, but I know that I was a little bit nervous about it because I had grabbed something online, and I think I gave an answer about helping the query, but I didn’t necessarily have a really, really good grasp.

Over the years, some of the content that you’ve put out has really helped me to understand statistics better. I’m super glad that we can have you on the show here today.

Grant: Thanks, I appreciate it.

Carlos: Let’s go ahead and jump into that. What would we give as a better description of statistics than I gave in that interview those many years ago?

Grant: [laughs] The best way to put it is that statistics are the principal driver for the decisions made by the query optimizer. They’re the first thing it’s going to look at. In addition to the objects you’re referencing, it cares about statistics, because the statistics are what allow it to make its row estimates: its guesses as to how many rows it’s likely to retrieve based on the indexes, the columns, your WHERE clause, all that fun stuff, and how you’re doing your joins. All that plays a part, but it all feeds off of statistics.

If you’ve got good statistics, the optimizer’s going to make better choices. If you’ve got no or questionable statistics, the optimizer might make really, really bad choices. It all comes back to statistics driving the optimizer.

Carlos: That’s right, so I think I’ll go with that. The statistics are a sample of the data, if you will. In the table that I have, how many of this value do I have? I’m going to keep a record of some of that. I think we can go up to 200 records? Now that hasn’t changed, right?

Grant: No, no that hasn’t changed yet. You’re talking about the histogram.

Carlos: The histogram? That’s right. It keeps that information, up to 200 lines of what my estimated data is. Then it uses that, as you mentioned, to formulate those queries and be able to know, based on what you’ve just requested of me, how I’m best able to give you back that data.

Grant: Absolutely. The funny thing is, you’ve just hit the most important thing about statistics. You create an index, say on person ID. You expect the person ID index to be accurately reflected in the statistics, and it will be. Let’s say you create another index. You create this index on entry date and person name or something. The interesting bit is that the histogram, which you already nailed as being extremely important, is only going to be on that first column.

Even though you have a compound index, even though you’ve got a high degree of selectivity and possibly the most accurate representation of the data possible, the histogram is only on the first column.

It limits…I don’t want to say limits the capability, but it certainly means that that first column and column choice in your indexes is extremely important because of how statistics deals with it.

Carlos: That’s an interesting thought there, and that may be why I think sometimes we can grapple with our indexes and which ones we should be using. You mentioned column choice there. We’re getting into why these statistics are important: ultimately they’re going to affect our queries, right? They may affect our index choices or the way we create those? That order…

Grant: Yup.

Carlos: …that you mentioned. That’s why we’re caring about them. Now you mentioned that they’re only going to be created on that first column of my index. How can I look at the statistics of my index or my table to see what statistics are there?

Grant: It’s actually funny, you’re going to be looking at the database consistency check, or, no, they don’t call it that anymore. It’s the database control console or something, DBCC, I’ve forgotten what the new name is. It was database consistency check for 20 years, so I’m old. [laughter]

Grant: Anyway, DBCC…

Carlos: I hadn’t realized they renamed it, so I guess that shows how much attention I’m paying there.

Grant: Yeah, there is a new name. DBCC has a command called SHOW_STATISTICS. That’s your baby. It’s a really simple command: you call DBCC SHOW_STATISTICS and pass it an object name, a table. Then it will show you the statistics on the table. Pass in an index, and it will show you statistics on that index. You can also look at just the statistics, because statistics are created on an index automatically.

But, assuming you’ve got the defaults on SQL Server, when you reference a column in a WHERE clause, if it does not have an index, SQL Server will go and create statistics for the column. It’s auto-created statistics.

It’s a good thing, don’t panic, but it is something going on. You’ll get these funky system statistics names on your tables if you go and look at the statistics there. You can look in Management Studio, and it’ll tell you which statistics you’ve got.
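For readers following along at home, here is a rough sketch of how you might list a table’s statistics and then look at one of them, roughly as Grant describes. The table and index names are placeholders, not anything from the show:

-- List the statistics on a table; auto-created column statistics
-- show up with names that start with _WA_Sys_
SELECT s.name, s.auto_created, s.user_created
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.MyTable');   -- hypothetical table name

-- Show the header, density vector, and histogram for one statistics object
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyTable_Column1');   -- hypothetical index name

-- Or limit the output to just the histogram
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyTable_Column1') WITH HISTOGRAM;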

Carlos: Those auto-generated ones are the ones that start with WA.

Grant: Yeah, WA, which means the state of Washington.

Carlos: State of Washington. Good old Washington. My wife’s from that state, so I can’t complain too much. [laughter]

Grant: I still don’t know quite why it’s in a system name, but there it is.

Carlos: There you go. Naming choices can be important, right? You’ll never know when that’s going to come back to show up in millions of people’s environments.

Grant: Some developer having a fun one day, now you live with it forever.

Carlos: Yeah, that’s right. We’ve talked a little bit about those statistics, where they live, how we want to see them. Now let’s talk about changing those statistics, or maybe updating them. What kind of influence do we have over those statistics? You mentioned they get created by an index. They get created…excuse me…on an index by default. Do we have any control over when they get updated or how they change?

Grant: No, none at all. [laughs]

Carlos: I’m like, “Ugh, I should think I’ve got one.” [crosstalk]

Grant: [inaudible 9:05] Of course you can. You’ve got quite a few switches and knobs that you can turn. Probably not as many as people would like. As important as statistics are, the level and degree of control we have is somewhat limited. As you said, statistics are created automatically on indexes, or created automatically on those columns. They’re also maintained automatically. As your data’s modified (adds, deletes, edits), you cross certain thresholds inside that data and then it updates the statistics.

The rules are simple. If there’s no data at all and you add data, you get new statistics. That one’s easy. If you are less than 500 rows in a table and you modify up to, you modify more than 500, by modify I mean add, edit, delete…

Carlos: Changes, 500 changes.

Grant: Five hundred changes. Then you will get a statistics update. Then above 500 rows, it’s…

Carlos: I think it goes to 20 percent then, 20 percent of change?

Grant: Yeah, 20 percent. Once it’s above 500 rows, it’s 20 percent of the table that’s changed. Thank you, gosh. I need a new brain. That’s where things get fun. This is the automatic process. Those updates are going to be occurring while you’re working, but there are two things to know about it.

One is that 20 percent, when you’re looking at 500 or 1,000 rows, it’s not that many rows that have to get modified. Ten thousand, hundred thousand, it’s still not that many modifications.

When you get to a million rows, 10 million rows, 500 million rows, suddenly 20 percent gets extremely, excruciatingly painful to wait for a statistics update. That’s an issue, and we’ll talk about that in one second, because it’s one of the knobs you can tweak.

The other issue that you’re going to run into is that the statistics update, the automatic statistics update is sampled. Meaning, it doesn’t read the entire data set like it does when, let’s say you create an index, it’s going to read the entire data set, come up with a distribution of that data and give you what the statistics look like.

Whereas the automatic updates are sampled, so you are going to see slightly different data values, because it’s going to just take 200 random points inside your data set and come up with what you’ve got. That could be problematic.
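If you want to see how heavily sampled or how stale a statistic currently is, the sys.dm_db_stats_properties function exposes that information on newer builds (SQL Server 2008 R2 SP2 and 2012 SP1 onward). This is just an illustrative sketch with a placeholder table name:

-- Last update time, rows in the table, rows actually sampled,
-- and modifications since the last statistics update
SELECT s.name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.MyTable');   -- hypothetical table name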

Now you’re going to talk about the controls that we have. We do have a few controls, and the first one addresses that 20 percent.

Carlos: Yes.

Grant: What we can do is we’ve got trace flag 2371. It’s available in 2008 R2 SP1 or greater. [laughs] No, I don’t remember that, I looked it up.

Carlos: You have to look it up every time?

Grant: Yeah, I have to look it up every time. What it does is, after 25,000 rows, it turns the percentage down over time. As the number of rows you have grows, the percentage of changes required before an automatic update occurs is reduced and reduced and reduced. I think somebody calculated what the formula is, but the calculation’s not published so I don’t bother telling people. I just say, “It works better.”
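Enabling the trace flag Grant mentions is a one-liner; a minimal sketch (and, as an aside, SQL Server 2016 under compatibility level 130 makes this dynamic threshold the default behavior):

-- Turn on the dynamic auto-update statistics threshold globally
DBCC TRACEON (2371, -1);

-- It can also be set as a startup parameter: -T2371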

Carlos: It works better, there you go. You get beyond that 20 percent quicker and…

Grant: Yeah.

Carlos: …it enables you to have better stats there.

Grant: Absolutely. Then when you get into other controls, this stuff gets a little bit more, how best to put it, esoteric and arcane. There are other trace flags: 4137, and for 2014 and greater, 9471 and 9472. These affect the way the optimizer makes calculations based on compound predicates. Meaning, I’ve got two columns and I’m doing an AND, or I’m doing an OR. In that case, you can affect the way it makes those calculations by changing these trace flags.

They’re used pretty rarely. You’ve got to be in a pretty particular situation, and most of the time these trace flags are not set on the server. Most of the time, they’re applied at the query end.

It’s pretty rare to use them. I like to bring them up and mention that they’re there, and say, “Hey, when you’re hitting possible selectivity issues in your AND or OR clauses and you’re going, ‘I don’t know why it’s doing this,’ maybe try some trace flags. Experiment anyway.”
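Applying one of these at the query level rather than server-wide is typically done with the QUERYTRACEON hint. This is only an illustrative sketch; the table and columns are made up:

-- Query-scoped trace flag affecting estimates for ANDed predicates
SELECT o.OrderID
FROM dbo.Orders AS o                 -- hypothetical table
WHERE o.CustomerID = 42
  AND o.Region = 'West'
OPTION (QUERYTRACEON 4137);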

Carlos: Very good. I haven’t used those trace flags at all, to be honest, in doing what I’m doing. Maybe I’m working with a little bit smaller potatoes. [laughs]

Grant: Yeah. I’ve not personally used them, I’ve seen them used.

Carlos: Ultimately, that discussion was…

Grant: Knob twisting.

Carlos: That’s a little bit of knob twisting. Can we influence the way SQL Server chooses to…build those statistics? For example, you talked about sampling versus full scan.

Grant: That was the other thing I meant to bring up. Go ahead, or you want me to go ahead?

Carlos: Oh no, go ahead.

Grant: I did mention knobs, right? The first knobs you have are these trace flags to affect how these things are consumed, or in the case of 2371, how they’re generated. Past that, you’ve got a command called UPDATE STATISTICS. You can also create statistics for that matter, or drop statistics. But UPDATE STATISTICS is the interesting one, because UPDATE STATISTICS lets us determine how we want to do our statistics changes.

You can say UPDATE STATISTICS and then define a sample rate, say, “Oh, 50 percent of the data. Look at half of it. I can’t afford for you to look at all of it because I’m feeling it affect my performance when I do that.”

[crosstalk]

Grant: Yeah, we can do that, or you can say, “Hey, UPDATE STATISTICS against this particular table, or against this statistic, with full scan,” meaning look at all the data; I need accurate, accurate, accurate statistics. You can make adjustments up and down from that. You can also turn off auto update statistics individually. You can have it on for everything, but then you can do an UPDATE STATISTICS against a particular table and tell it NORECOMPUTE. What that does is disable the auto update of statistics.
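A few hedged examples of the options Grant is describing, again with placeholder table and statistics names:

-- Sampled update: read roughly half the data
UPDATE STATISTICS dbo.MyTable WITH SAMPLE 50 PERCENT;

-- Full scan: read all the data for the most accurate statistics
UPDATE STATISTICS dbo.MyTable MyStatName WITH FULLSCAN;

-- Full scan, and also switch off the automatic update for this statistic
UPDATE STATISTICS dbo.MyTable MyStatName WITH FULLSCAN, NORECOMPUTE;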

If you’re doing the in-memory tables, by the way, there is no automatic update on statistics. None. You do need to maintain your statistics yourself. However, and I’m going to test this on 2016, on 2014 with the in-memory tables you can’t run DBCC SHOW_STATISTICS.

Carlos: Interesting.

Grant: You can’t see how out of date or off your statistics are, you just have to make a guess and say, “Yeah, we’ll update them now.”

Carlos: My DBA aura is telling me, “Yes, now is the time.”

Grant: Yeah, right, exactly. It’s just feeling at one with the universe.

Carlos: That’s right.

Grant: Yes, now we should update our statistics.

Carlos: Interesting. That’s something that I was not aware of, being able to disable the automatic statistics update on an individual table, or I guess I’m assuming, an index as well.

Grant: Or just a single statistic, yeah.

Carlos: Or a single statistic. What scenarios would that be most likely?

Grant: You’re talking high-end stuff, right? You’re getting up into the “We’ve got millions of rows and we’ve set the statistics during what we consider our downtime or our maintenance window. We’re so big, statistics updates are painful.” For most people, they won’t even notice the statistics updates. You won’t even feel the fact that they’re occurring, but you’re in that situation where a statistics update is painful, so, “I’m going to disable it.”

Or, you’re in a situation where, “We need to sample on a very particular rate, whether it’s a full scan or reduced value, and we don’t want to rely on the auto stats update,” which is uncontrolled to a very large degree.

“We’re going to control it. It’s going to do exactly this way, exactly like this, but we’re going to disable it.” But again, that’s where you either got a very odd data distribution, or you’ve got big, big, big data. That’s usually what it is.

If you’ve got really odd data distribution and small data, then just update the statistics a lot. It’s your best… [laughs] We had a situation that was a horrible database design. Horrible. I freely admit this, it wasn’t my design, but I was trying to fix it and failing. I freely admit.

I didn’t know what to do to fix it. The situation was so bad: the data coming in would age the statistics, and then a bad parameter sniffing issue would occur, it would make really horrible guesses off the statistics, and we would get bad execution plans.

The queries that were normally running in, you know, 100 milliseconds were suddenly running in 10 seconds; it was screaming bad. We were doing UPDATE STATISTICS with full scan every five minutes on one table.

Carlos: I was going to say, that should probably be a very unique situation and a one-off, I guess on one table, right?

Grant: It absolutely was, it was a horrible, horrible compromise but it actually fixed the problem that we were having because we got clean statistics on a regular basis. There’s a little bit of locking, a little bit of blocking every five minutes but not as noticeable as when that query suddenly became a 10-second query.

Carlos: That’s an interesting thought there. You mentioned the impact that you would feel when updating statistics. On an individual instance, I think I would agree, but the one area that I have seen where updating statistics as your maintenance strategy can potentially hurt is if you have even 20, let’s just say even 20 instances in your environment and you have your maintenance kick off on all 20 at the same time.

You are starting to update those statistics, and let’s say you are doing the full table scans. Your SAN operators are going to come back and say, “Hey, you know, I’m noticing this huge spike at one o’clock in the morning.”

Grant: We need to talk.

Carlos: We need to talk, that’s right, you are overloading my buffer cache and everything. Now as we talk a little bit about maintenance and doing that…

Grant: Sure, that’s one of the issues.

Carlos: One of the issues that can come back to bite you. I think we need to take a look at updating statistics. How often should we be updating them? Luckily, SQL Server does that for us, in a sense, automatically. There’s some care and feeding involved that is automatically done with the statistics.

Grant: In most systems most of the time that’s actually adequate.

Carlos: I would say combined with my index rebuilds. If I’m doing index rebuilds, my statistics get automatically updated.

Grant: When you rebuild an index, it does a full scan for the statistics that it recreates, because rebuilding the index is basically a new thing, it’s literally rebuilding the index. When it does that, it also does a full data scan for the statistics that it creates. You get the most accurate statistics after an update like that, which brings up one of the classic, classic problems that people have, which is that they will say, “OK, well, I’m going to rebuild indexes, then I’m going to do my statistics maintenance.”

They will rebuild their indexes, that takes a period of time or whatever, it finishes, and it’s all great: you’ve got all of these wonderful new up-to-date statistics on any of the indexes that were rebuilt.

Then you run sp_updatestats, which is the shorthand that everybody uses instead of doing the UPDATE STATISTICS command. Microsoft gave us a tool, sp_updatestats, and it works.

Carlos: It’s simple, straightforward, we will get everything.

Grant: It’s all clean, it helps you out, it’s all great, except that they run it immediately after rebuilding all of their indexes, and there’s a threshold that sp_updatestats has: if you cross that threshold, it will then do a sampled update of your statistics. The threshold is really simple: if the rowmodctr counter is one or more.

Carlos: [laughs] I did not know that.

Grant: If you’ve done a full scan statistics update as part of your index rebuild, now you’ve got the most accurate statistics you could have. Then you run sp_updatestats against it, and if anybody has touched a row [inaudible 22:17] you now get sampled statistics in place of your beautiful, perfect set of statistics.
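To make the ordering concrete, here is a hedged sketch of one way around the problem Grant describes: rebuild the indexes, then refresh only the column statistics, since the index statistics were just rebuilt with a full scan anyway. The object names are placeholders:

-- Rebuilding the index recreates its statistics with a full scan
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;   -- hypothetical names

-- Then update only the column statistics, leaving the freshly
-- rebuilt index statistics alone
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN, COLUMNS;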

Carlos: All right, yeah. Let’s talk about how we get around some of that. I think part of that is not running update stats every night as a nightly process.

Grant: That can [inaudible 23:46] a dance. Most of our systems we found we needed to do index rebuilds once a week, needed to. I’ve heard people say, “Oh, our index rebuild doesn’t help that much,” well it can, I’ve seen it help a lot. We were doing that about once a week, but then statistics were aging a lot faster than the indexes were fragmenting. We were doing stats updates once a day.

Carlos: There was enough data being added that your stats were becoming a little bit stale and you needed to update those, but your indexes from a fragmentation perspective still looked OK.

Grant: Yeah, no big deal. We were doing much more frequent stats updates than index rebuilds.

Carlos: I think again that’s probably a little bit more of knowing your environment. I would think it goes further, that that’s probably on a table-by-table basis potentially as well.

Grant: Potentially, yeah. You may have hotspots, right? Like we had that bad design. I bring that up because at this point it’s 10 years ago when we had the problem, and it ultimately got fixed by redesigning the entire structure.

Carlos: Very good.

Grant: [laughs] That was the solution. The issue there is that we had a hotspot, we had a particular area that needed tender loving care. You do have to be aware of your systems and how they behave to understand that in this area most everything is going to work fine. We do an index rebuild once a day, it’s not causing any performance issues, the SAN guy is not complaining, everything is fine.

We probably don’t need to do statistics maintenance that often, so you can look into other alternatives. Or you may find that you are not doing those updates very often on the indexes, but you’ve still got a lot of data volatility, it’s getting modified, deleted, inserted, and you need…

Carlos: This is going to lead us to…you mentioned the control that we didn’t have, and I know that…actually, talking with some of my Oracle buddies. We had a statistics problem one time…anyway, the Oracle guy was laughing at us because they can actually import and export statistics. I was like, “Whoa! Whoa! That’s pretty cool, we are working at their mercy,” right, but one of the things…

Grant: We can do that.

Carlos: You can import and export statistics?

Grant: Yeah.

Carlos: Then you’ll have to point me to where I can find that.

Grant: You just gave me a good idea for a blog post.

Carlos: There you go, that’s right, because I have not seen that. That would be interesting to know. One of the things that we can do to that end, if we see our data changing in a specific area, is that we could also then start looking at either filtered stats or…

Grant: I left out filtered stats, thank you.

Carlos: Potentially, I guess I’m assuming that’s maybe a date parameter or something, that that new data is coming in and that’s where you are having your problems or your stats are going out of date. Filtered stats could potentially help you with that.

Grant: Absolutely, that’s what they are there for. They were created at the same time as they created filtered indexes. It’s the same idea: a filtered index is going to have automatically filtered statistics, but you can also just create filtered statistics independent of indexes. That will give the optimizer more tools for figuring out how many rows you are getting back. That’s a huge win depending on your data distribution; it’s not something that everybody runs out and starts applying, statistics filters.
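A minimal sketch of creating a filtered statistic independent of any index, with made-up table, column, and date values:

-- Statistics over just the recent, volatile slice of the data
CREATE STATISTICS st_Orders_Recent
ON dbo.Orders (OrderDate, CustomerID)            -- hypothetical table and columns
WHERE OrderDate >= '20150101';

-- Filtered statistics often need manual care and feeding
UPDATE STATISTICS dbo.Orders st_Orders_Recent WITH FULLSCAN;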

There are a lot of tools; it’s just that it’s not always straightforward what the problem is. It becomes one of those things of being able to identify when the query itself is fairly well written, the structure is in place, the indexes are in place.

Why does it think it’s going to get a million rows when it’s only going to get one, or vice versa, why does it think it’s going to get one row when it’s going to get a million? Making that determination of why it’s making those choices leads you into statistics, and then you can figure out what you have to do to fix them.

We do have quite a few tools, we don’t have…like, we do have the 200-step histogram; Oracle can adjust that. They can say, “Well, for this index we want it to have 500 steps.” I’m not an Oracle guy, I don’t know what the limits are, but they can make adjustments on that.

They do have more knobs to tweak than we do. Believe me, there are occasions where I would love to be able to go, “Yeah, I don’t care if it costs extra maintenance overhead, I want 1,000 steps in my histogram.”

Carlos: Yeah, I want a little bit more. That’s a question for you there, off the cuff. When we have a…let’s say we have a medium-sized table, let’s just say 10,000 rows, 15,000 rows, we know that we can have up to 200 steps in our histogram. Why would there be occasions when there’s only, let’s say, 150 and it’s not using all 200?

Grant: It’s actually funny, it largely depends on the data and the distribution of that data. If you’ve got a lot of duplicates in your data, it’s not going to use 200 steps to define it. If you’ve got constraints in place that say only certain data values can go in here, it actually uses those up front in the calculation and it can figure out how many rows it puts in. I have a demo I do where I create statistics and I get two steps inside the histogram on a million rows.

Funny thing is, it’s not because I put it as a [inaudible 30:12], I would have done that too. It has to do with the fact that I put a constraint in place so it could only ever have one value, but I still got two rows in the histogram, which I thought was hilarious.
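A rough sketch of the kind of demo Grant describes, under the assumption that a CHECK constraint restricts the column to a single value; every name and number here is invented for illustration:

-- Hypothetical table where the column can only ever hold one value
CREATE TABLE dbo.OneValue
(
    ID INT IDENTITY PRIMARY KEY,
    Code CHAR(1) NOT NULL CONSTRAINT ck_OneValue_Code CHECK (Code = 'A')
);

-- Load a lot of identical rows, then build statistics on the column
INSERT INTO dbo.OneValue (Code)
SELECT TOP (1000000) 'A'
FROM sys.all_columns AS c1 CROSS JOIN sys.all_columns AS c2;

CREATE STATISTICS st_OneValue_Code ON dbo.OneValue (Code) WITH FULLSCAN;

-- The histogram ends up with only a couple of steps
DBCC SHOW_STATISTICS ('dbo.OneValue', 'st_OneValue_Code') WITH HISTOGRAM;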

Carlos: Interesting.

Grant: If you look at it, its determination is the same, it knows that it’s only ever going to return one row, but for some reason it had to give me two steps, I don’t know why.

Carlos: Because one just wasn’t enough in that case.

Grant: It should have been able to do with a single step in the histogram but I think it needs to have a start and stop point, so it created two.

Carlos: There you go, compañeros. I think you should be able to tell that statistics, while maybe in principle a fairly straightforward idea, come with lots of different considerations to keep in mind, and we are not going to be able to get to all of them today.

Grant: No.

Carlos: There are a couple of opportunities for you to learn a little bit more about statistics, and we’ll invite you to check out sqldatapartners.com/podcast for all the show notes from today’s episode. One of the articles that we will point you to is an article Grant wrote for Simple Talk called “SQL Server Statistics Questions We Were Too Shy To Ask”. It goes through some of the things we talked about today but also includes some additional detail.

For those of you who are headed to the PASS Summit in Seattle this October, Grant will be giving a session there on statistics for the new data pro. That may be of interest to you, and we’ve talked about several different things, but again there will always be more.

Before we let you go, Grant, just a couple of standard questions. One of the things we like to do is try and provide some value to folks, so we would like to talk about favorite tools. What’s your favorite SQL tool? It could be a paid tool or a free tool; why do you like it and how do you use it?

Grant: I’m going to come off prejudiced, but my favorite SQL tool is Red Gate SQL Prompt, and I work for Red Gate Software, I feel bad saying that, but I can’t write T-SQL code without it. [laughter]

Grant: That’s a fact. If I stopped working for the company, if they fired me, I would still buy the product, because it makes it so that I can write T-SQL code.

Carlos: We would like to hear one of your favorite stories. Ultimately, we would like to hear about why you enjoy being a database professional.

Grant: Why do I enjoy being a database professional? I think those are personal issues. [laughs] Because I do enjoy it, I really do. I don’t remember the specifics on this one, but I’ll tell the story anyway because it illustrates why I enjoy being a DBA. The DBA’s job is dull and invisible, until the emergencies happen.

Carlos: Till it hits the fan.

Grant: To a large degree, a good DBA is an invisible DBA, although I don’t believe in that; you should be engaged with everyone and very much involved in what you do, what the company does as a business, and what the developers are doing and all that stuff. But by and large you can lie around in the background and be a happy camper, until everything goes south.

It was Friday, at five o’clock, I swear, walking out the door with some friends, and we hear a long stream of curses coming from the SAN admin.

You are saying, “OK, whatever,” and you keep walking, until every single one of us…we were all there, two DBAs, myself and another DBA, a couple of admin guys, and everyone’s phone went off at once.

[laughter]

Carlos: Oh boy, database down.

Grant: You know that, “Oh, there is a problem.” [laughs] I’m looking at the door, it’s right there. I turn around, go back, turn on the computer. What the heck is going on? Wow, we can’t get to like half, more than half of our servers are offline. They’re all gone, and we don’t know why. The SQL Server instances are offline, the Windows instances are offline, we’re going round and round.

It’s Friday afternoon, I’m punch drunk [inaudible 35:04], but I start cracking jokes, I’m having a good time and I’m really enjoying myself, I’m getting very excited. We figure out what happened: the SAN guy turned off the SAN. [laughs]

We really quickly had to get all our servers back online, we really quickly had to go through all the databases and do all the DBCC consistency checks, because they had a hard crash on the disks. We had to have scripts written and all this stuff done very quickly for a short recovery. It was just so much fun.

I had a really good time, because adversity seems to be the moment when things are entertaining and shining, you get to step up and do stuff that you just don’t normally get to do. It was a lot of fun, and it’s the one thing that makes me, to me, a database professional: when the stuff hits the fan, I get pumped.

[laughter]

Carlos: The blood starts pumping and things start happening. Compañeros, we do have another opportunity for you to learn about SQL Server. You can’t see it, but Grant’s actually wearing a SQLCruise shirt at the moment. Grant, do you want to talk about SQLCruise for a second?

Grant: Yes, I take every opportunity I can to talk about SQLCruise. Let’s put it this way: SQLCruise is very intense, long-form classroom SQL Server training with what I would consider some of the better trainers out there, and somehow I’m on the list too. [background music]

Carlos: [laughs] That’s right.

Grant: It’s great classroom time, some very serious training from very serious people: Kevin Kline, [inaudible 36:51], Tim Ford also does some of the training, Tim Ford of SQLCruise. Some of the best people out there are going to teach your classes, and that’s wrong.

Carlos: [laughs]

Grant: What you end up with literally changes people’s lives, because you get intense classroom time and you learn stuff and you develop your skills. But then you get the networking time with people like Kevin Kline, where a 20-minute talk with the man could change your life. You could change the direction, the path, the approach to things that you do; he is that inspiring, he really is.

I’ve watched people come back and not change their jobs, but get more involved in the job that they are in, step up and become technology leaders within their communities, technology leaders within their company.

The last time we were there, it was 100 degrees warmer [laughs] in the Caribbean than it was back here in Massachusetts and Michigan. It’s a glorious time to go and get your SQL Server learning on. [laughs]

Carlos: If you are from the Northeast and you want to get out of the cold, this might be a way to do it. In fact, you and I met on SQLCruise, and it’s probably a big reason why you agreed to come on the podcast here. If you’d like to learn how you can actually save $100, go to sqldatapartners.com/sqlcruise.

The team has afforded us the opportunity to give a $100 discount, and there are some directions on there so you can find out more about the cruise and get the $100 discount as well, if that’s something you want to do.

Grant, we do have one last question for you. That is, if you could have one super hero power, what would it be and why would you want it?

Grant: Of course, the one I’ve always gone with. I’ve always wanted to be the Flash, I’ve always wanted to be the Flash.

Carlos: Interesting.

Grant: I want to be able to move quickly from place to place. Hey, if I can get there as fast as the Flash does, I don’t have to fly anymore, and that would be great. [laughter]

Carlos: Very good, very cool. Grant, thanks again for being on the show, we do appreciate it.

Grant: No, thank you, thanks for having me. I appreciate it.

Carlos: Compañeros, we will see you on the SQL trail. [music]

Children: SQL Data Partners.
