Baselines

Have you ever had a situation where performance got worse and you weren’t sure why? Do you keep records of when changes happen to your system? How do we know there’s a problem? If we have a baseline, this can help us out. More often than not, tribal knowledge rules the day, and if you weren’t part of the tribe when the on-call pager goes off, things can be tough to figure out.

My guest this episode is Erin Stellato of SQLskills and we discuss what your baseline should consist of and how you might go about capturing some of that data.  I am always interested to see how people monitor their servers and I know this episode will give you a few things to think about in your baselines.

Transcription: baseline

Carlos L Chacon: This is “SQL Data Partners Podcast.” My name is Carlos L Chacon, your host.

Today, we are talking about baseline. I have with me Erin Stellato from SQLskills. She’s with us today. Of course, this is a real treat, always enjoy talking with those folks. We have talked about finding your people or people that resonate with you. Erin is one of those people for me. I first met her at a SQL Saturday in Philadelphia several years ago.

I really like her teaching style. The way she lays out information makes it very easy for even a knuckle-dragging Neanderthal like me to understand.

I think there’s a little bit of confusion around what we as database professionals can do to be proactive in our environments. This episode is definitely going to go and provide some thoughts around that area. Ultimately, today, we’re talking about baselining.

Today’s episode is brought to you by SQL Cruise. Learn. Network. Relax. Grow. Just add water. Check out sqlcruise.com for more information.

If you have something we should be talking about on the show, you can reach out to me on Twitter @carloslchacon or by email at [email protected]. Compañeros, it’s always good to have you on the show. I do appreciate you tuning in. Let’s get to it. Welcome to the program.

Erin, thanks so much for being here.

Erin Stellato: No problem. Thank you for having me.

Carlos: As I mentioned, I attended your session at that SQL Saturday in Philadelphia in 2012. I know you’ve been speaking for a while. That session didn’t happen to be about baselining, but in a recent post you mentioned your first ever session was on baselining, and you’ve written several posts on baselining for several of the popular SQL Server blogs. When I thought about having this discussion, I knew I wanted to have you on the show. Take us through some of your experience with baselining. Why do you think baselining has played such a large part in what you teach? What do you see as the benefits of baselining?

Erin: Sure. My previous position, before I joined SQLskills, was at a software company, a company that produced software that thousands of customers across the US, and across the world really, ran. The back end for that application was SQL Server. I started out at that company in technical support and moved over to the database team over time, and really fell in love with SQL Server and databases and how it all works. I would get pulled into tech support cases because of issues related to performance, even back when I was in technical support. It was always about, “The application is slow. We’re seeing these issues.” One of the first things that people tend to blame is the database.

Carlos: Of course.

Erin: The question was always, “Tell me what slow means.” “Well, it’s just slow,” was often the answer. “How do you know?” “It used to be fast, and now it’s slow.”

[laughter]

Erin: What was fast and what is slow? Trying to quantify fast and slow became a real challenge, because people didn’t have baselines. They couldn’t tell me what was good. They could just say, “It’s bad.” That was where the interest came from, because there were ways to measure performance in terms of how long it took to execute a particular action within the application. People weren’t doing that, nor were they collecting any information on the SQL Server side, not just about how quickly something happened but even what the configuration of their environment was.

It wasn’t uncommon for someone to go in and make a change within the SQL Server configuration and have that adversely affect performance. Nobody was keeping track of what those settings were. No one knew when anything changed unless it got logged in the error log, or unless someone happened to notice something, because people also don’t use change tracking. Not change tracking for SQL Server, but for their processes, when someone goes in and makes a change.

Carlos: Yeah, exactly.

Erin: That’s really where it started for me.

Carlos: OK. One of the interesting thoughts you have in these posts, which we’ll put up in the show notes for today’s episode, is the idea of taking a mini health check of your environment every month. Did that spawn out of the same idea as well?

Erin: That’s something that we provide as a service from SQLskills. One of the services that we provide is a health audit. Let me come in and let me look at your environment. We use the SQLdiag tool to do this, which ships with SQL Server. People already have it in their environment. They don’t have to download a tool. They just download a set of scripts. They use SQLdiag to run those and capture the information that we want. We get a snapshot of their environment. Let me see what this looks like in terms of the server configuration, the instance configuration, the database configuration.

What do we see in the logs, the default trace, extended events, the system health session? We pull all that information and review it to see what the health of this environment looks like right now. That is a baseline, even if things are not good.

If things are not good and they’ve not been great for a while, that’s a baseline. We can improve that. If things are great, then that’s the information where we know, “Hey, this is what things look like when they’re ‘normal.’”

Once we’ve done that for a customer, if they’re interested in remote DBA services, then every month one of the things that we do is we repeat that audit. We compare the information each month to what we’ve seen previously, looking for trends, looking for problems, to proactively identify changes.

Carlos: You’ll actually go through that PSSdiag process each month?

Erin: We will. We’ll run SQLdiag every single month. We’re consistent in that we’re always capturing the same information and then comparing it to the previous month. In some cases, I’ll end up going across three or four months to look at the data, because you’ll sometimes see subtle changes in something like wait statistics or virtual file stats. It increments maybe a few milliseconds each month. When you look at that month against month, it’s only a few milliseconds of increase. It doesn’t necessarily get flagged in your brain as, “This is a bad thing.” If you compare January all the way to May, and those few milliseconds add up to a hundred milliseconds, then that’s something to note.
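
As a rough illustration of the kind of month-over-month comparison Erin describes, a snapshot of sys.dm_io_virtual_file_stats can be written to a history table on a schedule and then compared across months. This is only a sketch; the table name dbo.FileStatsSnapshot is hypothetical, and the DMV values are cumulative since the last restart, so the trend comes from the deltas between snapshots.

-- Hypothetical history table for virtual file stats snapshots
CREATE TABLE dbo.FileStatsSnapshot
(
    CaptureDate    DATETIME2(0) NOT NULL DEFAULT SYSDATETIME(),
    DatabaseName   SYSNAME      NOT NULL,
    FileID         INT          NOT NULL,
    NumReads       BIGINT       NOT NULL,
    NumWrites      BIGINT       NOT NULL,
    IOStallReadMS  BIGINT       NOT NULL,
    IOStallWriteMS BIGINT       NOT NULL
);

-- Capture a snapshot; comparing rows captured in different months shows the trend
INSERT INTO dbo.FileStatsSnapshot
    (DatabaseName, FileID, NumReads, NumWrites, IOStallReadMS, IOStallWriteMS)
SELECT
    DB_NAME(vfs.database_id),
    vfs.file_id,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.io_stall_read_ms,
    vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;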

Carlos: Yeah, you see a trend there.

Erin: Right, we see a trend in that increasing. We’re doing that with our tool. There are third-party tools that make that a lot easier to see.

Carlos: That’s an interesting thought. In fact, again, this was a couple of years ago. Redgate was having one of those SQL in the City events. I’m from Richmond, Virginia, but I actually traveled up to Boston. I wanted to see Adam Machanic. Grant Fritchey was there, Steve Jones. At that time, I hadn’t met those guys. I wanted to go up and see them. In Adam’s session, he actually mentioned using a monitoring tool. My jaw hit the floor. I thought, “I don’t like this.” This guy has [laughs] probably forgotten more about SQL Server than I know, and he’s saying, “Use a monitoring tool.” I thought, “OK. We’re at a Redgate event. Maybe he’s just saying that.” In talking with other people, they were like, “No, no. You should be using a tool.” It’s ultimately for that exact purpose of keeping that history and being able to review it.

I think when we look at baselining, that’s what we need. We need something to be able to look back at. I think a lot of times we may fall into that trap of taking a single number and then basing some decision on that versus, as you’ve recommended, looking over time to say, “Hey, what’s changed?” or “What’s different about my environment?”

Erin: Right. In the very first presentation that I did on baselines, the very first [laughs] slide was someone out in the snow, which is appropriate for this time of year, shoveling his big driveway manually. I said, “This is what you could do with SQL Server,” meaning you manually have to go in and set all of this up. You have to either pull some scripts from the Internet, some of which I provide, and implement those, and set everything up, and manage that and watch that yourself. That’s a lot of work. You can do it, but it’s a lot of work. Then I had a picture of a snow plow, or a truck with a plow on the front, that did it all for you. That was my analogy for a third-party tool.

The difference is money. As much as I know that people can do this themselves, you’ve got lots of things to do as a database administrator, and if you can prove the return on investment to the business, then in the end I believe it’s worth it. Implementing this and trying to make sure you’re covering all your bases can take a lot of time. A third-party utility makes it so much easier for you.

Management balks at the cost. What I recommend is, if you don’t have a tool and you really want one, take one set of scripts to capture information that’s most useful for you, that’s most relevant, that’s going to help you solve some problem. Track your time. Track how long it takes for you to put those into place, for you to get everything set up.

Then, figure out how many other things would be really important to you. Estimate that it’s going to take you the same amount of time for each one. Then, assuming you know your hourly rate, determine what the cost of that is. Present that to your management to say, “Look. This is something we really need. This is why.”

You need to give an example. “We had this problem. Here was what the root cause was. I could have probably solved that faster if I’d had this tool. I can do all these scripts and I can put all of this in, and it’s going to take me this amount of time. Or, we could purchase a third-party tool for this amount, which doesn’t take me time and lets me do something else, and I’m going to get way more information.”

Because if you try personally to replicate what any of those third-party tools do, that is all you will do. They have developers who are constantly working on that product and improving it. A DBA doesn’t have time to do all of that.

Carlos: Sure. The other value add is then the charting or the display graphics. You can take a look at that T-SQL output and make heads or tails of it, but then trying to convince other people that there’s a problem can sometimes be a little problematic.

Erin: The visuals are far superior.

Carlos: Exactly. It’s one of the things I liked about the PAL tool, where you’re doing this individually. You can then graph that up and you’ll say, “Oh, we experienced high CPU.” They’re like, “OK, whatever.” Then you show them a graph over a week where their CPU is pegged, and they’re like, “Oh, now I get it.” [laughter]

Erin: Exactly. Those resonate much, much better than anything in text.

Carlos: Right. Again, maybe making a little bit of a jump: you may want to use a tool, but even then we want to start the process of collecting some of this information, as you mentioned. Let’s take a look at some of the common things we might want to include in our baseline. Ultimately, as database administrators, the first area to start with is disk space.

Erin: That’s a good one, I think, because if you run out of disk space, you’ve got a serious problem. [laughter]

Carlos: Yeah. All eyes are on you on that one.

Erin: Right. I’ve got some scripts that you can use to look at how much disk space is available and also for the files that you have within a database, what size are they and how full are they, so you can proactively address that size and not let it grow automatically. We always want to leave auto growth enabled but ideally I’m presizing those files and monitoring the use of them and how full they are so that I can manually increase the size if needed.

Carlos: With that, how often are you checking? Logs, I think, are a common one, because they can grow regularly. Are you checking that once a week, every night? How often do you normally do that?

Erin: I usually have a job set up that checks that data for me. It tells me if it exceeds a certain threshold. Let’s say that I’ve got a file and I want to know when it gets above 90 percent full. Then I have a job that might run every hour, every four hours, maybe only once a day. Sometimes it depends on the application for that database and the volatility of it. Some are pretty slow in terms of how quickly that space gets used. Others can have random fluctuations because of processing that occurs at different times of the month. I might have that job run very frequently or just once a day, that checks to see how full those files are and sends a notification if they’re greater than a certain percentage.
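
A minimal sketch of the kind of check such a job might run, shown here for the current database only. The 90 percent threshold, mail profile, and recipient are placeholders; FILEPROPERTY and sp_send_dbmail are standard SQL Server features.

-- How large and how full is each file in the current database? (size is in 8 KB pages)
SELECT
    name                                    AS FileName,
    size / 128.0                            AS FileSizeMB,
    FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS SpaceUsedMB,
    CAST(FILEPROPERTY(name, 'SpaceUsed') * 100.0 / size AS DECIMAL(5, 2)) AS PercentFull
FROM sys.database_files;

-- Notify if any file is more than 90 percent full (profile and recipient are hypothetical)
IF EXISTS (SELECT 1 FROM sys.database_files
           WHERE FILEPROPERTY(name, 'SpaceUsed') * 100.0 / size > 90)
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Mail',
        @recipients   = 'dba@example.com',
        @subject      = 'Database file above 90 percent full';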

Carlos: That threshold. Yeah, I think that’s a good point. We want to start taking those scripts, get comfortable with what they look like and then putting them into a job so that it can do those checks for us.

Erin: Right. It’s two pieces. It’s snapshotting that information so you have the history, and it’s looking for that change and getting a notification, because what I don’t want to do is snapshot everything and then have to manually go look at that data, as you said, every week or every month. I’m going to automate as much as I can.

Carlos: Right. Very good. To finish that thought, when you look at growing your logs, you’re going to induce some contention on the log there; as you’re growing it, you’re going to tie it up a little. Do you have a schedule? Is it a maintenance-window thing where you do those? If you see that it’s going to grow before the maintenance window, do you just let it auto grow at that point? Any thoughts around that?

Erin: It’s a little bit harder to know when the log is going to grow. I can guess that it’s definitely going to be used during a maintenance task, and ideally I’ve presized it to handle that. You can also use event notifications to let you know when file growth has occurred. Jonathan has a great post on that for either data files or log files. One of the questions that we often get asked in our Accidental DBA course is, “What size is the transaction log supposed to be?” For that, there is no cut-and-dried answer. You have to set up some monitoring to check it. You can use DBCC SQLPERF to look at the size and the percent used. I’ll capture that information over time, with auto growth, of course, always enabled. In that case, I’m probably not going to be able to react quickly enough to grow the file if I need to, because sometimes it can get up to 90 percent and then we might have a backup and it will drop right back down.

For the log, especially in a newer environment, I’m not in the habit of reacting to that right away. I still will monitor it, and I’ll get a notification, but I will tend to let it sort itself out. Then you run through a normal workload, which may take a week, which may take a couple of weeks. You want to make sure you go through your maintenance and then maybe make some adjustments. If I increase the frequency of my transaction log backups from once an hour to every 15 minutes, does that handle the size a little bit better?

Once I’ve figured out what that size should be, then I would go shrink it and grow it back out to make sure I’ve got an adequate number of virtual log files, and then let it be. Still continue with the monitoring. Still continue snapshotting that data on a regular basis, still get the notifications if it goes above a certain threshold, and watch whether auto growths occur. There I would probably put in event notifications. I do have this for a customer who seems to run some crazy processing at random times and just blow out the log.

Then I’ve got a notification, so that if the log does grow because of something that someone did, either on purpose or accidentally, I can see that the log has grown and then I can go in and make an adjustment. Either the log needs to be that size, or I’m going to resize it because that was just a one-time thing and it doesn’t need to be that big. I want to keep that transaction log as small as I can and still manage everything in there, because if I ever have to do a recovery, it has to write out that entire log file and zero it out. You can’t use instant file initialization for it either.
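
As a sketch of the capture Erin describes, DBCC SQLPERF(LOGSPACE) returns log size and percent used for every database, and its output can be snapshotted into a history table on a schedule. The table names here are hypothetical.

-- Hypothetical history table for transaction log size and space used
CREATE TABLE dbo.LogSpaceHistory
(
    CaptureDate     DATETIME2(0)   NOT NULL DEFAULT SYSDATETIME(),
    DatabaseName    SYSNAME        NOT NULL,
    LogSizeMB       DECIMAL(18, 2) NOT NULL,
    LogSpaceUsedPct DECIMAL(5, 2)  NOT NULL,
    Status          INT            NOT NULL
);

-- Staging table matching the four columns DBCC SQLPERF(LOGSPACE) returns
CREATE TABLE #LogSpace
(
    DatabaseName    SYSNAME,
    LogSizeMB       DECIMAL(18, 2),
    LogSpaceUsedPct DECIMAL(5, 2),
    Status          INT
);

INSERT INTO #LogSpace
EXEC ('DBCC SQLPERF(LOGSPACE) WITH NO_INFOMSGS');

INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
SELECT DatabaseName, LogSizeMB, LogSpaceUsedPct, Status
FROM #LogSpace;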

Carlos: Right. I think taking a peek at when those logs have grown could be a good starting place just so you can see what the growth looks like there.

Erin: Right.
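
One way to take that peek, as a sketch: the default trace, assuming it is still enabled, records data file and log file auto-growth events, so you can see when each growth happened, for which file, and by how much.

-- Read auto-growth events from the default trace files
DECLARE @path NVARCHAR(260);

SELECT @path = REVERSE(SUBSTRING(REVERSE(path), CHARINDEX(N'\', REVERSE(path)), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT
    te.name                    AS EventName,   -- Data File Auto Grow / Log File Auto Grow
    t.DatabaseName,
    t.FileName,
    t.StartTime,
    t.Duration / 1000          AS DurationMS,  -- Duration is reported in microseconds
    (t.IntegerData * 8) / 1024 AS GrowthMB     -- IntegerData is the growth in 8 KB pages
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te
    ON t.EventClass = te.trace_event_id
WHERE te.name IN (N'Data File Auto Grow', N'Log File Auto Grow')
ORDER BY t.StartTime DESC;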

Carlos: A couple of other options we have are, of course, backups, right? Database administrators, we have to make sure we have good backups. Now, one of the things you talk about is actually getting estimates for the space that you’ll need. Does that ring a bell?

Erin: Where we’re trending backup size?

Carlos: Yeah, you’re trending backup size, that’s correct.

Erin: Yep. I don’t know if I have that linked in my summary post. Yes, that’s one of the things that I like to do, with that same client with the log growth. One of the other fun things is that their database is an over-a-terabyte database. Sometimes they do some fun things that increase the size of the database by 100 gigs, because they’ve got some really wide tables that have LOB data and they’ll create indexes that include those LOB columns, and all of a sudden we’ve added 100 gigs. One time they had an issue where they implemented some new error handling and it started adding something like a hundred million rows to a table every day, and that table just grew. I like to monitor not just the space used but also the backup size, because I’m writing those backups to a specific location and I’m looking to keep a certain number of days online.

If that backup file really grows like by 100 gigs, then that’s something I need to pay attention to because my job might fail, my backup job, because it doesn’t have enough space. That’s definitely happened with this customer because all of a sudden, within a day or two, the database has increased significantly in size.

I like to trend that over time, and I’ve been doing that for this customer. I give them updates every quarter: “Hey, look. Backup size has grown. A year ago it was less than 200 gigs, and now we’re up to 350 gigs. I can’t keep as many copies of the database online, as many backups online, as you would like.” As we move forward to get new storage, because they’re running out and we’re about to play the shell game here pretty soon, I can tell them, “If we want to keep this many days of backups online, here’s how much space we need.” They have to make that decision of what’s important for them.
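
Backup history lives in msdb, so a trend like the one Erin gives this customer can come from a query along these lines. This is a sketch; msdb.dbo.backupset is the standard backup history table, and backup_size is in bytes.

-- Average and maximum full backup size per database, per month
SELECT
    bs.database_name,
    DATEFROMPARTS(YEAR(bs.backup_finish_date), MONTH(bs.backup_finish_date), 1) AS BackupMonth,
    CAST(AVG(bs.backup_size) / 1024 / 1024 / 1024 AS DECIMAL(10, 2)) AS AvgBackupSizeGB,
    CAST(MAX(bs.backup_size) / 1024 / 1024 / 1024 AS DECIMAL(10, 2)) AS MaxBackupSizeGB
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'   -- 'D' = full backup; 'I' = differential, 'L' = log
GROUP BY bs.database_name,
         DATEFROMPARTS(YEAR(bs.backup_finish_date), MONTH(bs.backup_finish_date), 1)
ORDER BY bs.database_name, BackupMonth;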

Carlos: I would think that information, as you mentioned, you were just talking to the owners there, would help with the SAN admins as well. You can say, “Look, when I originally asked for the space a year ago, my backups were X. Now they’re Y.” You can show them. Having that data gets you to, “OK, I get it now.” Again, you’re able to prove your point, rather than it sounding like, “I’m too lazy to do maintenance on my backups.”

Erin: Having that data to back up either how much space you need for your databases in general or for your backups is fabulous. You never know when your lucky day might come along, where the SAN admin or the storage admin comes over and says, “Hey, we’re getting new storage,” and you’re like, “Score.” They’re like, “How much space do you need?” They never ask about what performance you need, what you need in terms of latency; they ask how much space you need. When you say, “You know what? I need three terabytes,” they’re going to be like, “Dude, right now you’re only using 1.5. Why would I give you three?”

You can come back with data, and you can say, “Well, look, we’re trending here. This is what it looks like. This is why I need three.” Anytime you’ve got that data to back it up, you have a much more valid case when you’re asking for something.

Carlos: No question, no question. Much harder to go against that.

Erin: Right.

Carlos: Another option. Integrity checks are very important. You are getting this at the database level.

Erin: Capturing the run times you mean?

Carlos: Yeah, you’re capturing the run times of your integrity checks.

Erin: Exactly. Typically, for our integrity checks, we’re running those using Ola Hallengren’s scripts, which I recommend to everyone and which we use for any of our clients, in terms of running your maintenance: your backups, your integrity checks, and your index and statistics maintenance. With his job, by default, it’s running the integrity check for all user databases. We don’t get the checks broken out at the individual database level. If I wanted to do that, I would need to set up a different job for that.

You can still see that, though. You can get a pretty good estimate by looking at the error log to see how long each one takes. In general, it’s great to know how long the integrity check takes for a database, because if you run into an issue and you’ve got to run one, someone is going to be asking you, “When are you going to know?”

“How long is this going to take and when are you going to know?” If it takes longer than normal, then that’s a red flag and you’ll be able to say that. You can say, “Look, it normally takes an hour for this database. It’s taking an hour and a half.” That usually means it found something. We need to let it finish.
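
As a sketch of where those run times can come from: CHECKDB writes a completion message, including elapsed time, to the error log, and if Ola Hallengren’s solution is logging to a table, dbo.CommandLog has start and end times as well (that part assumes @LogToTable was enabled when the jobs were set up).

-- Search the current error log for CHECKDB completion messages (they include elapsed time)
EXEC master.dbo.xp_readerrorlog 0, 1, N'DBCC CHECKDB', NULL;

-- If Ola Hallengren's solution logs to a table, durations are in dbo.CommandLog
SELECT
    DatabaseName,
    StartTime,
    EndTime,
    DATEDIFF(MINUTE, StartTime, EndTime) AS DurationMinutes
FROM dbo.CommandLog
WHERE CommandType = 'DBCC_CHECKDB'
ORDER BY StartTime DESC;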

Carlos: Right. Interesting, good point.

Erin: Always let that check finish. There are no ifs, ands, or buts about it. You’re never going to know what’s wrong unless you let that check finish.

Carlos: Let it finish, yes. Being able to provide that answer for, “How much longer will it take,” just puts you, again, in the driver’s seat. It gives them a little bit of comfort so that, in theory, they’ll leave you alone or give you a little more leeway to finish what it is that you want to do.

Erin: Right.

Carlos: The last piece I want to talk about is maintaining your log. Ultimately, you talk about taking a peek into your log. Now there are some PowerShell scripts that can go and do that for you if you’re looking for something specific. What are your thoughts around what it is that you look for in your log?

Erin: The error log, you mean?

Carlos: The error log, yes, sorry.

Erin: No, that’s OK. I just want to make sure we’re talking about the same one. I don’t think that I have any scripts that interrogate it directly. I have a script on sqlperformance.com about using the error log proactively, looking for things. Part of the audit, anytime we do an audit, whether it’s a health check or whether it’s part of our monthly audit, I am looking in the log to see what’s interesting there. There’s a lot of great stuff in there: when people use trace flags, when people change the memory allocation. One of my favorite things is, if you’re running in a virtual environment, sometimes the VM administrators will make a change to your VM configuration. You may or may not be notified.

They may just say, “Hey, look, we’re doing some maintenance for the VM and you’re going to experience an outage.” It’s really good to go back and look to see that your memory allocation is still the same and that your cores are still the same. I also look for lock pages in memory, SPN information, and instant file initialization. You have to do that through a trace flag.

What SQL traces are being run, if any, when CHECKDB finished, whether anything was changed at the instance level or the database level. There’s all kinds of fantastic information you can get from the log. I admit I don’t have anything that queries that. You don’t have to use PowerShell. You can use CSQL. I typically tend to look at it by hand, just looking for errors in there, looking for issues.

I always use trace flag 3226, which disables the writing of successful backup messages, whether full backups or transaction log backups, to the error log, because that’s just information. It’s not errors. I really want to see only errors in the log.
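
For reference, enabling that trace flag on a running instance is a one-line command; making it stick across restarts means adding it as a startup parameter. A sketch:

-- Suppress successful-backup messages in the error log for the running instance
DBCC TRACEON (3226, -1);

-- Confirm which trace flags are currently enabled globally
DBCC TRACESTATUS (-1);

-- To persist across restarts, add -T3226 as a startup parameter in SQL Server Configuration Manager.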

Carlos: Right. People logging successful logins as well, that’s a pet peeve of mine.

Erin: Right, that’s so hard.

Carlos: I’m like, “Really? Why do you have to keep this log?”

Erin: Right. For some people, they have to do it for compliance reasons. I’m saying, “Well, you should be using server audits.” That’s a better way to track that information than through the error log.

Carlos: Exactly. How long do you normally keep your error logs around? Grooming or maintenance thoughts around that.

Erin: I cycle that usually weekly. You can keep up to 99. That’s a lot; I don’t need that many. I like to keep about two months’ worth, just because you never know. I usually have about 60 files. I know some people only tend to keep a month’s worth. I don’t like to cycle them every day, because I don’t know that in most environments you need that, especially if you’re not writing any of the backup information there. A week, to me, is what I generally implement. Again, keeping two months online.
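
A sketch of how that cycle and retention are typically set up. The file count below is a placeholder, and the registry write is one common way to change the number of error logs retained; it can also be configured in Management Studio.

-- Cycle the error log; schedule this in a weekly SQL Server Agent job
EXEC sp_cycle_errorlog;

-- Raise the number of error log files retained (default is 6, maximum is 99)
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs',
    REG_DWORD,
    9;   -- placeholder: roughly two months of weekly logs plus the current one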

Carlos: I like the day, mostly because inevitably somebody will come to me, “Hey, last Friday I experienced this.” “Oh, man, really?” [laughs] I have to take a peek at it. At least I can say, “It was Friday? OK, now I know which log to look at.” I get a little bit of filtering that way. Interesting.

Erin, always a great conversation. We do appreciate this information. Of course, we’re going to have the links that you mentioned in the show notes at sqldatapartners.com/podcast. Compañeros, you can check that out there. Again, I’ll mention SQL Cruise.

Erin, are you a SQL Cruiser?

Erin: I went on the very first SQL Cruise, yes.

Carlos: Look at that, the very first one.

Erin: The very first one back in, oh, geez, 2010. Is that when that was?

Carlos: Wow.

Erin: That seems like forever ago. It was a game changer for me.

Carlos: Yes, I’m a big fan as well. Tim and Amy Ford have put together a great training opportunity. They’re allowing us to give $100 off the price of admission. You can check out sqldatapartners.com/sqlcruise for details about that and get on board. I think you will have missed the 2016 voyage this year, but there’s always next year.

Erin: Right. The first one, I don’t know if there’s one or two this year, but I know the first one is coming up in a couple weeks. My friend Jes Borland will be there for some of the training. It’s one of those things. It’s not necessarily what everyone thinks of in terms of training. It’s more than just technical training. There’s a lot of professional, not training, but mentoring and networking that goes on there that you don’t find at any conference, at any SQL Saturday. It’s just you and that crew. You create a great bond during that trip. Again, technical knowledge, yes, definitely, but it’s so much more. So many of those folks from that cruise I still keep in touch with, I’m really good friends with, and I love to see, even though we’re all doing way different things than we were then. It’s really cool to see how far people have come since going on the cruise.

Carlos: I was talking with Melody Zacharias. She mentioned, “The technical components you can get on the Internet on your off time. Those people’s time, you can’t get back.”

Erin: Right. The ability to ask those questions during that session and talk about it as a group. At a SQL Saturday, or even a user group where you have smaller crowds, that doesn’t happen. You don’t raise your hand and say, “Look, here’s something that I’ve seen. What would you do?” and have someone else chime in, “We had something really similar, and this is what we did.” You don’t get that same kind of interaction at a conference or anything else. On the cruise, you do. Even though you’ve got particular time frames where you’re supposed to be talking about a topic, if it goes beyond that, you all keep going. We had this deck that we basically took over the whole time we were there. We’d all head back to the deck after class and we’d grab something to drink or to eat. We’d sit there and continue talking.

Carlos: Hash it out.

Erin: Exactly.

Carlos: Very good. Erin, we’ve arrived at the portion of the program I call SQL Family. [laughter]

Carlos: Here we get to know a little bit more about you and some of your work experience. One of the things we like to talk about is your favorite SQL tool. What’s your favorite tool? Why do you like it? How do you use it?

Erin: I mentioned Ola Hallengren’s scripts. That’s one. I have two. That’s one of them. Then Adam Machanic’s sp_WhoIsActive stored procedure. Those are two things that have been staples for me for a very long time. In fact, in that first baseline session, I ended by demoing Adam’s WhoIsActive. It was awesome. I spent time with him at either the 2010 or the 2011 Summit, where he was talking about different ways to use it. He hadn’t yet written that sp_WhoIsActive post-a-day series that he did about how to use it. He gave me some awesome tips. I incorporated those. The last 10 to 15 minutes of that session was spent using that.

One of my favorite things to do with that tool is to snapshot that information to a table. Kendra Little has a great blog on how you actually do that. I remember finding that blog, being like, “Oh, my goodness,” and pulling that out.

I have people use that all the time to capture information. I remember Michelle Ufford, who works at GoDaddy, telling me that she would use that. She’d snapshot information to a table and then if something happened, she could go in and look at that. Typically if something happened in her environment, it meant that something ran for longer than five minutes.

She was doing a lot of data warehouse and big data stuff then. She would retain that information. If someone said something to her, she had that and went back to it. If I’m troubleshooting, I might snapshot it more quickly. That’s one of my favorite things you can do with that tool. Ola’s stuff makes management of those database maintenance tasks so much easier.
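
For reference, sp_WhoIsActive supports this pattern directly: the @return_schema parameter generates table DDL that matches the options you choose, and @destination_table appends each run’s results to that table. A sketch, with a hypothetical table name:

-- Generate the CREATE TABLE statement that matches the options you plan to use
DECLARE @schema NVARCHAR(MAX);
EXEC dbo.sp_WhoIsActive
    @get_plans = 1,
    @return_schema = 1,
    @schema = @schema OUTPUT;

-- Replace the placeholder with a real name and create the table once
SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActiveSnapshots');
EXEC (@schema);

-- A scheduled job (hourly, or every few minutes while troubleshooting) then appends snapshots
EXEC dbo.sp_WhoIsActive
    @get_plans = 1,
    @destination_table = 'dbo.WhoIsActiveSnapshots';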

Carlos: So much easier. I agree. On sp_WhoIsActive, how often are you snapshotting that?

Erin: Depends on what the issue is.

Carlos: This is while you’re having the issue. You’ll go ahead and initiate that and say, “Start taking this every hour.”

Erin: Right.

Carlos: Very good. Yes, he was kind enough to be on the show earlier. Very interesting stuff with that. Eight years, going on nine years, that it’s been around. It’s hard to believe.

Erin: Crazy, and they’re both free.

Carlos: That’s right, even better. Now you’re with SQLskills. Before you’d mentioned you were with another organization. Lots of SQL installations. You’ve had some diverse experience. What’s one of the best pieces of career advice you’ve received along the way?

Erin: That’s a really good question. I have to say one of the best pieces I got was from Brent Ozar. Way back when I did the first cruise, it was Brent and Tim. Brent was on that cruise. At some point after that, Brent did one of his FreeCons. I remember him talking about how much time you spend at work. If you hear any noise in the background, that’s my dog. I apologize. Hopefully, she won’t start barking.

Anyway, he was talking about how much time you spend at work and the overtime beyond that 40 hours. He said, “If you’re spending more than 40 hours at work, what are you getting from that? Why are you doing that to get a three percent or a five percent raise?”

It made me stop and think. I’m like, “Oh, that’s a really good question.” At that point, I stopped. I continued to work hard for my company and work my 40 hours per week, but the extra time that I was working I turned into my personal growth.

That’s when I started writing the blog. That’s when I started spending my own time understanding how SQL Server works. It was hard for my family to understand that at first, because if you’re working more than 40 hours, you can say, “Oh, it’s work. I have to get this done. I’m supposed to get this done.”

When you voluntarily choose to spend that time doing something that looks like work but isn’t, and it’s for you, I had to say, “Look, this is for my career. This isn’t for my career at my job. This is for my career as a whole.”

That was a suggestion, a piece of advice that, I think, once I did that, again, helped me get to where I am now.

Carlos: Very cool. Yeah. I think you are your biggest investment, and you should invest in yourself.

Erin: Exactly.

Carlos: Speaking of investing, our next question. I’m trying this as a new question that I’ve just introduced here.

Erin: Sure.

Carlos: You’ve just inherited a million dollars. What are you going to do with it?

Erin: Invest it. Pay off my house.

Carlos: [laughs] Hey, there you go. That’s one big house there, Erin. [laughter]

Erin: I’m ridiculously practical. The biggest debt that I have is the house. I would pay that off, take the rest of it and invest it, which, I’m really not going to have that much left after taxes. Not a ton would change, but it would be nice to get that out of the monthly rotation.

Carlos: There you go. That’s right. I hear you. I’m with you on that one. [laughter]

Carlos: Our last but not least question. If you could have one superhero power, what would it be and why would you want it?

Erin: [laughs] There’s so many to choose from. I don’t know if you consider Apparition from “Harry Potter” a superpower.

Carlos: Of course.

Erin: I really think that’s cool. I’d really like to just be a wizard and have a wand and be able to do all that stuff.

Carlos: That sounds like more than one super power.

Erin: Yeah, I know.

Carlos: I would take the Apparition. [laughter]

Erin: You’re going to cut me off there. That’s fine. I get it. I got it.

Carlos: No, very good. Again, Erin, thanks for being here. I’ve enjoyed the conversation. I think Compañeros are going to get some value out of it as well.

Erin: Yeah, thank you. Thanks for having me. I appreciate it.

Carlos: Compañeros, again, sqldatapartners.com/podcast for the show notes today. If you found today’s episode interesting, I invite you to leave a comment on iTunes or Stitcher. Of course, you can check out [dropped audio]. You can contact me on Twitter. I’m @carloslchacon. I’ll see you on the SQL trail.
