Episode 78: Conversion to SQL Server 2016

As database administrators, we will all upgrade our environments at some point; however, we don't normally have the opportunity to upgrade to the next version of SQL Server before it becomes available to everyone else. In this week's episode of the podcast, Steve and I chat with Brian Carrig about the journey ChannelAdvisor took to implement SQL Server 2016 in their environment, what it was like working with the SQLCAT team, and how they go about making use of some of the new features. Brian shares some of the struggles they were having along with how the 2016 version helped address these issues.

 Episode Quote

“SQL 2016 is probably the version of SQL Server that has had the most attention paid to performance improvements in quite some time, probably since the 2005 release. I won't say they promise it, but it's not atypical to get 20% performance gains just right out of the gate.”

Listen to Learn

  • What it was like to work with the SQLCAT team
  • The features ChannelAdvisor was after, and how they went about using them
  • How the in-memory options help with bursting
  • Why Brian had to create a Connect item and what it deals with

Brian on Twitter
bwin in-memory OLTP whitepaper
Using memory optimized table variables
Memory Estimates for memory-optimized tables
Connect item to store Query Store in a separate filegroup
SQL Server setup checklist on GitHub

About Brian Carrig

Brian Carrig is a Microsoft Certified Master of SQL Server and manages a team of talented DBAs at leading e-commerce cloud solutions provider ChannelAdvisor. In a previous life, Brian spent some time as an academic and holds a PhD in Computer Science. He is a native of Dublin, Ireland but now lives with his wife and two daughters in Cary, North Carolina.


*Untranscribed introductory portion*

Carlos: So welcome compañeros, this is Episode 78. I am Carlos L. Chacon.

Steve: And I am Steve Stedman. And today’s guest is Brian Carrig.

Carlos: Yeah, Brian is the DBA Manager over at ChannelAdvisor. He is currently living in Raleigh, North Carolina, but you'll notice an accent; I know that he is not from that area. It's great to have him, and we'll be chatting with him a little bit about the 2016 features and some of the issues, or issues/challenges perhaps, experiences that they've had in rolling 2016 out. They were actually a preview customer, so they had access to it before it was released. And so we'll be going through that. A couple of things we wanted to talk about first: I wanted to remind everyone that our SQL Server checklist is out on GitHub, and if you're interested in contributing to that we'd love to have you. Again, that's an opportunity there for you to contribute, give your ideas and two cents, and make it a little more available for others as things continue to change. We'd love to get your experiences there as well.

Steve: Yup, and you know what I think is nice about that is, if we get more input on that and keep updating it, we'll be able to put that checklist out maybe every couple of months or once a quarter with everyone's suggestions. And perhaps build one of the best SQL Server setup checklists out there.

Carlos: There you go. Yeah, it's interesting. There are still lots of uses for it; even though people aren't necessarily installing SQL Server, it's still something to reference as they're going in and reviewing environments. Just kind of, "Ok, am I set up correctly?"

Steve: Absolutely. And I've found that a lot of those setup items get missed when a SQL Server initially gets built. The people who set it up are not always the ones who know the right things to do for the SQL configuration.

Carlos: That’s right. Ok, so with that let’s go ahead and welcome Brian to the show. Brian, welcome!

Brian: Alright, thanks!

Steve: Yeah, Brian, good to have you on the show.

Brian: Yeah, good to be here.

Carlos: Yes, thanks for coming on. I know we're all a little bit under the weather. The winter weather has come and is at its worst, so thanks for taking a little bit of time with us. So ultimately our conversation today is about your experience in migrating to SQL Server 2016, what that experience was like, and some of the challenges that you had. So I guess, first, set us up with that story. What is ChannelAdvisor doing? Why did you feel the need to upgrade to SQL Server 2016 before it was released?

Brian: Ok. Well, just to give some background, I guess not everybody will know who ChannelAdvisor are and what we do. We're an e-commerce software solution provider. We've been an e-commerce software solution since 2001, since probably before the name existed. What we do is we help retailers and brands to sell online, right, on marketplaces like eBay, Amazon, Walmart, Jet. And there are lots of different, there are actually hundreds, possibly thousands of these marketplaces throughout the globe. Basically, we help companies to sell across these marketplaces, manage their inventory and all that kind of stuff so that they don't have to worry about writing APIs for the various marketplaces. They can just interface with us and we will manage their inventory, manage their online sales, digital marketing, all of that. And so we power about, I think, 3,000 retailers.

Carlos: Oh wow!

Brian: Yeah, with some pretty big customers: Macy's, Samsung, Under Armour, Staples. Yeah, there are quite a lot of big names in there. Before I moved to the US, you mentioned my accent, when I go home to Ireland people say I've lost all my accent. Apparently not.

Carlos: Yes. Isn't that the case. My dad is the same way. My dad is from Costa Rica. He came here when he was 15, lived here for 30-some-odd years. He goes back to Costa Rica now and people are like, "Is Spanish your first language?" He's like, "Yup."

Brian: Well, when I go home they say I sound like an American now.

Carlos:  You’ve been Americanized.

Brian: Yes. So we recently just went through what we call the Cyber Five, right at Thanksgiving, that whole weekend through Cyber Monday. And during that weekend last year we were close to a quarter of a billion dollars in sales. I won't say this year's number because I believe the results are not actually official yet; I don't want to get in trouble with the SEC. But suffice it to say we do a lot of processing during that weekend, and that's kind of what drives our use of SQL 2016, or why we wanted to move to SQL 2016.

Carlos: So that’s your peak usage time?

Brian: Yes. Yes.

Steve: So then, given that that's the peak usage time, I would assume that probably in July or August you're really starting to ramp up for that. Is that about the right timeframe?

Brian: Yeah. Generally, we're trying to get the infrastructure in place probably around July or August and kind of gearing up then for the load in the 4th quarter. And then, based on just natural growth, we might see a bit of a dip in Q1 of 2017, say. And then we start to ramp up just from natural growth, and we kind of get ahead of that. And SQL 2016 is probably the version of SQL Server that has had the most attention paid to performance improvements in quite some time, probably since the 2005 release. I won't say they promise it, but it's not atypical to get 20% performance gains just right out of the gate. So that's what interested us. We're quite sensitive to cost, even though we have quite a large SQL infrastructure, because we make money when our customers do. So if we just go crazy and expand and deploy a huge amount of infrastructure, we can impact our own profitability.

Carlos: Sure. Now, does any of that involve cloud technologies, or is it still on-premises?

Brian: Pretty much everything we do is on-premises. We do have a footprint in AWS. We have some customers for whom we run SQL Server in AWS, because we have geographic restrictions on where their data can reside, so we leverage AWS's data center in Ireland, actually, to run their workload.

Carlos: Ok, got you. So you were an early adopter before it was available. I guess, what was that process like?

Brian: It was fantastic. I can't compliment the SQL CAT team enough. It was a real eye-opening experience. I have had terrible experiences with Microsoft support prior to this, but this was another level entirely. So yeah, it was really enjoyable.

Carlos: Well, so you mentioned you were looking for a 20% increase; that's what the marketing was telling you, right? So you get it in. You get the SQL Server 2016 version. How do you go about testing that and kicking the tires to see what it's going to do for your environment?

Brian: Right, that is difficult, right? Currently, we have 118 SQL Server instances in production. I think it's close to 3,000 databases, and our environment peaks at about 1.6 million transactions per second. It's really, really hard to get a dev environment that reflects that. But you know, we do what we can. In particular, we knew we had some problem areas, and one of them was In-Memory OLTP. We've been using that since SQL 2014. While the performance gains were incredible, the feature was kind of limited in terms of what data types and SQL syntax you could use. We also had a lot of stability issues with In-Memory OLTP, particularly around the checkpoint engine. So we had cases, sorry, go ahead.

Steve: Oh no, I was just going to ask, what type of issues did you run into with that? That's interesting.

Brian: Oh, ok. I think I found the Connect item on this, or somebody did. It was fixed in a hotfix. But we had an issue where, if you ran out of transaction log space, so basically your transaction log filled, even before autogrowth could kick in, the checkpoint thread for the In-Memory OLTP engine would die and fail to respond. So the only way you could get that process to restart was to take your database offline and bring it back online, which in highly transactional environments is problematic.

Steve: And given that the checkpoint files are really the only place that new data is going to exist, I could see how big of an issue that might be.

Brian: Yeah, so the first time it happened we did not notice, and the transaction log on the server was 600 or 700 gigabytes, and it did not get truncated until it filled because of this. We would basically have a full transaction log that was 600 gigabytes in size. It ended up being quicker to restore the database from backup rather than taking the database offline and having it restart and go through crash recovery.

Carlos: Oh wow!

Brian: Yeah. That's a lot of log to roll forward.

Carlos: Yeah, exactly, a scenario you see every day. Now, was that on the 2016 environment or was that 2014?

Brian: That was on the 2014 environment. They actually fixed it in a service pack for 2014, but I guess that was the point at which we were kind of looking around and saying, you know.

Carlos: Ok, we need something different.

Brian: Yeah, we had heard that they had rewritten a lot of the engine for In-Memory OLTP for SQL 2016. So that really prompted us, apart from the performance gains. That was another thing that kind of prompted us to go, "Hey, we'd really like to get onto 2016 as quickly as possible."

Carlos: Ok, so then, were there any other changes to the In-Memory objects, if you will? Or was it really just, "I need a new version"?

Brian: There was the stability side, but also the availability of constraints. I think that using In-Memory OLTP has become a lot more viable now that constraints are available, in terms of foreign keys, and default constraints, and unique constraints.

Carlos: Now, that reminds me, so the foreign keys on the In-Memory objects. Can I have a foreign key from an In-Memory table to a disk, what are they calling them? What do we call them, non-In-Memory tables?

Brian: Disk based tables.

Carlos: Disk based tables. Yeah.

Brian:  I actually do not know. I have not tried that.

Carlos:  But you’re talking about foreign keys between two In-Memory tables?

Brian:  Yes.

Carlos: Oh yeah, ok. I wasn't sure. I didn't think that was an option, and I thought, ok, maybe that had changed. So then, I guess, can you tell us how many In-Memory tables you actually have?

Brian: Right now? I want to say 3 or 4. So it’s not extensive. It could be more. Kevin designed a lot of them so he’ll probably correct me after this confessional.

Carlos: Another excuse to get Kevin back on the podcast.

Brian: There you go. He can come back and explain everywhere I was wrong. I want to say 3 or 4 of the regular In-Memory tables in the database. But what we're using extensively now, and I don't know if enough people know about this, is that you can create a table type that is memory optimized and use it as a table variable. Either as a table variable in a procedure or, you know, if you use TVPs, you can pass it in as a TVP. We've seen some incredible performance gains from this. And it's really simple to implement. I think I have an example where we reduced the CPU consumed by a procedure by about 75%.

Carlos: So when you're doing it that way, basically you're kind of skipping TempDB for the type of jobs or the type of tables that would normally go into TempDB, and you're doing it as a memory-only, In-Memory OLTP table. Is that right?

Brian: That's right. It doesn't touch TempDB at all. So what we found after we deployed SQL 2016, and we got those kinds of performance gains that we mentioned, is that, like always, you're just shifting the bottleneck somewhere else. Where we started to see some stress was contention in TempDB around reading and writing from the system tables. And the way we looked at reducing that was basically to take some load off of TempDB by using memory-optimized TVPs.
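
For reference, here is a minimal sketch of the pattern Brian describes; the type, column, and index names are illustrative, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup:

```sql
-- Memory-optimized table types must declare at least one index.
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId int NOT NULL,
    INDEX IX_OrderId NONCLUSTERED (OrderId)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Used as a table variable, it lives in the In-Memory OLTP engine and never
-- touches TempDB; it can also be passed to a procedure as a READONLY TVP.
DECLARE @Orders dbo.OrderIdList;
INSERT INTO @Orders (OrderId) VALUES (1), (2), (3);
SELECT OrderId FROM @Orders;
```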

Carlos: Always the way with big systems. How do you take into account the memory considerations now? Because obviously one of the things with memory-optimized objects is that you have to account for that in memory now. Previous to that it was just, "Ah, we'll allocate, you know, 85% or whatever the percentage was of the memory to SQL Server and be done with it." Now we have to carve that up a little bit. Can you talk to us about how you decided to do that, or what your considerations were?

Brian: Yeah, certainly. So if you read Books Online, they say for your fixed tables, right, not temporary objects, that you should allocate about 3x the size of the table. But really what it depends on is what your frequency of insertion and deletion from that table is. So basically, how much work does the background garbage collection process need to do, and how much overhead do you need to handle all the row versions, essentially. And typically, if you're looking at In-Memory, you probably have a really high frequency of insert and delete and update type operations.

Carlos: Right, that’s why you’re using it, right?

Brian: Right, it wouldn't be a problem if your table wasn't heavily accessed; you wouldn't be looking at this. And also, you know, concurrency, because the more sessions that are doing this, the more versions will be held in memory. So we have found that in cases you might need to allocate as much as 10x. Yeah, and we talked to a company called bwin. These are like the pioneers of In-Memory OLTP. They've worked a lot with SQL CAT. They've got some great white papers out there about using In-Memory OLTP to really boost your performance and the number of transactions that you can support, and they had similar experiences.

Carlos: I admit I don't have extensive experience with the In-Memory objects, but the recommendation is to plan for 3x and you're seeing as much as 10x. Is it the DMVs, or what are you using to monitor how much memory the tables are actually taking up?

Brian: So I use Resource Governor, and there are also DMVs that will tell you how much memory it is using. And one of the tricks that you can do, and I recommend doing this because in a lot of cases your big concern is that you will run out of memory and cause the server to crash: if you have just memory-optimized objects in one database, you can use Resource Governor to bind that database to a resource pool and fix the amount of memory that it can use. So if you run out of memory that you've allocated to that database, queries will fail, but your server

Carlos: Will stay up.
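
A rough sketch of the technique Brian describes; the pool name, database name, and the 40% cap are illustrative:

```sql
-- Cap the memory available to In-Memory OLTP objects in one database.
CREATE RESOURCE POOL InMemoryPool WITH (MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Bind the database that holds the memory-optimized objects to the pool.
EXEC sys.sp_xtp_bind_db_resource_pool
    @database_name = N'SalesDB', @pool_name = N'InMemoryPool';

-- The binding takes effect the next time the database comes online.
ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesDB SET ONLINE;

-- One of the DMVs that reports per-table memory use (run in the user database).
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_allocated_for_table_kb, memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats;
```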

Steve: That's a very creative approach there. I think oftentimes Resource Governor gets downplayed a little bit as not being that useful. But that sounds like a great way to be using it.

Brian: I love Resource Governor. There are not many people that use it, but we use it extensively here.

Carlos: Interesting, that might be another topic, because I agree. It seems that Resource Governor is kind of on the wayside, but scenarios like that do make it very appealing.

Brian: Yeah. We have a lot of different product teams that often use the same database infrastructure. We use Resource Governor sometimes without doing any actual limiting or governing of resources. What it does is it splits your plan cache. So if you happen to have teams that use procedures in different ways, they get their own plan cache, and you can reduce parameter sniffing issues.
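
A hedged sketch of that pattern: pools and workload groups with default settings, so nothing is actually limited, plus a classifier function to route sessions. The group names and the login-based routing rule are hypothetical:

```sql
-- Two pools/groups with default settings: separation only, no governing.
CREATE RESOURCE POOL TeamAPool;
CREATE RESOURCE POOL TeamBPool;
CREATE WORKLOAD GROUP TeamAGroup USING TeamAPool;
CREATE WORKLOAD GROUP TeamBGroup USING TeamBPool;
GO

-- The classifier must live in master and be schema-bound.
USE master;
GO
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    -- Hypothetical routing rule: by login name.
    IF SUSER_SNAME() = N'team_a_login' RETURN N'TeamAGroup';
    IF SUSER_SNAME() = N'team_b_login' RETURN N'TeamBGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```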

Steve: Oh, interesting. So another question around In-Memory OLTP, and this is a concern that I came across as I've been exploring it, and it probably comes from my background with database corruption and CHECKDB and things like that. CHECKDB and CHECKTABLE won't check In-Memory OLTP tables. Is that something that you've run across or had problems with?

Brian: We have not encountered any problems with that. Yeah, it's definitely a concern. I mean, the tables that we use for In-Memory OLTP can largely be thought of as staging tables, ETL processes. And obviously we extensively use the memory-optimized TVPs. And just to talk about that, hopefully not to frighten some of your listeners too much: if you're just using the memory-optimized TVPs, so these are all temporary objects, you probably won't even notice the amount of memory that's being consumed by In-Memory OLTP, because they're short-lived.

Carlos: Yeah, exactly, it would be almost like temp tables, right, in a sense.

Brian: Yes. But no, we haven't encountered any problems of that type. We have encountered corruption on our disk-based tables, but nothing in the In-Memory OLTP space yet.

Steve: Well, ok, that’s good news on the In-Memory OLTP side.

Carlos: Yeah. So I guess that's maybe an interesting point that could take us off on a tangent. But how could corruption occur there? Corruption on disk happens, you know, I guess a bit gets flipped there. But as long as the database knows what records are there, how could corruption occur? Your memory would have to become corrupt; it would then be more of a hardware issue.

Steve: Yeah, you mean with the In-Memory tables, Carlos, right?

Carlos: With the In-Memory tables, yeah.

Steve: Yes, so the thing I've seen, and experimented with a little bit, is if you have corruption in one of the checkpoint files, where all of the changes to that In-Memory OLTP table are tracked. If one of those is corrupt, you're not going to know it until you try to back the database up, either with a full or differential backup. At that point your backup is going to fail, and it's usually a checksum error that you'll see. And then you can't actually back up the database, and the only thing you can do at that point is go to a previous backup version of that database.

Carlos: Do you have anything else you want to add there, Brian?

Brian: I mean, the checkpoint files are really just used for recovery purposes, right, and also for the backups. So in that scenario, where you've found that the backup has failed and you have some kind of storage corruption, presumably your actual data will be fine, right?

Steve: Well, if you're pulling that data from another table that is a disk-based table, yeah, it will be fine. But if it's going into an In-Memory OLTP table, that data may only exist in that checkpoint file until it gets backed up.

Brian: Oh, ok.

Carlos: Got you. Yeah, crazy.

Steve: Yup. So In-Memory OLTP is definitely a huge performance gain there. And it sounds like that's what you're using it for and you're definitely seeing those results.

Brian: Yes. I mean, I think our plans for 2017 definitely involve more In-Memory OLTP.

Carlos: But that’s not the only feature that you’re using in 2016?

Brian: No, another feature that we’re using extensively is Query Store.

Carlos: Right, and this made waves recently because they enabled it by default in Azure SQL Database.

Brian: Oh, that’s right. Yeah, that’s right.

Carlos: So maybe, you know, who knows, in SP2 it might become enabled by default. This is pure conjecture; this is not an announcement. But I could see that coming in future versions. Currently it's not.

Steve: Yeah, and even during the preview program they were changing the defaults almost on a monthly basis. I don't know if anyone was paying a lot of attention to the CTPs, but almost each month they kind of got tweaked.

Carlos: Yeah, I know when I talked with Borko, you know, the PM, about that, they were looking at those usage patterns. They wanted to get that feedback, see how people were using it, and then be more responsive, if you will.

Brian: Yeah, for us, because we have a lot of ad hoc queries, the switch to clean up queries that are not frequently used was important.

Carlos: Ok, so I guess we should quickly review what the Query Store option is. From my perspective, it's basically tables inside the same database that keep a record, or a history, of all the executions that have happened on your database for further analysis. Is that fair?

Brian: Yeah, I think that’s fair. I think Borko describes it as a flight recorder for your database. And I think that’s a really good description.

Carlos: Yes, that’s right.

Brian: So it keeps a record of all of the queries as they execute and the plans that they use. The primary use case is probably that, right, looking for plan changes that are problematic. But there's a lot of rich data in there that you can use for other things as well. Even cases where queries are aborted, or cancelled, or timed out for some reason, that's all data you can pull out of there.
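
For example, a query along these lines pulls executions that did not finish normally out of the Query Store catalog views; the filter follows the documented execution_type codes:

```sql
-- Executions that ended abnormally: 3 = client-initiated abort, 4 = exception.
SELECT qt.query_sql_text,
       rs.execution_type_desc,
       rs.count_executions,
       rs.last_execution_time
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan AS p        ON p.plan_id = rs.plan_id
JOIN sys.query_store_query AS q       ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
WHERE rs.execution_type <> 0
ORDER BY rs.last_execution_time DESC;
```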

Carlos: Right, I know, from the DMV perspective you can only see either cumulative totals or the last execution counts. And so in that scenario where, "Oh, this morning I was having a problem but now it seems fine," there's not much that you can do about it now.

Brian: Yeah, so prior to Query Store we would typically run Adam Machanic's sp_whoisactive every minute and dump that to a table. So we would have some kind of history of what was executing when a developer or a customer comes to us and says, "There was an issue an hour ago, or two hours ago. What happened?" Right, otherwise it's really hard to tell. So that's what we used that for, and Query Store is kind of like a more enhanced version of that.
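
The pattern Brian mentions might look roughly like this, assuming sp_WhoIsActive is installed; the destination table name is illustrative, and the second call would typically run from a SQL Agent job on a one-minute schedule:

```sql
-- One-time setup: have sp_WhoIsActive emit the table definition it expects.
DECLARE @schema varchar(max);
EXEC dbo.sp_WhoIsActive @return_schema = 1, @schema = @schema OUTPUT;
SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActiveHistory');
EXEC (@schema);

-- Scheduled every minute: append the current activity snapshot.
EXEC dbo.sp_WhoIsActive @destination_table = 'dbo.WhoIsActiveHistory';
```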

Carlos: Ok, interesting. So you talked about aborted queries that you're starting to look at, or I guess understand better. Are there other things you're getting out of there that maybe you hadn't thought of before?

Brian: We use plan guides, not extensively, but we certainly have areas where we use plan guides, and if anybody has worked with plan guides to force plans, I mean, Query Store is amazing, right? You execute a procedure, you pass in two parameters, and your plan is forced. None of this messing around with huge blocks of XML or trying to capture the correct plan from the cache.

Carlos: Ok, so now you're actually saying I can go back into Query Store, I can capture the plan. I have two plans, one is good, one is bad. And I basically say, "I want to use this one." And that's your plan guide?

Brian: Yeah. It is a force. It’s not, “Hey, would you mind using this one?” You’re forcing it to use this plan.
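
Those two parameters are the query_id and plan_id from Query Store. A minimal sketch, where the search string and the IDs are made up:

```sql
-- Locate the query and its candidate plans.
SELECT q.query_id, p.plan_id, qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p        ON p.query_id = q.query_id
WHERE qt.query_sql_text LIKE N'%OrderSearch%';

-- Force the known-good plan; unforce reverses it.
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 137;
-- EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 137;
```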

Carlos: Sure. Well, that's kind of an interesting option, and I say this only because I was just writing test prep questions, so I've been looking at forcing. Force is kind of an interesting term, because any plan guide is really a force, right? You're telling it, "Hey, you need to use this," instead of what it thinks it's going to use.

Brian: Sure, but you’re not giving the optimizer a choice. So the optimizer is being told, “Here is your plan, execute that.”

Steve: And I think that’s a much more direct and guaranteed way of using plan guides rather than saying here’s the plan guide and here are some suggested hints to use.

Brian: Yeah, certainly what we have found is that we're often using hints to try to get to the plan that we want. As you say, this is more direct. "Here, I know the plan I want, use it."

Carlos:  Right, right, right.

Brian: I mean that should be a pretty rare scenario. Usually the best thing to do is to leave it to the optimizer to figure out the best plan. But there are always those cases that you have to deal with.

Steve: Yeah. I've seen that where, like, if somebody adds a bunch of hints into a query that make the query behave poorly, you can use plan guides to override those bad hints that were given in the original query. And if you can pull that information from the Query Store, that would be extremely useful.

Brian: Yeah, and you can use it to address the same problems that plan guides are largely used to address: cases where you can't edit the query to put hints in, so you would use a plan guide to force the behavior that you want. You can use Query Store in the same way. So if you deal with a lot of ISVs and you've got issues with parameter sniffing, this could be a way to address it.

Carlos: Very good. So the last feature we want to talk about is the column store indexes.

Brian: This is one you're definitely going to have to have Kevin back in for. He is our column store guru.

Carlos: Yes. So one thing I've always wondered, I guess because the example they frequently give for column store indexes is state. Right, and ok, I get it, state makes sense. I'm constantly grouping by that, I'm reporting by that, things of that nature, so that makes it a good fit for a column store index. But outside of that, is it basically looking for common things that I am either grouping by or reporting on? And that's where I want to go for my column store indexes?

Brian: Possibly. I mean, one of the things that maybe gets overlooked a little bit is just how good the compression can be on column store, and a lot of your performance gains might come from that. So, we had a case where we had a 4TB fact table that was reduced to about 600GB in size.

Carlos: Wow!

Brian: By making it, yeah, a clustered column store index. Also, you can generally reduce the number of nonclustered indexes that you need when you basically rearrange your table as a column store.

Carlos: And I guess that’s the new feature in 2016 is I can now create that as a clustered index whereas before I couldn’t.

Brian: Yeah, you can basically mix and match now, so you can have a clustered column store index with nonclustered B-tree indexes. Or, probably less common, you can have a regular B-tree clustered index with nonclustered column store indexes on the table. Yeah, and one of the gains now as well with 2016 is batch mode execution for column store.
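
A sketch of what that mix-and-match looks like on a hypothetical fact table; the table and index names are illustrative, and it assumes the table is a heap (converting an existing clustered index needs DROP_EXISTING):

```sql
-- Rebuild the table as a clustered columnstore index; this is where the
-- heavy compression comes from (e.g. the 4TB-to-600GB case above).
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;

-- New in 2016: a nonclustered B-tree index can coexist on the same table,
-- e.g. to support selective single-row lookups.
CREATE NONCLUSTERED INDEX IX_FactSales_OrderID
    ON dbo.FactSales (OrderID);
```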

Carlos: Right, batch mode meaning being able to update and write to it?

Brian: Yes, it will execute in batches and you can get considerable performance gains there.

Carlos: Right, very cool. Very cool! Well, awesome. It's always good to talk with other folks about why they're doing certain things and how they're doing them. You know, a lot of times even on our podcast we get kind of carried away. Well, I think we always try to dive into why we're doing things, but discussions like this help bring the problem a little bit more to the front, and then how we go about solving it. Should we do SQL Family?

Steve: Yes, let's do SQL Family then. So one of the first questions is about keeping up with technology. How do you keep up with technology and all the changes that are happening, specifically around SQL Server?

Brian: That's kind of easy here because we have so many people working here who are really into SQL Server, so everybody can be everybody else's filter. Everybody is kind of looking at blog posts and so forth. We have a very, very active user group in the Triangle area. So yeah, basically just conversations at work will usually keep you firmly in the loop.

Carlos: Yeah, it's interesting. Brian, you're actually the third person from ChannelAdvisor we've had on the podcast. We've had Mark; Mark Wilkinson was actually the first, and then Kevin, and now yourself. Yes, we run into your folks from time to time, of course, being on the East Coast.

Steve: Yeah, I think you’ve got like, maybe 10 more SQL professionals to go if you want to get the whole set.

Carlos: Only 10? There you go. How did you first get started with SQL Server?

Brian: In a previous life, I was a Linux administrator, mainly working with MySQL. The company I worked for at that time didn't have any in-house developers, so they took on a project that was going to be developed in Delphi with a SQL Server backend, and nobody was around to manage it, so I did.

Carlos: Ok.

Brian: I started from there.

Carlos: Alright, and then if you could change one thing about SQL Server what would it be?

Brian: Apart from the licensing cost? I will say TempDB; I think that's a major bottleneck for SQL Server still. I would definitely like the ability to allocate TempDB on a per-database basis.

Steve: Yup. Alright, so a little bit of background there, Carlos, on this question about changing one thing with SQL Server. Where did that originally come from?

Carlos: Yes, we reached out to the audience and asked them what questions we should be asking, and Brian came up with that one. And it's been a good one; lots of different thoughts there.

Steve: Give credit where credit is due.

Brian: Thank you! I only realized before the show that I didn't necessarily have an answer myself.

Carlos: Yeah, it's much easier to ask the questions than it is to answer them sometimes. What's the best piece of career advice you've ever received?

Brian: Alright, well, I'm going to credit the leader of the user group in Dublin for this piece of career advice, a guy named Bob Duffy. Some of you in the SQL community have probably heard of him. And the advice was that I should sit the Microsoft Certified Master exam. When they announced that it was going to be closed, I kind of assumed my chance was gone. He was the one who really said, no, just go for it. And I kind of squeaked in at the end there.

Carlos: Wow, ok, very good.

Brian: Just days before it finished.

Steve: So if you could have one superhero power, what would it be and why would you want it?

Brian: Ok, I was going to say the amazing power to reduce SQL licensing costs; I don't know if that counts. But actually, now that I think about it, I think flight would be good. I'd fly back over to Ireland, and, you know, when you're paying for that ticket for four people, it kind of adds up. So if I could just fly everybody over with my superhero power, that would work out great for me.

Carlos: There you go.

Steve: Yup. And I will be looking for that new comic book with the superhero that reduces SQL Server licensing costs.

Carlos: Marvel, if you're listening, it was our idea first. Ok, well, Brian, thanks so much for being on the show today.

Brian: Thank you, happy to be on here.

Steve: Yup, thanks Brian. It’s been fun.
