Episode 59: Mirroring, Availability Groups, Replication, and Log shipping


Welcome to episode 59, Compañeros! Have you ever wondered what the best option is for making your data available in your environment? Listener Chris Hendon asked us to talk about the differences between Availability Groups, mirroring, log shipping, and replication. Today on the SQL Trail, Steve and I chat with Andy Mallon and Mariano Kovo about these topics. We cover each topic in detail and explain the positives and negatives of each one. Join us for another great episode.

Episode 59 Quote:
“I would say that some of the best advice I ever got is that the more you screw up, the more you’re going to learn. But, when you screw up, admit it, fix it, ask for help if you need it.” – Andy Mallon on his best piece of career advice

Listen to Learn…

  • How to use Mirroring to save money for your shop
  • The difference between Mirroring and Availability Groups
  • Why you would consider using Mirroring
  • The unique benefit of Replication
  • Why Log Shipping and Disaster Recovery work well together

Show Notes

About Mariano Kovo
Mariano Kovo is an Argentinian SQL Server professional with Precision IT in Argentina. Read his blog (in Spanish) on SQLPass.org. Follow him on Twitter @Mariano_Kovo.

About Andy Mallon
Andy Mallon is a SQL Server DBA. He’s also the co-organizer of SQL Saturday Boston and co-leader for the New England SQL Server User Group. Follow him on Twitter @AMtwo and LinkedIn. His blog The Impatient DBA covers DBA careers, SQL Server, and T-SQL topics.

MSDN: SQL Server Replication
MSDN: Always On Availability Groups
MSDN: Log Shipping
MSDN: Database Mirroring
Log Shipping FAQ
The Impatient DBA
Mariano Kovo: @Mariano_Kovo
Andy Mallon: @AMtwo
Steve Stedman: @SQLEMT
Carlos L Chacon: @CarlosLChacon


Transcription: Data Availability

Carlos Chacon: Gentlemen, thanks for coming on the show today with us. Why don’t we take just a minute and introduce ourselves? Andy, why don’t you go ahead and go first?

Andy Mallon: Sure. I’m Andy Mallon. I’m a database architect at a large management consulting firm. I’ve been using SQL Server now for about 13 or 14 years, starting in tech support and eventually becoming a DBA.

Carlos Chacon: Mariano?

Mariano Kovo:[00:01:00] My name is Mariano Kovo. I’ve worked as a SQL Server database consultant since version [inaudible 00:00:48] five, I think, or older. I worked at Microsoft for three years, here in the Argentina subsidiary, and have basically worked with replication since the first [date 00:01:04].

Carlos Chacon: Very nice. The way we’re approaching this, is we’re each going to talk about one of the different options of Mirroring, Availability Groups, replication, and log shipping. Those are where our focus will be. Then, each one of us will introduce that topic. Then, we’ll get into the pros, cons, differences between each of them. Let’s go ahead and start with Mirroring. Andy, you want to take that one?

Andy Mallon:[00:02:00] Sure. Mirroring has been around for a while. It’s deprecated, starting in SQL Server 2016, but it’s still fully functional, up to and including 2016. It’s not going away any time soon. What it is, is a high-availability option between two servers. It’s database-scoped, so you get everything inside the database, and nothing more, kept in sync between two servers. It can be synchronous or asynchronous. Asynchronous is an Enterprise-only feature, but even without Enterprise edition, you can use it in synchronous mode. You can have automatic failover, or manual failover. It’s a really flexible option for your DR and HA needs. The best part, from a dollars-and-cents perspective, is that if you have Software Assurance, you don’t have to pay to license that second server, if all it’s doing is being a secondary mirroring server.
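For readers following along at home, the mirroring partnership Andy describes boils down to a couple of ALTER DATABASE statements. This is only a sketch: the server names, database name, and port are hypothetical, and it assumes mirroring endpoints already exist and the database has been restored on the mirror WITH NORECOVERY.

```sql
-- Run on the mirror server first (names and port are hypothetical):
ALTER DATABASE MyDB
    SET PARTNER = 'TCP://principal.example.com:5022';

-- Then on the principal server:
ALTER DATABASE MyDB
    SET PARTNER = 'TCP://mirror.example.com:5022';

-- Synchronous (high-safety) mode, available outside Enterprise edition:
ALTER DATABASE MyDB SET SAFETY FULL;
```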

Carlos Chacon: There you go. A way to save a few pennies there.

Andy Mallon: Absolutely. Everyone’s looking to save a few pennies.

Carlos Chacon: [00:03:00] That’s right. I’ll take Availability Groups. The idea of Availability Groups is like mirroring. You have multiple replicas. These replicas have their own data stores. They have their own copy of the database, which means if you have a 100-gigabyte database, now you have 200 gigabytes of space to allocate, one for each copy of the database. We’re going to add a Windows clustering layer on top. You have the database, and you have the Windows clustering component. There’s some integration with that that you’d need. That cluster component has to be there before you can create the Availability Groups. Then, like the name implies, you have this group, so you can actually put databases together, in a group, so they’ll fail over, or all be on the same server at the same time. A good example is SharePoint. SharePoint has multiple databases. If, for whatever reason, you’re using multiple drives, and one of those drives were to become unavailable, you’d want all of those databases to move over. You can create a group, or multiple groups, on that server to move them back and forth. Then, you also get an enhancement. You have some enhanced options for those secondary replicas. Now, you can take backups from those secondaries. You can also do some reporting, or querying, on them, so it almost makes this second copy useful. You can offload some of that CPU, or memory, if you have intensive queries for that application. This was introduced in 2012, as an Enterprise-only feature. In 2012, you could have four replicas, so four total nodes. In 2014, they increased that. You can go to eight replicas, although all of a sudden, that sounds like a lot of extra data. They also introduced a wizard that will enable you to add a replica in the cloud, in Azure, the IaaS, the Infrastructure as a Service offering. You have a VM in Azure. You could then run a replica, in asynchronous mode, obviously, because you wouldn’t want that delay over the internet. You can have that available.

[00:06:00] Then, in 2016, they introduced, you can have nine replicas now. I’m not sure why they only added just the one. They also introduced a feature for Standard edition. As Andy mentioned, mirroring has been deprecated, and Availability Groups are going to be taking its place. You can now have, in Standard edition, a replica, but it has some of the same limitations as you do with mirroring. You can’t take backups from that replica. You can’t do reporting, you can’t run DBCC commands, things like that. That’s Availability Groups. Replication, let’s talk about replication, Mariano.
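As a rough illustration of the SharePoint-style grouping Carlos describes, an Availability Group that fails several databases over together might be created like this. It’s a sketch only: the instance, database, and endpoint names are hypothetical, and it assumes the Windows failover cluster, HADR setting, and endpoints are already in place.

```sql
CREATE AVAILABILITY GROUP [SharePointAG]
FOR DATABASE [SP_Content], [SP_Config]    -- these fail over as a unit
REPLICA ON
    N'SQL01' WITH (
        ENDPOINT_URL = N'TCP://SQL01.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQL02' WITH (
        ENDPOINT_URL = N'TCP://SQL02.example.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        -- the readable secondary Carlos mentions (backups, reporting):
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```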


Mariano Kovo: Okay. I think it’s one of the oldest technologies that we have in SQL Server, for copying and distributing data, and various objects. Basically, there are three types of replication. There is the snapshot type, the very initial one, and the first one you have to do, where you copy all the objects and the data. When you move on, you have transactional replication, where each transaction made at the publisher is replicated to the subscribers. The architecture is like when you buy a magazine. You have the publisher that makes the magazine, and all the subscribers that hold the data themselves. I think replication is not a mechanism for HA or DR. I think it’s a great mechanism to distribute [DHM 00:07:20], but it should not be considered a high-availability mechanism.

Mariano Kovo:[00:08:00] A highlight of replication is that you can replicate data to SQL Server and to database servers that are not from Microsoft, like Oracle. [inaudible 00:07:52]. I think the negative side of replication is that you have to maintain it and keep an eye on it the whole time it is working. It often happens that the applications make modifications to the data, and that, perhaps, is not, how can I say it? It’s not supported by replication. You’ll have to maintain those changes.

Carlos Chacon: Ultimately, I think replication has adopted what they call “the newspaper mentality”. You have a publisher, a distributor, and a subscriber.

Mariano Kovo:[00:09:00] And a subscriber. You can have multiple subscribers, and you can work with different [inaudible 00:08:46] to push the data to those subscribers, so you can manage the delays between those subscribers. The most complex scenario is merge replication, which allows the subscribers to make changes to the data, and those changes go back to the publisher. Every server in the architecture becomes a publisher, basically. You have to maintain all possible changes to the data.
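The publisher/article pieces Mariano is describing map onto a handful of system stored procedures. Here is a hedged sketch of a transactional publication; the database, publication, and table names are hypothetical, and it assumes a distributor is already configured.

```sql
USE SalesDB;

-- Enable the database for transactional publishing:
EXEC sp_replicationdboption
    @dbname = N'SalesDB', @optname = N'publish', @value = N'true';

-- Create the publication (the "magazine"):
EXEC sp_addpublication
    @publication = N'SalesPub', @status = N'active';

-- Add an article: replication's unit is the table, not the database.
EXEC sp_addarticle
    @publication   = N'SalesPub',
    @article       = N'Orders',
    @source_object = N'Orders';
```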


Carlos Chacon: That’s probably one of the unique facts about replication, compared to some of the other options we’ve talked about: you have two sources where you can make changes. Two, or more, where you can make changes. That doesn’t apply to any of the other areas.

Mariano Kovo:[00:10:00] Exactly. The other great thing is that all the servers participating in this architecture, publishers and subscribers, can be different versions of SQL Server and different editions. You don’t have to maintain all the servers like [inaudible 00:10:02].

Mariano Kovo: You can have SQL Server 2016 as a publisher, and 2008 as a subscriber.

Log Shipping

Steve Stedman:[00:11:00] Log shipping is one of probably the least glamorous ways of moving data, in a redundant scenario. It’s also one of those that’s fairly robust, and it’s been around for a very long time. It was first introduced, I think, in SQL Server 7, unofficially, through some undocumented calls. Then really, in SQL 2000 and beyond, it was a regular feature. The way log shipping works, is you go through a process where you start with a full backup. Then, you have transaction logs that are being run at a specific time interval. It just continues to copy those transaction logs, and then replay them on the destination server, or multiple destination servers. With this, you get an entire database. There’s no option to pick just a subset of tables, like you could with replication. There’s a couple different ways to do it. Really, an automated way, or a manual way. The choice there really depends on your specific environment.


[00:12:00] The automated way, you just go through the SQL Server dialogs in SQL Server Management Studio. You can turn on log shipping, and it does the full backup for you. Then, it creates the jobs to do the copy of all the transaction log files. The scheduling is all done through that configuration. There’s also an option where, if you just have full backups and regular transaction log backups running, you can do a manual process, where you start by restoring a full backup on another server, and then have a job that copies and restores the transaction logs on a regular schedule. Effectively, that’s the manual version of log shipping. With this [crosstalk 00:12:02], most people start off with the automated one. There are some special cases, which I’ll get to in a second, where the manual process may make sense. With this, there’s little-to-no overhead on the source database with log shipping, once the shipping has been established.


[00:13:00] If you’re already taking full backups and transaction log backups, there’s no more work going on here, other than a tiny bit to keep track of what needs to be copied, and a little bit of bandwidth. Those would be copied off the server already, if they were backups. It’s very light on the primary server. Then, because of that process of just copying those backup files, and getting them restored on other servers, you can effectively do that to any number of SQL servers. Oftentimes, it’s just done from one source to one destination. It is possible you could do that from one source to 25 destinations, if you wanted to. Part of the process with log shipping, since it uses your backups, is that you can use your existing backup location, if it’s somewhere on the network that’s shared between both the source and the destination servers, as the place to hand off, or move, the data between the servers.


Once you get the log shipping database onto the destination server, there are two options for how that database will look to the people looking at that secondary server. One is that it’s going to be in a restoring state, where you have to actually do a restore with recovery to get that database to come online on the second server. The other is you can have it in standby, with a read-only option. One of the negatives on that read-only option, though, is that while the next restore is running, that database goes away, and it’s not read-only. You can’t see it at all.
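The two secondary-server options Steve describes correspond to the NORECOVERY and STANDBY restore modes. A sketch, with hypothetical file paths and database name:

```sql
-- Restoring state: the database stays offline until recovered.
RESTORE LOG MyDB
    FROM DISK = N'\\backupshare\MyDB_1300.trn'
    WITH NORECOVERY;

-- Standby: read-only between restores (an undo file is required).
RESTORE LOG MyDB
    FROM DISK = N'\\backupshare\MyDB_1300.trn'
    WITH STANDBY = N'D:\standby\MyDB_undo.dat';

-- At failover time, bring the secondary fully online:
RESTORE DATABASE MyDB WITH RECOVERY;
```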

Carlos Chacon: Yeah. You’d get blocked, basically, when you try to connect to it, because it says, “Hey, I’m in the middle of a restore.”

Steve Stedman:[00:14:00] Yep. Now, with log shipping, it’s not a great high-availability option, or even a medium high-availability option, but it is a great DR scenario, or DR option, so that if something catastrophic happens to a primary data center, and you need to get SQL Server running on a secondary server, it’s a great way to bring it back online, with a little bit of downtime.

Carlos Chacon: Sure.

Steve Stedman: One of the scenarios that, I know it’s been around for a long time, and it’s not that glamorous, but one of the more interesting scenarios that I was part of recently, was a project where we used log shipping to move from SQL 2008 servers up to Azure VMs that were running SQL 2014. That’s one of the nice things with log shipping. You can go from one version of SQL Server to the same version of SQL Server, or to any newer version of SQL Server, with log shipping. You could go from 2008 up to 2016, if you wanted to.

Carlos Chacon: Right. It will take care of any updates. It’s basically just adding data, at that point, with the transaction.

Steve Stedman:[00:15:00] Yeah. Then, part of that move to Azure, where we used log shipping to get the databases there, was that it took us more than a week to get all the data copied over on those initial backups. There were lots of databases that we were doing. We were able to use log shipping to keep it caught up, over time, while all those other databases were being copied. It was really, really, incredibly useful to get into an Azure VM, and do a limited-downtime switch over.


Carlos Chacon:[00:16:00] Right. As we’re talking about these different options, I see this trend. Mirroring and Availability Groups are at the transaction level. That’s the smallest delta, if you will, that they have. Log shipping can be, if there are transactions there as well, but then you get to specify a certain time period. I want 15 minutes’ worth of transactions, an hour, a day. There’s some wiggle room, if you will, as to how many transactions are going to be [separate 00:16:04] your source and destination. Then, replication, that same idea-


Mariano Kovo: You have to pray to the gods that the data is replicated.

Steve Stedman: You know? I’ve had that experience.

Carlos Chacon: Yeah. In fact, do that first.

Mariano Kovo: I know that, perhaps in the United States, it’s not a problem. Here, in Argentina, with the links between the subscribers and the publishers, perhaps the internet connection is not very good. Sometimes, we’ll go days without connecting. The data becomes outdated, and you have to restart the whole process.

Carlos Chacon:[00:17:00] Yeah. It can be not very forgiving there, either. We talk about things going “oops”, and having to get back to where you were. I have found replication to be difficult, sometimes, to make that happen. Is that a fair assessment? Don’t everybody jump in at once.

Steve Stedman:[00:18:00] I would say that replication is probably the hardest to dig yourself out of a hole with, when something goes wrong, out of all of the options. Additionally, replication is probably the most expensive, time-wise, on an ongoing basis, to keep stable. Whereas, with the log shipping option, and I’m not an evangelist for log shipping here, I’m just filling in the details, log shipping is pretty robust when it comes to a flaky internet connection. If it’s copying a log file, and the connection goes down for a few minutes, a few hours, or even a day, when it comes back up, it’ll just get caught up. It may take a while to get everything caught up, but it will eventually, and bring all that data up-to-date, as long as you still have all those transaction log files around at the time the connection comes back.


Carlos Chacon: Yeah. That would be the big caveat there, is making sure you have all the transaction logs still available. If your cleanup window is too tight, and you’ve deleted them, you basically have to start again.

Steve Stedman: Yep, absolutely.

Carlos Chacon:[00:19:00] One of the things, I guess let’s continue with this replication conversation. Mariano made the comment that he didn’t think replication was a high-availability, disaster-recovery type scenario. I’m going to mix it up here a little bit. My thinking, and maybe this has changed with Availability Groups, but at least in the older versions, my thinking was that replication was really the only way to provide those four nines, five nines if you will, if you wanted your data to truly be available, because you had so many different options with it. Thoughts?


Andy Mallon:[00:20:00] I would say that it’s not necessarily the only one that you can achieve five nines with. When I worked at an e-commerce company, we used mirroring for our high availability. Ultimately, our team’s goal was to hit five nines, but it wasn’t necessarily the goal of the company. We came pretty close. Five nines gives you about five and a quarter minutes of downtime per year. We did bi-yearly patching. Or, I should say semi-yearly patching. Twice a year, we took production down to patch the production databases. We kept the site up and running within that time frame, using database mirroring.


Carlos Chacon: Now, I’m interested. In that mirroring option, how long were you normally seeing the failover process take?

Andy Mallon:[00:21:00] That’s the catch. With mirroring, it will basically constantly be rolling forward and rolling back the transactions on the secondary, to try to keep the secondary up and ready for failover. What we found was that each individual database would fail over in a few seconds. It depends on how busy it is at the time, how many transactions there are to roll forward or roll back. We found that how recently the database had been checkpointed affected how quickly it would fail over. We added a step for planned failovers, where we would checkpoint the database, then do the failover. Even at, say, five or 10 seconds per database to fail over, if you’ve got a dozen databases, you very quickly turn that 10 seconds into a couple of minutes.


Carlos Chacon:[00:22:00] Sure. That seems pretty fast. Of course, I am going from memory here. For mirroring, it seemed like I was in the 15-to-25-second range. Some of those points that you made, as far as the checkpoint, I think it would be worth discussing again what that means. Basically, you’re telling SQL Server that, if I haven’t written this out to disk, now is the time to go ahead and do so.


Andy Mallon: Exactly. We never quite figured out why that would affect the failover, since in synchronous mirroring, regardless of whether it’s been written to disk, whether it’s been checkpointed, if it’s committed, then that transaction exists on both servers. It’s synchronous. We couldn’t quite figure out how being in synchronous mode depended on it being checkpointed, but we did find that our overall failover time was just a little bit quicker. That could be one of those side-effect type of things, where checkpointing also makes X, Y, Z happen, and X, Y, Z was the reason why it was faster. That was our shortcut for figuring it out, just anecdotal evidence.
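The planned-failover routine Andy describes is short in T-SQL. A sketch, run per database, with a hypothetical database name; the ALTER statement runs on the principal of a synchronous mirroring session.

```sql
USE MyDB;
CHECKPOINT;  -- flush dirty pages so redo on the partner is shorter

-- Manual failover of the mirroring session (run on the principal):
ALTER DATABASE MyDB SET PARTNER FAILOVER;
```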

Steve Stedman: Back on the five nines comment there, Carlos, one of the things with log shipping is that it’s very unlikely that you would be able to keep a five nines environment with log shipping.

Carlos Chacon: That’s not necessarily the purpose for log shipping. If you’re looking for that, it’s not really a candidate.

Steve Stedman: Right. The reason for that is that normally, when a failure occurs with log shipping, it’s a manual process to do that activation on the secondary server. You could have it scripted, but it usually requires human intervention to step in and trigger the fact that we’re going to turn on that second server. Most of the time, when a server goes down, somebody has to call you, or you get a notification. You have to connect, and make all that happen. Even in the best scenarios, you’re going to be three to four minutes before you can even start the switch over. Sometimes-

Steve Stedman: Sometimes, it’s more like 15 to 20 minutes. Then, that doesn’t even cover the case for being able to have maintenance windows, and things like that, where you have to switch to the other server. Log shipping is certainly not something you want to consider in a five nines environment.

Carlos Chacon: I know we really haven’t brought the application into play either, because the log shipping doesn’t provide, necessarily, any logic on top for an application to say, “Oh, I’m A. Oh, now I’m on B. I’m over here.” Then, there’s that logic that you’d have to bake in, as well, if you were going to have an application failover, or switch over, switch databases in that event.

Steve Stedman: The way I’ve made that work is through DNS entries, where you have a single DNS entry that you can change to point to server A, or server B when you do the switch over. Again, that takes time, when you do the DNS change, for that switch over to occur, as well.

Carlos Chacon: Right. You have to wait for the timeout, or whatever, for it to flush and pick up the new address.

Steve Stedman:[00:25:00] Yep. Definitely not a five nines option with log shipping.


Mariano Kovo: I’m sorry. [inaudible 00:25:10] was talking about [siv 00:25:12]. In replication, the unit is a table. You have to consider that the developers, or the application, create new objects. You have to add them to the replication scheme. In the case of a failover, or the server going down, you won’t have those new objects, or tables, synchronized.

Mariano Kovo: You also must not forget the security: logins, users, permissions. All of that, you don’t have with replication.

Andy Mallon:[00:26:00] Yeah. I think that brings up your point about just the maintenance of replication. That’s one of the differences of replication from the others: the others are at the database level. Once you make a change in the database, any change, it’s going to get propagated out. With replication, you’re looking at the object. You get to pick and choose what goes and what doesn’t go.

Mariano Kovo: Exactly.

Why Consider Mirroring?

Carlos Chacon: We’ve brought up mirroring. Actually, Andy, one of the reasons I invited you on was a tweet you had put out a long time ago. There was some discussion on Twitter about high-availability options. You [inaudible 00:26:34] for mirroring. What would we consider, again, assuming obviously, it’s going to be deprecated in 2016, and beyond 2016 it’ll be deprecated. Under what circumstances, or why, would we still consider using mirroring, if I’m using an older version of SQL Server?

[00:27:00] Andy Mallon:


[00:28:00] One of the reasons that … There are a few scenarios that Availability Groups just don’t fit the bill for, or don’t quite fit the bill yet, unless you’re on 2016. Availability Groups require Enterprise edition, or “Expensive edition”, as many people like to call it. If you don’t have deep pockets, then Availability Groups are out. Availability Groups also require a Windows Server failover cluster, which is also a bit more complex, if you’re in a smaller shop and don’t have a team of sys admins to help you with getting that Windows cluster set up. That’s true whether you’re doing a failover cluster instance or an Availability Group. In these cases, if you don’t have the expertise to support a Windows cluster, you’re actually going to get more downtime than benefit. There are lots of switches to flip, lots of knobs to turn. I certainly am no expert on that part of things. In that regard, you might just not have the manpower to support it.


[00:29:00] Another spot where mirroring really shines is cross-domain, or domainless. You can mirror between two servers that are not on a domain, or two servers that are on separate domains. Again, not having a domain is one of those smaller-shop type of scenarios, where you’ve got less infrastructure. Getting mirroring set up is really straightforward in those scenarios, and offers pretty good failover time. Again, I mentioned that, at the e-commerce company, we were doing failovers at about 10 or 15 seconds per database. A little bit longer on busier databases, a little shorter on less-busy databases. If you’re talking failover cluster instances, or log shipping, some of these other options people might look to, the failover time is always going to be much higher. That was actually the primary driver at the e-commerce company: being able to get completely failed over in seconds, rather than minutes. If they had used a failover cluster instance, then you’re restarting an entire instance. If you’ve got a lot of memory, starting up the instance can take a while.


[00:30:00] Carlos Chacon:
Another little caveat there is that if a sys admin is connecting and running some query, it will wait for that to complete before it starts to fail over. You could increase your time, potentially, there.


Andy Mallon: Right. You can have scenarios where you’re looking into a problem, and that can actually make an outage worse.

Steve Stedman: Yes. The watcher effect.

Andy Mallon: Yes.

Steve Stedman: That’s right.

Carlos Chacon:[00:31:00] I think you make a good point about the Windows cluster component. I won’t claim to have vast Availability Group experience, but I’ve set up several of them now. I agree that the majority of my problems with Availability Groups have come from the Windows side. Or, if you have something going wrong with the cluster, and it freaks out, then my database is affected. Versus my database having something wrong, and being the reason for causing the outage.


Andy Mallon: Right. I know, particularly in, I think it was 2012, that first version with Availability Groups, they were particularly fussy about, if you lose quorum on the Windows cluster, for any reason, then all of the databases go offline. Even your primary database that was up and running, on a server still up and running, your databases go into a non-writable state, because you lost quorum on your cluster, and everything panicked and shut itself down, just to be safe.

Carlos Chacon:[00:32:00] It thinks there’s a problem, and it wants to eject. It ejects, and then [the other 00:31:53] cluster comes down. Now, you’re stuck. That’s where you have to have good relationships with the Windows folks, because that’s not necessarily a database problem, per se, when they come together. Traditionally, I have seen lots of shops where those two groups haven’t always seen eye-to-eye.


Andy Mallon:[00:33:00] Yeah. With Availability Groups, in particular, you have such tight integration. In mirroring, if you want automatic failover, you just need a third instance that acts as your witness. That instance is basically the same thing as the quorum configuration in a Windows cluster, but because it’s a SQL instance, it’s within the DBA’s control. In Availability Groups, it’s the cluster quorum. It’s a little bit further out of our reach. In order to really understand how that works, you really have to be a system admin, or a clustering expert. Not a lot of DBAs are, so we have to really lean on sys admins. We have to learn more ourselves, but we also have to lean on a sys admin who understands how clusters work, and how Windows works, to make that work for everyone.


Carlos Chacon: Yeah. Or, even at a minimum, making sure that we have the right permissions to be able to create the objects in Active Directory, and all that good stuff, if they’re not going to help us do that. There can be some definite roadblocks there.

Andy Mallon: Exactly.

Carlos Chacon: I want to circle back to replication, just for a minute. We hit it a little bit, and picked on it. Are there good scenarios where we would consider replication now, particularly [now 00:33:56] that we have got these enhanced options?

[00:34:00] Mariano Kovo:


[00:35:00] That’s a nice question. I try to avoid it with [every client 00:34:07], when I [hear 00:34:08] replication. There are still a bunch of clients here that are really using replication. I was working last week for [inaudible 00:34:21]. They are a telecommunications company that has multiple different units, like mobile phones, like land lines, and different applications working. Every department owns their servers. They do not share data. When they have to ask for some information between them, they [may 00:34:48] replicate those SQL Servers between them. They are different versions, different editions. Perhaps the models of the databases are different, too. When you have to distribute that information between different models of tables, or applications, I think there is still a little room to use replication.


Carlos Chacon: Sure.

Steve Stedman: You know, one area-

Carlos Chacon: Go ahead.

Steve Stedman: One area with replication, where I’ve seen it used recently, successfully, is with SQL Server Standard edition. You don’t have all the other options available that you would have with Enterprise edition, and you need to offload some of your reporting work to a secondary server. Replication can be set up to move just a subset of the tables off to a secondary reporting-type server, to remove that load from your primary production server.

Carlos Chacon:[00:36:00] That was my thought, as well. If, for whatever reason, you could pinpoint … You have a large database, let’s say 1,000 tables, but your main reporting only uses 10 tables, then replication could be pretty attractive, potentially, to just handle that data.


Steve Stedman: Yep. That’s one where, as long as that reporting system can somehow deal with the time it takes to re-initialize one of the subscriptions, if it needs to be re-initialized, it works. If you can’t live with that, then replication is not a good option.

Andy Mallon:[00:37:00] The other spot where replication stands out to me as a great option is the scenario where, again, maybe in a reporting environment, or for some sort of additional read activity, you want to maintain a longer history of data. With replication, you can replicate just inserts and updates, and not replicate deletes across. In a scenario like that, you can have maybe a reporting database, or a data warehouse, that has an indefinite history of data, but on production, you keep just 90 days, or a year. That’s a scenario where you can’t really do that with any of the other technologies that we’re talking about.
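The insert/update-only behavior Andy mentions is configured per article. A sketch with hypothetical publication and table names: setting @del_cmd to 'NONE' stops DELETEs from propagating, so the subscriber accumulates history indefinitely.

```sql
EXEC sp_addarticle
    @publication   = N'HistoryPub',
    @article       = N'Orders',
    @source_object = N'Orders',
    @del_cmd       = N'NONE';   -- do not replicate deletes
```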


Carlos Chacon: That’s an interesting point. I think, however, I would still potentially attack that by moving over the data. I’m assuming, let’s just say it’s at the transactional level. I’m moving it at least every day, and I only need to keep five days’ worth on the source, or something. I’m thinking that I would just do those deletes … I guess not transfer them over. With log shipping, the deletes would come over. Availability Groups, they would come over. Mirroring, they would come over as well. Yeah. You’d have to have two separate … If you’re moving that data over, to your point.

Steve Stedman:[00:38:00] Another way to accommodate that would be with change data capture, or something like that. That’s always a mess to use, though.


Carlos Chacon: Right. Introduce another functionality there.

Steve Stedman: Yep.

Mariano Kovo: I have some clients that replaced replication with [ETLs 00:38:20] to offload the data, so they don’t have to use replication anymore. That’s an [inaudible 00:38:31] that we have, I think, in Standard edition, and Enterprise as well. [crosstalk 00:38:36] to work with ETL and [inaudible 00:38:42]. Of course, you have to maintain it, too, and develop the ETL packages.

Carlos Chacon:

[00:39:00] Right. Going back to Andy’s point, if you have the infrastructure, and could support the ETL, then yeah, it makes a lot of sense. If you’re looking for something out-of-the-box, if you will. That’s true.


Steve Stedman: One of the things this really comes down to, is none of these options are really particularly good, or really particularly bad, generally speaking. Some of them may be great, and some of them may be horrible, based off your specific scenario, your environment, or what it is you’re trying to accomplish. It really just-

Carlos Chacon: [crosstalk 00:39:20]

Steve Stedman: Go ahead.

Carlos Chacon: No, finish your thought. I’m sorry.

Steve Stedman: They’re really just tools in your tool belt that you can choose from. Depending on what you need, Availability Groups might be the best. Replication might be the best. Log shipping might be the best, or mirroring. Any of them might be the best, depending on that specific environment.

Carlos Chacon: Having said that, I think it was Mariano, or somebody, who said that when they hear “replication” in a client environment, they internally groan: “Ugh!” True or false: that is the way you feel about replication. Steve?

[00:40:00] Steve Stedman:
I’ll cringe a little bit, and get some goosebumps thinking about replication sometimes, and not want to deal with it. In the end, I’ll jump in and deal with it, no matter how much I like it or not.


Carlos Chacon: Of course. Andy?

Andy Mallon: True. Replication is definitely not high on my list of things that I want to look at, personally. When it breaks, it’s always a learning experience.

Carlos Chacon: Also true. Mariano, you agree?

Mariano Kovo: Yeah, totally agree with that.

Carlos Chacon: There you go. That might be worth something. We have seen situations where it has worked well. I guess I’d just say that in my experience, mostly it’s been that when those bumps in the night do come, it’s a big bump, and it’s going to take a lot of my time.

Male: That’s probably the biggest issue.

[00:41:00] Carlos Chacon:
Awesome. Last thoughts? Things that anybody wants to come back around to, before we jump into SQL family?


Steve Stedman:

[00:42:00] The other thing that I’ll say, advocating from my corner, and talking about mirroring, I do have to say that mirroring is my favorite way to do a migration to new hardware, or to a new version, because you can get the server setup, initialize mirroring to get your data moving across. You can mirror to a higher version. Once you failover, there’s no failing back, but you can mirror to that new version. Then, when it comes time to do that cut over, the amount of work that I have to do, as the DBA, is really just to do that failover, and make sure that the connection strings get updated, or DNS entry gets updated to that new server. The amount of work that I have to do, in that migration window, is really minimal. I can have all of my logins created, all of the data there, all of the jobs staged. I just have to turn them on. It’s just a matter of hitting that failover button. That’s the other spot that I think all of the HA technologies are able to [help 00:42:18], is with those one-time server failovers, when you want to move to another server. If downtime is a consideration, I always pick mirroring as my first choice.
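The cut-over Steve describes boils down to a couple of statements once the mirror on the new server is synchronized. A sketch with a hypothetical database name:

```sql
-- Run on the current principal (the old server) during the migration window:
ALTER DATABASE MyDb SET PARTNER FAILOVER;

-- Then, on the new server (now the principal), break the mirroring session
-- so the old hardware can be decommissioned:
ALTER DATABASE MyDb SET PARTNER OFF;
```

After that, it’s just the connection strings or the DNS entry, as he says.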


Carlos Chacon:

[00:43:00] That’s a good point. Even on the Availability Group side, the OS has to be the same; SQL Server, of course, has to be the same. In 2016, they did introduce features where you could use a different version. Again, you have to have Windows 2016 to be able to do that, and it really hasn’t even come out yet. I think it’s supposed to come out here, at the Ignite conference. That’s really in the future, so to speak.


Andy Mallon: Right. If you’re on a current version, and using Availability Groups, there’s really no good way to get to a new cluster, and a new version. There’s a little bit of pain that’s going to be involved there, until that new version of Windows comes out, and you can get to that version. Then, you’ll have the tools to migrate to new hardware after that.

Carlos Chacon: That’s right.

Steve Stedman: Similar to what Andy said on migrating to new hardware: if I’m migrating to new hardware at a distant location from the primary, with a possibly flaky internet connection in between, that’s where I would lean on log shipping to get there, where it may take a week to get all of the data copied over before you can activate it on the other end, and have it catch up along the way.

Carlos Chacon: Sure. You could potentially have pauses in the middle there, as well.

[00:44:00] Steve Stedman:
Yep. Just my preference, and it’s worked well in the past.


Carlos Chacon: Okay, very good. Should we do SQL family?

Andy Mallon: Let’s do it.

Carlos Chacon: As technology changes so fast, and even part of the discussion, as things continue to evolve and migrate, how is it that you guys keep up with technology, stay engaged, and understand what’s going on? Mariano, let’s start with you.

Mariano Kovo:

[00:45:00] Okay. I usually find myself constantly watching [inaudible 00:44:40] podcasts and trying [inaudible 00:44:42]. Unfortunately, I don’t have the time to do as many as I want. I think the online resources are a very good place to start. With Azure, or platforms like that, or [inaudible 00:44:57] from Microsoft, or something like this, you can try new technologies. Of course, I am a member of a local chapter of [PASS 00:45:10]. It’s a great source of information, where you can share these technology changes and discuss them.


Carlos Chacon: Very good. Andy?

Andy Mallon:

[00:46:00] For me, I really keep up primarily through Twitter. I follow lots of people that I think are much smarter than me. I see what they’re talking about, and I see what they’re posting about. Even if I don’t necessarily learn how to do things from their blog posts, I can always learn who is an expert in what area. Then, when something does come up, I have that bare minimum of knowing that if I want to find something on Azure, I know Grant [inaudible 00:46:00] has an article where he lists out other experts in subareas of Azure. Things like that, where I can learn a little bit about everything in little snippets. Slowly, all those little snippets that I get from Twitter, from linking to blog posts, and just seeing what other people are talking about, start to add up over time. Before you know it, you’ve actually learned something.


Carlos Chacon: There you go. There is a downside there, compañeros. If you’re too active on Twitter, then I’ll reach out to you and invite you on the podcast.

Steve Stedman: The next SQL family question would be if you could change one thing about SQL Server, what would it be? Let’s start with Andy on this one.

Andy Mallon: Oh, man. There are so many things that I would change.

Carlos Chacon: Keep mirroring around?

[00:47:00] Andy Mallon:
You know? I would keep mirroring around for a little bit longer, definitely. If I had to pick one thing, I would change the default isolation level to read committed snapshot, what some people refer to as “read committed with row versioning”. I think that’s really the best way to get consistent reads. With a little bit of luck, if you’re using read committed snapshot, you can convince fewer people to use NOLOCK, which is one of my many pet peeves.


Steve Stedman: Nice. I like that one. That is one of my peeves as well, with NOLOCK. If you listen to, I think, last week’s episode, we ranted about that a little bit. Mariano, as far as changing one thing about SQL Server?

Mariano Kovo:

[00:48:00] Okay, there are many things, too. I was working on the [BI 00:47:51] side in the last months, too. I think that there are too many products inside the same product. I don’t know if you follow what I’m saying. In SQL Server 2016, there are so many choices. You have to install one database server, and you go crazy: “What do I have to install? The R server, the [inaudible 00:48:14] server, the integration server, the reporting server.” We have to limit those [things 00:48:23].


Steve Stedman: Maybe focus a little bit more on the core offering, and then spin some of those off into a separate product?

Mariano Kovo: Exactly.

Carlos Chacon: Or, even a more dynamic install. Almost like the [web 00:48:39]. “I want to install the database, I only see database stuff.” It’s not going to ask me about SSIS.

Mariano Kovo: Exactly.

Carlos Chacon: What’s the best piece of career advice you’ve ever received?

[00:49:00] Mariano Kovo:
Difficult one. Any thoughts?


Andy Mallon:

[00:50:00] I have one. I would say that some of the best advice I ever got is that the more you screw up, the more you’re going to learn. But, when you screw up, admit it, fix it, ask for help if you need it. If you’re always timid about making a mistake, or always timid about not being the best, you never really get anywhere. Sometimes, you just have to get out there, whether you’re trying something new or trying something different. Whether it’s a new type of job, if you want to be a DBA and you’re not yet, or a new technology, try it out. If it breaks, it breaks. Admit it. Say, “I messed up. I’m working on it. This is the plan to fix it.” I think you get a lot further with both of those things: making mistakes, and then owning up to them.


Carlos Chacon: Yeah. There’s no question.

Mariano Kovo: [inaudible 00:50:14]

Carlos Chacon: Our last question today, gentlemen. If you could have one superhero power, what would it be, and why do you want it? Andy?

Andy Mallon: I would definitely be able to fly. I am incredibly impatient. I can’t stand waiting in traffic. I don’t like crowded subway platforms. If I could fly, I could get where I want to be quickly, over the tops of the crowds, and never have to wait in traffic again.

Carlos Chacon: Very nice. Mariano?

[00:51:00] Mariano Kovo:
I have my favorite superhero; that is the Flash. Speed would be my best superpower.


Carlos Chacon: There you go. For those who are afraid of heights, right? Speed is the alternative to flying. Gentlemen, thanks so much for being with us today. I appreciate the discussion. I think we had some good insights. We appreciate you taking some time to chat with us.

Mariano Kovo: Okay. Thank you very much.

Andy Mallon: Thanks for having me.

Steve Stedman: Have a good day, guys.

Carlos Chacon: We’ll put an ending on that, and we’ll wrap that bow up very nicely.

Episode 57: the SQL Server Hit List – Our Least Favorite Features

Compañeros, in episode 57 of the SQL Data Partners Podcast we take a page from the SQL Server Radio podcast and discuss some SQL Server features we think could improve. We talk about the redundancy of default maintenance plans, the pain of autogrowth and autoshrink, why NOLOCK hints shouldn’t be used, what’s so bad about Activity Monitor, and more. It should be noted here that we LOVE SQL Server; we’re just being a bit picky with some of the tools. 🙂

Our Top SQL Server Offenders

  • Default file autogrowth settings
  • Default maintenance plans
  • Shrink database or Auto Shrink
  • NOLOCK Hints
  • Table Variables
  • Instance settings – Processors tab
  • SQL Server Log File Viewer
  • SQL Server Log file history options
  • Spatial Data
  • Database Engine Tuning Advisor
  • Activity Monitor

Tuning Minute
We also chat about the what, where, why, and how of the Dedicated Administrator Connection (DAC) in SQL Server.



Steve: Compañeros, welcome to the SQL Data Partners Podcast! A podcast dedicated to SQL Server related topics which is designed to help you become familiar with what’s out there, how you might use those features or ideas, and how you might apply them in your environment. This is episode 57 and I’m Steve Stedman.

Carlos:  And I am Carlos L Chacon. Welcome, Compañeros, we’re glad to have you. We’re doing things a bit differently if you can’t tell already. We’re shaking things up a little. We ask our guests one of the things they could change about SQL Server, and as Steve and I have gotten some of that input we’ve talked amongst ourselves, we’re like, “Hey, let’s do a program dedicated to that.” Part of the impetus for this idea, truth be told, came from the SQL Server Radio program. So we’re friends with that show; Matan and Guy over in Israel do a fantastic job and have lots of fun with their program. And in episode 50, they did a list of the 50 worst features. Now we’re not going to recreate that whole list of 50, but we did pick out a handful that we thought we could talk about. We will post their list on the show notes today and we’ll do that at SQLDataPartners.com/worstfeatures and we’ll get that up there for you guys. Take a peek and provide some feedback. It will be interesting to get your take: what did you like, what did you not like about some of the things we talked about? In fact, we’ll invite you to leave a comment on social media, right? So share your thoughts with us using the hashtag #SQLPodcast and let us know either the feature that you’d change or what you thought about what we talked about and how you’d react differently.

Steve: We might be opening up a big can of worms there with some of our opinions on this one, Carlos.

Carlos: [laughs] Yes. Be nice. It’s just theory, just opining, right?  And we should say, “All hail to the Microsoft people.” They’re a lot smarter than we are and yes, while we might nitpick a little, we are extremely grateful for the product that they created.

Steve: Yes, absolutely. And then after we get through our list, we’ll follow up with a Tuning Minute.

Carlos: So today we’re going to be talking about the DAC, or the dedicated administrator connection, why you’d use it, and kind of how that came to my rescue not too long ago in a client engagement. I’ll tell you the story about it, why you’d want to turn it on, and what it’s for.
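For reference, getting the DAC ready ahead of time is a one-time setting. The local DAC works out of the box; this sketch enables it for remote connections as well:

```sql
-- Allow DAC connections from another machine (local DAC needs no setup).
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;

-- To use it, connect with sqlcmd's -A switch (e.g. sqlcmd -S MyServer -E -A)
-- or prefix the server name with ADMIN: in Management Studio.
```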

Steve: Here we go.

The Least Favorite Features

Carlos: Hey Compañeros, welcome to the show. So changing things up a little, I don’t have to welcome you to the show, Steve.

Steve: I’m already here.

Default Auto Growth

Carlos: You’re already here. You haven’t gone away. But for that we’re grateful, right? For all the rest of the Compañeros, seven episodes we’ve done together. So this is another great episode. So one of the first things that we want to talk about is features or functionality that we think we can change, and that is the default auto growth settings for databases. Which in fact has gotten quite a makeover in 2016.

Steve: Yep. Absolutely, so this is one that for many years it’s just frustrated me. Where you go into work on a database and you see that it’s set up with the default auto growth settings, which initially for many years for SQL Server it’s been 1 MB of growth on the data file and 10 percent growth on the log file. What you end up with on this is that over time you get a very fragmented database, because growing the database in small amounts leads to excessive fragmentation on disk and it leads to even more so when you’re shrinking your database. That’s one of those other naughty items, but we’ll come back and talk about that one a little bit later. The nice thing was, in SQL Server 2016, Microsoft updated this. So if you just create a database using all of the defaults you’ll get 64 MB autogrowth on both your log file and your data file.

Carlos: Yeah, I think that’s a logical change as things get bigger, but also things are moving faster now, so while in the past growing that much might have caused some issues, you shouldn’t notice it so much today. Obviously that doesn’t remove your responsibility as a database administrator to be watching that, and taking care of those file growths, but at least you’ll have something to fall back on.

Steve: And the best practice is to really grow your files so it’s large enough that you don’t really have to use the autogrowth. But however it doesn’t always happen that way and people forget to grow the files and are relying on autogrowth, oftentimes that can lead to the fragmentation. And this change in 2016 with the 64 MB autogrowth default is definitely an improvement on where it’s been in the past.
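Pre-sizing the files, as Steve suggests, is just an ALTER DATABASE away. A sketch with hypothetical file names and sizes; pick values that fit your own growth patterns:

```sql
-- Grow the files up front and set a fixed autogrowth increment,
-- instead of relying on the old 1 MB / 10% defaults.
ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_data', SIZE = 10GB, FILEGROWTH = 512MB);

ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_log', SIZE = 2GB, FILEGROWTH = 512MB);
```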

Default Maintenance Plans

Steve: And next on the list, and this was one of my gripes, was default maintenance plans. Yes, specifically around shrink database, rebuilding indexes, and rebuilding stats. I’m not going to hit on shrink database much; we’ll hit on that a little bit later. But someone who’s new to SQL Server realizes, “I want to set up some maintenance tasks. I want to make sure that my database is in good shape.” They go in and they right-click and get the maintenance plan wizard.

Carlos: Yep.

Steve: And when you do that maintenance plan wizard you get this nice set of checkboxes and you say, “I like this and I like that” and you go through and check the integrity and shrink the database and I’ll reorganize the indexes, sure, why not? Rebuild them, update statistics, and rebuild the history.

Carlos: Yeah, sounds like a catch-all, right? Like a one sized fits all.

Steve: I mean, those are all, well, not all of them, but most of them are things that you’ll want to do at one time or another. But the problem you run into is that if you just choose all of the defaults, without understanding what will happen, you get a maintenance plan in a default order where it first goes through and reorganizes all the indexes. Keep in mind what the reorganize is doing: it goes through and rearranges all of the different pages so that the index can perform as well as it can without doing a full rebuild. And then, in the next step, it throws out that index and completely rebuilds it from scratch.

Carlos: [laughs] So you’re saying I may have duplicated effort there?

Steve: Yes. You could be doing a bunch of work that is completely thrown away. And then, one of the things is that when you rebuild indexes, it also updates the statistics associated with those indexes.

Carlos: That’s right.

Steve: So once you’ve done an index rebuild, you have the absolute best statistics you’re ever going to have on that index, because it scans the entire index when it gets rebuilt, and the statistics are 100 percent scanned there. And then the maintenance plan goes and updates the statistics, probably using the default of 10 or 5 percent sampling, and it builds you statistics that are not as good as the statistics you had when your index was rebuilt.

Carlos: And you’re doing that work all over anyway.

Steve: And you’re doing it all over. So you end up doing these with effectively moving and scanning all of your tables at least three times. This is one of those that I commonly run into when I’m doing performance tuning or performance analysis where I ask the question of, “Why are you reorganizing your indexes and then rebuilding them?” And they say, “I thought that was the way it should work.” As we can see, it causes a lot of performance trouble. A lot of excessive I/O.

Carlos: Sure, and the other thing is that, being part of the SQL community, you benefit from this: there are community scripts for this. The problem with the default rebuild index job is that it rebuilds all of the indexes; it’s no respecter of indexes, if you will. To back up for a second: in certain systems, after the database gets to a certain size, that can take longer than your maintenance window is. Then I’ve seen people just disable the jobs, because, “Oh, it takes too long, I can’t do it.” So now there’s no care and feeding at all, which is even worse. Having something you can run regularly that’s selective about which indexes it touches, and doesn’t sweat the small stuff, if you will, is probably better overall from a maintenance perspective.

Steve: Absolutely. And this is one of those things that I haven’t looked recently but I remember in one of the SQL 2016 previews there were some improvements on these default maintenance plans but you get to a point where you add them all in and you’re really doing more work than you need to be doing. So you need to understand that and not do it over and over and over again.

Shrink Database and Autoshrink

Carlos: Sure. So the shrink database we actually talked about in episode 51 a little bit, let’s just hit that and tie it into the autoshrink as well.

Steve: This is one of those that I think was number one on the worst list at SQL Server Radio: shrink database, or perhaps auto-shrink, one of those two. But the thing is, shrink database and autoshrink are effectively doing the same thing: they go through and free up as much space as can possibly be freed from the database, packing everything in as tight as it can be, without any real regard for performance. The difference is that one of them you issue as a maintenance plan or as a DBCC command, and you have control over when it happens. But with autoshrink, you don’t have control over that. SQL Server hits a point in time where it says, “I need to autoshrink.” Let’s say that you have something going on that you want to have good performance with; autoshrink then runs and significantly impacts performance while it’s running, and then once it’s done perhaps impacts performance even longer, because the next time you need to insert any rows or change anything in that database, you need to have an autogrow event. It’s slow. So you can get into sort of a battle back and forth between your autoshrink and autogrowth if you don’t have them configured well. This is one of those where, if we could just completely get rid of shrink database and the autoshrink feature in SQL Server, I would probably be happy with that. However, I’d want to keep the ability to shrink a file: to go in and shrink a log file or a specific database file if you need to.
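Keeping the file-level shrink while avoiding the database-level options looks roughly like this (database and file names are hypothetical):

```sql
-- Make sure the automatic option is off:
ALTER DATABASE MyDb SET AUTO_SHRINK OFF;

-- When a log file really has gotten out of control, shrink just that file,
-- on your own schedule; the target size is in MB.
DBCC SHRINKFILE (N'MyDb_log', 1024);
```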

Carlos: Yeah, that’s true. Some of the log files do get out of control. Especially with bad care and feeding of your maintenance and your indexes, your logs are going to get big, potentially. I’ve often wondered where this came from, because it seems very Access-ish to me. Access had this idea of compaction of the database, and the Access people, in my limited experience of them, say, “You’re having problems? Compact it!” I think it’s kind of the same idea. I’ve told this story before, but I was sitting down with a SharePoint consultant at a client site. They were having some issues, and I was the data guy brought in to talk with them. And the words that literally came off his lips were, “The reason your database is slow is because you’re not shrinking your database.”

Steve: Ooh, and that is so wrong.

Carlos: I was like, oh, okay, I’m not sure who you are but you’ve lost your credibility with me.

Steve: I think back to one of the very first SQL Servers I ever worked on; maybe it had a 50 to 100 MB hard drive. This was 25 years ago, and at that point in time shrinking your database might have been really important, because disk was incredibly expensive.

Carlos: Sure, that’s true.

Steve: But in today’s world it’s just not something that you really want to be doing.


Carlos: So the next one is the NOLOCK hint, and I think this is on the list, ultimately, because there’s a lot of misinformation out there.

Steve: Absolutely, yes. The NOLOCK hint is one of my peeves on SQL Server, actually. I see it used a lot, and really, 99 percent of the time I see it used, people think it’s doing something different than what it does. I hear the statement, “But I want to run a query in the production system and not impact or block anyone else. Shouldn’t I just use NOLOCK?” And the answer there is no. The NOLOCK hint tells SQL Server to ignore other people’s query locks; basically, to read dirty, uncommitted data at that point, which can lead to missing rows, phantom rows, or duplicate data showing up in results. But it doesn’t do anything to stop the locking or the blocking in any way on the query that’s calling it. It’s the equivalent of saying READ UNCOMMITTED on the specific table that it’s referencing.
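To make the equivalence concrete, these two reads behave the same way (table name hypothetical):

```sql
-- Per-table hint:
SELECT COUNT(*) FROM dbo.Orders WITH (NOLOCK);

-- Session-level equivalent:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Orders;
```

Neither form does anything to protect concurrent writers from your query, or your query from their in-flight changes; it only skips taking shared locks on the data being read.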

Carlos: Now, would that be your advice to people who want to move away from that? Is it to transition the isolation level to READ UNCOMMITTED?

Steve: Well, effectively, whether you’re using NOLOCK on every table in your query or you’re just using the isolation level of READ UNCOMMITTED, the result is the exact same thing. But what I would do, if you’re running into this, and you can’t fix –

Carlos: Let me go back. Ultimately the idea is that it’s not giving you what you want. To your point, people think, “Oh, NO LOCK means no impact.”

Steve: Right.

Carlos: So that isolation level is not quite the same thing either, right? It’s not preventing impact.

Steve: Right. We’ll get to alternatives here in a second. But let’s take the example of your bank account and you have the bank account, probably wherever you bank is storing that in a database somewhere. Let’s say that it’s SQL Server and on payday you happen to get your paycheck deposited and you see a nice lump of money there. And then you run a report or somebody runs a query while an index is being defragmented or a page split is occurring, it could show that you have two paychecks on that query. Then a moment later you run that query again and you have no paycheck on that payday.

Carlos: [Laughs]

Steve: Now if it was me and that was my bank, and they’re telling me that I have twice my normal paycheck or no money in my account, I would be really concerned and perhaps consider changing banks at that point.

Carlos: Yeah, it wouldn’t take too many times for that to happen.

Steve: Yep. Exactly. So there are some solutions, some ways around this. If you’re finding that you need to use NOLOCK, either with the NOLOCK hint or READ UNCOMMITTED, because you have queries that are running and not returning, which is one of the almost legitimate reasons that people do it quite often, the thing you really need to look at is the root of the locking or blocking, and figure out what’s causing that. For instance, you might have a really big update statement being run that’s locking things for everyone. If that’s something you can break up and run smaller, or figure out how to run faster, you can eliminate the locking so you don’t have to use NOLOCK when you’re running the queries.

Carlos: Sure

Steve: Another alternative you could take a look at is the READ COMMITTED SNAPSHOT isolation level. If you put your database into read committed snapshot isolation, there’s less locking going on at that point, and it reduces the need for NOLOCK. Now, one of the things that people argue on this is, “You’re never going to see the phantom reads or the duplicate rows when you’re using NOLOCK.” Well, there is an example I did where I had a table with just a million rows in it, and it had a clustered index on a GUID, a unique identifier. Not your best practice, but it was a great way to show fragmentation and page splits and things. I went through and was just updating that unique identifier to a new value, which caused rows to be shifted around. I had that running in one window, and I had another session open in SQL Server Management Studio where all I did was count the rows in that table. I hit refresh on it, and refresh on it. Sometimes it would say I had a million rows in that table; sometimes it would say I had a million and ten. Other times it would say I had 990,000 rows in there, and sometimes it would say I had 995,000.
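Turning that on is a database-level option; a sketch with a hypothetical database name:

```sql
-- Existing READ COMMITTED queries start reading row versions from tempdb
-- instead of blocking on writers' locks.
ALTER DATABASE MyDb
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;  -- rolls back open transactions so the change can apply
```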

Carlos: And it was the same number of rows, you were just updating?

Steve: Exactly, because what happens is when you’re updating a row to cause it to go from one page to another, and then you’re using NO LOCK, you might scan that row on the first page and then it shifts to the second page before you get to it, and you count that row again. So it’s a very legitimate case of, if you’re having blocking issues, if you turn NO LOCK on you might get those missing rows or duplicate rows getting returned.
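Steve’s demo can be sketched roughly like this; the table and sizes are hypothetical, and you’d need to seed it with a large number of rows first:

```sql
-- A clustered index on a GUID: not a best practice, but great for
-- forcing page splits and row movement.
CREATE TABLE dbo.Demo (
    Id     uniqueidentifier NOT NULL DEFAULT NEWID(),
    Filler char(200)        NOT NULL DEFAULT 'x',
    CONSTRAINT PK_Demo PRIMARY KEY CLUSTERED (Id)
);

-- Session 1: keep rows moving between pages by rewriting the clustering key.
WHILE 1 = 1
    UPDATE TOP (100) dbo.Demo SET Id = NEWID();

-- Session 2: run this repeatedly. Under NOLOCK the count can drift above
-- and below the true row count as moving rows are skipped or double-counted.
SELECT COUNT(*) FROM dbo.Demo WITH (NOLOCK);
```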

Carlos: Wow, very interesting. I was one of those who think, and it does run in a lot of circles, in fact it was just yesterday somebody was like, “Yeah I’m running these big queries but I’m using NO LOCK so it should be good.” And it’s just that misinformation there about what the expected result is.

Steve: Yep. I used to work with a guy who always jokingly referred to this as the “NOLOCK is the go-faster switch”. Anytime you’ve got a slow query, you throw a NOLOCK on it and you get faster results. Well, you might not get the results you’re looking for.

Carlos: So another kind of misinformation one, and this goes back to our previous episode with Wayne, is table variables. Just kind of one of those where, hey, they’re in memory and everything is good. But upon further inspection, maybe not so good.

Steve: There is a lot of misinformation out there about this one. And there are people who will argue with you that if you use a table variable instead of a temp table, it’s not going to use tempDB. And that’s just not true.
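One quick way people demonstrate this is to look for the table variable in tempdb’s catalog. A sketch; the system-generated name will differ on every run:

```sql
DECLARE @t TABLE (i int);

-- The table variable shows up as a user table in tempdb with a
-- system-generated name (something like '#B1F0...').
SELECT TOP (1) name, create_date
FROM tempdb.sys.objects
WHERE type = 'U'
ORDER BY create_date DESC;
```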

Carlos: Right. So I think for our more expanded discussion on this you can go back to episode 48 with Wayne and take a peek at that where we talk about the differences there.

Steve: Or, if you’re going to the summit this year you can see him talk about it live. Which I’m looking forward to this year perhaps going to myself.

Carlos: Because he’ll go through and show you all the logic behind it. He’ll do more than just talk about it.

Instance Settings for the Processor Tab

Steve: So the general rule here is to stay away from table variables and use temp tables instead. Alright, the next one on the list that we had, and this is one that a lot of people go into thinking it’s the go-faster switch, is the instance settings on the Processors tab.

Carlos: So you right-click on the instance, go to Properties, and then click on the Processors page; that’s what we’re talking about here.

Steve: Yep, and this is one of those that unless you really, really know what you’re doing, in almost all cases whatever you change on this screen is probably going to make things worse performance-wise.

Carlos: Sure.

Steve: And there’s a lot of documentation out there on it, but there are only a very few fringe cases where perhaps Boost SQL Server Priority might help, and they’re very, very, very rare. And things like, well, the default is to automatically set the processor affinity mask for all processors. Yeah, that’s a good thing; you should leave it at the default. Automatically set the I/O affinity mask. Yeah, leave it at the default. Don’t mess with those. Unless you’ve got a really good case that you’ve tested in a development environment and you know it’s going to work well, don’t mess with those. Don’t mess with the worker threads. Don’t boost your SQL Server priority, because that’s not the magic “go faster” switch.

Carlos: Exactly, you’re just treating it like an operating-system-level operation, and again, you get mixed results there. One interesting case someone did tell me about, making a change there: they had a big server, 24 CPUs. Let’s say 2008 Standard edition could only take 12 CPUs, if memory serves. So they just installed two instances and had one instance use the first twelve and the other use the second twelve.

Steve: Oh, there you go.

Carlos: But again, a fringe case.

Steve: Yep, an extreme fringe case. The simple thing is, if you pop up the properties and look at the Processors tab, look but don’t touch.

Carlos: That’s right.

Steve: Don’t change anything there.

Carlos: Yes, because when you start changing things there, you may start sending a few more things to the SQL Server log, which is the next item on our list.

Steve: There are a couple of things on this that I think could be improved. Reviewing the SQL Server log is really a core DBA task; you should regularly be in there monitoring what's going on with your SQL Server logs. But there are some things in the log viewer that are just difficult and painful to use. One of those is that when you bring up the SQL Server Log File Viewer and the list of log items up top, you get this little tiny gray box at the bottom where the log message is displayed. And oftentimes most of the actual message is hidden off the screen in that little gray box. So you need to take that and slide it up until it takes up like half your screen before you can see the log message. Now, the next thing that I think is really poorly done is that it truncates errors that come back from SQL Server Agent jobs, or other jobs, at 4,000 characters. And oftentimes I'll be looking at a job that failed sometime…

Carlos: And the first 3,000 characters are gobbledygook, or just tell you about the system.

Steve: Yep. And according to those 4000 characters, everything looks great. And somewhere after those 4000 characters there’s some useful error message that you can’t get to.

Carlos: Yeah, that’s the other side, sure.

Steve: The other one on this would be better support for carriage returns and line feeds. Basically, when you get the error output from a stored procedure or a query that's run, oftentimes it just piles it into the log file viewer without any carriage returns or line feeds. And I end up copying it off into something else and adding carriage returns so I can see what's going on.

Carlos: Formatting that a little bit.

Steve: Yep. So really, it's the only way to view the SQL Server log, other than writing T-SQL code to query it, reading it as a text file, or using some kind of third-party tool. So it's one of those things that is pretty core to what people use every day, and it would be nice if it were a little bit better.
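For reference, querying the log from T-SQL looks something like this. `sp_readerrorlog` takes the log number (0 = current), the log type (1 = SQL Server, 2 = Agent), and optional search strings:

```sql
-- Read the current SQL Server error log, filtering for lines
-- that contain the word 'error'.
EXEC sp_readerrorlog 0, 1, N'error';

-- Or read the current SQL Server Agent log instead.
EXEC sp_readerrorlog 0, 2;
```

Since it's just a result set, you can size, sort, and copy it however you like, which neatly sidesteps the tiny gray box.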

Carlos: Sure, and if Mike Fal was here he’d be telling us to use PowerShell to read that.

Steve: Oh, good point. I forgot about that one, yeah. I’m sure Mike will razz me if he hears this.

Spatial Data

Carlos: So another one, and everything I've heard about this feature from people who are using it has been very positive, so I might be razzing a feature that doesn't need to be razzed, but let's bring up spatial data. So spatial data, again, the people that like it like it a lot. In fact, I don't have a ton of experience with it, so while I was at a past Summit I went to Hope Foley's session on spatial data and I thought, okay, let me see what this is all about. And without question, there are some things that you can bring up and show in SQL Server Management Studio. My beef with it is, why would you be showing that kind of visualization in SQL Server Management Studio? Now, admittedly, the geometry and geography spatial data types have been around since SQL Server 2008, so they arrived before the reporting components caught up. But particularly now we have Power BI and all of the other visualizations on the reporting side. In my mind, let's store the data that we need, and we can visualize it in the reporting layer later.

Steve: Very good point there. I think a tool like Power BI, with the right visualization, can use spatial data in amazing ways. Far better than you'll ever see in SQL Server Management Studio.

Carlos: Yeah, exactly. And then there are all the things that come along with it: spatial indexes, hints, compression. There are even different query plans for spatial data. I think, again, it's one of those things where, if it makes sense for your organization, I would try to see if we can get that reporting done in a different layer.
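To be clear, storing and querying spatial data in SQL Server is the part that works well; it's the visualization we'd push to another layer. A small sketch with the geography type (the coordinates here are just illustrative):

```sql
-- Two points as geography values: latitude, longitude, SRID 4326 (WGS 84).
DECLARE @seattle  geography = geography::Point(47.6062, -122.3321, 4326);
DECLARE @richmond geography = geography::Point(37.5407,  -77.4360, 4326);

-- STDistance returns meters for SRID 4326; divide to get kilometers.
SELECT @seattle.STDistance(@richmond) / 1000.0 AS distance_km;
```

The engine does the heavy lifting; a reporting tool like Power BI then just has to draw the results on a map.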

Steve: And this might be a good time to remind people of the hashtag #SQLPodcast. If someone doesn’t agree with us, and I kind of throw it out on this one because I don’t care about spatial data so much, but if someone doesn’t agree throw it out on the hashtag and tell us what you think.

Carlos: There you go. Again, those who have been using it are going to hate me for it. So if you have experiences with it let us know.

SQL Server Components

Carlos: Okay, so we are getting outside of SQL Server with these next ones. They're not pure SQL Server, but components that get installed with SQL Server. The first one is the database tuning advisor.

Steve: Yep. And the full name on it is the Database Engine Tuning Advisor. So even though we’re saying it isn’t core to SQL Server, it does sound pretty core if you look at the name.

Carlos: Sure, very good. I think you can only get it with the SQL Server install. I don’t think you can get it as a standalone install.

Steve: Exactly.

Carlos: So, one of the things, again, I think the tool itself is very neat. I think it's one of those where, the way that you use it, you can kind of lead yourself on a bit, in the sense that you're not providing the tuning advisor with enough information to give you good recommendations as a result. The workload you feed it just isn't big enough. Or, if you do get a large enough data set in there, it kind of chokes, and people's patience runs out before it finishes processing.

Steve: I’ve seen the same thing with that.

Carlos: Now, at some point they did add a plan cache option. Previously, you had to take a Profiler trace or get some other file or export of this data and then feed it into the tuning advisor. I think it was added in 2014; that was the version I used in preparation for this episode, and I saw, oh, they added the plan cache. It gives you a slightly larger view of the environment, or at least the things that are most important, assuming that your database has been up for a while, etc. There are caveats, but this is a step forward in my mind. You get a better view, potentially.

Steve: Yep.

Carlos: But then it spits out its recommendations, and unfortunately, again, I think people don't necessarily review that output and ask, "Does this make sense to me? Do I understand the changes that are happening?"

Steve: I think the tuning advisor, for what it is, is an amazing piece of software engineering to be doing what it's doing. It just doesn't go all the way to do what I guess it needs to. One of the things I see is it will suggest indexes and statistics, indexed views, partitioning, and that's about it. And there are a lot of other things you need to understand for the performance of a database. For example, what's happening in your T-SQL code? Or is there something in the stored procedure that you're doing? It's completely isolated from that. What I see is people jump into it expecting that it's going to give them everything that could possibly be wrong with performance, and it really doesn't do that.

Carlos: Yeah, it has a more narrow view, if you will.

Steve: And another thing that's prevented me from using it a few times is it has problems with temp tables. Specifically, when you're not explicitly declaring a temp table and you're doing an INSERT INTO with the temp table name, it doesn't work at all. It just crashes and throws extremely confusing syntax errors. So if you've ever got a query that calls a stored procedure that's possibly doing something with temp tables, and it works great in SSMS but the tuning advisor says "syntax error", well, it could be that it's using the INSERT INTO syntax to insert into a temp table. And then the other thing that I don't like about the tuning advisor is that there's an option in there that says, "Just apply all the recommendations."

Carlos: [laughs] Yeah that’s true. Like, “Hey! Trust me! What could go wrong?”

Steve: And it'll go build a bunch of statistics and indexes, and it's very dangerous and can be very wasteful. Just like when you're looking at an actual execution plan and it has recommended indexes: you don't want to apply all of those. Same thing with this. You don't want to apply them without reviewing them, looking for duplicates, and getting a good understanding. Otherwise you'll get a lot of overhead there with duplicate indexes.
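One way to sanity-check recommendations before applying them is a quick look for indexes on the same table that share the same leading key column; those are the usual duplicate suspects. This is just a starting-point sketch from the catalog views, not a complete duplicate-index check:

```sql
-- Indexes on the same table that share a leading key column
-- are candidates for a closer look before you add more.
SELECT t.name AS table_name,
       i.name AS index_name,
       c.name AS leading_column
FROM sys.indexes i
JOIN sys.index_columns ic
  ON ic.object_id   = i.object_id
 AND ic.index_id    = i.index_id
 AND ic.key_ordinal = 1          -- leading key column only
JOIN sys.columns c
  ON c.object_id = ic.object_id
 AND c.column_id = ic.column_id
JOIN sys.tables t
  ON t.object_id = i.object_id
ORDER BY t.name, c.name;
```

If two rows come back for the same table and leading column, compare the full key and included columns before accepting the advisor's suggestion.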

Carlos: Or you create those indexes and you’re going to be locking up your tables in the middle of the day. All of a sudden your users are complaining because they can’t get in. And they’re like, “hey, what’s going on?”

Activity Monitor

Steve: That brings us into the next one, which is Activity Monitor.

Carlos: Yes, Activity Monitor. This is another one of those that I think is actually pretty cool, which just may not get used quite correctly. The Activity Monitor, it seems like it's been around forever, right?

Steve: I think I stopped using it a while ago, so it does feel like it’s been around for a long time.

Carlos: Yes, that's right. But it has the graphs up top so you can see the CPU percentage, and it kind of gives you some insight into your I/O and into the batches. So that was kind of useful; I could just see how busy my SQL Server is at this moment. Then it has these screens underneath that show you different components, where things are slow or intensive, but it just kind of leaves you there, and all of those columns are not super explanatory. They don't really help you find the issue. The other reason I really fell out of favor with the Activity Monitor, and I should say that I talked with some of the Microsoft folks and they actually like it. I guess it's still their go-to, and that's fine. If you know the components there and know what you're getting back, yeah, it may still work for you. However, my experience was that I worked in a shop where the managers and the developers actually knew about Activity Monitor, and so they would all have it open, just a tab in their SQL Server Management Studio. I don't know if it was random or if they were trying to run something, but it'd be like, it seems slow, so they'd look at it. And again, in most environments the CPU is not all that busy, or it's in a consistent state, generally. And then all of a sudden it would jump up 5 or 10 percent and they were like, "Oh my gosh! The CPU is at 50!" I was like, what do you want me to do about it?

Steve: And that could be completely normal for where it's at every single day. But because they see the change, they look at it like that's gotta be the cause of the problem, and they end up chasing the wrong thing. So I think part of the key to that is really understanding what the different graphs and reports in the Activity Monitor mean. And that's one of those things, I'm just going to do a quick side plug on Database Health Monitor here, because I have a lot of reports in there and I had a similar problem. People would ask me, "What does this report mean?" or "What does this chart mean?" So what I ended up doing with Database Health Monitor is that I put a link on every single page to help that describes the details around what that report is doing and how it works. And I think if the Activity Monitor had something like that, that would be a big improvement.

Carlos: That would be incredibly helpful. And to your point there, some of it is just knowing your system. In Episode 28 with Erin Stellato we talked about baselines and how you can better know your environment. And admittedly, the Activity Monitor might be a place to get a feel for what those baselines might be. But if you are using Activity Monitor and it's your monitor of choice, use hashtag #SQLPodcast; we'd love to get that feedback.
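As a starting point for that kind of baseline, the raw numbers behind Activity Monitor's batch graph are exposed in a DMV. One caveat: many of these counters, including Batch Requests/sec, are cumulative since instance startup, so you sample twice and take the difference to get an actual rate:

```sql
-- Cumulative batch count since the instance started.
-- Sample this on an interval and diff the values to get a
-- batches-per-second baseline for your environment.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Batch Requests/sec%';
```

Capture that on a schedule for a few weeks and "the CPU is at 50!" becomes "is 50 normal for Tuesday at 2 p.m.?", which is a much better question.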

Steve: Tell us what you love about it. Maybe we’re missing something here.

Carlos: There you go. Maybe there’s some feature we haven’t explored. So that kind of wraps up the features that we wanted to talk about.

Steve: So I think that's a spin on one of the #SQLFamily questions that we ask: if you could change one thing about SQL Server, what would it be? Well, here's our answer if we could change ten or eleven things. This is what it would be.

Carlos: That’s right.

Steve: And I hope we’ve done this in a constructive way. We’d love to see these things change but we’ll keep using the product even if they don’t.

Carlos: Yeah. That’s right. I am not interested in spinning up too many postgres instances all of a sudden as a result of this conversation.

Steve: [laughs] No thank you.

Carlos: Yes, well again, if you liked what we talked about, please let us know. I guess we do want to give a shout-out to Alberto S. Gonzales. On Episode 54 he liked what we had to say; he said we were crushing it, Steve.

Steve: Crushing it, nice!

Carlos: I'm sure that was the CEOs that were crushing it. Yeah, again, give us that feedback and let us know. We'd love to hear from you. If there are other topics, remember that on SQLDataPartners.com/podcast there's also a mechanism to leave us a voicemail, if you will; through your computer you can leave us a voice message. Or leave us a comment, that's another way to interact with us.










Episode 58: Four Ways to Stay Sharp with Technical Learning

With all the changes in technology, how do you stay up to date with your technical learning?  This is a question we have been asking our guests over the last several episodes, and in this episode we, the hosts, attempt to answer it ourselves, with Carlos and Steve giving some thoughts on how we stay up to date.  We discuss four ways to keep up, and spoiler alert: the technology is just too broad to keep up with it all, so we start looking out to our friends.

Episode quote (that didn’t make it in, but should have):
“It’s not WHAT you know, and it’s not WHO you know; it’s WHAT you know about WHO you know.”  🙂

Our four tips are:

  1. Get involved with PASS events
  2. Participate or engage the community in some way
  3. Teach someone else
  4. Network and reach out

Inside the episode we share specific examples, lessons learned, and tricks for making the most of each tip.



SQL Data Partners – Transcript

Episode 58, 4 Ways to Stay Up-to-Date | Air Date: 08/23/2016



Carlos: Compañeros! Welcome to the SQL Data Partners podcast. A podcast dedicated to SQL Server related topics and which is designed to help you become more familiar with what’s out there, how you might use features, or how you might apply them in your environments. I’m Carlos L Chacon

Steve: And I’m Steve Stedman and this is episode 58. This week we’re going to be talking about four ways to keep up with technology. Again, this is a spin on SQL Family questions we’ve asked in the past. This time around, there’s no guests it’s just the two of us. But we’re going to cover some of the things we do and some of the things we’ve learned from previous guests.

Carlos: We'd like to give a shout-out to Chris Hendon. Chris reached out to us on Twitter. He'd like for us to talk about the differences between availability groups, log shipping, mirroring, and replication. And so we've been planning that episode, and it will be coming in a few weeks. So thanks, Chris, for giving us that episode idea; that will be coming shortly. We also want to invite you to download the latest version of Database Health Monitor. You can get that at databasehealth.com, or you can go to sqldatapartners.com/monitor, and there you'll be able to download the latest version. We'd love for you to download it, take a peek and give it a spin, and give us some feedback on what you like and don't like about it.

The Show

Carlos: Compañeros, welcome to the show.

Steve: So onto four ways that we keep up with technology. And the first question that comes to my mind is, why? Why do we need to keep up? Why do I need to spend some extra time learning what the latest 2016 feature might be?

Carlos: You’re causing extra work for us, Steve. You’re such a slave driver!

Steve: Yeah. Well, we could just sit back and watch tv for a few hours and not worry about that.

Carlos: The Olympics are going on right now.

Steve: That's true. Really, whether it's SQL Server or any type of technology-related area, everything is changing so rapidly, and I think without learning or keeping up, you're going to fall behind. I mean, think back to SQL Server 2000: if you had stopped there and hadn't learned anything new since then, your SQL experience would just be so limited at this point.

Carlos: Right. And you wouldn't necessarily be blamed, in the sense that it was five years before the next version came out, and now we're on two-year increments. Right, so 2012, 2014, now 2016, and it will continue to change. And I think with the advent of the cloud, that just accelerates things. More people are looking to do different things, and the role of the DBA is changing as well, with the influx of analytics and reporting and the things that people want to get out of that data.

Steve: Absolutely. And I think that with this, if we don’t keep up, someone else will. And then that other person will have more knowledge that’s more relevant to what’s needed to do our job than we do. And where would that put us? Not in a very good spot.

Carlos: And again, we want to be in the driver's seat as far as opportunities go, and keeping up with those things. Not that we need to understand them 100 percent completely, but we need to be able to speak to them and understand where they fit in the ecosystem. All of those things can become important.

Steve: Or at least understand who to ask if there’s an area you need to know about and you don’t have a deep enough knowledge.

Carlos: That’s a huge point, because the reality is that the technology is getting too big. There are just too many things out there now and the idea of trying to keep up with all of it is really silly.

Steve: Yep, and I can remember back to – and this might date me here –1990 when I had my first SQL Server class in version 1.1. And I felt at the end of that, it was a week-long class, that I was certified. I knew everything I needed to know to do anything with it. Well, 25 years later there’s a lot that’s changed and grown along the way. And if I hadn’t kept up along the way, I think I would be a relic along the side of the road.

Option 1: PASS

Carlos: [laughs] That's right. Let's go ahead and jump into our four points. And the first one, which is somewhat obvious, is PASS events. So they've changed the name, and PASS is now just PASS. They provide events like the Summit, SQL Saturdays, and local chapters.

Steve: And don't forget virtual chapters as well. If you live somewhere where there aren't local chapters, there are virtual chapters online, and you can catch them through a GoToMeeting-type broadcast.

Carlos: That's right. And you know, I think it's safe to say that when it comes to getting database people together, the PASS organization is a big driver in helping to do that. It's not that everyone comes through those events, but it is the most common way for people to connect with others.

Steve: And it’s not just getting database people together. It’s getting fun database people together. I’ve been to other conferences where it’s not a lot of fun. But if you go to PASS events and Summit and SQL Saturday, everyone’s having a great time. You get to know people, and you can make some lifelong friends there perhaps.

Carlos: It’s almost a culture about it.

Steve: Yep.

Carlos: Of sharing and learning.

Steve: So one of the things I look at is getting to know the vendors while you're there. And there are some vendors you approach that just want to get to know you. An example that really led to a great experience for me is Joes to Pros, a book publisher, about four or five years ago. That was a vendor I met at a SQL Saturday. Rick, who was the owner of the company, he and I chatted, and he said, "You wanna write a book about SQL Server?" And I'd just done a couple of sessions on Common Table Expressions, and I thought, okay, yeah, let's do a book on SQL Server Common Table Expressions. So over the next four or five months we got to know each other and worked together, and he showed me how to write a book, and I kind of picked up on it, and by the end of it we published a book called SQL Server Common Table Expressions. Although that company had some issues and isn't doing very much book publishing anymore, it was a really awesome experience. It pushed me into a whole new area that I'd really not been part of before: how to write a book. And it forced me to go and learn everything about that topic that there is to know. And I think that, even though Common Table Expressions is a pretty narrow topic, I went really deep and learned everything I possibly could about them.

Carlos: So it presented you with a unique opportunity that you wouldn't otherwise have had. I think the other advantage there is that, ultimately, organizations exist to try to solve problems; at least, they figure out what people's problems are and have a way to address them. If you can at least understand what problem an organization is trying to solve, that can also be a benefit to you. It might open some doors into new technologies, or even business issues that you may not have been aware of, to guide you in something that maybe you want to pursue, or, if nothing else, to be able to pass it on to others should the need arise.

Steve: Yep. Absolutely. And I think the other side of PASS is the real big thing, whether it’s SQL Saturday or something else, is the sessions. You go to one of those types of events and there are many sessions at any time of the day that you can choose from. I usually find far more than one per hour that I want to attend. And I mean, it’s just incredible material put on by the community out there. It’s a great way to learn. I’ve picked up a lot there and have used that as a way to jumpstart my learning on a specific topic.

Carlos: Exactly. You know, it's much easier to have someone else who's gone through the manuals give you the abbreviated version, the Cliffs Notes version. That gets you on your way; you're on the fast track there.

Steve: I was going to throw another one out there at the PASS events, and we’ll talk about networking here in a bit, but it’s just getting to know people. I remember at a SQL Saturday speaker dinner, which usually happens the night before the SQL Saturday, I was talking with Chuck Lathrope, who is just an expert on SQL Server Replication and that was at a point where I was still pretty new to the whole side of SQL Server Replication. And just being able to chat with him for a half hour during the dinner time, I was able to pick up a few tips and tricks that helped me get going in the right direction with SQL Server Replication.

Carlos: I think going to sessions and meeting the presenters, we have a lot of smart people in the community who are willing to give a lot of their time, generally even after the SQL Saturday events. Or even sometimes after the local meetings, people will go out after to chat and to pick people’s brains about what they’re doing, why they’re doing certain things, and things like that.

Steve: And if you’re ever out at an event and you hear, “The group is going out to dinner somewhere.” Join and go along. You’ll get to know people and it’s great.

Carlos: And of course, if you're going to be at Summit this year, look for Steve. I'm still on the fence about whether I'm going to go, and I don't think that decision will be made before this podcast goes live. If I do make it out there and you're there, we'd love to see you and say hi. And again, this year we'll have "SQL Compañeros" ribbons at the Summit.

Steve: And on that point at Summit this year I’m actually going to be there the whole week. Coming in on Sunday night. If there’s anyone who wants to talk about the podcast or Database Health Monitor or other things I’m doing, just let me know. If you’re there early I’ve got plenty of time to meet you.

Carlos: Sure. We may even sponsor a meetup of some sort. Not that it’d be a big thing, but people getting together for drinks. As a non-drinker I can say that. To chat about things and talk more.

Steve: Yep, sounds like fun.

Option 2: Create Something

Carlos: So the second point we had was to create something. Again, this is something that’s going to be above and beyond what most folks are doing. It doesn’t have to be huge. We’re both going to share examples that we’ve done that have had huge dividends for us.

Steve: And this could be as simple as coming up with some SQL scripts and putting them out there on GitHub as an open source tool, or something like that. Or it could be something bigger, like doing your own podcast.

Carlos: Yes. This podcast has been great for us. We've been able to talk with tons of very interesting people. We've gotten to know some of Microsoft's Program Managers that we wouldn't otherwise, well, I've been able to establish relationships with them that I just wouldn't have had otherwise. And in certain instances, I've been able to follow back up with them. I'm thinking of Luis Vargas, for example, who was the Program Manager for the SQL Server VMs. There was actually an instance where we had some questions about some functionality regarding the classic and Resource Manager versions, and how the migration worked and what-not. And I reached back out to him, and he provided me with some information as a result. I wouldn't have had those interactions otherwise.

Steve: I know just in the short time that I’ve been a part of this podcast, I’ve learned a lot just in what we have to research each week to talk to the guests and then just going back and listening to all of the episodes prior to episode 50 when I got involved.

Carlos: I think, again, it doesn't have to be super challenging. I know the podcast is a lot of work, believe me. Another option is going to your local user group and being willing to say, hey, let's take ten minutes and answer questions or talk about problems people have before we get started. With all the other things people have to do, taking on that little part of it, or thinking of some way the other attendees can interact and intermingle, just enhances the experience and gives you the opportunity to mix in there.

Steve: You know, another one to get in the mix would be to have some type of online event or activity. And that's kind of a vague term, but an example of what I did a year ago was the Database Corruption Challenge. That was something that, when I did it, I never really sat back and planned and said, "This is what I'm going to do." It started with a blog post that eventually turned into this over-ten-week competition of dealing with corrupt databases. And with that, I think I learned more about database corruption than I could have imagined. I mean, I knew a lot about it going in, but by the end of it, wow. I'd seen so many kinds of database corruption and so many different ways of people trying to fix it that I learned a huge amount, and it put me in a really good spot, because that's one of the things I deal with regularly as part of my work.

Carlos: Right, I think that was one of the takeaways with what you’ve expressed to me. You know, you had solved the database corruption in a certain way but then found other people doing it in different ways, and so now you have another option in your arsenal that you may get to use later on. And so again, you’re kind of inviting the creative process from the community into your world and it just helps there.

Steve: Yep, yep. And to be able to be part of something like that, whether it's corruption or some other topic you choose, you really have to get it right; otherwise the people participating are going to catch you on it. And I know there was one thing I didn't get quite right, and they corrected me, and it was an awesome learning experience.

Carlos: Exactly. It’s one of those, yes, you have to put yourself out there but again, the people that are going to interact with you, it’s going to be a positive experience if you put your best foot forward.

Steve: Oh, definitely. And I think the next one we had on our list for creating things was Database Health Monitor. I've been working on it, off and on, for over five years, almost six years now. It originally started out because, as a DBA, I had a handful of queries that I would run regularly for different performance-related things, and I thought, "There's got to be a better way of doing this." So I originally built it as a bunch of SSRS reports, and I quickly learned, at that point in time, that SSRS really wasn't mature enough to do what I wanted it to do. So then I started building Database Health Monitor as a standalone application to automate many of the common queries I would run on a daily or weekly basis to keep track of what's going on and how healthy my databases are.

Carlos: Yeah, exactly. It’s almost like a two-fer there. One, you’re learning and putting all those things together but it also helps from the organization perspective because you don’t have to ask, “Where did I put that query again?”

Steve: Yep. And along the way there have been lots of people using it. I mean, there have been over ten thousand downloads of Database Health Monitor so far. And with that, everyone's got a slightly different environment, and when something doesn't work, they let me know. I remember very early on that one of the problem areas I had issues with was case-sensitive databases. When I first built it, it was on a case-insensitive database, and for the first person who tried it on a case-sensitive database, it didn't work. I quickly spun up a VM and installed a case-sensitive database to get it working, and that just became part of my overall testing strategy; I made sure it was something I was aware of as I built it.

Carlos: Yeah, and again, that feedback, that ability to hear someone's unique experience, helps you, because you may not have had any experience with case-sensitive databases. And then you know, okay, what's that, let me go learn more about it, and now let me go and make the tool work.

Steve: And it wasn’t that I didn’t have any experience with it, it’s just that I overlooked it. And it just pushed me to go and make sure that I included those kind of things. And along the way there’s just so many different things where people will come up with an idea or people will come up with a suggestion that they’ll throw at me and I have to go and figure out how can I build that in a way that’s going to work for everybody. And it’s just been an incredible learning experience.

Option 3: Teach Something

Carlos: The next topic, or the next idea, is to teach something. Of course, this can be as formal as giving a presentation. We've talked about some of the options; I mean, Steve's been lucky enough to be selected to present at Summit. So there are going to be like a thousand people, I'm sure. And that might be a little bit intimidating. But that's not necessarily what we're talking about when we say teach something.

Steve: Right. I mean, it can be anything from just teaching the person at the desk next to you something on a daily basis. One example: at a place where I worked, the developers didn't know enough about writing SQL queries. So a coworker and I, who were stronger on the database side, put together a lesson set to teach them a different topic on a weekly basis that would get them closer to the 70-461 SQL queries certification.

Carlos: Oh, very nice.

Steve: And that was a lot of fun. And I thought I knew a topic, but then I went through and made sure I had to know it enough to present on it for that week, and wow I had to learn it in even more depth at that point.

Carlos: Exactly. And I think that going through that and being able to verbalize it out loud lets you make sure that all the dots, at least that’s the way I think about these things. I have these dots that I think that I know about. Can I connect all of them? And do they all make sense? Or as I go through all of them I think, there’s a gap there in my understanding I need to go in and understand that better.

Steve: And one of the things I’ve discovered is even if I don’t see that gap when I present, somebody else will. And when I do present, they’ll catch me on that and I’ll be even stronger and know it better. People ask questions, and being able to go back and say, “I didn’t know that, I’ll learn it and get back with you.” That’s not a bad thing and you learn along the way.

Carlos: And I think as a side note, if you can incorporate that culture into your company, that’s just a bonus. Now, not everybody’s company will pay for lunch to have a lunch session or what-not. I’m a big fan of them and I don’t think they happen nearly enough. But if you can make that kind of a cultural thing, then at least you get pizza out of it as well.

Steve: Yep. And if anybody’s in my area in Bellingham, Washington and they want pizza, you can drop in for the Bellingham SQL Server User Group and come up and present. I’m always looking for someone to present there and it’s a great way to learn.

Carlos: And I think again, for many people it can be as simple as mentoring, going up to that whiteboard and teaching. I think in a lot of environments, the classic case is that the database is guilty until proven innocent when it comes to performance-related issues. Why not take that time and explain to the people who are involved how you’re collecting that information, right? The role indexes play, speeds, disk performance, things like that. And then help those people understand so that the next time they have an issue, they can actually do some of that themselves without having to come to you directly, because they understand that process better.

Steve: And I think being the DBA who teaches them to be stronger in the database will put you in a much better position in the long run.

Carlos: It may seem counter-intuitive, but the more that you share, the more people will look to you as a trusted, acknowledged source.

Option 4: Networking

Steve: Yep. So the next item we have on the list we had was networking. Just getting out there and getting to know people. Getting the scoop from your peers on what’s happening and what’s new.

Carlos: Yeah exactly. So I think, if you take all these ideas and start kind of getting out there, you’re going to have networking experiences. You know, the one thing that I would suggest is, you know, don’t try to blow people away with all the things that you know. But try to find out more about them, right? Kind of connecting some of those dots, you know? Asking some questions and then see if you can provide them some value. The reason for that is that nobody likes a windbag, right? That Mister-Know-it-all type personality and I think sometimes we as technology folks, we immediately kind of go to that because we’re trying to establish our position, I guess? Instead of just asking those questions. A very common one is where are you from, those kinds of things. If you happen to know somebody who’s in the Bellingham area you could very easily connect with Steve because, “oh hey! Have you talked to so-and-so who also lives in Bellingham?” That’s an easy way for you to get to know that person and for them to remember you.

Steve: Yep, absolutely. And then when you connect with those people, follow up afterwards on LinkedIn and Twitter and connect with them electronically so that if four months down the road you remember, “Oh, I had that conversation with Carlos at PASS Summit.” And you can know where to find them or where to follow up with them if you need to. I know for me that’s lead to a lot of great relationships and even work coming out of it.

Carlos: I will add one caveat here which is don’t ask people to solve problems for you, right? If you’re going to ask someone for help, reach out to them and connect with them but don’t say, “Oh here’s my problem, solve it.” That’s not a way to win friends and influence people and people will find other things to do very quickly.

Steve: Yes. I would completely agree. So ask people for help when you need help but don’t ask them to do your job for you.

Carlos: Exactly. And I think ultimately that’s the benefit from this networking option. So doing these things and having so many networking opportunities, how does that help us stay up-to-date? My example was when I first wanted to install availability groups. I had read a little bit about it and attended a session from the regional mentor for the Mid-Atlantic area. I had gone to a user group in Lynchburg, Virginia, and a presenter there worked for a college that had implemented availability groups and was talking about it. So I’d been exposed to it and learned some of the basics, and then it was time for me to implement it. And I thought I knew enough that I went too quickly and I implemented it wrong. It wasn’t quite working, and while I did troubleshoot that, I did it for twenty minutes. I said, “This is crazy. Who do I know that knows about availability groups?” And I remembered him, dialed up his number, and we had a conversation, and he was able to guide me down the path of, “Here’s where you went wrong, don’t worry about these things, you need to worry about these things, and oh by the way, you haven’t asked me about x, y, and z. How are you going to solve that?” And so that saved a lot of painful staring into the screen and yelling as a result and, again, kind of getting there faster.

Steve: And having those networking relationships, so you know who to ask at the right time, can be incredibly valuable for that learning experience.

Carlos: Exactly right.

Steve: I was going to say that another side of networking is not just going out and talking to people, but getting involved with some of the local events. And I think a big part of that is giving back to the community, like with the local SQL Server chapter, the local PASS chapter. By doing that and getting involved with those things, it exposes you to so much opportunity to learn from different people.

Carlos: Exactly. Because you are there, you’re in the community. Both of us are leaders in our user group, and people seek us out because they want us in their network. They know that we can potentially be helpful to them.

Steve: And another example there is getting to know different people through that user group. I know a couple of different people who do different kinds of SQL Server work than what I focus on most of the time. For instance, there’s one that does a lot of BI-type work. And he’s someone that I know that if somebody came along with BI work that I didn’t have time for, or it’s not the kind of thing I do, I’d hand it off to him.

Carlos: Yeah.

Steve: And if he chose to do it, it would work out to be a good thing for both of us. But getting back to our main topic, learning who can do what well can make things really good for you and for growing your knowledge going forward.

Carlos: Exactly. So Compañeros, those are our four tips. We’d love to get your feedback. What ways do you stay up-to-date with things? You can use the hashtag #SQLPodcast to put that out and we always welcome your comments. Today’s episode will be at SQLDataPartners.com/learning and there you can get to the show notes and references for some of the things that we’ve talked about today.