Episode 63: Availability Groups

Have you heard about the new high availability features in SQL Server 2016? Availability groups provide both high availability and disaster recovery options, but they also have several areas you must be aware of so you don’t introduce more risk into your environment. The major advantage is that availability groups allow you to fail over more than one database at a time. In Episode 59 we talked about general data availability options, and in this episode we focus on the new features of availability groups in 2016 and how data availability options have changed, with our guest John Sterrett. John shares his experience getting a large database to a highly available state, along with some other ways to use availability groups.

If you are using availability groups, we’d love to hear about your experience, along with any issues you’ve had, in the comments below.

Listen to Learn…

  • How High Availability has changed since SQL Server 2012
  • The new 2016 features and how to use them in your environment
  • The new “round robin” read-only routing option that gives you more control over readable secondaries
  • Why your Network Admin might be the one who keeps you from implementing HA
  • The single biggest impediment to HA, according to John
  • His best piece of career advice and how PASS helps you with this

Episode 63 Quote:

“So basically, availability groups are sort of taking the best features of both failover clusters and database mirroring and putting them together to get what we know today as availability groups.”

This Week’s Tuning Review Topic

Episode 63’s Tuning Review is about Rebooting SQL Server. Listen and learn over at http://stevestedman.com/reboot.

About John
Today’s guest is John Sterrett, a SQL Server DBA and Microsoft Data Platform MVP out of Austin, Texas. John runs the High Availability and Disaster Recovery Virtual Chapter as well as the local PASS chapter in Austin. He also blogs and speaks regularly at PASS events. Follow him on Twitter @JohnSterrett and on LinkedIn.

Resources
SQL Data Partners: Episode 59
SQL Data Partners: Episode 61
Multi-Subnet WSFC Setting
Direct Seeding Benchmark
60 TB database made available with no downtime
Readable Secondary Replicas
Today’s Tuning Review: Rebooting SQL Server
PASS High Availability and Disaster Recovery Virtual Chapter
John’s Posts on SQL Server Central
MSDN: Data Replication and Synchronization Guidance

Episode 63: Transcript

Carlos: Well John, welcome to the program!

John: Hey guys, thanks for having me.

Carlos: Yes, always nice chatting with you. We do appreciate you coming on the show today. Ultimately, our topic for today will be expanding on some of the things we talked about in episode 59, talking about some HADR-type scenarios. And we’re focusing on the 2016 implementation of some of these features and so John has been doing a bit of playing around with that. So before we get into the new stuff, let’s take a quick second and review some current features, if you will, of availability groups. So John, why don’t you talk us through what availability groups are, how they get set up, etcetera etcetera.

John: Sure, is this where we hit pause and tell everyone to watch Episode 59?

Carlos: There we go. At SQLDatapartners.com/dataavailability.

John: Nice. Well, to give the CliffsNotes version, there are a couple of core high availability features that have been around for a pretty long time in SQL Server. We’ll touch on a couple of these because they kind of morph into what availability groups turned out to be. First of all, there’s Andy’s favorite HA feature, if anyone’s watching Episode 59: database mirroring. This was basically database-level high availability that would give you a second copy of your database, almost in real time, where you have options of using asynchronous or synchronous mode to send that data over to another server.

Carlos: Right.

John: So, there’s also Windows failover clustering, which would be used for a failover cluster instance. What this gave people was a single name that they would always use when they connect, so they can access the data regardless of whether server A or server B or whatever server in your FCI topology is hosting the data. So basically, availability groups are sort of taking the best features of both failover clusters and database mirroring and putting them together to get what we know today as availability groups.

Carlos: Merging those features into one. So you’ve got two data stores, but then you also get the Windows clustering, which is where you get the single network name.

John: Exactly. And this came in 2012, so you had to have Enterprise Edition. Same thing in 2014. When we get to 2016, we’ll talk about how that changes: you now get to have some availability group love even in Standard Edition.

Carlos: There you go. And the reason for that is because database mirroring is going away, and they had to find a way for the common folk, if you will, to have some availability.

John: Exactly, and one of the super cool things there is that you can actually use async mode where you’re mirroring. Before, you would have had to have Enterprise Edition for that. Even if you love how you use mirroring today, that’s one huge bonus feature of using availability groups in Standard Edition with SQL Server 2016.

Carlos: So let’s go into that. In Standard Edition, what are the feature sets available to me in 2016?

John: So in 2016 you’re going to be able to use what is called the “basic availability group”. I know, that doesn’t make it “BAG-ish” either. But this is very similar to what we expect from SQL Server database mirroring today. You’re going to have a single database in the availability group and really only one secondary replica that you can use there as well. So a lot of it is the same: if you want to keep your connection strings with a failover partner, you can do that, or if you want, you can have a listener as well.
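
For readers following along at home, here’s a minimal sketch of what creating a basic availability group looks like in T-SQL on SQL Server 2016 Standard Edition; the server, endpoint, and database names below are placeholders:

    -- Run on the primary replica. WITH (BASIC) creates a basic availability
    -- group: one database, two replicas, and no readable secondary.
    CREATE AVAILABILITY GROUP [BasicAG]
    WITH (BASIC)
    FOR DATABASE [SalesDB]
    REPLICA ON
        N'SQL1' WITH (
            ENDPOINT_URL = N'tcp://SQL1.contoso.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,  -- ASYNCHRONOUS_COMMIT is also allowed
            FAILOVER_MODE = AUTOMATIC),
        N'SQL2' WITH (
            ENDPOINT_URL = N'tcp://SQL2.contoso.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC);
    -- Prepare SalesDB on SQL2 (restore WITH NORECOVERY), then on SQL2:
    -- ALTER AVAILABILITY GROUP [BasicAG] JOIN;
    -- ALTER DATABASE [SalesDB] SET HADR AVAILABILITY GROUP = [BasicAG];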

Carlos: Now, I didn’t realize that. I didn’t realize that was a requirement as well, that you can only have one database in the availability group.

John: So yes, you can only have one single database inside of the availability group.

Carlos: Interesting.

Steve: So wait, just to clarify at that point, you don’t mean one per SQL Server; you’re just referring to a single database on that instance?

John: That is correct. So you’d have an availability group for each database out there.

Steve: So you can do the multiple databases, each one just has to be…

John: Oh yeah, yeah, each one is in its own. So you have the same experience; you’re not allowed to group the databases together unless you went with Enterprise Edition.

Steve: So then, if someone out there knows and loves mirroring today, do you think they’re going to be happy with the feature set in the basic availability groups in 2016?

John: Well, I think there are some big benefits that you don’t see today. Mainly, the biggest one that I really love is the fact that you can be asynchronous. Now you’re not forced to be in synchronous mode, so if for whatever reason you have some kind of latency, or you don’t have the fastest disks underneath your transaction log, you can be in asynchronous mode and a moment of delay will have less impact on your end users.

Carlos: Right. I think the big downside, however, is that it does come with the Windows clustering, and there is some management, or some potential politicking, that you’ll have to do with your network team in order to get that set up.

John: For sure, yeah, that always becomes a huge challenge, especially if you’re going to be doing multi-subnet, or basically having a stretched Windows failover cluster that spans multiple data centers. But there’s a really nice, probably my favorite feature in 2016 for availability groups: this new functionality called a distributed availability group.

Steve: Now I’ve heard that described as being like an availability group of availability groups. Is that the right way to think of it?

John: Yeah, it really is. So before, in 2012 or 2014, I would have to have a Windows failover cluster that would span all the way across the data centers that you want to use inside of that availability group. And what’s really cool now is that that’s no longer the case. You can actually have an availability group inside a Windows failover cluster in each data center, and then have an AG of those AGs. Some of the big problems that I’ve had with customers when they’re implementing availability groups for the first time, where they’re spanning multiple data centers, is just the fact that they don’t adjust the default heartbeat settings and thresholds. So when the heartbeats are going across, the latency is just too much, the cluster can’t keep up, and you end up with quite a lot of failovers. So the nice thing here is that you can basically have one Windows failover cluster in your primary data center and set up your AG there, then do the same thing in the other data center, and have an AG on top of them that uses the listeners as the endpoints between the AGs. It’s really, really cool. Not only does it help with that problem, but I see a lot of people using this in the future for helping with upgrades as well, because this actually allows you to completely isolate the Windows failover clusters so you can upgrade them independently: you can upgrade the operating system, or you can even upgrade SQL Server and move over, and make that process a lot more seamless for you.
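
As a rough illustration, creating a distributed availability group looks something like the following in T-SQL; the AG names and listener URLs are placeholders, and a matching ALTER AVAILABILITY GROUP … JOIN statement runs on the primary of the second cluster:

    -- Run on the primary replica of AG1 (the first WSFC).
    CREATE AVAILABILITY GROUP [DistributedAG]
    WITH (DISTRIBUTED)
    AVAILABILITY GROUP ON
        'AG1' WITH (
            LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC),
        'AG2' WITH (
            LISTENER_URL = 'tcp://ag2-listener.contoso.com:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SEEDING_MODE = AUTOMATIC);
    -- Then, on AG2's primary replica (the second WSFC):
    -- ALTER AVAILABILITY GROUP [DistributedAG]
    --     JOIN AVAILABILITY GROUP ON ... ;  -- same AG1/AG2 specification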

Steve: So that would be handy if you needed to take the time to completely rebuild the cluster on one hand I would assume.

John: Oh definitely.

Steve: So do you think that distributed availability groups would be more of a high availability or more of a disaster recovery option, or kind of a combination of both?

John: You could definitely get a combination of both, but it’s definitely a lot more there for disaster recovery than high availability.

Steve: Okay. So you lose one data center, you’ve got it all running at the second data center, and you can just fail over to that through the distributed AG concept.

John: Exactly.

Steve: Okay.

Carlos: So what about some of the new features that are coming to us in 2016?

John: So another one that I have used a lot recently, in fact I have a great blog post going over kind of beating this with a hammer, is what is known as direct seeding. One of the main pain points that people have had with implementing availability groups in 2012 or 2014 is the fact that your data has to be completely in sync in order for you to turn on availability groups, just like you would have with database mirroring. So for example, if you’re just going to have a simple setup of two replicas, whether you’re using the wizard, by default it basically takes a copy-only full backup and a log backup for you, sends them over, and restores them from a network share over there, so you have that data in sync in order for you to be able to join the replicas together. And while this can be very trivial for very small databases, it gets pretty complex when you’re working with a really big set of data. What direct seeding does is do all of this for you. It’s a fun feature that has actually been in Azure SQL Database for quite a while now with the geo-replication going on there. Basically, you have a database you want to make highly available, you turn it on, and it’s going to use that endpoint to do a VSS backup that’s sent over the wire and restored at the same time as it’s going. It’s actually pretty cool, and I’ve seen some really good results with that. I’ll make sure that we send over the link to some of the benchmarking I’ve done there.
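
For reference, here’s a minimal sketch of enabling direct seeding when creating an availability group; server, endpoint, and database names are placeholders:

    -- Run on the primary. SEEDING_MODE = AUTOMATIC turns on direct seeding,
    -- so no manual backup/restore is needed to initialize the secondary.
    CREATE AVAILABILITY GROUP [AG1]
    FOR DATABASE [SalesDB]
    REPLICA ON
        N'SQL1' WITH (
            ENDPOINT_URL = N'tcp://SQL1.contoso.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC,
            SEEDING_MODE = AUTOMATIC),
        N'SQL2' WITH (
            ENDPOINT_URL = N'tcp://SQL2.contoso.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC,
            SEEDING_MODE = AUTOMATIC);
    -- Then on SQL2, join and allow the seeded database to be created:
    -- ALTER AVAILABILITY GROUP [AG1] JOIN;
    -- ALTER AVAILABILITY GROUP [AG1] GRANT CREATE ANY DATABASE;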

Carlos: Yes, we’ll make sure that gets onto the website. Let’s see, we’re going to call this episode SQLdatapartners.com/2016HAfeatures. Now, do they give any guidance, John, as to when you bump into the threshold where you’re too big to use direct seeding?

John: So not really, but I’m glad you mentioned it, Carlos, because there’s a big caveat that people definitely need to know about if they’re going to consider using this in production, and it’s going to make a lot of sense once they hear it. It’s the fact that the transaction log on your primary is not going to be allowed to truncate until the direct seeding finishes.

Carlos: Oh wow. So the entire process has to finish, the database has to get over there, before the log can truncate?

John: Exactly, and it makes sense: because it’s a VSS backup in real time, you’re going to need all of the data that changed between when it kicks off and when it completes. So, a perfect example where I’ve done a good real-world test case with this was a 60 gig database, where it took about an hour and we were seeing about 1.4 gigabits of consistent throughput through the endpoints. Our actual limitation at that point was our storage. But it’s really, really good performance. I’m sorry, it was 600 gigabytes, not 60.

Carlos: So go ahead and say that one more time?

John: So in the testing that I’ve done with a 600 gig database, I saw that it took about 66 minutes, and we were running about 1.4 gigabits per second of consistent throughput. Our bottleneck at that point was actually the storage we were using that was persisting to the secondary.

Steve: So then, for that entire 66 minutes, the transaction log backups were not able to run and truncate the transaction log at that point?

John: That is correct, yes. That is one thing you definitely have to test out before you do this full-on in production. But it is now at the point where this can be very seamless for you, because even if you have log backups going all the time, there’s no jeopardy of the two replicas not being in sync. So it makes it really, really simple, and I think a lot of people are going to enjoy using availability groups, even if they ran into problems with this in 2012 or 2014.
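
If you want to watch this behavior for yourself, here’s a small monitoring sketch; the database name is a placeholder, and sys.dm_hadr_physical_seeding_stats is the SQL Server 2016 DMV that exposes seeding progress:

    -- While direct seeding runs, the primary's log can't truncate; the
    -- reuse wait on the primary will reflect the availability replica.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'SalesDB';

    -- Seeding progress and throughput:
    SELECT local_database_name,
           role_desc,
           transferred_size_bytes,
           database_size_bytes,
           transfer_rate_bytes_per_second
    FROM sys.dm_hadr_physical_seeding_stats;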

Steve: Okay, so then one of the things I heard a little bit about is the different load balancing options with the readable secondaries. Is that something you can talk about?

John: Oh yes, definitely. So there’s also this new round-robin feature. We’ve had read intent before in 2012 and 2014, and basically what that would allow you to do is set up a routing list, so every replica, when it became primary, knew which replica it was going to send read traffic to next, and if that one didn’t work, it would keep going down the chain. In 2016, we’re able to group these replicas. So for example, instead of just going to the second one, and then to the third and fourth on a failure, you can actually round-robin between replicas. So for example, if you had a scenario where you had five replicas in your AG, and we’ll say replica 1 was primary, you could tell it, okay, try this group of replica 2 and replica 3 first. And what it would do is round-robin between those two, so that if you connect using read intent in your connection string, it’ll bounce back and forth between replica 2 and replica 3, which is pretty cool. And from there, you have it go down and do the next one, whether that would be to try replica 4 or 5, or group them together where you could bounce between those as well. It’s pretty cool and allows you to do a lot of really nice load balancing with read intent, straight out of the box in SQL Server, without having to use any third-party products.
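
Here’s a rough sketch of what that routing configuration looks like in T-SQL for John’s five-replica example; the replica names are placeholders, the nested parentheses define the round-robin group, and each secondary also needs its READ_ONLY_ROUTING_URL set for routing to work:

    -- When SQL1 is primary, read-intent connections round-robin between
    -- SQL2 and SQL3, then fall back to SQL4, then SQL5.
    ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SQL1' WITH
    (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (('SQL2', 'SQL3'), 'SQL4', 'SQL5')));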

Steve: Oh wow, that seems like a pretty big win there for that load balancing feature.

John: Oh definitely.

Carlos: So I think the caveat, or maybe not the caveat, but in order to use that, in your connection string the application has to specify that read-intent parameter.

John: That is correct, yes.
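
For anyone wiring this up, a read-intent connection string looks something like the following; the listener and database names are placeholders:

    Server=tcp:ag1-listener,1433;Database=SalesDB;Integrated Security=SSPI;ApplicationIntent=ReadOnly;MultiSubnetFailover=True;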

Steve: Interesting. So then one of the things I guess that I’m really curious about here is what is the biggest set of data that you’ve been able to make highly-available with availability groups and how did you go about doing that?

John: Good question, Steve. So earlier this past year I actually had a great opportunity to take a database that was 80 terabytes in size. At that point in time it was basically a stand-alone instance, and we were able to make it highly available with two replicas in the primary data center, and we added some DR to it as well by adding a third copy in a second data center. We were able to use log shipping to get the data in sync, so that when it was time for us to implement, we actually didn’t need any downtime; the physical cutover took us about 30 seconds from start to finish to get the availability group replicas joined together. Now, the log shipping itself took quite a bit of work; it took about a week to get the data in sync to the second data center and get log shipping going, so all the data was fully in sync.
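
The cutover step John describes, joining a log-shipped secondary into the AG without re-copying the data, looks roughly like this; the names are placeholders, and the secondary copy must already be restored WITH NORECOVERY:

    -- On each secondary, after the last log backup has been restored
    -- WITH NORECOVERY, join the replica and then the database:
    ALTER AVAILABILITY GROUP [AG1] JOIN;
    ALTER DATABASE [BigDB] SET HADR AVAILABILITY GROUP = [AG1];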

Steve: To be able to switch over that quickly for 80 terabytes, that’s amazing.

Carlos: Right. I think that’s kind of an interesting idea, in the sense that you’re using multiple technologies to get to where you want to be. Some of these older features, even if high availability isn’t their primary role anymore, can be used to help you get to the more highly available solutions.

John: Yes, for sure. I mean, we did all kinds of real fun stuff that used to be cutting edge way back in the day. For example, checking that the data is in sync through log shipping, and using standby mode so you can query the secondary through log shipping to actually look at the data and compare it to the data that’s currently in your primary. So you know, you’re definitely right there, Carlos: it takes a lot of the great features that come with SQL Server, bridged together, to do some pretty cool stuff.
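
The standby trick John mentions is just a log-restore option; a minimal sketch with placeholder paths:

    -- Restoring a log backup WITH STANDBY leaves the secondary readable,
    -- so you can query it and compare against the primary between restores.
    RESTORE LOG [BigDB]
    FROM DISK = N'\\backupshare\BigDB\BigDB_log.trn'
    WITH STANDBY = N'D:\standby\BigDB_undo.dat';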

Carlos: Now, in all this coolness, right, is there a reason why we might consider not putting availability groups into our environment?

John: Oh, for sure, yeah. One of the biggest reasons that I see out there in the field is that if today you don’t have the infrastructure or the people to manage the processes for availability groups, it’s definitely not the solution for you. So for example, if you don’t really have a network administrator and the guy who does that role basically is like [inaudible], you may end up having quite a lot of network issues that make your high availability solution very unstable and unreliable, and down more than available.

Carlos: Yeah that’s an important concept, right, as we try to put in this feature we don’t then want to be the cause of the outage.

John: That is definitely correct. That reminds me of a client I was working with this year where, you know, they called me and said, “What’s going on?” and they were wanting some tips. I’m glad to help people, so I told them to try a few things here and there. And they did, and they called back a week later saying, “This is constantly waking me up at 2 a.m.; fix it for me.” So I ran a script that Tracy Jordan had presented at the High Availability Virtual Chapter that basically would go through and scan all the errors. And we could see there were over 30 failovers in the last week, mainly because of that scenario we talked about a little bit earlier: them having multiple data centers, and the heartbeat just not making it across with the latency and the activity that was going down the pipe. So just by jumping in and changing some settings, we were instantly able to eliminate those pain points. That’s just a perfect example: if you don’t understand the networking, if you don’t understand Windows failover clustering, this can be a pretty hard technology for you to grasp. In fact, going back to Episode 59, I would actually disagree there: I would say that replication is a lot easier to manage than availability groups if you don’t understand those core pieces of infrastructure that make your availability group successful.
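
This isn’t John’s exact script, but one rough way to hunt for recent failovers is to scan the error log for role-change messages; the search string below is just one pattern those messages tend to contain:

    -- Search the current SQL Server error log for AG role-change entries.
    EXEC master.dbo.xp_readerrorlog
        0,                 -- log file number (0 = current)
        1,                 -- log type (1 = SQL Server error log)
        N'changing roles'; -- search string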

Steve: Sure, and I think to add to that a little bit, it might not be that the people don’t understand those networking areas, but it might be that they are not permitted access to some of those, too. Where you’ve got the network person, conceptually on the other side of the wall, who sets up your cluster and doesn’t let you in on it.

John: Exactly.

Carlos: Sure.

John: I was going to say, yeah, you definitely have to have a good team, and everyone has to be pretty solid on what they’re doing. Availability groups, when they work, are phenomenal. But yeah, if you’re missing some pieces there in what you need, it cannot be a favorable solution for you.

Carlos: And ironically, in Episode 61, Russ mentioned a scenario where some firewall settings were changed, and he failed over and then couldn’t fail back because of the firewall changes.

Steve: Right, and that was a change completely unrelated to anything the database was doing. The networking guys made that change without even telling them; however, it had a big impact.

Carlos: Interesting. Well, good. Alright, great conversation on some of these features. I think I am looking forward to it. Let me say that I have had some opportunities to put up some availability groups, and you know, again, knock on wood, they’ve worked pretty well for me. I haven’t run into some of the same networking issues, but the future is pretty neat.

John: Yeah, and you know, the only way to really get in there is to get your hands dirty. In fact, one thing I recommend to a lot of people when they want to get started with availability groups is, you know, don’t just set it up and leave it like a lot of people do. Get a synthetic load tool like HammerDB or DVD Store. Get a workload that will just hammer it, doing lots and lots of transactions with inserts, updates, and deletes, and then go through and monitor and see exactly, okay, how is this working with how you configured it and the infrastructure you have behind it?

Carlos: Yeah. Good points. So I guess last thoughts about Availability Groups or Disaster Recovery features in 2016?

John: Just get in there and get going with it. There are so many added features in availability groups, and it’s amazing to me how they got through all those, not to mention all the other little parts of the product. There are so many things that we didn’t even go into much detail about, like the group managed service account support we now have, or DTC support, or even the increased number of automatic failover targets. Yeah, there are a lot of new things added over there in 2016 just for availability groups.

Carlos: I’m anxious to see what the future holds there.

Carlos: Should we do SQL Family?

John: Sure.

Steve: Alright, so with SQL Server features changing so rapidly with each new release, what kind of things do you do to keep your SQL Server skills sharp?

John: So I do a lot of different things. I love this question too, because there are so many different things people can do, and there’s no right or wrong answer. Just doing something is a great start, a great step towards having a good career. For me, I’m highly involved in the PASS community, so I actually run the High Availability Virtual Chapter and I run my local chapter here. That forces me to watch a lot of sessions and see a lot of different things, so I always love learning from that group. I also have a pretty nice home lab here as well: I have my own Mac Pro with 64 gigs of RAM, six cores. I have my own Synology storage NAS here that I can use with VMware to just beat up and play with any concepts that I want to learn and get my hands dirty on.

Steve: Yeah I can see how valuable that lab would be if you’re doing anything with availability groups.
John: Oh, definitely.

Carlos: So we’ve been talking about some SQL Server features tonight, but if there was one thing you could change about SQL Server, what would it be?

John: So there’s actually a lot; it’s kind of hard to focus on one, so I’m probably going to have to go with the one that I wish I had right now. And this would be with Distributed Replay. So I love hammering and replaying workloads; I do a lot of workload tuning, and I use it as a way to test things with high availability, like AlwaysOn availability groups. But one thing I wish would be added to it is the fact that right now you can only do a one-to-one replay. And I really wish there was a way I could take a workload and say, just multiply it by 10, so if you have a bunch of inserts with identities and selects and some updates, it could just scale it for us and allow us to magnify it and hammer that even harder with the actual real workloads that we have.

Carlos: I see, gotcha. It helps with those inserts, which can be troublesome.

John: Yep.
Steve: Alright so John what is the best piece of career advice that you’ve ever received?

John: So the best advice I have ever received is to find good mentors and then latch on to them. I’ve been very blessed throughout my whole career, even all the way from being the intern, to have some really good mentors who were able to help guide me and save me from spending time learning things the hard way, steering me down better paths instead.

Steve: That’s definitely a good point. I think there are a lot of things out there that people could do better if they had a mentor to help them along the way.

John: I couldn’t agree more. Yeah, that’s not really just SQL, that’s life in general. I’ve been blessed to see a lot through being a business owner and also a SQL professional. I’ve had a lot of great people around me to kind of help me become a better person and a better SQL person.

Carlos: John our last question for you today: if you could have one superhero power what would it be and why would you want it?

John: My wonderful superpower skill would be memory manipulation. Every year, you know, we have this wonderful conference called the PASS Summit; I strongly recommend people go. And there are always some things that, you know, accidentally get done that I wish people would forget and never knew happened. So if I were able to pick any superpower, it would be nice to have that one.

Steve: Would that somehow relate to SQL karaoke?

John: Definitely it could.

Carlos: John, thanks so much for being on the program with us.

John: Yeah, thank you guys for having me, it’s been a blast.

Steve: Yeah thanks John. A lot of good information today.

Episode 62: Ask me anything–Cortana Intelligence Suite

Have you heard the buzz around the Cortana Intelligence Suite? It seems like it is all the rage these days and Microsoft is coming out with lots of new features in this space. In Episode 62, Steve and I interview Melissa Coates, AKA “SQLChick”. We chat about the history of Cortana, about the tools it encompasses, and why you might implement parts of the suite in your organization.

Compañeros, listen to learn…

  • Which tools are officially part of the Cortana Intelligence Suite (there are almost a dozen!)
  • The history of Cortana, from Halo all the way to the digital assistant
  • The tools you need to get started
  • How to use the Cortana Gallery to jump-start your data project
  • Why predictive analytics is a hot trend that’s only going to grow
  • The skills you should cultivate if you want to implement Cortana
  • If Cortana will replace traditional data warehousing and best practices

Episode 62 Quote:

“So a DBA that’s fluent in scripting will likely do really well with a service like Azure Data Factory, which is JSON-based. Even for deployments from say dev through production using the ARM or the Azure Resource Manager templates, which are also JSON-based, will really probably benefit from a DBA-type being involved. ” – Melissa Coates on the people skills needed for Azure Intelligence Suite

About Melissa
Melissa Coates is a Solution Architect with BlueGranite, a consulting company that specializes in several areas including data warehousing, big data, enterprise reporting, business intelligence, and advanced analytics. She’s also a regular PASS speaker on topics like PowerBI and the Cortana Intelligence Suite. On her blog, Melissa writes about all things data intelligence. Connect with her on Twitter or LinkedIn.

Resources
Setting up a PC for Azure Cortana Intelligence Suite
Melissa’s Presentations
Follow Melissa on Twitter
What is Cortana Intelligence Suite?
Cortana Intelligence Gallery
Episode 60: U-SQL

Episode 62: Transcript

Carlos: So Melissa, welcome to the show!

Melissa: Thank you very much.

Carlos: Yeah, it’s great to have you. Another East Coast person. We kind of have East Coast versus West Coast as far as who we can get on the show; I think our East Coast guests are leading by just a hair. But ultimately, our conversation today is going to revolve around the Cortana Intelligence Suite. And of course, those of us who are running Windows 10, when we click down there and it says “Ask Me Anything”, are we using the Cortana Intelligence Suite, or is it more than that?

Melissa: Excellent question. So the Cortana Intelligence Suite is actually a huge collection of services in Azure for the purposes of providing big data and analytical solutions. The suite consists of: Azure Data Factory, Data Catalog, Azure SQL Data Warehouse, Azure Data Lake (which is actually a composite of three services), Azure Machine Learning, Stream Analytics, Event Hubs, PowerBI, Cognitive Services, the Bot Framework, and finally the Cortana digital assistant, like you just mentioned. So although they named the suite after Cortana, the digital assistant is just one small part. She originated as a character in Halo, as a smart artificial intelligence character that can learn and adapt. And then she was the inspiration for the digital assistant in Windows. And now this suite of tools is named after her because she symbolizes the contextualized intelligence they hope to achieve with the suite of tools.

Carlos: Interesting.

Melissa: So having said all that, there will absolutely be other Azure services as part of an overall solution which are not officially considered part of the Cortana Intelligence Suite umbrella and we absolutely expect that things like Azure SQL Database, Blob Storage, virtual machines, and so forth are really commonly used as well.

Carlos: That sounds like a super big umbrella, right? You have a lot of services and components in there, and you know, we’ve had episodes on some of those different pieces and I feel like, oh my gosh. Big data, you know, that expansion of functionality and services and technology. And I feel like Cortana has just blown that up even bigger. How do people even get started with that and decide, “Yes, this is something I could be using”?

Melissa: The idea behind it is building intelligent and automated solutions. For example, a company is interested in building a fraud detection system. Something like that would usually encompass several of those services within Azure. Um, and then another common thing that we’re seeing an awful lot nowadays is predictive analytics. So predicting credit risk, customer retention, hospital readmissions, when equipment will require maintenance. The opportunities are endless. So in terms of getting started, the ultimate goal is for there to be preconfigured solutions and templates that minimize the need to develop from the ground up every single time. Now, they’re in the early stages of that, but there are a few to be found in the Cortana Intelligence Suite Gallery.

Carlos: And so tell me, the gallery is just that framework of “here are those bits and components that you can take off the shelf and start using”?

Melissa: Correct

Steve: So to get started with those bits and components off the shelf there, I imagine that you can pick and choose from the features you described there. You don’t have to be using all of them in order to be working in Cortana, is that right?

Melissa: Correct, correct. For instance, there is a particular solution for equipment maintenance. So that might include something like Stream Analytics to pick up the telemetry data from the machine and send it to some sort of predictive database; also send it to PowerBI and machine learning so it can turn around and do predictions on the health of that machine and when it may need maintenance the next time. It definitely doesn’t mean that everything has to be used. The gallery evolved from an Azure Machine Learning gallery, so that’s most of what’s out there right now. But longer-term, the vision is for that to be built up more so than where it is currently.

Steve: So then, is it the idea behind the gallery is that those are the pieces you’d start with and then modify them for your environment? Or is it example pieces like the AdventureWorks database?

Melissa: Good question. So, both. What you can do is deploy some of these assets to your own Azure subscription, and then obviously you’d have to do some tweaks for your own credentials, etc. So yes, the goal is to give you some pieces to get started, an accelerator type of thing.

Steve: Okay, great.

Carlos: So then I guess take us through a scenario where, I guess you talked about fraud, right? But as you try to go and use some of this, one is identifying… my question is: organizations would start using Cortana because, one, they have a scenario. Two, how do they go about putting together those pieces to make it all work? I mean, people are involved, and you mentioned some of the technologies. I guess help me put together that recipe a bit more.

Melissa: I think you just asked about tools and people. You mentioned tools first, and there are a lot of them. There’s a post on my blog at sqlchick.com that talks about setting up a workstation for Cortana Intelligence Suite development. So in a nutshell, depending on the services you’re going to use, you basically want Management Studio, Visual Studio with some important extensions, and SQL Server Data Tools. There’s an Azure SDK which is important for some of the services. There are some tools that you might not install on day one that you’ll probably want at some point: things like Azure Storage Explorer or AzCopy or Azure PowerShell. And then there’s also an Azure Feature Pack for Integration Services if you need it. Lots and lots of different tools on the client side, in addition to some things over on the portal side as well.

Carlos: So Compañeros, we’ll have that link, in addition to some of the other articles on her site, at SQLDataPartners.com/cortana, and there you can go and click the links. It sounds like there are quite a few things you need to get set up, right?

Melissa: Yep, yep. And in terms of the people side of things, the way I see it is that since it’s such a wide variety of services and so forth, there are numerous skill sets that are really going to be needed, because sometimes these solutions get big and complex. So a DBA that’s fluent in scripting will likely do really well with a service like Azure Data Factory, which is JSON-based. Even deployments from, say, dev through production using ARM, or Azure Resource Manager, templates, which are also JSON-based, will really probably benefit from a DBA-type being involved. Conversely, a DBA traditionally isn’t likely to be as familiar with algorithms in the machine learning space, so the data scientist or statistician type of person is usually best qualified there. And then we’ve got a piece such as Azure SQL Data Warehouse, which is based on SQL Server but is on that MPP, massively parallel processing, architecture, which means that there are really some important distinctions to be aware of, which kind of justifies a deep data warehousing specialist. And then there are the portions of the service that rely on custom-coded solutions. So it’s definitely a team effort, since it’s so broad.

Carlos: No doubt. So you’re a Solutions Architect with BlueGranite. And it sounds like, if an organization is going to implement one of these, kind of hold on to your seats, because there are going to be a lot of people involved and it’s going to be fairly complex. Is that fair?

Melissa: It is. It is. And I know that there is definitely the paradigm of less administration in the cloud. However, when we’re moving data and we’re introducing a lot of services and they’re integrating with each other and we’ve got different security layers. You know, it just does get complex. That’s the way it is.

Carlos: So then, I guess if someone wanted to get into this- and there are a lot of different components – if someone has dabbled in the Azure Data Factory or maybe the Azure Data Lake components what would they need to be looking at to say, “okay, I’m interested in being the solutions architect or being more involved in this analytics and see if I can use this.” What skills sets should they be looking to get better at?

Melissa: That’s a good question. Usually when someone asks me something like this, I say something along the lines of, “What would make your normal day job better?” For example, putting analytics on top of data you’re already gathering. So figure out which use case would make the most sense for you, twofold: you’re learning, and you’re actually sort of helping yourself in your primary job as well. As far as going through the list, figure out which particular one falls into that category. In terms of a really obvious place to start, the easiest and fastest entry point would be PowerBI, because a lot of the other pieces are kind of a bigger endeavor in some cases.

Steve: Okay, so let’s say you’ve done that and you’ve tried PowerBI, and you have some data out there in an Azure data warehouse. Then what really makes this Cortana, versus just being able to pull that data out of your data warehouse in PowerBI?

Melissa: Fair enough, and you’ve got to remember that the Cortana Intelligence Suite is really a marketing term for the collection of these services.

Carlos: Duh-duh-duh!

Melissa: Yeah, yeah. So, whether or not you use services that are technically under the umbrella or not kind of doesn’t matter to me as long as it’s the right tool for the job.

Carlos: I feel like we’ve been duped a little bit, Steve.

Steve: No, by the marketing teams, right?

Carlos: Right, by the marketing teams. Cortana Intelligence Suite: it’s really just all these products that we’ve lumped together, then congratulations! You’re in the cloud.

Melissa: Well, now, in all fairness, let’s say that you really do want to integrate with Cortana on your Windows 10 machine. In PowerBI, you may have seen this thing called Q&A, or natural language querying, where you type into a text box “sales by quarter by division” and it renders a visual on the fly, right? The next step to that is you talk to your laptop and you say, “Hey Cortana! Show me sales by quarter…” and basically, if you’ve got your machine set up, you’ll get your PowerBI visual on the fly. That’s one of the first integrations we’re seeing through Cortana, a more auditory way of delivering it.

Carlos: So ultimately, Cortana will be the engine to interface and ask these questions through to get to the back end.

Melissa: I don’t know the answer to that, but that would be a really cool goal to shoot for.

Carlos: Interesting. So where are we in the spectrum of where this lives? Again, you just talked about some of the facets of using a couple of features here and a couple of features there. We know with Azure, things change pretty rapidly. For example, in Episode 60 we were talking about U-SQL, which is in this family umbrella as well. That’s still in preview; it came out at the beginning of the year. I guess, where do you get a feel for, I’ll just say, the stability of the environments and the functionality supported in the umbrella?

Melissa: Yep, yep. So you’re right, some of the services are still in preview, like the two Azure Data Lake services, although the third piece of Azure Data Lake is HDInsight, and that’s actually a pretty mature product now. Cognitive Services and the Bot Framework are in preview as well. So overall, a lot of the services are still pretty young, and integration between the services is still evolving as well. That definitely represents a little bit of a hurdle. And as we kind of alluded to earlier, the prebuilt solutions and those templates are all still evolving as well. And then, one thing that I think we BI folks are really feeling is that there are a lot of new design patterns associated with these tools and these cloud services, so best practices are still emerging. A really good example of this is Azure Data Factory. It’s an Azure data orchestration service, and it is absolutely not SSIS for the cloud. Because it’s so fundamentally different, the patterns that we follow and the best practices we follow are not the same, and those are all still evolving; that’s something that we’re kind of learning as we go.

Carlos: Sure. And I think even that whole idea of streaming analytics in that space does continue to evolve. I think from the traditional BI environment, the Kimball methodology and cubes and things like that, that is kind of being turned on its head, and the processing power that’s available via the cloud has a different way of slicing and dicing all of that.

Melissa: True, true. Even with data lakes in play as an overall data strategy, I still believe that there’s a place for a data warehouse: that predictable reporting, that organizational reporting, and the people that really just need to run parameterized reports and that kind of stuff. I still think the Kimball methodology is a sound framework for people to learn and so forth. But you’re right, we don’t necessarily put as much data in the data warehouse or in the semantic layer or in cubes anymore as we used to do in the past by default, basically.

Carlos: Sure.

Steve: So then when we talk about features, you mentioned that there’s a bunch of them in preview and a few that are more stable or more robust. Are there any features really missing at this point that if people are wanting to try this out that they should wait for?

Melissa: So, there are a number of things that, as you go down this road, you will find. One of my big suggestions would be to do a proof of concept or a small project to prove things out. For instance, one thing that comes to mind is something that we discovered a few weeks ago on a client project. So Azure Active Directory integration with Azure SQL Data Warehouse and Azure SQL Database has gone GA and is now supported for integrated auth. Well –

Carlos: I feel like there’s a but here…

Melissa: What we’ve learned is that this support only extends to SQL Server Data Tools and Management Studio. What we had wanted to do was use PowerBI as a front-end reporting tool in direct query mode against that SQL Data Warehouse or SQL Database using integrated authentication, and it’s just not supported yet. And that’s the sort of thing you want to learn early on. So: small POCs when things are young and growing and moving fast.

Steve: I think that proof of concept idea sounds like a really good way to start. Jump in, do something small, make sure all the steps work and we’ll at least know if it’s going to do what we’re looking for.

Melissa: Yep, absolutely. So I’m on a project right now where we’re basically laying the foundation for a data lake strategy, using Azure Data Lake Store. So we learned that, oh yeah, since it’s in preview, it’s not HIPAA compliant yet. So any HIPAA data, we need to make sure it goes over to blob storage, you know, all those sorts of things while it’s maturing and growing.

Carlos: Yeah, wow, that would be a lot to keep up with, you know, because certain features can only work with certain data sets… then all of a sudden you throw HIPAA compliance on there and it gets complicated really fast.

Melissa: Yep, absolutely. They did do a good job in the Azure Trust Center of, you know, documenting what’s compliant by each service, so we have a reference. But you’re right, there’s a lot to keep up with these days. It makes me crazy, but in a fun and good way.

Carlos: Well there you go. So I guess, what kinds of questions have you found most interesting about getting into the Cortana Intelligence Suite and what have you enjoyed most about working in that area?

Melissa: Yeah, so I kind of live to learn new things all the time. That’s kind of what makes me tick. And so it’s kind of opened up this big wide door of that, and coming from a BI developer background, I don’t necessarily have a deep background in installation and tuning and that kind of thing for the on-prem SQL Server world. So for someone like me, it’s almost a way to hit the reset button and say, “Okay, I can learn this cloud stuff kind of from the ground up.” And granted, a whole lot of our on-prem knowledge is useful, especially with platform-as-a-service; some things give me a leg up there.

Carlos: No doubt, very cool.

Steve: So then, if you’re going to jump into this, do you really need a whole IT team behind you? Or is it something anybody can jump into to start working with Cortana?

Melissa: Yeah, yeah, I think it depends on how far you want to take things. So for instance, a current customer of mine wanted to do three different subscriptions for dev, test, and prod rather than segregating by resource groups. Things like the ARM templates and deployments between environments; things like managing firewalls and security credentials; even scaling up and down, and just the logging and auditing and that sort of thing. And granted, a lot of that stuff is much, much easier in Azure, but it’s not normally the stuff that your business user wants to pay a lot of attention to.

Carlos: Sure.

Melissa: So I think that yes, IT is still a relevant role.

Steve: Yep. Okay.

Carlos: So I guess last thoughts about Cortana Intelligence Suite.

Melissa: Hmm. I got nothing.

Steve: Okay, well there you go.

Melissa: [laughs]

Carlos: Should we do SQL Family?

Steve: Yes, let’s do SQL Family. So with technology changing so quickly, how do you keep up with all the changes?

Melissa: It is really hard with the pace of change these days. And I’m a generalist, not so much a specialist, which makes it even harder. So although some people kind of think it’s old school, I do still have my RSS feed of bloggers that I follow.

Carlos: Oh wow.

Melissa: Hey! I’m also on Twitter but it’s hit or miss. I’m not a great multitasker so I don’t usually have it up during the workday so I use Twitter more for useful links and information. And then I also use the “Pocket” app so if I see a link somewhere on Twitter I’ll send it to Pocket to read later which sometimes becomes a bit of a black hole. I have good intentions when I file it away.

Carlos: If you could change one thing about SQL Server, and we’ll extend that into the BI space of course, what would it be?

Melissa: I forgot to tell you to nix this question. I couldn’t think of anything really good, sorry.

Carlos: Okay! You’re just so happy with it.

Melissa: Well, I couldn’t think of anything like super awesome. Like absolutely wonderful suggestion.

Carlos: So there’s no change? Like, “you know what, I really wish it would do this”? I want it to be the color purple; I want my database to be mauve.

Melissa: Hmmm. Nope, not that one.

Steve: Alright so let’s head into the next one. What is the best piece of career advice that you’ve received?

Melissa: Someone said to me years ago, “You’ll never be caught up,” and not to let that make you crazy. So it’s just about how to mentally deal with it, because there’s always too much to do and not enough time to do it. And I guess that’s why I’ve remembered that conversation many times over the years.

Carlos: So does that mean that it helps you specialize, focus on a certain thing, and not worry about the periphery? Or how do you go about taking that advice to heart, I guess?

Melissa: Personally, I need to make lists. If things are floating around in my head that does kind of make me crazy. But if I have a list and I know what’s the top priority, then I’m okay. I can have lots of things to do and not feel the need to work all weekend necessarily because okay, I’m organized and I know what’s coming next. I want to say, and I don’t remember this for sure, but I want to say he found me in the office on a Sunday when he told me this. Although now the question is, why was he in the office on a Sunday?

Carlos: Mysteries, mysteries.

Steve: Very good point.

Carlos: So Melissa, our last question for you today. If you could have one superhero power, what would it be and why would you want it?

Melissa: I think I would like the ability to selectively see the future, only the things I want to know of course.

Carlos: Interesting. How would you go about deciding what you’d want to know?

Melissa: Hmm. Good question. I don’t know.

Steve: That might just start with lottery tickets and go from there.

Carlos: You can figure the rest out at that point, right? Use Cortana for the rest.

Melissa: [laughs]

Carlos: Melissa, thanks so much for being on the show today.

Melissa: Thanks for having me, that was fun guys.

Episode 61: The Debrief

Have you ever experienced a post-project meeting that seemed more like a game of point-the-finger and less about lessons learned? We have! That’s why we brought Russ Thomas on the show for Episode 61, “The Debrief”. Russ gives the who, what, where, why, and how of a good debrief. He also shares his first experience with the debrief as a deputy sheriff, as well as how a debrief could be implemented in a corporate setting.

Episode 61 Quote
“It’s really good if you already have a culture that is good at getting together and celebrating your wins, because hopefully you can kind of meter those back and forth and hopefully you have more of the good ones than the bad ones. But in either case, if they only ever get together to talk about what went wrong or whose fault it was then yeah, those kind of get togethers really have a bad connotation.” – Russ Thomas

Listen to Learn…

  • The origin of the debrief and why it’s so important
  • The difference between a good debrief and a bad one
  • Why finger pointing is the last thing you want in a debrief
  • Recommended scripts that effectively end the “blame game”
  • How to make debrief attendees feel at ease
  • The only two questions you ask in a debriefing session

About Russ Thomas
Russ is a SQL Server DBA, blogger, speaker, and trainer in the Denver, Colorado area. He was formerly a Sheriff’s Deputy in Washington State, which is how he learned about the debrief. Follow him on his blog or on Twitter @SQLJudo.
Resources
Russ Thomas on LinkedIn
Practical SQL Server In-Memory OLTP Tables and Objects
Practical SQL Server High Availability and Disaster Recovery
Entity Framework Database Performance Anti-patterns
SQL Judo’s Monthly DBA Challenge
Debrief, It’s Important (Part One)
Debrief, It’s Important (Part Two)

Episode 61: Transcript


Steve: I’d like to welcome Russ Thomas on the show today. Russ and I go back, oh, quite a ways through various SQL PASS events. And I know I’ve heard him present on a couple of topics, and one of those topics is “The Debrief”, which is something I originally heard him talk about at, I think, SQL Saturday Portland a few years ago. I’d like to say welcome, Russ.

Russ: Thanks guys!

Carlos: Yeah, thanks for being on the show today.

Steve: So, with the topic being The Debrief, can you tell us a little bit about really what it is and why you’re so passionate about talking about it?

Russ: Uh, yeah. So, as Steve said, one of the ways we both know each other is we are both from the Pacific Northwest, at least recently, and we met a couple of times and hung out at SQL Saturday in Portland. My very first SQL Saturday presentation ever was on “The Debrief”, and where that came from is, um, before I returned to the private sector, I actually worked for the Clark County Sheriff’s Office. Clark County is right there in the metro area, and I know Steve is a volunteer EMT as well, and so we kind of started talking about that. In that environment, a debrief, or sometimes as they call it, an “after action report”, is pretty common. I can only imagine that it’s also very common in the military. I know it’s common in hospitals. Anytime you have, you know, kind of high-liability incidents that involve humans, you’re going to have situations where it’s really healthy to go back and look at what you did, look at the decisions that were made, and really try to learn how we can do this better. When I transitioned back to the private sector, it kind of surprised me how often there’d be big money-losing events, maybe sometimes hundreds of thousands of dollars, maybe even more, where, you know, you’d pick up the pieces and maybe servers would get put back together. Possibly there was an email that went out about, hey, someone needs to fill out a root cause analysis so that we can report up the chain what happened. But there was never any sort of organized debrief where the people who were involved could wrap their heads around what happened, learn from each other, and just work better as a team. Specifically for me, as a database administrator in an operations environment, I kind of saw this lacking. Or when it was done, I think it slipped into one of the main reasons people fear a debrief: it kind of turns into a “Hey, let’s find out whose fault it was” session as opposed to a “Let’s find out what we can learn and how we can improve as an organization” session.

Steve: And you know, I’ve seen it go that way so many times in the past, where it really just turns into a witch hunt to figure out who we can blame for the problem. And that leads to no one even wanting to be part of it.

Russ: Right.

Carlos: And that was my experience, actually, when Steve and I talked about the idea of the debrief. One of my experiences, the best example, was a big project, right, millions of dollars. Now, it actually ended up successful. So the idea was, “Let’s get back together and kumbaya, let’s review what happened.” And there was this one part, all about configuring the servers, where we were getting delayed in this one little section. I said, “Well, I could have done better in this situation,” and the knives came out. Afterwards, my manager, who wasn’t in the meeting, came up to me and was like, “Don’t you ever try to fall on the sword again!” And I thought, “Oh, well, why did we have that meeting again?”

Russ: [laughs] Right.

Carlos: And so I guess it would be interesting to kind of see your parallel. Maybe the current state of things with how things could be and how we could make that, those environments, better.

Russ: Yeah. And it’s interesting that you bring up, “Hey let’s bring everyone together to talk about how awesome we were with this successful project.” To me, that’s really important if you’re going to have the occasional debrief to decide why everything went to heck. It’s really good if you already have a culture that is good at getting together and celebrating your wins, because hopefully you can kind of meter those back and forth and hopefully you have more of the good ones than the bad ones. But in either case, if they only ever get together to talk about what went wrong or whose fault it was then yeah, those kind of get togethers really have a bad connotation.

Steve: And I think that’s one of the things that in agile methodologies they refer to as “the retrospective”, where they do it after every cycle. They do the equivalent of a debrief, and you get to a point where you can celebrate the wins as well as deal with the losses.

Russ: Yeah, I have only recently become familiar with, you know, agile and scrum and some of those methodologies. As I have seen those take place, I think that’s one area where maybe development, or at least DevOps with scaled agile, kind of has a leg up on your typical operations environment. And let’s face it, we’re all in IT, where we’re not necessarily blessed with an abundance of people with great people skills anyway. And I say that completely looking in the mirror as well. Sometimes I want to sit in my cube and be left alone and play with my toys.

Carlos: Sure.

Steve: So given all that then, why is the debrief so important in the operations or IT environment?

Russ: So, going back to my history and where I learned the value of it, I’ll tell you a quick cop story. If you go out to my blog at sqljudo.com, this was my very first blog post. So this and SQL Saturday is where I got my start in getting more involved in the community of our awesome SQL Family.

Carlos: And we’ll have that in the show notes, Russ, at SQLDataPartners.com/debrief. We’ll have the links up there for people to get at.

Russ: Nice! Cool. So we’d gotten called out to do a search warrant. We had to go out and arrest the guy for the DTF, because they wanted to arrest him and said they didn’t have anyone to do it. So we got the search warrant details, no big deal. Um, we get on scene and dispatch tells us, “hey, it’s very likely that they only speak Spanish in the house.” Now, I was a brand-new rookie on the team, very green, but the only Spanish speaker that could be found. So I was given the job of Mr. Microphone, which is the guy who issues the commands, directs people what to do, brings people out of the house and tells them… and this is kind of a critical job, because if you’re giving them poor instructions, they could end up doing something not to the liking of maybe the handcuff team, or the guys who are all spun up and not sure if they’re going to resist arrest or run or whatever, especially because I’m speaking in Spanish, so they don’t know what I’m telling them.

Carlos: Sure.

Russ: So I’m trying to communicate with the Sergeant on what I’m saying, then say it, then relay what they want me to say back and forth. Well, all that went off great. Um, but afterwards, once we’d gotten the guy out of the house and gotten him secured, my job was to transition into kind of a support role; you always have options for escalating force, so my job was going to be the less-lethal guy in case it really went bad. But because it was a high-risk warrant, there were people with guns drawn, and after it was done I ran over to my spot on the less-lethal position and walked right in front of all these guys with guns drawn, right in front of the suspect, and took my position. Um, we got the guy arrested, everything was safe. He was actually a very nice guy, very cooperative. So the sergeant says, “Alright! Let’s go in front of this tree.” And right away they were like, “Russ, you could have gotten someone killed, including yourself.” And I was like, I thought I was the only one who spoke Spanish? What did I say? And they were like, no, afterwards you walked right in front of all your fellow officers with guns drawn, and if it had gone bad, you would have caused them to hesitate because you were in a dangerous situation. You know, so I’m feeling two inches tall as a brand-new deputy, and I want to earn the respect of the guys I’m working with. But within just a few minutes after getting reamed, they’re calling out what a great job I’d done with the commands and how smooth everything else went, and a couple of guys pulled me off to the side and were like, you know, this isn’t going to be the last time you get yelled at. That’s what these things are for, but we’ve all been there, no big deal, you’ll learn, and next time you’ll know. And to me, it almost made me feel part of the club after the whole team made it obvious that this was very common, this was something that they did, and everyone would get their turn to be the guy who got used as an example of how we could do things better. Um, so for me it was almost kind of an endearing memory.

Carlos: Interesting. And I think to just apply this to the DBA space if you will. We’ve all done something kind of boneheaded. For example, I’ll never forget the first time I left a transaction open.

Russ: Heh.

Carlos: And all of a sudden everyone’s like, “Hey, we can’t get into the application. Whose ID is this?” And I’m like, that’s my ID. And they’re like, what’s going on? Oh, you left the transaction open. You gotta finish that. And then, oh okay, now I get it, ha ha, and you all move on. And it almost seems like the critical component of the debrief is being able to assess, this is an area we can improve in and this is an area we did well. How do you build up that culture so that you can get that feedback, and give it as well, since there’s almost a skill to it, to make it beneficial for everybody?

Russ: Yeah, I totally agree. And I bet you, Steve, and I could waste probably three hours on boneheaded moves we’ve done. How many times have you had that stomach-drop moment going, “Holy crap, am I connected to production?!”

Carlos: Or, there’s no WHERE clause?

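For anyone following along at home, here is a minimal T-SQL sketch of the open-transaction scenario Carlos describes, along with one way to spot the culprit. The table and values are hypothetical:

    -- Session 1: an update runs inside a transaction that never gets committed.
    BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;
    -- The script ends here without COMMIT or ROLLBACK, so the locks are held
    -- and other sessions block when they touch dbo.Orders.

    -- Session 2: list sessions holding an open transaction and when they last did anything.
    SELECT s.session_id, s.login_name, s.last_request_end_time, t.transaction_id
    FROM sys.dm_tran_session_transactions AS t
    JOIN sys.dm_exec_sessions AS s
        ON s.session_id = t.session_id;
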
Russ: [laughs] Right! So to me, when the debrief is done, if you are a person who has skin in the game, if you have someone putting pressure on you, or you feel like there is someone looking to cast blame and make sure they remove themselves, somehow you’ve got to go either high enough up the chain or maybe sideways and find someone who can mediate those first few, someone who absolutely has no skin in the game. They just don’t care whether there was someone at fault or, if there was, why. And it has to be someone with the interpersonal skills to be really good at this. Interestingly, one of the reasons my blog is called SQL Judo is that one of the skills they teach in law enforcement is called verbal judo. The whole concept is, how do you use your words to accomplish productive and positive results? And the interesting thing about judo is, how do we achieve a good outcome for everyone involved with as little damage as possible? Verbal judo is kind of the same way. I’ll give you an example that happened to us the other night. We lost our really critical availability group because, as part of some initiatives, some firewall ports were closed. And these firewall ports were closed weeks ago. But you don’t know that you need some firewall ports until you’re trying to do a failover.

Steve: Oh boy.

Russ: Once you enter that zone, there’s no guarantee that you’re going to be able to fail back. What happened is we lost the whole cluster and the whole AG; everything just collapsed. It took 12 hours or so to figure out what the heck happened, and we grabbed a couple of people, got everyone in a room, and said, let’s go back as far as we need to figure out what happened. And I was really, really pleased with the guy who was running the debrief, because he was very sterile about it. He was like, “and then on this date, firewall port whatever-whatever-whatever was closed.” At no point were any of the actions ever tied to a person, unless there was only one person in the room who knew the reasons or the justifications, but even then it was very clinically removed: the human who did it, versus the reasons behind it, versus the order in which it happened.

Carlos: Sure. Even something like that, right? They closed the firewall port because they got a ticket to do it. It wasn’t like they were thinking, “Hey, let’s see how many firewall ports I can close before somebody complains.”

Russ: Yeah, absolutely. And I’ll be honest, I myself went into the meeting a little bit looking for blood, because I lost a weekend I had planned to do something else with, and I had the business and everyone else up my chain on me because this whole environment went down. And you know, I’m like, “Hey, this ain’t my fault.” So I ended up having to rein myself in a little bit. But it was good to have a mediator.

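As an aside for readers: when an availability group replica loses sight of its partner, a blocked endpoint port being one common cause, the damage shows up in the HADR DMVs. A minimal health check along those lines might look like this:

    -- Connection state per replica; DISCONNECTED usually points at network or endpoint trouble.
    SELECT ar.replica_server_name,
           rs.connected_state_desc,
           rs.synchronization_health_desc,
           rs.last_connect_error_description
    FROM sys.dm_hadr_availability_replica_states AS rs
    JOIN sys.availability_replicas AS ar
        ON ar.replica_id = rs.replica_id;

    -- The endpoint port every replica must be able to reach through the firewall.
    SELECT name, port, state_desc
    FROM sys.tcp_endpoints
    WHERE type_desc = 'DATABASE_MIRRORING';
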
Steve: So let’s say that we’re at the point where we’re buying off on the idea of the debriefing and something happens. An event has just completed that causes us to want to have a debrief. What are the elements of that debriefing meeting?

Russ: Yeah, um, so first and foremost, the two big questions. One of my favorite questions that I have learned as a manager is, “What would perfection look like?” Establish right away what the objective was with the actions being taken, the outcome that was desired, and right away you establish that intentions were good. That immediately puts people at ease. You say right away, “Intentions were good. We wanted to accomplish something positive.” And then you can start deconstructing what kept us from achieving the desired outcome. Um, so the two big questions are, “What was the objective, what would perfection have looked like?” and “What kept it from being perfect?” And as you deconstruct those things, what was the timing? Break down those items individually, again as, “So, the firewall port was closed on this date as per this request. Knowing now what it caused downstream, the big thing is that we don’t wreck our AG cluster; that’s the big objective. With that in mind, what could we have done way back then?” And establish right away that obviously there are too many moving parts here; you wouldn’t necessarily know that it was going to happen. How could we have known, right? And to me, once you establish what the objective is and establish that everyone’s intentions were good, really deconstructing how you could have gotten to a perfect outcome kind of just guides itself at that point. You have to have someone in the room who knows the balance, because sometimes people need to vent. But when that venting becomes less than constructive to the objective, you say, “You know what? I will follow up with you later and give you the opportunity to vent. Right now, let’s focus on why we didn’t achieve this perfect objective.”

Steve: So is that the role of the moderator to be jumping in and doing it at that point? Or is that just other people in the team who are going to say, “I’ll follow up with you on that later?”

Russ: I think it works best when the people in the room are involved enough in the situation to self-moderate; I think that’s best. You have to have a highly functional team for that to work. Eventually, every conversation is going to devolve to the point of being unproductive, and people’s feelings are going to get hurt, or, you know, other unproductive results, so ultimately someone in the room is going to have to take responsibility for the debrief and recognize, “Okay, I need to step in here.”

Steve: Yep.

Russ: It would be fantastic if that was never needed, but yeah, at some point, whoever is calling the debrief, there needs to be someone whose objective is a good, healthy debrief.

Steve: Okay. So I guess the next question is, for that debrief to be successful, what are the things that you really need to focus on there?

Russ: Yeah, um, the items that you have identified that went wrong, right? You do have to dig in and establish what went wrong and why it went wrong. There’s going to be a human element where people make mistakes, and people made mistakes. It might even be serious mistakes, right? Unfortunately, that’s part of establishing why the ideal objective wasn’t met. If you as a moderator recognize that it’s something that can’t be done in a group setting, then it might be better to meet with people individually. There again, that’s where it’s really important that the person who is running that debrief doesn’t necessarily have any skin in the game other than the debrief itself, or has the trust of all the members of the team that the objective is to learn and grow as a team. But ultimately, you’ve got to be able to dig out those true, honest assessments of actions taken, what the objective was, and why it went wrong, even if it is a little bit painful.

Carlos: Russ, one of the thoughts I had was that one reason people don’t own up to “Hey, this is what went wrong” in a root cause analysis is because, in the end, there are going to be additional processes put in place as a result. So let’s take, for example, the simple transaction, keeping the transaction open, which was the example I used earlier. What happens when someone then dictates, “Oh, now we have to ensure that there is a commit at the end of every script that gets run” (see the sketch after this exchange)? And you’re like, oh my gosh, really? I feel like there’s now more work when I know what I did wrong and I can take steps to prevent that. I learned from it. But I don’t necessarily want to end up with this big process now as a result. Does that make sense?

Russ: Yeah, absolutely. And it’s interesting; that’s not where I thought you were going. But yeah, definitely, there is a fear of additional policy, additional red tape, additional procedures. Because any of us who have worked in any sort of production environment know: cowboy is always faster.

Carlos: [laughs]

Steve: Oh yeah.

Russ: It’s just that occasionally it goes wrong.

Carlos: Occasionally the horse bucks, you know?

Russ: Yeah. So yeah, absolutely. Um, you know, and how do you get around that? That is an interesting problem. I think, ultimately, it would be ideal if whoever is running the debrief could separate it from policy decisions. You know, there has to be trust…

Carlos: That’s an interesting concept. The goal of the debrief is not to come out with a, “here’s how to prevent this from ever happening again”?

Russ: Or at least, not to come up with preventing this from happening again via edict or policy. How do we prevent this from happening again via just learning from our mistakes or identifying if our existing policies caused it themselves? But oh it’s difficult, yeah. There are three major management styles and one of them is definitely, “well if I make an edict then it will be so.”

Carlos: Right.

Russ: Yeah, reality just doesn’t work that way, right?

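To make the edict Carlos describes concrete, here is one sketch of the kind of boilerplate such a policy might mandate: an explicit transaction with SET XACT_ABORT ON, so an error rolls the work back instead of leaving the transaction open. The table is hypothetical:

    SET XACT_ABORT ON;   -- any run-time error aborts and rolls back the whole transaction
    BEGIN TRANSACTION;

    UPDATE dbo.Orders
    SET Status = 'Shipped'
    WHERE OrderId = 42;  -- and an explicit WHERE clause, for the cowboys among us

    COMMIT TRANSACTION;  -- the mandated commit at the end of every script
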
Steve: Right. So, with the big crises I’ve had to deal with throughout my career, oftentimes they lead into the wee hours of the morning or even beyond that. You’re dealing with an outage, and at the end of it you get things working eventually. Sometimes it’s your fault and sometimes it’s cleaning up from someone else’s problem, but you get to the end of that and there’s sort of this state of uncertainty. Like, “Oh my gosh, we could have lost that part of the business if this hadn’t been fixed,” or whatever it was at that point in time. And that uncertainty can lead to a lot of stress for people who work in the IT environment. It’s stress that probably doesn’t need to be there long-term, but it kind of leaves people hanging and wondering, “Am I in trouble for this? Am I the hero for this? Did I screw something up? Or did I save the day?” I think that having that debrief really helps bring closure for those types of events so that you can then move on to the rest of your work or the rest of your life, whatever it may be, and not stress over it anymore. Have you seen that similar type of behavior?

Russ: I have, I have absolutely, because I have seen entire departments that really distrust each other. I hate to keep picking on security, but oh boy, what a tough situation those guys are in, where no one knows if they’re doing their job, right? We don’t know if we’ve blocked a good hack attack, or a business-ending virus, or some sort of encryption-for-ransom scheme. No one knows if those guys protected us from that. But boy, we sure know when we can’t talk to a production server because of a firewall, or we can’t use an application that we know would save us hours and hours and hours because it hasn’t been approved through the proper vendor list or what have you. So it’s very easy for departments like sysadmins and DBAs to not get along, or DBAs and the security teams. I have personally seen these debriefs actually bring those teams together, because you realize how hard each other’s jobs are. And when you ask, “What would perfection have looked like?” sometimes it is healthy to really recognize that, hey, the reason I did this was because I had three levels of management breathing down our backs saying we have to comply with this edict from an outside auditor or something. You know what I mean? And to kind of put yourself in their shoes. I’m going back to something you said, about working on an issue throughout the night where you’re completely frazzled and completely stressed out about it anyway. I think that’s another huge art to a debrief: when do you call it? It has to be recent enough, in close proximity to the event, that people remember what happened, but you kind of have to do it when the feelings aren’t really raw and people aren’t really down on sleep. Boy, there again, that’s another big balancing act.

Steve: So you wouldn’t want to necessarily do it after 18 hours of battling some database issue and everyone needs to sleep. But you might want to do it the next day when everyone’s rested and in the office.

Russ: Right, exactly. It never hurts to have a decent lunch catered and say, “Hey, we don’t want to interrupt your day, but we’re going to have nice little lunch catered. Come on in, get some food, and let’s figure out what happened and what went wrong.”

Steve: Yep, okay. I know one thing I’ve done in the past is referring people to your blog post. Just saying, “Read this over, and let’s talk about doing the debrief after you read what Russ has posted there.” And I know that’s been incredibly valuable for a couple of people I’ve talked with. So thanks, I do appreciate that. I guess before we move on to the SQL Family wrap-up end of it, is there anything else you want to follow up with on the debrief?

Russ: Uh, no, just the realization that in the real world, they’re hard. Even for me, when I’m a fan of them, when it comes time to call one it’s like, “Oh man, my goodness. I’ve got fifty things on my plate. My coworker’s got fifty things on their plate. We talked about it in email, isn’t that enough?” And you know, frankly, you can overdo it, right? You can micro-analyze everything to where people are afraid to move. But it is important, if nothing else just for establishing that people are human and that you’re allowed to make mistakes; even if for nothing else, that. I think that’s super valuable. But at the same time, right, sometimes you do identify that maybe you have a team member who really isn’t a great fit for their job, or they don’t have the competency, or they don’t have the integrity; unfortunately, I’ve seen that as well. You have someone who just can’t be honest about their own skillset and the way they do things. They’re going to find a reason to make it someone else’s fault, even though everyone else in the room knows that isn’t the case. That usually leads to a different conversation that doesn’t have anything to do with the debrief, at least directly.

Steve: Yep. It’s sad when integrity is a bigger issue than the competency. But it happens.

Russ: Right, or integrity becomes the bigger issue than even the event that brought it out. You’re going to lie about submitting a ticket or a change over a stupid two-row update statement? Come on now.

Steve: That comes back to what you said in the very beginning about having trust in the entire team. If you don’t have that, it doesn’t really work very well.

Russ: Yeah. It is definitely a balancing act.

Steve: Okay, so we generally have the SQL Family questions at the end here, so let’s just jump into that. The first item: technology changes so quickly. What kind of things do you do to keep up with the changes?

Russ: Um, well, I’m one of those lucky people who does something I really like for a living. So me, I kind of just nerd out watching Pluralsight videos on technologies I’m into. You see me at SQL Saturdays.

Steve: Oh yeah.

Russ: Typically at least one session I go to will be something I’ve never heard of, because anytime I see something I haven’t heard of, in the back of my mind I go, “What if my job requires that?” Natural curiosity doesn’t hurt. But just getting out there and uh, as selfish as it sounds, one of the reasons I like getting involved with teaching at SQL Saturdays is because trying to get in front of people and act like you are any kind of intelligent about a topic is a really good motivator for getting intelligent about a topic.

Steve: Oh yeah, that is so true.

Russ: I sheepishly admit that I’ve submitted sessions I know nothing about in order to motivate me to really learn a lot about it.

Steve: Yeah, that’s a really good way to learn. I know I’ve done the same thing.

Steve: Very good. Russ, if you could change one thing about SQL Server, what would it be?

Russ: Uh, more Pokémon.

Carlos: [laughs]

Russ: Um, so that is a super interesting question. Um, I see where our industry is going as a whole, right? And the people that have the complaints about SQL Server being old school. You know, relational; there’s still a reason that people want data integrity and relational sets. I was a big fan when JSON came out, so if there was one thing I’d change about SQL Server, and I realize that this is nigh impossible, what with backwards compatibility and all those other things, I never would have gone down the XML route with SQL Server, with Extended Events or any of the metadata. I would have gone JSON from the very beginning, and I’m glad to see it catching up now. I would love it; I know there are a lot of people out there who cringe at the thought of a full JSON parser shoved into the SQL engine, but…

Carlos: Me being one.

Russ: I mean instead of all the XML parsing and all that stuff. It’s too bad that those worlds didn’t come together sooner where Microsoft could have jumped on that bandwagon because I do like the simplicity of the JSON approach.

Steve: Yep. Well you know it’s interesting because it took so long to get the full XML support in there to begin with. The full XML functionality wasn’t there until SQL Server 2000, and it wasn’t an actual native data type until I think 2005. So now JSON is just coming out at this point. I think it’s got a bit of catching up to do there. I’m excited to see where it goes.

Russ: Yeah, and I know that’s a sensitive topic for a lot of people, but there are a lot of developers who would rather package their small datasets into a JSON object and store it that way. A lot of the time I’m of the mindset that if you can’t beat ‘em, join ‘em. And if you only have a small handful of items, whatever, put it in a document. I think that’s one area where Postgres really gains a lot of fans: it’s a little more mature in some of that area. Though it’s completely less mature in almost every other area, let’s make that clear. I’m still a Microsoft database stack guy.

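For context on the feature Russ is describing: SQL Server 2016 ships JSON support as built-in functions over NVARCHAR text rather than a native data type. A quick sketch with made-up data:

    DECLARE @j NVARCHAR(MAX) = N'{"name": "widget", "tags": ["red", "small"]}';

    -- Pull a scalar value out of JSON text.
    SELECT JSON_VALUE(@j, '$.name') AS name;

    -- Shred a JSON array into rows.
    SELECT value AS tag
    FROM OPENJSON(@j, '$.tags');

    -- Go the other way: return a result set as JSON.
    SELECT name, object_id
    FROM sys.tables
    FOR JSON PATH;
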
Steve: Yep. Alright, so what is the best piece of career advice you’ve received?

Russ: Um, this one took me a long time, and when I finally got over it, it really helped my career. And I will tell this to anyone who works for me: your company views you as a resource. That sounds kind of harsh, but I’m talking about the sterile, nebulous, soulless organization of your company. Your boss is a human being and might be friends with you, but the organization itself just sees you as a resource, and it’s not just going to hand you more than it has to in any given moment, right? Think of it like a process: the computer isn’t going to just hand you more CPU and more disk space unless you ask for it. The best career advice I ever got was: be honest about your own value, and be honest with your boss about it. I’m not saying go in and say, “Hey, boss, this is what I’m worth. I demand a raise.” But at the same time, when you are out looking for a career and you’re going through the interview process, no one is surprised when you say, “Hey, this is what I think I’m worth.” Um, you know, and for people like me, it was almost a taboo subject; I was like, oh, you know, pay me whatever you think I’m worth. It’s going to be like ten dollars. So there again, it is kind of a sensitive subject, but I’ve found in negotiations over things that I need, negotiations over my career goals, negotiations over training, I’ve never once had a boss offended that I asked for money to go to PASS, or asked for money to go to SQL Saturday in a city other than my own, or because I asked for a subscription to Pluralsight. And that’s really where I’m going with all this. Now, they don’t always say yes, and that’s okay, but I’ve never had a boss say, “How dare you ask that?” So for all these people wishing that these things were true in their place of employment, I would say: have you asked? And have you put in enough effort that you could make a case for it, right? This is how it would benefit the company, and this is how it would benefit me. Once I started asking for things, I was surprised, absolutely floored, at the number of times the answer was yes.

Steve: Wow. That’s a great, great piece of advice.

Carlos: Russ, our last question for you today. If you could have one superhero power, what would it be and why would you want it?

Russ: So, I’m going to reveal way more of my nerd side than the Pokémon comment from earlier. I have kids, so I have a built-in excuse. Me and my kids, the original Avatar, not the planet with the blue people, but Avatar: The Last Airbender. Me and my kids watched that show religiously when it came out. We knew: Nickelodeon, Tuesday nights at 4. When it came out in the boxed set on DVD, we bought the entire boxed set. I know that show inside and out, my kids know that show inside and out. I still think it was one of the best things ever put on TV. I would be an airbender. I would have airbending powers. I would sky surf and I would do all kinds of stuff.

Carlos: Very cool.

Steve: Nice, I like that.

Russ: And all my kids would tell you, at the drop of a hat, what they would be. I’ve got a daughter who would definitely be a waterbender. My son would be a waterbender. Yeah. We all know.

Carlos: Sure, sure. Now why would you want to be an airbender?

Russ: Just the ability to fly and just, yeah.

Carlos: To move around.

Russ: Yeah, just the ability to, the freedom. I love watching American Ninja Warrior, and I wish I were 80 pounds lighter and 20 years younger, because I would totally be on American Ninja Warrior.

Carlos: There you go. Well Russ, thanks for being here with us tonight.

Russ: Yeah, thanks, guys.

Steve: I think that wraps it up. Thanks so much.

Episode 61: U-SQL

Welcome back to the SQL Trail, Compañeros! In episode 61 of the SQL Data Partners Podcast, we sit down with Richmond local AZ and talk about U-SQL. What is U-SQL? As part of the Cortana Intelligence Suite, it plays a role in the big data analytics space, primarily in the Azure environment. We break down some of the myths of U-SQL, then discuss the use cases that make it such an effective addition to your database toolbox. While we don’t think most DBAs will be touching this technology anytime soon, those with C# skills will be interested in checking it out. Knowing where U-SQL fits into the environment will help when the subject is brought up in your next team meeting.

Listen to Learn…

  • The common use cases for U-SQL
  • How U-SQL relates to T-SQL and C#
  • The connection between U-SQL and big data
  • How to scale data using U-SQL
  • Environment requirements to get started

Episode 61 Quote
“[It’s] an extension of the programming language that I can use my C# and my SQL skills together and that’s what I get with U-SQL.”

Show Notes

About AZ
AZ is a Data Architect who works for the state of Virginia. He is a member of the Richmond, Virginia PASS User Group and helps organize the SQLSaturday there.

Resources
MSDN: U-SQL Language Reference
Meet U-SQL: Microsoft’s New Language for Big Data
U-SQL Team Site
Introducing U-SQL – A new language for Massive Data processing
U-SQL and the Azure Data Lake

Podcast Transcript