“The move to the cloud is inevitable.” Listening to Microsoft marketing and all the MVPs out there, it can certainly seem like we are getting further and further behind each day we aren’t using cloud services. Sure, lift and shift of VM resources is straightforward at this point and it is happening with more frequency; however, this doesn’t mean everyone is leveraging cloud services. How do you go about trying to prepare for the future when the company you work for isn’t quite ready to make the leap? In this episode, we talk with Jim Donahoe about making the transition from our traditional SQL Server environments to using Azure SQL Database. We discuss some of the basics and some of the challenges you will face as you make the migration.
It can be a bit overwhelming to think about all the other services available in Azure. Lift and shift gets us to the ‘cloud’, but if we want to start doing the cool stuff, we will have to make our data available to start interfacing with these services. We know that cloud providers are pushing more hybrid models, as not every system is ready for the cloud. New development initiatives make better candidates for the cloud services, and this episode covers the first steps to help you on the path to cloud services.
“Microsoft takes care of all of the back-end work of creating the DNS, getting you pointed, opening the ports, all of that good stuff is already baked right into that install.”
“It takes some of the administration side off of it, and some people get a little uncomfortable with that, but I like to think of it as they’re taking some work away from me so I can play with the more advanced features.”
“The way I always think of an elastic pool is that it’s mainly a cost-savings feature.”
“You actually can do a cross-database query, it’s just you’ve got to set it up a little bit different.”
“If my database is going to get hammered with an archive procedure, I can scale [it] out to a Premium tier and never have it go down to my users or to my application. Then once the operation’s done, I can scale it back and again, nobody ever knows except you.”
Listen to Learn
01:53 Compañero Shout-Outs
04:08 Intro to the guest and topic
06:09 The Azure SQL Database nickel tour
08:09 Jim’s magic trick…nothing up his sleeves…poof! Server!
11:06 DTU calculation?!
12:40 Think of elastic pools as a cost savings feature
15:13 Cross-database queries in AzureDB – Jim has templates he can share
18:33 You can’t just drop an existing application into Azure SQL Database and expect it to work
20:46 Jim’s favorite option is transactional replication
23:47 Not all the services are compliant with Active Directory, so what about security?
25:45 Noisy neighbor issues with Azure SQL Database? Jim’s never seen it.
27:02 Will there be performance impact with new features being deployed?
29:32 Do some data profiles work better in Azure SQL Database than others?
32:44 What is currently not supported
35:25 The moral of the story
36:46 What it means to get stuff back when things go pear-shaped
38:34 A few more features of Azure SQL Database and some missing features
40:09 Be ready to learn on the fly
41:12 SQL Family Questions
45:54 Closing Thoughts
Jim is a Cloud Solutions Architect with RDX. He lives in Pittsburgh in a small suburb. He loves working with all things Azure (ARM templates, DBaaS, SQL on IaaS). He is an avid collector of retro video games! He is a member of the Idera ACE program for 2018.
Carlos: Compañeros! (Spanish) Sorry about that guys, I’m here in Costa Rica, recording this intro for Episode 139 and thanks for being with me today. I’m here in Costa Rica doing a couple of things that I hope I can announce in the next couple of weeks. I’d love to get some input on some of the things that we’re trying to do and hopefully that will all get ironed out here, shortly. Our topic today is Getting Started with SQL Server in Azure and this topic was suggested to us by Douglas Kemp. We’ve enlisted the help of Jim Donahoe, who is a database administrator and has been doing some things with Azure, so we thought we’d bring him on and talk with him about what it would be like to get started and that onboarding process, if you will, with some of the Azure services. Admittedly, there’s a ton of Azure services, so we’re going to be focused mostly on Azure SQL DB and some of the overlapping or connected services, so just so you’re aware, that’s our focus there.
I do have a couple of Compañero Shout-Outs. I want to give a shout-out to Miquel Moroyoqui from Mexico, Kevin Evans and big shout-outs to Kevin Wilkie and James Rhoat, who are helping chat with me about EMRs. Both of them work in healthcare and we were chatting a little bit about some of the different ones, and so I appreciate their time and energy.
I do want to bring up a little bit that the SQL Trail Conference continues to come along. We’re excited about the folks that we continue to add, having our Friday lab. One of the other things that I’d like to get some input on is what should we be doing for our Wednesday evening event? Part of my thinking here was, last year as we were talking, I wanted to have a get-to-know-you-type event, so it’s outside of the conference, a bit more social. That way, when we jump into the sessions on Thursday, people will be a bit more familiar with each other and they’ll be able to feel confident asking questions because you’ll know people already. You won’t feel like you’re interrupting quite as much. One of the interesting things that happened was several of the attendees had actually been trained in cooking or taken courses or whatnot. And I thought, wow, wouldn’t that be interesting to do a team building exercise, if you will, and make a meal together? I’ve reached out to a couple of places here in Richmond that will allow us to do that and I guess now I’m curious, is that something we should pursue, or should we just do more of a traditional dinner where we go order and eat? Last year, we actually went out onto the boat, the Spirit of Norfolk, ate on the boat and had that as a social event. Obviously, there’s no boat in Richmond that will let us do that, so I’m curious if you think that’s a good idea or not. Let me know, hit me up on Twitter, social media or email at [email protected] and I’d be very interested in hearing your response.
For SQL Server in the News, this episode is mostly about the features in Azure and several of the features that are in preview, so we’re just going to leave it at that for this episode. The show notes for today’s podcast will be at sqldatapartners.com/azure or sqldatapartners.com/139. So with that, let’s get into the conversation with Jim.
Carlos: Okay, Jim, welcome to the program.
Jim: Thanks for having me.
Carlos: Yes. Also today, we’ve kind of been neglecting you guys, these mysterious panelists that we have as well. We’ll go ahead and bring you out a little bit more into the light. We’ve got Eugene and Kevin on with us again, today. So welcome, guys.
Kevin: I think you mean Professional Podcast Guest Kevin Feasel.
Carlos: Yes, you’ll forgive me for missing that official title.
Kevin: I don’t get paid, so I get the title.
Carlos: Yes, fringe benefits.
Eugene: I’m still in the minor leagues.
Carlos: Yeah, we’ll have to work on a title for you. The compañeros have spoken, have given you a title, but I decided not to pass that on, Eugene. I’m just kidding!
Eugene: My mind is racing to how people try to rhyme with blue jeans. I’m trying to think of any weird high school taunting I might have gotten.
Carlos: But you do have a little support tonight in that Jim is also from the Pittsburgh area and so we’re happy to have him on the program tonight.
Kevin: Tonight’s podcast will be dubbed in Yinzer.
Carlos: You’ll have to excuse them, Jim.
Eugene: That’s great.
Carlos: So, maybe I won’t introduce you guys in the beginning.
Jim: I was trying to think of a good word to call them, but there was a news article recently about the one councilman running and what was considered a profanity or not. We won’t test the waters, there, but just know, Kevin, you’re going to get a strongly worded email.
Kevin: It won’t be the first, today.
Jim: You know what, actually I’ll see Kevin next week and I’ll just tell him to his face.
Kevin: I’ll see you in court.
Carlos: Okay, our topic today is this idea, and again, this is an idea from Douglas Kemp. He wanted to do some comparing and contrasting. Azure’s been getting a lot of hype and, of course, it’s changing all the time. We’ve heard about Azure SQL Database and that’s kind of been there, but that idea of transitioning from what we know and love when we installed SQL Server to using the Azure services is a bit bumpy, to say the least. Particularly for some of the things that we’re used to. Ultimately, our conversation tonight is going to be centered around that. I guess maybe to kick things off, Jim, why don’t you give us the nickel tour, particularly as we think about Azure SQL Database, because there are, I don’t know, a million and one different services. We’re kind of focusing it more to the traditional SQL Server platform services. Why don’t you take us through and give us the nickel tour on some of those services and some of the things we’re going to talk about.
Jim: Sure, I’ll do my best, here. One of my favorite things to do with AzureDB is just spin it up and go with it. There are three basic performance tiers: Basic, Standard, and Premium. For most of the stuff that I do, I stick with Basic and Standard because Premium’s pretty expensive. With the Premium side, you get the solid-state storage. Otherwise, with Basic and Standard, it’s network-attached; however, it’s still pretty awesome equipment you’re working with. I’m trying to think of how to give a tour without visually showing.
Carlos: Yes, that is one of the downsides to our audio format. But I guess to equate that back to what we would install, when we talk about Azure SQL Database, we are literally talking about just one database. Right click, new database and you give it a name and that’s it.
Jim: Right, remember, though, each database has to go on a logical server on Azure. One of my favorite things with Azure is that I can do most things through PowerShell. If I want to install a new logical instance somewhere, I have a canned PowerShell script. I just fill out some parameters and bam, it’s done. Now sure, you can do that on an actual VM or physical box, too, but one of my favorite demos to give when I go out and do this in the real world is I like to make a magic trick. I clap my hands together and show there’s nothing up my sleeves. I show that these are all my resources and this is everything that’s installed, and I run my script, and two minutes later or ten minutes later, depending on how busy the data center is, I have my server and it’s ready for me to put a database on there. Microsoft takes care of all of the back-end work of creating the DNS, getting you pointed, opening the ports, all of that good stuff is already baked right into that install. The only thing I’ve got to do is throw my IP address into the firewall and it lets me right in. That’s the same thing with your database, too. Once you create your first database, Microsoft is saying, “you know what? We’ll take care of your backups and your integrity checks and we’ll make sure this thing is 99.99% available throughout the year.” That takes so much weight off your shoulders that now I don’t have to worry about did my backup run or did DBCC find an error? I don’t have to worry about those things because Microsoft has got it. That’s one of the beauties of it. I get to focus my time more on “well, why is this query running slow?” or “what’s chewing up all my DTUs?”, that kind of thing. It really makes it a lot of fun that way. It takes some of the administration side off of it, and some people get a little uncomfortable with that, but I like to think of it as they’re taking some work away from me so I can play with the more advanced features.
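For listeners who want to try Jim’s firewall step themselves, the rule can also be added in T-SQL once the logical server exists. This is a minimal sketch; the rule name and IP addresses are illustrative placeholders, not values from the episode:

```sql
-- Run in the master database of your Azure SQL logical server.
-- 'OfficeIP' and the addresses below are placeholders.
EXECUTE sp_set_firewall_rule
    @name = N'OfficeIP',
    @start_ip_address = '203.0.113.10',
    @end_ip_address = '203.0.113.10';

-- A rule scoped to a single database is also possible (run in that database):
-- EXECUTE sp_set_database_firewall_rule N'AppServer', '203.0.113.20', '203.0.113.20';
```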
Carlos: Okay, so I feel like that’s fair. You start with one database and great, it’s all roses. But then you add a second database and I feel like “okay, now things have gotten complicated.”
Jim: Now it’s gotten real, huh?
Carlos: Now it’s got real, that’s right.
Jim: Really, when you add the second database, each database pretty much gets its own DNS pointer, so if you look at your database name when you connect to it, it’ll actually be, if we’re using Adventure Works, it would be adventureworks.yourservername.database.windows.net. Which is weird if you look at it in the grand scheme of things because it should be the other way around. But that’s one of the odd things that I’ve found. Once you start adding more and more, you can always start looking at one of my favorite features called the elastic pools so you can try to save yourself some money. You get to work with elastic queries. You get to watch all the databases chew up all of the resources and try to figure out what’s going on, or everybody goes down and you’re kind of sitting there, “I didn’t do it.”
Carlos: Sure. I guess I want to go through some of those, to explain or flesh out some of those ideas. You talked about elastic pools. When we spin up the database, you mentioned the tiers, so very similar to if you’ve used Azure at all, one of the challenges is that now you’re going from the traditional “how many CPUs and how much memory do you want?” to “you have to choose this DTU. How many DTUs do you want?”, which is kind of this weird calculation of all of that stuff.
Jim: Yeah, I like to think of that as the magic bullet or the magic number theory with Microsoft, because I can never actually find a straight answer on what exactly a DTU is. A good friend of mine, Andy Mallon, wrote a really good blog about it. If you search Google for ‘What the Heck is a DTU’, odds are you’re going to come across his blog, and he really dug into it and gave the best findings on what a DTU really is. Generally, it’s just a summed total of the resources you’re going to be governed with. It’s your RAM, your CPU, your IO, and they make up this DTU number to say, “you’re going to get this much with this package,” even though you really don’t get to see it. They do have something in preview called the vCore model, which is supposed to make it a lot easier to understand what you’re getting for your money, but it’s still in preview.
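Since a DTU blends CPU, IO, and memory, one way to see which component is actually being consumed is the resource stats DMV inside the database. A quick sketch (the TOP count is arbitrary):

```sql
-- Run inside the Azure SQL Database; roughly one row per 15-second window.
SELECT TOP (20)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent,
    avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

If one column sits near 100 percent while the others idle, that is the resource your DTU ceiling is really made of.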
Carlos: Right. Yeah, so again, another way of licensing that so you can actually feel like you’re looking at those CPUs.
Carlos: Getting back to that, so now we’ve added one. We’ve chosen whatever, 100 DTUs, and now we add a second database. On my on-premises instance, both databases can use the CPUs and memory that I’ve thrown at it and no one’s the wiser. But in this case, they can’t; each is limited to its 100 DTUs. Then this is where this elastic pool idea comes in.
Jim: Right, so the way I always think of an elastic pool is that it’s mainly a cost-savings feature. Essentially, one of my older clients was my former company, and way back when, with Azure, you had to have a Premium database in order to access columnstore indexing. We had like 52 databases throughout the subscription, so 52 P2 databases that were barely 5 GB apiece. It added up to like $53,000 a month. That’s just our database spending; that didn’t include any of our other stuff. So elastic pools came out and we said, “you know what, let’s try this out.” We still got our Premium features, but they all shared resources, so I threw them all into a pool. They all shared a pool of resources, which cut our bill almost in half, so our bill went from 53 down to like 32 or something like that. We saved like 20 grand a month, which overall was a lot of money at the end of the year. “Hey, I saved you 20 grand a month times 12. I’m not a math wizard, I’m a DBA, so you guys can do the math.” But you know, it was almost a quarter million dollars we saved just by moving to an elastic pool, so it really made things easy.
Kevin: And yet, when bonus time comes around, they won’t even give you, what, 20% of that?
Jim: I’ll just take 20, that’s fine. I’m not greedy.
Carlos: Going back to our example, that idea that each database has 100 DTUs: I can put them in a pool and only pay for 150 DTUs, for example. But then each database can use whatever it needs, so if one database needs 125 in the morning and the other one is only using 25, I get that sharing without having to bottleneck at the per-database DTU level.
Jim: Exactly. Yeah, that’s the beauty of the elastic pool was that it really works with those workloads where it might scale up high at one point and come back down. Instead of scaling your database out each day to different levels, you put it in an elastic pool and it just adjusts your workload for you. That’s one of the great parts of it.
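Once a pool exists (created through the portal, PowerShell, or the CLI), moving a database into it is a one-liner. A hedged sketch with illustrative names:

```sql
-- Run in the master database; 'AdventureWorks' and 'MyPool' are placeholders.
ALTER DATABASE AdventureWorks
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = MyPool));
```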
Carlos: I feel like part of the ease that they give us with the service, they make up for in architecture thinking, which traditionally we haven’t been great at. You mentioned elastic pools; that’s something else you have to add on. It’s like, you’ve created these two databases, so now I have to create the elastic pool. “Oh, you want to run a query between the two of them?” Let’s just say one of your databases is accounting and the other one is some sales system, and you want to run a query that has to go over both? Oh, well, that’s another configuration. That’s another architecture decision. I feel that’s where it can get a bit complicated pretty quickly.
Jim: It does. A lot of the time when I’m doing assessments or when I’m having a meeting with a potential client about moving to Azure, almost every single time, somebody’s asking about the ability to do a cross-database query. Not even an elastic query; they always say, “I know we can’t do cross-database queries in AzureDB, so what do we do?” That’s not the case. You actually can do a cross-database query, it’s just you’ve got to set it up a little bit different. You have to create the external data source and the external table and adjust some code, but it’s definitely possible. There’s really nothing to it, you’ve just got to learn how to do it. The hard part is figuring out all the syntax and doing it all that way.
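The set-up Jim describes looks roughly like the following elastic query sketch. All names, the passwords, and the table schema here are illustrative; the external table’s columns must mirror the remote table:

```sql
-- Run in the database that issues the query (say, the accounting database).
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword1!>';

CREATE DATABASE SCOPED CREDENTIAL SalesCred
WITH IDENTITY = 'readonly_user', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE SalesSrc WITH (
    TYPE = RDBMS,
    LOCATION = 'yourservername.database.windows.net',
    DATABASE_NAME = 'Sales',
    CREDENTIAL = SalesCred
);

-- Local definition that mirrors the remote dbo.Orders table.
CREATE EXTERNAL TABLE dbo.Orders (
    OrderId INT,
    CustomerId INT,
    Total DECIMAL(10, 2)
) WITH (DATA_SOURCE = SalesSrc);

-- The "cross-database" join now reads like any other query.
SELECT c.CustomerName, o.Total
FROM dbo.Customers AS c
JOIN dbo.Orders AS o ON o.CustomerId = c.CustomerId;
```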
Carlos: Well, it’s still fairly new and I can’t remember what the latest version was, but when they initially released it, Azure SQL Database version 11, there were some limitations and even in version 12, it wasn’t quite there. This cross-query idea. Something, again, as simple as SELECT from database A and JOIN to database B, now all of a sudden I have to think about how I want to do that and what those queries are going to look like.
Jim: Yeah, you’re exactly right. Between the two versions, you know, it wasn’t quite fully baked in 11, or it had a lot of issues in it. Twelve came out, but it still had some tweaks that needed to be made. But once they got it right, I started using it all the time. I have templates out there now, so if anybody ever wants templates for a cross-database query set-up, let me know and I can hook you up.
Carlos: There you go. But that is using yet another term that we should probably throw out, and that’s elastic queries.
Jim: Yes, those are the preview features.
Carlos: Which is another challenge of moving to the services: knowing what is fully-baked and what’s not quite there yet.
Jim: Right. Yeah, that’s really odd. I did not know that would be considered the same thing. I should take a few notes on this.
Carlos: Eugene, what’s been your experience moving from the traditional install on a VM SQL Server to some of the Azure Data services?
Eugene: Yeah, it’s funny because I have played around with some of the other services, but I haven’t done anything with SQL Database.
Carlos: Ah, interesting!
Eugene: Yeah, it’s just I haven’t had a good opportunity. I’ve been thinking about it because it’s pretty cheap. I think I maybe did a little bit with a to-do app I had played around with, with ASP.NET and that sort of thing. For that it was pretty simple because I was using Azure web apps, so they make it super simple to hook it up to SQL there. But actually, something I’d be interested in from Jim is, I remember you talking about how with infrastructure as a service, you can get pretty close to lift and shift in certain environments, right? You can literally just sysprep a machine or just pluck a VM you have and push it out to Azure. But I remember you saying that you really can’t just take an existing application a lot of times and just drop it into Azure SQL Database and expect it to work, right?
Jim: Correct. One of my favorite ones with that is, a lot of our clients will do a hybrid model where they’ll host the database as an Azure SQLDB but the application layer will actually still be on prem. What happens is, they actually have to work retry logic into their code, because it’s not that the database isn’t available; it’s actually the network latency between their on-prem network and Azure that causes the timeout. A lot of the time with .NET or even Java, clients will say, “it’s telling me the database is unavailable, what’s going on?” I’ll connect in and I’ll say, “well, the database is fine. Microsoft would have let me know if the data center went down, but the database is here.” We’ll dig a little further in with some tools and we’ll find out that it’s actually their network. It’s not able to transmit and get the data there fast enough, so that’s one of the big gotchas: the network layer. If you’re going from on-prem to AzureDB, you have to examine your network layer to make sure that you can support the traffic between the two. The other part of dropping the database in there and making it work is that you’ve got to remember, this is just a database hosted somewhere else, and that hardware might perform differently than the hardware it’s currently on. The DTU factor comes in where, if you just drop your database into a standard database size, it might not have enough processing power to keep up, or the IOPS might be too low and you might be getting memory pressure. A lot of the time, just dropping it in and hoping it works usually does not work out, because you didn’t really work with it enough.
Carlos: Sure. I guess to that point, they had previewed, I want to say, was it at PASS? Anyway, at one of the conferences, they had previewed this idea of attaching to your on-premises database and, through the bacpac option, basically being able to take a copy and then restore it up and create the Azure SQL Database. How has that come along, if you know, and how else are you getting data up there?
Jim: That’s actually a really good question. If you’re using Management Studio 2017, if you right-click on one of your on-prem databases, you should go to Tasks and you should have the option to deploy the database to Microsoft Azure. When you do that, it’s actually going to want you to connect to an instance up there, or you can create it as you go. I refer to that one as the shotgun method because you’re kind of hoping and praying it’s going to go through. It does run a check for you, but you’re already pretty much committed at that point to put that database up there, so if it fails you know it’s not going to go. But if it succeeds, your database is up in Azure and you’re going to start being billed. There’s another tool called the Data Migration Assistant that Microsoft has released, and it’s a free tool. You can do an assessment or you can do the project. If you do the project, it’ll actually migrate the database for you. With that one it works a lot better because you can select the objects you want to move. You can just move the schema, you don’t have to move the data, you can move specific tables. My favorite option is that you can use transactional replication to Azure SQL Database. Azure SQL Database can be used as a subscriber only; you can’t make it a publisher. That’s one of my favorite methods to set up for my clients, to use transactional replication, because then they can actually put it up there. We can see the workload as it’s being hit with queries and we can adjust it on the fly. That way they’re not fully live, but they’re still getting enough workload on it that we can study it and analyze it and say, “okay, well you know what, with this query, we can probably tune it a little better” or “let’s increase the DTUs and we can judge it from there”. So, my favorite option is transactional replication.
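For the replication route, the subscription is created at the on-prem publisher with the usual stored procedures, pointing at the logical server’s fully qualified name. A sketch with placeholder names (the publication itself must already exist, and the subscriber must use SQL authentication):

```sql
-- Run at the on-prem publisher, in the published database.
EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'yourservername.database.windows.net',
    @destination_db = N'AdventureWorks',
    @subscription_type = N'Push',
    @sync_type = N'automatic';

EXEC sp_addpushsubscription_agent
    @publication = N'MyPublication',
    @subscriber = N'yourservername.database.windows.net',
    @subscriber_db = N'AdventureWorks',
    @subscriber_security_mode = 0,       -- 0 = SQL authentication (required here)
    @subscriber_login = N'repl_user',
    @subscriber_password = N'<password>';
```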
Carlos: Okay, that’s good to know, then. So, even with the challenges of the transactional replication, like having to have the primary keys and all of that other stuff?
Jim: Yeah, it’s all the same limitations that you have today with transactional replication: it has to have a primary key, no heaps are allowed. The one other requirement is that you have to put secure.database.windows.net, otherwise it won’t be able to connect. There are a lot of good blogs out there. I haven’t made one yet for that; that’s actually a really good thing for me to do. There are a lot of great blogs on how to set it up and on troubleshooting, but again, you have to make sure your network is able to support that method of doing it. Because when you take that snapshot, you’re locking the tables while it’s being processed and then pushed, so you’ve got to watch what you’re doing there.
Carlos: Right, and give yourself a downtime or a maintenance window.
Carlos: Now, so another point we should probably bring up is security. We’ve talked a little bit about the different ways or even services that are available. But not all of them are compliant with Active Directory, so we’re still kind of back to SQL Server authentication?
Jim: Right, Windows logins are not supported. However, if you have Azure AD, you can integrate with Azure AD. By default, Azure AD is not enabled; it’s only SQL logins. However, if you have an Azure AD tenant, it’s very easy to enable the AD integration. You just go onto your logical instance, and there’s an option there for Active Directory admin. You click on that to configure who you want as your admin. I usually recommend setting it as a group, as an AD group, not as an individual user. Once you do that, it actually enables the integration between the two and you can actually start mapping your Azure AD users into SQL Server.
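Once the Azure AD admin is configured, mapping users in is plain T-SQL. A sketch; the user and group names are placeholders:

```sql
-- Connect as the Azure AD admin, then run in the user database.
-- An individual Azure AD user:
CREATE USER [jane@yourtenant.onmicrosoft.com] FROM EXTERNAL PROVIDER;

-- Or, per Jim's recommendation, an Azure AD group by its display name:
CREATE USER [DBA Team] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [DBA Team];
```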
Carlos: Okay, there you go. But another example, where you have to, you know, you’re creating another service, so now I have to create that service, Azure AD, and then depending on what I want to do with that, I guess how complicated I want to get there, as far as integrating it with my on-premise directory, or creating a separate directory, then again, it’s kind of an architecture decision, to go from that.
Jim: It certainly is. Luckily, a lot of folks that I’ve worked with recently, they’re all using Office 365, which automatically is already in Azure. They already have a tenant, so it usually makes it a lot easier to synchronize. But you know, if you’re brand new to Azure and you still have your on-prem AD, it’s a bit more work, but it’s come a long way. It’s actually a lot easier than it used to be.
Carlos: Sure, and to their credit, obviously they’re investing quite a bit into it and so that whole system continues to get better.
Jim: Oh yeah.
Kevin: You mentioned, obviously we’re not going to have our own servers to ourselves and even if you did have your own server, it would be a virtual machine somewhere in a network. Have you experienced any type of noisy neighbor problems with Azure SQL Database or other Azure-related services?
Jim: With Azure SQL Database, I have not. With VMs, I have. With an actual Azure VM, this is really interesting, I was just talking about this today: I had a client where we were seeing a lot of IO spikes, and then it would drop, and then we would have a lot of problems. We opened up numerous tickets with Microsoft before it was finally resolved. What it was, was just somebody pounding away at the SAN, and it was killing it for everybody. When one of the drives would fail, or when one of the drives had had enough, they’d have to switch you to another resource, and during that time, you couldn’t write to anything. We kept seeing drops. It was crazy, and then Microsoft finally isolated the issue and they were able to get it resolved pretty quickly, but the only time I’ve ever seen it is when I’m running on a VM. I have never seen it with an Azure SQL Database.
Carlos: Along those same lines, there’s this idea that it’s now a service. We’ve talked a lot about the new features that get added to SQL Server, and they’re deploying those to the cloud first, which means you’re going to get impacted, right? There are deployments happening all the time, so talk to us a little bit about considerations for how that maintenance happens. Sure, maybe my backups are now taken care of, but what do I need to think about in terms of having my application available when things do go bump, or, like you mentioned, when my underlying data storage needs to move from one system to the other?
Jim: Yes, so it’s actually really cool the way they handle it with AzureDB. Whenever you create your database, they are actually making a replicated copy. All the maintenance that they’re doing, like your backups or your integrity checks, actually runs against your replicated copy, so you should never, ever see a performance impact from that. The idea being that if your primary storage has a problem, you still have that replicated copy somewhere and they can put you on to that very seamlessly and you wouldn’t even notice. The other option is what’s called geo-replication, which, if you’re familiar with an always-on availability group, to me is exactly what geo-replication is, except it’s a lot easier to set up than an always-on availability group. I don’t have to create the cluster or all that stuff. I just point and click my way through it and I replicate my database to another Azure data center and it’s highly available, so if one goes down, I can fail over to the second, or it’s actually done automatically. But for the maintenance stuff, the replicated copies, they actually do all of the maintenance work against those. For a VM running SQL Server, you’re still going to be responsible for everything on that; however, Microsoft does do their own administration on the VM host. What I recommend all the time is that you put your SQL Servers into what are called availability sets, and what availability sets give you is that it’ll say, “okay, I have two servers. One in rack A and one in rack B. When rack A is being patched, rack B cannot be touched.” This way, when rack A has to reboot, rack B is still online and essentially you will never go down from a Windows patch.
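Geo-replication can also be enabled in T-SQL rather than through the portal. A minimal sketch, assuming a second logical server already exists in another region (all names are placeholders):

```sql
-- Run in the master database of the primary logical server.
ALTER DATABASE AdventureWorks
ADD SECONDARY ON SERVER secondaryservername
WITH (ALLOW_CONNECTIONS = ALL);   -- readable secondary

-- To fail over later, run against the secondary server's master:
-- ALTER DATABASE AdventureWorks FAILOVER;
```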
Carlos: Okay, there you go. Yeah, so again another architecture-type decision to make there. Okay.
Kevin: So, speaking of architectures, I’d like to ask you about what kind of data profiles you think work best as Azure SQL Databases. What I mean by data profiles is maybe the use of the application, how the data is stored, how it’s worked with. For example, a lot of broker-style messages flowing through individually and independently, or batch operations; things that look closer to a fact-dimensional structure with a read-heavy set-up; or maybe, on the opposite side, a highly normalized, very heavily write-dependent transactional-style system? What kinds of databases do you think fit better in this set-up, versus some other, either on-prem or IaaS solution?
Jim: I just want to say first, now I know why you’re an MVP, because you threw out a lot of really giant words at me, there. Honestly, I like to try with them all. I haven’t come across one that’s not a good fit, as of yet. I’ve worked with healthcare companies. I’ve worked with financial institutions. I’ve worked with educational institutions. All of them have their own subsets; some of them might be write-heavy, or I need replicas for reporting. With AzureDB, you work out the kinks as to, “okay, well, this database is going to be read-heavy, I need to isolate it here, or make it into this package or make it into this tier.” But, to me, I can’t say that one works best and one doesn’t. With IaaS, you’re going to be physically limited to how big the VM can go, so you can create that VM as big as you can, and then if you drop a database in there and it’s, let’s say, write-heavy and you’re still running network-attached storage, your IOPS are still going to be a lot less than using a Premium storage option, so you’d have to switch your hardware around, which results in the downtime and all of that good stuff. But with Azure SQL Database, if I need to say, “hey, I’m doing an archive operation the first day of the quarter,” and I know my database is going to get hammered with an archive procedure where it’s going to read from the data and then write somewhere else, I can scale my database out to a Premium tier during that time and never have it go down to my users or to my application. Nobody ever knows the difference while that operation runs. Then once the operation’s done, I can scale it back and again, nobody ever knows except you, because you pushed a button to scale. But basically, to answer your question, to me, I haven’t come across which one works best and which one doesn’t. I do a lot of work with pretty much every profile you mentioned, and I’ve never come across a case where one simply does not work. You did mention brokers.
Just to let you know, Service Broker is a big no-no in Azure SQL Database. It’s one of the things that’s not supported. I know that wasn’t what you were talking about, but you said broker and it triggered it in my mind, so Service Broker is not supported in Azure SQL Database.
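The scale-up-before-the-archive, scale-back-down pattern Jim describes boils down to a single online T-SQL statement per move. A minimal sketch of building those statements from Python; the statement shape is the documented ALTER DATABASE syntax for changing a service objective, but the database name and tier names here are hypothetical examples, and a real runbook would execute them over a normal connection.

```python
# Sketch of the online tier change Jim describes for Azure SQL Database.
# "SalesDb", "P2", and "S2" are placeholder names for illustration.

def scale_statement(database: str, objective: str) -> str:
    """Build the ALTER DATABASE statement that moves an Azure SQL
    Database to a new service objective without taking it offline."""
    return f"ALTER DATABASE [{database}] MODIFY (SERVICE_OBJECTIVE = '{objective}');"

# Scale up for the quarterly archive job, then back down afterwards.
scale_up = scale_statement("SalesDb", "P2")
scale_down = scale_statement("SalesDb", "S2")
```

The change happens in the background; connections stay up and, as Jim says, nobody notices except the person who pushed the button.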
Kevin: Are there any other big limitations or things that are currently not supported that a DBA may look at and say, “oh, that may impact me right now”?
Jim: The SQL Agent’s not there. That’s usually one of the big things that everybody notices first, is that they don’t have an agent. Everybody starts wondering, “well, how am I going to run my reports with a stored procedure?” or “how am I going to do my index or statistics maintenance?” For those, you can create what are called automation jobs through Azure Automation, which uses inline scripting or PowerShell to connect to the instance and execute your scripts. Another way to do it is, if you have a SQL Server up in Azure already, you can actually use an agent from another VM to connect in and do it. You just have to create the link between the two. A couple of my clients don’t want to use Azure Automation, so they just spin up SQL Express on a VM and do all of their maintenance through that.
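As a rough illustration of what such a runbook or external agent job might execute, here is a toy generator for a per-table maintenance batch. The table names are placeholders, and real maintenance should be smarter than this (for example, rebuilding only fragmented indexes); this only sketches the shape of the script a scheduler would run against the database.

```python
# Toy sketch of the index/statistics maintenance batch an Azure
# Automation runbook (or an Agent job on a separate VM) could execute
# over a normal connection. Table names are hypothetical placeholders.

def maintenance_batch(tables):
    """Generate a simple reorganize-and-update-stats batch per table."""
    commands = []
    for table in tables:
        commands.append(f"ALTER INDEX ALL ON {table} REORGANIZE;")
        commands.append(f"UPDATE STATISTICS {table};")
    return "\n".join(commands)

batch = maintenance_batch(["dbo.Orders", "dbo.OrderLines"])
```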
Carlos: There you go, and there is, again, something new in preview. The documentation is from April of this year, so April 2018, still fairly fresh. They have another service which, if anybody can guess, is now called Elastic Database Jobs.
Jim: Ah yes, those are coming, too, yes.
Carlos: Now, that same idea. Again, it’s kind of a PowerShell hybrid. I would really kind of classify it, really, as managing that with PowerShell, in my opinion. But again, it’s kind of this idea of, “I need to do the same thing to multiple databases” and it gives you that framework to do that.
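The “do the same thing to multiple databases” idea behind Elastic Database Jobs can be sketched as a plain fan-out loop. Here `run_on` is a stand-in for a real driver call (for example, executing the statement over a pyodbc connection), and the tenant database names are hypothetical.

```python
# Sketch of the Elastic Database Jobs idea: apply one statement to a
# group of databases. `run_on` is a stub standing in for a real
# executor; "TenantA"-"TenantC" are hypothetical database names.

def fan_out(databases, statement, run_on):
    """Apply one statement to each database; collect per-database results."""
    return {db: run_on(db, statement) for db in databases}

results = fan_out(
    ["TenantA", "TenantB", "TenantC"],
    "EXEC dbo.PurgeExpiredSessions;",
    run_on=lambda db, sql: f"ok: {db}",  # stub executor for illustration
)
```

The real service adds the framework around this loop: job storage, retries, and credentials, which is exactly the part you otherwise end up hand-rolling in PowerShell.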
Jim: Yeah, it’s going to be one of those really cool features that you can do. Me, personally, I like doing it through Azure Automation because you can parameterize your queries and then hide them. Which, if you’re doing that, make sure you save the file locally just in case you have a client that doesn’t know what that is and they delete it on you.
Carlos: Yikes. Yeah, that’s not good.
Jim: Yeah. They wiped out the parameters, they wiped out our user names and password parameters for all 1000 and some automation jobs, so all jobs started failing and I’m like, “well, I don’t remember what the credentials were, do you?” They were, “we sent them to you.” I was like, “that was like a year and a half ago. I don’t have that email anymore. It’s been archived off.” Luckily, we saved it locally somewhere and we were able to get it back on there, but make sure you save those locally.
Carlos: There you go. That sounds like a plug for Source Control.
Jim: I love me some Source Control.
Carlos: Yeah, I think the moral of the story here is you’re going to have to get in and try it out, with varying degrees of success. You might talk to your– particularly if you’re a Microsoft shop, if you have Software Assurance, obviously you have more options available to you. But if you really just want to get your hands on it, it’s fairly easy to get $150 in credits to go ahead and get started. Luckily, with at least the Azure services, that can actually go a pretty decent way, assuming you’re going to start with the small stuff. Don’t try to go to the highest-level tiers and think it’s going to go very far. But to get your hands dirty with it, if you will, and start playing with some of this stuff, those credits are useful.
Jim: Yeah, just in the database realm alone, with $150 you can make a Basic database out of AdventureWorks and it’s at most $8 a month, and that’s me pushing it. Usually it’s like $6. I have a Nintendo database out there that I use for keeping track of my video game collection; it’s like 15MB and it costs me like $2.50 a month. But it’s still there, it’s up there in the cloud and it’s hosted by them. I don’t have to worry about backing it up. Everything’s up there.
Carlos: Now, I guess before we let you go, this is one of the main marketing points, and one of the things that DBAs fear: “hey, all of these administrator functions, we’ll take care of them for you.” Now let’s talk a little bit about the event that we need to do a restore and you need to get something back. I mean, you even mentioned your example where some jobs got removed. To cover this entirely, let’s talk about what it means to get some of that stuff back.
Jim: Sure. Each performance tier has its own retention period for how long Microsoft will retain your backups. I want to say Basic is a week and Standard and Premium are five weeks. Now, if you need them held longer than that, let’s say you’re a financial institution and you’ve got to hold on to it for seven years, or HIPAA and all of that, you can actually create an Azure Recovery Vault. You can have the databases placed in there and you can restore out of there. Otherwise, the other option you get is a geo-restore, and with that, I want to say what it does is it takes the last full backup it’s taken, restores that and applies all the diffs and the t-logs. I really haven’t had to do many restores on AzureDB. Usually, with a client that, let’s say, wants to refresh QA from prod, you just export the bacpac and re-upload it, or I geo-replicate it over to one and then turn off the link once it’s there. But if you do need to restore, there are options, like I said: there’s the geo-restore option and restore to last backup. If you need it retained longer, you have to have the Azure Recovery Vault. A couple of things I didn’t get to mention were some of the baked-in features that are enabled automatically. One of them is TDE. TDE is enabled automatically on all of the databases, throughout all performance tiers, Basic, Standard and Premium. The other thing is if you want to use data masking, like Dynamic Data Masking, you don’t have to be on Enterprise to use certain features; every database has all of the same options, so all of the performance tiers have all the same options now. If I want to do data masking on my Basic database just to play around with it, I can. Advanced Threat Protection, there’s a lot of cool things you can do with Azure SQL Database. One of the best ways to learn them is to just spin one up and go play.
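The retention windows Jim quotes can be captured as a small lookup. These figures are as stated in the episode (roughly one week for Basic, five weeks for Standard and Premium); check the current Azure documentation before relying on them, since retention policies change over time.

```python
# Point-in-time-restore retention per tier, using the figures quoted in
# the episode. Treat these as illustrative; verify against current docs.

RETENTION_DAYS = {"Basic": 7, "Standard": 35, "Premium": 35}

def pitr_window(tier: str) -> int:
    """How many days back a point-in-time restore can reach for a tier."""
    try:
        return RETENTION_DAYS[tier]
    except KeyError:
        raise ValueError(f"unknown tier: {tier}") from None
```

Anything beyond that window means long-term retention in a Recovery Vault, as Jim describes, or a geo-restore from the most recent backups.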
Carlos: Yeah, I think they are adding more features. I think Kevin kind of asked what features might be missing. The CLR functionality is one. I know that’s really a deal-breaker for Kevin. The rest of us won’t sweat it too much.
Kevin: Not a fan of it not being there.
Jim: I can almost feel his sadness emanating from him.
Carlos: Things like change data capture or change tracking are not available. Again, some of the side stuff, like data quality services is not available, snapshots. Although I guess snapshots, maybe they function a slightly different way because it’s not the traditional snapshots. Obviously, you can get version backups and things, but it’s not quite the same functionality that we’re used to. But yeah, it’ll be interesting to see how that continues to grow and evolve and get better.
Jim: Oh yeah, one of the things I always stress, though, is when you’re working with Azure, you have to be ready to learn on the fly; you have to be able to adapt, because there are going to be days, if you’re working with PowerShell or Bash or something like that, where your script works one day and then the next day it doesn’t, because something in the Azure fabric or a cmdlet changed overnight. So, to work with Azure, or even AWS or any of the platforms, you really have to have a lot of patience, because it’s always changing. It’s always evolving. That’s the big advice I always give everybody: you’re not going to be the only one frustrated that they changed the PowerShell cmdlet. Because I guarantee you, there’s going to be somebody out there with some kind of monitoring script using that same cmdlet, and all of a sudden they have to update 30 different cmdlets to make it work. So a lot of patience is needed when working with Azure, as well.
Carlos: Okay, shall we go ahead and do SQL Family?
Eugene: Sounds good.
Carlos: So, your all-time favorite movie.
Jim: This was a hard one. I couldn’t decide between the Princess Bride or the Iron Giant. Those are my two all-time favorite movies as a kid and I still watch them any time I can find them online.
Kevin: Andre the Giant’s in the Princess Bride. It’s obviously a positive there.
Jim: Oh yeah, Andre the Giant’s in there. You’ve got Westley, who is Cary Elwes and you know– oh gosh, I can’t think of his name–
Carlos: Oh, you killed my father, prepare to die.
Jim: Yeah, there’s him,
Carlos: Mandy Patinkin.
Jim: I was thinking more of the iocaine powder and what was his name?
Carlos: Oh, yeah.
Jim: The Great Vizzini, yeah, that guy.
Kevin: You have Fred Savage, that’s really what you’re thinking of.
Jim: Yes, cause Fred Savage is playing video games right in the beginning.
Carlos: Billy Crystal is in there, too.
Jim: That’s right, Billy Crystal is in there as Miracle Max.
Carlos: Yep. So, the city or place that you’d most like to visit.
Jim: This was easy. It’s the Chengdu Panda Reserve in China. That is my bucket list place to go so I can go play with the pandas.
Carlos: There you go. I mean, do they have like different kinds or is it just a reserve for them?
Jim: It’s a big breeding ground for giant pandas. So, the American pandas we have here, like the American giant pandas, most of them actually come from this reserve and they’re actually on loan. They loan them, they put them out and then they breed them back. So, any baby pandas that are usually born in America, they’re usually only here for about a year or so and they’re actually taken back to the reserve.
Carlos: Interesting, okay. Very cool. So, a food that reminds you of your childhood.
Jim: Since I was a huge fan of Ninja Turtles, I said pizza.
Carlos: Very nice, yes, I love that original movie. Tell us how you first got started with SQL Server.
Jim: I am one of those kids from the 80s; I can say video games got me my career. What it was is, I collect retro video games and I wanted a way to store them that wasn’t in Excel or something else, and the senior DBA at my previous company was looking for help, so my first database was a video game database and he helped me structure it and told me the ins and outs, so that’s how it started. I became a junior DBA shortly after that and started working with it, so like I said, Nintendo gave me my career, because it was my Nintendo database.
Carlos: There you go, very nice. Now, if you could change one thing about SQL Server, and we might expand this into the Azure services that we’ve been talking about, what would it be?
Jim: I’d say get rid of Enterprise and Standard and Developer editions and just make everything one level, so like with AzureDB, every performance tier has every option. It didn’t used to be that way. There were certain options you could only get in Premium, but then Microsoft, they started scaling it back, so I thought they were going to start doing that with SQL Server, but I guess they still need to get some more extra bucks out of those CPU cores, you know, if you’re Standard or if you’re Enterprise. But I’d like to see just one model, to be honest. I think we’re actually at the point now where they should make it one model, in my opinion.
Carlos: Yeah, we’re getting closer, so maybe the next couple of iterations we’ll see that.
Jim: I hope so.
Carlos: What’s the best piece of career advice you’ve received?
Jim: Network yourself. The biggest, the best advice I was ever given was, “get out there and talk to people, meet people, put yourself out there” because you never know when you might need to switch a company or you never know when something might happen to you and you have to leave a company or if the company goes under. The more people you know in your network, the better off you are, because they can either help you find a job or they can always just help you with a problem you’re having. So, I always say, “network yourself.”
Carlos: There you go. Sounds like a great reason to go to SQL Trail. Just saying. Our last question for you tonight, Jim. If you could have one superhero power, what would it be and why do you want it?
Jim: I put thought injection. This way I can inject the right thinking into my clients when it comes to SQL Server. Or I can inject the thinking of why CLR is disabled into Kevin’s brain and make him understand it.
Kevin: Not going to happen.
Jim: I know. It’s a hope.
Carlos: There you go. Okay, very good. Well awesome. Well, Jim, thanks so much for being on the program tonight.
Jim: Thank you guys so much for having me. I really enjoyed it.
Carlos: We do thank Jim, again, for taking some time to chat with us. One of the things he mentioned was having the database in the cloud but the app on premises, and I just want to go ahead and call out that that’s not a recommended architecture. You’re going to want to move both of those. Obviously, testing is one thing, but like he mentioned, you’ll see timeouts in the application. Now, you can do some things on the networking side to get a more direct connection, if you will, to Azure, which means additional services or additional fees. I won’t say it’s completely out of the question, but it’s going to take a unique approach to make that happen. Out of the box, there’s just too much latency, the application experience will degrade, and I think you’ll get a lot of calls like, “hey, why is this so slow?” I also want to echo Jim’s comment about learning on the fly. Things in Azure are changing all the time, and that’s part of the fun and the craziness of Azure. But I don’t think you should feel overly daunted about getting started. You do still need to check whether everything you rely on, data types and other services, is available if you’re looking to move, so there is still some homework you’ve got to do. Another point Jim hit on was the hybrid model. Again, you listen to all of the marketing, and I think, particularly at first, Microsoft was kind of like, “hey, come to Azure and let’s just make it happen.” I think now they realize that there will be some hybrid scenarios, or that most organizations will be in hybrid scenarios. I’m interested, compañeros, to see how you’re using it. If you have thoughts or experiences that you’d like to share, of course we’d love to hear about those. You can leave those on the show notes at sqldatapartners.com/azure, or you can hit me up on social media.
I’m always interested in connecting with you, hearing about what you’re doing. You can reach out to me on LinkedIn. I’m @CarlosLChacon and compañeros, I’ll see you on the SQL Trail.