Episode 175: Migrating to the Cloud

Carlos L Chacon

Are you in the cloud? Lots of folks are talking about moving to the cloud; however, as we find in today’s episode this can be tricky to quantify what people mean by being “in the cloud”. We are happy to welcome Rick Lowe to the show and he gives his take on working with AWS. We explore what it means to move to the cloud, our experience helping companies get there, why companies would want to consider cloud options, and how you can keep up with all the craziness of new technologies in the cloud.

Episode Quotes

“[There can be] kind of a lift and shift mentality of, ‘let me take this existing set of instances, virtually pick them up, drop them in the cloud and expect everything to work the same,’ and that’s, unfortunately, usually not an optimal use case for the cloud. Ideally you’d do some more re-architecting.”

“The default storage option in AWS is backed by solid state discs, but they’re solid state discs that might be on the other side of the data center from where your hypervisor is sitting, so there’s going to be a lot more latency than you’re used to.”

“For the most part, especially if you’re doing a one-for-one, call it lift-and-shift into the cloud, it tends to get more expensive in Amazon.”

Listen to Learn

00:37     Intro to the guest and team
01:11     Compañero Shout-Outs
01:44     Intro to the topic
04:11     What does migrating to the cloud look like?
06:41     “Don’t just lift and shift. Revise what you’ve got.”
09:37     You have to think about availability a little bit differently
11:36     Things to think about when deciding to move to the cloud
12:56     Discussion on costs and what you should do about them
15:38     Setting up processes in advance will help with costs in the long run
16:56     Do you save money by going to AWS?
20:00     How to keep up with new services without losing your mind
25:51     Last thoughts on the cloud from Rick
26:55     SQL Family Questions
29:14     Closing Thoughts

About Rick Lowe

Rick is a Microsoft Certified Master with more than 15 years of experience working with SQL Server. He currently works as an independent performance DBA/Developer for clients across the USA and Canada. He started his career as a database developer, but over time became more and more interested in how the database engine operated … eventually becoming more focused on operational and performance issues than code.

Rick will work with all things relational, but most enjoys helping smaller companies get better performance from MSSQL, as well as smoothing over relationships between DBA and development teams.

Rick’s blog: https://dataflowe.wordpress.com/

*Untranscribed Introduction*

Carlos:             Compañeros, welcome to another edition of the SQL Data Partners Podcast. This is Episode 175. Our guest today is Rick Lowe. Hello, sir.

Rick:                Hey, everyone. Happy to see you again.

Carlos:             Yes, it’s good to have you back on the show. Our topic today is Migrating to the Cloud and so we’re going to figure out a little bit more about what that means here in a second. And we can’t forget that we have Kevin Feasel with us here today.

Kevin:              Who? What? Where?

Carlos:             He has made an appearance. We’re going to excuse Eugene for today. He is out. But before we get into today’s topic, I do have a couple of shout-outs, and again, compañeros, please forgive me if I butcher your last names. So Peter Summat, Samuel Mills, who we just sent a t-shirt to. Julia Rusmanica, Michel– oh gosh, I’m sorry Michael. Michael, I want to say Hudebine, or maybe Hudebine, I’m not sure, and Elvin Kubicki. So thanks, compañeros, for connecting and chatting with me on LinkedIn, I do appreciate that. Okay, so today’s episode can be found at sqldatapartners.com/175 or obviously if you’re listening to wherever fine podcasts are played. Okay, so we’re talking about Migrating to the Cloud and when you start talking about the cloud, we can’t help but start throwing around numbers. Now, in April, Forbes had a publication that said that public cloud was expected to reach $331 billion in 2022. So we’re three years out. Now, once you start looking at overall numbers of the cloud, things start getting really hairy. So, for example, Forbes said there was $182 billion spent in 2018, but I went to like 3 different sites and all of them had a different number for 2018, so I’m not exactly sure how they’re counting all of those pieces up. But part of what gets murky, particularly for us, since ultimately we’re focused on the data platform, is that email services and file storage are the predominant cloud services. So even things like Dropbox, right? Dropbox continues to be the leading cloud storage provider. And so that kind of muddies the water sometimes as far as what people are counting or not counting as cloud services. And then of course you get into public versus private, so at least here in Virginia, what used to be Peak 10 or your local data center, I think sometimes folks are counting those pieces in as cloud storage, as well. So for purposes of today’s conversation, ultimately we’re leaning towards public cloud, so AWS and Azure. I know, Rick and Kevin, you guys are using AWS?

Kevin:              Yes, we use AWS fairly heavily. Not for databases, but for other stuff.

Carlos:             But for other stuff, yeah. Yeah, so AWS, they were the first ones, and they still have the lion’s share of what we think of as cloud computing. And so, I would say that they’re still the most popular overall, particularly among the big name players, and again we’re talking about public cloud, so their services are “here’s what you get,” and there’s not as much flexibility as you might have with a data center in your hometown as far as being able to get in there and touch it and rack things, etcetera, etcetera. So, one of the questions that we had for today’s episode was: okay, I am looking at migrating to the cloud. What should I expect that to look like? Now, I suppose I’ve muddied the waters a little bit, and this is where I guess it can be a bit daunting: management may be saying, “hey, I want to move to the cloud”, but they could also be talking about, “well, hey, I just want to move my email and file storage services there. I may not want to move my databases there.” So I guess let’s start there. Under what circumstances, Rick, are you normally seeing customers decide, “okay, hey, I’m ready to move my data”? And again we’re specific to public cloud offerings.

Rick:                Yeah, so in my experience, and you know my experience might be skewed, I tend to work with smaller firms, just because they’re more fun to work with a lot of times. To be honest, for most people I work with, the decision to go to the cloud is kind of driven by management as, I want to say, a golf course decision. That, “hey, I’ve heard there are great financial incentives to go to the cloud, so we’re going to do that.”

Carlos:             The marketing is working.

Rick:                Yeah, yeah, exactly. So, in terms of background, I’m really deep on AWS. I haven’t done much with Azure, and actually I was working for an Azure project, but it seems like you tend to get pigeon-holed into one or the other. And my typical Amazon client has a physical data center, they’re not necessarily unhappy with it, but they’ve been told, “we are moving to the cloud, figure out how to use it effectively.” And the unfortunate thing about that is a lot of times that leads to kind of a lift and shift mentality of, “let me take this existing set of instances, virtually pick them up, drop them in the cloud and expect everything to work the same,” and that’s, unfortunately, usually not an optimal use case for the cloud. You know, ideally you’d do some more re-architecting.

Carlos:             Right, yeah, so I would agree. So my experience has been a lot of lift and shift as well. People get the bug, they want to “be in the cloud”, but fundamentally, nothing really changes. But I’m sure Kevin, on the other hand, you guys have been able to leverage a few more services and whatnot, right?

Kevin:              Yeah, it’s more of a hybrid scenario where basically most data lives locally, but things like application servers and web servers are almost all in Amazon. Elasticsearch is generally in Amazon. Making fair use of Kibana, or excuse me, we are using Kibana, but I was thinking about AWS Kinesis for message brokering. A lot of this kind of stuff fits well, and I’m going to make one statement and then that will drive me into a leading question for Rick. That statement is that stateless message passing tends to work much better in cloud scenarios than it typically does in a classic on-prem setup. With that leading statement, Rick, what kinds of architectural changes do you mean when you talk about, “don’t just lift and shift. Revise what you’ve got”?

Rick:                Yeah, as you said the question you asked is very leading in that– so I–

Kevin:              Badgering the witness.

Rick:                I’m a little bit spoiled in that I typically get to only worry about the database servers, so we aren’t hit by this as much, but that’s just because I was kind of a network engineer the last time I had a normal W2 job. The biggest problem we had to troubleshoot was actually with some Java webservers where the app devs were using what are called sticky sessions. In other words, once a client connected to a Java server, it was supposed to stay on the same one, because they kept a boatload of state information in local memory, and yeah, Kevin, like you’re indicating, that just doesn’t work well in the cloud. Because the issues you run into are, number one, if you’re thinking public cloud, you’re kind of encouraged to think of servers as being kind of disposable. If one fails or gets in a bad state, just blow it away and automatically spin up a new one, and you can’t do that if you have stateful sessions. The communication and even IO tends to be a lot slower, so if you’re used to saving a bunch of session information to disc and you don’t pay for nice fast storage, that doesn’t work as well in the cloud as it did on-prem. But if we’re talking more database side, I guess I would translate that, assuming most of the people listening to the podcast aren’t necessarily Java devs or webservice people.

Kevin:              Really?

Rick:                Yeah, maybe that’s not fair.

Kevin:              This is the Java Development Partners Webcast, right? I’m not on the wrong one?

Carlos:             Yes, if you are a Java developer, I need to know. You need to reach out to me. I’d be very interested to know how you came to the podcast.

Rick:                Yeah, right now there’s probably a bunch of people listening to this being like, “oh, I’m about to tune out. I don’t care about session state.” So, in the database world, the equivalent is if you have one really big instance with a database with a ton of storage, and you don’t do anything special to make your storage faster, and you don’t think a lot about what your memory footprint is going to look like. In a perfect world, you would have sharded that out and redesigned it to be a bunch of smaller instances that could, maybe, be spun up and spun down independently. And I feel like I went down a little bit of a rabbit hole, there.

Carlos:             Yeah, yeah.

Kevin:              That’s alright. So recapping what you’ve got: segregation of servers, using smaller servers, thinking more stateless than stateful, understanding that IO generally is going to be slower, especially if you’re used to something on-prem that’s, say, direct-attached NVMe; you’re not going to really touch that, generally, if you’re running an EC2 instance with just SSDs. So also, rethinking the way the application works to transmit things, accounting for built-in delays and varying periods of activity, as opposed to, “I pre-bought a big chunk of server and I have that big chunk of server even when CPU is at 1%, because I need it to last for the next 5 years.”
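
To make the stateless point concrete: the usual fix for the sticky-session problem Rick described is to keep session state in an external store, so that any instance can serve any request and a sick instance can simply be thrown away. A minimal Python/boto3 sketch, assuming a hypothetical DynamoDB table named app-sessions keyed on session_id (Redis, ElastiCache, or SQL Server itself would serve the same purpose):

    import boto3

    # Hypothetical table and key schema; the point is only that state lives
    # outside the web server's local memory.
    dynamodb = boto3.resource("dynamodb")
    sessions = dynamodb.Table("app-sessions")

    def save_session(session_id, data):
        # Any instance can write the state, so instances stay disposable.
        sessions.put_item(Item={"session_id": session_id, **data})

    def load_session(session_id):
        # Any instance can read it back on the next request.
        response = sessions.get_item(Key={"session_id": session_id})
        return response.get("Item", {})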

Rick:                Yeah, and also availability, you have to think about that a little bit differently. You know, EC2 as a service and RDS as a service are both stable in terms of uptime; however, individual hypervisors, I won’t say they go down all the time, but it’s not that unusual for me to get an email saying that a hypervisor is failing and I have to shut down one of my instances and restart it, and that doesn’t count as downtime in Amazon. I think the assumption is that you have these multiple instances, and more importantly you have instances in multiple availability zones, and if there’s a hardware issue, you kind of need to be able to shut it down and turn it back on without bringing your entire business to a halt for the half hour it takes to do that.

Carlos:             Yeah, so then from that data perspective, right I mean, just that idea of, “well, my instance is not available, what am I doing?” So maybe I’m the one that’s slow here, but you’re saying that you have to put in some availability options to copy that data around, so that way there are multiple ways to be able to access the data.

Rick:                Yeah, yeah, so if we’re talking Amazon, this is less of an issue on the RDS side than the EC2 side. EC2 is their infrastructure as a service offering and RDS is more of a software as a service offering. There’s some redundancy built into RDS; you don’t have to worry about that. But I guess what I was saying is if you’re running SQL Server on EC2, it’s not impossible that you’ll get an email message from Amazon saying, you know, “hey, we’re having a hardware issue. We need to swap out the hypervisor that your instance is sitting on. If you don’t shut down your instance and restart it within a certain amount of time, we’re going to have to do that for you.” And if your business is dependent on that one specific instance being up and you don’t have any additional redundancy built in, that can be painful.
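
The maintenance emails Rick describes show up in the EC2 API as scheduled instance events, so they can be handled on your own schedule instead of Amazon’s. A minimal boto3 sketch (the region is a placeholder) that lists instances with pending events and bounces them:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Scheduled events are the API version of the "we need to retire this
    # hypervisor" email; stopping and starting moves the instance onto
    # healthy hardware.
    status = ec2.describe_instance_status(IncludeAllInstances=True)
    for inst in status["InstanceStatuses"]:
        events = inst.get("Events", [])
        if not events:
            continue
        instance_id = inst["InstanceId"]
        print(instance_id, [e["Description"] for e in events])
        ec2.stop_instances(InstanceIds=[instance_id])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
        ec2.start_instances(InstanceIds=[instance_id])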

Carlos:             Right, right. So then what does it normally take? So, obviously we’re interested in the data platform side. When you think about lift and shift, the name itself is maybe not quite to the level of just moving or swapping VM hosts, but people can sometimes kind of get that impression. And the tools have gotten a lot better, right, at being able to push those VMs up into the cloud; the cloud providers are accepting them much more easily than they were before. So what other kinds of considerations, maybe even just from an infrastructure perspective, are folks being asked to think about when they make that jump?

Rick:                The biggest one I run into: if someone’s used to on-prem, you know, direct-attached storage, especially if they’re used to solid state or flash, and you go to the cloud and you just get kind of default discs and attach a SQL Server instance, the storage is going to seem really slow. So yes, the default storage option in AWS is backed by solid state discs, but they’re solid state discs that might be on the other side of the data center from where your hypervisor is sitting, so there’s going to be a lot more latency than you’re used to. That’s not to say the storage is just inherently slow, it’s just, kind of like we were saying before, relational database systems weren’t necessarily what the architects had in mind when they designed these services. Yeah, faster storage is available, it’s just a lot more pricey than the default options.
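
For reference, “faster storage” on the EC2 side usually means a provisioned-IOPS EBS volume, and the IOPS figure is exactly where the extra cost Rick mentions comes from. A minimal boto3 sketch; the availability zone, size, and IOPS are illustrative, not recommendations:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",   # must match the instance's AZ
        Size=500,                        # GiB
        VolumeType="io1",                # provisioned-IOPS SSD
        Iops=16000,                      # this line is where the bill grows
        TagSpecifications=[{
            "ResourceType": "volume",
            "Tags": [{"Key": "Purpose", "Value": "sql-data"}],
        }],
    )
    print(volume["VolumeId"])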

Carlos:             Right, which then gets us into the cost discussion, and it is difficult for a lot of organizations, because every time you spin something up, you have to make a cost decision, whereas with the on-premises route, it’s a sunk cost kind of a thing. And that can potentially lead people to make poor decisions there, because they’re like, “well, let me try to save some money here” or what have you. Maybe the question there is, have you seen organizations adopt, an infrastructure policy isn’t necessarily the right word, but, “hey, here are the minimums for our organization” when you need to spin up new servers or services?

Rick:                Sadly, I personally don’t see as much of that as I think I should. At the end of the day, for the most part, if you’re on Amazon, at least, and I assume it’s the same with Azure. Well, actually, no, I think it is a little different for Azure. But for the most part in Amazon, you sign up for the service, you agree to pay what you spend, and month to month they just bill your credit card. You can go download a detailed bill, but I haven’t seen a lot of organizations do like a monthly review, as they probably should: “here’s what we spent this month on instances, here’s what we spent on data movement, here’s what we spent on storage.” I personally think we should track those costs a lot more closely, because it should be a heavier ask. You know, you should have to kind of petition to spin up an EC2 instance and say, “I’ve ballparked the costs and it’s going to cost, you know, $50 a month for this instance, but it’ll generate an additional $100 of revenue” or something along those lines.

Kevin:              Yeah, we actually do track that by day, so all managers get an email. All of our resources are tagged by team, and we see the daily cost differences, and things get highlighted in an HTML table: “hey, your costs have gone up by more than X percent in the last day or over the trailing 7 days.” So that is something that you’ll see more often as you get to larger-sized companies, because this is definitely a controllable operating cost, as opposed to a fixed capital cost, a capital expenditure that I can amortize over a 5 year period or a 3 year period.
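
The kind of tracking Rick and Kevin describe can be pulled straight out of the Cost Explorer API. A minimal boto3 sketch that breaks daily spend out by a hypothetical “team” cost-allocation tag (the tag has to be activated in the billing console before it shows up here, and the dates are placeholders):

    import boto3

    ce = boto3.client("ce")  # Cost Explorer

    report = ce.get_cost_and_usage(
        TimePeriod={"Start": "2019-09-01", "End": "2019-09-08"},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "team"}],  # assumed tag key
    )
    for day in report["ResultsByTime"]:
        for group in day["Groups"]:
            print(day["TimePeriod"]["Start"],
                  group["Keys"][0],
                  group["Metrics"]["UnblendedCost"]["Amount"])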

Rick:                And actually, a great point that tends to get overlooked is the difference between capex and opex, and that it can actually show up in different places on the budget than we’re used to seeing the costs.

Carlos:             Yeah, so this is an accounting principle and it ultimately has to do with taxes, at least in the United States. Outside the United States, I can’t necessarily speak to that. That’s a whole different episode if we wanted to tackle that one. And so it is interesting, those costs, particularly when you think about serverless, or thinking of servers as cattle, spin up, spin down; test environments or proofs of concept work really great in those types of environments. So one of the things that we are seeing is you’ll spin up a VM, and again this is in Azure, you’ll get some of the network pieces or even some discs attached to that VM, and then for whatever reason you’ll decide to remove that VM, but all of the cleanup pieces aren’t quite there, so then you’ll be left with just discs hanging out there. And so for reviewing that, they make it fairly straightforward in the dashboard to look at the list of inventory and then kind of group by name or whatever.

Kevin:              Yeah, plus you can use resource groups to tag all of this stuff together, so that when you do get rid of the VM, it can take away all of the ancillary discs and VPNs and IP addresses and whatever else that you purchased and associated with that VM.

Carlos:             Right, but the idea is that it’s another piece that you have to manage or set up or think about as you start spinning those things up, because again, it’s very cool to be like, “oh, I want a new server” and in 10 minutes I have it. But putting in those processes, ultimately, to help you manage that will help you in the long run.
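
On the AWS side, the leftover-disk problem Carlos describes shows up as EBS volumes sitting in the “available” state: attached to nothing, still billing. A minimal boto3 sketch to list them for review (the region is a placeholder; the Azure equivalent would be querying for unattached managed disks):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # "available" means the volume is not attached to any instance.
    orphans = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )
    for vol in orphans["Volumes"]:
        print(vol["VolumeId"], f'{vol["Size"]} GiB', vol.get("Tags", []))
        # ec2.delete_volume(VolumeId=vol["VolumeId"])  # only after review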

Kevin:              Yeah, so a question for you, Rick. Have you ever seen a customer actually save money by going to AWS?

Rick:                You know, I was about to go there, as long as we’re talking costs. I have, but it’s very unusual. For the most part, especially if you’re doing a one-for-one, call it lift and shift into the cloud, it tends to get more expensive in Amazon. The one use case where I’ve personally seen it get cheaper is if you have something like a development system that only needs to be up from 8-5, five days a week. The savings you can get by only having to pay for 40 hours of runtime per week, as opposed to a full week’s worth, can add up. But yeah, for the most part, if you’re a 24/7/365 shop, it gets pricier.

Kevin:              Yeah, that’s my experience as well. I’ve seen that for smaller companies, or companies whose product was developed explicitly to be a cloud product, you can get away without having to pay that much money. But for any type of established system, it’s going to cost you more in general. Even if you do pre-purchase, locking into EC2 for a one-year or a three-year plan, it’s still generally going to be more expensive, in my experience.
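
The 8-to-5 development scenario Rick mentions is usually automated with a small scheduled job that stops and starts instances by tag. A minimal boto3 sketch; the tag key and value are assumptions, and in practice the stop and start halves would run from a scheduled Lambda or cron job at the end and start of the work day:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    def instances_tagged(key, value):
        # Collect the instance IDs carrying the given tag.
        reservations = ec2.describe_instances(
            Filters=[{"Name": f"tag:{key}", "Values": [value]}]
        )["Reservations"]
        return [i["InstanceId"] for r in reservations for i in r["Instances"]]

    dev_instances = instances_tagged("environment", "dev")  # assumed tag
    if dev_instances:
        ec2.stop_instances(InstanceIds=dev_instances)    # evening job
        # ec2.start_instances(InstanceIds=dev_instances) # morning job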

Rick:                Yeah, one I haven’t seen personally, but that I think is an interesting use case, is if someone’s using Amazon for disaster recovery. You can theoretically do a really, really super undersized instance if you’re doing log shipping or AGs or whatever, to just receive the data, and then if you ever have to fail over to the DR site, you can, you know, shut it down real quick, resize it to a purposely bigger instance, and restart it.

Kevin:              That’s true, that’s a viable strategy, for sure.
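
The undersized-replica idea works because an EC2 instance type can be changed while the instance is stopped. A minimal boto3 sketch of the failover-time resize Rick describes; the instance ID and target type are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region
    instance_id = "i-0123456789abcdef0"                  # the small DR replica

    # Stop, change the instance type, start: grow the replica only when you
    # actually have to fail over to it.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "r5.4xlarge"},            # illustrative size
    )
    ec2.start_instances(InstanceIds=[instance_id])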

Rick:                And you can do a similar thing with storage: you could have the disc be magnetic-backed while you’re replicating to it and then switch it over to something better when it’s time to go live. It’s a lot of work, but I think you could theoretically save money that way. I’ve just never seen it done successfully.
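
The storage half of that trick maps to an EBS volume modification, which can be done while the volume stays attached (this applies to current-generation volume types like sc1/st1/gp2/io1, not the old “standard” magnetic type). A minimal boto3 sketch with placeholder values:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

    # Cheap, slow storage while the DR copy just receives data...
    # ...then switch to provisioned IOPS when it's time to go live.
    ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
        VolumeType="io1",
        Iops=10000,
    )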

Kevin:              Yeah. One area where I think you can end up saving on that is storing backups in S3.

Rick:                That’s true, yeah.

Kevin:              There comes a certain size where you could say, “I could technically save money if I have giant numbers of tape drives” or whatever, but when you’re dropping it into Glacier for something like a backup that you’re not expecting to use, but that you need to keep for seven years because Medicaid requires it or because some regulation requires it, then you’re probably going to save money that way, over keeping it on tape or on disc locally.

Rick:                Yeah, that’s very, very true. Although at the same time, you could push it directly into Glacier from on-prem, if you wanted to.

Kevin:              Oh, absolutely.
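
As a sketch of the backup scenario above: pushing a SQL Server backup file to S3 with an archival storage class is one call with boto3. The bucket, key, and local path are placeholders; a lifecycle rule that transitions objects to Glacier after a few days is a common alternative to writing the storage class directly:

    import boto3

    s3 = boto3.client("s3")

    # GLACIER (or DEEP_ARCHIVE for very long retention) is the "keep it for
    # seven years and hope to never restore it" tier Kevin describes.
    s3.upload_file(
        Filename=r"E:\Backups\SalesDB_FULL_20190920.bak",  # placeholder path
        Bucket="example-sql-backups",                      # placeholder bucket
        Key="SalesDB/SalesDB_FULL_20190920.bak",
        ExtraArgs={"StorageClass": "GLACIER"},
    )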

Rick:                Yeah. I’m not going to try and talk people out of the cloud, at all. There’s kind of a persistent false belief that it’s cheaper and easier and everything is happy and sunny in the cloud. It’s slightly more expensive, it’s as difficult as on-prem, it’s just different kinds of problems. But you get a lot of flexibility, which I think is why people ultimately go to the cloud, and you can get some of these other really great services.

Carlos:             So one of the challenges with– I want to use the word challenge. So one of the challenges with the cloud is the new services that are offered. And I’m sure they have an equivalent, but at least in Azure, so like Databricks?

Kevin:              Databricks is originally AWS.

Carlos:             Okay, well, there you go.

Kevin:              So it’s just called Databricks over there. The Royale with cheese.

Carlos:             There you go. So it is like everybody’s talking about it, right? And when we get into discussions with customers about analytics, whether they really need it or not, I find that Databricks comes up all the time. And it is then challenging, because that’s another technology that I have to keep up with. And so I guess I’m curious on, yeah, thoughts around– so lift and shift is one thing, that’s infrastructure. In theory, as administrators, we all kind of have either some background, or the gap between what we know and what we don’t know is small, or at least the team knows about it. Once management comes and says, “oh hey, I want to start using new X cloud technology,” it can be a little bit more challenging. So I guess thoughts around how to keep new services in check? ‘In check’ is the wrong word. But how to go about keeping up with new services without losing your mind?

Rick:                My thought on it, and I imagine this is probably very different than Kevin’s take, because I’m a real slacker, as far as technology goes. My personal thought on it is I try to stay away from the cutting edge a little bit. Not so much that I don’t want to be current, but I usually kind of wait until I’ve heard about the technology persistently for at least a few months before I get too excited about it. Because a lot of times, things kind of go away or kind of die in the cradle or die on the vine.

Carlos:             Yeah, or change.

Rick:                Yeah, yeah.

Kevin:              Personally, I do have a different philosophy, but the philosophy I recommend for a lot of companies is the same as what you just said there, Rick: “unless it’s something that feels like it’s going to fit a need, hold off a little bit.” Now, if you have the time to do it, if you just want to experiment, that is a great thing that you can do with the cloud. That, “hey, I don’t want to have to set up a Spark cluster and figure out the security aspect and figure out all of this integration stuff. I just want to see if it’ll let me do my job faster.” That’s great, that’s a wonderful use case for a lot of platform as a service technologies. They charge you for the administration and for the infrastructure, but you don’t have to think about the administration or the infrastructure. So you get a lot out of that, and for a certain subset of people, which we’ll call developers, that’s a really good experience, so they’re more likely to go try out these things. But as a corporate philosophy, yeah, I probably would push toward: you don’t have to think about everything, and you don’t necessarily need to micro-optimize your set of products. There is a good-enough range, and even within the bounds of which technologies you use, there’s a good-enough range. It’s not like you have to switch all of your ETL. You don’t have to dump SSIS and go to Azure Data Factory and Databricks for everything, because Integration Services still does a pretty good job for a pretty good amount of stuff, and until there is a compelling business case, yeah, I recommend caution, but go play around with it on your own. I mean, I think that’s probably the easiest way to get familiar with these: try it on a small project, for many services. Databricks may be an exception, because it, like HDInsight, is intended to be a bulky service, meaning a fair number of servers under the covers, a lot of hardware you’re throwing at a problem because it’s intended to solve problems with large-scale data, so the costs can come pretty fast. Specific to Databricks, if you do just want to play around with it, there is a free version. It’s Databricks Community Edition. It’s strictly on AWS; there’s no Azure Databricks Community Edition. But most of the functionality is the same, it’s just that, okay, you’re using S3 for storing your data instead of blob storage. You don’t get the nice integration with things like Azure Data Factory, so it’s not a perfect one-to-one, but it is free, which is a really good number for me.
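
For anyone who wants to kick the tires on Databricks Community Edition the way Kevin suggests, the first notebook cell can be very small. A minimal PySpark sketch; the file path and column names are placeholders (on Community Edition you would typically upload a small CSV to DBFS), and the spark session is pre-defined in Databricks notebooks:

    # Runs as-is in a Databricks notebook cell; `spark` is provided for you.
    df = spark.read.csv("/FileStore/tables/sales_sample.csv",  # placeholder path
                        header=True, inferSchema=True)
    df.groupBy("region").sum("amount").show()                  # placeholder columns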

Carlos:             Yeah, maybe the big takeaway there is, as you start implementing these new technologies, part of your responsibility, and maybe that’s a strong word, but part of what you’ll need to test out is the costing structures and becoming more familiar with them–

Kevin:              It’s really easy to accidentally get a $10,000 bill, because “oops, I left that HDInsight cluster on all weekend.”

Carlos:             Yikes, yes.

Rick:                I’ve literally seen that. Most recently, someone was doing an experiment where they needed some Amazon EC2 instances with decent network bandwidth. So they spun up some really, really super oversized instances, just because, you know, only the big instances had the 10gig connections at the time. And yeah, they forgot to shut them down, and an organization that typically had about a $500-a-month Amazon bill had thousands of dollars show up just from the runtime on those.

Carlos:             Yikes.

Kevin:              Yeah, and Amazon usually doesn’t give you the Oopsie Credit, the “I didn’t really mean to.” That’s usually not a good reason.
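
Since Amazon rarely hands out the Oopsie Credit, a billing alarm is a cheap guardrail, and as a commenter notes below, billing alerts are built in. A minimal boto3 sketch; billing metrics only live in us-east-1 and have to be enabled in the billing console first, and the threshold and SNS topic ARN are placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="monthly-spend-over-1000-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                       # check every six hours
        EvaluationPeriods=1,
        Threshold=1000.0,                   # placeholder dollar threshold
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )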

Carlos:             Yeah, doesn’t bode so well. Interesting, okay, well good deal. So I guess last thoughts?

Rick:                One area we didn’t really touch on, but just to be aware of: in both Amazon and Azure, there’s an option where you can basically have infrastructure as a service, you can have a full-on virtual machine where you can log into Windows and mess around with it and run SQL Server just like you would on-prem. Then there’s also usually a software as a service option where you basically just get an instance and someone else manages the machine for you. You just need to be aware of and think about which one you want. I guess because we’re talking mostly to SQL Server people, the main gotcha there, at least in Amazon (it’s probably better on Azure), is that if you want SSIS, if you want SSRS, if you want SQL Server Agent, you have to go the EC2 route rather than RDS. But that also means having to manage the machine yourself, which you wouldn’t have to do if you went the RDS route.
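
For comparison, the RDS route Rick describes comes down to a single API call, with Amazon managing the machine underneath. A minimal boto3 sketch for a SQL Server Standard Edition instance; every identifier, size, and setting here is a placeholder rather than a recommendation:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")  # placeholder region

    rds.create_db_instance(
        DBInstanceIdentifier="example-sqlserver",
        Engine="sqlserver-se",               # SQL Server Standard Edition
        LicenseModel="license-included",
        DBInstanceClass="db.m5.xlarge",      # placeholder size
        AllocatedStorage=200,                # GiB, placeholder
        MasterUsername="admin",
        MasterUserPassword="CHANGE_ME_placeholder",
        MultiAZ=True,                        # built-in redundancy Rick mentions
    )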

Carlos:             Right, right, and that is an issue with all of the services versus VMs: there are pieces that you may be used to that are going to get left out. Okay, shall we do SQL Family?

Rick:                Sure.

Carlos:             So, compañeros, you have spoken and we have listened. So we asked what do we do with repeat guests for the SQL Family questions and you said, compañeros, that you wanted to have new questions asked. So here we go, we have a couple of new questions here, and we’re going to try these out on Rick, today. So where is your hometown? Not where you were born, or where you are currently, but like when you think about growing up, what’s your hometown?

Rick:                I think, really, it was Minneapolis, Minnesota. It was where I went to grad school, but it was kind of my first job as an adult. I ended up staying there for, gosh, like 10 years or so, right after college and yeah, just kind of fell in love with the place. Not that I’ve been back for quite a while, but I do go back for SQLSaturday, though.

Carlos:             There you go. Yes, the long-lost love, there. Okay, so then with that, favorite go-to restaurant?

Rick:                Gosh, so if we’re talking the hometown restaurant, I guess I’d probably have to go back to Minnesota, which would be probably a meatball sub from Davanni’s.

Carlos:             There you go, and here I thought you were going to say Dairy Queen. I don’t know that I’ve seen as many Dairy Queens anywhere as I have in Minnesota.

Kevin:              Look, when it’s that hot, you need ice cream, okay?

Carlos:             Yeah, that’s right.

Rick:                There is that. Yeah, there is that.

Carlos:             Okay, so ultimately, you’re a consultant, data platform. What other profession than your own would you like to attempt?

Rick:                I always kind of wanted to be a novelist. You know, I didn’t say writer, because I don’t mean technical books, although not that I’d mind doing that, but yeah, it would be fun to write fiction.

Carlos:             There you go. So now I have to ask then, a follow-up question. So do you have a favorite author?

Rick:                It varies.

Carlos:             Who are you currently reading?

Rick:                Right now I’m kind of on a retro kick. I’ve been working through a lot of Robert Heinlein. Probably the person I read most consistently would be Orson Scott Card.

Carlos:             Interesting, yeah, yeah. Okay, our last question for you today, Rick. Room, desk or car, which do you clean first?

Rick:                I’d say my desk, because I probably spend 20 hours a day there.

Carlos:             Fair enough, fair enough. Well, Rick, thanks so much for being on the program today.

Rick:                Thanks for having me again.

Carlos:             Yes, and thanks, Kevin, for tagging along again. Compañeros, you can reach out to us on social media. Rick, how can people reach out to you?

Rick:                Easiest way to find me is on Twitter as dataflowe with an e at the end.

Carlos:             And Kevin?

Kevin:              You can find me in your S3 buckets.

Carlos:             And compañeros, you can reach out to me on LinkedIn. I am @CarlosLChacon. Thanks again for tuning in to today’s episode. The episode show notes will be available at sqldatapartners.com/175 and compañeros, we’ll see you on the SQL Trail.

Comments
  • Dear Carlos,
    First, I wanted to thank you for your podcast and the work you put in for people like me in the SQL community. Secondly, I wanted to give some feedback on this episode. I come from an Azure background and there was not a clear distinction between IaaS, PaaS and SaaS. Also, from my limited knowledge of AWS, you can set up billing alerts so you can get notified about what a resource is going to cost you.
    Keep up the great work and I look forward to hearing future episodes.
    Cheers,
    Zahid Hanif, Edinburgh (Scotland)

    • Thanks for the feedback; I appreciate it. For us, IaaS is the most common experience with Azure and the other options are for dev/test scenarios. It doesn’t take too much for those SaaS offerings to require their own episodes. We’ll circle back around to them.

