Compañeros! The good organizers of SQLSaturday Pittsburgh allowed me to take a slot on the schedule and opened up the podcast for a little Q&A session, and I was super happy with the results. We had several SQL Server pros come, and our conversation was around one central issue–testing in your development environment. We came at it from a setup perspective, but also from the perspective of how to compare apples and oranges when your hardware is different. Because it was a developer who asked the question, I opted to call this episode “Dear Developer”. Hear Microsoft MVPs and SQL Server consultants such as Allen White, Kevin Feasel, Jonathan Stewart, Tim McAliley, and others give their thoughts on this subject. At the very least you will have a few more ideas on where to start looking for potential options for your testing after listening to this episode.
Episode 67 Quote
“Containers are the same thing. Learn Docker, learn Docker or eventually you will lose your job. Windows Server 2016 will have Docker support. Windows 10 professional edition has Docker support. We’re seeing this already on the Windows side. We’ve seen it on the Linux side for several years now. It will get more popular on this side of the house as well. It is time to get ahead of that because otherwise these containers will come up, you will have no clue what you’re doing with them, and bad things will happen.” – Kevin Feasel
Listen to Learn…
- The importance of Docker and containers
- Why Allen says you have to “re-tool” yourself every five years
- Why being the master of everything is no longer necessary
- How to pick a professional specialization
- Which essential database tools they recommend
Transcription: Dear Developer
*Untranscribed introductory portion*
Carlos: Okay. You can stay seated. Here we go. Okay compañeros, we are here. It’s SQL Saturday Pittsburgh. We’ve got a nice little discussion brewing. I think we’re going to title this session Dear Developer. We’ve opened it up to invite people to come in and ask questions to our panel. Actually, as we go through, because of the transcript … Poor transcriber, I’m sorry, whoever you are that’s listening to this now. There’s a lot of people on this one. Hopefully you can make heads or tails of it. When you get the mic for the first time, say your name please so that the transcriber will know who’s speaking. Anyway, we have David here and David has a question, our first question.
Dave: Hello. My name is Dave [inaudible 00:00:46] and I work for Community Care Behavioral Health, which is a subsidiary of UPMC. In a prior lifetime I was a DBA, but that was 20 years ago when things were so much simpler. Now, as a developer, I find that when I’m having production issues, or when I’m developing code and I’m trying to test something out, I can generate the volume of production records that I have on my test system, but I can’t determine whether my code is really as bad as it seems, or if there’s an issue with the development box having been set up wrong by a system administrator who doesn’t know SQL Server, or if my query is that bad, because my response time is just really bad sometimes on some of my solutions. I don’t always have time to use all the tools that DBAs use. I wondered if there were some easy tools that would give me quick pointers as a developer to say, “Hey. Look, if you do this or that then you can change it. After that then you have to call in the cavalry.”
Carlos: Okay. Ultimately I’m having problems in my dev environment, what do I do? Right? We’re going to start. Again, if you introduce your names. I’m going to start over here with Allen, and let us know.
Allen: I’m Allen White, and currently I’m the business development manager for SQL Sentry. That plays into the answer that I’m going to give you. Just this past week we released our free tool Plan Explorer in the new, what we were going to call, the Ultimate Edition. The thing that is tremendous about this tool for you as a developer is the fact that you can not only look at the query plan in far more detail than you’re going to see it in Management Studio, but you could actually have a DBA with full access to the production environment run the query from Plan Explorer, save the actual query plan and all the data that our tool collects into a special PE session file, and then send you that PE session file. You could pull that up in Plan Explorer and not only get the execution plan and all of the data related to that execution plan, but also the detail on all the indexes it uses, it could use, and what we might recommend for you to use as an index to solve the problems with the query.
There are multiple tabs that you can look at on a query plan: both the visual plan that you’re familiar with from Management Studio, but also we show you the top operations that are going on, we can show you a diagram that shows the various joins that are in that particular query, we can show you the waits that that query is waiting on, all within Plan Explorer. If they run this in production, even though you don’t have access to production, they can save that session file, send it to you, and then you can do a detailed analysis and find where the issues are in that query. The tool is now free. They can install it in their environment where they can access it on the production system, and you can install that same tool at no cost to your company. That’s the approach I would take.
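If Plan Explorer isn’t available, a rough equivalent of capturing the actual plan can be done in Management Studio itself. This is a minimal sketch; the table and filter below are hypothetical stand-ins for your own query:

```sql
-- Capture the actual execution plan (as XML) plus per-table I/O counts
-- for a single query run. Table/column names below are examples only.
SET STATISTICS XML ON;   -- returns the actual plan alongside the results
SET STATISTICS IO ON;    -- prints logical/physical reads per table

SELECT OrderID, OrderDate
FROM dbo.Orders          -- hypothetical table
WHERE OrderDate >= '20160101';

SET STATISTICS XML OFF;
SET STATISTICS IO OFF;
```

The resulting .sqlplan XML can be saved and shared much like a PE session file, just without the extra detail the tool collects.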
Carlos: Okay. Getting deep down into execution plans already. Tim, what you got for us?
Tim: [inaudible 00:04:48].
Carlos: No, no, no. I’m saying that was where Allen was going. What’s your thought? Response to my dev environment is slow, what do I do?
Tim: I think Allen’s got a really good response just in general, but I always take the onion approach and start broad. If it’s a performance problem related to how your workload’s going, I make sure I take a look at storage, take a look at network, take a look at the components of the application tier that are hitting the data tier, then work my way into the server and up through the engine to see what’s making the query run slow.
Carlos: That’s Tim [inaudible 00:05:24].
Speaker 5: [inaudible 00:05:25].
Carlos: Okay. Hold on. We want to stay on this topic just for a little bit longer. On this topic. Okay, here we go. Make sure you state your name.
Speaker 5: [inaudible 00:05:44]. Independent contractor from Baltimore. I have a question for the team.
Carlos: I don’t want to do another question just yet. It’s the same question?
Speaker 5: Question, yeah. For me it seems like you don’t have access to the host machine, so you have access only to the SQL box, right? It seems like something [inaudible 00:06:04] or in the cloud generally. You have a machine there, you run a query there, and it runs slowly and you don’t know why it is slow. Is it your query, or is it maybe a noisy neighbor in the cloud? Maybe that approach will lead to an answer. Any thoughts about how to [inaudible 00:06:30] in the cloud?
Allen: Azure just released a new product. We’re pitching products here.
Carlos: That didn’t take long.
Allen: We’ve got these lovely sweaters with kittens on them. They’re in triple X now, then we’ll get … Anyway. Excuse me. We’re in Pennsylvania. Yeah. [inaudible 00:06:47]. The Azure product team announced this past week at Ignite, I believe, what’s called the Azure Tuning Advisor. It does a real-time workload capture to make assessments on how the queries are running, or your response times, and it gives you actionable tuning information, where you can take that information and either apply it or save it off to maybe test on another workload or another system. Maybe in a situation where you were abstracted from being able to do some really deep troubleshooting, where you couldn’t get to all the components, but you did have something running out in Azure, which I’m very pro, you’d have a tool like this Azure Tuning Advisor.
Tim: It hasn’t been announced per se, however, talking with the PM of the Query Store, they are going to be looking to enable the Query Store by default in Azure SQL Database. That’ll be another one coming.
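In the meantime, Query Store can already be turned on manually in SQL Server 2016 or Azure SQL Database. A sketch, with a placeholder database name:

```sql
-- Enable Query Store on a database (name is a placeholder) and set a couple
-- of common options; plans and runtime stats then accumulate automatically.
ALTER DATABASE MyAppDb SET QUERY_STORE = ON;
ALTER DATABASE MyAppDb SET QUERY_STORE
    (OPERATION_MODE = READ_WRITE, QUERY_CAPTURE_MODE = AUTO);

-- Captured data is exposed through views such as
-- sys.query_store_query and sys.query_store_runtime_stats.
```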
Carlos: [inaudible 00:07:54].
Dave: Hi. This is Dave again. I understand what you’re saying, Allen, about using SQL Sentry. I’ve used it before. Paul did a great job at helping me out with one of my issues before. In this case, I’ve got a production data load for a data warehouse running. Telling me what the production thing is right now might help a little, but what I’m currently working on is making that go faster. Now I’ve built new indexes that I believe are going to work better, but because I’m on a virtual development box I can’t tell whether that index is helping or not, because I don’t have the right performance counters. I can’t put it into production, because it might make things worse, to get the actual result out. I only have this development box. Like I said, if there was something quick that I could use to say, “Oh. Look, it’s because I’m memory constrained.” Something like that. If there were any easy things as a developer that I could then point the administrator of the virtual machine to, to help me, because sometimes administrators of machines don’t know where to look and don’t want to look bad, so they just say, “We’ll figure it out.”
Allen: I will say one thing and then I’ll hand it over. That is, at least the execution plan will tell you if the index is being used. Then you could potentially push that over to your production DBAs to say, “I’m looking at implementing this index, can you please tell me, maybe from the DMVs, what SQL Server is using. What’s the usage of this table, what are the reads and writes currently on the table?” To at least get a guesstimate or expected impact as a result of the indexes. Of course, that assumes that you have good care and feeding for your indexes as well.
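The DMV request Allen describes might look something like this sketch, which a production DBA could run to report reads versus writes per index:

```sql
-- How often each index in the current database is read (seeks/scans/lookups)
-- versus written (updates) since the instance last restarted.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups,   -- read activity
       s.user_updates                                -- write overhead
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()                        -- current database only
ORDER BY s.user_updates DESC;
```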
Tim: Yeah. I used to work for a company that had made the mistake of combining a transactional system with data warehousing in the enterprise. It’s just a classic SQL Server problem. I had to do a lot of work without any third-party tools to make a case, to do the due diligence and separate these tiers out. I used things like Task Manager, and quickly spun out [inaudible 00:10:19] and did some [inaudible 00:10:20] 14-day log captures to show them spikes, show the impact that things like index maintenance were having on the platform broadly, as well as the horrible customer experience. In the absence of really good third-party tools, even if they’re free, old school Task Manager is used to say, what’s on fire? Then build some PerfMon around it just to see if you can get some history around that.
Vladimir: My name’s Vladimir, and this goes back to my very first session at SQL Saturday. It was by Kevin Kline: 10 Things Every SQL Developer Should Know. During that session he explained a couple of items available in T-SQL, one of them is DBCC OPTIMIZER_WHATIF. That allows you to generate query plans based on the production size and production RAM and CPU. The optimizer will change the plan, how it would run in the production environment versus how it’s running in your environment. If you have a two-core CPU, but production has 64 cores, using that option you’re able to say, run as if you had 64 cores. Show me the kind of plan you’re going to get. Besides that, there is a way to generate statistics and indexes from the production environment and put them in your dev environment, so then even though you don’t have the same amount of data, you’re using the same indexes and statistics that you had in production.
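For reference, the undocumented command Vladimir mentions is typically used along these lines; it is a sketch, and since the command is undocumented it should only ever be run on a dev box:

```sql
-- Ask the optimizer to compile plans as if the hardware matched production.
DBCC OPTIMIZER_WHATIF (CPUs, 64);          -- pretend we have 64 cores
DBCC OPTIMIZER_WHATIF (MemoryMBs, 262144); -- pretend we have 256 GB of RAM

-- ...display the estimated plan for the query under test here...

DBCC OPTIMIZER_WHATIF (ResetAll);          -- put the optimizer back to normal
```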
Carlos: That’s just regular DBCC commands packaged with SQL Server, it’s not an add-on?
Allen: Yes. We’ve taken a peek at 2016, no changes there to your knowledge?
Speaker 7: I’m not sure.
Allen: Okay. We didn’t maybe [inaudible 00:12:13].
Johnathan: This is Johnathan Stewart, SQLLocks. I had a follow-up question for you. You mentioned these are data warehouse loads. Are you just doing procedures in SQL Agent jobs, or using SSIS? A little bit more information, because your problem may not even be your selects. Your problem could be the transformations in your [inaudible 00:12:34] package too. If we can get a little bit more information on your load, that’ll help us give you other ideas to help troubleshoot it.
Speaker 7: [inaudible 00:12:47].
Dave: Actually, it’s kind of a mix. I found the issues were with my SSIS package. It’s probably up to 70 million claim lines now that we have to move, but I split them out because I end up joining the different tables based on the type of … I’m in healthcare, so if it’s an inpatient claim there are a lot fewer of those, but I also need additional data about what was your admission date and what was your admission diagnosis. For the claim lines overall, 98% of them come out of one table. I’m using SSIS for the three types of sub-joins that I have, but I’m using a direct insert statement from a SQL SELECT for the fourth one, because it was so large it was crushing my SSIS throughput.
Johnathan: With that, just the little bit of information that we have, obviously we can’t see your system and stuff there. If that amount of data is crushing your system the way you had it, are you using certain types of components that may be blocking, such as sorts and stuff like that? That can slow it down as well. There are certain things in your package, tuning your package, whether you want to tune the buffers, the row counts. There’s a lot of tuning that can be done for SSIS as well. It may not just be … You still want to do what Allen talked about too, checking the database stuff first, but that’s the first part of it. If you’re using SSIS, there’s a whole other world of performance tuning behind it as well that you may want to look at. One of the things that I see a lot of clients run into is they don’t realize that they’re doing a lot of blocking, asynchronous transforms versus synchronous ones. With a blocking transform you literally have to wait for the whole thing. Like a sort: obviously you can’t sort a data set unless you have the whole thing. You have to wait for it, and it slows it down, and then there’s your memory and all that type of stuff as well. Begin to look at how your packages are configured. You can just Google asynchronous versus synchronous components SSIS to see.
Johnathan: We’re not looking for pictures.
Allen: Something I guess I will say there from a setup perspective. I guess I’ll say two things. One is [inaudible 00:15:08] sp_. Not sp_WhoIsActive, that’s Adam’s. sp_Blitz, right? That’s just from a “Is my system set up correctly?” perspective. That’s one area. My partner Steve Stedman has databasehealth.com. If you download his, it’s actually more of a GUI interface. He has what’s called a quick scan report. Where I was going with that is, from a setup perspective in your development environments, particularly if you’re running SSIS and your database together, we talk about memory constraints: if the amount of memory that you’re giving to SQL Server by default exceeds the box, and then you’re trying to run SSIS on top of that, you need to compartmentalize those two so that they’re not kicking each other’s … They’re not kicking each other.
Vladimir: I would suggest, if you run your queries in SSIS, maybe use the option SET STATISTICS IO ON. Then you can run, for example, the query before implementing the index and after implementing the index, and see the difference in I/Os. Then use the next one. See if there’s maybe any bottleneck; if you do a lot of inserts, maybe there is a problem with GAM contention. Then maybe there are a lot of page splits during the insert. It could be a situation of how you upload the data. Maybe you clean the data before. Do you? Or do you just always do inserts?
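Vladimir’s before-and-after comparison might look like this sketch; the table, column, and index names are hypothetical:

```sql
-- Compare logical reads for the same query before and after a candidate index.
SET STATISTICS IO ON;

SELECT ClaimLineID, ServiceDate
FROM dbo.ClaimLines            -- hypothetical warehouse table
WHERE MemberID = 12345;
-- The Messages tab reports logical/physical reads per table for the run.

CREATE NONCLUSTERED INDEX IX_ClaimLines_MemberID
    ON dbo.ClaimLines (MemberID) INCLUDE (ServiceDate);

-- Re-run the SELECT above and compare the logical-read counts.
SET STATISTICS IO OFF;
```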
Dave: It’s all inserts. [inaudible 00:17:02].
Vladimir: Here is the problem. Here is what I’m leading to. Maybe [inaudible 00:17:06]. Maybe delete will work, yes. Yes. [crosstalk 00:17:11]. Wait, wait. Listen, listen, listen, listen. Okay. When you delete from the table you just delete all the records. Yes, it’ll take longer to delete, but it will be faster to insert, because all the pages are already allocated and the index structure is already built. When you have a clustered index, you can try that with a clustered index built on [inaudible 00:17:42], for example. If you [inaudible 00:17:45] it will slow the insert. If you just delete it, it will be much quicker with many fewer page splits.
Dave: If your records are already in order.
Vladimir: There’s another thing. Maybe you lose your time on ordering records.
Dave: [inaudible 00:18:13].
Allen: [inaudible 00:18:13].
Carlos: That’s something that Johnathan brought up. That’s interesting, I hadn’t seen that on the deletes. I’m interested to find out more about that now. Yeah, yeah. That’d be very interesting. Again, just from an “am I set up correctly?” perspective, those tools will help with that. [inaudible 00:18:31] contention means I don’t have enough tempdb [inaudible 00:18:33], or files, things like that. Those are things that I can be doing in those dev environments, which probably didn’t get set up well before, and that would help me there as well.
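One quick way to see whether tempdb allocation contention is actually happening is to look for PAGELATCH waits on tempdb allocation pages; a sketch:

```sql
-- Currently waiting tasks stuck on tempdb allocation pages.
-- database_id 2 is tempdb; page 2:1:1 is PFS, 2:1:2 is GAM, 2:1:3 is SGAM.
SELECT wt.session_id, wt.wait_type, wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type LIKE 'PAGELATCH%'
  AND wt.resource_description LIKE '2:%';
-- Frequent hits here are the usual signal to add more tempdb data files.
```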
Kevin: Kevin [inaudible 00:18:47], professional podcast groupie.
Carlos: We’re not increasing your pay Kevin, just so you know.
Kevin: Okay. I want to take a step back and say we’re getting into some pretty hairy things. I know we talked about GAM contention, we’re talking about some fairly detailed things. At a higher level, if I have .NET code, for example, the code is slow, or it may not be slow, I don’t know. The important thing is, let’s get some metrics on what calls are slow, what lines are slow, when are we reaching out to external resources, and how long is that taking? Can we trace that out, can we log that information, and then be able to figure out, “Look. This line of code is really slow. What’s going on in there? Okay. Well, it’s going over a network and it’s connecting to my Chinese data center and something’s happening over there and it’s coming back three hours later.” That’s where you know, “Can I at least get down to that level?” I think that digging deeper and deeper in is interesting. I think we have to get there. But when you’re starting out, when you have a code base and somebody’s just saying, “The code is slow,” start at the top level, get those early metrics.
Allen: That’s a great point. We were having an issue, an experience where the timeout was set for a minute. It was making “database calls.” They’re like, “Well, the database is timing out.” Oh, really? We put some tracing on there. We’re like, “Look. Yes, there were some things that were taking a couple of seconds which maybe should have been faster. Okay, I get it. But the cumulative transaction for what you’re asking is not taking more than 20 seconds.” There were several components in there. Then they went and did exactly what you said and put some “in my code, how long does this take? How long does this take?” They’re like, “Oh. Gosh. Guess it’s not a database problem. Let’s keep looking at the code and see if there’s something else, like some function that’s spinning, that’s causing the problem.” That’s a great point.
Tim: Well, first I was mad because there was a guy in a SQL Sentry shirt, and my first thought was, “Oh. Man, Plan Explorer is awesome and they did just make it free.” Two other things that are different that I want to say: one, I think Kevin has a good point that part of the original question is, “Hey. I don’t have a lot of time. There’s all these different options. What’s the quickest bang for the buck?” The first is execution plans, but another thing I think worth looking into is wait stats. If you do [inaudible 00:21:29] sp_Blitz, it’ll give you a little bit of data if it has some really bad ones, but I know Paul Randal. If you Google SQL wait stats, his blog will generally be the first thing that pops up [inaudible 00:21:39].
Carlos: The post is “Tell Me Where It Hurts” by Paul Randal.
Tim: Yeah. It’ll come up and you can just run that script, and it’ll tell you what’s the biggest performance bottleneck. What is SQL waiting on? You can clear out those stats right before you run your query and get an idea of what the performance issue is.
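A simplified version of that kind of wait-stats script looks like this; Paul Randal’s published query filters far more benign wait types than this sketch does:

```sql
-- Top cumulative waits since the stats were last cleared, skipping a few
-- well-known benign wait types (the real scripts exclude many more).
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_seconds,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('LAZYWRITER_SLEEP', 'SLEEP_TASK',
                        'REQUEST_FOR_DEADLOCK_SEARCH', 'BROKER_TO_FLUSH',
                        'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;

-- To reset the counters right before a test run, as Tim suggests:
-- DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
```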
Vladimir: Since he mentioned wait stats, one thing that Plan Explorer does, I don’t sell it, it’s free. Plan Explorer will actually capture wait stats for your query when you run it. Now you get to see what’s impacting your individual query right there in one of the tabs.
Allen: Okay. Very nice. I guess I’m going to have to update my version of Plan Explorer. One thing I guess I would say with wait stats, and probably in the dev environment more than likely, the big wait stats are going to be our disk. That’s probably going to be a scenario of contention where we’re going to have problems, because we’re on commodity hardware, probably over-provisioned VMs. Yeah. The age of the disk and all those things as well.
Speaker 7: [inaudible 00:22:59].
Allen: Something I guess I am interested to come back to. We went off on a little tangent. The question ultimately, or the thought was, I had a licensing issue and that limited what I was able to use, what SQL Server was able to use.
Speaker 7: That was the second half of the problem.
Allen: That’s right.
Speaker 7: [inaudible 00:23:30]. Multiple times per row.
Allen: My question, I guess, is thoughts on finding licensing issues in your environment.
Speaker 7: [inaudible 00:23:44]. We’ve been licensed [inaudible 00:23:44] since then.
Allen: I thought that was going away.
Speaker 7: I hear you.
Allen: Is that still allowed?
Speaker 7: I don’t have the money to change it.
Allen: I thought that was gone.
Speaker 7: The newer license [inaudible 00:24:00].
Speaker 7: They don’t even allow that anymore.
Allen: Yeah. I was going to say. Even the operating system is moving to core-based licensing. Okay. I guess that answers that question. Bye, bye seats.
Speaker 7: [inaudible 00:24:38] because you’re going to need it.
Speaker 10: [inaudible 00:24:41]. We worked a deal so they got some systems I’m migrating.
Speaker 7: [inaudible 00:25:12] to try to determine what the right numbers are for us to get a site license on that. I’m not involved in any of that. Like I said, I’m a developer. I don’t talk about any of that.
Allen: As we hand this over to Kevin, what, if any, tools do you have to know if you’re still on seat licensing? Is that even a thing?
Kevin: I have no idea about that. I was going to say, in contradistinction to Johnathan’s implicit bias against virtual machines, I will say that it is much easier to license VM hardware than it is to license bare metal stuff. Hey, I don’t care how many VMs I have spun up on this thing. We’ve licensed the rack; once I license the rack I can just throw stuff on there. At least as far as I know. I am not a licensing person, nor do I pretend to be, even on podcasts.
Carlos: You’re just a big fan of over-provisioning, is that what you’re telling me?
Kevin: No. It is not over-provisioning. What it is, is saying, “You know, company, you buy this big rack here, we’re going to license the rack.” That actually, I think in our case, was cheaper than if we were to purchase physical hardware. We also get some nice benefits, like being able to vMotion machines across and being able to replicate machines, very easily spin up new machines. All those fun things you get with virtualization, but in the end our bill, again your mileage will probably vary, our bill ended up being a little bit less than if we did everything on physical hardware.
Carlos: Where’s Neil Hambly when we need him, our licensing guy?
Speaker 11: I do believe I have an answer on how to determine the license type. SELECT SERVERPROPERTY, LicenseType. From that website called Stack Overflow.
Carlos: There we go. Okay. You closed it already. SELECT SERVERPROPERTY, and then you pass in LicenseType as a parameter. Or NumLicenses, and that will give you that information. There you go. Now you’ll be better positioned. Yeah, there we go. We’ll put that out on the tour.
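The query being read out is the following; note that on recent versions LicenseType commonly returns DISABLED, so treat the output as a hint rather than an authoritative licensing record:

```sql
-- Report the licensing information SQL Server exposes about itself.
SELECT SERVERPROPERTY('LicenseType') AS license_type,  -- e.g. PER_SEAT, DISABLED
       SERVERPROPERTY('NumLicenses') AS num_licenses,  -- NULL when not applicable
       SERVERPROPERTY('Edition')     AS edition;
```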
Speaker 7: [inaudible 00:27:07].
Carlos: The transcriptionist is not going to know who’s talking now. Okay. I’m not sure how much longer we have, but I want to switch gears slightly. I don’t know. I was at Ignite this week. Something just clicked. I was in Dr. Sirosh’s session. He’s the VP of the data platform at Microsoft, which is what SQL Server falls under. A lot of the advancements that they’ve made in 2016 with columnstore, even some of the services in Azure, and the migration to SQL Server on Linux. Basically his presentation, or his focus, was that we want SQL Server to be a transactional system and an analytical system as well. That’s going to change things for what I’ll just call the traditional folks. I guess my question is, thoughts on that, and how do we start preparing?
Kevin: I’m going to say I embrace our ant overlords. Yes, absolutely. I think it’s a good thing on net, first of all. Technologies are fun; it’s fun to learn these types of things. I like SQL on Linux. I like this idea first because of the licensing fees that I get rid of, where I don’t have to pay for Windows Server licensing. Secondly, it helps pull in that marginal Unix shop, that Linux shop that needs a real database platform, where MySQL isn’t going to cut it.
Carlos: I guess I should say, Microsoft actually put out some information. The number one database platform for “big data or analytics” is currently MySQL. SQL Server’s number two. I think that Microsoft would like to just provide that solution end to end, to be able to say, “Hey. I’m there.”
Kevin: You’ll see the same thing with this move to Azure Data Lake. You’ll see this thing where we’re moving toward [inaudible 00:29:33] clusters, we’re moving toward things that 5, 6 years ago were scary. They’re not as scary anymore. I think it’s a good thing. It’s a good thing for us in the community to learn as much as we can, embrace what we like, poo poo all the things that are terrible. Sure. I don’t get paid by Microsoft, so I’ll trash the stuff that sucks. It’s nice to see them moving in this direction. It’s better than “we know what the world wants, and the world wants NT 3.5.”
Speaker 11: I was talking to one of the members of the SQL Server on Linux product team, and one of the concerns they had was, it’s kind of to your point, as SQL Server ties its future to the future of Windows Server, and more people are moving over to open source server platform workloads, they made an intentional bifurcation move to get SQL Server uncoupled from a reliance on Windows Server. You’re going to continue to see more and more improvements in the SQL Server on Linux features, you’re going to see more of the same … You’re going to see parity, basically, with what you see on Windows Server. They’re still working on things around SSIS, SSAS, SSRS, but all the core security features are there, and it’s all quite intentional, to embrace your ant overlords.
Carlos: I just want to point out [inaudible 00:30:56], we’ve actually reached out to the PM thanks to Tim, and we’ll be having a session dedicated to that, doing a deeper dive on it.
Allen: Not specific to your question, Carlos. Again, this is Allen White. I’ve been in IT for 42 years now. Over the life of my career I have found that technology changes enough that you almost have to re-tool yourself on average every five years. Now, fortunately, five years from now I’ll be 67 and I can retire and spend my time on a golf course, but this is not anything different from what I’ve gone through. When I started in IT, personal computers didn’t exist at all. You always have to be learning. You can never, ever stop learning if you’re in this field. You should probably do it in any field, but especially in this field. Never stop learning. Right now the big things are services, right? The whole push towards analytics, just like you’re talking about. Just be prepared to always learn and then you’re fine. You just move along as things grow.
Tim: I think if nothing else, one of the goals, to plug the podcast a little bit, is to help data professionals, or developers even, to be aware of some of these other technologies. Just how it influences them; we’re not necessarily looking to master all of these things. You want to at least be able to address them and to understand, “Oh. When I hear the word R, I’m talking about a statistical language and not the letter of the alphabet.”
Carlos: Not to put too fine a point on the learning new things, but when you speak with Travis Wright, he’s going to talk to you about the importance of containerization. It gives them the flexibility to move containers around across platforms, or maybe I’ve used that inappropriately. He’s going to ask you about the popularity of that in your user group.
Tim: Right. I think that’s a whole new area. At least I’m speaking personally for myself now. Containers, I hear about that in the application and dev space. I’m like, “Oh. My gosh, I’m a little bit nervous.” I think it will. It’s going to come our way.
Kevin: Everything that you heard from people saying for the past several years, learn PowerShell, learn PowerShell or you will lose your job. Containers are the same thing. Learn Docker, learn Docker or eventually you will lose your job. Windows Server 2016 will have Docker support. Windows 10 Professional edition has Docker support. We’re seeing this already on the Windows side. We’ve seen it on the Linux side for several years now. It will get more popular on this side of the house as well. It is time to get ahead of that, because otherwise these containers will come up, you will have no clue what you’re doing with them, and bad things will happen. Unless you’re Allen and you’re golfing.
Carlos: I guess I should say, being at the Ignite conference, Docker was a gold sponsor. They had a big booth at Ignite. They were a major player in the center section kind of thing. Yeah. I agree, particularly because Microsoft has adopted it in Azure. It’s going to make its way downstream. We’re going to start to hear more about it.
Speaker 12: Something to complement what Allen said. He said you have to re-tool yourself every five years. I’ve been working with SQL for about five years, which apparently means that I’m going to have to learn everything new. That’s what I was about to get to. I did two presentations on Power BI. You talk about the way SQL and data are going, and it scares me a little bit, because Power BI is a perfect example. Every week they’ve got new features they’re releasing, and every month the tooling, the desktop tooling, it’s different. You look at Azure, you look at Power BI, you look at SQL Server Management Studio: it changes every month now. I haven’t been in the field for 42 years, so I don’t have the perspective to go, “Oh. This is going to be okay. This is normal.” To me, as someone who’s new, it feels like it’s accelerating, and I don’t know how you keep up. I listen to five different podcasts and I watch Pluralsight and all this kind of stuff, but I just don’t know how you keep up with how quickly things are changing.
Carlos: I think the days of being the master of everything are over. I think Johnathan’s going to follow up with that.
Johnathan: Yeah, that’s actually where I was going to go with it too. Everybody knowing everything is going to be reserved for MVPs like Kevin, maybe Allen, but he may be golfing. I don’t know. No, no. I think the days of everybody knowing everything and being masters of everything are over, because there’s just so much coming. It gives you the benefit, too, of being able to choose: where do you want to specialize? You can be a high-end BI professional and know nothing about [inaudible 00:35:53], because there’s just so much other stuff as well. We’re to the point now where you can specialize in master data management and have a great career, make a lot of money, and not know anything about anything else. Don’t feel like you need to know everything. Find something that you like and run with it. Let that be your thing and just run with it. Don’t feel like you need to know everything.
Kevin: Following up on Jonathan, I absolutely 100% agree.
Jonathan: That’s right.
Kevin: Let me give you an analogy. You had [inaudible 00:36:19], who was probably the last mathematician who knew everything about math. Part of his knowing everything about math was the fact that he opened up so many doors that future mathematicians came up upon. There will never be another [inaudible 00:36:35] because nobody will understand the totality of mathematics to the extent that he did. It is a good thing, because we’ve learned so much more since he opened so many of those doors. In this data platform world, yeah, you could be the guy who knew everything about SQL Server 7 or SQL Server 2000. You could have everything in there, but things have changed. We’ve gotten more things. You will not be the person who understands the totality of SQL Server the way you could with SQL Server 7; you will not be the guy who knows how everything functions under the covers. Don’t try to be that guy, don’t want to be that guy, don’t aspire to be that guy. Be the guy who knows something, maybe a lot about something and a little bit about a lot of things. Don’t think that you have to be omniscient about the entire platform.
Carlos: I think one potential new horizon there, and this goes back to our CEO panel, is that executives want IT to be more familiar with the business. That may be the next hurdle that we have to embrace-
Speaker 7: [inaudible 00:37:47].
Carlos: You’re going to know these niche topics, but then you also need to understand how it applies to the business so that the business can become more profitable. You create value there, right, rather than just performing an IT function. Very good. How we doing on time? We have 8 minutes. Final thoughts. Dave, you want to?
Dave: I don’t know if it’s a discussion for the podcast, but I’ll ask it. Because of everything else that’s going on in our environment and trying to keep up with all the new stuff, I’ve been trying to look at Azure lately, and some of the stuff that I found in Azure that would make it difficult for us is if you try to use linked servers: instead of a virtual machine, the Azure SQL database won’t let you use linked servers. When you have data in disparate places that you’re trying to pull together for a report, you’re hosed. You end up having to put them all in the same instance or something, and then there’s some issues with, if you refresh your data every week, if you want to take your data and load it into Azure you have to create a whole new database each week and then rename the database or something in order to get all your old links to work again, because you can’t reuse the same database name when you do a restore from an MDF or something. [inaudible 00:39:14].
Carlos: I’m not familiar on the restore, that’s new to me. I guess I’d have to find out more about what that topic is. I guess-
Dave: [inaudible 00:39:29].
Carlos: Let’s make sure that we’re talking about the right things here. Are we talking about a VM that lives in Azure or are we talking about Azure SQL Database? There is no “DR” to Azure SQL Database, however you can … There are some HA-type features and you can make those connections, but you can take a database and push it into Azure. I guess I need to figure out more of that.
Allen: I thought when you do a URL-based backup and you do a URL-based restore in the same instance, you’re right, it doesn’t let you restore over with that URL-based backup.
Dave: You need a new name and then you have to drop the original one and rename the new one to the original name, or something like that, is the way I believe I read about it. Again, I just didn’t know, because I do so much stuff that’s more reporting based. I have to report certain stuff to the department of human services. I just restore databases that are production and OLTP onto a server where I can then do something with it without completely destroying the OLTP environment. Now, do I restore to an Azure database? Well, if I have to re-create the database using Azure SQL Database, then that’s a pain. Whereas that tells me that I need a VM instead, and on a VM I have to do the copy to the VM area, blob area, whatever, and restore from that in order to not have to change everything. I didn’t know if anybody had run into that, resolved that issue.
Carlos: I guess the pain point really is that that database then relies on other databases that you’d have to get that networking for; that’s really the pain point. Restoring to a different name shouldn’t really be a problem. It’s just all the other downstream components that you’d need, the connections.
Tim: Yeah. One extra step: clear out the connections to the old one, drop it, and then rename. Yeah.
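[Editor’s note: the restore-under-a-new-name-then-rename workflow the panel describes can be sketched as generated T-SQL. This is a hypothetical illustration only; the database name, backup URL, and helper function are made up, and you would want to verify the exact RESTORE FROM URL syntax against your SQL Server version before using anything like it.]

```python
# Sketch of the workflow discussed above: restore a URL-based backup under a
# temporary name, clear out the old database, then rename the copy back.
# All names here (ReportingDB, the blob URL) are illustrative assumptions.

def build_restore_rename_sql(db_name: str, backup_url: str,
                             temp_suffix: str = "_new") -> list[str]:
    """Return the T-SQL statements, in order, for the rename workaround."""
    temp_name = db_name + temp_suffix
    return [
        # 1. Restore the backup under a temporary name, since restoring
        #    over the existing database by the same name is refused.
        f"RESTORE DATABASE [{temp_name}] FROM URL = '{backup_url}';",
        # 2. Clear out connections to the old database, then drop it.
        f"ALTER DATABASE [{db_name}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;",
        f"DROP DATABASE [{db_name}];",
        # 3. Rename the restored copy back to the original name so
        #    downstream links keep working.
        f"ALTER DATABASE [{temp_name}] MODIFY NAME = [{db_name}];",
    ]

if __name__ == "__main__":
    for stmt in build_restore_rename_sql(
        "ReportingDB",
        "https://example.blob.core.windows.net/backups/ReportingDB.bak",
    ):
        print(stmt)
```

The point of the sketch is just the ordering Tim describes: connections cleared and the old database dropped before the rename, so the original name is free to reuse.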
Kevin: This probably won’t solve your exact problem, but this is an opportunity to plug Power BI. There’s a solution for a lot of this stuff, because you’ve got the on-premises data gateway and you’re able to pull in your data. As long as it’s something where you’re doing more analytics and you’re not trying to do all your OLTP kind of stuff, but you’re pulling in data that’s highly compressible, it’s able to just pull in the data for you, send it off to Azure, it’s running in memory, it’s all compressed and everything, and you don’t have to do anything special. You don’t need any inbound ports, you just set it up and go. You’re going to see more and more hybrid solutions like this where, instead of trying to ship everything up, they provide a bridge to work with the data that’s on prem. I think you’re going to see more and more of that instead of the old way of, “I need to copy everything, put it on a bus, and ship it over to their data center” kind of deal.
Carlos: Good stuff. Okay, final thoughts? No. Awesome. Okay. Well, thanks everybody. Let’s get a big round of applause. Here we go. Everybody clap really loud, as loud as they can. Here we go. 1, 2, 3. All right.