Episode 80: Containers

When was the last time you deployed code and then found an issue with the version of a stored procedure, a setting, or even a service pack?  While we data folk don’t generally have as much trouble with environment issues as our developer counterparts, it can still be a big deal.  Spinning up a VM may not be such a big deal anymore; however, most of us still have to request one and wait.  What if you could take care of the OS and the SQL Server and not worry about setting up another development environment?  Our guest today is Andrew Pruski, and he talks to us about how he is using containers to support his environments and the flexibility they provide to him and his co-workers.

While Linux containers seem to get lots of love, one unique thing about Andrew’s setup is that he is running Windows containers with older versions of SQL Server.  What is cool to me is that there are tools out there that can help those of us running Windows get up and running without having to wait on our infrastructure to upgrade to Windows Server 2016.  If you are using containers, I would love to hear about it.  Use the hashtag #sqlpodcast and let us know!

Episode Quote

“Containers are getting more and more popular, with more and more diverse software that you can run in them.”

Listen to Learn

  • What containers are and how SQL Server can live in them
  • The benefits of containers
  • How Andrew gets containers to run on Windows
  • Resources for more information

Andrew on Twitter
Andrew’s blog
Windocks
SQL Server Containers

About Andrew

Andrew Pruski is a SQL Server DBA with over five years in database development and administration. Originally from Great Britain, he now works in Ireland. He is a regular speaker at SQLSaturday events.

Transcription: Containers

Carlos: Andrew, welcome to the show.

Andrew: Hi guys! Thanks for having me.

Carlos: Yes, so one of the reasons I reached out to you is that our topic today is containers. You had put out a series of posts on your blog about containers, so I wanted to have you on because I think containers are going to impact database administrators in the coming months, if not years. Most of what we’ve been hearing about containers comes from the Linux side, but it seems like the Windows side is starting to speed up as well. The analogy I heard was that containers are to environments much like VMs were to servers: carving them up and making them their own little islands, if you will, so that you can then move them around much more easily. That is what the container idea brings to a specific application, and in this case we are talking about SQL Server in the container. Is that a fair representation or understanding of containers? Why don’t you start us off and take us through that definition of what containers are and how you’ve been using them?

Andrew: Yeah, definitely, I’d say that’s a good analogy. They’re on the horizon; especially on the Linux side they’ve been out for longer as a more established technology, but the Windows side is coming up fast. In, I’ll say, months or years we’re going to see containers getting more and more popular, and more and more diverse in the amount of software that you can run in them, especially with Microsoft announcing support for SQL Server in containers. It’s going to be an interesting time over the next year to see what happens. The way I’ve been looking at containers as opposed to VMs is: if you have a lot of different apps that you want to run in an environment, maybe a VM is fine; but if you have the same app that you want to run multiple instances of, then containers can definitely be the way forward for you.

Carlos: Right, so it’s kind of that encapsulation of an application, so that I can then move the application around.

Andrew: Yup, definitely. The portability features it gives your apps are very, very good. I mean, with SQL Server I think the main advantage you’re going to get with containers is the ability to spin up an instance in a very, very short period of time. So if you need quick, throwaway instances of SQL Server to do some development or testing on, containers are definitely the way forward.

Carlos: Right. So talk to us about how you started integrating containers into your environment.

Andrew: My QA and dev departments have a couple of old servers sitting in a backroom somewhere that they’ve installed a hypervisor on, and they’ve got access to it. They’re constantly building VMs, testing their apps, and then blowing the VMs away, and they do this every single time. Each time they do that, they’re also installing a local instance of SQL Server. Now, the servers they’re running on aren’t exactly high spec; we’re talking 7.2K disks and CPUs that are five years old, probably older, so the install time for these VMs was taking up to an hour. They’d install SQL and then start restoring all the databases from baselines kept in TFS, and this process wasn’t exactly reliable either. When it did fail, they’d have to go trawling through error logs and whatnot. They were looking at this process and always thinking there has to be a better way, but what to do? This process had been in place for a while, and I didn’t know what other options to give them until, probably about this time last year, I started reading about containers and what options were out there. I started to look around at the companies in that space, Docker being the main, prevalent one, and researching containers and how SQL can integrate with them. What I basically did was go off and have a word with a couple of my system admins to ask if there was a spare box lying around that I could provision; I wiped it, installed an OS on it, and then got containers up and running. The idea is that instead of these guys building their VMs, deploying their apps, and then installing SQL Server, they still deploy their apps to their VMs, but the apps now connect to a container spun up on a separate host, our container host. And the benefit we saw was that we went from an install time of 40 minutes to an hour to spinning up a container in about two minutes. That’s a new instance of SQL Server with all our databases available.
Bear in mind they are small databases, but they’re the same databases we were using with the install process. So from 40 minutes to about two minutes: that’s a huge saving.
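For anyone who wants to picture what that two-minute spin-up looks like, here is a rough sketch using the Docker CLI on a Windows host. The image name, port, and password are illustrative assumptions, not details from the episode:

```shell
:: Pull a SQL Server Express image for Windows containers (image name is an assumption)
docker pull microsoft/mssql-server-windows-express

:: Run a container, mapping host port 15789 to SQL Server's default port 1433
docker run -d --name qa-sql01 -p 15789:1433 ^
    -e sa_password=Str0ngP@ssw0rd -e ACCEPT_EULA=Y ^
    microsoft/mssql-server-windows-express
```

Moments later, `docker ps` shows the container running, and the instance is reachable through the host’s port 15789.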

Carlos: Right. Now, does that require an application change? You mentioned that instead of pointing them to the database, they could point to the container. Does that mean that instead of giving the application port 1433 and a server name, you’re giving it a container name now? Is there another layer there?

Andrew: No. One of the prerequisites I had was that I didn’t want them to change their process too much. All I wanted to do was remove the install of SQL Server and have everything else exactly the same. So I’ve written a PowerShell script that goes off and creates the container by calling the Docker daemon, and then it creates a bunch of SQL client aliases on the application servers. All our applications use DNS entries to contact the server, so instead of using DNS, the client alias overrides the DNS entry with the new container port number, and the app doesn’t know anything about it. The app still thinks it’s talking to an instance of SQL when really it’s talking to a container.
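As a sketch of what such an alias amounts to under the hood: SQL client aliases are registry entries, so a script can register one that points the existing server name at the container’s mapped port. The names and port here are hypothetical:

```shell
:: Redirect the name "qa-sql01" to the container host on TCP port 15789
:: (DBMSSOCN selects TCP/IP; 32-bit apps need the same value under Wow6432Node)
reg add "HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo" ^
    /v qa-sql01 /t REG_SZ /d "DBMSSOCN,containerhost,15789" /f
```

The application keeps its old connection string; the alias silently reroutes it.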

Carlos: Got you.

Steve: Ok, so then to take an example: what I do is work on the Database Health Monitor application, and part of that is testing it on lots of different versions of SQL Server, everything from 2005 all the way through 2016, and different installs from Express to Standard to Enterprise. To do that I have a number of VMs that I’ve spun up, and I also have stacked instances on each of those VMs. So it sounds like if I were using containers, instead of having all those VMs and stacked instances I could just have a container that I spin up with that test environment, do what I needed to do, and then turn it back off again. Am I following this correctly?

Andrew: That’s exactly it, yes. Instead of having your instance there, you could literally just kick off a little script that says: create me a new container from a saved image that I’ve got especially for this one test purpose, and off you go.

Steve: Ok.

Carlos: So again, the components there are: there’s a host server, right? That host server has an operating system, and then I’m installing the containers on top of that. And my application could be a container and my database could be a container, and they can talk to each other.

Andrew: Yes, absolutely. Instead of having another layer with a hypervisor there, you have the base OS and then the container daemon running, which creates your containers for you. It’s basically the service that you call to say: spin me up a container.

Carlos: Very cool! And so we talked about Docker being kind of the leading technology there. It seems like, as we’ve mentioned, all the love was going to the Linux world, but you’re doing all of this in a Windows environment.

Andrew: I am. One of the things that tripped me up at the start of the year was that we’re getting support for containers on Windows, which is actually brilliant. Microsoft announced their partnership with Docker way back in 2014, so we’d all been sitting there thinking, come on then, when is it going to happen? But it started to progress with the recent Windows Server 2016 community technology previews, and Windows 10 as well: if you’re on the Insider preview track with the Enterprise Edition, you can run containers on Windows 10. So yeah, as you said, Linux was getting all the love, and we’re finally starting to see some of that come true now. We’re getting the same functionality those guys have been playing with for a while.

Carlos: Right. So now, what are the apps or components you’re using on the Windows side to get that done? Because from an operating system perspective, it seems like Windows Server 2016 was where that finally arrived.

Andrew: Yeah, that’s the operating system where Microsoft first announced the supported feature. It’s actually an in-built feature that you can switch on when you install the operating system, just like anything else, like .NET, and away you go. You could be up and running with it in less than 10 minutes.

Steve: So when you say less than 10 minutes, is that less than 10 minutes for someone who is a pro with Docker, or less than 10 minutes for someone who’s just getting started?

Andrew: Well, it literally is: enable the feature and run, I think, two scripts, and you’ve got your service up and running, so it’s very, very simple.
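The “two scripts” on Windows Server 2016 look roughly like the following; the module and package names follow Microsoft’s published guidance of the time, but treat the exact commands as an assumption:

```shell
:: From an elevated prompt: install the Docker provider module, then Docker itself
powershell -Command "Install-Module -Name DockerMsftProvider -Repository PSGallery -Force"
powershell -Command "Install-Package -Name docker -ProviderName DockerMsftProvider -Force"
:: A restart enables the Containers feature and starts the Docker service
shutdown /r /t 0
```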

Steve: Ok. So today I know that if I’m working with a VM to make my own environment, I remote desktop to that VM, do whatever I need to with it, and then shut that VM down later. With containers, is it similar in that you can connect to that container and then have a choice of saving what you’ve done or aborting it and going back to the original state?

Andrew: Yeah. What you can do is initially connect to the Docker engine, the Docker daemon, and create your own base images from templates. You have the base OS, which I think is Windows Server Core on 2016, and then you can get a SQL image, download that, and spin up your SQL container. Then what I did was install a load of databases, shut that container down, and save it as a new image that I could then create new containers from. So instead of having to restore databases into each container every time, when I spin a container up those databases are already in the state I want them in.
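The save-a-container-as-a-new-image step Andrew describes maps onto Docker’s commit workflow; here is a sketch with made-up container and image names:

```shell
:: Stop the container that now has the restored databases in it
docker stop qa-sql01

:: Save its current state as a reusable image
docker commit qa-sql01 sqlbase/databases:baseline

:: New containers from that image start with the databases already in place
docker run -d --name dev-sql02 -p 15790:1433 sqlbase/databases:baseline
```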

Carlos: Well, I’m not sure I quite understand that, because I thought the databases were not actually part of the container. Maybe the data is, but I thought they were separate: that I could move my container around and those database files would not be part of the container.

Andrew: Yup. Ok, the portability feature of containers is something I haven’t actually played with that much, so I’m looking at getting into it. What I’ve mainly been concerned with is getting an instance of SQL up and running very quickly. With the software I’m working with, which I mentioned a bit, called Windocks, I don’t think you get that portability functionality, because it is obviously dependent on the I/O subsystem. One thing I have tried is that with the Windows containers, the ones supported on Windows Server 2016, the database and the SQL instance are all wrapped up in the container, which is I/O independent. I need to double-check that, I’m afraid, guys, but you can basically push your image up to the Docker Hub, save it there, and then pull it down onto a different server, and the databases will all be there.

Carlos: Right, and that matches what I’ve been seeing in the demos; it’s part of what they’re talking about. From an application perspective it seems a little easier to understand and grasp, but from the database perspective it seems like there’s still a bit of a leap there.

Andrew: Yes, it’s still very, very new. Apps, obviously, are stateless, so it doesn’t really matter where they live; they don’t depend on the I/O subsystem. But of course SQL Server does depend on the I/O subsystem, so it takes some work getting your head around how it’s all actually working in the background.

Carlos: Right. You also mentioned the Windocks tool, and one of the things you like about it, the reason you’re using it besides the fact that you’re on Windows, is that it supports previous versions of SQL Server.

Andrew: Yes, that was another thing that tripped me up from the start. With this new functionality available on Windows Server, what you’ve got in the Docker Hub at the moment is Express Editions of 2014 and 2016, and then the full edition of the next version of SQL Server, which I think has all the features that are available on Windows Core, so it’s a pretty fully fledged database engine there. But again, it’s a higher version of SQL than anyone is particularly using right now. There’s no point in getting my QA and dev people testing on a higher version than I have in production. We’re running Windows Server 2012 R2 with SQL Server 2012, and as far as I know we have no plans to upgrade in the next year anyway, so I had to go out and start looking: is there a software solution out there that will give me SQL 2012 running in containers? I eventually found this company called Windocks, who have basically built a custom port of the open-source software that Docker put on the market, which allows earlier versions of SQL Server to run in containers on earlier versions of Windows Server; I think it’s 2008 upwards in both cases. I was pretty chuffed with that because it was exactly what I needed, so I got in touch with them, and they’ve got a free Community Edition which I went and grabbed. I touched base with my system admins, grabbed my old beat-up server, installed Windows Server 2012, and followed the Windocks installation instructions, which again were pretty simple; they’ve even simplified them more since I had my first run at it. It’s basically a PowerShell script to get the engine up and running, and once you’ve done that you’re good to go. That has allowed us to get SQL Server 2012 in containers on Windows Server 2012, so I’ve got my dev and QA teams now working with containers at the same version and patch level as my production environment.

Carlos: So what’s the difference between the Community Edition and in the paid version?

Andrew: It’s basically the number of containers that you can have.

Carlos: Ok, so what’s the Community Edition limit you to?

Andrew: I think it limits you to two. It’s literally just a gateway to show you how the functionality works and what you can do.

Carlos: Got you.

Steve: Ok, so in that solution, if you’re downloading the container with the SQL Server version matching what you’re using in production, how is the licensing done? Are you then having to put your own license keys in for SQL Server?

Andrew: We use the Developer Edition, so we neatly sidestep all of that.

Steve: Oh, perfect.

Andrew: I think the way it’s going to work with, say, Standard (I think you’d be nuts to go with Enterprise in a container at the moment) is the same way you would license your host, and you can build from there.

Carlos: Yeah, I think that’s a great way to drive adoption and get comfortable with it: using containers in those lower environments, because they do tend to move. You want to spin them up, you want to take them down. You want to shake and bake them a little bit more than your production environments, obviously, and that’s what containers are really, really good at as well. That’s an interesting approach.

Andrew: Definitely treat containers as throwaway objects. If something goes wrong with one, don’t spend time restoring databases into it and fixing things. Blow it away, run a script, build a new container.

Carlos: Right. So now, what happens if you want to add a new database to the container? What’s that like?

Andrew: It handles and works exactly like a normal instance of SQL through Management Studio. You don’t get the Agent, so that’s the only thing you’ll miss. But you create a database, and if you then want to replace your base image, you stop that container, blow away your original image, and save the container as a new image. And you’ve got an updated image with your new database. That’s part of one of the reasons we went for containers instead of deploying to VMs: the guys were deploying their databases from baseline backup files kept in TFS and then applying a load of updates to each and every database to get it to the level of production. Whereas what we do now is: every week we have a certain number of releases, and at the end of the week I deploy all our releases to our base image, update that image, blow the old one away, and save a new one. Then the next week they can all blow their containers away, build new ones in a couple of minutes, and they’ve all got up-to-date containers exactly the same as the production environment.
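That weekly refresh cycle can be sketched as a handful of Docker commands; the container and tag names here are invented for illustration:

```shell
:: End of week: commit the week's releases as the new base image, drop the old one
docker commit release-staging sqlbase/databases:week02
docker rmi sqlbase/databases:week01

:: Next week, each tester discards their container and rebuilds from the fresh image
docker rm -f dev-sql02
docker run -d --name dev-sql02 -p 15790:1433 sqlbase/databases:week02
```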

Carlos: Got you.

Steve: So for that development environment, you’re able to get those containers up to date a lot quicker than if you had, say, a standalone SQL Server Developer Edition installed on every developer’s machine. You’re able to just grab the latest container, and then everyone is on the same baseline rather than each developer getting out of sync somewhere.

Andrew: That’s it, yeah. We can say to them: you’ve all got your dev VMs pointing to the containers; your containers are now out of date; blow them away and build new ones, and every single one of you has a replica of exactly the same image, which is up to date with production.

Steve: Yeah, I could see how that alone would just save a massive amount of time.

Carlos: And confusion.

Steve: Yup.

Andrew: The deployment worked on my VM but it’s not working on yours; why is that? Because there was a mismatch there, which we’ve now managed to get rid of.

Carlos: Yeah, exactly. How many times have you heard that? Right: “works on mine.”

Steve: Yeah, that’s one of the standard developer lines, I thought.

Carlos: So I guess we’ve talked a little about some of the pieces you’ve used. Any other advice as far as getting up and running?

Andrew: It is very simple to get up and running, but bear in mind this is brand-new technology; we’ve had a few issues integrating it into our own processes. The bit of advice I’d give to anyone looking to go with containers is: don’t go with containers just because you want to use containers. Have a look at your business’s processes and practices and see how containers are going to fit in there. Don’t try to force containers into them if it’s not going to be right, because they are a great technology to work with, but you really need a business case for them.

Carlos: Ok. But I’d say that in the scenario you just described, almost everybody has that circumstance with their lower environments. Well, assuming they’re development shops, right? If you’re installing SharePoint, for example, and that’s your application, then maybe it wouldn’t necessarily apply. But I think in most development environments you could make a case for it.

Andrew: Yeah, agreed. I would suggest having a look and seeing what they can do for you. I’m very guilty of seeing a technology I want to use and wanting to play with it, but I really had to put a business case together to get this in. It’s been working pretty well. I mean, we’ve had some issues; it’s new, and sometimes a container that’s up and running crashes and we have to redeploy. But we’ve got into the habit of throwaway objects: get rid of them, rebuild, and you’ve got another one straight away.

Carlos: Exactly.

Steve: Alright. So containers have been around with Linux for much longer than they have with Windows, obviously, because this is brand new on Windows. But are there any real conceptual differences between containers in Linux versus containers in Windows, other than the obvious operating system differences?

Andrew: I wouldn’t say there are any conceptual differences. The main thing is that, as you said, they’ve been around on Linux a lot longer, so that technology is a little bit more mature in some of the features. I think some of the networking stuff isn’t available on the Windows side yet, but it’s really just a case of “watch this space,” I think.

Carlos: Right.

Steve: So on the networking side: I know if I spin up a VM with a hypervisor, it creates a virtualized network interface for that VM. Is there something similar for containers, where they get a virtual network interface, or do they use the OS’s network card directly?

Andrew: I think they use the network card; I’m not too sure there, I’m afraid. The way I’ve been working with the containers is that each SQL instance listens on its own port that you specify when you create the container. You map a host port to the container port, so locally you connect on the host port, and when you connect remotely you use the individual port you specified when you created the container. That’s the one thing we did when building the containers, with the aliases I mentioned: we use aliases because with something like a hosts file you can’t specify a port, but with an alias you can say connect to this server on this port, and that takes them straight into the SQL instance.

Carlos: Right, so that mapping happens at the container host, where you can specify: when the incoming port is this, send it here.

Andrew: Yes. It’s just a little switch when you create the container: run, create a container from this image, map this port to this port, and off you go. It does mean there are differences when you connect: if you’re on the host where the containers are running, you’ll be connecting on different ports than if you were outside of that host.
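In practice the connection difference looks something like this; the server names, port, and credentials are hypothetical:

```shell
:: On the container host itself, connect through the mapped host port
sqlcmd -S localhost,15789 -U sa -P Str0ngP@ssw0rd -Q "SELECT @@VERSION"

:: From any other machine, use the host's name plus that same mapped port
sqlcmd -S containerhost,15789 -U sa -P Str0ngP@ssw0rd -Q "SELECT @@VERSION"
```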

Carlos: Ok, so for someone like me who has not used containers before, but it sounds interesting and I want to go give it a try: is there any advice you’d give, anything to watch out for or be aware of on that first try?

Steve: Well, just to jump in here before Andrew: I think, obviously, on the Windows side you have the Windocks Community Edition to get started with. And then I would jump to Andrew’s blog and start there.

Andrew: I mean, there are blog posts there, Parts 1 through 4, that will take you through the initial setup and then playing around a little bit, and then actually working with the Docker Hub as well, which will allow you to push and pull repositories and things like that.

Carlos: Alright.

Steve: And that site is dbafromthecold.wordpress.com, is that right?

Andrew: That’s it, yes.

Carlos: Yeah, and we’ll have that available in our show notes as well which will be available at sqldatapartners.com/containers.

Steve: Perfect. Well, I think that wraps up the questions I’ve got about getting started.

Carlos: Ok, awesome. Well, great conversation and interesting technology. Thank you for coming on and walking us through some of the use cases you’re using it for, particularly on the Windows side, because admittedly that’s where I feel more comfortable. I’m interested in trying Windocks, kicking the tires there, and seeing how it goes.

Andrew: Yeah, thanks for having me, guys. There are a couple of things I’ll come back on, especially about the network adapter; those were interesting questions I wasn’t too sure about. And I also want to double-check the upload and download of containers to the Docker Hub and see whether it just keeps the state of the SQL instance or takes the databases with you. So I can come back on those if you’d like.

Carlos: Alright, yeah, that would be great.

Steve: Or I’ll have to keep an eye on your blog and see if it shows up there, I suppose.

Carlos: So shall we go ahead and do SQL Family?

Steve: Yeah, let’s do that. So, how did you first get started using SQL Server?

Andrew: I started using... well, the first job I got was with local government, coming out of university, and it was writing Crystal Reports. I had no idea what the backend was; I was just writing SQL queries. I thought it was Oracle at first; it was actually SQL Server. Things were running slowly, and I started doing a lot of googling around how to speed things up, basically how to do certain things with SQL that I didn’t know. I eventually stumbled on sqlservercentral.com and found this huge wealth of information about an entire industry and profession I’d had absolutely no idea about. It piqued my curiosity from there, and eventually I decided that I wanted to become a DBA.

Carlos: Very cool. How then did you get your first job as a DBA?

Andrew: I think completely out of luck. I was very, very inexperienced. I was applying for jobs and eventually got an interview at a place called the United Kingdom Hydrographic Office. The interviewer there gave me a chance and gave me my first job as a DBA. That was about 5½ years ago now.

Carlos: Very cool, ok. Now if you could change one thing about SQL Server what would it be?

Andrew: I want a dark theme for Management Studio, but that’s not going to happen anytime soon.

Episode 79: SSIS Catalog

Although the SSIS Catalog DB is created with Management Studio, it does not behave like other databases.  Our conversation in this episode revolves around the catalog, SSIS packages, and some of the complexities of migrating packages.  Steve and I are excited to chat with our guest Andy Leonard about his thoughts on the catalog and how this feature provides some really interesting benefits for ETL architecture.

Episode Quote

“[The SSIS Catalog] is really there to facilitate a lot of Enterprise functionality that is needed in any data integration engine you use.”

Listen to Learn

  • The benefits of project deployments
  • Some of the issues migrating packages
  • Why restoring the catalogdb can be problematic
  • The various levels or components of a package
  • How you can view reports without going into SSMS

Restoring the SSISCatalog DB
Integration Services stairway series
WorldWideImporters Sample Database
DILM Suite

About Andy Leonard

A former chicken farmer, Andy is now a Data Philosopher at Enterprise Data & Analytics; an SSIS trainer, consultant, and developer; a Business Intelligence Markup Language (Biml) developer and BimlHero; a SQL Server database and data warehouse developer; and a community mentor, engineer, and farmer. He is a co-author of SQL Server Integration Services Design Patterns, and author of Managing Geeks – A Journey of Leading by Doing and the Stairway to Integration Services.  He lives with his family in Farmville, Virginia.

Transcription: DILM Suite - SSIS Catalog Viewer

Carlos: So Andy, welcome to the program.

Andy: Thank you!

Carlos: We wanted to have you on the show because, for one, you’re pretty close to Richmond, and of course you’re Richmond famous, if nothing else, for helping start the local group; we’re very appreciative of that. You’ve been doing quite a bit in SSIS, written several books on it, and have been putting together the DILM Suite, which we will talk about here in a minute. But ultimately our conversation today is going to revolve around the SSIS Catalog. So let’s jump into it: maybe give us the 101 overview of what the SSIS Catalog is and what it’s going to do for us.

Andy: Well, first I want to thank you, Carlos, for that warm introduction. I’ll just say that there are a lot of people involved in the Richmond, Virginia SQL Server community and in our technical community generally. We have a great collection of really different communities there, from JavaScript across to the .NET community to SQL Server, and as you know there’s a lot of interaction between the groups. I’ll also give you props for the amazing work you’ve done in the past few years after stepping in; I stepped away, and I’ve been an attendee enjoying a lot of the events you all have put on recently. So thank you for that; it’s not just us, there’s a huge team of people involved, and we could name names, but they know who they are. With that said, I’ll jump into Catalog 101. This is aptly timed: Kent Bradshaw, a friend and co-worker at Enterprise Data & Analytics, and I put on a free webinar earlier today on this very topic, an introduction to the SSIS Catalog. So what is the catalog? It’s an SSIS framework. What do I mean by that? It manages SSIS package execution, logging, and configuration. Before the SSIS Catalog was introduced in SSIS 2012, a lot of people wrote their own, and I was one of those people. But it’s really there to facilitate a lot of enterprise functionality that is needed in any data integration engine you use. It doesn’t matter if it’s Informatica or SSIS or one of the others: you always need some kind of framework. You need something to manage execution, something to log, something to manage configuration. So at a very high level, that’s what the catalog is.

Carlos: Now, that’s an interesting topic. I didn’t realize that you could use the Informatica packages with the SSIS Catalog.

Andy: You cannot but Informatica has its own functionality built in, its own framework, so it does a similar set of functions as the SSIS Catalog.

Steve: So then, with the SSIS Catalog, people can still run SSIS packages without it but you get a lot more functionality in the logging and capabilities there if the catalog is installed and used. Is that correct?

Andy: Yeah, you absolutely can, Steve. The way Microsoft built SSIS starting with 2012, they realized they were offering this big shift in paradigms for execution, so they made the existing way of operating backwards compatible and very easy to accomplish inside the new paradigm. What they did was give the old way of executing packages a name. They called it the package deployment model, and that is the old way of doing things. And then they created a new default, the project deployment model. In order to execute packages in the catalog you need to have them in that project deployment model, and that's a property of the Visual Studio solution, the SQL Server Data Tools or SSDT solution that you built. And they created this really rich, robust wizard, and I've never seen it fail once, for walking through importing packages into SSIS 2012, 2014 or 2016. They come in as the package deployment model if they are earlier than 2012. Once you walk through that wizard they're in the package deployment model, and then after that, if you want, you can convert that project to the project deployment model and deploy it to the catalog and run it there. But you're right, Steve, you can do everything the way you've been doing it for 10 years if you'd like, even in 2016.

Steve: However, that sounds like there's a lot of benefit to using the catalog and not doing it the old way, or the package deployment way.

Andy: There are some benefits to doing it that way. I'd say logging, really the way the logging engine has been architected for the catalog, that was a big step in the right direction. Microsoft decoupled a lot of things around the way that logging works, and just because of that you get a package performance boost of anywhere from maybe 20% to a couple hundred percent. So you can see a performance boost just by moving projects from, say, running in the file system to running in the SSIS Catalog, and again, it's all around that logging.

Steve: Interesting, that’s a good point to know then, so if you’re having performance issues it might make sense just try moving them into the SSIS Catalog and see if that improves things.

Andy: Absolutely. You know, a lot of people schedule jobs in SSIS, especially if you're running against SQL Server, and if you're using SSIS you have to have SQL Server. Although the engine itself is free, and you can develop and run anything you'd like in the debugger, in order to run it outside of that development environment you need a SQL Server license on that machine; that's always been the case. But certainly, if you pull SSIS projects that are pre-2012, or even if you're running 2012 or 2014 projects in package deployment, pull those into the Visual Studio designer. You do the upgrade right there in Solution Explorer, you convert to the project deployment model, and then you can right-click the project and deploy it right over to the catalog. Yeah, you'll see performance improvements. It's not just the logging, but the logging is probably the largest one. Probably the next largest performance boost you'll see is that they've built a different execution engine specifically for running SSIS packages that are executing in the catalog, so it runs a little faster, which isn't surprising, and when the catalog executes packages, it executes them in that engine. And I don't know if you've had this experience, but I have: I've built a version of something, and then come back a few years later and been asked to upgrade it, make a new version. And that second version is always better.
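For readers who haven't seen it, executing a catalog-deployed package can also be scripted in T-SQL with the documented SSISDB stored procedures, which is how scheduled jobs typically drive it. A minimal sketch; the folder, project and package names are placeholders:

```sql
DECLARE @execution_id BIGINT;

-- Create an execution for a package deployed to the catalog
EXEC SSISDB.catalog.create_execution
    @folder_name     = N'MyFolder',        -- placeholder folder
    @project_name    = N'MyProject',       -- placeholder project
    @package_name    = N'LoadStaging.dtsx',
    @use32bitruntime = 0,
    @execution_id    = @execution_id OUTPUT;

-- Optionally run synchronously so the call waits for the package to finish
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id,
    @object_type     = 50,                 -- 50 = execution-level parameter
    @parameter_name  = N'SYNCHRONIZED',
    @parameter_value = 1;

EXEC SSISDB.catalog.start_execution @execution_id;

-- Check the outcome (status 7 = succeeded)
SELECT status
FROM SSISDB.catalog.executions
WHERE execution_id = @execution_id;
```

The logging Andy describes lands automatically in the `catalog.operations`, `catalog.event_messages` and `catalog.executions` views for every execution started this way.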

Steve: Oh, sure. Yeah, you know all the mistakes and the things that you would do differently the second time around. And it sounds like that's what they've done: they've just done a better job that second time around.

Andy: I think so. There’s spots in it where it could improve. And that’s part of the definition of software development, right? Anytime you build a database or piece of software you can always improve it. And I’m sure, I know for a fact talking to some of the folks involved with the project they’ve looked back and said, “Gosh, yes, we should have done that, we should have done this”, but seriously kudos to them. It is a stable engine and it does what they built it to do, and it does it well I think.

Carlos: I am getting flashbacks to the 463 exam prep with all this talk of converting from package to project deployment.

Andy: Yeah, I mean it’s involved. But once you’ve done it a couple of time, Carlos, I don’t know if you had that experience but it’s really not hard. The wizards are, you know in my opinion the wizards are work of art. It’s well architected and it’s something that is really easy to, you know, the kind of fall out of a project plan. The importance of those conversion wizards or either upgrading projects or changing them, they’ll go back and forth. You can go to a project deployment and back to a package. Not only that but the deployment wizard that they built for deploying stuff into the catalog, again it’s another snappy piece of curd. Very, you know, there’s a lot of Enterprise features in there. And if nothing else, I’m really glad we’re talking about this because I don’t think people realized just how well some of that software works and how flexible, and well architected, well designed it is.

Carlos: Yeah, when we talk about the flexibility, I think that's the big benefit I've been able to see: the ability to change the environment parameters and then have it execute in different places without actually having to create all these copies of the project or those packages, and then have version issues and all that kind of stuff.

Andy: It’s definitely more aware of lifecycles then in any previous version. There are still some corners there when you start thinking about it in the lifecycle management perspective especially for Enterprise software. You know, some of it is just, some of it is not Microsoft’s fault or anybody’s fault. Really, it is things like the nature of XML. So you know that just any XML engine, any document object model that is semantically managed if position, relative position in a file for instance doesn’t really count. What counts is where it is in the tree. And so when you start doing things that you would expect in an application lifecycle management like compare between two versions of a package. You may have identical versions of a package functionally. But internally the text maybe moved around because it is XML. And from a semantic perspective it is identical but coming up with a good semantic compare is not easy.

Steve: Yeah, and I think that's one of the interesting challenges that I've come across in trying to do source control around SSIS packages: you commit to source control and then it tries to do a text-based diff, and you end up with a lot more diffs than what you might have expected because things shifted around in the XML.

Andy: It is very challenging. Yeah, and even the act of deploying an SSIS package to the catalog alters some of the values that are stored in there. Some of those are just insignificant things, like a last-deployed field that gets updated, but there are some other things that actually happen to the package XML itself. For instance, the package protection level is set to a value that, in 2012, 2014 and 2016, does not appear in the list of protection levels. In the old days there were six, all around sensitive information: don't save sensitive, encrypt sensitive with user key, encrypt sensitive with a password, encrypt all with a user key, encrypt all with a password, and then the sixth one, which isn't in the new stuff, was called server storage. That primarily existed so that you could set your packages to be encrypted on server storage, and it was only used when you deployed to the MSDB database, which you can still do as part of package deployment. But when they created project deployment, they made the SSIS Catalog and the SSISDB database. It uses encryption, and in order to create it you have to supply a password that drives the encryption. In order to restore it you also need that password, and we're going to talk about that in a minute. What happens is that the act of deploying changes whatever protection level you set in the development environment into the equivalent of server storage. But again, that's not in the list; you can't pick it as one of your options on your side, but the catalog will switch whichever of those five you pick to server storage, and it's using SQL Server's native encryption. And, you know, that's pretty rock solid. I know a lot of people have moved to that, and I was absolutely thrilled when SP1 came out.
At the time of this recording, we're doing this at the end of December 2016. Back in November at the Microsoft Connect conference, when they announced SQL Server 2016 SP1, they made encryption available across all of the editions, and so much more. It was really nice to see that.

Steve: Prior to that announcement, the TDE option, or transparent data encryption, was only available with Enterprise. Did that mean that you had to have Enterprise edition of SQL Server to use the SSIS Catalog in this encryption mode?

Andy: Yeah, that’s a great question, Steve, you did not. They had encryption, I don’t know how they did that but you had it in Standard. What happens when you create an SSIS Catalog is there’s a backup of the SSISDB database that shifts with SQL Server. It’s actually still there if you go look. Any place that you’ve checked the box for integration services in your installation if you go dig through program files into DTS directory. You’re going to find an SSISDB.bak file. And what you’re doing with the GUI that they supply in SSMS when you create a catalog you have to enable SQL CLR. It has to have that if it’s not already enabled because all of the execution and a lot of validation and other functionalities that occur in the catalog is actually happening down the .Net framework so there’s a lot of SQL CLR stored prox in there. So that part has to happen and you have to supply a password because you’re going to be using that encryption.

Steve: Ok, so that’s a good transition I think for us then into the topic of restoring an SSIS Catalog database. And I think that’s one of the things as a DBA a lot of people might be really familiar with normal backup and restore but they may have never done it with SSIS Catalog. What kind of thing do you need to know or you do differently when you’re doing that?

Andy: Well, there’s a great article out in the MSDN, and I think there’s a companion article over in TechNet. Both say basically the same things. So they walk you through this, this process, there’s in a lot of ways it’s like restoring any database that uses encryption. So whenever you’re using encryption you have certificates, and you have passwords and all of that sort of stuff. The catalog has all that as well. In addition to that, there are some users, some logins that you need to create to fully restore to operational status in SSIS Catalog. One of the things that I get calls, it’s a tragic call, is someone will have lost the database and they will have been backing it up for years, and SSISDB is out there and it’s all backed up. And they say, “I’m having trouble, I restored it.” You can actually restore it without the password you can do a standard restore. And it will come in there and a lot of your metadata will be press and you can get to it and read it. But when you start executing packages from that, or when you try to export a package for instance, or deploy a package to that catalog all of a sudden you’re getting this errors you never saw before. And what’s happened is the encryption is broken. And in order to make that work and work correctly you have to follow those instructions out there at the MSDN and TechNet sites. I wrote a blog post about it, and I know you guys sometimes include blog posts. I blog at SQLblog.com, and If you go to SQLblog.com and click on my name and then you can search for “restore SSIS Catalog”. I took it from the perspective of what it’s like when you try to deploy an SSIS package or project to a catalog that’s been restored incorrectly. So I started with those error messages and kind of work through, “This is the problem you have now and here’s how you can solve it.” I included my template script that I use all the time. I gave it away but it creates the users and the login you need. 
I put dummy passwords in for a bunch of stuff and you just go in and everywhere I mark places where you need to update it because I don’t you using the same passwords. But if you do that it restores at about 5 seconds. And you’ve got a, well, an empty one restores about 5 seconds let me say it that way. Your mileage will vary. The more stuff is in the catalog your mileage will vary but the key here is really understanding the importance like so much in our field. Like you see, like the password for the encryption. You’ll think I never need that again, right? Export the keys, export the encryption keys and you see these warnings all the time for years with Reporting Services. You know, all of these products that are great but you can really leave your Enterprise in a lurch if you haven’t done these best practices.
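The documented restore procedure Andy references boils down to a few T-SQL steps. A hedged sketch along the lines of the MSDN/TechNet guidance; the file path and passwords are placeholders, and the master key password must be the one supplied when the catalog was originally created:

```sql
-- 1. Restore the SSISDB database itself (path is a placeholder)
RESTORE DATABASE SSISDB
    FROM DISK = N'C:\Backups\SSISDB.bak'
    WITH REPLACE;
GO

-- 2. Recreate the login the catalog's maintenance job expects, if missing
IF NOT EXISTS (SELECT 1 FROM sys.server_principals
               WHERE name = N'##MS_SSISServerCleanupJobLogin##')
    CREATE LOGIN ##MS_SSISServerCleanupJobLogin##
        WITH PASSWORD = N'<strong password here>';
GO

-- 3. Re-establish the encryption hierarchy with the ORIGINAL catalog password
USE SSISDB;
OPEN MASTER KEY DECRYPTION BY PASSWORD = N'<original catalog password>';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
GO
```

Skipping step 3 is what produces the symptoms Andy describes: the restore itself succeeds and the metadata is readable, but package export, deployment and execution fail with decryption errors.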

Steve: I think it shows there's a difference between, I won't say a normal database, but maybe an application database, and one that is so tightly integrated with several of those components that it kind of requires its own special handling.

Andy: It really does. And, you know, for the longest time, and I'll still say this, in fact I said it today on the webinar: it's just a database, the SSISDB database. But I would hang an asterisk on that and say "with encryption", and then you kind of need to hang another asterisk that says "with a special set of users", you know, logins that happen to be hard-coded into some of the interactions between the database and the applications, especially those .NET Framework classes. And they're specific to each version of the catalog, by the way, so you can't mix and match too much. Yeah, you can get yourself into a world of hurt there just by skipping a step, and it's a very easy step to skip. And like I said, I've had the tragic calls from clients saying, "How do I hack this password out of here?" And I'm like, "I don't know." I send them to someone like you guys and say it's now a database issue. These guys can help; if anyone can help, they can. That's over my head.

Carlos: The links that you mentioned earlier will be available on our website for the show notes today at sqldatapartners.com/ssiscatalog. We’ll make sure to put all of those components there.

Andy: Awesome!

Steve: Now, one more thing that you mentioned when we were talking before we started recording the show was your Data Integration Lifecycle Management, or DILM, Suite work that you've been doing. What exactly is that and who would want to use it?

Andy: Well, it’s great timing, Steve, to bring that up at this point in the conversation because an option to remembering all of your old passwords and stuff that you may or may not have available is to export this metadata. And the SSIS Catalog has a slew of metadata associated with it. Carlos mentioned earlier about environment variables and parameter values. You can store all of these configurations, metadata in the catalog itself. And as Carlos said, if you setup a lifecycle where say, you’re moving from dev, to test, to hopefully QA, and then production, something like that. I know years ago when I worked at we ended up setting up six of these tiers all the way through. And, you know, the more the merrier. It’s actually a good way to do it. I imagine you can get crazy with the dozen or something but. I always say, “You need a minimum of three tiers in your environment.” And you want to run as close to production in each one those as it makes sense to. And different people are going to define what makes sense differently. Here’s what 10 years of experience with SSIS has shown me makes sense. I want three environments at a minimum because I want to be able to build it in one environment. And I’m actually going to deploy it to the catalog in that environment and I’m going to run local unit test and integration test there. I’m going to look for functionality. I want to make sure that the rows are moving and any kind of transformation is happening as I want. Then I want to deploy that to something other than production and here’s why. When I build it I’m going to design it with all of these defaults configured. I’m going to have default connection strings pointing to my either to my local laptop, or a virtual machine, or some development server that I’m login to a client. I’m going to have everything pointed natively to that. 
If I deploy it straight from there to production and I have forgotten to externalize something, maybe a connection string or variable value, a path. Then I’m going to figure that out when I try to run it in production and that’s called testing. And we don’t want to test in production. Certainly, any application but definitely not databases. So want to send it to some third location that’s not production, that’s not where you developed. Anyone who tests it there makes sure that it does what you wanted to do. So when you start making that transition there are really five things you need to think about in an SSIS Catalog.  There are the folders because there’s a logical separation of objects in there. All projects have to be in a folder. All environments have to be in a folder. We just talk about environment variables. There are projects and packages of course and those are really managed well by IS pack files. Wrap up the deployment in a nice little container. You double click it, it starts that wizard. I was going on about it earlier, I like that wizard. Those are managed well, that’s number two. So we’ve got folders and projects. And projects and packages are mostly together. The third thing is those environments that Carlos mentioned earlier. They have a collection of environment variables inside of them, that’s number three. Number four is really some catalog specific overrides. In Catalog they are called literals. And the fifth thing is really a reference, and this is how you get environments and projects and parameters to talking to each other. So think about projects and packages, they both can have parameters in them. And then over here we’ve got environments. And you can’t see me but I’m doing hand signs for this. Maybe I should turn my camera on. Like a reference is really a bridge between this project and this environment. 
And then beneath that you can like you have environments and environment variables you have these little attributes that I call reference mappings. And what they do is they connect the environment variable to the project parameter or the package parameter. So if you could think about the hierarchies kind of going on there you can, if you’re talking all the way down to say a package you can have something that’s kind of linked through about six or seven layers.

Microsoft covers migrating the project and the package well; the folders, not so much. And as you get down to the environments and the references, the environment variables and the reference mappings, the story from a lifecycle management perspective just sort of falls apart. To get to the environments you have to double-click an environment, or right-click and choose Properties. You may have a couple of dozen environment variables in there, with data types configured and values configured, and some of them may be encrypted; there's a sensitive bit you can set for every value individually, so you can choose what's encrypted and what's not. And when you open that up after you've configured it, it's different from when you first configured it. Well, the DILM Suite: when I started working on that, I was actually working on something else, frameworks. I've built frameworks forever, but I was working on an interface for my framework, a GUI, and I realized after about three weeks: you can't move this metadata in the catalog. There is no clean way to do it. I needed that working before I could do anything with frameworks, so I wrote a product called SSIS Catalog Compare. If you go to dilmsuite.com, there are about six projects on the page; that's the only one that I sell. The rest are either open source or just flat-out free. One of them is called Catalog Browser, and it's a tree view. It will show you everything that is associated with the SSIS projects deployed to a catalog: the reference mappings, the references, the environment variables with their values and data types, and the environments. If you've ever looked at what's in the node in SQL Server Management Studio, you can get to everything I show there, but you have to open different windows to see it all. This just presents it in one view. And that's one of the free tools out there called Catalog Browser.
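The environment, variable, reference and reference-mapping objects Andy walks through can all be created with the catalog's documented stored procedures, which is one way to script this metadata rather than clicking through SSMS. A hedged sketch; every folder, project, variable and parameter name below is a placeholder:

```sql
DECLARE @ref_id BIGINT;

-- An environment inside an existing folder (the logical container)
EXEC SSISDB.catalog.create_environment
    @folder_name = N'MyFolder', @environment_name = N'Test';

-- An environment variable; @sensitive = 1 would encrypt the value
EXEC SSISDB.catalog.create_environment_variable
    @folder_name = N'MyFolder', @environment_name = N'Test',
    @variable_name = N'SourceConnStr', @data_type = N'String',
    @sensitive = 0,
    @value = N'Data Source=TestServer;Initial Catalog=Staging;Integrated Security=SSPI;',
    @description = N'Source connection string for the Test tier';

-- The reference: the "bridge" between a project and an environment
-- ('R' = relative, i.e. the environment lives in the project's own folder)
EXEC SSISDB.catalog.create_environment_reference
    @folder_name = N'MyFolder', @project_name = N'MyProject',
    @environment_name = N'Test', @reference_type = 'R',
    @reference_id = @ref_id OUTPUT;

-- The reference mapping: bind a project parameter to the environment variable
EXEC SSISDB.catalog.set_object_parameter_value
    @object_type = 20,                  -- 20 = project-level parameter
    @folder_name = N'MyFolder', @project_name = N'MyProject',
    @parameter_name = N'SourceConnectionString',
    @parameter_value = N'SourceConnStr', -- name of the environment variable
    @value_type = 'R';                   -- 'R' = referenced
```

Passing `@value_type = 'V'` instead of `'R'` sets a literal value, the catalog-specific override Andy mentions.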

Steve: Ok, so the Catalog Browser and Catalog Compare those are then applications that are built using the DILM framework. Is that correct?  

Andy: Yeah, that’s a fair way to say it.

Steve: Ok, so if somebody wants to do something that was maybe different but wanted to access those same pieces they could download and use the DILM Suite. And that’s your community edition, right, so people could write code to do that. But if they wanted to jump in and use the Framework Browser or the Catalog Compare they could just download those or purchase those from you and use them as well.

Andy: They certainly could. What Catalog Compare does is manage that metadata. It's built to facilitate and support DevOps, anybody practicing any kind of lifecycle where they are promoting code. Stage control is a piece of this; source control is a piece of this, although I'm not touching source control. I'm just looking at what's in catalog 1 and catalog 2. And I'm not so much focused on the piece that Microsoft does a great job on, which is the projects and the packages. I'm focused on the rest of that stuff: the folders, the environments, the environment variables, the references, the reference mappings, because like I said, it looks like they got to about that spot and stopped, for whatever reason. Without some of the very clever and cool solutions that other people have developed out there, you either have to write your own clever and cool solution or you're doing a lot of copy and paste. But the new version is coming out, I'm hoping, some time in the next few weeks; I'm testing it now. It's going to include a command-line interface. Everything in the Data Integration Lifecycle Management Suite is built around this idea of achieving a piece of the puzzle so that we can do continuous integration with SSIS. That's really what I'm after. It's not going to have all of the pieces. I'm never going to touch testing, because frankly Pragmatic Works' LegiTest does a fantastic job of testing SSIS, and it's hooked right into the testing harness that comes with Visual Studio. John Welch and Brian Knight and the crew down there have done a bang-up job; I don't need to build that, it's already built. But what I'm hoping to do with this command-line interface, one of the commands is DCD, for Deploy Catalog Differences.
So Carlos, you may be developing SSIS packages on your laptop, and Steve, you may be developing on an integration server in the office, and what I want is the ability for you guys to run a script every night that says: compare this to the catalog on that integration server over there, and then just move whatever is different. Whatever I built today, since the last time the script ran, go ahead and deploy those changes to this integration server. Now we've got that piece of continuous integration going, right? We've got a target server where everything syncs up. So imagine that piece, and then imagine having some way to automatically execute SSIS packages, and then have tests go look at the results of those executions and provide feedback right into TFS, or whatever the testing framework can talk to in Visual Studio. And you've got a big piece of the picture now, right, for continuous integration.
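The "compare and deploy differences" idea can be roughed out with the catalog views: each deployed project carries a last_deployed_time, so a nightly job can find what changed since the last sync. A minimal sketch, assuming a linked server named TARGET pointing at the integration server (the linked server name is an assumption, and a real script would match on folder as well as project name):

```sql
-- Projects that are missing on the target, or newer on the source
SELECT sf.name AS folder_name,
       sp.name AS project_name,
       sp.last_deployed_time
FROM SSISDB.catalog.projects AS sp
JOIN SSISDB.catalog.folders  AS sf
  ON sf.folder_id = sp.folder_id
LEFT JOIN TARGET.SSISDB.catalog.projects AS tp
  ON tp.name = sp.name
WHERE tp.project_id IS NULL                           -- not on the target yet
   OR sp.last_deployed_time > tp.last_deployed_time;  -- redeployed since sync
```

Each project found this way could then be exported with catalog.get_project and redeployed on the target with catalog.deploy_project, though as Andy notes, the environments, references and mappings still need separate handling.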

Steve: Oh yeah. I can see how that would really save a lot of time in getting that continuous integration environment set up for your SSIS work.

Andy: That’s what we are hoping for. I kind of mentioned in passing you need to execute the packages. Well, if you’re looking at the dilmsuite.com website, that SSIS framework community edition is a free framework that will actually do that. You have to do a little more work than if you pay for the commercial edition of the framework or the enterprise edition but that’s what it’s there for.

Steve: Wow, good stuff.

Carlos: Very cool!

Andy: Thank you! Yeah, thank you, trying to solve that problem.

Carlos: You know, that’s right. So thanks for the conversation, Andy. Should we go ahead and do SQL Family?

Andy: Absolutely.

Steve: So Andy, how did you get first started with using SQL Server?

Andy: So it was in the 1990s, and I was doing manufacturing automation at the time. On one of my projects we were storing the outputs of tags, as they're called in Human Machine Interface software, in an Access database; I believe it was Access 2.0. To stress test it, I turned on all of the tags, over a thousand of them, set the software to record the value every second, and left for a holiday weekend. I came back and there was a 3GB MDB file on the drive, and when I tried to open it, it wouldn't open. I went to altavista.digital.com, typed in "Microsoft database", searched, and saw this thing called SQL Server. This was about 20 years ago, maybe a little more. Carlos, I don't know how long you've been around the Richmond area, but back then they used to have these computer and software sales and shows over at the Showplace in Mechanicsville. I went over there, and I'm going to say this because it's been 20 years and I think the statute of limitations is over: I purchased an NFR copy of SQL Server 6.5 for like $20. It was like a week after that that I took it down to the plant and we installed it, and I did the same test and it succeeded. I was able to open up Enterprise Manager, or maybe it was Query Analyzer, and run queries against the tables and get data out of it, even though it was 3GB of data. When they saw that it worked, that the promises were true, they bought a licensed copy. Actually, they bought 7.0, because it had just come out, I believe. We upgraded, and of course tested it again, and it ran. That's what got my toe in the water with SQL Server.

Steve: Interesting. I’m amazed on how often we hear that, started with Access it didn’t do what we wanted to do so we did something different from there and that’s how we got to SQL Server. It’s been a pretty common theme along the way with other guests as well.

Carlos: If you could change one thing about SQL Server what would it be?

Andy: It would iterate better. All software has corners, right? Everything has something it does really, really well and something it doesn't do so well. I do training; Tim Mitchell and I actually deliver the SSIS Immersion Events these days for SQLskills, and one of the questions I'll ask students is: are cursors bad? What do you think? Are cursors bad?

Carlos: It depends on the usage, right? Your mileage may vary.

Steve: I know, in my experience I’ve seen some cursors used very badly. And I’ve seen some cursors that have been used more appropriately and they’ve worked well. However, they’re not as efficient as I’d like to see them.

Andy: And that’s why I’m going with whole thing about iteration. It’s not just about cursors. Anytime you’re doing any kind of looping inside of the engine. Again, you know, Microsoft love it when I say this way, cursors aren’t bad, SQL Servers are bad at cursors. I feel that’s accurate and I get why the advice is out there, you know don’t use cursors. I had this experience about 10 years ago where or maybe it was more than that, but I was converting some Oracle code over from PL/SQL to SQL Server. And there were some cursors in it that were like three layers deep and they were nasty. And at that time, I knew as much as I do and I promise I’m not picking up on anybody because I’m still learning. But I looked at that and went, “This is just wrong. How could this ever perform?” And we had a test server set in there with it and I ran it and darn men it flew. You know, I rewrote it obviously. I think that may have been my first experience with, and the names of the tables flew out of my head just now, but it was Jeff Moden wrote about it. Back about that time I was looking for it. Some kind of tables, but I’m getting old I can’t remember what things are called. But basically you create a little lookup tables and you just get what you wanted they’re lightning fast.

Carlos: Tally tables.

Andy: Tally tables thank you. I can never remember what they are called. Yeah.

Steve: There we go.

Andy: Yeah, tally tables. So that was my first experience with those, but we had a solution that worked, and I checked that box and moved forward. That would be my answer to that question; I'd like to see it do that, as long as they can do it without breaking any of the other cool stuff, because SQL Server does an awful lot really well.
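For readers who haven't met them, a tally (or numbers) table replaces row-by-row looping with a set-based join, which is the technique Jeff Moden popularized. A minimal sketch; the row counts and the date-series example are just illustrative:

```sql
-- Build a 1,000-row tally table on the fly with cross joins
WITH ten(n) AS (
    SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS t(n)
),
tally(n) AS (
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM ten a CROSS JOIN ten b CROSS JOIN ten c   -- 10 x 10 x 10 rows
)
-- Example use: one row per day for the next 30 days, with no cursor or loop
SELECT DATEADD(DAY, n - 1, CAST(GETDATE() AS DATE)) AS day_of
FROM tally
WHERE n <= 30;
```

The same pattern drives string splitting, gap detection and other jobs that would otherwise be written as a WHILE loop or cursor.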

Carlos: That’s right.

Steve: So what is the best piece of career advice that you’ve ever received?

Andy: Oh gosh! I’ve received a lot of really good career advice. I’ll share what probably impressed me the most recently. Do what you are, and you know, I’ve been a big believer of doing what you’re passionate about. I modified that belief since listening to Grant Cardone and other speakers talk about this advice where they say, “Become passionate about what you’re doing.” And I agree with that more. I think I was very fortunate in that I’m able to make money doing what I’m passionate about, technology. And, you know, it kind of just fell into place for me. But there are a lot of folks out there who want to get into technology, they’re not particularly passionate about it and they come into the field kind of hesitant and resistant and afraid. And I’ve been tailoring most of my career advice these days to those people and saying, “You know, find a way to get passionate about this.” And I’ve been sharing some of the things that I like and I don’t like.

Carlos: Andy, our last question for you today is if you could have one superhero power what would it be and why do you want it?

Andy: Oh wow! That’s a toughy. I’m a Batman fan, ok, I’m 53 years old. My wife got a picture of me in a kiddie pool when I was about 18 months old with a Bat mobile toy, ok. I am a half century into Batman. I have always been a Batman fan, but the truth is Batman isn’t so much of a superhero. Well, I’ll say it this way, he doesn’t have like those powers like I think you’re asking about here. He just train really hard and he has that discipline to do all that and that drive him to his tragic, it did drive him to that, been a huge Batman fan. But I don’t think he qualifies as having a superpower so I’m going to say that first because first and foremost I’m a Batman fan.

Carlos: There you go.

Andy: I would want to fly. I think Superman's powers are cool but I would want to fly. You know, I like the idea, I like what Dr. Strange does, right, with folding time and being able to manipulate stuff. But as I'm saying that I'm thinking about what I just said about failing, and I would probably blow up the universe. Probably not a good choice after all. Maybe, you know, my third choice, I'll go with speed. I'm not a fast person at all in any aspect, mentally or physically. And I've always been a really slow runner and a poor athlete. It would be cool to run as fast as Flash. I'll stay with that one.

Steve: Nice I like that.

Carlos: Well, Andy, thanks so much for joining us in the program today.

Andy: Thank you guys! It’s been a great conversation. I hope it turns into a fun show for people listening. I really appreciate the opportunity. I think what you guys are doing is great. I’ve listened to several of your podcasts. I haven’t listened to all of them but it’s great to listen to people and learn more about what they’re thinking. And I love the SQL Family questions, those are awesome.

Steve: Alright, well thanks, Andy.

Andy: Thank you guys!

Episode 78: Conversion to SQL Server 2016

As database administrators, we will all upgrade our environments at some point; however, we don't normally have the opportunity to upgrade to the next version of SQL Server before it becomes available to everyone else.  In this week's episode of the podcast, Steve and I chat with Brian Carrig about the journey ChannelAdvisor took to implement SQL Server 2016 in their environment, what it was like working with the SQL CAT team, and how they go about making use of some of the new features.  Brian shares some of the struggles they were having along with how the 2016 version helped address these issues.

 Episode Quote

“SQL 2016 is probably the version of SQL Server that has had most attention paid to performance improvements in quite some time probably since the 2005 release. They, I believe, I won’t say they promise but it’s not atypical to get 20% performance gains just right out of the gate.”

Listen to Learn

  • What it was like to work with the SQLCAT team
  • The features channeladvisor was after, but also how they went about using them
  • How the in-memory options help with bursting
  • Why Brian had to create a Connect item and what it deals with

Brian on Twitter
BWIN in-memory OLTP whitepaper
Using memory optimized table variables
Memory Estimates for memory-optimized tables
Connect Item to store query store in a separate filegroup
SQL Server setup checklist in Github

About Brian Carrig

Brian Carrig is a Microsoft Certified Master of SQL Server and manages a team of talented DBAs at leading e-commerce cloud solutions provider ChannelAdvisor. In a previous life, Brian spent some time as an academic and holds a PhD in Computer Science. He is a native of Dublin, Ireland but now lives with his wife and two daughters in Cary, North Carolina.

Carlos: So welcome compañeros, this is Episode 78. I am Carlos L. Chacon.

Steve: And I am Steve Stedman. And today’s guest is Brian Carrig.

Carlos: Yeah, Brian is the DBA Manager over at ChannelAdvisor. He is currently living in Raleigh, North Carolina. But you'll notice an accent. I know that he is not from that area. It's great to have him and we'll be chatting with him a little bit about the 2016 features and some of the issues, or issues/challenges perhaps, experiences that they've had in rolling 2016 out. They're actually a preview customer, so they had access to it before it was released. And so we'll be going through that. A couple of things we wanted to talk about first, and that is I wanted to remind everyone that our SQL Server checklist is out on GitHub, and so if you're interested in contributing to that we'd love to have you. Again, that's an opportunity for you to contribute, give your ideas and two cents, and make it a little more useful for others as things continue to change. We'd love to get your experiences there as well.

Steve: Yup, and you know what I think is nice about that is if we get more input on that, we'll be able to put that checklist out maybe every couple of months or once a quarter, which is, here's the update with everyone's suggestions. And perhaps build one of the best SQL Server setup checklists out there.

Carlos: There you go. Yeah, it’s interesting. There are still lots of even though people aren’t necessarily installing SQL Server. It’s still something just to reference because as they’re going in and reviewing environments. It just kind of, “Ok, am I setup correctly?”

Steve: Absolutely, and I’ve found that, I mean a lot of those setup items get missed when a SQL Server initially gets built. People who set it up are not always the ones who know the right things to do for the SQL configuration.

Carlos: That’s right. Ok, so with that let’s go ahead and welcome Brian to the show. Brian, welcome!

Brian: Alright, thanks!

Steve: Yeah, Brian, good to have you on the show.

Brian: Yeah, good to be here.

Carlos: Yes, thanks for coming in. I know we’re all a little bit under the weather. The winter weather has kind of come and then it’s worst and so thanks for taking a little bit of time with us. So ultimately our conversation today is about your experience in migrating to SQL Server 2016, and what that experience was like and some of the challenges that you had? So I guess, first, set us up with that story, what ChannelAdvisor is doing? I guess why did you feel that need to upgrade to SQL Server 2016 before it was released?

Brian: Ok. Well, just to give some background, I guess not everybody will know who ChannelAdvisor are and what we do. So we're an e-commerce software solution provider. We've been a software solution provider since 2001, since probably before the name existed. What we do is we help retailers and brands to sell online, right. Marketplaces like eBay, Amazon, Walmart, Jet. And there are lots of different, there are actually hundreds, possibly thousands of these marketplaces throughout the globe. Basically, we help companies to sell across these marketplaces, manage their inventory and all that kind of stuff so that they don't have to worry about writing APIs for the various marketplaces. They can just interface with us and we will manage their inventory, manage their online sales, digital marketing, all of that. And so we power about, I think, 3,000 retailers.

Carlos: Oh wow!

Brian: Yeah, with some pretty big customers, Macy's, Samsung, Under Armour, Staples. Yeah, there are quite a lot of big names in there. Before I moved to the US, you mentioned my accent, when I go home to Ireland people say I've lost all my accent so. Apparently not.

Carlos: Yes. Isn’t that the case. My dad has the same way. My dad is from Costa Rica. He came here when he was 15, lived here for 30 some odd years. Goes back to Costa Rica now and people are like, “Is Spanish your first language?” He was like, “Yup.”

Brian:  Well, when I got home they say I sound like an American now so.

Carlos:  You’ve been Americanized.

Brian: Yes. So we recently just went through what we call Cyber Five, right at Thanksgiving, that whole weekend, Cyber Monday. And during that weekend we will, last year we were close to a quarter billion in sales. I won't say this year's because I believe the results are not actually official yet. I don't want to get in trouble with the SEC. But suffice it to say we do a lot of processing during that weekend, and that's kind of what drives our use of SQL 2016, or why we wanted to move to SQL 2016.

Carlos: So that’s your peak usage time?

Brian: Yes. Yes.

Steve: So then, given that that’s the peak usage time, I would assume that probably in July or August you’re really starting to ramp up for that. Is that about the right time for it?

Brian: Yeah. Generally, we're trying to get the infrastructure in place probably around July or August and kind of gearing up then for the load in the 4th quarter. And then based on just natural growth, we might see a bit of a dip in Q1 of 2017, say. And then we start to ramp up just from natural growth and we kind of get ahead of that. And SQL 2016 is probably the version of SQL Server that has had the most attention paid to performance improvements in quite some time, probably since the 2005 release. They, I believe, I won't say they promise, but it's not atypical to get 20% performance gains just right out of the gate. So that's what interested us. We're quite sensitive to cost even though we have quite a large SQL infrastructure, because we make money when our customers do. So if we just go crazy and expand and deploy a huge amount of infrastructure, we can impact our own profitability.

Carlos: Sure. Now, has any of that, I mean using cloud technologies or is it still On-Premise.

Brian: Pretty much everything we do is On-Premise. We do have a footprint in AWS. We have some customers that where we run SQL Server in AWS for because we have geographic restrictions on where their data can reside so we leverage AWS’s data center in Ireland actually to run their workload.

Carlos: Ok, got you. So you're an early adopter, before it was available. I guess, what was that process like?

Brian: It was fantastic. I can't compliment the SQL CAT team enough. It was a real eye-opening experience. I have had terrible experiences with Microsoft support prior to this, but this was another level entirely. So yeah, it was really enjoyable.

Carlos: Well, so you mentioned you were looking for a 20% increase, so that's what the marketing was telling you, right? So you get it in. You get the SQL Server 2016 version. How do you go about testing that and kicking the tires to see what it's going to do for your environment?

Brian: Right, that is difficult, right? Currently, we have 118 SQL Server instances in production. I think it's close to 3,000 databases, and our environment peaks at about 1.6 million transactions per second. It's really, really hard to get a dev environment that reflects that. But you know, we do what we can. In particular, we knew we had some problem areas, and one of them was In-Memory OLTP. We've been using that since SQL 2014. While the performance gains were incredible, the feature was kind of limited in terms of what data types and SQL syntax you could use. We also had a lot of stability issues with In-Memory OLTP, particularly around the checkpoint engine. So we had cases, sorry, go ahead.

Steve: Oh no, I'm just going to say, what type of issues did you run into with that? That's interesting.

Brian: Oh ok. I think I found the Connect item on this, or somebody did. It was fixed in a hotfix. But we had an issue where if you ran out of transaction log space, so basically your transaction log filled, even before autogrowth could kick in, the checkpoint thread for the In-Memory OLTP engine would die and fail to respond. So the only way you could get that process to restart was to take your database offline and bring it back online, which in highly transactional environments is problematic.

Steve: And given that the checkpoint files are really the only place that new data is going to exist, I could see how big of an issue that might be.

Brian: Yeah, so the first time it happened we did not notice, and the transaction log on the server was 600 or 700 gigabytes, and it did not get truncated until it filled because of this. We basically had a full transaction log that was 600 gigabytes in size. It ended up being quicker to restore the database from backup rather than first taking the database offline and having it restart and go through crash recovery.

Carlos: Oh wow!

Brian:  Yeah. That's a lot of log to roll forward.

Carlos: Yeah, exactly, a scenario you see every day. Now, was that on the 2016 environment or on 2014?

Brian: That was on the 2014 environment. They actually fixed it in the service pack for 2014 but I guess that was the point of which we were kind of looking around and saying, you know.

Carlos: Ok, we need something different.

Brian: Yeah, we heard that they had rewritten a lot of the engine for In-Memory OLTP for SQL 2016. So that really prompted us, apart from performance gains. That was another thing that kind of prompted us to go, “Hey, we’d really like to get into 2016 as quickly as possible.”

Carlos: Ok, so then, I guess, any other changes to the In-Memory objects, if you will? Or was it really just that you needed a new version?

Brian: There was the stability side but also availability of constraints. I think that using In-Memory OLTP has become a lot more viable now that constraints are available in terms of foreign keys, and default constraints, and unique constraints.

Carlos: Now, it reminds me, so the foreign keys, the In-Memory objects. Can I have a foreign key from an In-Memory table to a disk? What are they calling them? What do we call them, not In-Memory table?

Brian: Disk based tables.

Carlos: Disk based tables. Yeah.

Brian:  I actually do not know. I have not tried that.

Carlos:  But you’re talking about foreign keys between two In-Memory tables?

Brian:  Yes.

Carlos: Oh yeah, ok. I wasn’t sure. I didn’t think that was an option and I thought ok maybe that had changed. So ok, so then I guess can you tell how many In-Memory tables do you actually have?

Brian: Right now? I want to say 3 or 4. So it’s not extensive. It could be more. Kevin designed a lot of them so he’ll probably correct me after this confessional.

Carlos: Another excuse to get Kevin back on the podcast.

Brian: There you go. He can back and explain everywhere where I was wrong. I want to say 3 or 4 of the regular In-Memory tables that’s in the database. But what we’re using extensively now and I don’t know if enough people know about this is you can create a table type that is memory optimized and use this as a table variable. Either as a table variable in the procedure or, you know, if you use TVP you can pass it in as a TVP. We’ve seen some incredible performance gains from this. And it’s really simple to implement. I think I have given an example where we reduced the CPU consumed by a procedure by about 75%.

Carlos: So when you’re doing that at that way basically you’re kind of skipping TempDB for the type of jobs or the type of tables that would normally go into TempDB and you’re doing it as a memory only, In-Memory OLTP table, is that right?

Brian: That's right. It doesn't touch TempDB at all. So what we were finding after we deployed SQL 2016, and we got those kinds of performance gains that we've mentioned, is that, like always, you're just shifting the bottleneck somewhere else. So where we started to see some stress was contention in TempDB around reading and writing from the system tables. And the way we looked at reducing that was basically to take some load off of TempDB by using memory-optimized TVPs.
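For listeners who want to try the memory-optimized table types Brian describes, here is a minimal T-SQL sketch. All object names are hypothetical, and the database needs a memory-optimized filegroup before the type can be created:

```sql
-- Hypothetical example: a memory-optimized table type used as a
-- table variable, bypassing TempDB entirely.
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId BIGINT NOT NULL,
    INDEX ix_OrderId NONCLUSTERED HASH (OrderId) WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Used exactly like a regular table variable, or passed to a procedure as a TVP.
DECLARE @ids dbo.OrderIdList;
INSERT INTO @ids (OrderId) VALUES (1), (2), (3);

SELECT o.OrderId, o.OrderTotal
FROM dbo.Orders AS o
INNER JOIN @ids AS i ON i.OrderId = o.OrderId;
```

Because the variable lives entirely in memory and is short lived, it sidesteps the TempDB system-table contention discussed above.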

Carlos:  Always the big system. How do you take into account the memory considerations now? Because, I mean, obviously one of the things with memory-optimized objects is that you have to account for that memory now, which, you know, previous to that it was just, ah, we'll allocate, you know, 85% or whatever the percentage was of the memory to SQL Server and be done with it. Now we have to carve that up a little bit. Can you talk to us about how you decided to do that, or what your considerations were there?

Brian: Yeah, certainly. So if you read Books Online, they say for your fixed tables, right, not temporary objects, that you should allocate about 3x the size of the table. But really what it depends on is what your frequency of insertion and deletion from that table is. So basically, how much work does the background garbage collection process need to do, and how much overhead do you need to handle all the row versions, essentially. And typically, if you're looking at In-Memory you probably have a really high frequency of insert and delete and update type operations.

Carlos: Right, that’s why you’re using it, right?

Brian: Right, it wouldn't be a problem if your table wasn't heavily accessed; you wouldn't be looking at this. And also, you know, concurrency, because the more sessions that are doing this, the more versions will be held in memory. So we have found that in some cases you might need to allocate as much as 10x. Yeah, and we talked to a company called bwin.  These are like the pioneers of In-Memory OLTP. They've worked a lot with SQL CAT. They've got some great white papers out there about using In-Memory OLTP to really boost your performance and the amount of transactions that you can support, and they had similar experiences.

Carlos: I admit I don't have extensive experience with the In-Memory objects, but knowing that the recommendation is to plan for 3x and you're seeing 10x, are these the DMVs, or what are you using to monitor how much memory the tables are actually taking up?

Brian: So I use Resource Governor and there are also DMVs that will tell you how much memory it is using. And one of the tricks that you can do and I recommend doing this because in a lot of cases your big concern is that you will run out of memory and you will cause the server to crash. If you have just memory optimized objects in one database you can use Resource Governor to bind that database to a resource pool and fix the amount of memory that it can use. So if you’ve run out of memory that you’ve allocated to that database, queries will fail but your server will stay up.

Carlos: Will stay up.

Steve: That's a very creative approach there. I think oftentimes Resource Governor really gets downplayed a little bit as not being that useful. But I mean, that sounds like a great way to be using it.

Brian: I love Resource Governor. There’s not many people that use it. But we use it extensively here.

Carlos: Interesting, that might be another topic because I agree. It seems that Resource Governors kind of on the wayside but scenarios like that do make it very appealing.

Brian: Yeah, because we have a lot of different product teams that often use the same database infrastructure, we use Resource Governor sometimes without doing any actual limiting or governing of resources. What it does is it splits your plan cache. So if you happen to have teams that use procedures in different ways, they get their own plan cache, and you can reduce parameter sniffing issues.
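Brian's trick of binding a database to a resource pool uses `sys.sp_xtp_bind_db_resource_pool`. A rough sketch might look like this; the pool and database names are made up, and the binding only takes effect after the database cycles offline and back online:

```sql
-- Hypothetical sketch: cap the memory available to a database that
-- contains memory-optimized objects so the server itself stays up.
CREATE RESOURCE POOL InMemoryPool
    WITH (MIN_MEMORY_PERCENT = 40, MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Bind the database holding the memory-optimized objects to the pool.
EXEC sys.sp_xtp_bind_db_resource_pool
    @database_name = N'SalesDb',
    @pool_name     = N'InMemoryPool';

-- The binding takes effect the next time the database comes online.
ALTER DATABASE SalesDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesDb SET ONLINE;
```

If the pool's memory is exhausted, queries against that database fail, but, as Brian notes, the server stays up.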

Steve: Oh, interesting. So another question around the In-Memory OLTP and this is kind of the concern that I came across as I’ve been exploring it, is that and this probably comes from my background with database corruption and checkdb and things like that. But checkdb and checktable won’t check In-Memory OLTP tables. And I guess is that something that you’ve run across or have problems with?
Brian: We have not encountered any problems with that. Yeah, it's definitely a concern. I mean, the tables that we use for In-Memory OLTP can largely be thought of as staging tables, ETL processes. And obviously we extensively use the memory-optimized TVPs. And just to talk about, hopefully not to frighten some of your listeners too much, if you're just using the memory-optimized TVPs, so these are all temporary objects, you probably won't even notice the amount of memory that's being consumed by In-Memory OLTP because they're short lived.

Carlos: Yeah, exactly, it will be almost like Temp tables, right, you know, in a sense.

Brian: Yes. But no, we haven’t encountered any type problems there. We have encountered corruption on our disk based tables but nothing in the In-Memory OLTP space yet.

Steve: Well, ok, that’s good news on the In-Memory OLTP side.

Carlos: Yeah. So I guess that's maybe an interesting point that could potentially take us off track. But how, so corruption on disk happens, you know, I guess something gets switched there. As long as the database knows what records are there, I guess how could corruption occur there? Your memory would have to become corrupt; it would then be more of a hardware issue.

Steve: Yeah, you mean with the In-Memory tables, Carlos, right?

Carlos: With the In-Memory tables, yeah.

Steve: Yes, so I think the thing I've seen and experimented with a little bit is, if you have corruption in one of the checkpoint files, where all of the changes to that In-Memory OLTP table are tracked, you're not going to know it until you try and back it up, either with a full or differential backup.  And at that point your backup is going to fail, and it will tell you that there's a checksum error. And then you can't actually back up the database, and the only thing you can do at that point is go to a previous backup version of that database.

Carlos: Do you have anything else you want to add there, Brian?

Brian: I mean the checkpoint files are really just used for recovery purposes, right, and also for the backups. So in that scenario where you encountered at the backup has failed and you have some kind of storage corruption, presumably your actual data will be fine, right?

Steve: Well, if you're pulling that data from another table that is a disk-based table, yeah, it will be fine. But if it's going into an In-Memory OLTP table, that data may only exist in that checkpoint file until it gets backed up.

Brian: Oh, ok.

Carlos: Got you. Yeah, crazy.

Steve: Yup, so. It's definitely, I mean, In-Memory OLTP is definitely a huge performance gain there. And it sounds like that's what you're using it for, and you're definitely seeing those results.

Brian: Yes. I mean, I think our plans for 2017 definitely involve more In-Memory OLTP.

Carlos: But that’s not the only feature that you’re using in 2016?

Brian: No, another feature that we’re using extensively is Query Store.

Carlos: Right, and this made waves recently because they had enabled it by default in Azure SQL database.

Brian: Oh, that’s right. Yeah, that’s right.

Carlos: So they maybe, you know, who knows in the SP2 it might become available. This is a pure conjecture. This is not an announcement. But I could see that coming in future versions being enabled by default. But currently it’s not.

Steve: Yeah, I could see it, and even during the preview TAP program they were changing the defaults almost on a monthly basis. I don't know if anyone was paying a lot of attention to the CTPs, but almost each month they kind of got tweaked.

Carlos: Yeah, I know when I talked with Borko, you know, the PM about that. They we’re looking at those usage patterns. They wanted to get that feedback, see how people were using it and then being more responsive if you will.

Brian: Yeah, for us, because we have a lot of ad hoc queries, the switch to clean up queries that are not frequently used was important.

Carlos: Ok, so I guess we should review quickly what the Query Store option is. And from my perspective, right, it's basically tables inside the same database that are going to keep a record, or a history, of all the executions that have happened on your database for further analysis. Is that fair?

Brian: Yeah, I think that’s fair. I think Borko describes it as a flight recorder for your database. And I think that’s a really good description.

Carlos: Yes, that’s right.

Brian: So it keeps a record of all of the queries as they execute and then the plans that they use. So even though, probably the primary use case is that, right, looking for plan changes that are problematic. There’s a lot of rich data in there that you can use for other things as well.  Even cases where it might be queries that are aborted, or cancelled, or timed out for some reason that’s all data you can pull out of there.

Carlos: Right, I know, from the DMV perspective, you know, that you can only see either cumulative totals or the last execution counts. And so that scenario where, “Oh, this morning I was having a problem but now it seems fine”, you know, not much that you can do about it now.

Brian: Yeah, so we typically, prior to Query Store we would typically run Adam Machanic’s sp_whoisactive every minute and dump that to a table. So we would have some kind of history of what’s executing when a developer or a customer comes to us and says, “What, you know, there was an issue an hour ago or two hours ago. What happened?” Right, it’s really hard to tell. So that’s what we use that for and that Query Store is kind of like a more enhanced version of that.
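The "flight recorder" history Brian mentions can be pulled from the Query Store catalog views. A rough sketch of answering "what was slow two hours ago" might look like this:

```sql
-- Sketch: top queries by average duration over the last two hours.
SELECT TOP (20)
    q.query_id,
    qt.query_sql_text,
    rs.avg_duration,
    rs.count_executions
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS i
    ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE i.start_time >= DATEADD(HOUR, -2, SYSUTCDATETIME())
ORDER BY rs.avg_duration DESC;
```

Unlike the cumulative DMVs, the runtime stats here are bucketed into time intervals, which is what makes the "an hour ago it was slow" question answerable.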

Carlos: Ok, interesting. So, I mean, you talked about aborted queries, you know, that you’re starting to look at or I guess understand better. Other things that you’re getting out of there that maybe you haven’t thought of before?

Brian: We don't use plan guides extensively, but we certainly have areas where we use plan guides to force plans, and if anybody has worked with plan guides, I mean, Query Store is amazing, right.  You execute a procedure, you pass in two parameters, and your plan is forced. None of this messing around with huge blocks of XML or trying to capture the correct plan from the cache.

Carlos: Ok, so now you’re actually saying I can go back into Query Store. I can capture the plan. I have two plans, one is good, one is bad. And I basically say, “I want to use this one.” And that’s your plan guide?

Brian: Yeah. It is a force. It’s not, “Hey, would you mind using this one?” You’re forcing it to use this plan.

Carlos: Sure. Well, that's kind of an interesting option, and I say this only because I was just writing test prep questions and so I've been looking at force. Force is kind of an interesting option because any plan guide is really a force, right. You're telling it, "Hey, you need to use this," instead of what it thinks it should use.

Brian: Sure, but you’re not giving the optimizer a choice. So the optimizer is being told, “Here is your plan, execute that.”

Steve: And I think that’s a much more direct and guaranteed way of using plan guides rather than saying here’s the plan guide and here are some suggested hints to use.

Brian: Yeah, certainly what we have found is that we're often using hints to try and get to the plan that we want. As you say, this is more direct. Here, I know the plan I want, use it.

Carlos:  Right, right, right.

Brian: I mean that should be a pretty rare scenario. Usually the best thing to do is to leave it to the optimizer to figure out the best plan. But there are always those cases that you have to deal with.

Steve: Yeah. I've seen that where, like, if somebody adds a bunch of hints into a query that then makes the query behave poorly, you can use plan guides to override those bad hints that were given to the original query, and if you can pull that information from the Query Store, that would be extremely useful.

Brian: Yeah, and you could use it to address the same problem that plan guides are largely used to address. Cases where you can't edit the query to put hints in there, so you use plan guides to force the behavior that you want, you can use Query Store in the same way. So if you deal with a lot of ISVs and you've got issues with parameter sniffing, this could be a way to address it.
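The two-parameter procedure discussed here is `sp_query_store_force_plan`. The IDs below are hypothetical; in practice they come from the Query Store views or the built-in reports:

```sql
-- Find the candidate plans for a query (the query_id is hypothetical).
SELECT plan_id, query_id, is_forced_plan
FROM sys.query_store_plan
WHERE query_id = 42;

-- Force the plan you want...
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- ...and unforce it later if the data or workload changes.
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```

Compared with the XML-laden plan guide workflow Brian describes, this is the whole operation.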

Carlos: Very good. So the last feature we want to talk about is the column store indexes.

Brian: This is one you're definitely going to have Kevin back in for. He is our column store guru.

Carlos: Yes, so one thing I've always, I guess because the example that they frequently give for the column store indexes is like state. Right, and like, ok, I get it, state makes sense. I'm constantly grouping by that, I'm reporting by that. You know, things of that nature, so that makes it good for a column store index. But outside of that, am I just basically looking for common things that I am either grouping by or reporting on? And that's where I want to go for my column store indexes?

Brian: Possibly. I mean, one of the things that maybe gets overlooked a little bit is just how good the compression can be on column store. And a lot of your performance gains might be from that, so we had a case where we had a 4TB fact table that was reduced to about 600GB in size.

Carlos: Wow!

Brian: By making it, yeah, a clustered column store index. Also, you can generally reduce the number of nonclustered indexes that you need when you basically rearrange your table as a column store.

Carlos: And I guess that’s the new feature in 2016 is I can now create that as a clustered index whereas before I couldn’t.

Brian: Yeah, you can basically mix and match now, so you can have a clustered column store with nonclustered B-tree indexes. Or you can have, probably less common, a regular B-tree clustered index with nonclustered column store indexes on the table. Yeah, and one of the gains now as well with 2016 is batch mode execution for column store.

Carlos: Right, now batch mode meaning being able to update and write to.

Brian: Yes, it will execute in batches and you can get considerable performance gains there.
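The mix-and-match indexing Brian describes might look like this in SQL Server 2016; the table and column names are invented for illustration:

```sql
-- A fact table compressed as a clustered columnstore index, with a
-- nonclustered B-tree index layered on top for point lookups (new in 2016).
CREATE TABLE dbo.FactSales
(
    SaleId    BIGINT        NOT NULL,
    SaleDate  DATE          NOT NULL,
    StateCode CHAR(2)       NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL
);

CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;

CREATE UNIQUE NONCLUSTERED INDEX ix_FactSales_SaleId
    ON dbo.FactSales (SaleId);
```

The clustered columnstore provides the compression and batch mode scans for aggregation queries, while the B-tree index serves the occasional single-row lookup.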

Carlos: Right, very cool. Very cool! Well, awesome. It's always good to talk with other folks about why they're doing certain things and how they're doing them. You know, a lot of times even on our podcast we sometimes get kind of carried away. Well, I think we always try to dive into why we're doing that, but discussions like this help bring the problem a little bit more to the front, and then how we are going about solving it. Should we do SQL Family?

Steve: Yes, let's do SQL Family then. So one of the first questions there is keeping up on technology. How do you keep up with technology and all the changes that are happening, specifically around SQL Server?

Brian: That's kind of easy here, because we have so many people working here who are really into SQL Server. So everybody can be everybody else's filter. Everybody is kind of looking at blog posts and so forth. We have a very, very active user group in the Triangle area. So yeah, basically just conversations at work will usually keep you firmly in the loop.

Carlos: Yeah, it’s interesting. Brian, you’re actually the third person from ChannelAdvisor we had on the podcast. We’ve had Mark. Mark Wilkinson was actually the first, and then Kevin, and then now yourself. Yes, we run into your folks from time to time of course being on the East Coast.

Steve: Yeah, I think you’ve got like, maybe 10 more SQL professionals to go if you want to get the whole set.

Carlos: Only 10, there you go. How did you first get started with SQL Server?

Brian: In a previous life, I was a Linux administrator, mainly working with MySQL, and the company I worked for at that time took on a project. They didn't have any in-house developers, so they took on a project where the project was going to be developed in Delphi with a SQL Server backend, and nobody was around to manage it, so I did.

Carlos: Ok.

Brian: I started from there.

Carlos: Alright, and then if you could change one thing about SQL Server what would it be?

Brian: Apart from the licensing cost? I will say, TempDB, and I think that’s a major bottleneck for SQL Server still. I would definitely like the ability to allocate TempDB on a per database basis.

Steve: Yup. Alright, so a little bit of background there, Carlos. This question about changing one thing with SQL Server, where did that originally come from?

Carlos: Yes, we reached out to the audience and asked them what questions should we asking and Brian came up with that one. And it’s been interesting to get, it’s been a good one, right, lots of different thoughts there.

Steve: Give credit where credit is due.

Brian: Thank you! I only realized before the show that I didn’t necessarily have an answer myself.

Carlos:  Yeah, it’s much easier to ask the questions than it is to answer sometimes. What’s the best piece of career advice you’ve ever received?

Brian: Alright, well I’m going to credit the leader of the user group in Dublin for this piece of career advice, a guy named Bob Duffy. Some of you in the SQL community have probably heard of him. The advice was that I should sit the Microsoft Certified Master exam. When they announced that it was going to be closed, I kind of assumed my chance was gone. He was the one who really said, no, just go for it. And I kind of squeaked in at the end there.

Carlos: Wow, ok, very good.

Brian: Early days before it finished.

Steve: So if you can have one superhero power, what would it be and why would you want it?

Brian: Ok, I was going to say the amazing power to reduce SQL licensing costs. I don’t know if that counts. But actually, now that I think about it, I think flight would be good. I’d fly back over to Ireland and, you know, when you’re paying for that ticket for four people it kind of adds up. So if I could just fly everybody over with my superhero power, that would work out great for me.

Carlos: There you go.

Steve: Yup. And I will be looking for that new comic book of the new superhero that reduces the SQL Server licensing cost.

Carlos: Marvel, if you’re listening, it was our idea first. Ok, well Brian, thanks so much for being on the show today.

Brian: Thank you, happy to be on here.

Steve: Yup, thanks Brian. It’s been fun.

Episode 77: Reporting Improvements in SQL Server 2016

We invited Paul Turley on the show to chat with us about some of the new features in SQL Server Reporting Services, but were pleasantly surprised to have the conversation take a different direction.  Paul talks about some of the new training available for those interested in visualizations and why this community can be hard to define–and connect with.  There are so many visualization options, and even within Microsoft we have many products that overlap in functionality.  In addition to running through the pick list of new features, Paul gives some guidance on why you should choose certain products and what makes good use case scenarios for some of the new features.

With the new analytics features now available in SQL Server 2016 via SP1, I think there are going to be additional requirements for data professionals to provide better analytics, and this episode is a good starting point for how to frame those conversations.

Episode Quote

[There is] “this challenge where the industry is very fragmented. It’s moving in different directions because businesses are adapting technology in different ways.  The reason there are different tools [is] because there are different needs and disciplines.”

Listen to Learn

  • Why it can be hard to connect with others in the visualization community
  • Some of the new features in SQL Server 2016
  • Why choosing a visualization tool can be difficult
  • The initial architecture of reporting services
  • Good candidates for mobile reports
  • High level architecture for mobile reports
  • The new architecture of reporting services and support for multiple browsers

edX course on data visualizations
Paul’s book on Reporting Services in 2016
PASS Analytics Day

About Paul Turley

Paul Turley is the owner of Intelligent Business, LLC, a consultancy with dedicated Business Intelligence practitioners, visualization designers, mentors, instructors, project managers, and solution architects.  He is one of the leaders of the Portland SQL Server users group, and is both a Microsoft MVP and a Dun & Bradstreet MVP.
