Episode 172: Power BI Monitoring

Carlos L Chacon
From a SQL Server perspective, we have many ways to identify problem areas. In this episode, we discuss ways to diagnose performance issues in Power BI. Microsoft is certainly making strides; however, the tools won’t feel as mature as they do in SQL Server. We review usage metrics, audit logs, the new Data Gateway monitoring tool, and the monitoring options in Power BI Premium. The Performance Analyzer had just been released, and while we don’t go into great detail, we’ll be sure to include it in the show notes.

Episode Quotes

“You can use the unified audit logs for Office 365 and you can see it’s basically kind of an event stream, and so there’s a lot of different events it can track. You have a lot of detail there, but now you either have to do a data dump with CSV or you’re looking at using PowerShell to pull this all out.”

“I’ve described this before as instead of buying bus tickets you’re buying a bus and you’re cramming as many people as you can in it and sometimes people get evicted because there’s not enough seats.”

“It still feels like a MacGyver kind of solution where you’ve got some duct tape and paper clips holding everything together, but it’s definitely improving.”

Listen to Learn

00:38     Intro to the topic
01:37     Compañero Shout-Outs
04:15     SQL Trail Announcements
05:17     SQL Data Partners Podcast t-shirt giveaway
06:27     SQL Server in the News
10:22     Usage metrics – useful or not so much?
12:51     Usage metrics at the report level – useful but maybe not cohesive
14:00     Audit logs – more detail, but you have to go outside of Power BI at this point
15:28     Power BI Usage Metric Solution template might help with some of the heavy lifting
16:21     New feature for Gateway that can help with monitoring
18:12     New utilization and monitoring feature for Power BI Premium
20:48     When would you want or need to switch to Power BI Premium?
22:24     Recap on the conversation and where Power BI monitoring is at this point
23:34     Closing Thoughts


Music for SQL Server in the News by Mansardian

*Untranscribed Introduction*

Carlos:             Compañeros! Welcome to another edition of the SQL Data Partners Podcast! My name is Carlos L Chacon, your host, and I am here today with Eugene Meidinger.

Eugene:           Howdy.

Carlos:             How are you, sir?

Eugene:           I am good. It is a beautiful day in Pennsylvania today. I’m enjoying it while I can.

Carlos:             Yes, very nice. And compañeros, we hope it’s a nice day where you are, too. If you’re in the northern hemisphere, you’re getting into summer, the southern hemisphere you’re getting a little cooler, but wherever you might be, we’re hoping you’re having a great day. So I’m reminiscing a little bit here, I don’t know if you can tell, but I am actually on a different mic today, and it’s the Blue Yeti that I used back in the past and anyway, I guess a longer story than I meant to go into. I switched my other mic, because I got an arm that would allow me to swing it on the desk. Well, I actually got a bigger desk and decided to pull the Blue Yeti back out because now I have space for it and I don’t have to put it away. Anyway, so, it’s good to be with you today, compañeros, thanks for tuning in. Our topic today is Power BI Monitoring. Spoiler, we’re not talking about database monitoring, so just to make that distinction, there. But before we get into that, we do have a couple of things we need to go over. First, we’ve got shout-outs. Shout-outs, compañeros, I have a shout-out to Chiedo John, and I’d like to give a shout-out to the RVATech group for the Gala they put on. Eugene was actually there. We had a good time. We had to put on a dress shirt and tie.

Eugene:           I had to wear a suit.

Carlos:             I know! You know, get all dressed up.

Eugene:           My wife demanded photos.

Carlos:             Which we were able to provide, luckily. Also a shout-out to John Sterrett for posting a picture on Twitter, got quite a bit of traction from, I believe it was the 2015 Summit.

Eugene:           That’s what it sounded like, yeah.

Carlos:             Yeah, Brent Ozar’s FreeCon. Man, I was glad that I was able to make that. That was a lot of fun and it was a little bit of a Who’s Who. I mean, at the time, of course we’re talking about four years now, right? Lots of people have made names for themselves, if you will, or made companies or gone on to do things or community tools, BIML heroes, MVPs, what have you.

Eugene:           2015 I think, is the year that I missed. I feel like I started in 2013, whatever year Charlotte was and then I did the year after, and then we had a merger and so we put a pause on training and so I didn’t get to do ’15. I did ’16 and then by the time ’17 rolled around I was presenting at Summit and I was working part-time, so I was paying my own way. So yeah, I think that’s the year I missed, unfortunately.

Carlos:             Yes, fun times. So unofficial poll, anybody from PASS who’s listening, I can’t tell you how many times, how many people, again, from the East Coast, “oh yeah, my first time was Charlotte.” I’m in that boat as well. I know we always hear, “oh, these length of times and contracts and blahblahblah”. Put it on the calendar, PASS, 5 years. Make it on the East Coast. This can’t be that complicated. Well, maybe it is. But again, “oh yeah, my first time was Charlotte.” I hear that a lot from our East Coast friends. So, get out to the East Coast! Not that we don’t love you, West Coast.

Eugene:           Is there some hand-sign we need to use? I’m not caught up on that.

Carlos:             Oh, you’re not? You’re not flaring the hand-sign right now?

Eugene:           West side, east side, no.

Carlos:             Oh man. See, I guess you’re on the west side of Pennsylvania, right? Maybe that’s the–

Eugene:           That’s true.

Carlos:             That’s the trouble.

Eugene:           Pittsburgh versus Philly. It’s a fierce rivalry, for sure.

Carlos:             Okay, so compañeros, I do have a bit of SQL Trail news as well. We have been announcing our date for a little bit later in October. We’ve had venue problems, and I don’t know, fortunately or unfortunately, we’ve had to change our date, which is kind of a pain. Luckily, we hadn’t had too much information put out yet. So our new date is October the 2nd through the 4th. Again, it’ll be a half day on the 2nd, we’ll start in the afternoon and then the 3rd and 4th, so more information will be coming. The good news there, compañeros, if you’re keeping score is that we’re going to have Eugene there.

Eugene:           Yeah, I’ll be available now, so I had other obligations the same week, but now that the venue’s changed, I will be there. I’m not quite sure yet what I’m presenting on, but–

Carlos:             We’ll figure that out.

Eugene:           We’ll figure it out.

Carlos:             That week that we chose is a busy week. I mean, from even some of the other conferences, like the health care stuff that we do, everybody’s trying to have their conference that week. And so I feel like it was almost a little bit of a blessing in disguise, if you will, that they pulled the rug out from us, because lots of stuff. October, we knew October’s always busy. And so on that note, as we’re recording this, we talked about our t-shirt giveaway and it’s only been less than 24 hours, but compañeros, I am excited that you guys are chiming in, you guys and gals. Everyone is chiming in and so I’m going to go ahead and we’re going to pull a winner for this week, Eugene.

Eugene:           Oh, nice.

Carlos:             So less than 24 hours, so we don’t have as many as I’m hoping, but right now, we have 13. So I need a number, Eugene, between 1 and 13. No pressure.

Eugene:           No pressure, okay. Alright, well, everyone wants to avoid the edges, so I’m going to be contrary and I’m going to say one. Just number one.

Carlos:             Number one!

Eugene:           Number one, uno.

Carlos:             Here we go. Well, then this person will be very happy that they got in very quickly. So our very first tweet on #sqltrail, Francisco Tapia, you are the winner of the first SQL Data Partners t-shirt giveaway! So we’ll get with you and we’ll send you that shirt.

Eugene:           Nice.

Carlos:             We hope you enjoy it. Okay, so now for a little SQL Server in the News. Now, I have to laugh a little bit at this, because I had heard an announcement as we were trying to figure out what news we were going to share, Eugene, you tell me, “hey, have you heard about SQL Database Serverless?” And I’m like, “Sa– ah– serverless? Uh, no, I haven’t heard about serverless. I’m pretty sure SQL Server needs a server to run.”

Eugene:           Right? At that point it’s just SQL, I guess, or I don’t know, I think it’s like describing Uber as ‘carless’ or AirBnb as ‘houseless’.

Carlos:             Yes, that’s right, that’s right.

Eugene:           It’s just someone else’s, it’s still there.

Carlos:             It made me chuckle a little bit. I’m like, “serverless? That’s funny.” So what I did hear, SQL Database Serverless is the option to pay for Azure SQL Database by the compute, if you will. So ultimately what we’re talking about here is that if your database is not being used, it can, I don’t know if shut down is the right word?

Eugene:           Auto-pause, yeah.

Carlos:             Auto-pause, there we go, and then you won’t be charged. So instead of having to choose a scale, so the S series and P series, then you can choose this serverless option and then you won’t be billed when it’s not used. Now there are a couple of caveats, and we have to– we’re going to lean on our good friend Brent Ozar. So he’s saying that it actually takes 6 hours to pause.

Eugene:           Yeah, at the end of the documentation, you can see under Onboarding into the Serverless Compute Tier, the auto-pause delay, the minimum you can set it to is 6 hours and the maximum you can set it to is a week. And the default’s the 6 hours, so it’s not quite at no-brainer level for me, just yet. Cause I’d love to have something like this for like home-lab databases that maybe I only use whenever I’m doing a demo or something like that. That would be great, but the fact that it’s going to, “okay, I hit it for one hour for my demo, and then it’s still running 6 hours later” is just a, mmm, I’m not sure on that, yet.

Carlos:             Sure, that’s right. You accidentally connect to it–

Eugene:           Yeah, in Brent’s thing, he was saying like, “you don’t want to have this open to the internet, because if some rando is port scanning and hits your thing, alright, it’s spun up, now.”

Carlos:             Right, right.

Eugene:           There’s definitely some gotchas that I hope they work out, but like, it’s an interesting product idea, for sure.

Carlos:             Yeah, that’s right. So I think they’re looking more at dev environments, so things that can be “turned off” at night, things like that. So yeah, it’ll be interesting to see how this works out. And ultimately, what’s interesting to me about this is that as we continue to work with more and more customers, they normally want to know what the cost is going to be. Now, the question, ultimately is will this save any money over some of the lower tier services? So I suppose the nice thing is that in theory, you could get a few more cores or V-cores or DTUs, whatever, and then again, have it shut down in the evening, so you could in theory spend less money. But it’s also very then difficult to say, “well, what’s my price tag going to be? What’s the cost going to be?”

Eugene:           Yeah, I see this being used for like dev and QA databases that you’re not hitting all the time, especially at night. I think for something like production, where you want to be able to budget it out and treat it like an operating expense instead of a capital expense, almost, this doesn’t quite make sense.

Carlos:             Right, yeah. So it’ll be interesting. So as always, compañeros, if you’re using this feature or testing it out, we’d be very interested in hearing about it. I should say it’s in preview. Not generally available yet, but it’s in preview. Okay, so our show notes for today’s episode will be found at sqldatapartners.com/powerbi or at sqldatapartners.com/172. Okay, so our topic today is Monitoring, and I guess I should say, it’s not really monitoring, it’s Power BI monitoring. So from a SQL Server perspective, all of the other great tools that are out there, all of those things are still important, they still definitely play a role, that’s just not our topic today. And so there are a couple of pieces that we need to consider when we start talking about monitoring Power BI. And so maybe we just start first with the service?

Eugene:           Yeah, that makes the most sense to me.

Carlos:             So in the PowerBI.com, there are two tracks that we want to talk about and the first, since I have it here on my screen, is let’s talk about usage metrics.

Eugene:           Yeah, if you go into the admin portal for Power BI, you can see usage metrics is like the first option. And it’s okay. You can see your top users, you can see some high level stuff, but it’s basically a Power BI dashboard that somebody made, and part of the challenge is like there’s no way to drill down or get any more details, as far as I can tell. I mean, maybe there’s something I’m missing, but when I go into it and I click around, I don’t get anywhere. And so, it’s good to see that there’s activity. I don’t know, I see this as being useful for someone who’s like, “okay, we’re doing a pilot project for Power BI. Is it getting traction? Is it being used?” But beyond that, it’s like, it doesn’t do you much.

Carlos:             Right, that’s true. And I think the idea here, and they do talk about this on the documents is giving admins the ability to say, “okay, who’s publishing?” Number one.

Eugene:           That’s fair.

Carlos:             And then, I guess, utilization. I’m not seeing reads here, so this is not necessarily usage. This is more who’s publishing reports. And potentially, I can see– and again, there’s no export feature, so that is interesting as well.

Eugene:           Yeah, it’s just basically just a static dashboard, from everything I can tell.

Carlos:             That’s right, that’s right. So some challenges, for example, like I think you could reference this, but depending on how many licenses you have, this may be helpful, but probably wouldn’t help solve who has licenses I need to clean up, things like that. It’d still be a little bit manual, although at least you have a list to start with. Okay, and then the other piece is actually on the reports themselves. So here, we’re talking like you go into the report and then under Actions, we can see– which is interesting that they call it the same thing, usage metrics, but at the report level.

Eugene:           Yeah, so this gives you a lot more detail and one of the nice things, I found out about this from a blog post by Reza Rad, is that you can click on File and Save As and now you can edit that report and do whatever you want with it and it gives you a lot more granular detail. So I think this is really useful, but again, there’s not a great cohesive kind of combination, because now you’re looking at it for one report, and it would be nice to have a pre-built thing that lets you look at the usage overall.

Carlos:             Yeah, cumulative, right?

Eugene:           For your whole tenant, yeah.

Carlos:             Yeah, so Reza does mention that you can save this out, begin to manipulate it so it gives you a launching place. And so what we’re seeing here, we’ve got some usage pieces, but what happens when we actually want to take a look at when we tend to think of performance or how much compute and things that I’m using? And it can be a little tricky.

Eugene:           Yeah, I guess, before we get off of that, it’s worth mentioning the one other thing with the audit logs.

Carlos:             Oh yes, thank you.

Eugene:           So talking about usage, if you go into the admin portal, you can say, “okay, I want to go to the audit logs” and it’s going to say, “alright, well, go to the admin center.” And so you can use the unified audit logs for Office 365 and you can see, it’s basically kind of an event stream, and so there’s a lot of different events it can track. So it can track, “okay, somebody created a workspace, printed a report, created a gateway,” all of these sorts of things. So you have a lot of detail there, but now you either have to do a data dump with CSV or you’re looking at using PowerShell to pull this all out, so your options are not the best.

Carlos:             Yeah, you’re definitely outside of Power BI at this point.

Eugene:           Yes.

Carlos:             In the Office 365 admin. So yeah, but the nice thing is that it does also give you some additional pieces. So I guess, as I’m looking here at ours, it can tell you who’s looking at things like documents, giving access, this is going to be the Office 365 piece, so you will have to filter out just the Power BI pieces if that’s what you want.

Eugene:           Yeah, which is pretty easy to do, and it looks like there’s easily 80 different events. So they’ve got– I mean, it literally goes down to, there’s one here, ‘deleted Power BI comment’. So very, very granular, but you’re going to have to go the IKEA route and kind of build something yourself to work with that data.
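Building something yourself over that audit data might look like the following sketch: a minimal Python pass over a unified audit log CSV export that tallies just the Power BI operations. The column name (AuditData) and the JSON fields (Workload, Operation) reflect the Office 365 export format as we understand it, but verify them against your own dump before relying on this.

```python
import csv
import json
from collections import Counter
from io import StringIO

def summarize_power_bi_events(csv_text):
    """Count Power BI operations in a unified audit log CSV export.

    Each row's AuditData column holds a JSON payload; we keep only the
    rows whose Workload is PowerBI and tally them by Operation.
    Field names are based on the O365 export format -- check yours.
    """
    counts = Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        record = json.loads(row["AuditData"])
        if record.get("Workload") == "PowerBI":
            counts[record.get("Operation", "Unknown")] += 1
    return counts
```

From there you could pivot the filtered events by user or by day, or load them back into Power BI itself for a cumulative view.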

Carlos:             Sure. Now we should also mention here– so I have not used this, but there is the Power BI Usage Metric Solution Template. We’ll link to the video, of course, where Adam from Guy in a Cube talks about it. And ultimately, the one thing that I picked out of this is that it’s actually going another route and using the APIs, but I think it’ll be pulling from the audit log. But it then does give you a nice template here in Power BI, and it’s kind of done some of that heavy lifting there for you. One other thing to mention here is that the Power BI Usage Metric Solution Template is not free, and there is a cost associated with it.

Eugene:           Alright, so anyway, back to transitioning to performance and that sort of thing.

Carlos:             So one of the things that we tend to think of, well, this gets a little tricky, because again, the Power BI service ultimately sits in Azure, so the pieces that we’re going to be monitoring in our environment could include the Gateway.

Eugene:           Right.

Carlos:             And they have just released, in fact, it was earlier this month, that they released an updated way to analyze what’s going on in the Gateway. So there’s a new feature for monitoring on-premises Data Gateway.

Eugene:           Yeah, one of the things I really like about this is that you can see error messages. So now, there’s a great way to be able to go through and see, “oh, there was a timeout,” or there was some credential issue or something like that. So I think that’s really exciting.

Carlos:             Gotcha, so now this is actually four days ago, here, so this is cutting edge. This probably should have made SQL Server in the News today. But taking a peek, so this is ultimately the May 2019 update and then in there you’ll have the ability to pull this and it looks like you just connect to the Gateway and it will start giving you some of this information.
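Since the new gateway monitoring feature writes its query execution logs out as CSV files you can report over, a first pass at spotting trouble could be as simple as the sketch below, which surfaces the slowest logged queries. The column names used here (DataSource, DurationMs) are hypothetical placeholders; match them to the actual headers in your gateway’s log output.

```python
import csv
from io import StringIO

def slowest_queries(csv_text, top_n=5):
    """Return the top N slowest entries from a gateway query log CSV.

    Column names (DataSource, DurationMs) are hypothetical stand-ins
    for whatever headers your gateway's log files actually use.
    """
    rows = list(csv.DictReader(StringIO(csv_text)))
    rows.sort(key=lambda r: float(r["DurationMs"]), reverse=True)
    return rows[:top_n]
```

The same idea extends to error rows: once the log is in front of you as plain CSV, filtering for timeouts or credential failures is just another pass over the rows.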

Eugene:           Yeah, so there’s that. There’s monitoring with the Gateway, and then the other big monitoring piece that I know about is if you have Power BI Premium and being able to see utilization and performance that way.

Carlos:             Right, so before you jump into the Premium piece, I should say that the new monitoring feature replaces what were ultimately PerfMon templates. That was a little bit reminiscent of the PAL tool – to jump into database monitoring for a moment – which I still like because of the graphs it will export for users. Ultimately, they gave you the metrics you should be looking at on the Gateway, and then you had to do it that way. This is much nicer, because it’s going to capture those pieces for you and then you get to connect there. So I guess now we jump into the Premium piece, and ultimately we’re looking at, yeah, I’m paying for all this capacity, how much of it am I using?

Eugene:           Right, or do I need to scale up?

Carlos:             There you go. As if you’re not paying enough, you know, maybe we need to pay a little bit more. So let’s walk through this. How are we monitoring the Premium capabilities?

Eugene:           Yeah, so they’ve got a monitoring kind of Power BI app. It’s, again, similar to sort of a dashboard, and that gives you a decent amount of detail about how much usage is going on and what your utilization is, how many queries are going through. There’s a lot of metrics that are available through that. It used to be quite a bit worse, but now it’s at a decent level, cause if I’m going to be paying $5000 per month, I expect a lot of detail.

Carlos:             Okay, so this is actually then an app that gets installed?

Eugene:           Well, so I think it’s a Power BI app, if that makes sense.

Carlos:             Power BI app, yes, okay.

Eugene:           Yep, so I think it still lives in Power BI.

Carlos:             Yeah, so as we take a peek at this, so maybe some of the pieces that stand out to us ultimately are going to be from a size perspective, but ultimately, we were looking at the normal things. So memory, CPU, utilization, things like that.

Eugene:           Yeah, I mean, cause you think about it, Power BI Premium, you’re really just renting some virtual machines in an Azure data center. They may not describe it that way, but you’re paying for a certain number of cores and that’s what it kind of comes down to, but then they’ve got other information on “okay, do I have slow queries or data set evictions? How many data sets were kicked out because I didn’t have enough memory?” There’s a lot of granular detail that’s available now that was not before. So yeah, so you’ve got high level CPU, RAM, and then you’ve got more detailed pieces to it as well.

Carlos:             Yeah, so obviously I have not played with this, but that idea of data set eviction, we’re kind of dealing with or to parallel that to SQL Server memory, we’re trying to find out what pieces are getting thrown out that SQL Server can’t handle, and are we having memory pressure there? So that idea seems very familiar.

Eugene:           Yeah, yeah. Well, and the thing is, too, I’ve described this before as instead of buying bus tickets you’re buying a bus and you’re cramming as many people as you can in it, and sometimes people get evicted because there’s not enough seats. I don’t think there’s anything stopping you from just cramming 100, 500, 1000 data sets onto your Power BI Premium, and then it has to sort out which ones are actually being used and need to be kept in memory.

Carlos:             Gotcha. So to tangent here, slightly, on that. Do you have any guidance or thoughts? I mean, because these large organizations, I mean you can go to Premium for a number of reasons. Obviously, capacity is one of them, but users are also another one. So my user license, I hit the count, “$10/month doesn’t make sense anymore, let me just switch over to Premium, get some additional features.” So do you have any guidance as to how many data sets– I mean, I guess that’s a loaded question, but is it a size issue or a number of data sets issue?

Eugene:           Yeah, so that’s a good question. You get a certain amount of RAM with Power BI Premium. I forget how much, but honestly, SSAS Tabular, the VertiPaq engine, however you want to describe it, compresses really, really well. So if you’re dealing with so many data sets, or such large data sets, that stuff’s getting kicked out, you probably haven’t optimized your data model enough. I think the biggest thing, more so, is just the number of users rather than necessarily the number of data sets and that sort of thing. Another option to consider is maybe you end up with a hybrid approach where, “okay, you have a lot of your data living in SSAS and you’re doing live connections, and you’re just using Premium more so for the presentation layer and the user access layer and all of that.” So if it starts to be a size issue, there’s maybe a hybrid approach that you could take, but I think you’re going to be looking at Premium a lot more based on the number of users than the size of your data.

Carlos:             Sure, until you try to, you know, pump all of that warehouse stuff into it.

Eugene:           Yeah.

Carlos:             Okay, so there you go. I guess to recap, we’ve gone over usage metrics at the user level and at the report level, monitoring your on-premises Gateway, and then some ways to look at your Power BI Premium capacities. So there are some reports out there, and it sounds like, for the most part, they’re trying to give us some information. We’re still in the infancy stages as to what we’d like to see here.

Eugene:           Yeah, it still feels like a MacGyver kind of solution where you’ve got some duct tape and paper clips holding everything together, but it’s definitely improving.

Carlos:             Sure, sure. And I think, very similar to our devops conversation. This is an example where they’re trying to get it into the hands of users, provide all of this great functionality and then admins now have to be like, “oh gosh, now what do I do with this thing?” And they’re trying to give us some insight into how to manage it.

Eugene:           Well, you know, you think about they’re selling it to people in accounting, not necessarily the admins, and then later they’re like, “oh, I guess we need to provide some of these features.”

Carlos:             Yeah, yeah, “oh, by the way, they might be interested in that.”

Eugene:           Yeah, exactly.

Carlos:             Then when it gets to budget season, you’re like, “okay, how many reports are we running?”

Eugene:           Yeah.

Carlos:             Okay, awesome. Well, I think that is going to do it for today’s episode, compañeros. Congratulations again to Francisco for winning our t-shirt! And as always, if you want to reach to us on social media, Eugene, how can folks connect with you?

Eugene:           Yeah, you can find me @sqlgene on Twitter and you can find me at sqlgene.com for my blog.

Carlos:             And compañeros, I enjoy connecting with folks on LinkedIn, mostly because most people have LinkedIn and the other social media pieces aren’t as prevalent. But wherever that might be, I am @CarlosLChacon and we’ll see you on the SQL Trail.
