Hello, Compañeros! You’ve heard of Hekaton and in-memory OLTP before, but what does “in memory” really mean for your database? Perhaps you have the same questions I did: How does in-memory OLTP differ from disk-based tables? What data types aren’t allowed in in-memory tables? Is in-memory only an on-premises feature, or does Hekaton work in Azure too? Is there a minimum amount of memory required for in-memory tables? In Episode 34 of the SQL Data Partners Podcast, I talk about in-memory OLTP and in-memory tables with Jos de Bruijn. Jos is a Senior Program Manager at Microsoft for SQL Server In-Memory OLTP. I chat with Jos about how in-memory tables work and why they’re just as durable as traditional disk-bound tables; he lays out the whole process. Jos also answers common DBA concerns about them, discusses how to make them efficient, and reveals the SQL Server 2016 changes that affect in-memory OLTP. After that, we explore Jos’ favorite business tools and listen as he talks about what he’d do with a million dollars, what his favorite superhero power would be, and what advice he’d give to anyone in the SQL Family.
Episode 34 Topics:
- How in-memory tables differ from disk-bound tables
- How in-memory OLTP works with transaction locks
- Database recovery with in-memory tables
- New changes in SQL Server 2016: LOB types, foreign keys, unique constraints, subqueries… and more
- The differences between SQL Server 2014 and 2016…and why you want to look at 2016
- The “shock absorber” scenario and why in-memory tables work well
- Jos de Bruijn’s favorite tools in his role as a Program Manager at Microsoft
About Jos de Bruijn
Jos de Bruijn is a Senior Program Manager at Microsoft. Originally from Holland, he now works with Microsoft to further develop the Hekaton project. He worked in academia as a professor and researcher before joining Microsoft. His main focus is on query processing, programmability, and transaction semantics for memory optimized tables. Previous Microsoft Ignite sessions include Microsoft SQL Server Unplugged and In-Memory OLTP deep dives. Find him on Twitter at @jdebruijn.
Show Notes
- Microsoft OneNote
- Jos de Bruijn on Twitter
- In-Memory OLTP – what’s new in SQL2016 CTP3
- Speeding up Transactions with In-Memory OLTP in SQL Server 2016 and Azure SQL Database
- Hekaton in 1000 words by Kalen Delaney
- Memory-optimized table variables
- 2013 Microsoft PDF on Hekaton
- Redgate’s Free Book on In-Memory OLTP
- #SQLFamily on Twitter
*Untranscribed introductory portion*
Carlos: Jos, welcome to the program!
Jos: Thank you!
Carlos: Thanks for tuning in from Seattle. I guess it’s still a little bit rainy out there, as normal, but we’re glad you took time out to visit with us.
Jos: It is indeed rainy, but I’m happy to be here and happy to talk to the SQL Family.
Carlos: Yes, and we do appreciate that. So ultimately today we want to talk about the in-memory OLTP feature. Now, this feature has a couple of names that we need to pin down. I’ve seen in-memory OLTP, in-memory tables, memory-optimized tables, right? Or the new “go faster” button in SQL Server.
Jos: I like the go faster button, let’s go with that.
Carlos: All of these have been used to describe what we’re going to talk about here today.
Jos: Oh, you’re forgetting “Hekaton”.
Carlos: How could I have forgotten Hekaton?
Jos: Hekaton was the original code name of the project. Actually, it’s still the code name we use internally for development of the features that we talk about publicly.
Carlos: So that was one of my questions. So in 2016, you still refer to it as “Hekaton”?
Jos: At least internally, yes. We call it “Hekaton v2”.
Carlos: Okay, very cool.
Jos: It is basically enhancements to in-memory OLTP.
Carlos: Ah, so why don’t you take us through that? Kind of give us a tour of what in-memory OLTP is and how folks are using it?
Jos: There are a number of ways you can look at it. You can look at it purely from what features you get in your database. There are a number of things you get: you get the memory-optimized tables, as we’ve mentioned. You also have the memory-optimized table types, which not everybody always thinks about, and which can be used more broadly than memory-optimized tables.
Carlos: We’ll want to dig into that, because I thought that you created the memory-optimized database, and then you created the table using those partitions. It sounds like you’re talking about something even different than that.
Jos: So, okay, let’s take a step back. Table types are the structures behind table variables and table-valued parameters. These are programming constructs that you typically use with your stored procedures, often to send bigger row sets to the server. With table-valued parameters, you define a table type in your database. You can use a traditional table type or a memory-optimized table type. The memory-optimized table type lives in the memory space of the user database, it does not live in tempdb, and it uses the new memory-optimized data structures for more efficient data access.
Carlos: Okay, there we go.
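[Editor’s note: a minimal sketch of a memory-optimized table type and a table-valued parameter that uses it. The type name, column names, bucket count, and the dbo.OrderLines target table are hypothetical, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup (shown in the next sketch).]

```sql
-- Hypothetical names used for illustration.
-- A memory-optimized table type lives in the user database's memory space, not in tempdb.
CREATE TYPE dbo.OrderLineType AS TABLE
(
    OrderId   INT NOT NULL,
    ProductId INT NOT NULL,
    Quantity  INT NOT NULL,
    INDEX ix_OrderId NONCLUSTERED HASH (OrderId) WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Using the type as a table-valued parameter to send a batch of rows to the server in one call.
CREATE PROCEDURE dbo.usp_InsertOrderLines
    @Lines dbo.OrderLineType READONLY
AS
BEGIN
    INSERT INTO dbo.OrderLines (OrderId, ProductId, Quantity)
    SELECT OrderId, ProductId, Quantity
    FROM @Lines;
END;
```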
Jos: For the memory-optimized tables themselves, there is not really any partitioning that comes into play. Basically, memory-optimized tables are just user tables that, from a definition and metadata point of view, have the option – the flag – is_memory_optimized set. That’s from a metadata perspective.
From a storage perspective, these tables are stored in memory, so memory is the main storage location. And this memory space is carved out by the SQL Server process itself. The tables live fully in memory, so all of the user’s memory-optimized tables must fit in memory. But because we know that a table lives in memory, we could develop more efficient data access methods and more efficient storage methods for these tables.
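[Editor’s note: a minimal sketch of creating a memory-optimized table. The database name, file path, table name, and bucket count are made up for illustration.]

```sql
-- Hypothetical names used for illustration.
-- The database needs a memory-optimized filegroup to hold the on-disk checkpoint files.
ALTER DATABASE SalesDb ADD FILEGROUP SalesDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDb ADD FILE
    (NAME = N'SalesDb_mod', FILENAME = N'C:\Data\SalesDb_mod')
    TO FILEGROUP SalesDb_mod;
GO

-- The table itself is just a user table with the MEMORY_OPTIMIZED flag; it lives fully in memory.
CREATE TABLE dbo.ShoppingCart
(
    CartId     BIGINT    IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    UserId     INT       NOT NULL INDEX ix_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CreatedUtc DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```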
Carlos: So part of this is where we get into the latching and locking mechanisms that were previously used for disk-bound tables, if you will. We can do that differently with memory-optimized tables.
Jos: That’s right. So with memory-optimized tables, we first want to leverage the fact that they are just in memory, so we can have more efficient data access methods. But then the second aspect is, well, we know that there are locking and latching bottlenecks in highly concurrent database systems. If you have a high-throughput OLTP system, you have lots of transactions running at the same time. They try to access the same data structures, the same pages, they take latches, they start waiting for each other, you’re just spinning… not doing anything… you’re waiting…
Carlos: A queue starts…
Jos: You add more cores to your server and your performance goes down because of the latches! So, the core algorithms that we developed for in-memory OLTP, the core transaction processing algorithms, are all completely lock-free. The algorithms themselves were crafted with all the latest advances in high-performance computing and lock-free computing to develop truly lock-free algorithms. So there’s not even a single spin lock in the core transaction processing.
Carlos: Wow.
Jos: And on top of that, we use a combination of multi-version concurrency control and optimistic concurrency control. So with multi-versioning, instead of updating a row in place, we write a new row version.
Carlos: Just get a new row.
Jos: Readers still see the old row. And optimistic concurrency means we assume that in most cases two transactions won’t try to update that exact same row; it’s going to be rare. That’s the assumption behind optimistic concurrency.
Carlos: Right.
Jos: So you can just go through; you don’t need to take a lock. Of course, it can happen that there is a conflict. You detect it at that time, and one of the transactions will fail and will need to be retried.
Carlos: I see. It might be worth talking about how that differs from a disk-bound table, in the sense that when you write to it, the first thing that has to happen is the write to the log file, right? So once it writes to the log file it comes back and says, “Okay, now I’m in memory.” And then a checkpoint can happen, and then it can write to disk.
Jos: That’s right. So the logging works somewhat differently with memory-optimized tables. First off, we only write to the transaction log at transaction commit, so we don’t have to write undo log records.
Carlos: Okay.
Jos: With traditional tables, undo log records are maintained so that if you make updates to a data page and it gets flushed to disk, but the transaction eventually does not commit, you have the information in the transaction log to roll back the changes.
Carlos: Right.
Jos: With memory optimized tables, the data is all in memory. It doesn’t get flushed to disk. And we don’t write any artifacts of the transaction to disk until the transaction actually commits. So we make all the changes to memory, write the new row versions, mark the old versions as deleted, and at the end of the transaction we compile the log records and write that to the transaction log.
Carlos: Okay
Jos: So it only has the redo information. We never need to undo anything.
Carlos: Almost like a bulk operation.
Jos: Correct. In some sense, yes, you could see it like that. And then we write to the transaction log when the transaction commits, so there’s only one log record so it’s already more efficient. Second, the indexes of memory optimized tables only live in memory, so they are not logged. So there we also get some efficiency.
Carlos: Right.
Jos: We write to the transaction log and it’s flushed to disk, so you do have the usual durability guarantees for transactions on memory-optimized tables. The fact that memory is the main storage doesn’t mean that you have a problem if the server fails over. It is all fully logged. There is also a mode where you can say, “I don’t want to have this logged for certain tables,” but the default, which is what you want, is that it is fully logged for all your tables. So, at commit, we force-flush the log records to disk, and only then do we give control back to the user. So you have the exact same durability guarantees as you have with traditional disk-based tables.
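[Editor’s note: the logged and non-logged modes Jos describes are controlled per table with the DURABILITY option. A quick sketch with hypothetical table names:]

```sql
-- Hypothetical names used for illustration.
-- Default: schema and data are logged, so the rows survive a restart or failover.
CREATE TABLE dbo.Orders
(
    OrderId INT   NOT NULL PRIMARY KEY NONCLUSTERED,
    Amount  MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Opt-out mode: only the schema is persisted; the rows are gone after a restart.
CREATE TABLE dbo.StagingRows
(
    RowId   INT            NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(4000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```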
Carlos: Let me just make sure that I understand what’s going on. So, while the table lives in memory, it’s still going to write to the log. Then, behind the scenes there is a disk presence, if you will, of this in-memory table. So that in the event that the server gets turned off, or failover, or high availability, or clustering, that’s where it would read from and roll back any transactions… which I guess wouldn’t happen with any in-memory situation.
Jos: There’s no real undo there. I mean, you don’t replay the whole transaction. But yeah, you mentioned in detail that there is a persistent copy on disk. This persistent copy is maintained in what we call “checkpoint files”.
Carlos: Okay.
Jos: So, when a database checkpoint happens, a number of files on disk are designated as belonging to that database checkpoint. And when you run database recovery, you always pick the checkpoint as of which you are going to run the recovery.
Carlos: Right, there we go.
Jos: So you take that checkpoint, and you actually load the data from the data files that belong to it. All of that data goes into memory, as of that checkpoint, you…
Carlos: You’ll see that when the database starts you might see that in your error log. It’ll say, “Hey, I’m reapplying this checkpoint process.”
Jos: Yes. So I think that is an important thing to keep in mind. During database recovery, you do load all the data in memory from disk.
Carlos: Right
Jos: It depends very much on the configuration of the database, but very often you’re going to be IO bound. So, the faster your IO subsystem, the faster you can load the data into memory and the shorter the recovery time will be. If you’re doing basic server restarts or something like failover clustering, then you need to think about how this is going to affect your RTO.
Carlos: Now, with a traditional table, when you create the table, at least one page gets created to say, “Hey, here’s my table.” Does SQL Server default to, “You’re going to create this in memory, so I’m going to carve out a chunk of reserved space for this table”?
Jos: No, there’s no reserved space for tables. We reserve some space for indexes. For tables, there might be a small block of memory; I think we do reserve a very small portion of memory for the base data structures so we can actually start adding rows. But that is maybe 60 KB or so.
Carlos: Right, okay.
Jos: It’s not really a significant amount so there’s not really any pre-allocation of memory for tables that we do.
Carlos: So one of the big fears that I’ve heard from some of the other compañeros, the other data professionals, is, “Oh, great, the developers now have in-memory tables. They’re just going to slap everything into memory and we’re done.” So what are the considerations that organizations need to look at for in-memory tables? Good candidates are obviously tables with lots of transactions, but that often also means a very large table.
Jos: You do need to be careful from a management standpoint in terms of memory utilization, because the tables that you designate as memory-optimized must live completely in memory, so you must have enough memory available. In your capacity planning you normally plan capacity for disk storage; in this case you also plan capacity for memory.
Carlos: Okay.
Jos: You would have the same capacity considerations, basically, but typically you’re only using a subset of your tables. Now, we do give you some tools for memory management. What you can do is create a resource pool and bind your database to that resource pool. And you can define a maximum memory percent and also a minimum memory percent for a given resource pool.
Carlos: Okay.
Jos: So it is a best practice to bind your database to a resource pool and give it a certain maximum memory percent, so that if you run out you can temporarily increase it to get some headroom, and also to insulate other processes running on the server from your activity. Because memory-optimized tables hold on to their memory: it is your data, so we are never going to give it away unless a user does an explicit DELETE or DROP.
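[Editor’s note: a sketch of the resource pool binding Jos describes. The pool name, database name, and percentages are made up; sp_xtp_bind_db_resource_pool is the system procedure that binds a database to a pool.]

```sql
-- Hypothetical names and percentages used for illustration.
-- Cap how much server memory the database's memory-optimized tables can consume.
CREATE RESOURCE POOL Pool_SalesDb_InMemory
    WITH (MIN_MEMORY_PERCENT = 40, MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Bind the database to the pool.
EXEC sp_xtp_bind_db_resource_pool
    @database_name = N'SalesDb',
    @pool_name     = N'Pool_SalesDb_InMemory';

-- The binding takes effect once the database comes back online.
ALTER DATABASE SalesDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesDb SET ONLINE;
```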
Carlos: Exactly. So in that scenario, I’ve set up my resource pool, and let’s say for easy numbers, let’s say that I have a gig of memory that I want to allocate to this. Once I hit that threshold, do I just get failures on INSERT or what’s the behavior?
Jos: Yes, you’ll get failures on inserts. So you’ll need to have enough capacity, the same as if you run out of disk storage space. If your transaction log runs out of storage space, all DML will start failing.
Carlos: Right.
Jos: You need that available storage space.
Carlos: There was some old functionality, DBCC PINTABLE, back in SQL Server 2000, where you could take the pages and stick them in memory, right? If you didn’t have that resource pool, would you in theory run into that situation? Again, a smaller environment. Let’s just say that you have 16 GB of RAM and you create an in-memory table and you take up all that allocation. Your server just goes down, right?
Jos: It does not go down, because we will never take all 100 percent. We go up to a certain percentage, in this case of the default pool, but there is always some headroom for basic server operations.
Carlos: There you go.
Jos: And if things go really south, there’s always the dedicated admin connection. We always reserve memory for the DAC, so you can always get in there.
Carlos: Very good. That is the, “save me from myself”!
Jos: Yeah, yeah, exactly.
Carlos: Very good. So are there any data types that aren’t allowed in memory tables or restrictions from that perspective?
Jos: So, yeah, when thinking about restrictions, some of the folks may be familiar with the feature in SQL Server 2014, or may have heard of the many restrictions that were there: no foreign keys, row size limitations, restrictions on natively compiled stored procedures, no ALTER TABLE, and a whole bunch of other things. We did make some really big gains in the functionality that we now support with SQL Server 2016. We added support for LOB types, foreign keys, check constraints, unique constraints, big row sizes, ALTER TABLE, ALTER PROCEDURE, OR, UNION ALL, SELECT DISTINCT, subqueries, and a lot of other stuff.
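[Editor’s note: a couple of the SQL Server 2016 additions Jos mentions, sketched against the hypothetical dbo.ShoppingCart table from the earlier example.]

```sql
-- Hypothetical names used for illustration.
-- ALTER TABLE is supported on memory-optimized tables starting with SQL Server 2016
-- (the operation is offline).
ALTER TABLE dbo.ShoppingCart
    ADD Quantity INT NULL;

-- Indexes can now be added after the fact as well.
ALTER TABLE dbo.ShoppingCart
    ADD INDEX ix_CreatedUtc NONCLUSTERED (CreatedUtc);

-- Unique constraints are among the newly supported constraint types.
ALTER TABLE dbo.ShoppingCart
    ADD CONSTRAINT uq_ShoppingCart_User_Created UNIQUE NONCLUSTERED (UserId, CreatedUtc);
```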
Carlos: Okay, very cool! So basically, the version 2 in SQL Server 2016 is the, you know-
Jos: It is the proper ship vehicle for in-memory OLTP v2.
Carlos: Very cool.
Jos: Of course, there are a lot of other goodies in SQL Server 2016, but I think we made a huge leap in terms of the features that we now support. So I would encourage anyone listening who heard about the feature in 2014 and heard about all these limitations to take another look at 2016, because I feel it is really much more full-featured. I mean, there are still some pieces missing that we really need, for example sp_rename, online ALTER TABLE, and CASE support for native modules, but you will see that a lot of the gaps that existed before have been filled in the meantime.
Carlos: Yep. Now, we didn’t talk about this earlier, but do we think of in-memory as an on-premises feature? Meaning, so if I’m in Azure SQL database, is there the concept of in-memory tables?
Jos: Yes, right now in-memory OLTP is in public preview for Premium databases in Azure SQL Database. We also have the limitation that you need to create a new database, and it needs to be standalone, not in an elastic pool. But we will be removing these limitations over time, and we will be adding support for existing databases, for elastic pools, and for Standard databases as well. So over time we will be covering a bigger spectrum of databases, but you can start playing with it today.
Carlos: Right. So you mentioned some guidance there. Is there a minimum amount of memory that you’re seeing your customers say that they’re using before they try to engage with in-memory tables?
Jos: No. It really depends on the application pattern. We didn’t talk too much about application patterns, so maybe this would be a good point to talk about that.
Carlos: Let’s do that.
Jos: So there is the core, high-volume transaction processing, where you can think about order processing, stock market sales, or foreign currency exchange, which some of our customers are doing. That is where you have relatively big, high-volume tables; these are your core transaction tables and you put those in memory, so you need to size accordingly. But this is definitely a scenario very suitable for in-memory OLTP. If you think about financial instruments, or think about gaming, in these industries latency is very important, so you want low latency and predictable low latency. And we can make the latency very predictable because we don’t have locks.
Carlos: Sure.
Jos: Because with traditional tables, if there’s a spike in the workload you will get locks and latches and you will get this contention and then you will also have a spike in your latency. So maybe your latency is normally 10ms, all of a sudden it jumps up to 200ms.
Carlos: Right.
Jos: So there in-memory OLTP helps tremendously. There are also other scenarios with more limited use, like smaller tables. For example, say you have reference tables that contain metadata for your application and that are updated very frequently. Say you are a hoster and you want to maintain which user has which VMs on which of your physical machines in your datacenter. These things can be updated quite frequently and need to be accessed frequently and very fast, so you have relatively small tables but with high volume. So that’s something to think about.
Another use case we see very frequently is what we call the “shock absorber” scenario. So “shock absorber” means that you have a memory optimized table that is used for the data ingestion. So you have a highly concurrent INSERT workload. For example, think about sensors.
Carlos: Sure.
Jos: You have weather sensors, or sensors in your windmills, for example; your wind farms have sensors that insert into the table highly concurrently. The memory-optimized table can deal very well with highly concurrent inserts. And what you then do is periodic batch offloads to a traditional table, probably one with a clustered columnstore index, because you want the space savings and the fast analytics queries. This batch offload is usually much more efficient than the highly concurrent inserts. With this you can actually absorb the shocks: often you have a steady flow, but then all of a sudden there’s a big shock of data coming in, and you can absorb those shocks. Some of our customers are doing that successfully. And a few gigs is enough.
Carlos: Exactly. You can manage that much easier.
Jos: Yeah, there are customers that just have a stored procedure run by SQL Agent in a loop to continually offload. You keep your table really small while you are able to ingest a lot of data.
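[Editor’s note: a rough sketch of the “shock absorber” offload pattern described above, with hypothetical table and procedure names. The hot memory-optimized table absorbs the inserts, and an Agent job periodically moves older rows into a disk-based clustered columnstore table.]

```sql
-- Hypothetical names used for illustration.
CREATE PROCEDURE dbo.usp_OffloadSensorReadings
AS
BEGIN
    SET NOCOUNT ON;

    -- Only move rows older than a small grace window, so in-flight inserts are left alone.
    DECLARE @cutoff DATETIME2 = DATEADD(MINUTE, -5, SYSUTCDATETIME());

    BEGIN TRANSACTION;

    -- Batch insert into the disk-based columnstore table (cheap compared to trickle inserts).
    INSERT INTO dbo.SensorReadingsHistory (SensorId, ReadingUtc, ReadingValue)
    SELECT SensorId, ReadingUtc, ReadingValue
    FROM dbo.SensorReadingsHot WITH (SNAPSHOT)
    WHERE ReadingUtc < @cutoff;

    -- Then trim the memory-optimized table so it stays small.
    DELETE FROM dbo.SensorReadingsHot WITH (SNAPSHOT)
    WHERE ReadingUtc < @cutoff;

    COMMIT TRANSACTION;
END;
```

A SQL Agent job can simply call this procedure in a loop, as Jos describes.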
Carlos: Right.
Jos: Another scenario that I think is also very interesting: one customer, EdgeNet, has a reference table with data about items that are in shops. They maintain a table that is used by search engines like Bing and Google to find hardware items. You search for a hardware item and you quickly find the price, location, and stock quantity for the physical stores. So they have a query workload, and then they have batch updates that come from their customers. The customers send batches once in a while, which means that with each batch update they need to update the pricing and stock information for almost all the items in the table.
Carlos: Ah.
Jos: With disk-based tables, this would take forty minutes, and there would be a lot of lock contention with the read workload, so they had actually built a caching layer to allow querying to continue. With memory-optimized tables there is no contention. The updates run in four minutes instead of forty minutes, a 10x improvement, and they could remove the caching layer completely.
Carlos: Oh wow.
Jos: So, just remove the caching layer.
Carlos: Like a hybrid of the two, right? So you’re using both of them just to do different things. I update here and select from there.
Jos: Yeah, yeah. This can actually be very powerful, because it simplifies your architecture and your deployments: you remove servers. Another powerful example that I often use: another customer uses it for session state management, for the session state from the web servers. They scale out the web farm and the session state is maintained by the database. They have high-volume websites, and they used to have 18 SQL Servers, 18 physical servers running SQL Server, so they had to partition the session state manually. With in-memory OLTP, they consolidated that to one server.
Carlos: Wow.
Jos: So that is a tremendous saving that you can gain. Essentially, you can think about this technology as a way to get much more performance out of your existing hardware. By removing friction you can scale linearly with the number of cores, and by shortening the code paths through the memory-optimized algorithms and native compilation, you use fewer CPU instructions per transaction, so you can cram more transactions into a single second on a single core of the machine.
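[Editor’s note: the shortened code paths come partly from natively compiled stored procedures. A minimal sketch against a hypothetical memory-optimized session-state table:]

```sql
-- Hypothetical names used for illustration; assumes a memory-optimized dbo.SessionState table.
CREATE PROCEDURE dbo.usp_SaveSessionState
    @SessionId UNIQUEIDENTIFIER,
    @Payload   VARBINARY(8000)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Compiled to machine code at create time, so execution uses far fewer CPU instructions.
    DELETE FROM dbo.SessionState WHERE SessionId = @SessionId;

    INSERT INTO dbo.SessionState (SessionId, Payload, LastTouchedUtc)
    VALUES (@SessionId, @Payload, SYSUTCDATETIME());
END;
```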
Carlos: Now in that consolidation example, does that mean that they basically took all the memory from those 18 servers and tacked it into one?
Jos: Their memory footprint is not big.
Carlos: Interesting.
Jos: I don’t know if I could quote an actual number.
Carlos: But it isn’t so much that they took all this memory; it’s that the speed of the in-memory table behaves so differently that there are efficiencies there.
Jos: Yeah, yeah, yeah. It easily fits in a simple midrange server. It is not a high volume of data but a high volume of transactions in this case.
Carlos: Very cool! Well thanks for the overview of in-memory tables. Always good stuff, and I think we’ll make sure that people can reference some of the things you talk about in the Show Notes at SQLDataPartners.com/podcast. So Jos, before we let you go, let’s do SQL Family.
Jos: Okay.
Carlos: So here, we like to get to know you a little better, how you work, what makes you tick. One of the things I like to talk about is how other people work. So I want to know what your favorite tool is. Now, generally we talk about SQL Server tools, but as a Program Manager maybe you’re doing some different things. So what’s your favorite tool? Free tool, paid tool, why do you like it, and how does it help you?
Jos: Two things come to mind: Outlook and OneNote. I use them both extensively and they serve my needs. With OneNote specifically, I really like the flexibility that you have. You can put any content in there, add any attachments, very easily reorganize structures, and it’s very easy to share with others and collaborate in OneNote.
Carlos: Very cool. Yeah, about the collaboration. We’re actually looking at using OneNote as like a wiki type integration. We haven’t done the plugin to SharePoint yet, but I’ve been looking at that and it’s super nice to be able to have documentation to share and everybody knows where to go to.
Jos: Absolutely. It’s really helpful. I mean, in our team we use it a lot for reference information. It’s much easier to find things in OneNote than in a folder structure filled with Word documents.
Carlos: No question. The search feature alone is a huge reason to use it. So, you just inherited one million dollars. What are you going to do with it?
Jos: I’m afraid I’m going to give you a very boring answer. I would pay off my mortgage…
Carlos: There you go, smart man.
Jos: I would put a lot of it in savings, an investment fund or something like that. And one thing that I might do, that I have been thinking about is picking up sailing again. So maybe I might look into that, some sailing courses. And if I like that…I’m not sure if a million dollars is enough to really buy a boat and maintain it, maybe.
Carlos: Maybe a small sailboat.
Jos: Yeah.
Carlos: Very cool. So, originally you’re from Holland, right? You’ve come over to Microsoft, you’re in Redmond now, and you’ve been working with them. You’ve had some different experiences, so what’s the best piece of career advice you’ve ever received?
Jos: The best piece of career advice. Yeah, I cannot really pinpoint a specific piece of career advice that I’ve been given, but if you asked me for a piece of career advice: when you don’t like something you’re doing, stop doing it.
Carlos: There you go.
Jos: That is what I did. My previous career was in academia; I was doing research and lecturing, and there were some aspects of it I did not enjoy. So I decided, “Okay, now I need to do something completely different.”
Carlos: Very good, very good. Yeah no, that’s great advice. If you’re not happy about it, stop doing it. I like it. Our last question for this episode: if you could have one superhero power, what would it be and why do you want it?
Jos: What would it be? I don’t know, flying would be cool, but if I think about the practical implications of flying I’m not so sure I would be happy with it. If you go too high up, you get too cold. You can collide with airplanes.
Carlos: Yeah, you’d have to register with the FAA.
Jos: Yeah, yeah you bump into the Amazon drones.
Carlos: So, maybe flying’s not your thing. Is there anything else you can think of that you’d enjoy more?
Jos: No, nothing really comes to mind, sorry.
Carlos: Okay, very good. Well Jos, thanks so much for being on the show. It’s been great having you.
Jos: It was great to be here, thanks for having me.