Fastly Distinguished Engineer, Jana Iyengar, discusses QUIC — the more responsive, secure, and agile transport protocol set to replace TCP for the web. Hear how QUIC ensures that applications’ connections are confidential, keeps flexibility for the future in mind, and promises to provide better internet performance to poorly served parts of the world.
Thank you for being here, and I'm quite aware that I'm between you and your lunch. I believe... But presumably, this is interesting enough that it'll keep you in your seats. So welcome to the next 25 minutes of QUIC and HTTP/3 fun. And for those of you who've sort of heard about HTTP/3 and heard about QUIC for the past two, three years, four years maybe, wondering, "Why are we hearing about this again?" I'm here to say one thing which is: HTTP/3 is here.
So, is it time to celebrate yet? Does it mean we can open our champagne bottles and go home and call it all done? Well, not quite. We're almost there, and I'll walk through what exactly that means through the rest of this presentation. Before we go into the details, the innards of HTTP/3 and of QUIC, I want to start off by saying what we are doing about it at Fastly, because that's something that'll be of interest to you. And again, we've talked about this, you've heard about this from us, but we are doing a number of things. We are leading development of the protocols: the chair of the working group and myself, an editor of the working group, are both employees of Fastly. We have also built one of the leading implementations, and we are working right now on making it available to customers, real soon now.
So the people I'm talking about here are Mark Nottingham, who you may have seen in other spaces. And then there's Kazuho, who works on the QUIC implementation in H2O, which is called quicly. Because that's what we do, we add "-ly" to everything. And then there's me, but that's enough about us.
Let's jump right in. What is QUIC? How many people here know about QUIC? Can I see a show of hands? How many of you feel comfortable coming up and talking about QUIC?
That's fine. That's perfectly fine. So I'll just start by simply saying something you may already know: it's a new transport protocol. Just for what it's worth, about 50% of the room raised their hands for the first question, by my count. But QUIC is a new transport protocol. And the goal in building QUIC was to build a new transport underneath HTTP for the modern web and for today's internet.
What do I mean by today's internet? Well, you know how the internet was 30 years ago. It's definitely not that anymore. And again, as I walk through this, hopefully you'll understand how some of the features that we built into QUIC address modern needs. This is fundamentally UDP-based, and that's because UDP gets through most networks. If you want to build a new transport today, that's pretty much the only way you can do it. If any of you were around when SCTP was built, for example: it happened, but then never really got deployed. And the reason is that TCP and UDP are the only protocols that really work on the internet. If you want to build anything on the internet, you know that you've got to build it on top of one of those two things.
But as a result of building it on top of UDP, we have to build all of TCP's functions on top of it again. Because if you know anything about UDP, the one thing you might know is that the RFC is three pages long, and the code for it is about that long too. So it doesn't do very much. So where do you do all the transport stuff? Above it. And that's what we do in QUIC; we have to build all of this. But the good news is that this gives us an opportunity to build all of it from scratch, and better. And that's what we've tried to do.
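To make "rebuilding TCP's functions above UDP" concrete, here is a toy sketch of just one of those functions — packet numbering plus ACK-based loss detection. This is purely illustrative, not QUIC's actual recovery machinery (which is specified separately and is far more sophisticated); all names here are made up.

```python
# Toy sketch of one transport function that UDP lacks and QUIC must rebuild:
# packet numbering plus ACK-based loss detection. Illustrative only.

class ToySender:
    def __init__(self):
        self.next_pn = 0          # monotonically increasing packet number
        self.in_flight = {}       # packet number -> payload awaiting ACK

    def send(self, payload: bytes) -> int:
        pn = self.next_pn
        self.next_pn += 1
        self.in_flight[pn] = payload   # keep a copy until acknowledged
        return pn                      # would go into the packet header

    def on_ack(self, acked: set) -> list:
        """Remove acknowledged packets; declare much older unacked ones lost."""
        for pn in acked:
            self.in_flight.pop(pn, None)
        largest = max(acked)
        # Anything numbered well below the largest ACKed packet is presumed lost
        lost = sorted(pn for pn in self.in_flight if pn < largest - 2)
        return [self.in_flight.pop(pn) for pn in lost]

sender = ToySender()
for data in [b"a", b"b", b"c", b"d", b"e"]:
    sender.send(data)
# Packets 1..4 get acknowledged; packet 0 never does, so it's declared lost
print(sender.on_ack({1, 2, 3, 4}))  # [b'a']  -> retransmit this payload
```

The point is simply that sequence numbers, retransmission buffers, and loss heuristics all have to live above UDP, which is exactly where QUIC puts them.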
So one of the important things we've done is bake in encryption, because we said, "It's 2019, folks. We're not going to make the same mistakes again. We're going to bake in encryption." It's either encrypted transport or nothing. So all the data and metadata are protected, and we use TLS 1.3. As Patrick talked about, many of the features of TLS 1.3 are super important to us, and I'll talk about that in a moment.
So that's QUIC. So what's HTTP/3? Anybody here know what HTTP/3 is? Do you know what I'm talking about? HTTP/3? One person in the front. Some half hands. All right, well let's dive into it. It's HTTP over QUIC. There, now you know. How many people know what HTTP/3 is? Excellent. That's a quick learning crowd.
Okay, so this is very straightforward. It's basically feature-parity with HTTP/2. The goal here was not to build a protocol that did something fundamentally different from HTTP/2, it was like, "Let's just take HTTP/2, the features that we have there, and expose the same thing, but do it on top of QUIC."
Right? So we had to make some changes in the way that HTTP interacted with the transport below it, and some changes to the protocol itself relative to HTTP/2. So what we got looks slightly different from HTTP/2 in its framing and various things, but it's basically HTTP over QUIC. It's got feature parity, so from a customer's point of view, it shouldn't make a difference whether you are using HTTP/2 or HTTP/3; you won't see any differences. All the HTTP headers will be exactly the same. So you get the request multiplexing, header compression, push — all of the things that you know and love in HTTP/2 — with one exception: we're getting rid of priorities. Now, there's no Q&A session, so you're not going to get a chance to come and ask me, "Why are you removing priorities?"
It's not just in HTTP/3: we, the HTTP working group and the QUIC working group, basically decided that priorities needed to be redone. So removing priorities means we removed the existing priorities scheme, but we are going to redo it and build it back better and nicer. There's a common scheme being devised right now as I speak.
So, that's basically what those are. Why should you care? You're sitting here watching this, listening to this, and you might be wondering why you should care. Now I'm going to go into some of the value that HTTP/3 gives you. These are the top things that I would offer. The first one is low latency. If you don't care about low latency, then the complexity may not be worth it. But there are other things that you might care about: encrypted transport headers and resilient connections. I'll walk through each one of these in turn.
So first things first: low latency. You heard a little bit about this from Patrick when he talked about TLS 1.3 and the 0-RTT connections it gives you. So I'm going to walk through that just very quickly. Not in a lot of detail, says the man who just put up a slide with a lot of details. But without going into all of it, just look at the whole picture. You can see TCP + TLS 1.3 on the first connection to a website: there are about two round trips before we can start sending data. With QUIC, we get that within one round trip. So one round trip and then you can start sending data. The reason we are able to do this — and this is the important aspect of the whole thing — is that because we are building a new transport, encrypted at that, we made all of these happen together.
If you look at the TCP side, you'll see the TCP handshake happens first, then the TLS handshake happens, and then you can maybe do more things there. But the TCP handshake has to happen first. With QUIC, we built them together. We made it so that the QUIC and the TLS handshakes happen at the same time, which is why we were able to eliminate that serialization delay. Now, TCP has an extension that can do this, but it's super hard to get deployed on today's internet, and it's essentially not deployed. But with QUIC, you can have this. On a subsequent connection, TLS 1.3 again gives you a reduction, because it can resume the connection and you get a 1-RTT reduction, but you still have to wait for one round trip before you can start sending data. With QUIC, again because we've combined the TLS and QUIC handshakes, we were able to put data out in the first round trip itself, which means you wait for zero round trips before you start sending data.
This is what's called a 0-RTT handshake, meaning that you wait zero round trips for the handshake to be finished. So that's a low-latency handshake, and that's super useful when your webpage itself can be packed into packets that can be sent in two round trips — this is a 30% improvement in latency for your page. That's something to bear in mind. In addition to that, there's also head-of-line blocking in TLS over TCP, which you may be aware of, and QUIC fundamentally gets rid of it. I won't be going into the details there, but if you want, please come and ask me. I'm happy to answer it offline.
The encrypted transport headers are absolutely fundamental to QUIC. We made a decision early on — "we" being the working group — that we were going to encrypt as much of the header as possible. Because encryption and privacy are fundamental, not just to QUIC; they should be fundamental for anybody who's building anything today. And these connections are all protected from tampering and disruption. So to show you a little bit more color, a little more detail: that's your header stack for HTTP over TLS and TCP.
I'm going to start going through each one of these fields. See, I didn't even need to say that was a joke. The joke would be if I actually started going through each of these fields. With TLS, that is the part that's encrypted, and the gray is the part that's visible on the network — meaning it's not encrypted, but it's protected by the crypto, meaning it's tamper-proof. The headers that you still see up there are TCP headers, and those are visible on the network. Not only are they visible, they can be tampered with. And they are, in fact, tampered with. If you have a TCP connection going out to any server right now, chances are super high that what you're seeing in the headers and what the server is seeing in the headers are quite different.
Why does this matter? Do the TCP headers matter here? Aren't we protecting the important stuff? Your data, that's what's important, isn't it? Well, as it turns out, the TCP headers are super important as well. There's a paper from about two years ago — we won't walk through the entire paper, but there's a piece in the abstract I'll read out because it's worth reading: "We developed a system that can report the Netflix video being delivered by a TCP connection using only the information provided by TCP/IP headers." They identified the video you were watching off an encrypted stream, by looking only at the TCP headers. Again, metadata can be super leaky. There are reasons why they were able to do this with Netflix, but it's instructive as to why metadata ought to be encrypted as well.
So with QUIC, this is the header of a common packet — the kind of packet that's carried for most of the connection — and almost everything is encrypted. The things that are visible there, the source port, destination port, length, and checksum, I put there for parity, because those are the UDP header fields that are visible on the wire. We cannot do anything about them because we are sitting on top of UDP; we cannot encrypt the things that are below us. So, that's what you've got. We've encrypted all the transport state, all the crypto state, and all the application data. So, that's what you get with encrypted transport headers.
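To make the point concrete, those four visible UDP fields are just eight bytes, and anyone on the path can read them. A small sketch of parsing them (the datagram here is made up for illustration):

```python
# The plaintext an on-path observer gets from a QUIC packet, below QUIC's own
# mostly-encrypted header, is the 8-byte UDP header: source port, destination
# port, length, checksum. Sketch of unpacking those fields.

import struct

def parse_udp_header(datagram: bytes) -> dict:
    """Unpack the four 16-bit big-endian fields of a UDP header."""
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum}

# A made-up datagram: client ephemeral port 51000 -> server port 443,
# carrying a 1200-byte (encrypted) QUIC payload
datagram = struct.pack("!HHHH", 51000, 443, 8 + 1200, 0) + b"\x00" * 1200
print(parse_udp_header(datagram))
# {'src_port': 51000, 'dst_port': 443, 'length': 1208, 'checksum': 0}
```

Everything after those eight bytes is QUIC's business, and QUIC keeps nearly all of it encrypted.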
And then finally, your connections are much more resilient with QUIC. What do I mean by this? Well, at least two things. And importantly, two things. Let's talk about connection migration first. Let's assume that you are sitting at home on Wi-Fi, connected to Fastly — because that's what our servers look like — and you are, say, watching a video, or doing a live stream, or something like that.
And let's say that you're on the move now. This happens to you all the time; there's a reason this is called the parking lot problem. If anybody has not heard of the parking lot problem: it's when you are stepping out of your house or your office and going into the parking lot, trying to pull up Google Maps or something like that on your phone. And it sucks, because your phone won't let go of Wi-Fi, it won't switch over to cellular, and the Wi-Fi sucks — you're far enough away from your office or your home that the Wi-Fi signal is bad, but your phone does not switch over. You have to go and turn off Wi-Fi so that your phone goes over to cellular, and then you get the page that you're trying to load.
With QUIC, your connection moves with you. Even if your point of connectivity changes and your client IP address changes, you are able to keep the connection as you move. This is a pretty significant feature. Again, you can understand why this is something that may not have been useful 30 years ago — trust me, I worked on this 20 years ago and it didn't have a lot of uptake — but you can imagine why it's useful today. That's connection migration for you.
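A toy sketch of why migration works: QUIC identifies a connection by a connection ID carried in each packet, rather than by the (address, port) 4-tuple TCP uses, so a client whose address changes still maps to the same connection state. All addresses and values here are made up; this is not a real demultiplexer.

```python
# Toy demultiplexing sketch (not a real implementation). TCP keys connection
# state on the address 4-tuple, so a new client address breaks the mapping;
# QUIC keys it on a connection ID carried in every packet.

tcp_table = {}   # (client_ip, client_port, server_ip, server_port) -> state
quic_table = {}  # connection ID (bytes) -> state

# Client starts a "video stream" over Wi-Fi
wifi_addr = ("192.0.2.10", 51000)
cid = b"\xab\xcd\xef\x01"
tcp_table[(*wifi_addr, "203.0.113.5", 443)] = "video-state"
quic_table[cid] = "video-state"

# Client walks to the parking lot; the OS switches to cellular
cell_addr = ("198.51.100.7", 40123)

# TCP lookup keys on addresses -> miss; the connection must be re-established
print(tcp_table.get((*cell_addr, "203.0.113.5", 443)))  # None

# QUIC lookup keys on the connection ID in the packet -> same state survives
print(quic_table.get(cid))  # video-state
```

Real QUIC adds path validation and connection ID rotation on top of this, but the core idea is exactly this change of lookup key.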
In addition to that, there's better connectivity over poor networks. What do I mean by bad networks? Let's look at some early data from Google's deployment. This is, unfortunately, the only widespread deployment data that we have so far; it's from Google's early deployment of QUIC. But the version of QUIC that we are now working on should retain a lot of those performance benefits.
So, let's look at what this data shows. This graph here shows search latency and YouTube re-buffering, for desktop and mobile, as a bar graph of percent reduction in each of these things. Now, you want all of these things to be low — latency to be low, re-buffering to be low — which means you want the reduction to be high, which means higher on this chart is better. This is data for Google's early deployment split by geo, by country, and what you see here is that there's some reduction for sure — a 10% reduction, which is outstanding, by the way, but I'll leave it at that. If you've been to South Korea, you know the network connectivity there is insanely good. You get a cheap hotel but incredible connectivity in the room, right? That's just how it is there.
When you come to the U.S., where connectivity is still not awesome — you have about a 1% loss rate on average and 40 to 50 millisecond round-trip times — you get more reduction.
It's when you get to a place like India, where round-trip times are 150 to 200 milliseconds on average and you're looking at loss rates upwards of 5% in general, that you really see QUIC shine through significantly. And this is what I mean by bad networks. It's not just in these geos; this is just illustrative, right? You can have poor Wi-Fi in New York City, and that's a bad network. And QUIC helps in those types of situations.
So why should you care? Those are the three reasons that you ought to care — that was just going through which of those features should be most important to you if you're thinking about QUIC. Now, why do we care?
Why are we interested in building this? Well, again, I have three reasons. First one: because you care. Because it's good for you; we want to build it so that it becomes available for you to use, and your pages load better. But in addition to that, it gives us tremendous flexibility. It gives us control and agility, because this is built in user space. QUIC, as I told you, is built on top of UDP, which means it's much easier for us to do development work and rev things in user space instead of doing it inside the kernel. So if we want to build new features, it's a lot easier for us to do it in user space, and that offers us control and agility.
It allows us to do more interesting things with the protocol. We can explore various architectural directions much more easily with a protocol that's more extensible and easier to rev. And because we have our own implementation and control it, we are able to make these revisions more easily. It also gives us deployment agility, and that's, again, fundamental to QUIC itself. It gives us versions, which means we can deploy, for example, a version of QUIC that's local to our data centers and does something special within our data centers. So that really allows us to explore those dimensions quite easily. I won't go into greasing here, and I won't go into why encryption helps in this particular case, but those are all important aspects of why we are able to be agile with deployment of new features in QUIC.
And we are almost there. We really are almost there. You may have heard this before, but I'm telling you for sure: we're almost there. I may be here again next year saying the same thing, but we're almost there. Most importantly, I say this because browsers have now started testing. You can enable QUIC — and when I say QUIC, I mean the version of QUIC that the IETF is standardizing — in Chrome and Edge right now. It's only available in Chrome Canary, and I'll talk about that in a moment. And there are several implementations out there right now. A large number of companies are building their own implementations, and we meet every month and a half or two months to get together and work on interoperating our implementations with each other. In fact, next week I'm heading over to Singapore for the IETF meeting there, and we are going to spend two days with all of these implementers from all of these companies, working on making our implementations talk to each other well.
So we keep doing that, we continue doing that, and we are going to keep doing that. Again, the IETF specifications are in progress, and we expect that the RFCs will be shipped in 2020, so keep your eyes peeled for that. We really are almost there, but there are still some things that are not done. If you know shipping products, you know that until the product has shipped, it's basically not shipped. That's what you've got: an unshipped product is what we've got right now. We are almost there, but it's not yet shipped; it's not yet finalized. And to be honest, it's an incredibly complex project to build a low-latency, encrypted transport from scratch. The fact that we were able to do it in such a short period of time is quite remarkable, I would say. But it's a very complex task, and we've encountered a large number of very difficult and complex protocol engineering problems along the way. And we still have to see how well we'll be able to scale this: operating system kernels and hardware are not yet optimized for UDP and large-bandwidth QUIC.
But we are going full steam ahead. There are many vendors working on improving UDP for QUIC, and we — we being Fastly, at least — are experimenting with QUIC offload and various other things that we can be doing here. So there's work to be done here, of course, but we hope to have a launch very soon. We are very close to getting into beta. We are in invitation-only beta now, so if you're interested, please come talk to me. I'm happy to take down your information or give you an email address to write to.
And I am going to now switch to something slightly different. If you want to test HTTP/3 connectivity, going out of this room, you can do it now. And I'm going to do the demo now. Switching over. Let's see how well this works.
So I'm going to go to Chrome here, and there we go. There's still no HTTP/3, and the page says, "HTTP/3 is not being used. Can you try reloading?" And we say, "Well, I'm trying to reload, but nothing is happening here." But that's because this is just plain, old, default Chrome. And Chrome right now doesn't support HTTP/3 by default. And this is fine. You have to get Chrome Canary, which looks like this little yellow-golden thing here, and you have to enable it with some command-line options. This is why I said we're almost there: remember, with a little work, we will be there.
So this is for testing only right now. Really, that's the whole point. But the fact that browsers are there, that we are able to test with real browsers, is super useful. So we enable QUIC and say, "Set the QUIC version to HTTP/3, draft version 24," because that's the current draft version we are running. And if I do that, and I go here now to http3.is, I get... Ah... But that's all right.
No, it's not the Wi-Fi. The Wi-Fi doesn't control my QUIC, my HTTP version, thankfully. Hopefully. But this is because of discovery. The browser has to learn that the server supports HTTP/3. And it does that by first just speaking HTTP over TCP, and the server goes, "By the way, I speak HTTP/3 too!" And then if I reload, it goes... Yeah!
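The "by the way, I speak HTTP/3 too" advertisement is the Alt-Svc response header (RFC 7838). A minimal sketch of emitting and parsing one — `h3-24` is the draft-24 token from the demo; the helper names and the parsing regex are my own simplification, not a full RFC-compliant parser:

```python
# HTTP/3 discovery happens over the first TCP connection: the server sends an
# Alt-Svc response header advertising QUIC support, and the browser can switch
# to HTTP/3 on a later request. Minimal sketch; not a full RFC 7838 parser.

import re

def advertise_h3(port: int = 443, max_age: int = 86400) -> str:
    """Build an Alt-Svc header value advertising HTTP/3 draft 24."""
    return f'h3-24=":{port}"; ma={max_age}'

def parse_alt_svc(value: str) -> dict:
    """Extract protocol-id -> alternative-authority pairs from the value."""
    return {m.group(1): m.group(2)
            for m in re.finditer(r'([\w.-]+)="([^"]*)"', value)}

header = advertise_h3()
print(header)                 # h3-24=":443"; ma=86400
print(parse_alt_svc(header))  # {'h3-24': ':443'}
```

That second request after the reload is exactly this mechanism at work: the first response carried the advertisement, and the reload used it.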
So this is a page which, after seeing this animation, you may not want to use HTTP/3. But who doesn't love rainbows? You've got to. And that's about all from me. Please come talk to me if you have any questions or if you would like to be on our beta. I'm happy to chat. Thank you.