
Credentials for the modern world: authentication with higher reliability, more privacy, and less risk | Altitude NYC 2019

Richard Barnes, Chief Security Architect for Collaboration at Cisco, discusses new and emerging technologies that improve authentication — making it easier to manage credentials, safer to deploy secure services in delegated environments, and possible to verify someone’s identity without unnecessarily revealing that identity to anyone else.


Hi. Good morning, everybody. Glad to be here this morning. Glad to be your first non-Fastly speaker, and your first speaker who probably knows very little about performance, but I do know a lot about paranoia, from a few perspectives. As Hooman mentioned, I spent time co-founding Let's Encrypt and working on browser security at Firefox. My job nowadays is at Cisco, working on things like WebEx, securing modern applications and making them work on the web.


I wanted to talk today about some emerging technologies for doing identity better on the web. When I say identity, I mean: how do we prove, when a browser connects to a website, that it's the real website, the authentic website? I'll start off here with a positive story. Good news: we've kind of done HTTPS. If you look back five years or so, when we were first starting to do Let's Encrypt back in 2014, HTTPS was not so much a thing. It was deployed on about a third of websites, about a third of requests. Nowadays we're up more toward 80 or 90 percent, so 100% is in sight.


It's time to step back and ask ourselves: okay, we've succeeded, or we're almost there. What are the next steps? Looking at HTTPS, what's not to love? What are the rough edges we could sand off to make HTTPS better, to make it more private, to make it faster? On faster: like I said, I don't know much about performance, but TLS performance is something we're pretty good at, especially with TLS 1.3. There are some details at my favorite website, if you want to dive into that. I'll mention some ways we're making it a bit faster with this new tech.


I'm going to spend most of my time talking about these next three bullets. We've made a lot of improvement in this manual provisioning process. How do you set up HTTPS? There's some mention this morning of some managed HTTPS services and Let's Encrypt integrations. I'll talk a bit about how that looks in scope of the rest of the web, the kind of the big internet picture. I'll also talk about some ways that we're working on making identity more private, protecting that server identity as it goes across the network so that your relationship to your users can be more private to the two of you, and we can lock out people who don't need to know about that.


And finally, I'll talk about limiting intermediary risk. This whole conference is about intermediation, about using other people to deliver your content, and one of the bad things about TLS right now is that you have to trust those people with your identity. You have to give them a certificate. I get complaints from CISOs all the time about this for our SaaS products: they have to give us a certificate with their name in it. So we've got some technology on the way that's going to help. It can't eliminate that risk, because you're paying them to act on your behalf, but we can at least bound the risk so it's a little less painful.


So let me start with automation. This is where we've made the most progress. Back in 2014, the bad old days, how would you set up HTTPS? Well, you'd send your admin, your expensive, highly qualified engineer, to spend some quality time with a web form. He'd do a bunch of manual stuff to prove that he owns the domain; he'd solve a captcha. After an afternoon of interacting with the certificate authority, you'd have a certificate, you'd figure out how to configure it into your web server, and by then the day's over, and maybe you have HTTPS that works. That certificate is going to be good for about a year, maybe two years if you're lucky: just enough time for you to have forgotten about it when it expires. So you have an outage, and then you have to do this whole manual dance again in the heat of the outage. It's really great. A lot of pain here to solve.


The first wave of improvements to address all that pain was a bunch of vendor APIs. You've heard much about Fastly APIs this morning; certificate authorities think the same way. They've got large customers they do a lot of business with, so they expose APIs those customers can use to automate things. These were initial efforts, and they had the obvious costs of vendor APIs: you had to write bespoke code for that specific CA. So if that CA had an outage, you had to have fallback code, and if you wanted to fail over to a different CA, you couldn't reuse the code.


But they were also incomplete. Back when these things were written, the identity verification (how do I verify that the person applying for a certificate actually owns the domain?) was largely manual, and that carried over into the APIs. So there'd be a manual interaction to set up the API and do the identity verification, and then, once I'd proven ownership manually, I could issue certificates with no problem through the API. So we had partial automation here.


Then the second wave of automation was Let's Encrypt, yet another vendor API. The innovation at Let's Encrypt was to go fully automatic. The only interaction is via the API; everything else is in a server room in a highly secure facility under a mountain somewhere. In comparison to the legacy APIs, the big advantage here is the full automation. It's what lets us offer Let's Encrypt for free, because we have low staffing needs. It automates everything, including that identity verification step, which means all you need to do to set up HTTPS, if you're using Let's Encrypt, is run some software. As long as that software has access to the right stuff to prove your identity and can talk to the CA, it can automatically configure everything to work just right.
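To make that "run some software" step concrete, here's a minimal Python sketch of one piece of ACME's automated identity verification: the HTTP-01 key authorization from RFC 8555, which the client publishes at a well-known URL on the domain so the CA can confirm control of it. The JWK values used below are placeholders; a real client such as Certbot manages the account key and serves this file for you.

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: SHA-256 over the canonical JSON of the required JWK members,
    # in lexicographic order with no whitespace.
    required = {"kty", "crv", "x", "y"} if jwk["kty"] == "EC" else {"kty", "n", "e"}
    canonical = json.dumps({k: jwk[k] for k in sorted(required)},
                           separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def key_authorization(token: str, account_jwk: dict) -> str:
    # RFC 8555 section 8.1: keyAuthorization = token || "." || thumbprint(accountKey)
    return f"{token}.{jwk_thumbprint(account_jwk)}"
```

The client serves this string at `http://<domain>/.well-known/acme-challenge/<token>`; when the CA fetches it and sees the expected value, domain control is verified and issuance proceeds with no human in the loop.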


What really made this scale up to the rest of the web is that we took this API that Let's Encrypt built, so it's no longer just a vendor API. We took it to the IETF and worked with a bunch of other certificate authorities and other users of certificates to develop this standard we call ACME, the Automated Certificate Management Environment, my great acronym triumph. So we developed a standard that suits the needs of a whole bunch of different players in this PKI ecosystem, and as a result you see deployment across a bunch of different hosting platforms, service providers, things like that. GitHub, I think, is done by Fastly. All of these have Let's Encrypt integrations or integrations with another ACME CA.


And as a result of that, we have a bunch of major CAs, a bunch of tools, a whole open-source ecosystem around this now. There's no more need for all that bespoke code tying you to a specific CA. You have full automation with multi-CA support, all with open-source tools, and you can fully automate your certificate handling. We've succeeded in going from the bad old days to a zero-click or one-click experience. It's not deployed everywhere, but I think this is the way the ecosystem is moving, and hopefully we can get up to that 100% number on HTTPS by automating this setup and making it work.


So automation: check, we did that. Next, keeping identity private. In security we talk a lot about this principle of least privilege: we should only expose information to someone who actually needs it. And we do a really bad job of that with identity on the web nowadays. Here's an example, a transcript of a browser connecting to a website, and I've highlighted in red all of the ways the server's name is exposed to people it doesn't need to be exposed to. When you do a DNS request (and I've got the arrow backward on the response), you're exposing the domain name you're talking to in both the request and the response. Your TLS ClientHello sends that domain name out again to say, here's the server I intend to talk to. And then the certificate that says "I represent this site" has that name again.


Only when you get to the actual HTTP request is that name finally encrypted. So the network, or any bad guys in the middle, has had a bunch of opportunities to observe who you're talking to, to manipulate who you might be talking to, and things like that. So how can we do better? And why are we concerned about this? I don't know if you live in the paranoid world; I do. This is kind of obvious to security folks: there are a lot of actors out there who want to do bad stuff with this. We have a lot of examples of people manipulating DNS results to misdirect traffic, of network providers monetizing things by developing user profiles in ways that might be user-hostile, or doing surveillance to discover dissidents visiting unsavory websites, what have you. That's what motivates this interest in having more privacy in this identity stuff.


So step one is to turn on TLS 1.3. TLS 1.3 encrypts the certificate, so that takes off one of those red lines. That instance of the name being exposed is gone, but also, any other domain names that server operates are now hidden from the network. So in this example, the server is also serving a second domain, and the network doesn't get to see that that's the case, because the certificate's encrypted.


The next step we can do here is to turn on HTTP/2. HTTP/2 enables coalescing of sessions. I said before that the server is also representing a second domain; if the browser wants to visit that domain, it'll do a DNS request, see that it resolves to the same server, and reuse that same TLS connection. Now, that means the second domain is still exposed in the DNS requests, and for that we have DNS over HTTPS, which Patrick McManus at Fastly has been helping build. With that, we run the DNS requests themselves over HTTPS. We make an HTTPS connection to a resolver we identify by IP address, OpenDNS or Quad9 or something like that, and we encrypt the DNS requests from the browser to the resolver, and from the resolver to the authoritative.
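As a sketch of what "DNS over HTTPS" means on the wire, the Python below builds a standard DNS query message (RFC 1035) of the kind a DoH client POSTs to a resolver with `Content-Type: application/dns-message` (RFC 8484). No network traffic is sent here; the resolver endpoint named in the usage note is illustrative.

```python
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a DNS wire-format query (RFC 1035) suitable as the body of a
    DoH POST (RFC 8484) with Content-Type: application/dns-message."""
    header = struct.pack(">HHHHHH",
                         0,        # ID: 0, as RFC 8484 recommends for HTTP caching
                         0x0100,   # flags: standard query, recursion desired
                         1,        # QDCOUNT: one question
                         0, 0, 0)  # ANCOUNT, NSCOUNT, ARCOUNT
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE (A=1), QCLASS=IN
    return header + question
```

A client would send this body in an HTTPS POST to something like `https://dns.example/dns-query` (a hypothetical endpoint), so the DNS question itself rides inside an encrypted TLS connection to the resolver instead of going out in cleartext.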


The entities that are involved in the resolution have to find out who you're asking about to make the system work, but now they're the only entities that find out. This is kind of where we are right now; this is deployed technology for the most part. TLS 1.3 is about a third of Firefox traffic, HTTP/2 is about two-thirds of Firefox traffic, and DoH is rolling out pretty quickly now: I think Firefox has deployed it to 1% of their users, and Chrome is running active experiments. If you happen to be in the nice intersection of users with a modern browser and servers that have turned on all of these technologies, then you're in this pretty good space where the only time a web server's identity is exposed is in that very first connection.


That's the ClientHello server name indication, where you say, here's the server I'd like to talk to, because I haven't talked to this server before. Any subsequent requests that are coalesced onto that connection using HTTP/2 are invisible to the network. They're only exposed to the DNS providers and to the web server itself, because they're the people who need to be involved. That's the principle of least privilege.
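The coalescing rule just described can be sketched as a simple check, a simplified version of the HTTP/2 connection-reuse condition (RFC 7540 section 9.1.1): reuse an existing TLS connection for a new hostname only if the server's certificate covers that name and DNS resolves it to the same address. Real browsers apply additional checks; this shows just the core idea.

```python
def can_coalesce(new_host: str, cert_sans: list, origin_ips: list,
                 new_host_ips: list) -> bool:
    """Sketch of HTTP/2 connection coalescing: the existing connection may be
    reused for new_host if the certificate covers the name and the DNS
    answers overlap with the connection's address."""
    def cert_covers(host: str) -> bool:
        for san in cert_sans:
            if san == host:
                return True
            # Wildcard matches exactly one leftmost label
            if san.startswith("*.") and host.split(".", 1)[-1] == san[2:]:
                return True
        return False

    shares_address = bool(set(origin_ips) & set(new_host_ips))
    return cert_covers(new_host) and shares_address
```

If both conditions hold, the browser skips the new TLS handshake entirely and the second hostname never appears on the wire outside the existing encrypted connection.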


But we can do better than that. One thing we can do is coalesce some more. HTTP/2 coalescing, as it exists today, is limited by that first certificate you present in the initial TLS handshake. But of course, the way people build CDNs, a lot of the time a server can speak for more domains than it's going to present in that initial certificate, and a lot of CDNs can speak for all of their customers from any given instance. So with this secondary certificates idea, if you know a user is going to browse to another domain that can be hosted from the same server, the server can say, in a certificate frame: by the way, I also represent this other domain, so if you're going to go there, you can reuse this TLS connection. So we get even more of this coalescing. And you'll notice there's no DNS request for that either. You really just keep everything inside that same encrypted tunnel.


Finally, there was that one last little bit of red exposed to the network: the server name indication in the ClientHello. This is a little bit more "futurey" tech that the TLS working group is working on. The idea here is that, in addition to the A record or AAAA record we request in the DNS, we request an HTTPS service record, which gives us some key material that the server has pre-published, so that I can encrypt stuff to that server the very first time I connect. With that key material I can take the server name indication, which was previously exposed to the network, and replace it with an encrypted value that only the right server can decrypt. And so now we've got everything fully encrypted all the way through.
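Here's a toy Python model of that idea, under loud assumptions: the real mechanism (TLS Encrypted Client Hello) publishes a public key in the DNS HTTPS record and uses HPKE public-key encryption, whereas this sketch stands in a shared symmetric key and a SHA-256 keystream purely to show the flow: fetch key material via DNS, encrypt the server name, and only the right server can recover it.

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy stream cipher (SHA-256 in counter mode) standing in for HPKE/AEAD.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt_sni(dns_published_key: bytes, server_name: str):
    """Client side: encrypt the would-be SNI using key material the server
    pre-published in its DNS HTTPS record (modeled here as a symmetric key)."""
    nonce = os.urandom(12)
    name = server_name.encode()
    ct = bytes(a ^ b for a, b in zip(name, _keystream(dns_published_key, nonce, len(name))))
    return nonce, ct

def decrypt_sni(dns_published_key: bytes, nonce: bytes, ct: bytes) -> str:
    """Server side: only the holder of the matching key recovers the name."""
    pt = bytes(a ^ b for a, b in zip(ct, _keystream(dns_published_key, nonce, len(ct))))
    return pt.decode()
```

The network sees only the ciphertext in the ClientHello; the name travels end to end without ever appearing in cleartext.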


This leads to an interesting, long-term vision of the web. What does the web look like from a network perspective? This is kind of where this general thread leads, if you take it to its logical conclusions. You would see as a network operator, if you're just looking at TLS connections, I intend each of these black arrows to be one TLS connection because everything else with this hyper-powered coalescing can be coalesced onto that TLS connection. So if you're a network operator and you're watching what your users are doing, you'll see a connection to a DNS provider, you'll see a connection to a CDN, you'll see a connection to a Cloud provider, but you won't see any names of what they're talking to within those spaces.


And moreover, if you're trying to do advanced traffic analytics (Cisco makes products in this space that do super-cool statistical machine-learning stuff to classify encrypted traffic based just on packet lengths and things like that), you now have no way to recognize different sites and classify their traffic differently. You can only look at a single TLS connection that carries everything going to one of these providers. So we haven't just hidden the identities at this stage; we've made it harder to apply more advanced techniques to extract things from within the encrypted envelope as well. A lot of protection there.


The last thing I'll talk about is how we can trust our intermediaries less: trust our CDNs less, trust our service providers less. I do this a lot in my job at Cisco. I tell our customers, "Please trust us less. We want you to feel more confident using us because you don't have to expose as much stuff to us. You don't have to take on risk by using our services." I love the definition in RFC 4949 that something you trust is something that can betray you. I like this idea of reducing trust as a way of building trust.


So what do we trust right now? Well, from an identity perspective, since time immemorial, we've trusted the edge: whatever edge is delivering the content, whatever thing a browser makes a TLS connection to. So we put the signing keys — represented by this fountain pen emoji — we put the signing keys, which are sensitive assets because they're tied to a certificate and so can be used to masquerade as a site, out at the edge, and we use them to sign TLS handshakes to authenticate our servers to browsers.


Why is this a problem? Well, this is a typical certificate hierarchy. You've got a root of authority that's valid for a really long time; an intermediate authority that's valid for a less long time, though still pretty long; and then these end-entity certificates that are valid for some time, usually one to two years with traditional CAs — Let's Encrypt has pushed that down to three months thanks to automation — but it's somewhere in that three-months-to-two-years window. It's a long time. This becomes a problem because the authority certificates are kept literally under lock and key. The root authorities that are valid for decades literally live offline, in safes that are accredited for NSA use in a lot of cases.


The online authorities, the issuing authorities, live in secure data centers under active controls and things like that. So we've got a lot of controls on those authority keys. But the identity certificates that websites use have to live out in the wild west. They have to live on the edge; they have to be actively used for however many requests per second. And so the private keys that correspond to those certificates carry a lot of risk. However long that certificate lasts is your vulnerability window if the private key gets compromised. And I use "compromise" a little loosely here. Compromise in the hacker sense, where someone breaks into the infrastructure and steals the private key, is one type of compromise.


But a compromise can also be in the sense of authorization. If you've been using a CDN and given them a certificate and its key to represent your identity, and then you break off that relationship so they're no longer authorized to use that key, that's effectively a compromise, because they can use it to do something you haven't authorized as the legitimate owner of the domain. They're effectively an unauthorized representative of the domain at that point. This is your window for compromise in either of those senses: once you fire your CDN, they can still act on your behalf even though you've told them not to.


So how can we do better than this? Phase one was to take the signing keys off the edge. This is called signature proxying, or handshake proxying. The idea was to create some server and put it in a more trusted domain, under lock and key like the authority keys, and then, when you do a TLS handshake, forward the handshake all the way back to that trusted node. Only that trusted node has access to the signing keys; the edge just gets the session keys that come out of the handshake, so it can do the encrypted session. So you have very fine-grained control. The edge only has access to session keys; it has no access to signing keys. Only your trusted key server can touch those private keys, so that's your one point of compromise.
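Here's a sketch of that signature-proxying split, with HMAC standing in for the certificate's real RSA/ECDSA signing key (an assumption for the sake of a self-contained example; all names here are illustrative): the edge computes the handshake transcript hash, makes one round trip to a key server in the trusted zone for the CertificateVerify-style signature, and never touches the signing key itself.

```python
import hashlib
import hmac

# Hypothetical key material: lives only with the key server, never at the edge.
SIGNING_KEY = b"held-only-by-the-key-server"

def key_server_sign(transcript_hash: bytes) -> bytes:
    # Runs in the trusted domain, next to the private key.
    return hmac.new(SIGNING_KEY, transcript_hash, hashlib.sha256).digest()

def edge_handshake(client_hello: bytes) -> dict:
    # Runs at the edge: hash the handshake transcript, ask the key server
    # for the signature (this call is the extra round trip), and keep only
    # session-level material afterward.
    transcript_hash = hashlib.sha256(client_hello).digest()
    signature = key_server_sign(transcript_hash)
    return {"certificate_verify": signature, "has_signing_key": False}
```

The design trade is visible in the code: the edge is fully capable of serving traffic, but every handshake pays a trip back to `key_server_sign`, which is exactly the latency problem the next approach addresses.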


Now, the obvious implication of this: it's the opposite of fast. Whatever benefit you got by putting that thing out at the edge is now negated, because you have a round trip back to wherever your safe place is to do the handshake. So this has some downsides from that perspective. So the second wave of innovation here is to do a more limited delegation. The benefit of the proxy approach is that you've got very fine-grained, per-session control, but you have this latency cost. We can hit a middle point: an intermediate level of control without the latency cost.


The idea here is that you do some initial setup to delegate your signing authority. You delegate from the pen to the crayon, if you will. You delegate to the edge, but in a limited way: you give it a couple of days' worth of delegation, so you have a very limited window. And then you use that delegated key to actually sign the handshakes and do the TLS interactions. How does this look in the handshake? Basically, all this is is another layer down in the certificate chain. Instead of making another certificate, you basically just swap out the key in the certificate. The holder of the certificate says: instead of using this key, the key that was certified by the public CA, please trust this other key that I've placed out on the edge to actually serve your HTTPS requests.


There's this object defined in the spec that has the public key you should trust, and it gets wrapped up in an object that's signed by the key that's in the public certificate. Then, in the TLS handshake, the client indicates its support; it says, I can use these delegated credentials in addition to the regular PKI. And then the server sends down the delegated credential in addition to its certificate. The CertificateVerify, which has the signature over the handshake, is computed with the delegated key instead of the key that's in the certificate. Kind of how you'd expect another layer of delegation to work.
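Here's a toy model of that delegation, assuming HMAC as a stand-in for the certificate key's real signature algorithm and illustrative field names (the actual wire format is defined in the TLS delegated-credentials spec). The points it shows: the certificate key signs a short-lived structure binding the edge's public key to an expiry, validity is capped at seven days, and the verifier checks both the signature and the window.

```python
import hashlib
import hmac
import time

MAX_VALIDITY = 7 * 24 * 3600  # the spec caps delegated credentials at 7 days

# Hypothetical key material: stays with the certificate holder, off the edge.
CERT_KEY = b"certificate-private-key"

def issue_delegated_credential(edge_public_key: bytes, valid_seconds: int,
                               now=None) -> dict:
    """Sign a short-lived delegation of the edge key with the certificate key."""
    assert valid_seconds <= MAX_VALIDITY
    now = time.time() if now is None else now
    not_after = int(now + valid_seconds)
    body = edge_public_key + not_after.to_bytes(8, "big")
    return {"edge_public_key": edge_public_key,
            "not_after": not_after,
            "signature": hmac.new(CERT_KEY, body, hashlib.sha256).digest()}

def verify_delegated_credential(dc: dict, now=None) -> bool:
    """Check the delegation's signature and that it hasn't expired."""
    now = time.time() if now is None else now
    body = dc["edge_public_key"] + dc["not_after"].to_bytes(8, "big")
    ok_sig = hmac.compare_digest(
        dc["signature"], hmac.new(CERT_KEY, body, hashlib.sha256).digest())
    return ok_sig and now < dc["not_after"]
```

In a real deployment the issuing side would run wherever the certificate key lives (ideally with the customer, as discussed below), and the edge would present the credential in the handshake and sign with the delegated key.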


Now, any problem in computer science can be solved with an additional layer of delegation. Why does this matter? Like I said, this is all about constraining blast radius, right? We have to enable our intermediaries to act on our behalf, because that's what we're paying them to do, but we'd like to minimize the risk profile. So what we've done with this additional layer of delegation is take our compromise window from on the order of years down to on the order of days. Practically speaking, you can only get down to about three days before clock skew becomes an issue, but you can very practically do a week-long delegation and things work just fine. Of these two techniques, I think signature proxying is in active operation right now in a couple of CDNs, and Facebook has started prototyping with delegated credentials as a way to do assurance inside their infrastructure, to delegate from trusted zones out to less trusted zones.


What I'll observe here is that most of the deployment today has been inside networks: inside CDNs, inside the Facebook network. I think there's an interesting possibility here, which hasn't really been well explored, to do these assurances across the provider-customer boundary. Wouldn't it be cool if, with this delegated credential stuff, the customer could hold the certificate key, operate some key server, and every so often issue a delegated credential to the CDN or to the platform?


This sounds like a little bit of a pain, because it's more work for the customer, but we actually have some pretty good experience with this. WebEx Teams has a whole end-to-end secure messaging system built on this principle. In our end-to-end messaging, a customer holds a key server, clients within that customer talk to that key server to get keys, and then all of the messaging is encrypted in a way that Cisco can't see the messages. So there's a little bit of prior art here for making these sorts of collaborative security arrangements, where the customer holds some critical infrastructure, the stuff that really needs to be secure, and delegates what needs to be delegated to its various service providers. I think there's some room for exploration here. Talk to your favorite CDN provider about support.


I'm going to close here, in the last couple of minutes, with a thought for the poor CISO. CISOs really get it in both directions here. On the one hand, they really don't want their certificates to be held by their service providers; on the other hand, they really wish they could see what's going on in their networks. They've been in an increasingly difficult position due to the increasing level of encryption in the network, and now, with all this additional privacy stuff, they can't even see the hostnames that people are connecting to. Either that, or they're going to have to buy really expensive boxes from Cisco that decrypt and re-encrypt TLS, which is a bad outcome for everybody. Maybe it's not bad for someone at Cisco, but it's bad for Cisco's customers, because those boxes are expensive. It's bad for network operators because they have to buy them. And it's bad for you as application people, because now your TLS is broken in the middle, and who knows what those TLS boxes are doing.


Verging into a little bit of a philosophy, but I think there's a need in this industry overall, speaking at a very high level, to rethink how we do network protection. We've got some initial examples of a more collaborative approach, which is what I think we need to get to. We've been changing how users are protected in encryption. We need to make corresponding changes with how we protect the network. I think where that points is to more collaboration. Not breaking encryption, but talking to the people who are in the loop on the encryption. So things like Zero Trust are examples of how applications and how endpoints can collaborate with the network to get security protections without opening up the whole envelope. So just a little bit of philosophy to wrap up here.


Hopefully I've convinced you that HTTPS is awesome and getting better. We've still got a couple of challenging problems for HTTPS. We had this automation problem, which we've made a lot of progress on, and we have a pretty good ecosystem around it right now. We've got these privacy technologies coming down the road to make things much more private and to protect identities as browsers connect to servers. And we've got some things starting to be deployed for limiting risk due to intermediaries, which I think can be powerful for folks using service providers. And with that, thanks.