HTTPS is not a magic bullet for Web security

This article was published in Ars Technica; you can view the original there, complete with graphics, comments, and other fun stuff.

Everyone seems to be cheering for the “S” these days (just like American fashion designer Betsey Johnson did in 1986).

Security theater

Once you understand what HTTPS offers (encryption, integrity, and authentication), it should be easy to see why your bank uses it, and why Gmail, Facebook, Twitter, and any other sites you log in to use it (or at least should). What’s less immediately obvious is why every site on the Web can benefit from HTTPS. Does HTTPS help some long-archived, no-longer-maintained bit of ephemera from the early Web?

Software developer and blogger Dave Winer argues, in a post entitled “HTTPS is expensive security theater,” that not only does HTTPS not help old, archived sites, it’s a waste of the site owner’s time. “I have a couple dozen sites that are just archives of projects that were completed a long time ago,” writes Winer. “I’m one person. I don’t need make-work projects, I like to create new stuff, I don’t need to make Google or Mozilla or the EFF or Nieman Lab happy.”

Winer is not alone. In fact, he’s in very good company: no less than Tim Berners-Lee has questioned the move to HTTPS, going so far as to call it “arguably a greater threat to the integrity for the Web than anything else in its history.” Berners-Lee does think the Web should be encrypted; he just doesn’t like the way it’s currently being done. He would rather see HTTP itself upgraded than see the Web shift to the separate HTTPS protocol.

Together, Winer and Berners-Lee highlight the two big potential problems of moving the Web to HTTPS. It significantly complicates the process of setting up a website and creating something on the Web, and it might break links—billions of links.

It’s easy for savvy developers to dismiss the first problem—that HTTPS adds considerable complexity. But what makes the Web great is that you don’t have to be a savvy developer to be a part of it. Anyone with a few dollars a month to spare can rent their own server space somewhere, throw some HTML files in a folder, and publish their thoughts on the Web. A few dollars more gets you a nice URL, but that’s not strictly necessary.

Requiring sites to include a security certificate adds a significant barrier to entry to the Web. Anyone who has put in the effort to get HTTPS working on even one site knows that it can be a tremendous hassle. And this is likely the biggest obstacle to widespread HTTPS adoption among small site operators (who make up the bulk of the Web).

Until very recently, there was no way to obtain a truly free SSL certificate (a few certificate authorities didn’t charge to issue one, but charged a fee if you ever needed to revoke it). This was the first challenge that HTTPS proponents set out to solve. Now companies like Symantec offer free certificates, and the EFF and Mozilla have partnered to create Let’s Encrypt, which also offers them. These options are genuinely free, with no catches, and they don’t require users to provide any identifying information. There’s also a set of command line tools that makes installing and configuring these certificates pretty simple, provided you have some basic sysadmin knowledge (and SSH access to your server).
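To give a sense of what “pretty simple” means in practice, here is a rough sketch of the process using the Let’s Encrypt certbot client on a Debian-style server running nginx. The package names and flags are illustrative; they vary by operating system and client version, so check your host’s documentation before copying anything.

```
# Install the Let's Encrypt client and its nginx plugin (Debian/Ubuntu shown)
sudo apt-get install certbot python3-certbot-nginx

# Request a certificate for your domains and let certbot update the nginx config
sudo certbot --nginx -d example.com -d www.example.com

# Let's Encrypt certificates expire after 90 days; confirm automatic renewal works
sudo certbot renew --dry-run
```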

That’s not the end of the headache, though. Once you have a certificate, you have to install it and get your Web server to serve it up properly. Again, assuming you have a basic sysadmin’s knowledge, this isn’t too hard, though tweaking your configuration until you get an A+ grade on SSL Labs’ security test can take many hours of debugging (and even top sites like Facebook only score a B). I have been running my own website, building my own CMSes, and running servers on the Web for 15 years, and I can say without hesitation that getting HTTPS working on my site was the hardest thing I’ve done. It was hard enough that, like Winer, I haven’t bothered with old archived sites.
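For the curious, the server-side configuration that paragraph glosses over boils down to a handful of directives. What follows is a hedged sketch for nginx, assuming a Let’s Encrypt certificate at its default paths; a configuration that actually earns an A+ from SSL Labs requires further tuning of ciphers, session settings, and the like.

```
server {
    listen 443 ssl;
    server_name example.com;

    # Certificate chain and private key issued by Let's Encrypt
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Allow only modern protocol versions (adjust to what your nginx build supports)
    ssl_protocols TLSv1.2 TLSv1.3;

    # Tell returning browsers to use HTTPS from now on (HSTS)
    add_header Strict-Transport-Security "max-age=31536000" always;
}

server {
    # Redirect plain HTTP requests to the HTTPS version of the site
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```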

Over the long run, Let’s Encrypt is hoping to partner with major Web hosts so that users setting up their own blog with a popular CMS like WordPress can get an HTTPS site up and running as easily as clicking a button. Things will, however, likely never be that simple for anyone who wants to take a more DIY approach and write their own software.

Simplifying the process of setting up HTTPS means more tools in your toolchain. It makes the individual more dependent on tools built by others. Developer Ben Klemens has an essay about exactly this dependency, writing that if “solving the problem consists of just starting a tool up, my sense of wonder has gone from ‘Look what I did’ to ‘Look what these other people did,’ which is time-efficient but not especially fun.”

To developers employed by large companies to solve complicated problems, the complaint that HTTPS takes the fun out of the Web may seem trivial. It isn’t. If the Web stops being fun for individuals, it becomes solely the province of those companies. We are no longer creators of the Web, merely its users.

Think of the Links

Berners-Lee’s concerns about HTTPS are easier to fix. What happens to all those links to HTTP sites when all those sites become HTTPS? The current answer is they break. There are quite a few proposals that would mitigate some of this at the browser level. When I asked Mozilla’s Barnes about Berners-Lee’s concerns, he told me, “Tim has been a really useful contrarian voice. His views have driven the browser and Web community to address concerns he has raised.”

Barnes, for his part, does care about URLs: he’s co-editor of a W3C specification that aims to preserve all those old links by upgrading them to HTTPS. The spec is known as HSTS priming, and it works with another proposed standard, Upgrade Insecure Requests, to offer the Web an upgrade path around the link rot Berners-Lee fears.

With Upgrade Insecure Requests, site authors could tell a browser that they intend all resources to be loaded over HTTPS, even if the links point to HTTP. This solves the legacy content problem, particularly in cases where the content can’t be updated (like, for example, The New York Times’ archived sites).
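As a rough illustration, Upgrade Insecure Requests is expressed as a Content Security Policy directive; the header below is a sketch, not a complete policy. A site that has moved to HTTPS could send it alongside the rest of its response headers:

```
Content-Security-Policy: upgrade-insecure-requests
```

A site that can edit its pages but not its server configuration could embed the same directive in the markup itself:

```
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
```

A browser that supports the draft would then rewrite insecure http:// subresource requests (and same-host links) to https:// before sending them, instead of fetching the old URLs as written.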

Both of these proposals are still very early drafts, but they would, if implemented, provide a way around one of the biggest problems with HTTPS. At least, they’d prevent broken links some of the time. Totally abandoned content will never be upgraded to HTTPS, and neither will content whose authors, like Winer, elect not to upgrade. This isn’t a huge problem, though, because browsers will still happily load the insecure content (for now at least).

More honest Web browsers

The Web needs encryption because the Web’s users need it. The Web needs encryption because the network needs it to remain neutral. The Web needs encryption because without it just browsing can turn you into an unwitting helper in a DDoS attack.

There are a lot of companies pushing HTTPS. While most have their own interests first, for now at least those interests align with Web users’ interests. But none of these companies have the kind of power and influence that Google and, to a lesser degree, Mozilla have as browser makers. And it’s up to browser makers to fix the confusion that currently surrounds HTTPS.

This starts with presentation. The current way browsers highlight HTTPS connections is misleading and needs to change.

The green lock icon that browsers use to denote a secure connection is too easily construed as a signal that the site itself is “secure.” And labeling HTTPS sites “secure” and non-HTTPS sites “insecure,” whether explicitly or by implication, is deeply flawed. Again, just because a site uses HTTPS doesn’t mean it’s not storing your password and credit card number in plain text somewhere, and it doesn’t mean the site hasn’t been hacked to serve malicious JavaScript. As it stands today, browsers do not make it clear enough that the lock icon is a statement about the connection to the site, not about the site itself.

As Hoffman-Andrews puts it, “calling HTTPS sites secure is generally not accurate, but it’s definitely accurate to call HTTP sites insecure.” In fact, browsers have no way of knowing if the site is truly “secure” in the broader sense. Neither do you and I. No one is ever going to fix that, but browsers can at least be more accurate and transparent with users.

The Chromium project has already announced plans to change the way it displays the lock and to start marking HTTP connections as insecure. Mozilla will do roughly the same with Firefox.

It’s tempting to see this as hostile to publishers. The message seems to be: fall in line with HTTPS or, as Winer writes, the browsers will “make sure everyone knows you’re not to be trusted.” What the broken lock really says, though, is that your browser can’t guarantee that the content you’re reading hasn’t been tampered with. It also can’t guarantee that you aren’t currently part of a DDoS attack against a site you’ve never even heard of. Nor can it guarantee that you’re connected to the site you think you’re connected to. All a browser can guarantee is that there is nothing secure about your connection, and that anyone could be doing anything to it.

All of these things have always been true when you connect to an HTTP site; the only thing that’s changing is that your browser is telling you about it. The far more important change comes after that, when there will be no icon at all for HTTPS connections. The only “security” indicator you’ll ever see will be a large red X in the URL bar when you visit a site over plain HTTP.

Winer’s fear is that Google especially, because it has a financial interest in HTTPS (HTTPS prevents Google’s competitors from scraping its search results), will stop loading and ranking HTTP sites altogether. It would be an egregious abuse of their place in the Web ecosystem for any browser to stop loading HTTP content entirely, and so far that’s not happening. But if it does, if Google’s self-interest is no longer aligned with the Web’s, then the Web should resist it. Warnings help users make informed decisions; prohibitions help no one.

The Web has always been a messy, complicated thing. The last thing it needs now is an artificial binary construct of “good” and “bad” as determined by browser vendors. At the same time, the current lack of encrypted connections has created a Web that’s no longer in the user’s control. The Web has become a broad surveillance tool for everyone from the NSA to Google to Verizon. Without encryption, the network becomes a tool for whoever owns the largest nodes. The small creators of this thing we call the Web would then simply be at the mercy of the network owners and their various motives.

Giving users greater secrecy, ensuring data integrity in transit, and providing a means of establishing authenticity empower the user and help make the network decidedly less hostile than it is right now. Abuse will still happen. Surveillance will still be possible. But as Mill notes, attacks will “change from bulk to targeted,” and the network can return to being just a dumb pipe.