Google’s announcement that the presence (or lack thereof) of HTTPS is going to influence site search result rankings is a rare and very specific announcement on their part regarding their search algorithm. The initial weight in their ranking algorithm will be low, but they suggest it will increase over time.
If you run a fairly boring web site that doesn’t handle sensitive data, reading Google’s official announcement that the presence of HTTPS on a web site will influence search ranking may leave you a bit confused. The picture becomes clearer if you look at it in the context of the events that have led up to it. The most recent of these is well articulated in the HTTPS Everywhere talk given by Google at their recent developer conference a few months back (even though search rankings were not the point there).
It isn’t the most exciting talk to watch (even if you are technical), as it’s 45 minutes of talking about why you should care about HTTPS and best practices for deploying it. The gist, however, is easily summarized: even if confidentiality isn’t a direct concern for an individual web site, it is collectively important (more on what that means in a moment) and, in any case, authentication and tampering are important to everybody (even if they haven’t realized it yet).
Consider that links in otherwise innocuous sites can be changed to point to malicious destinations. And boring sites can be impersonated, eventually leading to other places, which you might entrust with your information. Also consider that the information exposed — collectively — by visiting a series of otherwise unrelated sites can expose a lot more information about an individual than they may realize (e.g. looming financial problems, medical matters, and pending business takeovers to just name a few off the top of my head).
Security discussions are often difficult ones to have because it can be tough to draw a line between being overly paranoid and being just paranoid enough1. And this announcement isn’t even entirely about security. Many web site operators are simply concerned with search engine rankings, and that’s just fine. I often do security work, as well as online marketing, giving me something of a unique perspective. My initial response to Google’s HTTPS search results ranking announcement was not 100% positive. After a bit of digging and further consideration, though, I fully embrace it.
This is my take on their announcement, and my attempt to make my case for being just paranoid enough while not being overly paranoid.
There are numerous web sites where encryption has not previously been considered to be all that important, to site operators and visitors alike. That doesn’t mean there haven’t been reasons to add HTTPS to those web sites. They simply haven’t been sufficiently compelling (or, at the very least, obvious enough).
HTTPS is not simply about confidentiality, but also about authentication and tamper resistance. Together these become important everywhere on the web, and not least of all on the most innocuous sites. Until now though the scales have mostly been tilted towards confidentiality and only of a very limited scope: mostly password exposure on web sites that require a log-in2.
It’s hard to point at any one factor, but a big one is almost certainly the prevalence of free/open WiFi hot spots, such as those at coffeehouses. Every user of these hot spots is exposing every web site they visit along with the content of those connections. The former if they use the hot spot at all (unless they use both a VPN and tunnel their DNS queries), and the latter if they use it to access a web site not secured by HTTPS.
This was one of the major reasons Facebook and others started supporting HTTPS during the log-in process (if they weren’t already trying to do so)3.
Google has always aimed to take the security of a web site into account on their users’ behalf when returning search results. In this case though, the overt announcement of this algorithm change, and the outright statement that the weight of HTTPS in their algorithm may increase over time, do more than simply say that the presence of HTTPS is a passive signal that a web site is mature/well operated/more trustworthy. Google also seems to be using their vast influence4 to actively encourage more operators to add (or shift entirely to) HTTPS.
The concern isn’t simply about confidentiality — although that’s a bigger concern than many might think, even on otherwise innocuous web sites5 — but about other security measures that protect users from malicious actors. Namely, enabling authentication of the web site being visited (i.e. you are interacting with the real google.com, not some evildoer doing a passable impersonation of it for their own nefarious purposes) and enabling measures that prevent tampering with content in transit (i.e. switching out the URL on a Coca-Cola ad — and it doesn’t even have to be an ad — to take the visitor to a malicious web site).
The common thread here is that the web is, well, a web. There is trust spread all around it. Each of us has different degrees of trust in a given web site, but there are numerous ways to get from one web site to the next. And bad actors are waiting to either intercept what we’re doing or take us places we weren’t planning. HTTPS helps resist all of these scenarios.
Google appears to be doing two things here:
- Using HTTPS today as an indirect indicator of a likely-to-be-higher-quality web site
- Using their weight in the industry to tilt the scales for site operators previously on the fence, or outright against adding HTTPS to their web sites, a bit closer to doing so
Sometimes encryption means just confidentiality, but to really be useful it also must incorporate authentication and tamper resistance. Otherwise it’s just asking to be abused. HTTPS, and the underlying protocol TLS, is the closest we have to truly ubiquitous encryption of everything. And when implemented properly it provides all three of the potential benefits of encryption: confidentiality, authentication, and tamper-resistance.
Combined with faster processing and more mature software platforms, perhaps it’s time we stop waiting for IPv6’s end-to-end encryption for everybody pipe dream6, and just accept that HTTPS and TLS are here to stay for the time being. They are the best we’ve got. They work today. They do (or can) cover the vast majority of interesting traffic today.
P.S. For now I’ll leave out that the collective information sharing argument for HTTPS only addresses bad actors, but not legitimate commercial actors, such as advertisers/marketers who will still be able to collect lots of information about your browsing habits. That, however, is less a topic in need of a technical solution. It’s more one in need of a social solution7. The technical aspects are not all that interesting or difficult to implement. It also has no relevancy to the HTTPS Everywhere argument, other than some parallels when it comes to how seemingly disparate bits of information about our Internet browsing habits can be collected together to draw pictures we may not have realized we’ve painted for anyone else8. I mostly included this P.S. to head off the cynicism of the somewhat more paranoid folks (which I happen to agree with in this case) that might otherwise overshadow the benefits of HTTPS Everywhere9.
this is often why, unfortunately, the easiest times to have security discussions are often right after something bad has happened ↩
and even this was overlooked up until only a couple years ago on even major web sites like Facebook ↩
even today many sites still only use HTTPS for the log-in process, reverting to HTTP for everything else ↩
ahem, or weight, pun intended ↩
Even seemingly boring site visits reveal information about the visitor. Particularly when viewed in the context of what sites someone is visiting around the same time. ↩
more because encryption is toothlessly, but arguably pragmatically, optional in IPv6 ↩
e.g. awareness and commercial pressure, and perhaps an acceptance by privacy advocates that privacy is a spectrum which we’re all willing to operate at different points along depending on the context … and what we’re getting in exchange in the case of commercial transactions ↩
and I still trust Coca-Cola more than I do an outright malicious actor ↩
in hindsight this P.S. may be more distracting in and of itself, oops ↩
Most of the time our Ubuntu servers don’t have a GUI. How do you enable automated updates?
It’s pretty easy.
How To Do It
1. Install the package ‘unattended-upgrades’ – e.g.
aptitude install unattended-upgrades
2. Configure 50unattended-upgrades by opening the configuration file
Uncomment the *-security and *-updates lines in the Allowed-Origins section (they should be around the third or fourth line in the file) – e.g.
// Automatically upgrade packages from these (origin:archive) pairs
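On Ubuntu this file lives at /etc/apt/apt.conf.d/50unattended-upgrades. With the security and updates entries uncommented, the section looks roughly like this (the exact origin strings vary by release; some older releases spell them out literally, e.g. "Ubuntu precise-security"):

```
// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
        "${distro_id}:${distro_codename}-updates";
//      "${distro_id}:${distro_codename}-proposed";
//      "${distro_id}:${distro_codename}-backports";
};
```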
3. Configure 10periodic by opening the configuration file
Set ‘APT::Periodic::Download-Upgradeable-Packages’ to ‘1’ (true). And add the following line at the end of the file:
APT::Periodic::Unattended-Upgrade "1";
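Putting the pieces together, the resulting /etc/apt/apt.conf.d/10periodic would look something like this (the update-lists and autoclean values shown are common defaults; only the two settings discussed above need attention):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
```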
If things are not working as expected, the logs can be found in /var/log/unattended-upgrades.
When will it apply updates?
Whenever cron.daily runs (see /etc/crontab). Usually about 6:30AM system time.
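For reference, the stock /etc/crontab entry that triggers this looks like the following on the Ubuntu systems I’ve seen (the 25 6 schedule is why updates tend to land around 6:25–6:30AM system time):

```
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
```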
Do you want to get notified when things are updated?
In the 50unattended-upgrades file, uncomment the ‘Unattended-Upgrade::Mail’ line and set it to the address that should receive notifications.
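The shipped file contains this directive commented out; uncommented and pointed at a monitored address (root@localhost is just the stock example), it reads:

```
Unattended-Upgrade::Mail "root@localhost";
```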
Do you want to only get notified when there is an error?
In the 50unattended-upgrades file, uncomment the ‘Unattended-Upgrade::MailOnlyOnError’ line and set it to ‘true’.
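With this set (in addition to the Mail directive), mail is only sent when something goes wrong:

```
Unattended-Upgrade::MailOnlyOnError "true";
```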
What about updates that require rebooting?
Some updates, like kernel updates, require rebooting. These are disabled by default. If you have email notifications on, you’ll see them there. There is also an automatic reboot option – commented out by default for obvious reasons – in 50unattended-upgrades that you can explore using.
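If you do decide to try it, the relevant option in 50unattended-upgrades, once uncommented and enabled, looks like this:

```
Unattended-Upgrade::Automatic-Reboot "true";
```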
Isn’t It Risky to Automate Updates?
It is up to you to decide whether automatic updates are acceptable in your situation. I find that I have a mixture of hosts: some where automated updates are a definite no-no and others where the modest risk introduced by allowing automated security updates is far preferable to waiting for manual patching.
In general, I use this a lot with standalone hosts that do special purpose things behind the scenes, but rarely with production web applications.
Anyone in modern IT is in a powerful position: Every single day they are just one idea away from their next $100,000 in business value creation.
Every single day …one idea away!
Every business, at all times, has room for improvement in areas like efficiency, customer retention, competitive positioning, marketing, follow-up and outreach, and sales (just to name a few).
There are a million places that IT impacts the modern organization. Think behind the scenes – e.g. work flows, business processes, and risk management – as well as where customers interact with the business – e.g. web site/application responsiveness, information accessibility, front-line staff interactions (i.e. tools they rely on).
Technologists that focus on the business first are better positioned to help in these areas. These are all areas where money is either being spent unnecessarily (directly or indirectly), where money is being lost (lost sales or lower than necessary retention), or competitive positioning is being weakened (hurting growth and profitability).
Sometimes technology is the obstacle. Other times it is the solution. Recognizing these situations then applying your expertise to come up with possible solutions is the key to coming up with your next $100,000 idea.
Here are some areas to look first:
- Where existing technology is creating hurdles, friction, or pain
- Where new technology could reduce friction, errors, or delays, such as within repetitive work flows and business processes
- Where customers are looking for something, but aren’t getting it (either at all or fast enough)
- Where an existing technical solution – e.g. service provider, software platform – isn’t optimal, is overkill, or is overlapping with another solution (and thus probably costing more money than necessary)
If you focus primarily on maintenance1, you aren’t coming up with ideas, and you don’t create any new value. If you are coming up with ideas but are not — critically — finding a way to try them out, you aren’t creating any value either. With a bit of pragmatism and a precision application of technology, this can be changed.
Don’t overlook even seemingly small refinements. Remember that businesses often do the same things over and over again. Often complacency sets in and a particular level of performance is accepted, even if it’s far from optimal. A small improvement in a work flow that saves a few bucks a day, annualized and then amortized over several years, adds up fast (and not every idea has to be anywhere near $100,000 to be worthwhile to implement …or even just to trial).
Apply this mindset and you’ll become more valuable yourself and – every once in a while – you might even run across a million dollar idea. Good for you. (It’s not as rare as you might think).
or, worse, firefighting and looking like the hero ↩
May 29, 2014
An excellent post by Brian Cervino about how he supports Fog Creek Software’s Trello and its four-million-strong user base:

As we pass four million Trello members I thought it would be a good time to share with other small software development teams the fact that providing high quality support doesn’t have to be expensive or impossible. This includes a one business day initial response window for all newly created cases and making sure to follow through on all open cases until resolution. With just a few tools and some dedicated time, it is possible for even just one person like myself to support our entire member base.
As we pass four million Trello members I thought it would be a good time to share with other small software development teams the fact that providing high quality support doesn’t have to be expensive or impossible. This includes a one business day initial response window for all newly created cases and making sure to follow through on all open cases until resolution. With just a few tools and some dedicated time, it is possible for even just one person like myself to support our entire member base.