For better or for worse, the modern age has ushered in new 'disruptive' technologies the like of which we have never before seen. The classic example of this is what some people have taken to calling the sharing economy.
The sharing economy, in a nutshell, is based on the idea that while traditionally people have bought goods or services from specialized third parties (taxi rides from taxi companies, hotel rooms from hotel companies), people totally would buy these things from each other if there existed a reliable channel to mediate those transactions. What's more, lots of people have services to offer, but no good way to offer them. If you're going out of town for a week, your apartment is just sitting there empty, and empty living space has an inherent value which you are not capitalizing on. Catchphrases like "unused value is wasted value" get thrown around a lot when describing this sort of situation.
Enter "sharing economy" apps. Uber, Lyft, et al., let you play taxi using your very own car. Airbnb lets you play hotel with your own property. The apps are a mediated channel for connecting consumers with providers, and (hopefully) giving each a reasonable level of assurance about the other. Basically, they give you a way to easily rent out things you already own, on your schedule. Stated in the abstract this way, it probably sounds nice. And a lot of the time, it is. But it also has its share of failures, and most people seem to turn a blind eye to them, drunk as we are on its successes.
Let's start with the name: "the sharing economy". This is a masterpiece of euphemism and marketing. Sharing is letting someone crash on your couch. Sharing is carpooling. The second you attach a price to something, the second you offer your services on a market instead of as a favor, what you're doing stops being sharing. But of course, sharing is such a nice word that people are reluctant to stop using it, even though very cogent arguments have been put forward about how misleading the name is, and other names have been suggested, most notably "access economy".
The next problem is that price aside, the generous-individuals-sharing-hospitality-because-we're-all-such-good-buddies narrative still isn't really true. Power players, both individual and corporate, have emerged, trying in essence to be the hotel and taxi companies (so to speak) of the sharing economy. The more successful they are, the more resources they have to put towards furthering their success, because that's how capitalism works. Of course, many die-hard capitalists would say that if this is the will of the market, then so be it. But it doesn't sit well -- aren't these exactly the sort of entities the sharing economy promised to move us away from?
Then there's the issue of regulation. And make no mistake: this is a big issue. Uber, for instance, has had no end of legal troubles in virtually every country where it operates. Its failure to fit the business models around which extant legal regulations are built means that it can in many cases dodge or muscle past regulations meant to apply to businesses offering the service it provides. Uber's ability to sidestep laws meant to hold it to ethical standards means that it has been able to engage time and again in startlingly unethical practices.
How unethical, you ask? I'll let you judge that for yourself. All I'm saying is, it's not a pretty picture. And it doesn't stop with Uber's own practices -- they also have a track record of enabling and defending drivers' ethically questionable conduct.
And it's not just Uber: Related companies like Lyft have also been taking all kinds of questionable liberties with their workforce, provoking high-profile lawsuits and setting controversial legal precedent. The question of whether these companies' workers, some of whom are full-time drivers who make their living off of Uber, should even be allowed to organize is still under active discussion, somehow.
It's not just quasi-taxi services, either. San Francisco has gotten pretty tired of Airbnb, seemingly for good reason. Plus, it seems like for every one of the service's funny stories ("boutique igloo"!), there's a horror story to balance it out, and while the blame in these stories rarely rests on one party alone, it's also rare to find one in which the facilitating service is not at least partly at fault.
What this situation reminds me of, somehow, is a little story that showed up inside a novel, told by one character to another. The story is about the term "bottle-waver", which I think the author coined. It might have been Neal Stephenson, but I'm not sure. In any case, the story as I remember it goes that there's this tiny island, and there's a tribe living on the island, and they've never made contact with the outside world. They all live peaceful lives, unconcerned with what might lie beyond their shores... until one day, an empty glass bottle washes onto the beach.
This bottle just blows their minds -- they've never even seen glass before, bear in mind, and now suddenly here's this, and they don't have the slightest idea what to make of it. The villagers are equally awed and terrified, and so, seeking answers, they take it to the village shaman. The shaman immediately recognizes this glass bottle to be an object of great magical power, but also has no idea how to use it. To save face, the shaman grabs a stick, puts the bottle on the end of the stick, and waves the stick overhead, declaring "Its power is mine!" The villagers, seeing this, are all forced to agree, and everything returns to the way it was.
The bottle-waver, then, is someone who claims as their own that which they don't even understand, seemingly hoping that by proclaiming the power of that which they have claimed, they will themselves acquire its power. Actually understanding the power in question is unnecessary, maybe even detrimental -- all you have to do is look, to the less informed, as if you understand it. This reminds me very much of the attitude these companies take towards their collective innovation, the 'sharing economy'. It's unclear whether any of them truly understand or even care about their technologies' ramifications for the marketplace, or for the cultures in which they operate. They've hit upon something nobody's ever seen before -- their glass bottle -- and as soon as they found it, they all lunged for their sticks, to see who could wave it the highest. Now Silicon Valley watches, enthralled, as everyone in the crowd wishes nothing more than to take the bottle's power for themselves. Suggestion after suggestion gets thrown out -- "Uber but for x," "Uber but for y" -- but as yet, they're all too enthralled to suggest the one thing that might actually help: that we all catch our breath, take the bottle down off the stick, and take a moment to try to figure out what bottles are actually good for.
Friday, January 29, 2016
Wednesday, January 20, 2016
Does UEFI Secure Boot Actually Help Security?
You know, BIOS gets a bad rap. Most people only know it by the splash screen they see when they first boot up, and if they ever have to actually interact with it, what they find is often downright jarring. Flat colors? Keyboard-only navigation? Didn't we leave all this behind decades ago?
Maybe we did in higher-level systems, but not here. And if we're being honest, I've always had a soft spot for those tacky, old-school ASCII menus. They're kind of cute. And UEFI, the successor to BIOS, is so user-friendly it creeps me out a little bit -- you can even use a mouse in it! What kind of low-level interface is that?
I do have to admit, though, that UEFI fixes some important problems. It can boot from multiple-terabyte hard drives, which apparently people need these days. It has networking capabilities that BIOS couldn't dream of. UEFI is more broadly portable across different processors, which helps with security and stability.
That's the good. There's also lots of bad. We could talk about UEFI's negligence towards long-standing device driver issues, but that's nothing next to Microsoft's darling, the UEFI "Secure Boot" feature. Secure Boot is borderline functional for Windows users and an unmitigated disaster for everyone else.
The problem Secure Boot was meant to solve is a classic security issue called the "Evil Maid Attack". As Bruce Schneier explains it:

Step 1: Attacker gains access to your shut-down computer and boots it from a separate volume. The attacker writes a hacked bootloader onto your system, then shuts it down.

Step 2: You boot your computer using the attacker's hacked bootloader, entering your encryption key. Once the disk is unlocked, the hacked bootloader does its mischief. It might install malware to capture the key and send it over the Internet somewhere, or store it in some location on the disk to be retrieved later, or whatever.

In essence, if you encrypt your hard drives with a password only you know, an attacker couldn't access those drives -- but that doesn't stop them from rewriting the piece of code that asks you for the password! If you don't realize what's going on until after you've unlocked the drive, that's game over.

Secure Boot tries to prevent this using what are called cryptographic signatures, or digital signatures. Just like signing your name is something that (supposedly) only you know how to do, a cryptographic signature is something only you (with the help of your computer, which has a big personal secret number saved on it) can generate.
You can cryptographically sign any piece of data, and that signature can serve as your personal seal of approval. Anyone can check that your signature on a file is valid, but they can't forge your signature. And if the file changes, your signature won't match it any more, so it's hard to get tricked into signing the wrong thing. As you can probably imagine, these signatures are really useful. For example, all major flavors of Linux use signatures when installing software to make sure their downloads weren't corrupted or tampered with in transit.
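To make that sign/verify relationship concrete, here's a toy sketch using textbook RSA with absurdly small numbers. To be clear, this is not a real signature scheme -- real ones use keys thousands of bits long plus padding (RSA-PSS) or different math entirely (Ed25519) -- it just shows the mechanics: the private number makes signatures, the public numbers check them, and a changed file breaks the match.

```python
# Toy sign/verify with textbook RSA and tiny numbers -- illustration only.
import hashlib

# A tiny RSA keypair: n = p*q is public, e is the public exponent,
# d is the private exponent (17 * 413 == 1 mod lcm(60, 52) = 780).
p, q = 61, 53
n = p * q   # 3233
e = 17      # public: anyone can verify with (n, e)
d = 413     # private: only the signer knows d

def sign(message: bytes) -> int:
    """Hash the message, then transform the digest with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Undo the transform with the public key; it must match the hash."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"bootloader v1.0"
sig = sign(msg)
print(verify(msg, sig))                  # True: file untouched, seal intact
print(verify(b"bootloader vEVIL", sig))  # False: file changed, seal broken
```

Note that the signature covers a hash of the file, so even a one-byte change to the bootloader makes the old signature worthless -- which is exactly the property the package-manager example relies on.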
So, what if we get the people who wrote our bootloader to cryptographically sign it, and we make UEFI check the signature and sound the alarm if it doesn't match? If your Windows bootloader is signed by Microsoft, you know you can trust it not to steal your password (well, that's not entirely true, but only because Microsoft is creepy). If someone overwrites that bootloader, the signature won't match, and UEFI can warn you of shenanigans and bail out.
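The firmware's side of that bargain can be sketched as a simple gatekeeper. Everything below is hypothetical -- the names, the key list, and especially the use of an HMAC as a runnable stand-in for real public-key verification (actual UEFI firmware checks RSA signatures against its built-in key databases). The point is only the control flow: match a trusted signer or refuse to boot.

```python
# Hypothetical sketch of the Secure Boot decision, not the real UEFI API.
import hashlib
import hmac

def check_signature(image: bytes, signature: bytes, key: bytes) -> bool:
    # Stand-in for public-key signature verification, so the sketch runs.
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def secure_boot(image: bytes, signature: bytes, trusted_keys) -> str:
    for key in trusted_keys:
        if check_signature(image, signature, key):
            return "boot"    # signature matches a trusted signer
    return "refuse"          # tampered, unsigned, or signed by an outsider

ms_key = b"microsoft-signing-key"
loader = b"\x7fELF...bootloader bytes..."
good_sig = hmac.new(ms_key, loader, hashlib.sha256).digest()

print(secure_boot(loader, good_sig, [ms_key]))             # boot
print(secure_boot(b"evil" + loader, good_sig, [ms_key]))   # refuse
```

Notice that the verdict depends entirely on which keys are in the trusted list -- a perfectly good bootloader signed by anyone *not* on the list is refused just the same as a tampered one, which is the crux of the complaint that follows.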
This might seem like a fine idea, but it has some bad consequences. Microsoft is vehement about manufacturers enabling Secure Boot and setting it to accept only Microsoft's signature if they want to ship Windows on their hardware. That prevents the computer from loading anything except Microsoft-signed code, meaning that with Secure Boot enabled, those computers can only run Windows. Regardless of Microsoft's claims to the contrary, this is a blatant attempt at promoting lock-in. The open-source community was, and is, less than thrilled.
"So", you might ask, "why not just set UEFI to accept Linux developers' signatures instead?" The answer is that, more often than not, you can't. Most if not all hardware manufacturers' UEFI implementations don't provide that option. "Why not have manufacturers bake the developers' keys in, then?" Well, in favor of that we have open-source geeks, and opposed to it we have Microsoft. One of these groups holds more influence than the other.
There do exist workarounds, but they're inconvenient and far from universal. More than that, the need for a workaround rather than the presence of a solution represents a toxic shift away from openness. This is why some have advocated renaming Secure Boot as "Restricted Boot".
Microsoft's official stance is that people who don't like Secure Boot being limited to Microsoft signatures can just disable the feature. This makes about as much sense as forcing a subletter to use a room lock whose key you've copied and telling them that if they aren't comfortable with that, they could always just not use a lock at all.
And astoundingly, most of the big players in this debacle have completely ignored the fact that there are better defenses against Evil Maid -- for example, this approach that Joanna Rutkowska outlined five years ago.
This situation has been developing since before UEFI even hit the market. Boot security still sucks, but it's marginally improving. For that, we have organizations like the Free Software Foundation and dedicated developers like Matthew Garrett (who wrote the workaround linked above -- and who turns out to be just as much of a righteous dude in non-UEFI matters) to thank. Microsoft doesn't seem to be coming to its senses any time soon, but hopefully boot security will continue to improve in spite of their influence.
Friday, January 15, 2016
Politics in Software
This is the start of a two-month series of posts on the intersection of politics and technology. The series consists of two bookend posts, with a number of focused topic discussions in between; this is the first bookend post. Now that the series is concluded, this post has been lightly edited to add links to the later posts.
Near my family's house in Seattle are two major construction projects. The first is building a new, refurbished waste transfer station; the second, a new corporate headquarters. In spite of the differences in these buildings' purposes, I'm willing to bet that the labor crews for each have pretty similar feelings towards their work. What difference does it make, being a bricklayer for the state or a bricklayer for private industry? Perhaps not much. It's understandable, then, that most people tend to view their work as apolitical.
And yet, in building something that other people are going to use, you are in some sense helping those people, and so perhaps we should give serious thought to who it is we help. In some domains it might not matter much -- certainly there's no shortage of people who can lay down bricks -- but in other domains, very real political shifts can take place without anyone caring or even noticing.
This probably sounds pretty abstract. The goal of the series I'm writing here is to bring this discussion down to earth. I'm going to try to illustrate, through concrete examples, the real and serious political consequences of the choices people make on what projects to support and what projects to ignore.
I'm focusing on software issues. There's a reason for this. A lot of people see software development as "digital bricklaying", and not without good reason: both have the potential to be menial, repetitive, borderline rote tasks with little reward aside from wages. It would be a mistake, though, to let this comparison lead us to assume that software is no more political than other menial crafts. As soon as we get into social issues, the comparison breaks down.
There can be deep political ramifications to software design decisions. Most people turn a blind eye here, or take only a superficial interest, caring about the politics just long enough to let someone convince them they're on the right side, then wandering off in a happy haze to implement some new half-baked idea. Half a year later, that idea is raining down all sorts of unintended consequences. This is the sort of thing we would call naïveté, if it were harmless. But when it impacts people's lives, we don't have the luxury of being so kind.
It's not all bad. Yes, we have lots of people out there with vested interests in ensuring copyright law continues to lag behind the digital age because they profit by abusing its archaisms. But we also have Cory Doctorow and Parker Higgins and Sarah Jeong and many more like them, people sincerely committed to tracking the issues, fighting the good fight, and making sure the rest of us can keep up with them, too.
Yes, we have the NSA and its allies actively working to undermine the technologies that keep us all safe and secure online, and recruiting as much talent as they can into their closed ecosystems, indirectly hamstringing public-domain research into technologies that grow more important with each passing month. But we also have the likes of Bruce Schneier and Phil Rogaway, the latter of whose linked paper is far and away one of the best publications in recent memory. These people are at the forefront of the modern issues in security and cryptography, and seem to be doing everything in their power to help advance the public good.
With so many intelligent, articulate, well-educated, well-connected, and well-respected voices on these issues, it almost feels arrogant or presumptuous to add my own. What do I have to say that our current luminaries haven't already said better?
I don't have a good answer to that question. The fact is, in order to pass my major's senior sequence I need to write a seven-part series of blog posts connected by some central theme, and I couldn't find any other theme that sat as well with me as this one.
I strongly encourage the reader to spend whatever time they can on the works of the people I listed above, and others like them. But just in case you decide to spend some time with me as well, here's a bird's-eye view of the topics I'm going to be taking on in the next installments.
- UEFI "Secure Boot", its consequences for open source, and the dangers of letting moneyed interests write the standards we're all going to use. (link)
- The sharing economy, how it's cool in some ways, and how in other ways it's really not. Due to the economic and regulatory angles, this is one of the richest and most nuanced examples of technology's political dimension. (link)
- The problem with media platforms refusing to pick sides in issues involving harassment. It is commonly believed that non-involvement is a neutral stance. This could not be more wrong. (link)
- The trend towards, and ramifications of, trying to legislate reality, where lawmakers demand technologies that simply do not -- and often cannot -- exist. (link)
These topics may move around a bit as I realize how much or how little I may have to say on the different points here. The first one should be up some time next week!