Friday, February 26, 2016

More Politics in Software

Over the past two months, I've been writing a series of posts on the intersection of politics and technology. The series consists of two bookend posts, with a number of focused topic discussions between them; this is the second bookend post.

Programmers are incredibly good at finding stuff to get worked up about. What's your favorite text editor? Vim? Emacs? Maybe (god help you) Notepad? gedit? Kate? nano? Or maybe you don't use an editor -- ok, then what's your favorite IDE? Eclipse? Visual Studio? NetBeans? Something obscure and language-specific?

Speaking of which, what's your favorite language? Python? C? Java? C++? C#? JavaScript? Lisp? Haskell?

Astute readers may have picked up on a theme here: Unless you're getting ready to draft a specification or set up a group workflow, none of these questions matter at all. And yet, we're all expected to have strong opinions on them. Conversations like these cement computer science's male-dominated reputation, because they are all about unabashed dick-wavery.

I wouldn't mind this so much if it weren't for the fact that it distracts a lot of smart people from things that actually matter. If you're making the case that easter eggs like "M-x tetris" prove yours is the editor of the gods, you're not making the case that, say, fair use provisions are critical to the future of internet culture. If you're arguing ad nauseam that Eclipse is so bloated as to be all but unusable, you're not wrong, but you're also not learning anything. If you're arguing that modal editors like vim are better because the lack of chording means you're less likely to get carpal tunnel, that's nice, but also kind of weirdly specific.

There are thousands of these silly little issues. My goal with this series was to try to find software-related issues that actually, in some broader sense, matter. With that almost comically lofty goal in mind, let's take a lightning tour of the topics visited.

We started out with a discussion of boot security, where we tried to wrap our heads around the question of how to detect (or maybe even prevent) hardware attacks. The political angle: the recently adopted UEFI standard claims to solve this problem, but in fact makes it worse.

Next, we took a look at the still-emergent "sharing economy", and explored the good and the bad which lurk therein. One takeaway was that while change can be very good, "disruption for disruption's sake" is an absolutely absurd (and absurdly pervasive) guiding principle. Another takeaway: as services get decentralized, it gets really hard really fast to regulate them in any meaningful way, and this can lead to some really bad situations.

The sharing economy post momentarily brushed up against the issue of online platforms serving as facilitators for harassment and abuse. The next installment dealt with this issue head-on. It's incredible that there are large groups of people for whom this post's title, "Ignoring Abuse On Your Social Platform Is Not a Neutral Stance", is actually a controversial claim.

The final "body" post, "You Can't Legislate Reality", took on a somewhat broader scope, looking at ways that the legislature has gotten tech completely wrong in mind-boggling and often dangerous ways. In particular, that post saves some heated language for a discussion of the TPP.

Now that we've reached the end, there's only one thing left to do. I've heard it said that all that's needed for the triumph of evil is that the good do nothing. Now, that's not entirely wrong, but it's not entirely right either. It's good to be educated about the issues facing your domain of expertise. But that alone is not enough.

A friend once asked me to help fix his computer, and he refused to believe me when I told him I couldn't. "But you're a computer science major!" Yeah, I replied -- so I can give you a really detailed walkthrough of why it's broken! But that doesn't get us any closer to finding the fix. This is the difference between diagnosis and cure.

Tens of thousands of computer hobbyists sitting in tens of thousands of homes or offices could all independently educate themselves about the issues facing their field, all get tremendously incensed about something like the locking-down of router firmware or the government-mandated corruption of digital maps, and all independently decide that Something Must Be Done... but it wouldn't make one iota of difference unless they decide, given that knowledge, to do something.

The fact is, being able to explain exactly how and why the world is getting worse does nothing by itself to forestall this worsening. The people worsening your world for their own interests could not care less how well or poorly you understand what they're doing, as long as you don't try to get in their way. But how do we get in their way?

It's not easy: Most of these issues are national in scope, and very few of us have standing invitations to that particular big-kids table. But that's a bit of a silly complaint coming from people in a field where median salaries run well into six figures. We've got money to burn, and there are groups who've been fighting the good fight for decades, and they accept donations.

Foremost among these groups is the EFF, a non-profit that relies largely on donations for its funding. We all owe them a debt of gratitude for the work that they've done towards our community's ends. As with any organization, donations are critical to retaining that focus. Once you land that sweet job and start making more money than you know what to do with, maybe think about starting to pay that debt back.

Friday, February 19, 2016

You Can't Legislate Reality

For thousands of years, geometers tried in vain to square the circle -- a task which, in 1882, was mathematically proven to be impossible. A result like this isn't really something you get to debate the specifics of. They call it "proof" for a reason.

That's part of what made the 1897 proceedings of the Indiana General Assembly so bizarre -- because it was there that lawmakers tried to pass a law declaring the problem solved. The bill might well have been passed by the Senate, were it not for the intervention of a visiting professor.

This incident is one instance of a theme which recurs whenever legislation collides with math or technology. The legal system just can't seem to wrap its head around how science works. Many are inclined to see malice in this tendency -- a sort of deliberate commitment to backwardness, a gleeful embrace of that which is known to be wrong. Tempting as this is, it's a good rule of thumb never to attribute to malice that which is adequately explained by stupidity.

What that rule of thumb fails to capture, though, is that many cases have plenty of room for stupidity and malice.

In the instance of Indiana's Pi law, the ignorance of certain groups within the legislature was maliciously exploited to feed the egotism of the bill's author, an amateur mathematician trying to make his reputation "solving" impossible problems.

In the instance of the Scopes trial, the scientific illiteracy of certain parties involved was exploited to the benefit of evangelical religious fundamentalists with a well-established track record of using legislation in legally dubious ways.

And in the instance of many recent legal cases concerning copyright, patent law, digital rights management (DRM), intellectual property, cryptography, and hardware design, the ignorance of the legislative and judicial systems on technical matters has been (and continues to be) exploited by avaricious and sometimes malicious vested interests in both government and industry, who use their leverage to advance profoundly antisocial ends.

Cory Doctorow argues compellingly in his recent book, Information Doesn't Want to be Free, that modern attempts at digital rights management (which he refers to using a more general term, "digital locks") are not only futile but also harmful to everyone involved. The essential problem (and here I do Doctorow a great disservice by trying to briefly summarize some of his main points; really, his treatment of the topic is second to none and I can't recommend that book highly enough) is this: What digital rights management schemes try to do is to provide a user with access to a technology, but only for certain purposes -- which, to put it bluntly, is just not possible.

Computers are copying machines. They are very good at copying data, and they can do it at virtually no cost. If you can watch a movie on screen, what's to stop you from telling your display to quietly, in the background, record everything it's displaying? Likewise for audio: once this data is in the user's hands, they can do what they want with it. This shouldn't be a surprise: Computers are general-purpose, so this sort of flexibility is in their very nature.
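To make the "copying machines" point concrete, here's a minimal Python sketch. The file names and the megabyte of random bytes are arbitrary stand-ins for any digital content; the point is just that a perfect, verifiable copy takes one library call and effectively zero effort.

```python
# A toy demonstration: for a computer, making a perfect copy of
# digital data is trivial and essentially free.
# (File names and sizes here are arbitrary, for illustration only.)
import hashlib
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "original.bin")  # stand-in for a song, movie, etc.

with open(src, "wb") as f:
    f.write(os.urandom(1 << 20))  # 1 MiB of arbitrary "content"

dst = os.path.join(workdir, "copy.bin")
shutil.copy(src, dst)  # one call makes the copy...

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# ...and the copy is bit-for-bit identical to the original.
assert sha256(src) == sha256(dst)
```

No DRM scheme changes this underlying arithmetic; it can only try to make the copying inconvenient, which is exactly the futility Doctorow describes.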

All sorts of "solutions" have been proposed. Many devices now ship with purpose-built hardware meant to take control of a computer away from its user for the sake of giving manufacturers and content distributors stronger DRM controls.

Sony, never one to favor such above-the-board approaches, for some time had a standard practice of installing a backdoor rootkit on any computer that played one of their copy-protected CDs, just so they could regularly check up on users and make sure nobody had violated copyright. Read up on how that thing worked -- it's seriously evil.

Not that we're going to get into it here, but if you care about encryption and you haven't heard of the Clipper chip, that's a history lesson you might want to give yourself. Focus your attention on the "criticisms" section, and then maybe read the case made by Bruce Schneier, who has more credentials here than almost anybody. He also made a short post not too long ago about how the Clipper debacle relates to the issues we face today.

It might be hard to believe the situation has worsened in the last decade, but in some ways it has. The much-maligned Trans-Pacific Partnership (TPP) was negotiated largely in secret, so that until November of 2015 nobody except for government and big business interests even knew what it entailed. Now that a full draft has been released, we can confirm that the situation is even worse than originally thought. The EFF has a good discussion of the main points that deal directly with technology law. Of particular note, the language is designed to stifle things like conducting security research, fixing your own software and hardware, or talking about whether it's even possible to break DRM. And if you've ever pirated an album, may god have mercy on your soul. (Edited to add: Less than an hour after I published this post, Doctorow shared on his blog another simple breakdown written in conjunction with the EFF, which is well worth a read.)

These are all things they want, and things they've been trying to implement, but software solutions to these things aren't possible, and so they've turned to legislating reality instead. If they can't outright stop you from copying a copyrighted file, and they can't justify undermining the designs of hardware (including the hardware they use!) in the process of trying to stop you, they can at least try to pass international laws letting them break into your home, confiscate your computer hardware, potentially destroy any or all of it, seize any domains you own, and throw you in jail, if they even suspect you've ever broken copyright. Yes, really. Go read the documents if you don't believe me -- it's all in there.

At what point are we going to recognize how fucked up it is that these are the priorities driving the world's major governments? When is enough enough? If this isn't enough to push us to that point, what will be? Will anything? Do we really have so little spine, so little self-respect? Is there no limit to the abuse we will tolerate?

Friday, February 12, 2016

Ignoring Abuse On Your Social Platform Is Not a Neutral Stance

There are some pretty big problems with social media right now. Or, it might be more accurate to say there's one big problem -- but it's really big. The problem is how, in this age, we deal with abuse and harassment online.

It borders on impossible to express the scope of online abuse and harassment. Probably the most famous example is Gamergate, which we're not going to get into here, because I'd rather eat glass than even do that shitstorm the dignity of a summary. Look it up in another tab if you really don't know.

The point is, there are a number of well-known cases where specific individuals have been targeted by huge crowds for harassment and abuse. But there are orders upon orders of magnitude more cases that have not become even remotely as well-known, but which nevertheless have caused very real harm in people's lives.

In 2014, the Pew Research Center conducted a study on online harassment, with some striking findings. The worst forms of abusive harassment targeted women disproportionately more than men. This may not come as a surprise, but the sheer numbers involved are staggering: among women aged 18-24, 26% reported being stalked, 25% reported being sexually harassed, and 18% reported sustained harassment. The corresponding figures for men in the same age group were 7%, 13%, and 16%.

The takeaway is this: If we sincerely care about fostering diversity in online communities -- and we all should -- then the first step is to recognize how abusive harassment disproportionately targets some demographics over others. Otherwise, it is impossible to put together a coherent picture of how these behaviors take place on whatever platform you might be dealing with.

It goes beyond harassment, in fact: A recent study suggests that women's contributions to open-source projects on GitHub tend to be accepted more often than men's -- unless the reviewer knows that the code was submitted by a woman, in which case the acceptance rate plummets. Why is the gender distribution of core developers for major open-source projects so lopsided? Gosh, I wonder.

But I've managed to sidetrack myself again. The real point I want to be getting to here near the end of the post is about how institutions handle abuse, or how they fail to. I'm mostly going to pick on Twitter, because if I focused on Reddit et al. instead we'd be here all fucking night. It's mind-bogglingly bad. Ellen Pao tried to take some small, common-sense steps to improve things, and look how that went.

That reminds me: There's one thing we have to get out of the way right now. Let me put it this way. I adore freedom of speech -- it's an absolute, unconditional prerequisite to any broader freedoms -- but that fondness does not extend to many of its most vocal invokers. You know, the people who, as soon as they start to sense resistance, start bellowing that you can't do this! I have freedom of speech!

There are so many things wrong with this. First off, not everyone lives in the United States, which is almost never even acknowledged here. Like, come on. Second -- I am not a lawyer -- the First Amendment protects your speech from government restriction; it does not grant you the right to be listened to. Third, there are notable exceptions even to that protection, like fighting words and true threats. Fourth, if someone points out that what you're doing is actively harmful, and your best response is "yeah, but you can't make me stop", that really should prompt some serious introspection. Free speech is great, but having nothing on your side except free speech? Slightly less great.

With that out of the way, here's a couple notes on Twitter in particular. Twitter gets a kick out of pretending they take a neutral stance towards content shared on their platform. They've called themselves 'the free speech wing of the free speech party'. This blind enthusiasm might remind you of a discussion we just had. The issue is, serious harassment restricts ordinary people's willingness to exercise their freedom of speech, both due to emotional fatigue and, in many cases, the fear of personal harm. Refusing to take action against this form of harassment is, unavoidably, an implicit endorsement of its consequences.

So make no mistake: Freedom of speech is still restricted under this "pro-free-speech" platform. It's just that instead of restricting the speech of vitriolic spewmongers who devote countless hours to tormenting their fellow human beings, the platform restricts the speech of their targets. This is not a neutral stance, it is a pro-vitriol stance. I don't think it's an exaggeration to say that this stance is, in fact, anti-compassion. And, of course, it should almost go without saying that this stance is also implicitly every bit as sexist, racist, and otherwise bigoted as the abusers it enables are. How is anyone okay with this?

Motherboard has an interesting timeline outlining how Twitter's rules have changed over its lifespan, along with the cultural shifts that accompanied those changes. One big takeaway is that, while Twitter has made some good changes in the past couple of years, its changes have not been universally positive, and we still haven't reached a good place. One anecdote in particular comes to mind. Just the other week, a parody account mocking Twitter support and particularly support's reluctance to suspend or otherwise take action against abusers and harassers...

...was itself, for a time, suspended. At least it's good to know the account suspension feature still works, I suppose.