Thursday, December 29, 2016

How to Lose at Mancala (Consistently!)

Over winter break, I spent a few days fiddling with a Mancala AI. It came together surprisingly quickly! Here's a little journaling on that project.

Background: Mancala is a fun, simple board game with a surprisingly high skill ceiling. I've never been very good at the game, but I've had fun playing it anyway. After a recent series of ruinous defeats at the hands of my own family I started thinking about what optimal game strategies might look like.

(Side note: It turns out Mancala is a solved game (i.e. it's known how to play "perfectly") for the most common numbers of stones or houses. I haven't looked at any of the relevant publications, but my impression is that they relied mostly on endgame databases and exhaustive search. That's no fun, so I'm going to ignore those results and try to build this project from the ground up. The goal is to write a program that can beat me at Mancala, and hopefully to learn a few things about the game from it.)

If you don't know the rules of the game, here's a quick overview of the variant I'll be writing about. I'll be calling the game pieces "stones", and the little areas they sit in "houses".

I opted to do this all in C because I had a feeling I'd need speed here and because I wanted more practice with the language.

You can find the GitHub repo here.

The Basics

My goal here was to start by writing a C framework and basic shell for playing Mancala. The idea is that if this is made modular enough, then the game logic can be tested through manual play, and down the road the code that prompts a user for a move can be swapped out for code that asks an AI the equivalent question.

Boards are represented through a struct:

typedef struct {
    char p1_store;   /* player 1's store (the "endzone") */
    char p1[6];      /* player 1's houses                */
    char p2[6];      /* player 2's houses                */
    char p2_store;   /* player 2's store                 */
} board;

This struct is of course effectively just a char array, but it's still nice to have pointers into its various sections, both for convenience and for readability. The board will never have more than 4*6*2=48 stones on it, so there's no drawback to the limited range of a char. And in fact using chars rather than larger data types offers an advantage: their small size makes the struct easier to copy quickly.
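To make the "easy to copy" point concrete, here's a minimal sketch (the struct is the one above; the init helper is my own, not from the repo):

```c
#include <string.h>

typedef struct {
    char p1_store;
    char p1[6];
    char p2[6];
    char p2_store;
} board;

/* Hypothetical helper: the standard opening position, with four
 * stones in every house and both stores empty. */
void board_init(board *b) {
    memset(b, 0, sizeof(*b));
    for (int i = 0; i < 6; i++) {
        b->p1[i] = 4;
        b->p2[i] = 4;
    }
}
```

Since the whole struct is just 14 bytes of chars, `board copy = *b;` duplicates a position in one cheap assignment, which comes in handy when an AI wants to try out moves on scratch copies.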

There's just one gotcha: we haven't addressed the question of how to map this data structure to the physical board. There are three things we want to do with the board: print it, reference specific houses on it, and play moves on it. In printing it and referencing moves, it's convenient to think of reading off the state of the board left-to-right. But when you're playing out a move, stones progress counter-clockwise -- that is to say, left-to-right along the bottom and right-to-left along the top.

Thus, while it's possible to lay out memory so as to make either of these operations natural, the two are mutually exclusive. We have to choose which to bias the design towards.

The main consideration I used is that if we make move calculation easy but printing and indexing more difficult, then that introduces complexity to many parts of the program. Complexity just about inevitably leads to bugs.

On the other hand, if we make move calculation more difficult but printing and indexing easy, then we've encapsulated the complexity within whatever function handles playing a move -- a small function which'll be easy to rigorously test. The rest of the program avoids having to deal with this complexity as long as it trusts the validity of the move function.

The positioning of p1_store and p2_store (the "endzones" where players are trying to collect stones) was purely to simplify the logic of play_move, since nothing else really cares too much about where in the struct those elements are placed.
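The actual play_move lives in the repo, but here's a hedged reconstruction of the core idea: build the counter-clockwise tour once, inside this one function, so the rest of the program can keep thinking left-to-right. Captures and game-end handling are omitted, and the layout assumption (p2's houses stored in printed order, so they're sown in reverse) is mine:

```c
#include <stddef.h>

typedef struct {
    char p1_store;
    char p1[6];
    char p2[6];
    char p2_store;
} board;

typedef enum { MOVE_OK, MOVE_GO_AGAIN, MOVE_ILLEGAL } MOVE_RESULT;

/* Sow the stones from player 1's house `move` (0-5). The opponent's
 * store is simply left out of the tour, as the rules require. */
MOVE_RESULT play_move_p1(board *b, int move) {
    if (move < 0 || move > 5 || b->p1[move] == 0)
        return MOVE_ILLEGAL;

    /* The counter-clockwise tour from player 1's seat: own houses
     * left-to-right, own store, then the opponent's houses. */
    char *tour[13];
    size_t n = 0;
    for (int i = 0; i < 6; i++) tour[n++] = &b->p1[i];
    tour[n++] = &b->p1_store;
    for (int i = 5; i >= 0; i--) tour[n++] = &b->p2[i];

    int stones = b->p1[move];
    b->p1[move] = 0;
    size_t pos = move;
    while (stones > 0) {
        pos = (pos + 1) % 13;
        (*tour[pos])++;
        stones--;
    }
    /* Landing in your own store earns another turn in this variant. */
    return (tour[pos] == &b->p1_store) ? MOVE_GO_AGAIN : MOVE_OK;
}
```

The point of the sketch is that all the wraparound awkwardness is confined to the tour-building loop; callers never see it.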

The "main loop" of the game is a function, play_game, which does exactly what it says on the tin. play_game takes three arguments: a pointer to a board struct, and two function pointers. These function pointers are meant to indicate two functions which, if passed the game state, will pick moves for players 1 and 2 respectively. Those moves are enacted on the board via play_move, which returns a MOVE_RESULT enum that is then used to provide helpful feedback and update the game state.

I'd never really played with function pointers before in C. They're really cool -- they feel like something that got smuggled into the standard, secretly backported from higher-order languages. If C also had a clean way to curry functions, I'm not sure I'd ever want for anything more.

The use of function pointers in play_game allowed me to start out with players 1 and 2 both played from user input. This manual operation was a good chance to verify that the basic guts of the mancala engine were working properly, and I caught a few bugs in the move function right away. Like I mentioned earlier, that was expected, and the whole setup here was designed to make those expected bugs as easy as possible to root out.

Their real utility, though, came when I decided to add my first shot at an AI to the game. All that's required to swap out a human for an AI is to write a C function with an identical signature that implements the desired AI, then change one line in main to pass that function instead of get_move. How's that for modularity?
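Here's roughly what that swap looks like. The function pointer type and the two toy players below are my own stand-ins, not the repo's code:

```c
typedef struct {
    char p1_store;
    char p1[6];
    char p2[6];
    char p2_store;
} board;

/* A "player" is any function that inspects the board and returns the
 * index (0-5) of the house it wants to play. */
typedef char (*move_fn)(const board *);

/* Two interchangeable toy players. */
char first_nonempty_house(const board *b) {
    for (int i = 0; i < 6; i++)
        if (b->p1[i] > 0) return (char)i;
    return 0;
}

char last_nonempty_house(const board *b) {
    for (int i = 5; i >= 0; i--)
        if (b->p1[i] > 0) return (char)i;
    return 0;
}

/* The driver only ever sees the move_fn type, so either player (or a
 * human-input function, or an AI) plugs in without any other changes. */
char ask_for_move(const board *b, move_fn choose) {
    return choose(b);
}
```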

The next step is to write an AI to play the game. That's where "losing consistently" comes in: playing yourself is great, but playing something else is better, and if the thing you've made to play against can beat you, then hey, all the better. My goal from the start has been to make an AI good enough to beat me consistently.

Losing Consistently

The Idea

My first strategy is simple: a naive version of Monte Carlo tree search (MCTS). This is a solid go-to strategy for lots of kinds of games. All you need is to be able to simulate playing games out to their conclusions, logging who won as you go. If you have a working engine but don't have any real theory for the game and don't have a bulletproof way of evaluating positions mid-game, MCTS is a good way to get a basic AI working practically "for free". In spite of having such low requirements, MCTS is an incredibly effective strategy -- it's done wonders for many games, notably including Go. In fact, the AlphaGo system was built by using neural nets to provide heuristics for a (thoroughly non-naive) implementation of a similar algorithm.

In short, this is a strategy that gets results, especially in games with emergent mechanics. A full implementation of Monte Carlo tree search involves weighting different moves during the search to provide a basic heuristic for focusing on the most promising candidate moves. We'll start out without this: hence the "naive", since our heuristic will be the random function.

This naive approach probably won't work quite as well as a full MCTS implementation would, but it's much simpler to implement, so it'll make for a good starting point. Down the road, it'll also be interesting to compare this algorithm's performance to that of full MCTS.

Random Movements

Incidentally, here's something not a lot of people get right: how do you pick a random number between 0 and n in C? Most people would just use (rand() % (n+1)), but unless n+1 evenly divides RAND_MAX+1, this skews the distribution slightly towards the smaller values, which is unacceptable here. Random number generation is at the heart of our algorithm; if the generator is skewed, the algorithm suffers.

I chose to encapsulate the complexity of this problem inside a function called pick_random_move. This function picks a move at random, while taking care to adjust for skew and to check that the move is legal. The function:

char pick_random_move(char side[]) {
    int divisor;
    int move;

    divisor = RAND_MAX/6;
    do {
        do {
            move = rand() / divisor;
        } while (move > 5);
    } while (side[move] == 0);

    return move;
}

This function is then polled by the NMCTS engine to get random moves. Notice that to switch from the naive heuristic to any other, all we'd have to do is swap out this function. More modularity at work. It's a pity we can't easily make this function an argument to the MCTS algorithm, to be curried in before passing the algorithm driver to play_game, but that's C for you.

Quick note about how the function works: RAND_MAX is a predefined macro giving the upper limit on the value of rand(). We want to map these values onto the range 0 to 5 without skew. That would only be possible directly if RAND_MAX+1 were divisible by 6, and we have no such guarantee. So instead we create an imperfect map from values of rand() to the numbers 0 through 6. The way we do this (integer division) guarantees that the values 0-5 occur with equal probability, confining any skew to how often 6 shows up. We deal with that by throwing out sixes completely -- re-polling the RNG whenever we get one -- and returning only on 0 through 5. And just like that, we have an unbiased random number generator!
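The same rejection trick generalizes to any range. This version is my own generalization, not code from the repo; it assumes n is much smaller than RAND_MAX:

```c
#include <stdlib.h>

/* Uniform integer in [0, n-1] with no modulo bias: integer division
 * maps rand() onto 0..n evenly for the first n values, and any value
 * of n or above is rejected and re-rolled. */
int rand_uniform(int n) {
    int divisor = RAND_MAX / n;
    int r;
    do {
        r = rand() / divisor;
    } while (r >= n);
    return r;
}
```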

Where does this get us?

The GitHub repo already contains an implementation of the naive Monte Carlo tree search. It runs a configurable number of simulations (set by a preprocessor macro, NAIVE_MCTS_NUM_PATHS, which defaults to 200000) and picks the move whose simulations showed the highest margin of wins over losses.

In each simulation, all moves are chosen randomly. That's all this thing does -- and yet it works. In aggregate, that strategy is already enough to identify which branches are promising and which are not. My suspicion is that this works because Mancala positions have an extremely low branching factor. This makes branch pruning less important than it would be in e.g. computer Go.

There's one more thing to address: how the final move is chosen. There are a few options here: the ratio of wins to losses, the ratio of wins to total games played, the integer difference between wins and losses observed... In full MCTS, the ratio of wins to games played is typically used, which makes sense, since those ratios also set the weights for the random move generation function. But since we're using naive MCTS, complete with uniform move weighting, I opted instead for the integer difference, just because it makes for simpler code and gives more or less the same choices on average.
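The selection logic boils down to something like the sketch below. This is a simplification of what the repo does, not its actual code: simulate_playout stands in for a full random playout from the position reached by the candidate move, returning +1 for a win, -1 for a loss, and 0 for a draw.

```c
typedef int (*playout_fn)(int first_move);

/* For each candidate first move, run a batch of random playouts and
 * keep a wins-minus-losses tally; pick the move with the best margin. */
int pick_best_move(playout_fn simulate_playout, int paths_per_move) {
    long score[6] = {0};
    for (int move = 0; move < 6; move++)
        for (int i = 0; i < paths_per_move; i++)
            score[move] += simulate_playout(move);

    int best = 0;
    for (int move = 1; move < 6; move++)
        if (score[move] > score[best]) best = move;
    return best;
}

/* A deterministic stand-in for demonstration: pretends house 3 wins. */
int demo_playout(int first_move) { return (first_move == 3) ? 1 : -1; }
```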

The GitHub repo currently includes code for setting up both human and NMCTS players. It's even possible to pit the AI against itself (and it plays a pretty good game!). The code is decently well-commented, at least in my opinion, and has a bunch of optional, preprocessor-controlled debug print statements that also help to narrate what's going on.

Next Steps

The naive MCTS is not yet completely bug-free, and occasionally picks illegal moves. Hopefully I'll have that issue figured out soon. After that, my next goal is going to be to try and add a more fully-fledged version of MCTS to the program, mostly just to see if it performs tangibly better. It'll be especially fun pitting different versions of the AI against each other and seeing if one has an edge over another.

After that's done, I'm tempted to experiment a little more with different guiding heuristics. It might be interesting to use a decent AI to build up a big corpus of games, then use that corpus to train some sort of statistical model for guessing candidate moves, and use those guesses to help navigate the search space more effectively. I'm not sure how well that will work -- the original algorithm does after all work from random move choices, so as far as data derived from it goes, one is tempted to invoke 'garbage in, garbage out' -- but it'll be interesting regardless. I'm hesitant to make predictions but I think it'll work pretty well.

In any case, I've already achieved my goal of losing consistently, so I know I'm on the right track.

Saturday, December 24, 2016

Academic Computer Science Needs to Get Its Shit Together

The fact is, our beloved field of computer science has reached an embarrassing low. Among programmers in all but some collegiate circles, calling something "academic" is oblique shorthand for calling it overwrought, obscure, inflexible, and/or too fragile to be useful in the real world. And there's a good reason for this: more often than not, the products of academia in computer science meet all those criteria.

But why? Shouldn't universities be where our best and brightest gather? Isn't the whole idea of academia to draw great minds together so they can collaborate and educate?

The short answer: Well, yes, but that idea doesn't work so well when you're competing for talent against an industry that can afford to triple your salary offers. With few exceptions, industry gets who they want and academia gets stuck with the dregs -- and you can't do much with dregs.

What's the long answer? Glad you asked. Buckle up.

Once upon a time, academia really was (I am given to understand) a heavyweight player in computer science. MIT and University of Illinois have their places in the annals of Unix history right next to Bell and GE, and in fact it was academia where Unix first really gained traction. Same with the internet -- there's this map that's been floating around recently:

There's a lot that's interesting about this picture, but as far as our discussion is concerned, let's note the breakdown of who owns which nodes: there's a few gov't agencies and a few companies represented, but a solid majority of the systems are academic. Universities broke a lot of ground in both developing and implementing the technologies that underlie the internet.

Of course, hardware and networking isn't everything, and there are other areas where academia had a lot to offer. For instance, it's hard to overemphasize the eminent (and eminently quotable) Edsger Dijkstra's influence on programming language design, distributed systems, or really any number of other subjects. Or take Donald Knuth, who wrote the book on computer science, then called it "volume one" and set back to work writing volumes two through five (six and seven still pending!). Or Martin Hellman, who advised Ralph Merkle's groundbreaking PhD work on public-key cryptosystems. Hellman later recruited Whitfield Diffie to his lab and together they built on that work, eventually leading to the landmark discovery of what is now called Diffie-Hellman key exchange.

This was all real ground being broken, real problems being solved, challenging work being done well -- all by academics. If these were once academia's exports, when did it become so maligned? Where did things go wrong?

Well, there's a case to be made that in spite of all the above, things might not have gone wrong so much as stayed wrong. Knuth and Dijkstra, for instance, were both outspokenly critical of their field.

Knuth said of computer science during his time as a student that "...the standard of available publications was not that high. A lot of the papers coming out were quite simply wrong." He made it clear practically in the same breath that one of his main goals with The Art of Computer Programming "was to put straight a story that had been very badly told."

Dijkstra, for his part, held practically every language of his time in the highest contempt. COBOL and BASIC "cripple the mind" and leave students "mentally mutilated beyond hope of regeneration," respectively. FORTRAN he described as "the infantile disorder," while PL/I was "the fatal disease." He also had some critical words for those in his field who refused to acknowledge some of its more uncomfortable truths. And in an abstract moment he opined, "Write a paper promising salvation, make it a 'structured' something or a 'virtual' something, or 'abstract', 'distributed' or 'higher-order' or 'applicative' and you can almost be certain of having started a new cult." Some things, it seems, never change.

A professor of mine once commented, in one of the last meetings of his class, that the structured programming practices he'd been teaching us were important because "my generation has already written all the easy programs. The hard ones are up to you guys." He was of course talking about software development -- the bricklaying of computer science -- but I suspect that a similar quip would apply on the theoretical side of things.

Obviously the field is still new, but by and large it really does seem to be true: The easy, useful results have been discovered. The easy, useful definitions have been made. The easy, useful algorithms have been found. Having reached this point, academics now have two choices: either we take on the stuff that's not easy, or we take on the stuff that's not useful. A quick flip through arXiv suggests that most researchers have opted for the latter option.

The sad fact is, of course, that modern academic culture does nothing to discourage this -- in fact, "publish or perish" actually encourages professors to focus on cranking out useless but simple results. Meanwhile, profound guiding problems like P vs NP or even P vs PSPACE go all but untouched. The culture is such that the average academic who's fool enough to really throw themself at such a problem ends up reduced to a "cautionary tale" in a survey paper.

Make no mistake: there are major, important unsolved problems in computer science. Hell, there are enough that Wikipedia's got a whole list. Breakthroughs on any of these would instantly make the reputations of the researchers involved. But those with the expertise to take these issues on are, more often than not, actively discouraged from doing so. How is a field supposed to produce anything of value when this is its culture?

There's a quote I recall reading, but which I can't seem to dig up. It was from one of the leading researchers on proving program correctness, given some time before the turn of the millennium. His observation was that while the field had advanced significantly over the years he'd spent studying it, he was slowly coming to realize that nobody else was really all that concerned about the problems it was trying to solve.

I know of a few kernel programmers and security buffs who'd take issue with that claim, but aside from them and aside from a few other very specialized contexts it turns out that yeah, nobody really cares too much. In the same vein, very few people care about some equivalence result in complexity theory between two games they've never heard of. Same goes doubly for the underachieving grad student's favorite recourse: survey papers directed at the above.

That story is a pretty common one in theoretical computer science, and honestly it's understandable -- the field is new, and it's only fair to give it some time to get its bearings. Not every result is going to have immediate practical applications, and we'd be doing everyone a disservice if we expected otherwise. But while that might excuse some seemingly useless research, it's no excuse for the sheer mediocrity and apathy that pervade the field.

There are important results waiting to be found. It seems apt to compare modern complexity theory to ancient Greek mathematics -- perhaps, if we really want to push the analogy, even going far enough to equate Knuth with Euclid -- and if this comparison holds, then mind-boggling amounts of valuable theory have yet to be discovered. My own background leaves me tempted to bring up the influence these results could have on computer security, but really, it's hard to think of a subject they wouldn't influence. In fact, there are enough connections between computer science and advanced mathematics that results found here could easily filter back and offer unexpected insights on long-standing problems in that domain as well. John Conway's work gives a number of examples of what that might look like.

Speaking of computer security: one exciting thing we learned post-Snowden is that, while we've massively dropped the ball on endpoint security, we've managed to build cryptosystems that actually work. Modern cryptography is one of the few things our clearance-sporting buddies in Fort Meade don't seem to be able to crack. And yet we don't have solid proofs for such basic questions as: do one-way functions really exist? Or, are public-key cryptosystems really possible? On the more practical side, we also have a tremendous amount of work to do as far as post-quantum cryptography and cryptanalysis are concerned.

In short, there's important work to be done, and (since abstract arguments are notoriously hard to monetize) you can bet industry isn't going to do it.

If academic computer science wants to be taken seriously, it needs to get its priorities straight. It needs to stop discouraging work on problems that matter, stop encouraging work on problems that don't, and make an honest effort to equal the landmark achievements of previous decades. There's still plenty of room in the history books.

Monday, October 17, 2016

Virtualizing a Raspberry Pi with QEMU

A while ago, I wrote about building a rack for a Raspberry Pi cluster. If you have a rack, at some point you'll want to put some Pis on it. Virtualization can make the process of imaging these Pis relatively painless. You can generate custom images from the comfort of your desktop and even automate the whole process. Here's a quick crash course in virtualizing the Pi using QEMU.

All the guides I could find on this were at least a couple years old and were missing various important parts of the process. None of them quite worked out-of-the-box. It's hard to blame them, though, since a hardware platform like the Pi is of course a moving target. For what it's worth, this guide should be complete and up-to-date as of October 2016.

The first thing to do is to get a base disk image to work from. It shouldn't make too much of a difference what you choose (unless you opt for something absurd and masochistic like Arch Linux). For this guide, I'll be using Raspbian Lite. My focus will be on emulating images compatible with the Pi 2, though I don't see why they wouldn't work on the Pi 3 as well.

Once you've chosen a distro, download or torrent its disk image (a .img file). As soon as you have that, you can get started. I like to make a working copy of the image right off the bat, so that if I make a mistake I can just throw the copy away and start over fresh, without having to re-download anything.

mkdir -p ~/workspace/raspberry
cp ~/Downloads/2016-09-23-raspbian-jessie-lite.img ~/workspace/raspberry/raspbian.img

raspbian.img will serve as our base image. If you ever want to copy the current state of the virtualized Pi to an SD card, you can do that using dd from this file.

We'll need a custom kernel to get QEMU to boot this Pi image. Currently this github repo maintains up-to-date kernel files for this purpose. Source code and a build script are included, if you're interested in those.

cd ~/workspace
git clone
cp qemu-rpi-kernel/kernel-qemu-4.4.13-jessie raspberry/kernel-qemu

Now we have everything we need to boot up the Pi -- but if we do, it'll hit a kernel panic and dump core. That's no good. If you want to see this happen, then feel free to copy the qemu command from later on, but I'll warn you: it's not very exciting. What we're going to do now is make a couple fixes to different config files so we can get things working.

Before we apply these fixes, we need to be able to mount the Pi image's root partition. There's a hard way and an easy way to do this. The hard way involves reading the image's partition table, multiplying one of the fields in it by 512, and making a huge "mount" invocation. The easy way is this:

cd ~/workspace/raspberry
sudo kpartx -va raspbian.img
# note the name of the loop device chosen by kpartx, then...
sudo mount /dev/mapper/loop0p2 /mnt  # assuming kpartx chose loop0

kpartx does all the work for us, reading the image's partition table and creating loop devices for its partitions. The first one is the Pi's boot partition and the second one is its root filesystem. So following this process gives us the Pi's root filesystem mounted to /mnt. No mess no stress!

Now, here's what we've got to change. First, edit /mnt/etc/ and comment out its first and only line by prepending a # to it:


This prevents the kernel panic. I have no idea why.

Next, create a new file: /mnt/etc/udev/rules.d/90-qemu.rules
and put the following lines into it:

KERNEL=="sda", SYMLINK+="mmcblk0"
KERNEL=="sda?", SYMLINK+="mmcblk0p%n"

These are necessary because (/mnt)/etc/fstab specifies partitions under /dev/mmcblk0 for critical parts of the filesystem. That would make sense for Raspbian under normal circumstances because it boots from an SD card, but since we're exposing our disk image to the system as /dev/sda, we need to add these mappings if we want to keep everyone happy. You could of course also edit /etc/fstab to specify sda instead of mmcblk0, but that would break compatibility between the image and actual Pis.

These two fixes should be enough to get Raspbian to boot cleanly. If you want to use SSH to access the Pi from the host machine, you can optionally set that up as well by disabling password login (which is good to do anyway) and adding your public key to authorized_keys. If you don't already have a key pair, you will of course need to run ssh-keygen first. Then,

echo "PasswordAuthentication no" >> /mnt/etc/ssh/sshd_config
mkdir -p /mnt/home/pi/.ssh
cat ~/.ssh/ >> /mnt/home/pi/.ssh/authorized_keys

And you'll be good to go. This disables password authentication but grants passwordless access to anyone in possession of your private key.

You should also unmount /mnt at this point:
sudo umount /mnt
This may not be strictly necessary but it seems like common sense given that /mnt is mounted from a partition on the image file we're about to give to QEMU. Under certain circumstances, leaving a partition mounted while also using it with QEMU runs the risk of corrupting the filesystem.

Now, the moment has arrived -- we're going to actually boot up our virtual Pi! Here's the invocation:

qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda raspbian.img -redir tcp:5022::22

That should work to boot to a login prompt. The default login is:
Username - pi
Password - raspberry
If you didn't disable password authentication for SSH earlier then you should change this password as soon as you log in.

For education's sake, let's break down the different parts of that QEMU invocation:
  • Since the Pi is an ARM system, you naturally invoke QEMU's ARM emulator.
  • We pass it the kernel image we obtained from Github earlier. 
  • We specify the CPU model as arm1176, and we give the device 256M of RAM.
  • The ARM board type we specify as VersatilePB.
  • -no-reboot is necessary because without it, running commands such as sudo shutdown -h now in the guest would not in fact shut down the virtual system but would instead reboot it. Not sure why.
  • We specify -serial stdio to allow stdin/stdout to be used for a serial connection. This isn't strictly necessary as far as I know but it's widely included in examples, and it would certainly be useful for scripting interactions with the guest OS.
  • -append allows us to pass kernel parameters to the guest. We specify the "disk" partition it should mount as root, and we tell the system to reboot after 1 second on kernel panic. You can replace this with a larger integer to wait longer, with 0 to hang on panic, or with a negative number to make kernel panics cause an instant reboot.
  • We also of course specify our working copy of the Raspbian image file as the system's primary hard drive.
  • Lastly, we set up some port forwarding to allow us to connect over SSH from the host. Port 5022 on the host will be redirected to port 22 on the Pi. That lets you ssh to the guest from the host like this: ssh -p 5022 pi@localhost

If you want to copy your configured Pi image to an SD card, you can simply plug in the card, use dmesg to find the name of its block device (we'll assume here that it's /dev/mmcblk0), and then write the image file to this block device using dd:

sudo dd if=raspbian.img of=/dev/mmcblk0 bs=4M status=progress

And once that completes, your SD card should be all set! Note that you'll probably want to expand the filesystem once you've finished writing to the Pi, since the SD card's max size is likely a fair bit larger than this disk image. On Raspbian, the raspi-config utility provides a helper tool for doing that.

So, that just about does it for using QEMU to virtualize a Raspberry Pi and create custom SD card images! If there's anything you'd like to add, feel free to leave a comment below.

Thursday, May 12, 2016

On Ceasing to Be Exceptional

Somewhere just north of a decade ago -- so when I was 10 or so -- I was helping strangers in online forums troubleshoot games they were making. It was crazy fun. I still remember this one guy: He had a 3D FPS he was building in Game Maker, a tool that (for this use case) gives you a graphics engine and not much else. He had mouselook set up (no trivial feat) and had a bare-bones firing system, but it only worked when the camera was leveled out flat. You couldn't shoot up or down. That's a big problem, but the game was still pretty fun in spite of it, so in the spirit of trying to make a good thing better I offered to help.

With the right background knowledge, the solution is pretty clear. The mouselook system gives us an angle for how far up or down we're looking. What we want is for bullets to rise or fall at a steady rate determined by this angle. So we need a value to determine that rate, and that value will end up being a slope. You can turn angles into slopes using trig: slope = rise/run = sin(θ)/cos(θ) = tan(θ). So you take this slope, scale it based on horizontal speed, and periodically add that to the Z coordinate.

This is easy to figure out if you know the concepts involved, but take it from me: it's a lot trickier when you've never heard of a slope and all you know about trig is what Wikipedia tells you. But with a few days' work I figured it out, sent the guy a patched version of his game, and went back to playing it. All that work, and yet for the life of me I can't remember his reaction.

About a year or two later, I joined a game making group. They already had programmers, so I told them I made music. They couldn't let me in fast enough when they heard that. I did make some music, but I also pitched in whenever anyone asked for troubleshooting help. Pretty soon I was the go-to guy for figuring out the really pathological stuff.

After a while I started working on my own project, a turn-based strategy game (tragically never completed, but surprisingly close to completion, and sporting a totally bonkers implementation of A*). I got hung up on the AI, lost steam, and turned to other projects. I wrote my own FPS, which (to my astonishment) people actually played. I wrote a couple RPGs from the ground up. I wrote a 2D, team-based multiplayer game inspired by laser tag. In fact, writing the login system for that game sparked my interest in security.

In my free time, I was pushing my limits by working with a fantastic math tutor (Naomi, whose debt I am deeply and eternally in). She taught me everything up to calculus, past which point I taught myself. By the time I started high school, I'd built up a working knowledge of multivariable calculus.

And now, here I am: 22 going on 23, and not feeling too different now than I did then. What happened?

The answer, it feels like, is "not much." People learn at different rates, and I happened to luck out by being someone who learns certain abstract subjects really quickly. That doesn't make me inherently better at them. It just means I got a sort of head start. And I'm coming to suspect that the same is true of many if not most talented young people (including many who were/are way above my level).

I've had the great pleasure in my life of knowing some real capital-G Geniuses -- people whose work, when you see it up close, makes it hard to believe they aren't in touch with something beyond our understanding of the world. It's sublime. But these people are rare. They're literally one-in-a-million. And so, inspiring as they are, they don't change the fact that most talented young people seem to be exceptional not so much in the degree of their abilities as in how quickly those abilities develop.

It's a little bizarre that so many of us hold precocity in such high regard. Think about it: when you compliment a young person by calling them precocious, you're giving them praise that comes with an expiration date.

Say you have a ten-year-old who's reading at the level of someone twice their age. You might call that person talented, exceptional, precocious. If you have a twenty-year-old reading at the level of someone twice their age... well, that's just called being literate. Past a certain age, being pretty good at something ceases, by itself, to be exceptional. You have to find something more.

I was tempted to end here, because that "something more" is different for every person, so it's hard to imagine how to follow up on it. I'd actually left this post as a draft for about a week, dissatisfied but lacking any better idea for how to end, until a remark from my old, excellent friend Henry sparked my interest again. He observed, casually and offhand, that 'you're always making progress, just maybe not in the direction you thought.' To me, this really cuts to the core of what I'm trying to get at. Since I've put so many words into explaining my background, let me carry that a little further to try and show what I mean.

There's an incredible gap between being good at something and being good at explaining that thing. To find evidence of this, one need look no further than the average research university lecture hall. In fact, it's often the case that the better you get in your area of specialty, the harder it gets to explain it. As you learn more, you end up with more and more levels of abstraction separating you from your audience, and so the gap gets ever harder to bridge.

This was something I experienced trying to implement a secure login system -- I'd be going on to one of my game-making buddies about, say, secure key exchange, and after describing how this lets us lock down the login protocol, I'd get back a reply like "ok, but wait, wouldn't it be easier to just send everything in plaintext?" I was so far down the security rabbit hole that it was almost inconceivable to me that anyone would even suggest such a thing. When you focus on a problem for long enough, it's easy to lose touch with outside perspectives.

Now, it's easy to blame awkward exchanges like that one on ignorance. 'Oh, man, how could this guy not even know about sniffing passwords?' There's a certain selfish satisfaction in knowing stuff, especially stuff that other people don't know. But if you know something that the person you're talking to doesn't, and their ignorance is impacting the conversation, whose fault is that? You could fault them for their ignorance, or yourself for failing to be a better communicator. The latter option tends to be more productive.

It can be really difficult, though, to communicate well about difficult topics. Maybe that's obvious, but it's something a lot of people seem to underestimate, to their own detriment. I certainly see that underestimation in the person I was around the time I outlined earlier. I may have been good at math, but how good was I at explaining it? I may have been interested in security and committed to personally getting it right, but how good was I at showing other people why security is important? Not very, and I think this is somewhere I've grown, practically without meaning to or even being aware of it.

I think this experience generalizes. Suppose, for example, that you're early in your college career and have always been really good with literature -- reading well above your level, laughing at reading comp questions, writing insightful analyses -- and you come to feel, in college, like somehow you've plateaued. The best is behind you, it feels like. In high school you were hot shit, but now you're just another 20-something with a notebook and some colored pens. You might feel like you're stuck in a rut, like you're not going anywhere, maybe even like being in college is a mistake.

But what if, while you aren't learning anything new about (say) literary theory, you are learning about how to communicate what you know? After all, odds are that if you were hot shit in high school, there was a pretty short list of people with whom you could have a really good back-and-forth about this stuff. Not so with college (though this is not unique to college). Learning how to have a productive discussion without arguing or talking down, learning how to explain something without patronizing, learning how to ask good questions -- all these are valuable skills, and also skills that it's hard to realize you're even working on. That is, until you've made enough progress that you can look back and see how far you've come. When it comes to math, I don't feel like I know a whole lot more now than I did then, but I do feel much more qualified to share what I know.

There's an absolutely beautiful moment in this 1950 movie, Harvey. The movie (adapted from a play) is about this super sweet, middle-aged, heavy-drinking dude, Elwood, whose best friend is a giant rabbit that no one else can see. The whole film is magnificent, but especially one moment near the end, after Elwood's family tries to have him committed to an asylum. The plan backfires when Elwood's simple charm wins over the asylum staff.

The chief analyst tries to explain to Elwood just how awful his sister's plans for him are -- "She's trying to persuade me to lock you up!" -- and can't understand why Elwood isn't more upset. By way of explanation for his good-naturedness in the face of all this, Elwood replies that he's come to understand that in life, "you must be oh so smart, or oh so pleasant. Well, for years I was smart. I recommend pleasant."

There's also this great story about Richard Feynman -- who was, above almost all else, famously good at talking clearly about complex subjects. About a year before his death, when it was clear that cancer would kill him but unclear just how long it would take, he was out on a walk with a good friend. This hardly made for cheerful conversation, it seems, and so his friend was kind of down. Feynman asked him what was wrong, and his friend said, "I'm sad because you're going to die."

Feynman replied that he was sad about that too. It wasn't so bad, though -- he'd come to a realization, you see. "When you get as old as I am, you start to realize that you've told most of the good stuff you know to other people anyway."

May we all be so fortunate.

Wednesday, April 13, 2016

Book Notes

Over the past couple of years I've been lucky enough to find time in between classes and research for a bit of pleasure-reading. I thought it might be fun to write a little on different books that've left strong impressions on me. It's nice to keep a record, but maybe I can also share some of the enjoyment I've gotten from them.

It was tempting to put the word "Reviews" somewhere in this headline, but I'm not really trying to just share ratings and little terse blurbs here. Ratings are subjective and reductive, so it's hard to see much point in cooking them up in this context. I'd rather just share a few thoughts and impressions for each book with the hope that you'll find them interesting.

Pnin (Vladimir Nabokov): It's been a year or two since I finished this book, but it's stuck with me. The book takes the form of a sequence of tiny, self-contained episodes, in fact almost -- but not quite! -- a set of short stories. Their unifying thread is Pnin, the tragic hero, a Russian émigré to America fleeing "the Hitler war". Pnin is the absent-minded professor made full, the stereotype somehow rendered authentic. Nabokov, in creating him, has managed to both epitomize and transcend the archetype. Pnin starts out as someone absurd, someone to be laughed at -- when we first meet him, he is travelling to give a guest lecture, fussing over details while unwittingly boarding the wrong train -- but the longer we spend with him, the more we realize that Pnin himself is not to be ridiculed. Rather, people's reactions to him are.

The fact (we come to see) is that poor eccentric Pnin is a kind soul in an unkind world, and as you read more and more, you find yourself almost involuntarily rooting for him to do well, to score some small victory. These victories are rare, though, and almost always inconsequential. Far more common are tragic mistakes and misunderstandings (which Nabokov somehow manages to render both heartbreaking and incredibly witty). Throughout the book Pnin is put through progressively greater indignities, and yet he refuses to let them break him, refuses to grow jaded and cynical towards this world, which has given him little more than pain. Witnessing this, we are slowly brought around to a sort of awed respect for this strange little man, so naive, so awkward, and yet somehow utterly indomitable. Rare is the character whose failures are so inspiring. I found the book's final chapter deeply moving, and this is a book that I'm very much looking forward to someday reading again.

Something Like an Autobiography (Akira Kurosawa): I'm actually only halfway through this one, but I've come to like it so much that I can't resist including it. Kurosawa, for those who don't know, is almost certainly the most internationally famous Japanese film director. Some of my favorite movies (Kagemusha and Throne of Blood, to name two) are his. But on top of his talents as director, he also turns out to be a really witty -- and disarmingly honest -- storyteller. The introduction lets you know that his autobiography only extends to around the time he started work on Rashomon, because everything else would be too recent for him to engage in full disclosure. At first I was disappointed by this, but the anecdotes he does provide are so engaging, and offer such an unexpectedly great level of insight into his character, that the book ends up being brilliant in spite of this restricted scope.

It's a bit funny to list this book below Pnin and above The Pale King, since all three follow roughly similar episodic formats. Something Like an Autobiography is composed of a long series of short anecdotes, most just a few pages long, recounting different moments from Kurosawa's life. Some are poignant, like his reflections on the "lost sounds" of his childhood. Others are funny, like when he recalls his "rebellious phase" in middle school. And some are utterly harrowing, like his description of the Great Kanto Earthquake and his subsequent exploration, with his brother, of neighborhoods flattened and burnt to the ground, where they found piles of charred corpses and rivers so full of bloated death that they were running red-brown. But even in this passage, Kurosawa's account of what he saw, and (implicitly) of his relationship with his brother, is so compelling that it's impossible to stop reading. Notably, Kurosawa turns out to be a master of brief but vivid descriptions, of almost effortlessly calling up the spirit and image of a place, and so his stories serve not only as an account of his life but also as an account of the times in which he lived. I'd recommend this book to just about anyone, even if they couldn't care less about movies.

The Pale King (David Foster Wallace): Published posthumously, there is something suffocatingly sad about the very existence of this book. I've read all of Wallace's published fiction, and a lot of it is very powerful, but nothing else hit me in quite the same way as this. Some context: Wallace only ever published two other novels, The Broom of the System and Infinite Jest. Reading The Broom of the System, there are the occasional dull moments, and I'd have a hard time identifying anything I'd call the novel's emotional center -- and yet even so, the writing is so charismatic that it's hard to escape the impression that here is someone who, on the page, can do literally anything. Next up, Infinite Jest manages to be incredibly emotionally powerful through its wrestling with themes of addiction, alienation, and the deep human need to find something to give ourselves away to. Infinite Jest was written as a critique of its times, a narrative account thereof, a catalog of insanities (among other things), and yet it tries very hard to convince you that things don't have to be this way, that there is still good in the world.

After Infinite Jest's publication, though, Wallace seemed to decide (perhaps based in part on the book's popular reception) that what he had provided was diagnosis without cure. The Pale King was to be the book in which he corrected this mistake. His struggles to complete it are legendary, and a serious case of writer's block led (at least in part) to his decision to stop taking Nardil, since he suspected that it was numbing him emotionally in a way that prevented him from realizing his vision for The Pale King. He later went back on Nardil, but found that it had stopped working. This seems to have directly led to his suicide in 2008.

This background necessarily colors how we see The Pale King. It stands unfinished; the cure couldn't come soon enough. The novel is still rough around the edges, yet even so you can tell that it could have stood on par with -- or even surpassed -- his other work, had Wallace lived to finish it. It feels kind of gross to tie the author's real life into the novel's narrative, but in this case it's also nigh irresistible, and the (involuntarily obtained) result is one of the most profound, crushing tragedies imaginable. This is probably not the novel I'd recommend to someone who wants an introduction to Wallace (for these readers, Good Old Neon and a few other titles come to mind). However, as a great admirer of Wallace's work, I personally found The Pale King transfixing.

Information Doesn't Want to Be Free (Cory Doctorow) (previously): This is one of the best nontechnical books about the internet I've ever read. Doctorow's overarching aim here is to put forward his "Three Laws", namely:

1) Any time someone puts a lock on something and won't give you the key, that lock isn't there for your benefit.
2) Fame won't make you rich, but you can't get paid without it.
3) Information doesn't want to be free, people do.

Each law is meant as a statement on an issue he takes very seriously, and so it might seem at first like the book is forced into a sort of limited scope. This suspicion is natural, but completely mistaken. In expanding upon his chosen issues, Doctorow manages to draw connections to a head-spinning number of topics -- in fact, he pulls in so many different issues that it would feel disingenuous to try to give any list. There are chapters in here about the history of the music industry. There are chapters about how to find an audience and (hopefully!) make a living as an artist. There are chapters about digital rights management (or "digital locks"), and how these technologies lead to no-win situations. There are chapters about the TPP and the very serious dangers its copyright provisions pose. The list goes on. In everything he discusses, a common theme one begins to pick up on is that problems arise when we fail to put people at the center of our designs.

Every topic Doctorow touches on fits into the overall flow of his discussion almost effortlessly, and his arguments are invariably lucid and well-reasoned. I'd already heard of pretty much every issue he discusses, and yet I felt like I learned so much just by studying how he lays out his cases and picks examples. Doctorow's talent lies not just in his capacity to care deeply and passionately about these issues, but in his capacity to make you see why you should care, too. The world needs more people who can do that.

Another nice thing about Information Doesn't Want to Be Free: it's published through McSweeney's! You can buy it direct from their website, which is a nice break from the usual Amazon-induced guilt many of us associate with buying hard-to-find titles. I don't have a list of my favorite companies to give money to, but if I did, I'd probably put McSweeney's pretty close to the top.

Monday, April 11, 2016

A Great Paragraph From Infinite Jest

From the fictional filmography of James O. Incandenza (four pages into the listing, so page 988):

Cage III--Free Show. B.S. Latrodectus Mactans Productions/Infernatron Animation Concepts, Canada. Cosgrove Watt, P. A. Heaven, Everard Maynell, Pam Heath; partial animation; 35 mm.; 65 minutes; black and white; sound. The figure of Death (Heath) presides over the front entrance of a carnival sideshow whose spectators watch performers undergo unspeakable degradations so grotesquely compelling that the spectators' eyes become larger and larger until the spectators themselves are transformed into gigantic eyeballs in chairs, while on the other side of the sideshow tent the figure of Life (Heaven) uses a megaphone to invite fairgoers to an exhibition in which, if the fairgoers consent to undergo unspeakable degradations, they can witness ordinary persons gradually turn into gigantic eyeballs. INTERLACE TELENT FEATURE CARTRIDGE #357-65-65

Posting an Infinite Jest excerpt to your blog is absurdly pretentious, I know. It's worth it to have the quote close at hand. Somehow, it's been coming up a lot lately.

Friday, February 26, 2016

More Politics in Software

Over the past two months, I've been writing a series of posts on the intersection of politics and technology. The series consists of two bookend posts, with a number of focused topic discussions between them; this is the second bookend post.

Programmers are incredibly good at finding stuff to get worked up about. What's your favorite text editor? Vim? emacs? Maybe (god help you) Notepad? gedit? kate? nano? Or maybe you don't use an editor -- ok, then what's your favorite IDE? Eclipse? Visual Studio? NetBeans? Something obscure and language-specific?

Speaking of, what's your favorite language? Python? C? Java? C++? C#? Javascript? Lisp? Haskell?

Astute readers may have picked up on a theme here: Unless you're getting ready to draft a specification or set up a group workflow, none of these questions matter at all. And yet, we're all expected to have strong opinions on them. Conversations like these cement computer science's male-dominated reputation, because they are all about unabashed dick-wavery.

I wouldn't mind this so much if it weren't for the fact that it distracts a lot of smart people from things that actually matter. If you're making the case that easter eggs like "M-x tetris" prove yours is the editor of the gods, you're not making the case that, say, fair use provisions are critical to the future of internet culture. If you're arguing ad nauseam that Eclipse is so bloated as to be all but unusable, you're not wrong, but you're also not learning anything. If you're arguing that modal editors like vim are better because the lack of chording means you're less likely to get carpal tunnel, that's nice, but also kind of weirdly specific.

There are thousands of these silly little issues. My goal with this series was to try to find software-related issues that actually, in some broader sense, matter. With that almost comically lofty goal in mind, let's take a lightning tour of the topics visited.

We started out with a discussion of boot security, where we tried to wrap our heads around the question of how to detect (or maybe even prevent) hardware attacks. The political angle: the recently adopted UEFI standard claims to solve this problem, but in fact makes it worse.

Next, we took a look at the still-emergent "sharing economy", and explored the good and the bad which lurk therein. One takeaway was that while change can be very good, "disruption for disruption's sake" is an absolutely absurd (and absurdly pervasive) guiding principle. Another takeaway: as services get decentralized, it gets really hard really fast to regulate them in any meaningful way, and this can lead to some really bad situations.

The sharing economy post momentarily brushed up against the issue of online platforms serving as facilitators for harassment and abuse. The next installment dealt with this issue head-on. It's incredible that there are large groups of people to whom this post's title, "Ignoring Abuse On Your Social Platform Is Not a Neutral Stance", is actually a controversial claim.

The final "body" post, "You Can't Legislate Reality", took on a somewhat broader scope, looking at ways that the legislature has gotten tech completely wrong in mind-boggling and often dangerous ways. In particular, that post saves some heated language for a discussion of the TPP.

Now that we've reached the end, there's only one thing left to do. I've heard it said that all that's needed for the triumph of evil is that the good do nothing. Now, that's not entirely wrong, but it's not entirely right either. It's good to be educated about the issues facing your domain of expertise. But that alone is not enough.

A friend once asked me to help fix his computer, and he refused to believe me when I told him I couldn't. "But you're a computer science major!" Yeah, I replied -- so I can give you a really detailed walkthrough of why it's broken! But that doesn't get us any closer to finding the fix. This is the difference between diagnosis and cure.

Tens of thousands of computer hobbyists sitting in tens of thousands of homes or offices could all independently educate themselves about the issues facing their field, all get tremendously incensed about something like the locking-down of router firmware or the government-mandated corruption of digital maps, and all independently decide that Something Must Be Done... but it wouldn't make one iota of difference unless they decide, given that knowledge, to do something.

The fact is, being able to explain exactly how and why the world is getting worse does nothing by itself to forestall this worsening. The people worsening your world for their own interests could not care less how well or poorly you understand what they're doing, as long as you don't try to get in their way. But how do we get in their way?

It's not easy: Most of these issues are national in scope, and very few of us have standing invitations to that particular big-kids table. But that's a bit of a silly complaint coming from people in a field where median incomes are almost all six figures. We've got money to burn, and there are groups who've been fighting the good fight for decades, and they accept donations.

Foremost among these groups is the EFF, a non-profit that relies largely on donations for its funding. We all owe them a debt of gratitude for the work that they've done towards our community's ends. As with any organization, donations are critical to retaining that focus. Once you land that sweet job and start making more money than you know what to do with, maybe think about starting to pay that debt back.

Friday, February 19, 2016

You Can't Legislate Reality

For thousands of years, geometers tried in vain to square the circle -- a task which, in 1882, was mathematically proven to be impossible. A result like this isn't really something you get to debate the specifics of. They call it "proof" for a reason.

That's part of what made the 1897 proceedings of the Indiana General Assembly so bizarre -- because it was there that lawmakers tried to pass a law declaring the problem solved. The bill might well have been passed by the senate, were it not for the intervention of a visiting professor.

This incident is one instance of a theme which recurs whenever the legislature collides with math or technology. The legal system just can't seem to wrap its head around how science works. Many are inclined to see malice in this tendency -- a sort of deliberate commitment to backwardness, a gleeful embrace of that which is known to be wrong. Tempting as this is, it's a good rule of thumb never to attribute to malice that which is adequately explained by stupidity.

What that rule of thumb fails to capture, though, is that many cases have plenty of room for stupidity and malice.

In the instance of Indiana's Pi law, the ignorance of certain groups within the legislature was maliciously exploited to feed the egotism of the bill's author, an amateur mathematician trying to make his reputation "solving" impossible problems.

In the instance of the Scopes trial, the scientific illiteracy of certain parties involved was exploited to the benefit of evangelical religious fundamentalists with a well-established track record of using legislation in legally dubious ways.

And in the instance of many recent legal cases concerning copyright, patent law, digital rights management (DRM), intellectual property, cryptography, and hardware design, the ignorance of the legislative and judicial systems on technical matters has been (and continues to be) exploited by avaricious and sometimes malicious vested interests in both government and industry, who use their leverage to advance profoundly antisocial ends.

Cory Doctorow argues compellingly in his recent book, Information Doesn't Want to be Free, that modern attempts at digital rights management (which he refers to using a more general term, "digital locks") are not only futile but also harmful to everyone involved. The essential problem (and here I do Doctorow a great disservice by trying to briefly summarize some of his main points; really, his treatment of the topic is second to none and I can't recommend that book highly enough) is this: What digital rights management schemes try to do is to provide a user with access to a technology, but only for certain purposes -- which, to put it bluntly, is just not possible.

Computers are copying machines. They are very good at copying data, and they can do it at virtually no cost. If you can watch a movie on screen, what's to stop you from telling your display to quietly, in the background, record everything it's displaying? Likewise for audio: once this data is in the user's hands, the users can do what they want with it. This shouldn't be a surprise: Computers are general-purpose, so this sort of flexibility is in their very nature.

All sorts of "solutions" have been proposed. Many devices now ship with purpose-built hardware meant to take control of a computer away from its user for the sake of giving manufacturers and content distributors stronger DRM controls.

Sony, never one to favor such above-the-board approaches, for some time had a standard practice of installing a backdoor rootkit on literally every computer that played one of their CDs, just so they could regularly check up on users and make sure they hadn't violated copyright. Read up on how that thing worked -- it's seriously evil.

Not that we're going to get into it here, but if you care about encryption and you haven't heard of the clipper chip, that's a history lesson you might want to give yourself. Focus your attention on the "criticisms" section, and then maybe read the case made by Bruce Schneier, who has more credentials here than almost anybody. He also made a short post not too long ago about how the Clipper debacle relates to the issues we face today.

It might be hard to believe the situation has worsened in the last decade, but in some ways it has. The much-maligned Trans-Pacific Partnership (TPP) has been negotiated largely in secret, so that until November of 2015 nobody except for government and big business interests even knew what it entailed. Now that a full draft has been released, we can confirm that the situation is even worse than originally thought. The EFF has a good discussion of the main points that deal directly with technology law. Of particular note, the language is designed to stifle things like conducting security research, fixing your own software and hardware, or talking about whether it's even possible to break DRM. And if you've ever pirated an album, may god have mercy on your soul. (Edited to add: Less than an hour after I published this post, Doctorow shared on his blog another simple breakdown written in conjunction with the EFF, which is well worth a read)

These are all things they want, and things they've been trying to implement, but software solutions to these things aren't possible, and so they've turned to legislating reality instead. If they can't outright stop you from copying a copyrighted file, and they can't justify undermining the designs of hardware (including the hardware they use!) in the process of trying to stop you, they can at least try to pass international laws letting them break into your home, confiscate your computer hardware, potentially destroy any or all of it, seize any domains you own, and throw you in jail, if they even suspect you've ever broken copyright. Yes, really. Go read the documents if you don't believe me -- it's all in there.

At what point are we going to recognize how fucked up it is that these are the priorities driving the world's major governments? When is enough enough? If this isn't enough to push us to that point, what will be? Will anything? Do we really have so little spine, so little self-respect? Is there no limit to the abuse we will tolerate?

Friday, February 12, 2016

Ignoring Abuse On Your Social Platform Is Not a Neutral Stance

There are some pretty big problems with social media right now. Or, it might be more accurate to say there's one big problem -- but it's really big. The problem is how, in this age, we deal with abuse and harassment online.

It borders on impossible to express the scope of online abuse and harassment. Probably the most famous example is Gamergate, which we're not going to get into here, because I'd rather eat glass than dignify that shitstorm with a summary. Look it up in another tab if you really don't know.

The point is, there are a number of well-known cases where specific individuals have been targeted by huge crowds for harassment and abuse. But there are orders upon orders of magnitude more cases that have not become even remotely as well-known, but which nevertheless have caused very real harm in people's lives.

In 2014, the Pew Research Center conducted a study on harassment, with some striking findings. The worst forms of abusive harassment targeted women disproportionately more than men. This may not come as a surprise, but the sheer numbers involved are staggering: 26% of women aged 18-24 reported stalking, 25% reported sexual harassment, and 18% reported sustained harassment. The corresponding figures for men were 7%, 13%, and 16%, respectively.

The takeaway is this: If we sincerely care about fostering diversity in online communities -- and we all should -- then the first step is to recognize how abusive harassment disproportionately targets some demographics over others. Otherwise, it is impossible to put together a coherent picture of how these behaviors take place on whatever platform you might be dealing with.

It goes beyond harassment, in fact: A recent study suggests that women's contributions to open-source projects on Github tend to be accepted more often than men's -- unless the reviewer knows that the code was submitted by a woman, in which case the acceptance rate plummets. Why is the gender distribution of core developers for major open-source projects so lopsided? Gosh, I wonder.

But I've managed to sidetrack myself again. The real point I want to be getting to here near the end of the post is about how institutions handle abuse, or how they fail to. I'm mostly going to pick on Twitter, because if I focused on Reddit et al. instead we'd be here all fucking night. It's mind-bogglingly bad. Ellen Pao tried to take some small, common-sense steps to improve things, and look how that went.

That reminds me: There's one thing we have to get out of the way right now. Let me put it this way. I adore freedom of speech -- it's an absolute, unconditional prerequisite to any broader freedoms -- but that fondness does not extend to many of its most vocal invokers. You know, the people who, the second they sense resistance, start bellowing that you can't do this! I have freedom of speech!

There are so many things wrong with this. First off, not everyone lives in the United States, which is almost never even acknowledged here. Like, come on. Second -- iamnotalawyer -- the First Amendment grants you the right to speak freely, not the right to be listened to. Third, there are notable exceptions to free speech, like fighting words and true threats. Fourth, if someone points out that what you're doing is actively harmful, and your best response is "yeah, but you can't make me stop", that really should prompt some serious introspection. Free speech is great, but having nothing on your side except free speech? Slightly less great.

With that out of the way, here are a couple notes on Twitter in particular. Twitter gets a kick out of pretending to take a neutral stance towards content shared on its platform. They've called themselves 'the free speech wing of the free speech party'. This blind enthusiasm might remind you of a discussion we just had. The issue is, serious harassment restricts ordinary people's willingness to exercise their freedom of speech, due both to emotional fatigue and, in many cases, to fear of personal harm. Refusing to take action against this form of harassment is, unavoidably, an implicit endorsement of its consequences.

So make no mistake: Freedom of speech is still restricted under this "pro-free-speech" platform. It's just that instead of restricting the speech of vitriolic spewmongers who devote countless hours to tormenting their fellow human beings, the platform restricts the speech of their targets. This is not a neutral stance; it is a pro-vitriol stance. I don't think it's an exaggeration to say that this stance is, in fact, anti-compassion. And, of course, it should almost go without saying that this stance is also implicitly every bit as sexist, racist, and otherwise bigoted as the abusers it enables. How is anyone okay with this?

Motherboard has an interesting timeline outlining how Twitter's rules have changed over its lifespan, along with the cultural shifts that accompanied those changes. One big takeaway is that, while Twitter has made some good changes in the past couple of years, its changes have not been universally positive, and we still haven't reached a good place. One anecdote in particular comes to mind. Just the other week, a parody account mocking Twitter support -- and particularly support's reluctance to suspend or otherwise take action against abusers and harassers...

...was itself, for a time, suspended. At least it's good to know the account suspension feature still works, I suppose.

Friday, January 29, 2016

Sharing Economy Apps and the New Bottle-Wavers

For better or for worse, the modern age has ushered in new 'disruptive' technologies the likes of which we have never seen before. The classic example of this is what some people have taken to calling the sharing economy.

The sharing economy, in a nutshell, is based on the idea that while traditionally people have bought goods or services from specialized third parties (taxi rides from taxi companies, hotel rooms from hotel companies), people totally would buy these things from each other if there existed a reliable channel to mediate those transactions. What's more, lots of people have services to offer, but no good way to offer them. If you're going out of town for a week, your apartment is just sitting there empty, and empty living space has an inherent value which you are not capitalizing on. Catchphrases like "unused value is wasted value" get thrown around a lot when describing this sort of situation.

Enter "sharing economy" apps. Uber, Lyft, et al., let you play taxi using your very own car. Airbnb lets you play hotel with your own property. The apps are a mediated channel for connecting consumers with providers, and (hopefully) giving each a reasonable level of assurance about the other. Basically, they give you a way to easily rent out things you already own, on your schedule. Stated in the abstract this way, it probably sounds nice. And a lot of the time, it is. But it also has its share of failures, and most people seem to turn a blind eye to them, drunk as we are on its successes.

Let's start with the name: "the sharing economy". This is a masterpiece of euphemism and marketing. Sharing is letting someone crash on your couch. Sharing is carpooling. The second you attach a price to something, the second you offer your services on a market instead of as a favor, what you're doing stops being sharing. But of course, sharing is such a nice word that people are reluctant to stop using it, even though very cogent arguments have been put forward about how misleading the name is, and other names have been suggested, most notably "access economy".

The next problem is that price aside, the generous-individuals-sharing-hospitality-because-we're-all-such-good-buddies narrative still isn't really true. Power players, both individual and corporate, have emerged, trying in essence to be the hotel and taxi companies (so to speak) of the sharing economy. The more successful they are, the more resources they have to put towards furthering their success, because that's how capitalism works. Of course, many die-hard capitalists would say that if this is the will of the market, then so be it. But it doesn't sit well -- aren't these exactly the sort of entities the sharing economy promised to move us away from?

Then there's the issue of regulation. And make no mistake: this is a big issue. Uber, for instance, has had no end of legal troubles in virtually every country where it operates. Because it doesn't fit the business models around which extant regulations are built, it can in many cases dodge or muscle past rules meant to apply to businesses offering exactly the services it provides. That ability to sidestep laws meant to hold it to ethical standards has let it engage, time and again, in startlingly unethical practices.

How unethical, you ask? I'll let you judge that for yourself. All I'm saying is, it's not a pretty picture. And it doesn't stop with Uber's own practices -- they also have a track record of enabling and defending drivers' ethically questionable conduct.

And it's not just Uber: Related companies like Lyft have also been taking all kinds of questionable liberties with their workforce, provoking high-profile lawsuits and setting controversial legal precedent. The question of whether these companies' workers, some of whom are full-time drivers who make their living off of Uber, should even be allowed to organize is still under active discussion, somehow.

It's not just quasi-taxi services, either. San Francisco has gotten pretty tired of Airbnb, seemingly for good reason. Plus, it seems like for every one of the service's funny stories ("boutique igloo"!), there's a horror story to balance it out, and while the blame in these stories rarely rests on one party alone, it's also rare to find one in which the facilitating service is not at least partly at fault.

What this situation reminds me of, somehow, is a little story embedded in a novel, told by one character to another. The story is about the term "bottle-waver", which I think the author coined. It might have been Neal Stephenson, but I'm not sure. In any case, the story as I remember it goes that there's this tiny island, and there's a tribe living on the island, and they've never made contact with the outside world. They all live peaceful lives, unconcerned with what might lie beyond their shores... until one day, an empty glass bottle washes onto the beach.

This bottle just blows their minds -- they've never even seen glass before, bear in mind, and now suddenly here's this, and they don't have the slightest idea what to make of it. The villagers are equally awed and terrified and so, seeking answers, they take it to the village shaman. The shaman immediately recognizes this glass bottle to be an object of great magical power, but also has no idea how to use it. To save face, the shaman grabs a stick, puts the bottle on the end of the stick, and waves the stick overhead, declaring, "Its power is mine!" The villagers, seeing this, are all forced to agree, and everything returns to the way it was.

The bottle-waver, then, is someone who claims as their own that which they don't even understand, seemingly hoping that by recognizing the power of what they have claimed, they will themselves acquire it. Actually understanding the power in question is unnecessary, maybe even detrimental -- all you have to do is look, to the less informed, as if you understand it. This reminds me very much of the attitude these companies take towards their collective innovation, the 'sharing economy'. It's unclear whether any of them truly understand or even care about their technologies' ramifications for the marketplace, or for the cultures in which they operate. They've hit upon something nobody's ever seen before -- their glass bottle -- and as soon as they found it, they all lunged for their sticks, to see who could wave it the highest. Now Silicon Valley watches, enthralled, as everyone in the crowd wishes for nothing more than to take the bottle's power for themselves. Suggestion after suggestion gets thrown out -- "Uber but for x," "Uber but for y" -- but so far, they're all too enthralled to suggest the one thing that might actually help: that we all catch our breath, take the bottle down off the stick, and take a moment to figure out what bottles are actually good for.