Translucent Badges
In discussion with Cameron Neylon earlier today, another use case for badges came up—one that will require a bit of tooling, but would be useful in a lot of contexts. When I submit a paper to a scientific journal, the reviews that come back are usually anonymous (i.e., I don't know who the reviewers were). There are good reasons for this, but it creates a problem: how do I know how to assess those reviews? To borrow Cameron's example, if my physics paper gets one review from Richard Feynman and one from Joe the Mechanic, I probably ought to pay more attention to Feynman's—unless the paper is describing an experimental setup, in which case I should probably care more about Joe's, because Feynman was notoriously bad at doing experiments.
I can't solve this problem with badges right now because each badge identifies the person it was issued to (which is kind of the point). But what if Jane, as a reviewer, could go back to the badge issuer (or to the backpack site she's using to aggregate her badges) and say, "Please give me a token I can attach to this review to show that I have an Expert Experimentalist badge"? The token would be digitally signed so that people could confirm its authenticity, but it would carry no personal identification.
By analogy with Peter Wayner's "translucent databases", we can think of these tokens as "translucent badges": they let some light through, but they're not completely transparent. I can see lots of other ways they'd be useful. For example, I would really like to know whether a lengthy comment on Slashdot about software patents was written by a patent lawyer or a teenager in a basement in Saskatchewan—except what I really mean is, "I'd like to know how much the commenter knows about the subject," because that kid in Saskatchewan just might be a self-taught expert. Badges give her a way to validate her expertise; translucency would give her a way to share it more safely.