Over at the Secret Blogging Seminar, Scott Morrison is championing a new project to analyze this year’s mathematics publications and draw attention to freely accessible papers. The Mathematics Literature Project is looking for your help categorizing published articles according to whether they are freely accessible on the arXiv, freely accessible somewhere else, or accessible only for a fee. Once enough data are collected, one may be able to correlate accessibility with quality as measured by number of citations. A handy tutorial shows you how to add information to the wiki by combing the internet for freely accessible versions of papers in well-known journals. The color-coded progress bars displayed next to each journal title on the wiki indicate how many articles have been categorized so far, and how. To see the key to the color coding, click on the bar itself.
In case you are interested in looking at some journals not listed on the wiki, the tables of contents of various journals are available through the site JournalTOCs.
Lastly, while reading data scientist Michael Li’s post The Mathematics of Gamification, about how Foursquare uses Bayesian statistics to judge the quality of updates proposed by its users, I started thinking about how such a strategy might apply to peer review of academic papers in the future. Foursquare uses “honeypots” to judge the quality of super-users’ updates. Perhaps certain academic papers could serve as “honeypots” (i.e., papers peer-reviewed in the traditional way and determined to be of very high quality). These could help calibrate super-reviewers’ ratings. Ratings could then also be updated in real time using conditional probabilities. In other words, a reviewer’s rating could be informed by knowing the probability that a paper is “good” given that the reviewer deemed it “mediocre”. What questions do YOU think this database might provoke or help answer?
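To make the conditional-probability idea concrete, here is a minimal sketch of how Bayes’ theorem would turn a reviewer’s verdict into an updated belief about a paper. All of the numbers (the base rate of good papers and the reviewer’s error rates, which one might estimate from “honeypot” papers) are hypothetical, chosen purely for illustration.

```python
# Illustrative Bayes'-rule sketch: update P(paper is good) after a reviewer
# calls it "mediocre". All rates below are made-up, not real review data.

def posterior_good(prior_good, p_mediocre_given_good, p_mediocre_given_not_good):
    """Return P(good | verdict = mediocre) via Bayes' theorem."""
    p_not_good = 1.0 - prior_good
    numerator = p_mediocre_given_good * prior_good
    evidence = numerator + p_mediocre_given_not_good * p_not_good
    return numerator / evidence

# Hypothetical calibration, e.g. estimated from honeypot papers:
prior = 0.30        # base rate: 30% of submissions are "good"
p_med_good = 0.20   # this reviewer calls a good paper "mediocre" 20% of the time
p_med_bad = 0.60    # and a not-good paper "mediocre" 60% of the time

print(round(posterior_good(prior, p_med_good, p_med_bad), 3))  # → 0.125
```

So with these (invented) rates, a “mediocre” verdict from this reviewer would drop the probability that the paper is good from 30% to about 12.5%; a well-calibrated reviewer (low error rates on the honeypots) would shift the posterior more sharply.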