Market Empiricism


psztorc

Empirical scientific publications are all about spreading the word on "experimental evidence". You'd say: "If you run electricity through water using two electrodes, you can produce Hydrogen and Oxygen gas."

After you published this, people would know that, if they tried this themselves, they could expect the same result. Science thus became a Team Sport. For quality-control and scaling purposes, no single person would replicate everything; instead, a few "peers" (people working in the same area) would check the results to make sure they made sense. Thus Peer Review was born.

It worked well then, and it works pretty well now. The problem I have with it is that it sometimes doesn't scale past the environment it evolved in (one where everyone knew everyone else's name and what they were working on, and one where people only published something they felt would replicate). I might be wrong, but there are some big differences today. A replication study ("we got the same results as X") is practically unpublishable, and so practically worthless to a career. You need cash just to look at the insiders' club from afar (journal articles are rarely open access). And to join it (ie, to publish something), I believe that, in certain instances, careful flattery of the reviewers counts for more than accuracy: there's a difference between empirical predictions that sound reasonable to a target audience and those that are actually correct. Lastly, there is a "publication bias" toward "being interesting". For example, in economics, it is difficult to find a study explaining that 'everything is fine' with respect to current policy (about anything). It's just more fun to criticize.

There is another way: a Truthcoin Dominance Assurance Contract.

I'm going to try out a version that's slightly different from what I've previously described. Imagine a 2 x 2 PM with "Will Trusted Replication Firm attempt to replicate Study X?" along the rows and "Will the results of Study X be upheld?" along the columns. We have four states:

                 Not Replicated | Replicated
Not Attempted:         1        |     2
Attempted:             3        |     4

Individuals creating these Markets would do so to capitalize on any perceived disagreement, just like any other PM-Author.

Individuals buying States 1 and 2 would be those who either felt the study wouldn't be chosen for replication, or who wanted to subsidize a replication attempt (buying those states pushes down the prices of States 3 and 4, giving Trusted Replication Firm some $ reasons to replicate the study).

Individuals buying State 3 would be those who feel the study is 'bad' and would fail to replicate if a replication were attempted.

Individuals buying State 4 would be those who feel the study is 'good' and would replicate if a replication were attempted.

The audit firm would buy States 3 and 4 in equal amounts, just before deciding to audit the study. It can uniquely profit because only it knows which studies it will choose to replicate (ideally, the choice would be random). Since the firm itself guarantees that the attempt takes place, it receives a payout of 1 whichever way the replication turns out, having paid <1 for the pair of states, so it profits as well.
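To make that concrete, here's a rough sketch in Python (the prices are hypothetical, and I'm assuming the four state prices sum to 1, as in a complete market):

```python
# Hypothetical state prices for one study's 2 x 2 market (they sum to 1).
# States: 1 = not attempted / not replicated, 2 = not attempted / replicated,
#         3 = attempted / not replicated,     4 = attempted / replicated.
prices = {1: 0.40, 2: 0.35, 3: 0.05, 4: 0.20}

# The firm buys equal shares of States 3 and 4. Because the firm itself
# controls whether an attempt happens, exactly one of those two states
# must pay out 1, regardless of whether the study holds up.
cost = prices[3] + prices[4]       # 0.25 here
profit = 1.0 - cost                # 0.75, locked in before the attempt

print(f"cost = {cost:.2f}, guaranteed profit = {profit:.2f}")
```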

The whole point of doing this, however, is that one would be able to look at the market prices for all studies (not just those that anyone attempted to replicate) and assess the quality of the work that way.
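Under the same assumptions as the sketch above, the way to read quality off the prices would be: the market-implied chance that a study survives replication, conditional on an attempt, is price(4) / (price(3) + price(4)).

```python
def implied_replication_prob(prices):
    """Market-implied P(results upheld | replication attempted).

    `prices` maps state number (1-4) to its current price; States 3 and 4
    form the 'Attempted' row of the 2 x 2 market.
    """
    return prices[4] / (prices[3] + prices[4])

# With the hypothetical prices above: 0.20 / 0.25 = 0.80, i.e. the market
# gives the study an 80% chance of holding up if anyone ever tests it.
print(implied_replication_prob({1: 0.40, 2: 0.35, 3: 0.05, 4: 0.20}))
```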

These markets might be a little thin. Would universities or governments subsidize them? What about a single study, perhaps a very controversial or interesting result?
Nullius In Verba

psztorc

Again, these markets would probably cost some money, but this presentation lays out some of the problems they might solve:

Chameleons: The Misuse of Theoretical Models in Finance and Economics
Paul Pfleiderer - Stanford University
March 2014

https://www.gsb.stanford.edu/sites/default/files/research/documents/Chameleons%20-The%20Misuse%20of%20Theoretical%20Models%20032614.pdf
Nullius In Verba

MattGoldenberg

So to me this is an issue of "square peg, round hole".  As you noted, there's already a system in place (peer review) that tries to deal with this. The issue with peer review is that the peer reviewers themselves aren't being reviewed.  Trying to bring in a prediction market introduces additional liquidity problems, and we already know that journals won't be willing to solve those, given that most already don't pay their reviewers.

Crystal can easily be applied to this problem, by adding the notion of reputation to each peer reviewer, and having them provide probability densities of things like replication chance, retraction chance, and correction chance.  Reviewers with higher accuracy will gain more reputation, and be weighted higher in the future, so you don't need a highly populated market with lots of participants. You only need the few who have already proven they're well calibrated.  If you somehow do find someone to provide money, it need not be enough to provide liquidity to a prediction market. They can just pay what the information is worth to them, then it will be distributed based on the reputation of the peer reviewers after the prediction has cleared.