RE: OPEN CONVERSATION. — REGARDING OPEN SCIENCE AND DEVELOPING FREE IMMUTABLE POST PUBLICATION PEER REVIEW PLATFORMS AND CAPTURING NETWORK EFFECTS. ... [ Word Count: 1.500 ~ 6 PAGES | Revised: 2018.11.15 ]

in #development · 6 years ago

[Another section for a revised post.]

-5-

When testing anything with students, don't forget to use real-value rewards beyond an insignificance threshold. Then it can be published in most journals that accept experimental economics. (Otherwise the data can be rejected as nonrepresentative by the journals where you would like to publish, if that is a goal.)

The following is just an attempt to formulate the problem correctly. We want to create algorithms that are solutions to the real problem.

Which problem is that? Science publishing is mostly a typical lemons market. And not only because a randomly selected individual in the world far more often than not cannot distinguish a good product from a bad one.
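To make the lemons-market claim concrete, here is a toy Akerlof-style unraveling in Python. The quality distribution, the number of papers, and the rule that uninformed readers only credit the average quality on offer are all assumptions made up for illustration, not measurements of any real venue.

```python
import random

# Toy Akerlof-style lemons market: paper quality is uniform on [0, 1],
# but a randomly selected reader cannot observe quality before "buying"
# (reading, citing), so they only credit the average quality on offer.
# Authors whose quality exceeds that average withdraw to venues that do
# signal quality, and the average quality on offer keeps falling.
random.seed(0)
qualities = [random.random() for _ in range(10_000)]

for round_ in range(5):
    offered_avg = sum(qualities) / len(qualities)
    # Only papers at or below what uninformed readers will credit stay.
    qualities = [q for q in qualities if q <= offered_avg]
    print(f"round {round_}: average quality on offer = {offered_avg:.3f}, "
          f"{len(qualities)} papers remain")
```

Under these assumptions the market unravels in a handful of rounds: the better half leaves, then the better half of the remainder, and so on.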

So there are two routes to discovering how to make publishing free of fees anywhere, and therefore open to more participants all around.

(1) Raising awareness by writing about it and publishing in appropriate places. The minus side is that most of it will be paywalled. But it will give what [KITCH95] (Philip KITCHER, The Advancement of Science, Oxford: Oxford University Press, 1995) lovingly called unearned credibility. A paper in Econometrica has more oomph for getting funding than just talking about it, or publishing on a blog or in another journal. Even if it is the same text. Because most, even fellow travelers, will probably not read it.

(2) Just making a product. Linux style. Very minimal.

How can the prospective user trust that your algorithm will consistently push quality to the top and let low quality drop to the bottom (in journals this would just be Advice to Reject), and therefore not damage their reputation if they submit to it? Considering that this depends not only on the algorithm and the best papers submitted to it by authors, but also on the worst papers, which will affect the perceived status of the platform, its oomph, impact, and so on, even if some papers are excellent.
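One conservative family of ranking rules that tries to do exactly that (keep unproven or thinly reviewed papers from floating to the top on a couple of friendly reviews) is ranking by the Wilson lower bound of the positive-review share. What follows is a minimal sketch, not the algorithm proposed here; the function name and the example papers are hypothetical.

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Conservative estimate of the true positive-review share.

    Papers with few reviews are pulled toward the bottom until enough
    evidence accumulates, so low quality tends to sink rather than
    float on one or two favorable reviews.
    """
    if total == 0:
        return 0.0
    phat = positive / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# Hypothetical papers: (title, positive reviews, total reviews)
papers = [
    ("A", 9, 10),    # well reviewed
    ("B", 2, 2),     # two friendly reviews, little evidence
    ("C", 40, 60),   # mixed but heavily reviewed
]
for title, pos, tot in sorted(papers, key=lambda p: -wilson_lower_bound(p[1], p[2])):
    print(title, round(wilson_lower_bound(pos, tot), 3))
```

Note that even a rule like this only orders what is submitted; it does not by itself answer the trust question, which is the point of the next paragraphs.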

And considering that most do not and will not read papers [KITCH95]. Some skim. Most don't even skim. So a randomly selected user will judge a published paper based on the platform. Not on the content of the paper.

That in turn affects perception of the platform.

Which exposes good papers to risk. The risk is, at least, that of not getting read despite having done genuinely good work. (Not getting read when not doing good work is not a problem and not a risk.)
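A toy calculation of that risk, with entirely made-up numbers: if a reader who has not opened the paper decides whether to read it based on the platform's perceived status, and that status is set by the whole mix published there, then the strong papers inherit the readership of the tail.

```python
# Assumed numbers for illustration only: 200 strong papers of quality 0.9
# and a tail of 800 weak papers of quality 0.2 on the same platform.
good_papers = [0.9] * 200      # genuinely strong work
weak_papers = [0.2] * 800      # the tail that also gets published
platform = good_papers + weak_papers

perceived_status = sum(platform) / len(platform)   # what a non-reader infers
chance_strong_paper_is_read = perceived_status     # readership tracks status, not content

print("quality of the strong papers:      0.90")
print(f"perceived status of the platform:  {perceived_status:.2f}")
print(f"chance a strong paper gets read:   {chance_strong_paper_is_read:.2f}")
```

Under these assumptions the strong papers are read at roughly a third of the rate their own quality would justify, which is the risk described above.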

So because the quality of a publishing platform does not depend entirely, or even mostly, on its structure, it is hard to distinguish platforms that are good from those that are not so good, in the sense of what they do that contributes largely to a good or bad outcome. Furthermore, at a finer granularity, it is therefore hard to distinguish a publishing platform that will consistently deliver high quality when just launched, even if the design is known and even if the design is really good, from publishing platforms that will not consistently deliver high quality.

So even an excellent algorithm needs testing in a very public way, however that is done, because the payoff does not depend only on it.

Meanwhile it must be very convenient to participate; only a few clicks to submit a review or the like. Like with the latest Editorial Manager workflow. Once invited, you no longer have to make a password or log in; you just get a link. The system generates a pass, sends it by email, all automated, and it can take you straight to the submit-decision page, and meanwhile you can download the manuscript you are asked to review without logging in anywhere.
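A minimal sketch of that kind of passwordless flow, assuming an HMAC-signed, expiring token embedded in the emailed link. The names (SECRET_KEY, make_invite_link, verify_token), the URL, and the two-week expiry are illustrative assumptions, not Editorial Manager's actual mechanism.

```python
import base64
import hashlib
import hmac
import time

SECRET_KEY = b"server-side secret"  # assumption: kept only on the server

def _sign(payload: bytes) -> str:
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def make_invite_link(manuscript_id: str, reviewer_email: str,
                     ttl_seconds: int = 14 * 24 * 3600) -> str:
    """Build a one-click review link: no account, no password, just a signed token."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{manuscript_id}|{reviewer_email}|{expires}".encode()
    token = base64.urlsafe_b64encode(payload).decode() + "." + _sign(payload)
    return f"https://example.org/review?token={token}"

def verify_token(token: str):
    """Return (manuscript_id, reviewer_email) if the link is untampered and unexpired."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    if not hmac.compare_digest(sig, _sign(payload)):
        return None                      # tampered link
    manuscript_id, reviewer_email, expires = payload.decode().split("|")
    if time.time() > int(expires):
        return None                      # expired invitation
    return manuscript_id, reviewer_email

link = make_invite_link("MS-2018-0042", "reviewer@example.org")
print(link)
print(verify_token(link.split("token=")[1]))
```

The point of the sketch is the interaction cost: the reviewer clicks one emailed link and lands on the decision page, with the server trusting the signature instead of a login.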

Leading to the conclusion that some fun factor is required in the long run. Something outside the dilemma. And algorithms which bring a new level of convenience, while being consistent.
