PRACTICAL THINKING. — Strategies for open, free, and transparent scientific publishing. It's the future of publishing. But how do you get the future sooner rather than later?

Posted in #steemstem (edited)


Word count: 2,500                 Updated: 2018-04-09

So . . . I would actually post research on Steemit!

UPDATE: continued in this post.

Here's why.

;)

It would eventually go into journals when concise and sufficiently rigorous. Elegant. Formatted.

But there's nothing quite like a blockchain printing scientific papers on currency tokens to establish priority in an archived manner. And therefore publish much, much earlier. Yes, earlier than even arXiv.

Nobody throws away money. The fact that scientific papers are recorded in currency tokens is what provides the free archiving. Normally you have to pay quite a bit for that, and get it from an organization that will probably survive for a long time. But not anymore.

(I wouldn't do this quite yet. There's too much spam at the moment. But we'll see. The front end is being forked by several groups. And there will be no trending page. Then competition from EOS will occur. All very good. When there's no competition everybody gets the worst product at the highest price.)

Publish a good result in journals which have consistent quality and impact. No rush at this point. A well organized, deep, clear, concise paper. One that others will actually read.

Circulate it to get feedback on possible flaws in the arguments even before submitting it. That usually takes over a month. If one doesn't care how many people read it, it would go to arXiv exclusively. Not both arXiv and a journal.

In that last case it would possibly be very long. Longer than it has to be.

Conciseness takes time. Pascal apologized long ago that he could not make his letter shorter. He had insufficient time to do that.

Suppose authors could do both? Game changer. Here's why.

What does consistent mean? Journals in which I, or the representative consumer of the content, read at least several of the papers in each issue. Like a restaurant with a star: the next two stars, two of a possible three, are for the consistency of the food, of that good food that earned the first star.

Problem? Solution?

Some of these journals are behind a paywall. Not accessible even to most academics, especially in Europe, where many institutions cannot afford to subscribe to all publishers. That costs millions, money that could be used for other things.

And that is not easy to change. The libraries which purchase the journals are not the end users.

When the consumer of the good and the one who pays for it are not the same entity, an above-threshold number of institutions worldwide will continue to buy these journals, even if all scientists switch. I'm thinking of the United States, the single largest market. It's an above-threshold minority. (We'll see later why that matters so much.)

The institutions subsidize open access in such journals for their faculty and students wherever there is desire and initiative. So for a further sufficiently great minority of scientists (in this case a majority), especially the young and untenured, early in their careers and hence poor, without high salaries or other streams of income, the incentives to switch to an entirely new platform are not yet sufficiently great.

A typical queried data point will declare: Yes, yes! They too want open, transparent systems! And having said that, they'll do nothing, submit nothing using the new technology.

What is required is a new use case. A new value proposition for individuals to transition en masse. New algorithms, for example, that maximize consistency beyond what current platforms offer.

So what was that about a sufficient minority? I mentioned that — but why?

Once a sufficient minority transitions, all will transition. Because, as Nassim Taleb points out with the minority rule in his latest book, if As will consume Ns or Ms but Bs will consume only Ms, then when scaling up production for a larger population there are significant economies of scale in producing only Ms, so long as the Bs are a sufficiently large minority. Assuming this would not be the case was a major error in the otherwise good post by @dan from two years back.
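The minority rule can be sketched as a toy producer's choice; all prices, costs, and numbers below are my own illustration, not Taleb's:

```python
def producer_choice(population, minority_share, price=2.0,
                    cost_m=1.2, cost_n=1.0):
    """Toy minority-rule model (all numbers are illustrative).

    Flexible As consume M or N; inflexible Bs consume only M.
    N is cheaper to produce, but making only N forfeits the B minority.
    Above a threshold minority share, making only M for everyone wins,
    so the inflexible minority sets the standard for all."""
    b = minority_share * population
    a = population - b
    profit_only_n = (price - cost_n) * a            # serve only the flexible As
    profit_only_m = (price - cost_m) * population   # serve everyone, at higher cost
    return "M for everyone" if profit_only_m > profit_only_n else "N only"

print(producer_choice(1000, 0.25))  # above threshold: "M for everyone"
print(producer_choice(1000, 0.05))  # below threshold: "N only"
```

With these numbers the tipping point is a 20 percent minority; the exact figure is arbitrary, the threshold behavior is the point.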

Let's talk about acceptance models for new technologies. In publishing and in general.

Publishing a partial result on public platforms, possibly anonymously, is a win in the sense that it gives scientists a way to transition to open and free platforms: it intentionally boosts the reputation of those platforms before scientists begin publishing on them generally. Make it academically necessary to read more than just paywalled journals. Don't give the competition options. That's one of the things arXiv has done beautifully. Think like arXiv; think strategically, I suggest.

There needs to be a use case in the space, a communications amplifier, missing from old-sector publishing technology, to get the above-threshold minority to switch to the new-sector publishing technology.

Maybe the above is something sufficiently many others would like to do as a transition to blockchain-based social media peer review and publishing. Like PEvO, as we discussed yesterday.

Otherwise we find, as Richard Gabriel argued in the link above, that the market of products and ideas does deliver improvement, but at the slowest possible rate. Anything faster requires strategy. Such as a significant additional use case.

Alan Kay argues that plain-paper copying by Xerox was more expensive than competing technologies, but it was a communication amplifier and it was easier. So it won. Air travel still costs more per unit distance than trains in most cases, but it's faster and so more convenient. So people fly.

Returning to the original question: ``In physics, we have the arxiv for this. I would keep steem as a very good medium for communicating about science, but I won't use it for publishing actual research paper. You need something made explicitly for scientists that is more controlled. Otherwise, how to distinguish the good from the bad? Who is deciding what is good and what is bad?''

I said I would. Many in the computer science and mathematics community do that.

The reader must learn to distinguish good from bad. We have waaay too many blind cites in science: researchers reading the abstract or conclusions and assuming the result is valid. This creates a broken-telephone effect, and the confusion that results gets cited.

Anyone who references a monograph, a chapter, or a paper needs to read the whole paper, the whole chapter, the whole book. (Steem doesn't currently solve that, as most posts have # Reads < # Votes. Maybe improving front ends and competition from EOS will help.)

Nicolas Rashevsky said it long ago: there's no royal road, no shortcuts in science. Where a paper is published is not sufficient reason to trust it, especially papers with technical content, given the ever-finer twigging of fields and overspecialization, which mean the referee is most likely not quite an expert either.

(https://www.jstor.org/stable/i20114445)

Every scientific paper must be carefully read by everyone who wishes to cite it. No shortcuts exist.

Deeper feedbacks are needed: reviews made public, reviews made of reviews, and so on. The result is a fixed point called accuracy. Reviewers would have higher-dimensional reputations, several values associated with their account, one for the results of each of these feedbacks.
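That fixed point can be illustrated with a toy iteration (my own sketch, not a specification): reviewer weights and the weighted consensus update each other until neither moves, and reviewers far from consensus end up with little weight.

```python
def fixed_point_consensus(ratings, tol=1e-9, max_iter=1000):
    """Toy 'reviews of reviews' loop (illustrative only).

    Each reviewer submits a rating. A reviewer's weight is set by how
    closely they track the weighted consensus, and the consensus is then
    recomputed from the weights. Iterating this mutual feedback settles
    at a fixed point, the 'accuracy' the post speaks of."""
    n = len(ratings)
    weights = [1.0 / n] * n
    consensus = sum(ratings) / n
    for _ in range(max_iter):
        consensus = sum(w * r for w, r in zip(weights, ratings))
        # Reviewers far from consensus lose weight; close ones gain it.
        raw = [1.0 / (1e-6 + abs(r - consensus)) for r in ratings]
        total = sum(raw)
        new_weights = [x / total for x in raw]
        if max(abs(a - b) for a, b in zip(new_weights, weights)) < tol:
            weights = new_weights
            break
        weights = new_weights
    return consensus, weights

# Three reviewers roughly agree; one outlier tries to sink the paper.
consensus, weights = fixed_point_consensus([7.0, 7.5, 7.2, 2.0])
print(round(consensus, 2), [round(w, 3) for w in weights])
```

The outlier's weight shrinks toward zero and the consensus settles in the 7.0 to 7.2 cluster. Rewarding sheer agreement is of course itself gameable, which is exactly the dominant-strategy worry below.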

There must be no dominant strategy that incentivizes gaming the system.

Journals are not there to vouch for the truth of all they publish. That is impossible; results get falsified all the time. Indeed, falsification of existing literature is among the most notable results and reasons to publish.

Publications are filters. Rather, they direct attention, given the vast ocean of publications, most of them redundant, irrelevant, or obviously rubbish. Readers get consistently high-quality work, and they must then read all the arguments and all the evidence and judge the paper anyway.

Clifford Truesdell also famously wrote the same in a rather acerbic and witty collection of essays.

The solution is to get a lot of open peer review. Papers might even be anonymous. The system must survive even anonymity as a stress test.

I particularly like what Behavioral and Brain Sciences does. I also like what the Annals of Mathematics does.

But it should be larger scale, open, free, transparent, and rapid. Everything cited must be read by everyone who cites it. Whatever system makes that happen will be a game changer. The quality as consistency of the journal or platform will be way up there. And more people will want to publish there, more departments will reward publishing there.

Some of my suggestions. High consistency would produce a large impact within a year. And the platform can go from there.

UPDATES

What does quality mean in scientific publishing? Some thought experiments about publishing technology.

 

Yes. There exists no shortcut to carefully reading the entire paper or monograph.

Any system that encourages such reading more than competing systems do is one with a real value proposition. That in turn can get writers to switch and start using a new publishing technology.

Suppose somebody writes a long paper. It has 100 points. But that's fair. It's a long paper. It's what you'd expect from a long, technical paper. So far so good.

And the paper, we suppose, is submitted to a well-known journal known for its quality. I mean its consistency. (See below for why I consider the consistency of a journal a proxy for its real quality. This is looking at things from the perspective of scarce reading time, that is, pragmatically.)

Buried in the middle of the paper, at point 55, we find this hidden treasure:

55.b.ii. Cats have tails. So do chimps.
        We conjectured they're the same species.
55.b.iii. Our sample contained four chimps and five cats.
        We tried to breed them; it didn't work.
55.b.iv. That means both are actually different kinds of lizard.
        Because lizards also have tails.

Huh? . . . What? . . .

That's not remotely true. Not even plausible.

And leads nowhere.

Therefore it shouldn't be there. Such is the difference between fiction and nonfiction. It's why sometimes you prefer to read nonfiction — to get a lead to the truth. At least a hint about the truth. (Which is another reason why consistency is a proxy for quality of nonfiction in general.)

And yet notice that this bit of nonsense might easily be placed in any paper, in any text. Observe that it makes no difference what the text is about, what its other content is.

Most persons, including reviewers, don't read what they review entirely and carefully. It's not a new complaint. The complaint is as old as shared journals used as publishing technology.

Not only is nobody really fit to lead others; many are also unfit to follow others.

(I suggest that isn't widely enough known, but not really unexpected.)

Consider that this post had and probably still has many typos. Small mistakes.
I wrote it in a hurry.

(Many persons do many things in a hurry. Which has consequences.)

So that's clearly not itself a big issue . . . But it reveals a big issue.

(Not merely the obvious one, which is that if you care about being right in arguments you base on the work of others, you ought not blind-cite. Lichtenberg and Schopenhauer said that long, long ago. It's an old mistake, a very old mistake, almost as old as shared scientific journals.)

One inference is that you always have to read what you cite, not just cite its conclusions.

Another inference, a more significant one, is that even if reviewers approved something, read it yourself, carefully, before believing the abstract or the conclusions. The reader must learn what is good or bad.

Now realize that people are lazy and busy, and that injunction will not be honored if that's all there is to it. Trust persons, yes; but trust systems, circumstances, and feedbacks more than persons (Adam SMITH, An inquiry into the nature and causes of the wealth of nations, vol. 2, London: Strahan & Cadell, 1776; Norbert WIENER, Cybernetics, New York: Wiley, 1948).

Not all reviewers will read all parts of a paper carefully. For various possible reasons. Many things are possible. And not all final consumers of scientific literature will read all parts of a paper carefully.

One of the reasons for open, transparent, and free publication of texts and reviews is to have as many careful readers as possible go over something. And this should be incentivized as much as possible, if the goal is to walk a few steps down the long road that leads to truth, or something approximately like it.

At least because some reviewers often don't carefully read what they review, and therefore what they recommend, or decline to recommend, for publication.

I suggest there's a problem in any process that allows the above ``conclusions'' to be published as if they were scientific literature. A system must result in careful reading of text. Mere affiliation of reviewers who then recommend publishing or not, or promoting on a page or not, in a busy world, even if everyone is well meaning, is no shortcut to quality. (It's one part of a larger system with many parts, I think.)

The good news is that problems create opportunities for improvement. They create an incentive to switch.

Journals are not there to vouch for the truth of statements they publish. (More on this below in this essay.)

They do not endorse literature. They endorse directing attention at certain literature and not other literature. They do this because the time of scientists is scarce, hence valuable, and this is the source of the value of scientific publishing platforms: they reward the attention they receive. Such is their derived value.

Now I write science fiction. And I like what I write. I like the genre.

But that's clearly labeled science fiction. Not science, science fiction.

That may contain references to science. Exactly like it may contain references to other real and true things, things that exist as described. Like trees at a recognizable location. But its purpose is entertainment, not truth. Which is clearly stated and obvious in any case.

The converse is the case for scientific publishing, which aims to direct our attention to truth. Nonfiction.

In the long run nobody will bother to read a document series that's not what it's described to be. If something is labeled fiction it must consistently contain entertainment. And if something is labeled nonfiction it must consistently contain what is likely to be true, or at least a point along the road to truth.

Much of science is provisional, later found to be untrue, but it has value because it led to attempts to falsify it, and those lead us to discover what is true. Much remains only provisionally true (Donald HEBB, A textbook of psychology, 2nd ed., Philadelphia: Saunders, 1966).

So what I'm saying is this. A decentralized publishing system has to be able to deal with the following unfortunately typical paper:

http://www.pnas.org/content/109/11/4086.full

Some people don't care about their own credibility. They're perfectly happy to publish nonsense. They even hold press conferences to draw attention to it.

Study 4: The experimenters asked people what their education level and income was. Then offered candy at the end of the survey. Those with higher education level and income took more candy. Paraphrasing the authors: ``Aha! If they didn't take the candy, more would be left over for children. Aha! This is evidence that higher social class individuals are immoral.''

The authors conclude that greater resources, greater freedom, and greater independence are bad for society. All based on all this substantial evidence. They suggest people need to be less educated, less free, and less independent to be moral. Yet they have no argument and no evidence. They just use the trick of saying what they're going to say . . . say what they're going to prove . . . and then don't say it . . . and don't prove it.

O_O

If we are going to consider that as a valid conclusion, the authors really prove too much. For example, that all adults including the authors are immoral. If they buy hotdogs, there are fewer hotdogs left for the children of the world.

Fascinating.

The other six studies are about the same in quality.

Study 1. A guy stood at an intersection and saw a few cars cut others off. More often it was the expensive cars that did it. After watching 100 cars he left . . . So after standing at a busy intersection for five minutes, he left . . .

Quoting the authors: ``Our confidence in these findings . . . ''

¯\_(ツ)_/¯

Proceedings of the National Academy of Sciences of the United States.
Impact factor ~10. Authors from well known institutions.
For comparison: Physical Review Letters ~9, Physical Review A to E ~3 to ~5.

Physics results in a paper. But here's a lesson for physicists. Forget about particles, forget about mathematics. Just tell a grad student to stand at an intersection for five minutes. Or tell him to put in a request for grant money to buy candy, then distribute the candy. Candy also results in a paper. Both can be published.

PNAS is a good journal in general. Yet how something like this makes it through review, and what algorithms could prevent something like this getting through on the sheer institutional reputation of the authors, are important questions when building competing technology. (For instance, the initial reputation the system assigns to persons signing up and verifying identity.)

There's a term, failing forward. Failing is OK.

Failure teaches us what not to do.

History is a history of failure. Negative knowledge is still knowledge. Important knowledge.

Feedback.

Basil LIDDELL HART actually wrote that. A military strategist.

And, more importantly, how will a decentralized system deal with these gentlemen writing highly weighted reviews of other papers . . .

Feedback via reviews of reviews, random selection for review, and exchange ratios for reputation tokens, I suggest.

I suggest that if there is to be peer reviewing based on reputations, there will need to be identity-verifying tokens to sign reviews. So that the system is as trustless and autonomous as possible. All to keep human labor costs as low as possible.

The tokens also track different kinds of reputation. Consider a major scientist who submits biased reviews of the work of his competition. His papers might be good, but his ability to write reviews might be weighted down. This is especially needed in the social sciences . . . wow, is it needed there. They have the equivalent of flag wars in those fields. What is the difference between Keynesians, New Keynesians, Neo-Keynesians, Classical Keynesians, and Heterodox Keynesians? I don't know, except that they dislike each other . . . Oh, yes, I forgot some people, the Post Keynesians . . . No children left behind.

: /

Different tokens are needed to automatically disaggregate all that needs to be kept separate, while keeping the human labor involved in managing journals as low as possible. I don't like administrative work; it's time-consuming and often fruitless. Let the computers do it. We'll create algorithms for the computers and verify they are carried out, by means of secure tokens.
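The idea of tokens tracking several kinds of reputation per account can be sketched as a data structure; all field names and numbers here are my own illustration of the idea, not a specification:

```python
from dataclasses import dataclass

@dataclass
class Reputation:
    """Hypothetical multi-dimensional reputation record: one account,
    several separately tracked reputation values (fields invented
    for illustration)."""
    authoring: float = 1.0       # quality of own papers
    reviewing: float = 1.0       # quality of reviews written
    meta_reviewing: float = 1.0  # quality of reviews of reviews

def weight_review(rep: Reputation, review_score: float) -> float:
    """A review's influence scales with the *reviewing* reputation only;
    strong authoring cannot offset a record of biased reviewing."""
    return rep.reviewing * review_score

# A famous author whose reviews of rivals were judged biased:
star = Reputation(authoring=9.0, reviewing=0.2)
print(weight_review(star, review_score=5.0))  # → 1.0, not 45.0
```

The design choice is disaggregation: each dimension rises and falls on its own feedback, so reputation earned in one role cannot be spent in another.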

For example, there was a paper basically proving that involuntary unemployment exists, on the premise that if a prospective employee went to an interview and offered to work, but only for some very small number of hours, such as zero hours, he would not be hired . . .

That's trivial, one might tell the author. Such a paper is either satire or nonsense.

Such a paper is pure wordplay . . . but it was published as if a valid result in a well known journal.

If automatically chosen to review that paper I would've voted not to publish it. To keep it at most at ``submitted'' status. But what happens when retaliation votes come back? Those votes would have to be reviewed and themselves voted down, and this tracked, at many levels . . . Or, better and simpler, each subfield of science has its own token and there are exchange rates between tokens . . . Something like that. (I wonder whether some fields would have very bad exchange rates relative to all the others.)
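The per-subfield tokens with exchange rates might look like this in miniature; the token symbols and rates are invented for illustration:

```python
# Hypothetical subfield reputation tokens with exchange rates into a
# common base unit (symbols and rates invented for illustration).
RATES = {("ECON", "BASE"): 0.4, ("PHYS", "BASE"): 1.0, ("BIO", "BASE"): 0.8}

def in_base(token: str, amount: float) -> float:
    """Convert a subfield token balance into the base unit, so that
    cross-field (e.g. retaliation) votes carry only discounted weight."""
    return amount * RATES[(token, "BASE")]

# An ECON-token vote counts at a discount relative to a PHYS-token vote:
print(in_base("ECON", 10.0), in_base("PHYS", 10.0))  # → 4.0 10.0
```

The rates themselves would presumably float on the behavior of each subfield's reviewers, which is exactly where a field with bad internal feedback would end up with a bad exchange rate.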

Reviews of reviews of reviews . . . or multiple tokens . . . or both?

Why ought scientists bother with decentralized blockchains?

Among the primary benefits of using a blockchain, besides the fact that printing scientific content on money gives free archiving: with block confirmation taking seconds on some decentralized blockchains and graphs, it's the fastest and most secure way to produce timestamping.
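A minimal sketch of priority timestamping by hashing, assuming a real system would embed the digest in a blockchain transaction whose block confirmation supplies the trusted time (the function and field names are my own):

```python
import hashlib
import time

def priority_stamp(manuscript: str) -> dict:
    """Establish priority by publishing only a digest (illustrative
    sketch). Only the hash needs to be public at first: the full text
    can be revealed later to prove who had it, and when."""
    digest = hashlib.sha256(manuscript.encode("utf-8")).hexdigest()
    return {"sha256": digest, "claimed_at": time.time()}

stamp = priority_stamp("Theorem 1. ...")
# Anyone holding the original text can later verify the claim:
print(hashlib.sha256("Theorem 1. ...".encode("utf-8")).hexdigest() == stamp["sha256"])  # → True
```

Note the archiving side effect: once the digest is written into a currency token, the record persists for as long as the chain does, with no archiving fee.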

For mathematicians, physicists, and quants there was always arXiv, but even that takes a day or so to post. Frequently revising arXiv papers is discouraged, however. It's intended for final or almost final preprints.

And that's where blockchains and related kinds of systems come into our space; most publishing technologies, new and old, are not strictly competing. They are mostly complementary.

We still produce print books. Computers have not entirely replaced books. (Donald NORMAN and Neil GERSHENFELD have written on related themes.)

Being able to timestamp early thoughts and revise ideas in a space where ideas and priority are the bread and butter for most participants will lead to earlier and wider sharing of ideas. Each person then builds on the work of others more rapidly and shares the results of that sooner. The positive feedback of growth of knowledge is accelerated.

For example, I revised this comment a minute after I first posted it.

;)

When the nature of the space is that ideas, or proxies thereof, are the bread and butter, timestamping and who said what are significant. There is less cost to submitting a conjecture or proposal for review.

One comment on this essay mentioned the case of one reviewer recommending publication and another recommending against. The text in such cases is indeed often not published. Yet the information was shared. In the grand scheme of things it doesn't matter who said what, but for many scientists it matters for obtaining support. (And it reduces the need to write lengthy histories of the subject every few years.)

Mancur Olson (Power and prosperity, New York: Basic Books, 2000) pointed out that any institutional arrangements that contribute to trust make long-horizon activities like science easier and more viable. Science is the most durable of all activities; the most durable things are those which accumulate. (For better or for worse: trash is also durable in this sense.) So these things most of all are the glue of time binding, and therefore the primary causes of (a) decline, as with trash or special interest groups (Mancur OLSON, The logic of collective action, Cambridge: Harvard University Press, 1965), and (b) growth (David LANDES, The unbound Prometheus, Cambridge: University Press, 1969).

All while most persons falsely imagine that oil plants and pipelines are the durables of our civilization, when such things are actually built redundantly, with working surfaces replaced or rotated every year or two depending on the material. In fact it's science that's primarily the cause of growth (Frederick SEITZ, Foreword, Purposive systems, New York: Spartan, 1968).


    #creativity #science #writing #creative #technology #life #publishing
            #thealliance #steemstem #isleofwrite #writersblock #blog

ABOUT ME

I'm a scientist who writes fantasy and science fiction under various names.

The magazines which I currently most recommend:
Magazine of Fantasy and Science Fiction
Compelling Science Fiction
Writers of the Future



©2018 tibra. Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Text, illustrations, and images: @tibra. The #thealliance symbol is courtesy of @enginewitty.


I think Steem could have a role in improving scholarly communication. First, lots of communication happens on centralized platforms like Twitter that should migrate to Steem. Steem also gives scientists an alternative funding mechanism at a time when funding is becoming increasingly difficult. Therefore, projects like Utopian and the future release of Smart Media Tokens are super exciting.

The main thing science needs is stronger incentives for researchers to be immediately open with their research. Through my own actions of being completely open, I try to demonstrate how openness allows researchers to succeed. However, the norm in science continues to be somewhat closed, though there's lots of enthusiasm for change.

Thanks @tibra for releasing your post under an open license. I do the same with my posts! Some that you may be interested in are:

  1. Plagiarism & blockchain timestamping gone awry: the most interesting case of scientific irreproducibility?
  2. How I used the Manubot to reproduce the Bitcoin Whitepaper
  3. Upcoming Reddit AMA on our Sci-Hub Coverage Study
  4. Censorship gone awry on Reddit: the aftermath of our r/science AMA

I remember your username so sorry if you've already checked out these posts! Cheers to #steemstem and improving scholarly communication.

I will rewrite what I wrote in the other post. Unfortunately, I have no time to talk further, but maybe tonight on discord? :)

Unfortunately I don't have that much time to answer deeply, but let's say that I partially agree. I come from a field where everything is public and can be open access, by selecting the right journals, even for free, so... All papers are even freely available on the arXiv. So my problems are somewhat different :)

By the way, everything is open access in particle physics thanks to the Scoap3 initiative where institutes from all over the world pay for it. This is not optimal, but already better than nothing.

But of course this is not the case for fields other than particle physics, and a solution must be built. I hope to be able to witness this in my lifetime.

If I have understood correctly, you would like a kind of Steem-based platform like the arXiv (but maybe with different rules on how the tokens are attributed; this is the reputation thing the PEvO people are after). But then why not simply the arXiv? I don't really understand the need for the token. Why is an arXiv-like version not enough?

Okay, the weak spot of the arXiv is the refereeing system, which is nonexistent. Then we have SciPost. I actually don't see what we should have in addition to this: SciPost includes mostly everything that you mentioned.

Maybe competitors (like PEvO)? But the idea is the best so far, IMO. I still don't see where and how the tokens enter the game, but well... to be discussed.


I understand the issue. I think that having full open access (including to the reviews) would help! People will think twice before reviewing carelessly (and potentially losing their credibility).


This is an important post!! Seriously so. Gratitude for your time and effort. RESTEEM to help get this out there for you, and also so I can reread at my leisure while I am travelling in the coming days. Grateful for the new connection. :)

Ty for the kind words.
And for the resteem.

:)

This is an important conversation. Thanks for getting the creative juices flowing here!

  1. The for-profit, paywall-blocked science publishing houses are not serving scientists or the public well. If the public paid for the science, often through their taxes, then they should be allowed to read it, download it, and explore it.

  2. Say you have a good idea, and you send this idea to a journal, and the journal sends it out for review to, say, reviewers 1 and 2. Reviewer 1 says the paper is bad and suggests "reject". Reviewer 2 says it is OK and suggests "revision". The editor says: well, 50/50, so I will reject. The paper is rejected, but Reviewer 1 has all of the ideas and data from the paper. This has in the past led to conflicts and to ideas being misused (a nice term for stolen). Having a timestamp showing that reviewer 1 received the paper that mentioned x, y, z first, and then used those data to submit their own paper AFTERWARDS, would help give credit where credit is due. Pretty cool.

You are a total genius: the blockchain needs to be part of the peer review process. Thanks! Lots of great ideas from the replies here too.

Thanks for the kind words.

Point (2) is among the primary benefits of using blockchain, besides the fact that printing scientific content on money gives free archiving. With block confirmation taking seconds on some decentralized blockchains and graphs, that's the fastest and most secure way to produce timestamping.

For mathematicians, physicists, and quants there was always arXiv, but even that takes a day or so to post. Frequently revising arXiv papers is discouraged, however. It's intended for final or almost final preprints.

Being able to timestamp early thoughts and revise ideas in a space where ideas and priority are the bread and butter for most participants will lead to earlier and wider sharing of ideas. Each person then builds on the work of others more rapidly and shares the results of that sooner. The positive feedback of growth of knowledge is accelerated.

For example, I just revised this comment a minute after I first posted it.

;)

Mancur Olson (Power and Prosperity, New York: Basic Books, 2000) pointed out that any institutional arrangement that contributes to trust makes long-horizon activities easier and more viable. Science is such an activity, indeed the most durable of all activities. The most durable things are those which accumulate. (For better or for worse: trash is also durable in this sense.) So these things most of all are the glue of time binding, and therefore the primary causes of both (a) decline, as with trash or special-interest groups (Mancur OLSON, The Logic of Collective Action, Cambridge: Harvard University Press, 1965), and (b) growth (David LANDES, The Unbound Prometheus, Cambridge: University Press, 1969).

Meanwhile most people falsely imagine that oil plants and pipelines are the durables of our civilization, when such things are actually built redundantly, with working surfaces replaced or rotated every year or two, depending on the material. In fact it is science that is primarily the cause of growth (Frederick SEITZ, Foreword, Purposive Systems, New York: Spartan, 1968).

Agreed on all fronts! Thanks for taking the time to reply. Time stamping is cool and important and this is the way to move forward. Blockchain and science = together at last...

But no happy ending yet. Going to be a big struggle and there will be resistance. Keep up the great work - I look forward to seeing where this goes.

That's exactly why I'm here. I'm a retired researcher with lots of unfinished projects, lab tips, and unpublished data, so Steemit is the place to discuss it! On Steemit, WE are the peer-review committee!

Exactly right.

I am planning to use Steemit as my new "Journal of Unpublished Data." I have full manuscripts, slide presentations, posters, notes, and raw data to work on and publish in retirement. I think we have similar objectives, so please consider following @qiyi "Unpublished Data."

Finally there is some discussion going on regarding scientific publication on blockchain technology. Nice points.

The reader must learn to distinguish good from bad.

Yes.

Yes. In most fields there's no shortcut to carefully reading the entire paper or monograph. Any system that encourages that more than the present one has a real value proposition: it can get people to switch and start using it.

I am excited about this idea; it would be wonderful for those of us who continue to study and yet are not in an academic setting. It's very difficult to get hold of certain journals, especially in medicine, if you do not have the "credentials".

Will be updating and fixing typos.


ONE MORE COMMENT.

One thing I forgot to add. Any modern publishing system should be capable of being changed / updated while running. (https://www.youtube.com/watch?list=UUkQaCK4Hk3cMrbXbqMMYSoQ&v=IPsZyfGCaKs is an interesting discussion to watch.)

It might be desirable for individual channels (~journals) to be able to add features to their own channel without requiring browser extensions, without requiring the whole system to go down, and without setting up separate servers or bots: just objects that live within the channel and can be added and removed in real time. That would allow each journal to add the specific features it wants. That would be a new value proposition compared to any existing platform.
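As a sketch of what "features as live objects" might mean (the class and method names are my own illustration, not any existing platform's API): a channel keeps a registry of feature objects that can be attached and detached while the system keeps serving posts, with no restart.

```python
class Channel:
    """A journal-like channel whose features are plain objects that can be
    added and removed at runtime, without taking the channel down."""

    def __init__(self, name: str):
        self.name = name
        self.features = {}  # feature name -> callable applied to each post

    def add_feature(self, name: str, handler) -> None:
        self.features[name] = handler  # hot-add: takes effect immediately

    def remove_feature(self, name: str) -> None:
        self.features.pop(name, None)  # hot-remove: no downtime

    def process(self, post: str) -> dict:
        # Every currently installed feature sees the post.
        return {name: handler(post) for name, handler in self.features.items()}

journal = Channel("some-journal")
journal.add_feature("word_count", lambda post: len(post.split()))
journal.add_feature("has_abstract", lambda post: post.startswith("Abstract:"))
print(journal.process("Abstract: a short note on timestamps"))
journal.remove_feature("has_abstract")  # removed live; later posts unaffected
```

The point of the sketch is only the shape of the design: each journal composes its own feature set, and adding or removing one never interrupts the others.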

The internet as a whole does not go down while part of it is changed or replaced. That allows components to be added and removed freely. This assisted the rapid user adoption of internet technology and encouraged users to switch to it.

This will be the subject of a longer post.
