[HN Gopher] You Want to See My Data? I Thought We Were Friends
       ___________________________________________________________________
        
       You Want to See My Data? I Thought We Were Friends
        
       Author : dnetesn
       Score  : 418 points
       Date   : 2020-07-30 10:10 UTC (12 hours ago)
        
 (HTM) web link (nautil.us)
 (TXT) w3m dump (nautil.us)
        
       | kkotak wrote:
       | I think all scientific papers should be written in this format.
        
       | [deleted]
        
       | Sebb767 wrote:
        | I generally agree, but their solution (fund boring research,
        | publish only in journals with high standards) is in direct
        | contrast to what they stated earlier. It's basically saying yes,
        | we have bad incentives, but we could just ignore them. That's
        | not going to happen; shiny new research _will_ attract people
        | and funding - certainly more than "boring" _what-we-found-
        | before-was-indeed-a-finding_ research.
       | 
       | Now, I don't have a good solution either, unfortunately. What
       | might work is that we require replication work for a PhD or have
       | a certain percentage of a journal dedicated to verification.
       | That, combined with some meta-studies to reward people with
       | citations for replication, might work without fully swimming
       | against the current.
       | 
       | It's a hard problem, really.
        
         | abdullahkhalids wrote:
          | I think a lot of scientists would actually do "boring
          | research": (a) they actually know such work is useful; (b)
          | some know that they don't produce work as good as the best in
          | the field, but are happy to do more grunt work at a lower
          | stress level rather than endlessly chasing "high impact"
          | publications.
          | 
          | Unfortunately, there is no funding for such research, which I
          | find really sad. Private grants might need to show "impact",
          | but state-run grants don't have that many constraints, and
          | they could conceivably offer such grants.
        
           | gvurrdon wrote:
           | I'd happily have done that grunt work, back when I was in a
           | lab. The work may not have been interesting but at least I
           | could feel as if I was being useful whilst also paying the
           | mortgage. Instead, I've ended up writing software which is
           | related to assisting journals in the task of getting
           | researchers to share their data.
        
             | abdullahkhalids wrote:
             | That sounds fascinating. Can you talk a bit more about your
             | work? What are the principal challenges in getting people
             | to share their data?
        
               | gvurrdon wrote:
               | No problem. This article might be of some interest:
               | https://www.go-fair.org/fair-principles/ There are, of
               | course, political challenges in persuading researchers to
               | share data, primarily the fear of other researchers using
               | those data for their own papers or to secure funding,
               | depriving the originator of the work of the chance to do
               | the same. I hear this at conferences quite often. The
                | part I'm working on is an attempt to catalogue the places
                | where research data may be stored and the requirements
               | various journals and repositories impose on researchers
               | who want to deposit data there. Gathering this
               | information, curating it and making it easily accessible
               | is not trivial, and relies upon a lot of manual curation
               | by specialist "knowledge engineers".
        
         | searchableguy wrote:
          | A more radical solution: basic income for all researchers,
          | plus an efficient digital way to see how funds are being
          | spent, like Open Collective for open source. Most problems
          | arise at the individual level, where people don't get paid
          | unless they are putting out research papers on the new hype
          | thing; the other problem is expenditure on middlemen and
          | bureaucracy, which isn't easily visible. Eliminate the use of
          | research grants for funding people's living.
          | 
          | Bit pedantic, but you should use "they" instead of "he/she".
          | People can have other pronouns.
        
           | gen155954609803 wrote:
           | >Bit pedantic but you should use they instead of he/she.
           | People can have other pronouns.
           | 
           | One of those other pronouns is literally "they", so you're
           | still risking using the wrong pronoun by using "they" for
           | someone who prefers to be referred to as "he".
           | 
           | Was anyone else taught the rule that when writing about a
           | third-party whose gender is unknown, use your own pronouns?
        
           | kebman wrote:
            | Oh, you mean tenure? UBI is a general thing for the whole
            | population, and not something just for scientists. Tenure
            | is in general a good thing, as long as scientists keep
            | working and there are some controls keeping them producing
            | at least "something". And let's face it, most of them do.
            | This is great, because it relieves scientists of pressure,
            | which means more of them can be creative in their research,
            | which actually speeds up discoveries. This is also a valid
            | approach within tech and IT, btw. But it should be said
            | that it has a very poor track record outside of these
            | knowledge-heavy professions.
           | 
           | On the other hand, if you really think UBI is a good
           | solution, then I'm afraid you don't know how an economy
           | really works.
        
             | searchableguy wrote:
             | Yeah, I meant something like tenure but less restricted for
             | anyone doing research.
             | 
             | > On the other hand, if you really think UBI is a good
             | solution, then I'm afraid you don't know how an economy
             | really works.
             | 
             | Could you elaborate more?
        
               | kebman wrote:
               | It has been discussed to death, but for the sake of
               | argument, let's review these two completely opposing
               | factions, who both agree that UBI doesn't work:
               | 
               | Zero Hedge on why UBI doesn't work:
               | https://safehaven.com/markets/economy/Why-Universal-
               | Basic-In...
               | 
               | The Guardian on why UBI doesn't work: https://www.theguar
               | dian.com/commentisfree/2019/may/06/univer...
        
               | cousin_it wrote:
                | Your second link mentions "universal basic services". I
                | always supported that idea but didn't know it had a
                | name. It's better than UBI because it sidesteps the
                | problem of cost disease (landlords raising rents in
                | response, etc.).
        
               | kebman wrote:
               | Why not go for the original idea, and go straight for
               | Communism instead?
        
             | mattkrause wrote:
             | I think they mean something _like_ UBI: non- or less-
             | competitive grants that would fund modest-sized projects
             | (or pilot studies for larger ones).
             | 
             | Tenure doesn't really help with this problem in the lab
             | sciences. While it lets faculty members keep their jobs, it
             | usually doesn't come with enough funding to do experimental
             | work. In fact, while you can "keep" a tenured position
             | without grants, many places find ways to...discourage that
             | (move your office to a shoebox in the sub-basement, crappy
             | teaching and service assignments).
        
         | asutekku wrote:
          | On an unrelated note, just use "their" instead of "his/her".
          | It makes the text easier to read and is the grammatically
          | correct way.
        
           | Sebb767 wrote:
           | Thanks for the suggestion! I've edited the question.
        
         | setgree wrote:
         | Tyler Cowen writes [0] that the most important question in
         | economics, to him, is
         | 
         | > how do differences of culture -- however defined -- interact
         | with traditional economic mechanisms involving prices, incomes,
         | and simple comparative statics? Are those competing
         | explanations, namely cultural vs. economic?
         | 
          | Ritchie's answers are mostly focused on changing the culture of
         | science, and while there are lots of ways we could change the
         | incentives, none of them would be pretty.
         | 
         | Example: let's say we want to more closely align research and
         | actionable results, e.g., a product a company can use (Brian
         | Armstrong argues for something like this [1]).
         | 
         | Solution: radically reduce public funding for scientific
         | research and for university education as a whole (in line with
         | Bryan Caplan's arguments in "The Case Against Education" [2]).
         | Academics, who would be many fewer in number, would then have
         | to get more of their funding from companies, who (presumably)
         | would:
         | 
         | A) guide them towards asking market-relevant questions, and
         | 
         | B) have a clear incentive to check the data, re-run the code,
         | etc. -- so that the product they built based on that research
         | didn't flop.
         | 
         | I think most people would recoil at this proposal. But that's
         | what comes to mind when I think about fixing the incentives
         | rather than the culture.
         | 
          | P.S. Small nitpick: Ritchie gives an _example_ of a perverse
         | incentive in lieu of a definition.
         | 
         | [0]
         | https://marginalrevolution.com/marginalrevolution/2017/01/im...
         | 
         | [1] https://medium.com/@barmstrong/ideas-on-how-to-improve-
         | scien...
         | 
         | [2] https://en.wikipedia.org/wiki/The_Case_Against_Education
        
           | dgb23 wrote:
           | I think this approach is flawed.
           | 
           | Most long-term high impact findings come from foundational
           | research, not applied science.
           | 
           | Also scientists would then just tune their research to sound
           | good to investors.
           | 
           | Tuning economic knobs in an area that should be as free from
           | outside pressure as possible seems counterproductive.
           | 
           | No, being stricter in rewarding rigor over perceived
           | usefulness is the way to go.
        
           | michaelt wrote:
           | If you think funding from industry makes academics' results
           | more robust, you might want to look into tobacco-industry-
           | funded research on cancer, oil-industry-funded research on
           | global warming, and ridesharing-industry-funded research on
           | drivers' working conditions.
        
             | JackFr wrote:
             | No one thinks "research on drivers' working conditions" is
             | science. Manifestly self-interested nonsense like that is
             | relatively easily ignored. And if that's the trade we make
             | to get Bell Labs, Xerox Parc and IBM Research, I'll take
             | it.
        
           | stonemetal12 wrote:
            | We tried that and got the "smoking is not bad for you"
            | science of the '70s, the "sugar is not bad for you" science
            | of the '80s, and the "fossil fuels aren't causing global
            | warming" science of today. How many drugs have been
            | "proved" safe for human consumption because it was better
            | for the bottom line?
            | 
            | Science beholden to business just proves whatever is
            | convenient for business owners, not the truth.
        
           | cousin_it wrote:
           | > _Solution: radically reduce public funding for scientific
           | research and for university education as a whole_
           | 
           | Wouldn't that make many scientists (both good and bad) move
           | to other countries where funding is easier to get?
        
             | setgree wrote:
             | Some, perhaps. Others would likely choose different
             | careers, or would adapt.
             | 
             | FWIW, and I didn't clarify this enough in my post -- I
              | meant this as an example of a solution that would change
              | the incentives rather than the culture, not
             | necessarily as my own full-throated endorsement; I do
             | personally think that steps in this direction would be for
             | the best, but it's not as though there wouldn't be
             | downsides that would need to be managed/mitigated.
        
         | meow1032 wrote:
         | > What might work is that we require replication work for a PhD
         | 
          | I don't think this will work. All it will do is devalue
          | replication studies, because only PhD students would do them.
          | It's also not in their best interest, especially if they
          | dispute findings of established researchers.
          | 
          | Also, we have to get away from the idea that the scientist's
          | job is to think and write, and that literally all of the
          | other work can be shuffled off onto low-wage (or no-wage),
          | low-status workers. This is one of the biggest reasons that
          | science is going through such a crisis. If you want enough
          | papers to consistently get grants, you probably need at least
          | 4-5 PhD students every few years. This causes a massive glut
          | in the job market. It also dissociates scientists from their
          | work. I've met esteemed computational biologists who could
          | barely work a computer. All of their code was written, run,
          | and analyzed by graduate students or postdocs. They were
          | competent enough at statistics, but that level of abstraction
          | from the actual work is troubling.
        
           | jpeloquin wrote:
           | Requiring replication work for a PhD seems like a great idea.
           | PhD programs already use a mandatory exercise--the qualifying
           | exam--to check a student's competence, with ambiguous
           | effectiveness. Turning the qualifying exam into a replication
           | study seems like a win: it tests the student's ability to do
           | their actual job rather than pass an abstract test, and
           | produces output that is useful both to the student and the
           | community. The qualifying exam committee (usually ~ 4 PIs
           | from different labs) can do quality control on the
           | replication.
           | 
           | > All it will do is devalue the value of replication studies
           | because only PHD students do replication studies. It's also
           | not in their best interest especially if they dispute
           | findings of established researchers.
           | 
           | Most studies are done by students regardless, so it seems
           | unlikely that replication studies would be devalued merely
           | because they're done by students. Although disputing the
           | findings of established researchers can be risky, they would
           | be publishing jointly with their PI (or, with the above
           | implementation, multiple PIs), not alone with no support. Few
           | students want to stay in academia, so it usually doesn't
           | matter to them if a professor at some other institution gets
           | offended. Most importantly, if everyone is doing replication
           | studies, there will be so many disputations flying around
           | that any particular person is less likely to be singled out
           | for retaliation.
        
             | meow1032 wrote:
             | It sounds like what you're suggesting would be functionally
             | equivalent to PI-led replications, which I would agree is a
             | good idea. There are still some practical problems though.
             | 
             | 1. Studies can be much more expensive than most people
             | think. In my field, a moderately sized study can easily
              | cost $100,000+ even if you're only accounting for up-front
              | costs (e.g. use of equipment, compensating participants).
              | Someone would have to foot the bill for this.
             | 
              | 2. Studies can be incredibly labor-intensive. PIs can get
              | away with running studies that require thousands of man-
              | hours because they have a captive market of PhD students,
              | postdocs, and research assistants all willing to work for
              | low wages or for free. PhD students usually don't have
              | access to the same amount of manpower.
             | 
              | 3. For obvious reasons, studies that require high-cost,
              | labor-intensive work naturally tend to get replicated less.
             | In other words, the least practical studies to replicate
             | happen to also be the most necessary to replicate.
             | 
             | A couple of things I would dispute:
             | 
             | > it seems unlikely that replication studies would be
             | devalued merely because they're done by students
             | 
             | I think academics value work in a particularly skewed way.
             | There is "grant work" and there is "grunt work". Grant work
             | is anything that actively contributes to getting grants for
              | one's institution. Grunt work is everything else. PhDs
              | can do grunt work, but that doesn't mean it will be
              | valued on the job market. For example, software
              | development is actively sought after in (biology) grad
              | students, because it's a very useful skill. However, I've
              | also seen it count against applicants for professorships,
              | because it shows they spent too much time on "grunt
              | work". Software development
             | skills don't win grants.
             | 
             | > Few students want to stay in academia
             | 
             | In some fields there aren't any options except to stay in
             | academia or academia adjacent fields.
        
       | abdullahkhalids wrote:
        | One change that would help, and would be easy to implement, is
        | reporting citations excluding self-citations for each
        | researcher/paper.
       | 
       | This doesn't take care of citation rings, but does move the
       | needle towards reporting the actual value of a paper/researcher.
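        | 
        | As a minimal sketch of the computation I mean, assuming a
        | simple citation graph keyed by paper ID (all names here are
        | illustrative, not from any real tool):
        | 
        |     # citations: paper_id -> set of paper_ids it cites
        |     # authors:   paper_id -> set of author names
        |     def external_citations(paper_id, citations, authors):
        |         """Count citing papers that share no author with
        |         the cited paper, i.e. exclude self-citations."""
        |         cited_authors = authors[paper_id]
        |         count = 0
        |         for citing_id, cited_ids in citations.items():
        |             if paper_id in cited_ids:
        |                 # shared author => self-citation, skip it
        |                 if not (authors[citing_id] & cited_authors):
        |                     count += 1
        |         return count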
        
       | beagle3 wrote:
       | I don't know if the comic properly represents the book, but all
       | of the suggestions are somewhere between ridiculous and useless.
       | 
        | First, about identifying bad research: the "extraordinary claims
        | require extraordinary evidence" test is already practiced. It's
        | not the "we've overturned quantum theory" articles that are
        | causing problems - those are quickly and effectively shot down.
        | And it is rare that the "perfectly aligns with a political
        | interest" test can be applied. The only actionable one is "see
        | what others think about it", and it's no panacea either.
       | 
       | Bad and fraudulent science like the recently retracted
       | Surgisphere covid paper is abundant. I was trying to track down
        | the origins of the "reduce salt intake" and "limit egg
        | consumption to no more than 2 per day / 2 per week"
        | recommendations, assuming there was hard science behind them.
        | There wasn't;
       | and indeed they're slowly being reversed everywhere - but they
       | were prevalent for half a century, with a lot of other research
       | taking them as axioms.
       | 
        | The remdesivir trials have been p-hacked to death - anyone who
        | took an interest could see it happening in real time - yet the
        | scientific community turns a blind eye.
       | 
        | The ketogenic diet is vilified in mainstream media everywhere
        | and in most nutritional "science" publications; the headlines
        | are rarely in line with the actual results, but the headlines
        | are what people remember. (Not that nutrition science is really
        | science.)
       | 
        | And the other recommendations about fixing it are comparable to
        | "let's solve the evils of the US two-party system! All we have
        | to do is make those two parties vote to take away their own
        | power". Academia
       | and science publishing are where they are now because it benefits
       | essentially all the incumbents (at the expense of the rest of
       | society).
       | 
       | The problem description (at least in the comic) is good. Any
       | suggested action ... not so much.
        
         | xenocyon wrote:
         | > The problem description (at least in the comic) is good. Any
         | suggested action ... not so much.
         | 
         | Well put.
         | 
         | In theory, people like the idea of making science better.
         | 
         | In practice, people don't like the idea of fewer papers, boring
          | papers, ambiguity in hiring and tenure, uncertainty about
          | financial return on investment, and fewer
          | institutional/national pride talking points.
         | 
         | The bad incentives and metrics we have haven't happened by
         | chance - they have emerged from our collective desire for
         | science to be useful, sexy, and a reliable function of
         | money/effort spent.
        
         | csours wrote:
         | I think that pre-registering methods is important.
         | 
         | I think that a requirement to publish results either way on any
         | study that's been pre-registered is important.
        
         | Bobbcatt wrote:
         | >"extraordinary claims require extraordinary evidence"
         | 
         | And yet most people accept climate change as fact, even though
         | the claim itself is so extraordinary that we as a species are
         | not capable of producing enough evidence for it.
         | 
          | You can't accurately predict the temperature 7 days from now
          | to within 1 degree Celsius, but you want me to believe you
         | when you try to do it for 50 years from now.
        
         | joppy wrote:
         | I think the suggestion that "universities could change their
         | hiring policies" is a very good recommendation. There are a lot
         | of problems in academia (bad research, lack of diversity,
         | pressure to publish) that could be reduced if universities
         | changed their hiring policy to something further away from the
         | metric of (number of papers published since receiving PhD) /
         | (time since receiving PhD). Of course no university hires
         | explicitly on that metric, but many of the metrics they use are
         | not far from that, mixing in quality of journals and norms for
         | the field, etc.
         | 
          | The suggested actions, taken together, would seem to be a
          | very positive improvement on the status quo. Would you care
          | to explain why they would be "somewhere between ridiculous
          | and useless"?
        
           | beagle3 wrote:
            | OP here; I am not saying the suggestions won't work. I am
            | saying they will not happen, because they mostly undermine
            | the power of those who would need to take those actions.
            | You are asking the established people in academia to give
            | up the source of their power and privilege, and in return
            | there's some vague promise of a better overall outcome for
            | humanity. Why should they?
        
           | meow1032 wrote:
            | Not OP here, but my issue with the recommendations is that
            | they've pretty accurately listed a whole bunch of mostly
            | structural problems with academia, but all of the
            | suggestions boil down to "we all just need to try harder".
            | You can _say_ something like "journals need to demand
            | higher standards", but what incentive do they actually have
            | to do so? Then you can counter with "scientists could vote
            | with their feet", but what incentive do they have to do
            | that? You're asking people to consider seriously damaging
            | their career for some nebulous quality metric.
           | 
           | Frankly, having worked in academia long enough to see at
            | least a couple of shifts in culture, the only thing I can
            | see coming out of this is a couple more items added to the
            | ever-growing checklist for publishing a paper/submitting a
            | grant application.
           | 
           | I think we need to get away from the sort of thinking where
           | large structural problems can be solved by tiny incremental
           | improvements. If you really want to solve the problem, one or
           | more of [Granting Agencies|Journals|Universities] has to be
           | completely torn down and built back up.
        
             | mitjak wrote:
             | > one or more of [Granting Agencies|Journals|Universities]
             | has to be completely torn down and built back up
             | 
              | Right - and unless the new institutions are in a financial
              | vacuum, they will remain built on and affected by broader
              | systems, resulting in conflicts of interest.
        
             | joppy wrote:
             | It seems to me, still, that a lot of these problems you
             | bring up can be addressed by universities changing their
             | hiring policies. Which makes sense: academics ultimately
             | rely on universities for their income, and so it is the
             | hiring policies which are setting the perverse incentives.
             | And I don't think changing hiring policies would be an
             | incremental change, it would be a huge change (and not
             | likely to be made by any university any time soon, since
             | students rank universities on similar metrics to how
             | universities hire staff -- a prestigious university will
             | lose prestige even if it changes its hiring policies for
             | the better).
        
               | meow1032 wrote:
               | > academics ultimately rely on universities for their
               | income
               | 
                | Sort of - a huge portion of income is from grants,
                | particularly after the first few years from being hired.
                | More importantly, a huge portion of the university's
                | income is from grants. When a researcher receives a
                | grant, there is an "overhead" percentage that goes to
                | the university.
               | Universities hire, in part, to maximize those overheads,
               | which means getting the researchers with the best chance
               | at getting big grants.
               | 
                | Changing the hiring process may affect how PhD students
               | act, but once they're "in the system", they are subject
               | to all the same problematic incentives.
        
               | tejtm wrote:
               | > academics ultimately rely on universities for their
               | income
               | 
               | In my decades at it (digital side of bioinformatics) the
               | cash flow is in the other direction.
        
           | jpeloquin wrote:
           | > Would you care to explain why the suggested actions would
           | be "somewhere between ridiculous and useless?"
           | 
            | Not OP, but the proposed "solutions" not only add more work
            | items to the ever-growing checklist (as mentioned by
            | meow1032), but also require everyone to spend even more
            | time checking everyone else's work items if they are to be
            | useful:
           | 
           | Solution 1, requiring data sharing and preregistration,
           | greatly increases the work of peer review, perhaps by an
           | order of magnitude. Someone needs to check that the data
           | produces the published results and that the final analysis
            | plan matched the preregistration. That is hard, time-
            | consuming volunteer work, with no reward incentive. Peer
            | review currently trusts that the authors did what they said
            | they did, correctly, and it still takes 4-12 hours to review
            | an article. Most reviewers cut corners. If no one does the
           | work to check the open data or preregistrations, "open
           | science" will be merely performative, with no quality
           | improvement.
           | 
           | Solution 2, changing hiring policies to "look beyond
           | publication and citation numbers", is pretty much what hiring
           | committees already do. But with ~ 200 applicants per job
           | opening the depth of examination per applicant is somewhat
           | limited. As in solution 1, lack of time for deep checks is a
            | problem. Applicants who are well-networked, with good pre-
            | existing reputations (i.e., who are plugged into the web of
            | trust), get hired; everyone else doesn't. From a research-
            | quality perspective, this may be a good thing.
           | 
           | Solution 3, funders fund boring / rigorous research, could
           | improve matters in theory. But with only enough money to fund
           | ~ 1 in 10 proposals, projected impact will always be an
           | overwhelming concern. Proposals will include a "research
           | integrity" section or similar and nothing substantive will
           | change.
           | 
           | Solution 4, scientists "vote with their feet" (stop
           | participating in the dysfunctional parts of the system), is a
           | call for people to come up with their own solutions or
           | support other proposed solutions, not a solution in its own
           | right. Ironically, it is perhaps the most useful because it
           | pushes back on the idea that poor quality science is
           | inevitable under the current structure. "Perverse incentives"
           | must not become a generally accepted excuse to sacrifice
           | scientific integrity for the benefit of one's own career.
           | Science is meant to discover new information. Without a
           | culture of integrity, that information will always be
           | suspect, regardless of what top-down interventions are
           | attempted.
           | 
           | An effective intervention must reduce workload or at least
           | break even, not increase it. Or increase the resources
           | available. Otherwise people will be forced (actually forced,
           | not just incentivized; there are only so many work hours each
           | day) to cut even more corners elsewhere to make up for lost
           | time.
        
         | asddubs wrote:
         | thank god we can eat a ton of salt again, to be honest I was
         | doing it the whole time
        
           | jschwartzi wrote:
           | I never stopped. The whole idea had a couple smells to it:
           | 
            | * Maybe increased sodium intake is linked to poor health
           | outcomes because highly processed foods are linked to poor
           | health outcomes. We know that sodium content increases
           | significantly during food processing and that most highly
           | processed foods are really unhealthy.
           | 
           | * Maybe people in early stages of renal failure are more
           | likely to progress to a noticeable state if they consume lots
           | of salt. Then it would stand to reason that people with
           | healthy kidneys have nothing to worry about.
        
         | AnIdiotOnTheNet wrote:
         | > The ketogenic diet is vilified in every mainstream media and
         | most nutritional "science" publications; the headlines are
         | rarely inline with the the actual results, but that's what
         | people remember (Not that nutrition science is really science)
         | 
         | Nutrition science is real science, but unfortunately any actual
         | nutrition science you might accidentally hear about is
         | overwhelmed by people trying to sell a lifestyle, a book about
         | healthy eating, or your eyeballs (to advertisers).
         | 
         | Ironically keto promoters are a big offender here themselves.
        
           | amanaplanacanal wrote:
            | Most nutrition science is pretty bad. As is often said,
            | correlation is not causation, but correlations are what
            | most nutritional studies report.
           | 
           | It's easy to study large populations and find correlations.
           | You publish a paper, the media reports something, and that
           | becomes the accepted wisdom. But you have no idea how to
           | factor out all the possible confounders in your study.
           | 
           | It's hard to do a study where you make a change to people's
           | diet and see if that affects health outcomes, so it is rarely
           | done. And really that's what you need to do to see what is
           | really going on. So what we are left with is a bunch of
           | associations that may or may not hold up.
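            | 
            | A toy simulation of the confounder problem (all numbers
            | made up for illustration): a hidden "health-consciousness"
            | trait drives both kale-eating and exercise, and only
            | exercise affects the outcome - yet kale still looks
            | protective in the raw association:
            | 
            |     import random
            |     random.seed(0)
            | 
            |     rows = []
            |     for _ in range(10_000):
            |         conscious = random.random()  # hidden confounder
            |         kale = random.random() < conscious
            |         exercise = random.random() < conscious
            |         # outcome depends ONLY on exercise, never on kale
            |         good = random.random() < 0.3 + 0.5 * exercise
            |         rows.append((kale, good))
            | 
            |     def rate(flag):
            |         sub = [g for k, g in rows if k == flag]
            |         return sum(sub) / len(sub)
            | 
            |     print(f"kale eaters: {rate(True):.2f}")   # ~0.63
            |     print(f"no kale:     {rate(False):.2f}")  # ~0.47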
        
           | exolymph wrote:
           | Counterpoint: https://meaningness.com/nutrition
        
         | clairity wrote:
         | > 'And the other recommendations about fixing it are comparable
         | to "lets solve evils of the US 2-party system! All we have to
         | do is make those two parties vote to take away their power".'
         | 
         | that's a bit unfair. yes, it does require that in some
         | semblance, but that's not necessarily the one and only lever we
         | have. most of us realistically expect, i hope, that we'd need a
         | variety of political maneuvers to move our democracy toward a
         | more representative and less insular direction.
         | 
         | science is no different--it took many small steps to get into
          | this situation, and will take many small steps to get out, any
          | one of which will seem wholly inadequate on its own.
        
       | im3w1l wrote:
        | I kind of think science isn't fixable. Maybe we can get it a
        | little bit better, but we just have to live with the suck. At
       | least it makes some progress despite all the flaws.
        
       | Vinnl wrote:
       | It's an excellent analysis of the fundamental problems in
       | academia, but "people should just act against their incentives"
       | isn't really a solution.
       | 
        | It really is the incentives themselves that are the problem:
        | just looking at the number of publications and citations (or
        | even: citations of articles _in the journals that your articles
        | happen to be published in as well_) when determining whom to
        | fund or hire.
        | 
        | The problem is that we _have_ those metrics, that they are
        | relatively quick and easy to obtain, and that they are accepted
        | because they are what's been used so far - even though plenty
        | of research has pointed out their flaws [1]. And anything new
        | that is proposed as a replacement for those metrics (whether
        | other metrics or other systems of evaluation) is dismissed
        | either for not being proven to live up to a standard that the
        | currently used methods don't meet either, or for not being
        | available quickly or easily enough. (Which is reasonable -
        | e.g. it's not viable to read and properly evaluate all the
        | research of all your applicants.)
       | 
       | (Disclosure: I do volunteer for a project, https://plaudit.pub,
       | that tries to offer an alternative nevertheless.)
       | 
       | [1] https://medium.com/flockademic/the-ridiculous-number-that-
       | ca...
        
       | James_Henry wrote:
        | A lot of the problems in academia, I believe, come from
        | incompetence, from a lack of questioning (or at least doubting)
        | others' competence and one's own, and from reliance on unsound
        | ideas about scientific methodology. Andrew Gelman has
       | some good thoughts here that I feel are related:
       | 
       | https://statmodeling.stat.columbia.edu/2020/07/29/the-crooks...
       | 
        | Some people will have the best of intentions; they will really be
       | doing "science" out of the goodness of their hearts and out of
       | their desire to help mankind, but they'll end up publishing trash
       | that gets acceptance and sometimes even praise.
       | 
       | What are the incentives that need to be instilled to fix the
       | problems we currently have? I'd say we need to incentivize
        | competence and humility. How? I don't know exactly, but values
        | like these do seem to get instilled through cultural practices
        | and traditions.
       | 
        | Humility, especially, seems to be lacking from many cases of
       | bad science. If people accepted criticism and accepted that they
       | don't really understand all that much, I believe scientific
       | quality would improve. You do have a lot of reasons to not be
       | humble in academia though, as this comic lays out.
        
         | Donthatme wrote:
         | > A lot of the problems in academia, I believe, come from
         | incompetence, a lack of questioning or at least of doubting
         | others' competence and your own competence, and reliance on
         | unsound ideas about scientific methodology.
         | 
          | I kind of agree, but I will state it somewhat differently.
          | Note that my experience is in physics and healthcare, so this
          | may not apply to all fields.
         | 
          | In my experience, the desired skill set shifts towards
          | management/admin/bureaucracy/money-chasing once you're in a
          | professor or professor-like position, as opposed to the
          | nitty-gritty researcher of the grad school phase. The
          | incentive in the grad school phase is good science; in the
          | professor-like phase it is grants/papers/awards.
        
         | bjornsing wrote:
         | > I'd say we need to incentivize competence and humility.
         | 
          | The problem, I think, is that the competent are few, and when
          | the cultural norm is that they must be humble, they stand no
          | chance against the many incompetent.
         | 
          | IM(H)O: Science shouldn't be humble in the face of non-science.
          | As long as it is, it will lose. The idea of conflict-free great
          | science is a pipe dream. We need a culture that accepts
         | (intellectual) conflict.
        
           | goblin89 wrote:
           | The way I see it, pride is noise.
           | 
           | By staying truly humble in the face of non-science, science
            | provides a calm, even backdrop against which it is easier
            | to separate worthwhile findings from bullshit.
           | 
           | This is orthogonal to conflict. A conflict could be handled
            | humbly (Rapoport's rules and all), or the opponents could
           | drown the signal of their arguments in the noise of pride.
           | 
           | To abandon humility would be to fight noise with more noise.
        
           | James_Henry wrote:
           | I agree that there isn't enough competence, or that "the
           | competent are few" (though competence isn't a yes or no
           | thing, you can be competent in some respects and not in
           | others).
           | 
           | However, I think that competence really only has a chance if
            | the incompetent are humble. There will be conflicts, and
            | these conflicts should be embraced; someone who is
            | incompetent and not humble will fight the existence of the
            | conflict rather
           | than the actual scientific issue that needs to be solved or
           | understood.
        
             | bjornsing wrote:
             | What I'm worried about are the incompetent but seemingly
              | humble... They will go around calling the competent
              | "arrogant", and they will seem right (to a lot of
              | people), because the competent have such high scientific
              | ideals that a lot of people will feel they can't live
             | up to them. So they will win. The End.
             | 
             | Real science is a hell of a lot harder than p-hacking and
             | HARKing your way to a great career. At least some of the
             | incompetent know this. They will not play nice.
        
           | LMYahooTFY wrote:
           | I agree and I think this hits deeply into the heart of the
           | matter. Science is precisely resolving conflict in your
           | observations by removing as much bias as possible.
           | 
           | Engaging in that conflict with each other is how we expose
           | new ideas for further analysis.
        
           | gregmac wrote:
           | > the cultural norm is that they must be humble then they
           | stand no chance against the many incompetent
           | 
           | I don't think this is a cultural norm so much as just the
           | Dunning-Kruger effect [1] in play. People who are highly
           | competent still realize there is much they don't know, and
           | that makes them humble. I suspect if you go and find someone
           | widely recognized as an expert in pretty much any field, and
           | ask them if they know all there is to know, you'll find no
           | one says yes.
           | 
           | [1] https://en.m.wikipedia.org/wiki/Dunning%E2%80%93Kruger_ef
           | fec...
        
             | James_Henry wrote:
             | I'm willing to bet that the Dunning-Kruger effect is shaped
             | by culture and an individual's characteristics, like
             | humility.
        
         | einpoklum wrote:
         | > A lot of the problems in academia, I believe, come from
         | incompetence
         | 
         | Indeed, but:
         | 
         | 1. We are all - well, almost all of us - incompetent in many
         | aspects of our lives, and competent only in some.
         | 
         | 2. The incompetent are often not willing to simply cede their
         | place and let the competent do what (arguably) needs to be
         | done.
         | 
          | 3. Proving and verifying competence is quite difficult unless
          | you are yourself competent in, or at least close to, the field
          | in which the competence in question lies...
        
           | James_Henry wrote:
           | Hence the need for humility and an understanding that science
           | is about trying to figure stuff out, not about just following
           | rules?
        
       | mabbo wrote:
        | Zach Weinersmith's ability to tell compelling non-fiction via
        | comics is something I truly love. He manages to take what the
        | person is saying and convert it into a comic form of them
        | saying it, while adding humour. And throughout the process,
        | _what_ is being said does not seem to be degraded at all.
        | There's also a level of openness we all seem to have towards
        | something that comes at us as a comic rather than hard text.
       | 
       | I don't even think we have a good word for what this practice is,
       | but I'll go with "Art" because it takes a lot of that.
       | 
        | His book on Immigration[0] is a large-scale version of this
        | skill in practice, and I suspect a lot of HN readers might
        | enjoy it, regardless of whether you agree with his points.
       | 
       | [0]https://www.amazon.com/Open-Borders-Science-Ethics-
       | Immigrati...
        
       | kerkeslager wrote:
       | One big problem is that journals agree to publish studies _after_
       | they are completed, which means that publication in prestigious
       | journals is based on the novelty of the result rather than on the
        | validity of the study/experiment. It's an absolute fundamental
        | of science that you go into it with an open mind, admitting that
        | you do not know what the result will be. A good study/experiment
        | is not one which produces an interesting result; it's one which
        | is properly designed to answer the question it's asking.
        | Evaluating the quality of studies/experiments based on their
        | results is an anti-scientific practice which should be excised
        | like the cancerous tumor it is. The best way to do that is to
        | accept and commit to publish studies/experiments based solely on
        | their design, _before_ the experiment/study has actually been
       | performed.
        
         | renewiltord wrote:
         | Oh I like that. With hypothesis pre-registration, journals
         | could commit based on the hypothesis (and perhaps methods)
         | rather than on the result.
        
           | kerkeslager wrote:
           | I think the methods would be the most important element of
           | peer review and commitment to publish. If you have a good
           | method, the hypothesis is almost irrelevant--it's important
           | to understand what variable is under test, but predictions of
           | what result will be found are somewhat arbitrary.
        
             | Beldin wrote:
              | I would love the idea (for data-driven science) of peer
              | review _before_ execution of the work. Write up a one- or
              | two-pager, review the methodology, and accept/reject based
              | on that. Once the work has been done, there would mostly
              | be editorial comments.
             | 
             | Of course, that ignores the analysis section, which is
             | somewhat important.
             | 
             | But still, a vast improvement over today's way, where you
             | may end up spending a lot of time only to get a vague
             | reject.
        
               | chordalkeyboard wrote:
                | This is similar to "triple blind", where the reviewers
               | review methodology before they have access to the
               | results, but after the work has been performed.
        
             | samatman wrote:
             | It's unclear to me precisely what you mean, and we may be
             | agreeing here.
             | 
             | But the 'variable under test' is definitely part of the
              | hypothesis, and a study without a hypothesis is not
              | _useless_, but it is much less likely to produce a useful
             | result.
             | 
             | There's a relevant xkcd:
             | 
             | https://xkcd.com/882/
             | 
             | If the (pre-registered) hypothesis was that green jelly
             | beans cause acne, then this is at least an interesting
             | result.
             | 
             | If you run this experiment with no particular hypothesis,
             | and then decide on that basis that green jelly beans cause
             | acne, this is just a setup for a later failure to
             | replicate.
             | 
             | At best. At worst, no one bothers checking your results,
             | the company stops selling green jelly beans due to bad
             | publicity, and people who enjoy the green ones are deprived
             | of them for no good reason.
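              | 
              | To make the multiple-comparisons point concrete, here's a
              | quick toy simulation (mine, not from the comic): run 20
              | "colour" tests on pure noise at p < 0.05 and, about 64%
              | of the time, at least one comes back "significant":
              | 
              |     import random
              |     random.seed(42)
              | 
              |     def fake_test(n=100):
              |         """One 'colour': z-test on two noise groups."""
              |         a = [random.gauss(0, 1) for _ in range(n)]
              |         b = [random.gauss(0, 1) for _ in range(n)]
              |         z = (sum(a) - sum(b)) / n / (2 / n) ** 0.5
              |         return abs(z) > 1.96  # ~ p < 0.05
              | 
              |     trials = 1000
              |     hits = 0
              |     for _ in range(trials):
              |         # 20 colours tested; any one "hit" counts
              |         if any(fake_test() for _ in range(20)):
              |             hits += 1
              |     print(f"{hits / trials:.0%}")  # ~ 1 - 0.95**20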
        
               | Enginerrrd wrote:
               | That's a narrow view of publishable results. There's a
               | ton of really useful science done by simply filling in a
               | curve with measured values. There's no hypothesis
               | required. Think eutectic curves in metallurgy and things
                | like that. ...But the methodology being sound is
               | critical.
        
         | bonoboTP wrote:
         | > Evaluating the quality of studies/experiments
         | 
         | That's part of the problem. Publications are seen as
         | achievements. If you got accepted to a prestigious journal or
         | conference, you can list this on your CV as an impressive
         | "award"-like thing. A publication list is not just a list of
         | "Look, this is the kind of stuff I've been working on, have a
         | great read at it", but "Look, my research is so great it got
         | accepted to all these fancy places!".
         | 
         | Publications are therefore unfortunately not merely about
         | sharing new info with the research community but an award show.
         | Ideally a publication would be the _start_ of the conversation:
         | "this is what we found, this is the method we propose, what do
         | you think of it, community? Will you pick it up?" The test is
         | then whether the ideas get adopted. But that's harder to
         | measure. Citations try to approximate it, but it's a very crude
         | approximation. A citation, as such, may mean tons of different
         | things: e.g. a) a deep critique (Negative impact) b) being
         | cited as part of a long block of "these other works exist, too"
         | (Low impact) c) another work substantially based on the deep
         | ideas of the original paper (High impact), d) being listed in a
         | table for comparison, ala "we beat this other method", but no
         | other discussion of the original paper (Low impact), etc.
         | 
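          | A rough sketch of what a less crude, context-weighted count
          | could look like (the categories follow the list above; the
          | weights are entirely made up for illustration):
          | 
          |     from enum import Enum
          | 
          |     class Cite(Enum):
          |         DEEP_CRITIQUE = "deep critique"
          |         RELATED_WORK = "listed among related work"
          |         BUILDS_ON = "substantially builds on ideas"
          |         BASELINE = "compared against in a table"
          | 
          |     WEIGHTS = {  # illustrative weights only
          |         Cite.DEEP_CRITIQUE: -1.0,
          |         Cite.RELATED_WORK: 0.1,
          |         Cite.BUILDS_ON: 2.0,
          |         Cite.BASELINE: 0.2,
          |     }
          | 
          |     def weighted_impact(cite_types):
          |         """Context-weighted citation score for a paper."""
          |         return sum(WEIGHTS[t] for t in cite_types)
          | 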
          | _If_ a publication were nothing more than a "hey, look, this
          | is interesting", then I'd say publishing mostly novel, sexy
          | results would be fine! After all, the surprising cases are
          | those that teach us the most. However, as I said earlier, a
          | paper is not only a "hey, this is interesting", but also a
          | "hey, I want to advance my career". And in a twisted way of
          | logic, I can agree that we could therefore put a band-aid
          | over some of the problem by publishing (rewarding) systematic
          | work with negative or boring results. But ultimately, this
          | goes against the original purpose of papers, which is
          | alerting the scientific community to potentially new
          | information that we haven't known about before.
         | 
         | ----
         | 
         | Ideally, to assess someone's scientific career, there would be
         | at least one smart, attentive, impartial expert taking their
         | time reading through the publications, taking notes, pondering,
         | digesting it all, consulting other experts etc. However, this
         | is too subjective.
         | 
         | Quantitative metrics seem superficially more objective and
         | therefore egalitarian. The original idea is probably that if we
         | just based everything on subjective judgement of scientific
         | importance instead of publication count, there would be even
         | more networking and friendship-based quid-pro-quo back
         | scratching.
         | 
         | But everyone is overworked, and those who aren't, want to keep
         | it that way. So nobody wants to put in the effort to actually
         | interact with the deep content of research. It's too
         | complicated and too opaque.
         | 
         | ----
         | 
          | The problem is, flashy results _are_ by their nature more
          | attention-grabbing on all levels. It's not just some small
          | perverse incentive. This is how all of us work, this is how
          | history works, how everything works. The winner takes all,
          | the rich get richer, etc. We remember the Einsteins of
          | history; those who just worked systematically and didn't find
          | much aren't heroes. And if that's our bar, then people will
          | do everything to look like they clear it. In any system,
          | scientists would have to hype up their impact; it doesn't
          | matter who makes the decisions.
         | 
         | Currently, universities want to employ researchers who will
         | make a visible impact. Because that means attracting funding,
         | but also attracting bright people from all around. Career-
         | conscious researchers want to go to universities that help them
          | market themselves well (good PR departments etc). PR is not
          | only aimed at laypeople; there is such a flood of research
          | nowadays that even the experts of a small niche cannot keep
          | up with everything happening.
         | 
         | ----
         | 
          | The root of it is human nature: competition, deception,
          | cliques, hierarchies. But what's new is the scale of it, and
          | the accompanying mechanization of it all. The idea that you
         | can mass-manufacture innovation. That you can expect thousands
         | upon thousands of researchers to make regular breakthroughs
          | and, to take my field as an example, publish tens of thousands
         | of novel AI-related ideas every year. It's related to
         | credential inflation, and fake signaling: people with good
         | academic track records got the good jobs and the respect, so
         | people try to emulate that. Everyone tries to become the 1%,
         | the rock star. And everyone wants to hire the 1%. So just like
         | an evolutionary pressure, people try to appear like the
         | successful. Soon enough the old signal doesn't work anymore. It
         | used to be a high mark of educational level to have passed high
         | school. Today that's a bare minimum. College used to be a
         | meaningful differentiator. Now more than half of young people
         | are "college educated" in developed countries. The next step is
         | about becoming "researchers". Nowadays, having some
          | publications is not a big differentiator. We see this also in
          | title inflation, where monkeying around in Excel is called
          | "data science" and "AI".
         | 
         | It's not just a monkey's paw. There is no central figure
         | orchestrating it, asking the monkey's paw for more papers. It's
         | a distributed system of agents acting in their self-interest.
          | Nobody wants papers for papers' sake; they want to make
          | defensible, justifiable decisions that will not get them
          | fired, and to pass satisfaction back up the path the money
          | flows down, all the way to the CEOs, politicians and
          | taxpayers.
        
           | marcus_holmes wrote:
           | Would open publishing on the web solve this?
           | 
           | After all, the journal system was invented to solve
           | distribution of papers. We have the internet now, so is there
           | any need for the journal system any more?
           | 
           | Independent reviewers would/could easily step up to pick up
           | the interesting papers and present a "feed" of the good
           | stuff.
        
             | brigandish wrote:
              | A standardised data format for papers would help (beyond
              | the usual introduction > methodology structure; treat it
              | like an insurance claim or something of that ilk). That
              | way content could be distributed, compared and discovered
              | far more easily than by wading through papers written in
              | different formats, with all the flowery academic language
              | etc.
              | 
              | It'd probably make writing them easier too.
        
             | bonoboTP wrote:
             | First we need to understand _what_ we want to solve. The
             | attention economy and the PR war go on just as much on
             | the open internet, e.g. people only reading papers from
             | big-shot MIT, Stanford, and Harvard labs.
             | 
             | The problem is way deeper than just academia. Such as:
             | is there fairness in the world deep down? Is mass-
             | produced excellence possible? Does individual greatness
             | actually exist, or is it all just a power play?
             | 
             | Overall, the quality of science is extremely difficult to
             | measure, precisely because it operates on the border of the
             | unknown and because people try their best to appear the
             | best possible. Science is difficult to understand and is
             | often far removed from the here and now, and may only bear
             | fruit decades down the line. It's hard to judge for the
             | same reason that antelopes are hard to catch for cheetahs:
             | competition (mainly the antelope vs antelope type).
             | 
             | In the end, science has only been this mass product for a
             | few decades. Before that it was mostly a pastime of weird
             | nerdy aristocrats or people paid by aristocrats for showoff
             | purposes. Or church people with too much time on their
             | hands.
             | 
             | In reality, from the top-down view it's a huge gamble.
             | You try to get good people to do their honest best and
             | then see what happens. At the end there will be some
             | breakthroughs, but only a few every few years in each
             | field. This does not satisfy the participants: "I
             | toiled away as well, but the reward is only paid to the
             | lucky one." So everyone tries to be the lucky one,
             | which perversely pushes everyone to take fewer risks,
             | making the collective likelihood of a breakthrough
             | lower but their own expected reward higher.
             | 
             | -----
             | 
             | My grandfather used to tell the story of a farmer who
             | had three pigs. Every morning he'd throw two apples
             | into their pen. He'd then grab a big stick and beat the
             | one that didn't get an apple: why didn't it try harder?
             | 
             | -----
             | 
             | My prediction is that as with all signaling spirals and
             | treadmill effects, there will be something new to aspire
             | to, to tell the wheat from the chaff, a signal that's
             | harder to fake. It's a constant race: you demonstrate
             | your fitness by adapting to how the system changes.
             | Overall, the "quality" of people obviously doesn't
             | change over time; it's just that the competent/powerful
             | drive the criteria to their benefit.
             | 
             | As academia/publishing etc. is now flooded with "the
             | plebs", "the elite" will move on and will perhaps use other
             | criteria.
             | 
             | ----
             | 
             | Now, going back to assuming this is about the object-level
             | science itself. Where to find the best science? You cannot
             | do this in general. You have to educate yourself and dive
             | in yourself. You try to learn how to judge people's
             | character and try to listen to and digest the assessment of
             | those you trust.
             | 
             | There's no other way: gather experience and become
             | "better" yourself. Use the cognitive resources of your
             | brain to try and outsmart your opponent: the writer of
             | the piece of text you are reading. This cannot be
             | standardized or turned into a simple metric (short of
             | human-level AGI). If your organization does not put in
             | the cognitive power of extensively processing the
             | content of a particular piece of research and
             | critically examining the motivations behind it, there
             | is no way to judge it. Then you're back to credentials:
             | did it come from a highly cited person? Is this person
             | endorsed by other big shots, where the "seed big shots"
             | are the researchers at the historically most
             | prestigious institutions?
             | 
             | ----
             | 
             | Currently, to find interesting research I personally use
             | Github recommendations, Google Scholar alerts watching for
             | citations of landmark papers (good indicators for progress
             | in niches) and authors. A well-curated Twitter-feed is also
             | useful, as is arxiv-sanity. In the end, I have to make
             | up my mind whether it's good work or not, and like
             | everyone, I don't have infinite cognitive resources. So
             | I make snap judgements based on paper gestalt,
             | affiliations, plausibility, result tables, etc. If it
             | clears this bar, I dive in more. Over time, you learn
             | to trust some smart people and can follow them online
             | to see what they say and recommend. And you
             | continuously learn and grind your brain. Cognitive work
             | cannot be spared, just as you cannot spare physical
             | exhaustion in sports competitions.
        
             | 6510 wrote:
             | The W3C's (failed) PICS rating system[1] has always
             | fascinated me. In short: everyone (including the
             | author) gets to rate everything by whatever scale they
             | want to use, and everyone gets to use ratings made by
             | whomever they like. I could see it working as well for
             | academics and intellectuals as for bodybuilders and pro
             | gamers.
             | 
             | One angle of my fascination was how the [rating] baby
             | was pretty much tossed out with the bath water. It was
             | shouted down and denounced by journalists (for example
             | as an adult filter, which ironically ended up being its
             | only application). Some journalists wrote as if they
             | had a kind of tenure: they had been published for so
             | long that the very idea of a rating system was
             | offensive. We could argue that a good rating system
             | would use existing talent for calibration, but the real
             | question to ask, imho, is: if PICS was so bad, what did
             | we get instead? Anonymous 5-star ratings? Thumbs up? HN
             | points? Number of GitHub saved games? To say it doesn't
             | compete with publishing in journals is somewhat of an
             | understatement.
             | 
             | At the end of the day, all we are looking for is good
             | metadata. If noteworthy people in a field want to
             | endorse an HN topic, a blog post, a usenet post, a
             | tweet, a youtube video, a facebook post or a
             | torrent[2], real credit could go to the author.
             | 
             | A rating system or spec could therefore simply
             | accommodate that process. (It should, for example,
             | require that the author and their endorsers make
             | backups available.)
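             | 
             | To make the idea concrete, a toy sketch in Python (none
             | of this is PICS label syntax; all names are invented):
             | store ratings as (rater, subject, scale, value) records
             | and let every reader decide whose ratings count.
             | 
             |     from collections import namedtuple
             | 
             |     # One rating, by one rater, on a scale of the
             |     # rater's own choosing.
             |     Rating = namedtuple("Rating",
             |                         "rater subject scale value")
             | 
             |     def my_view(ratings, trusted, subject, scale):
             |         """Average the ratings of `subject` on `scale`,
             |         counting only raters I chose to trust."""
             |         vals = [r.value for r in ratings
             |                 if r.subject == subject
             |                 and r.scale == scale
             |                 and r.rater in trusted]
             |         return sum(vals) / len(vals) if vals else None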
             | 
             | Journals are from the horse-and-carriage days. It is
             | quite embarrassing that we haven't come up with
             | something modern.
             | 
             | [1] - https://www.w3.org/PICS/services-960303.html
             | 
             | [2] - torrents are nice to share huge data sets
        
         | ylem wrote:
         | This is actually being done in some journals--with
         | registration of studies. I have arguments at times with
         | people about Scientific Reports--I think that as long as
         | the results are technically correct, we really need to
         | encourage people to publish "boring" results. I have done
         | this as community service, but it takes a lot of time and
         | effort--even more so if you are correcting a previous
         | boring result with another boring result.
        
           | kerkeslager wrote:
           | I don't think "encouraging" is enough. It needs to be built
           | into the fabric of how science is performed.
        
         | kashyapc wrote:
         | I like what you're saying. Do you (or anyone on this thread)
         | have any comments/thoughts on the following?
         | 
         | During a conversation with an academic researcher (non-
         | Computer Science) friend, I brought up the topic of data
         | sharing, especially in the context of the infamous
         | "replication crisis". They had their reasons for not
         | sharing. I'm loosely paraphrasing here, while trying hard
         | not to misrepresent/misremember their exact views:
         | 
         | "I want to protect my data; I don't have enough time to present
         | my data in a presentable form; and more importantly, they'll
         | just steal my idea and go present it as theirs--and I might
         | lose funding" ... and so on.
         | 
         | I can empathize with the academic pressure of "publish or
         | perish". And not least of all, "need some food on the table,
         | and roof over my head".
         | 
         | But I still wonder: surely there must be effective ways to
         | gently persuade such a researcher (especially in the 'soft
         | sciences'--I'm not using the term derogatorily) of the
         | importance of sharing data that allows reproducibility of a
         | given experiment?
        
           | kerkeslager wrote:
           | To be honest, I don't think this is going to be solved from
           | the bottom up. I think a lot of scientists know that they are
           | making compromises between doing science and pursuing their
           | career. But we can't reasonably ask people to do better
           | science when better science means living on an adjunct salary
           | for the rest of their life. The change has to come from
           | publishers.
           | 
           | The replication crisis will continue until publishers
           | incentivize replication.
        
           | theptip wrote:
           | One angle that I think is worth exploring is the funding
           | bodies putting requirements in place around publishing data
           | (and the quality of that data) as a condition around funding.
           | 
           | If NIST required that you publish the data (say within N
           | years to cover the concern about getting scooped on follow-up
           | papers), and dinged you on future funding applications if you
           | didn't meet their quality/reproducibility metrics, perhaps
           | that would help to align incentives.
           | 
           | This is the same sort of idea as requiring research from
           | public funding to be put in open-access journals so that the
           | public can benefit from it.
        
       | annoyingnoob wrote:
       | All of these problems plus low pay, why would anyone go into
       | academia?
        
       | ptero wrote:
       | There is no perfect solution, but requiring access to both the
       | data and full methodology for experimental sciences should help.
       | 
       | Even problems of cherry-picked data would be partially
       | exposed eventually; and eventual exposure is still a very
       | effective deterrent in science. My 2c.
        
       | csours wrote:
       | What would you like to see at the top of any article covering
       | the press release of a study?
       | 
       | Something like an infobox with the p-value, whether it was
       | pre-registered, the sample size, the funding organization,
       | whether it was double-blind, etc.?
       | 
       | This comic tackles the academia side of things, but a lot of
       | that motivation comes from press coverage. If the press were
       | better equipped to be critical of bad studies, academia would
       | give less credence to them.
        
       | giardini wrote:
       | OK, so "Science Fictions" was just released and is full of
       | cartoons and we have an obvious promotional push going on. But I
       | simply must recommend the (possibly) more mundane (no comics) but
       | nonetheless excellent David H. Freedman book:
       | 
       | _"Wrong: Why experts keep failing us--and how to know when
       | not to trust them. Scientists, finance wizards, doctors,
       | relationship gurus, celebrity CEOs, ... consultants, health
       | officials and more"_
       | 
       | https://www.amazon.com/Wrong-us-Scientists-relationship-cons...
       | 
       | which begins with an interview with John Ioannidis and goes on to
       | discuss in detail why so many academic (and expert) publications
       | are wrong and how they got that way.
        
         | sradman wrote:
         | Stuart Ritchie's book _Science Fictions_ was not illustrated by
         | Zach Weinersmith, only the article was.
        
       | thecreamedcorn wrote:
       | I find it interesting that obviously smart people (the guy who
       | illustrated this) are unwilling to question whether science in
       | and of itself is a noble and positive endeavour for humanity.
       | It's always an argument like: if the scientific process was
       | followed, or if academia was structured correctly, or if the gov
       | didn't sponsor bad research, etc...
       | 
       | Most of the current world population is totally oblivious to
       | scientific advancement. Every civilization before 200 years
       | ago probably had less than a percent of the scientific
       | knowledge we have now, and what do we have to show for it? We
       | live a bit longer and die less; that's about it. There are no
       | other impacts science has had on the human experience that
       | are an undebatable good, so why do people insist on this
       | grandiose idea that if we just keep following science we'll
       | eventually be an enlightened people?
       | 
       | But I guess that goes against most people's preconceptions
       | too much, so just throw a panel at the beginning and end of
       | your comic saying science is good.
        
       | t0mbstone wrote:
       | I can't help but wonder how many of the issues with scientific
       | papers couldn't be solved with technology. The notion that
       | science is so deeply rooted in antiquated concepts like paper
       | journals and academic constructs like tenure is absurd when the
       | internet has been around for as long as it has.
       | 
       | For example, imagine if scientific papers were voted up or down
       | by a community, kind of like stack overflow.
       | 
       | Or imagine if scientific papers had to publish all of their
       | source materials and instructions for replicating the
       | experiment, and there was a system for tracking and showing
       | whether the experiment had been validated or disproven.
       | 
       | What if you "game-ified" scientific papers and gave people points
       | for publishing, but also gave people twice as many points for
       | disproving a paper?
       | 
       | Imagine if we had a platform for tracking scientific theories and
       | experiments that was a combination of democratic/meritocratic
       | administration (like wikipedia), change logging/tracking (like
       | github), and reputation management (like stack overflow)...
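       | 
       | The scoring rule is simple enough to state as code (a toy
       | sketch; the point values are arbitrary and nothing here is a
       | real platform):
       | 
       |     PUBLISH = 10
       |     DISPROVE = 2 * PUBLISH  # disproving pays double
       | 
       |     scores = {}
       | 
       |     def record(author, points):
       |         scores[author] = scores.get(author, 0) + points
       | 
       |     record("alice", PUBLISH)   # published a paper
       |     record("bob", DISPROVE)    # disproved one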
        
       | sradman wrote:
       | This comic by Zach Weinersmith summarizes Stuart Ritchie's
       | recent book _Science Fictions_. To combat the problem of low-
       | quality science papers, one of the panels suggests:
       | 
       | > [journals] can demand scientists share their data, and to prove
       | that they've written down their analysis plans before they touch
       | the data
       | 
       | I wonder if this doesn't gloss over a deeper underlying problem:
       | journals have traditionally assumed the copyright of the paper.
       | Journals themselves have an incentive to obfuscate and protect
       | the underlying data and content.
       | 
       | Ultimately, any complex system or institution will be more
       | susceptible to gaming when it is mature and its value proposition
       | clearly established. Anti-gamification is hard to design into the
       | early stages of a system when it is needed most.
        
         | wizzwizz4 wrote:
         | [deleted]
        
           | throwanem wrote:
           | The comic is the posted article.
        
       | drummer wrote:
       | Nothing proves that comic more than the current covid-19
       | 'pandemic', which is largely based on fear and BS (bad
       | science). The doctors and scientists who actually make sense
       | get censored into obscurity, while sensational, fear-agenda-
       | promoting info gets published.
        
       | gentleman11 wrote:
       | The fixes they propose are good ones, but they aren't
       | grounded in reality. They ignore how we got here in the first
       | place. Citations matter as a proxy for importance. Negative
       | studies are inherently uninteresting. Companies fund
       | groundbreaking work because they want to be associated with a
       | breakthrough. Scientists publish in bad journals because
       | their careers depend on it--it's an entire lifetime's work to
       | get tenure and research grants; they can't just throw that
       | away. Journals publish lousy studies because they don't have
       | enough good ones--the journals will not self-destruct in
       | order to slim down for us.
       | 
       | To fix the system will take a more honest look at the incentives
       | of the people/institutions who create the incentives, and so on.
        
       | adamnemecek wrote:
       | You need a better science publication platform. Like arxiv and
       | github combined.
        
         | sradman wrote:
         | Arxiv, github, and a self-publishing style platform that
         | supports reproducible digital artifacts, i.e., the published
         | paper. IIRC, many flawed papers were the result of data errors
         | saved in a spreadsheet.
        
           | jhrmnn wrote:
           | I fully agree and try to do that in my research, but it's
           | very hard to do 100%. I just wrapped up a 1.5-year
           | computational research project, and we have all the raw and
           | processed data, the main code, the processing scripts, the
           | notebook that generates the figures, etc. But it's still not
           | a fully automated pipeline. The missing pieces:
           | 
           | * Some older calculations were run with somewhat older
           | versions of the code. Of course we believe that the results
           | wouldn't change, and we recalculated some, but not all. We
           | didn't keep track of exactly which version was used for which
           | calculations, because that's simply very demanding in the
           | middle of a complex research project.
           | 
           | * Some data in the text and tables of the paper are still
           | extracted manually from the code. We don't have a full
           | templating system where the data could be automatically
           | inserted into the paper. You could use something like Jinja
           | to do it, but then every coauthor needs to have high
           | technical skills and it's just time-consuming to maintain in
           | general.
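           | 
           | For what it's worth, a minimal sketch of the Jinja route
           | (the file names and the \VAR delimiter are my own
           | invention, not a standard), rendering analysis results
           | straight into the LaTeX source:
           | 
           |     import json
           |     import jinja2
           | 
           |     # Results exported by the processing scripts
           |     # (hypothetical file).
           |     with open("results.json") as f:
           |         results = json.load(f)
           | 
           |     # Custom delimiters, so templates don't clash with
           |     # LaTeX's braces and percent signs.
           |     env = jinja2.Environment(
           |         loader=jinja2.FileSystemLoader("."),
           |         variable_start_string="\\VAR{",
           |         variable_end_string="}",
           |     )
           | 
           |     template = env.get_template("paper_template.tex")
           |     with open("paper.tex", "w") as f:
           |         f.write(template.render(**results))
           | 
           | In the template, a number would then appear as e.g.
           | \VAR{mean_energy} and never be copied by hand again. But
           | as you say, every coauthor then has to build the paper
           | through this pipeline.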
        
           | dijksterhuis wrote:
           | ML researcher here -- many people do this already:
           | 
           | https://github.com/carlini/adv-eval-paper
           | 
           | Personally I try to put as much on GitHub as possible.
        
           | beagle3 wrote:
           | Would help in stuff that doesn't require physical
           | measurements - e.g. computer science, neural network results;
           | would not help in medicine, psychology, biology or chemistry,
           | where reactions are reported by (often indirect) observation,
           | and it is often the data that is fudged (or just made up) in
           | retracted papers.
        
             | hobofan wrote:
             | It would also help with those to some degree, though I feel
             | like a lot of them could do a lot better with what's
             | currently available.
             | 
             | I'm currently studying biochemistry and have a few years of
             | experience as a software engineer. In trying to dive into
             | the papers in the field and just trying to replicate the
             | data analysis, I came to see how bad the state of data and
             | code availability is. It varies a lot between subfields,
             | but overall the current state seems pretty abysmal.
        
             | abdullahkhalids wrote:
             | IIRC, some fake-data papers have been identified
             | because the random errors in the data didn't have the
             | right statistical properties.
             | 
             | Besides, sharing data is good because others can run
             | alternative analyses on it.
        
           | adamnemecek wrote:
           | And better citations. I want to be able to link to a
           | particular sentence of a particular version of a paper.
           | 
           | Also pull requests.
        
             | dijksterhuis wrote:
             | > I want to be able to link to a particular sentence of a
             | particular version of a paper.
             | 
             | Probably best that you read through whole papers instead of
             | looking for one sentence.
        
             | mattkrause wrote:
             | Could you explain what the sentence-level citation adds?
             | 
             | I've heard several people ask for this, but never
             | understood why. Most citation formats let you include page
             | numbers; you can usually work in other location information
             | ("See Foo et al. (2020)'s Figure 3A") too.
        
               | adamnemecek wrote:
               | Maybe, but most people don't do it.
        
       | fritzo wrote:
       | paraphrasing to emphasize irony:
       | 
       | "Science should be based on solid data: published, auditable,
       | peer-reviewed numbers. Data is good, data is objective, data is
       | truth.
       | 
       | "Academic hiring is broken. We can't base academic hiring on
       | numbers because people game the numbers. In academic hiring we
       | need to be subjective, to evaluate the intrinsic merit of each
       | researcher. Data is corrupt, data isn't sufficiently subjective,
       | data is flawed."
        
       | goatinaboat wrote:
       | Nobody in the world gets to do the "fun" part of their job more
       | than a fraction of their time. I don't think scientists are
       | uniquely hard done by here. They are enormously more privileged
       | than the vast majority of people, being funded to do something
       | purely speculative, it's not too much to ask them to publish it
       | so others can benefit from that spending too - which largely
       | comes from taxpayers doing less fulfilling jobs.
        
       | amatic wrote:
       | I think the core problem is our ignorance of psychology - we
       | don't know how humans really work, what makes us tick, what
       | 'incentives' should be built into system design to move
       | toward better science, or whether 'incentives' are even a
       | good conceptualization of human motivation. We will not fix
       | science until we understand what makes scientists behave as
       | they do, and until we figure out how to design systems for
       | humans. Though maybe we'll stumble upon a better system via
       | blind variation in system properties and selective retention,
       | based on some novel metric. Scientific psychology is rather
       | weak in explaining and predicting how humans will behave.
        
       | einpoklum wrote:
       | A couple of years ago, Prof. Michael Stonebraker gave a talk
       | at ICDE (IEEE Intl. Conference on Data Engineering) 2018 in
       | Paris on the problems of the pursuit of the "LPU", the least
       | publishable unit of work, and his impression that few people
       | pursue deeper and more significant work because of this and
       | other factors. If you can find a summary or a recording of it
       | somehow, it's worthwhile to listen to IMHO.
        
       | ajuc wrote:
       | Is there something like a negative-citation index?
       | 
       | Where the refutation of a paper propagates out through
       | citations to the other scientists and newspapers that cited
       | it.
       | 
       | It could be included as a factor when hiring scientists.
       | 
       | And of course the person who refuted a false paper should
       | receive the citations of that false paper. It's only fair.
        
         | crankishness wrote:
         | Expanding on this, suppose there is an anti-journal,
         | tentatively titled 'Journal of Bad Science', which features
         | thoroughly refuted, bad-faith papers. Not just rejected papers,
         | as the reasons can be as innocuous as a few spelling mistakes,
         | but clearly and unambiguously bad research.
         | 
         | This would form the basis of the Crank Index of a paper,
         | which can be simplified into a stoplight system: good
         | research with good sources is GREEN; getting featured in
         | the Journal of BS earns a paper the esteemed distinction of
         | a blaring scarlet RED; citing a RED paper will mark a paper
         | ORANGE; citing ORANGE research leaves you YELLOW... and
         | throwing together lots of ORANGE and YELLOW citations will
         | nudge your paper up the spectrum towards RED.
         | 
         | This would incentivize researchers to not only care about the
         | quantity of the citations they share with each other, but to be
         | extremely vigilant of the quality of those citations as well.
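         | 
         | A toy sketch of how such an index could be computed over a
         | citation graph (the accumulation rule for many ORANGE and
         | YELLOW citations is left out, and all names are made up):
         | 
         |     RED, ORANGE, YELLOW, GREEN = 3, 2, 1, 0
         | 
         |     def crank_index(paper, cites, flagged, memo=None):
         |         """cites maps a paper to the papers it cites;
         |         flagged is the set of Journal-of-BS papers."""
         |         if memo is None:
         |             memo = {}
         |         if paper in memo:
         |             return memo[paper]
         |         if paper in flagged:
         |             memo[paper] = RED
         |             return RED
         |         memo[paper] = GREEN  # break citation cycles
         |         worst = max((crank_index(c, cites, flagged, memo)
         |                      for c in cites.get(paper, [])),
         |                     default=GREEN)
         |         # One step less severe than the worst citation.
         |         memo[paper] = max(GREEN, worst - 1)
         |         return memo[paper]
         | 
         | So crank_index("p3", {"p3": ["p2"], "p2": ["p1"]}, {"p1"})
         | comes out YELLOW. The hard part a real index would face is
         | distinguishing citing-to-build-on from citing-to-refute.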
        
           | jpeloquin wrote:
           | It would be great if journals published their articles as
           | structured data. Then readers could compute a Crank Index, or
           | do other automated citation analysis and filtering (e.g.,
           | flag excessive self-citations), using independently developed
           | software. We shouldn't need to wait for journals to live up
           | to their responsibility to innovate and improve their quality
           | control.
           | 
           | As for a list of unambiguously bad papers, we do have
           | Retraction Watch: https://retractionwatch.com/. It's mainly a
           | retraction tracker, but there is also associated community
           | effort to proactively identify research misconduct.
        
           | fsflover wrote:
           | Let's say I cited a "red" paper and explained how and why
           | they were wrong. Does my paper become "orange"? I hope not,
           | but that would require a lot of rigorous manual verification
           | by the journal editors...
        
             | ajuc wrote:
             | Ideally papers would start citing using a new format that
             | makes explicit the dependencies between papers.
             | 
             | For example:
             | 
             |     refutes: ...
             |     expands on: ...
             |     depends on: ...
             |     alternative approach to: ...
             | 
             | etc.
        
         | lecarore wrote:
         | This sounds like a good addition to the incentives system.
        
         | abdullahkhalids wrote:
         | The number of papers which are refuted, in the sense that a
         | serious error is identified that invalidates the main
         | conclusions of the paper, is a vanishingly small percentage
         | of papers published.
         | 
         | This doesn't mean that a vanishingly small percentage of papers
         | are wrong, only that it is very hard to identify errors because
         | papers usually don't contain enough information to fully
         | reconstruct the results. There are a lot of assumptions of good
         | will in the system.
        
           | ajuc wrote:
           | Then accept papers that refute these results by supplying
           | arbitrary data where it's missing from the original.
        
       | pvaldes wrote:
       | LOL, the annelida part killed me.
        
       ___________________________________________________________________
       (page generated 2020-07-30 23:00 UTC)