withinboredom a day ago

There's this guy I usually have on in the background on YouTube who replicates chemistry experiments -- or attempts to. It's pretty rare for him to find a paper that doesn't exaggerate yields or that goes into enough detail; he usually has to guess at things.

  • datadrivenangel a day ago

    You don't exaggerate yields, you just publish the best one you get out of a dozen attempts. Chemistry is messy.

    • thyristan 21 hours ago

      That, in science, is called "lying".

      Either you publish the range of results, the average plus standard deviation, or the average plus standard deviation of a subset together with the exclusion criteria and exclusion range. Picking a single result is a lie, plain and simple, and messiness is not an excuse.
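      The reporting options listed above can be sketched numerically. A minimal Python example with made-up yield numbers (the data and the 50% cutoff are illustrative assumptions, not from any real paper), showing the range, the mean plus/minus standard deviation, and the same for a subset with the exclusion criterion stated explicitly:

```python
import statistics

# Hypothetical yields (%) from a dozen runs of the same reaction
yields = [62, 58, 65, 61, 12, 60, 63, 59, 64, 57, 61, 66]

# Option 1: report the full range
lo, hi = min(yields), max(yields)

# Option 2: report mean +/- standard deviation over all runs
mean_all = statistics.mean(yields)
sd_all = statistics.stdev(yields)

# Option 3: report mean +/- sd of a subset, stating the exclusion
# criterion explicitly (here: runs below 50% counted as failed)
kept = [y for y in yields if y >= 50]
mean_kept = statistics.mean(kept)
sd_kept = statistics.stdev(kept)

print(f"range: {lo}-{hi}%")
print(f"all runs: {mean_all:.1f} +/- {sd_all:.1f}%")
print(f"excluding runs < 50%: {mean_kept:.1f} +/- {sd_kept:.1f}%")
```

      Reporting "66%" alone from this data set is the cherry-pick being criticized; any of the three honest summaries makes the one failed run visible or explicitly accounted for.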

      • passwordoops 21 hours ago

        Hence the crisis we have in science today.

        As an aside, I'm working at a QC chem lab now, with results that have a direct impact on revenue calculations for clients. The reports therefore go to accountants, and therefore error bars don't exist. We recently had a case where we reported 41.7 when the client expected 42.0 on a method that's +/- 1.5... They insisted we remeasure because our result was "impossible". The repeat gave 42.1, and the client was happy to be charged twice.
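        The arithmetic behind that anecdote, as a quick sketch using the numbers from the comment (the simple within-tolerance check is an illustrative simplification, not the lab's actual acceptance procedure):

```python
# Numbers from the anecdote: expected value, two measurements,
# and the method's stated precision of +/- 1.5
expected = 42.0
first, repeat = 41.7, 42.1
tolerance = 1.5

def consistent(measured: float, target: float, tol: float) -> bool:
    """True if the measurement agrees with the target within tolerance."""
    return abs(measured - target) <= tol

# Both readings agree with 42.0 well within +/- 1.5, and with
# each other -- neither result was ever "impossible"
print(consistent(first, expected, tolerance))
print(consistent(repeat, expected, tolerance))
print(consistent(first, repeat, tolerance))
```

        Both 41.7 and 42.1 sit a fraction of the method's uncertainty away from 42.0, so the two measurements are statistically the same number.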

      • mattmanser 21 hours ago

        See my comment too: you jump straight to "lying", but as the GP said, chemistry is messy.

        • thyristan 21 hours ago

          Any other science is messy as well.

          Truck passing by on the nearby road? Oops, my physics experiment got shaken, results look messy. Lab animal caught a cold? Oops, genetics experiment now has messy data. Atmosphere is turbulent and some shitty starlink satellite passed by at the wrong moment? Oops, my stellar spectra are messy now. Imperfection in my test ingot? Oops, now my tensile strength measurements have messy data because a few ripped too early...

          It is the nature of experimental science to deal with messiness. And dealing with it means being honest about it. You write it up as it happened, find the problems in the messy parts of your data, exclude them, and explain why and how. Hand-picking results and simply omitting data you find inconvenient is not science, it's fraud.

          If I'm allowed to just pick one result, I can show you a perpetuum mobile, cold fusion, superhuman intelligence in mice, and tons of other newsworthy things...

          • mattmanser 9 hours ago

            Can I ask if you've done any actual commercial work in any science?

            From the way you're talking, I'm going to guess you're an armchair commentator.

            One person performing an unfamiliar experiment once is going to get lower yields and occasional failures.

    • awjlogan 20 hours ago

      Compare the yields in a typical JACS (or any high-end journal) paper with those in OrgSyn and I think it's pretty clear that yields in many papers are substantially exaggerated. It's a single untraceable number and the outcome of your PhD depends on it -- the incentive is very clear. Leave a bit of DCM in, weigh, then high-vac to get rid of the singlet at 5.30 ppm, and no one's any the wiser...

  • mattmanser 21 hours ago

    I did a lot of chemistry for a year when I worked as a QA for a pharmaceuticals company before going to uni.

    So much so that when I did Chemistry at uni I got asked if I was cheating a few times in labs, until I explained.

    It's actually really hard to get any experiment perfect the first time.

    Even with a year's practice of measuring and mixing and titration and all the other skills you need, I'd still get low yields, or bad results occasionally. Better than everyone else, but still not perfect.

    I also noticed that the more you repeat a particular process, the better your results get. Just like practicing a solo on an instrument, or a particular pool shot, or cooking a particular meal: there's a level of learning and experience needed for each process, not for chemistry in general.

  • zipy124 a day ago

    Was it perhaps "That Chemist"? He has some decent videos on completely bogus papers, but I don't think he does reproductions. I'd be interested in that channel if you happen to find it in your watch history.

    • 8note 16 hours ago

      NileBlue/NileRed typically pulls his processes from papers that have somewhat dubious documentation, and his results vary from the papers'.

      he's not going out of his way to reproduce papers; it's just a step along the way of turning peanut butter into toothpaste, or something of the sort

drgo 21 hours ago

I think what publishers need to do is retain reviewers (possibly on a part-time basis); many retired scientists could benefit from those opportunities, and it's a way to keep senior scientists engaged in their fields. For most submitted papers, there is no need for the reviewer to be sub-specialized in the paper's field (most reviews assigned to sub-specialists are actually done by their postdocs and grad students), and a hiring process (with subsequent evaluation) ought to be more effective and speedier than randomly contacting people to beg for reviews. Until the review process is taken more seriously by publishers and journal editors, the quality of published science will continue to deteriorate.

jruohonen a day ago

> Some 53% of researchers accepted the invitation to review when offered payment, compared with 48% of those who received a standard, non-paid offer. On average, paid reviews came in one day earlier than unpaid ones.

Neither sounds like a notable effect. (I was once offered payment for a peer review, but declined it.)

  • mmooss a day ago

    Don't overlook the other experiment's results.

mmooss a day ago

What are the requirements of a review? And what is the marketplace for someone meeting those requirements?

What expertise is required - someone who researches the same questions? Same general domain? Adjacent domain?

And how long does it take? I imagine that depends on many details.

Finally, what are they reviewing for? Is it a once-over for errors in method? Something like grading a student paper?

  • tsumnia 21 hours ago

    Speaking as a CS education reviewer, sometimes the only criterion is "signing up to review", though solicitations are often sent to professionals in the domain (through personal requests or blanket email campaigns), as well as through the respective mailing lists. I review papers for, I think, 4-5 conferences, mostly because I have colleagues who serve/publish in those spaces (you declare conflicts of interest to avoid bias).

    Each publisher/conference has its own reviewing guidelines, but at least for the conferences I've reviewed for they include: a summary (2-5 sentences tops), the strengths and weaknesses of the research, and potentially your opinion on the piece. You are typically asked to state your familiarity with the research space, since you may be reviewing methodologies that you were not explicitly trained in. This all distills into a metric that effectively says "this paper should be accepted/not accepted", which is then handed to a 'senior' reviewer to summarize for the conference to decide. All of my conferences are double-blind, single submission, but I have colleagues whose venues let authors respond to reviewer critiques.

    Most conferences recognize that things like grammatical issues can happen, so reviewers are asked only to point them out rather than use them as a basis for rejection; however, if the paper is riddled with mistakes, that can be grounds for rejection. Likewise, since CS education is a combination of CS and cognitive psychology, some of the discussion concerns "appropriateness for CS education research". For example, I once reviewed a paper that was clearly about theater-based education techniques but had CS shoehorned into one paragraph (that was it). By contrast, something like measuring time delays in student responses to a tutoring system, to help distinguish when students become distracted or take a break, is squarely in scope.

    • mmooss 20 hours ago

      Thanks. Someone told me that 'blind' review often doesn't work because reviewers already know who is doing what in their field.

      • tsumnia 20 hours ago

        It can depend on the field and the methodologies used -- there have been some papers I've reviewed where I could guess who the authors were from the contents. I can't really offer a counterpoint on non-blinded reviews, as I've only done blind ones. I have heard that some reviewers use the anonymity to be particularly rude, but I've only experienced that once, and I used our 'discussion' phase to express my concerns.

  • goosedragons 18 hours ago

    Generally they want to know whether the paper is worth publishing and what needs fixing, clarification, etc. The reviewers should be people who understand the topics in the paper well enough to identify issues; these are usually people who have published articles on similar topics, or people those people recommend. It's more in-depth than grading a student paper.

westurner 5 days ago

> USD $250

How much deep research does $250 yield by comparison?

Knowledge market > Examples; Google Answers, Yahoo Answers, https://en.wikipedia.org/wiki/Knowledge_market#Examples

  • pjdesno a day ago

    I'm not sure why one would compare reviews by acknowledged experts in a field with stuff written by anonymous randos, and it seems highly unlikely that anyone with the appropriate qualifications would be lurking on some Mechanical Turk-like site.

    I'm also deeply suspicious of the confidentiality of anything sent to one of those sites.

    However this does suggest the idea that a high-powered university in a low-income country might be able to cut a deal to provide reviewing services...

  • moomin a day ago

    It’ll get you an electrician for about three hours in London. How long do these papers take to read critically?

    • voxl 17 hours ago

      One full work day to do it decently. Two full work days to do it well.

  • tdeck 21 hours ago

    You can get 50 reviews on Fiverr for that price!

odyssey7 19 hours ago

Peer review is work. The workers are subject to capitalism. Pay them, or capitalism will optimize the quality unfavorably.