
Non-Political Coronavirus Thread

This isn't usually an issue in medicine; I think the pandemic and the sense of urgency have created it. For most medical journals, it would be a violation of the journal's terms to put an article on the web before it was peer-reviewed and published.

That’s the way it is supposed to be in my field, too. You can technically post working papers, but they’re either pre-peer review or currently under review, so I don’t know many academics who would feel comfortable using those findings as cavalierly as the authors and press have in this instance. Seems like pure partisan hackery designed to rile up rubes like 2&2.
 
This is interesting info. It doesn't exactly work this way with antibody testing. You can have 3 samples, all with antibodies detected, and still have a false positive result, because the problem isn't with the samples but with the test itself, or with what antibody presence means. Someone can have positive antibodies, but that doesn't mean they had, or were exposed to, the disease.
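
To put rough numbers on that point, here is a quick Bayes' rule sketch in Python (the sensitivity, specificity, and prevalence values below are made up for illustration, not taken from any particular antibody test) showing how a positive result can be more likely false than true when the true prevalence is low:

# Illustration only: all three numbers below are assumed, not from any real test.
sensitivity = 0.90   # P(test positive | person actually had the disease)
specificity = 0.98   # P(test negative | person never had the disease)
prevalence  = 0.02   # assumed true fraction of the population exposed

# Bayes' rule: chance that a positive result reflects real past infection
p_true_pos  = sensitivity * prevalence
p_false_pos = (1 - specificity) * (1 - prevalence)
ppv = p_true_pos / (p_true_pos + p_false_pos)

print(f"P(actually exposed | positive test) = {ppv:.2f}")
# With these assumed numbers, roughly half of all positives are false,
# even though the test is "98% specific".

The weak link in that arithmetic is the test and the base rate, not the samples, which is the point being made above.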

Of course, you can have three false positives in an animal survey too, because the observer is bad at bird song IDs. However, the probability of three "independent" tests of the same individual all returning a false positive is low, or it should be if the test is any good to start with. The models I work with assign sites to probabilistic states; i.e., with each positive detection the probability that the animal species of interest is present goes up, but it is never 100% certain when using a false-positive model. The second issue, antibody presence being decoupled from past disease, renders the results of a study like this meaningless if the objective is to retrospectively understand disease presence.
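
Here is a bare-bones sketch of that updating logic in Python (the detection and false-positive rates are invented for illustration, and this is a toy version of the false-positive occupancy idea, not any published model): each positive detection pushes the posterior probability of presence up, but as long as the false-positive rate is above zero it never reaches 100%.

# Toy Bayesian update for species presence at a site, allowing false positives.
# All rates below are assumed for illustration only.
prior_presence = 0.5   # prior probability the species occupies the site
p_detect_true  = 0.6   # P(positive detection | species present)
p_detect_false = 0.1   # P(positive detection | species absent), e.g. a misheard song

def update(prob_present, detected):
    """One survey visit: update P(present) given a detection or non-detection."""
    if detected:
        like_present, like_absent = p_detect_true, p_detect_false
    else:
        like_present, like_absent = 1 - p_detect_true, 1 - p_detect_false
    numerator = like_present * prob_present
    denominator = numerator + like_absent * (1 - prob_present)
    return numerator / denominator

prob = prior_presence
for visit, detected in enumerate([True, True, True], start=1):
    prob = update(prob, detected)
    print(f"after visit {visit}: P(present) = {prob:.3f}")
# The probability climbs with each positive detection but stays below 1.0,
# because three false positives in a row are unlikely yet still possible.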
 
I feel like you guys are just gliding over the fact that the error-filled, unreviewed first-draft paper came from STANFORD
 
That’s the way it is supposed to be in my field, too. You can technically post working papers, but they’re either pre-peer review or currently under review, so I don’t know many academics who would feel comfortable using those findings as cavalierly as the authors and press have in this instance. Seems like pure partisan hackery designed to rile up rubes like 2&2.

John Ioannidis revels in shit stirring. He's basically best known for papers arguing that a lot of medical research is garbage and that we make recommendations/guidelines based on flimsy evidence. And a lot of the time, he's right. But yeah, a little bit of irony here.
 
1) I’ve never encountered somebody quite so proud of his relative ignorance as 2&2. It’s like a badge of pride with this one. These are Stats 101-level errors. My methods students could have poked holes in this crap.

2) What’s up with this (new) trend, birdman? I see it a lot in Econ, but almost never in Sociology and Political Science. It seems like all of the partisans are using non-peer-reviewed pre-prints to publicize findings that peer review quickly reveals to be deeply problematic. Do you see stuff like this happening in your field?

I haven't seen any wildlife conservation policy-relevant papers put out on a pre-print service yet. I have seen a number of wildlife statistical estimation papers out on https://www.biorxiv.org/ but their focus hasn't been on influencing policy, just on influencing analysis methods. One reason, I think, that pre-prints are not a conservation policy issue yet is that government agencies are not permitting their scientists to participate in pre-prints as co-authors. Like, my USFWS colleagues are not allowed to use a pre-print service; all of their science products have to be peer-reviewed.
 
Now this would really suck:
https://www.jpost.com/HEALTH-SCIENC...t-30-different-strains-new-study-finds-625333

"More than 30 different mutations were detected, of which 19 were previously undiscovered.
“Sars-CoV-2 has acquired mutations capable of substantially changing its pathogenicity,” Li wrote in the paper."

Not peer-reviewed, so let's hope errors are found. First question that comes to mind: if there are 30 strains, does that mean that if you get one, you can still get the other 29? Same question for vaccines.

Tracking back to what's up with all these preprints. If an article does not link the actual study, you should automatically be skeptical; I had to go find the paper myself, which should give you pause. Second, even before the pandemic there was always caution about anything that comes out of India and China. They have a scientific culture of print it all, quantity over quality. A lot of the science is of borderline or fringe significance, low reproducibility, and, overall, just falsehoods. There was even a study by Science or Nature a few years ago in which they sent a completely made-up, flawed paper to journals worldwide to see which would accept or reject the fabricated findings; let's just say lots and lots of journals exist simply as a money grab. Finally, the media will take findings and twist them into a narrative that may or may not be supported by the actual paper, either because they are pushing an agenda or because they have no understanding of the science. That is another reason the actual paper should always be linked.

Like, with this paper I can read the abstract and tell you the study is already flawed: they used Vero cells, which are an easy-to-grow cultured African green monkey kidney epithelial cell line, so any cytopathic death findings aren't that relevant, yet the entire paper is based on those findings.
 
The fuck is a milkwich. Dude is a statistics professor at Columbia and he’s saying that study has more holes in it than Swiss cheese.

The dude that 2&2 laments as the former CEO of Coinbase was also a former professor of bioinformatics and computational biology, and that didn't carry enough weight for 2&2 to accept it as peer review.

It shouldn't come as a surprise that a paper that represents the results of a mathematical model would be subject to review by statisticians, among myriad others.
 
1) I’ve never encountered somebody quite so proud of his relative ignorance as 2&2. It’s like a badge of pride with this one. These are Stats 101-level errors. My methods students could have poked holes in this crap.

2) What’s up with this (new) trend, birdman? I see it a lot in Econ, but almost never in Sociology and Political Science. It seems like all of the partisans are using non-peer-reviewed pre-prints to publicize findings that peer review quickly reveals to be deeply problematic. Do you see stuff like this happening in your field?

The doubling down again and again is a special kind of special.
 
Tracking back to what's up with all these preprints. If an article does not link the actual study, you should automatically be skeptical; I had to go find the paper myself, which should give you pause. Second, even before the pandemic there was always caution about anything that comes out of India and China. They have a scientific culture of print it all, quantity over quality. A lot of the science is of borderline or fringe significance, low reproducibility, and, overall, just falsehoods. There was even a study by Science or Nature a few years ago in which they sent a completely made-up, flawed paper to journals worldwide to see which would accept or reject the fabricated findings; let's just say lots and lots of journals exist simply as a money grab. Finally, the media will take findings and twist them into a narrative that may or may not be supported by the actual paper, either because they are pushing an agenda or because they have no understanding of the science. That is another reason the actual paper should always be linked.

Like, with this paper I can read the abstract and tell you the study is already flawed: they used Vero cells, which are an easy-to-grow cultured African green monkey kidney epithelial cell line, so any cytopathic death findings aren't that relevant, yet the entire paper is based on those findings.

Thanks...
 
The hard truth is that no matter how statistically flawed the studies to date are, the coronaplague left unchecked is still a hell of a lot more deadly than the seasonal flu. So we checked it, with everything from social-distancing suggestions to full-fledged shelter-in-place orders. Now those curves are flattened or flattening, and people are looking for any excuse to get out and about. But people just aren't smart enough to get out and about without throwing a damn 80-person teenager's birthday party in the backyard where they all touch everything and then all head home to dinner with grandma.
And when that happens, enough of those people, their parents, and their grandmas are going to need a ventilator that NYC happens in a whole lot of places.
Because once again, even when they mean well, people are on average selfish idiots. And when you let selfish idiots do what they want with a novel virus that causes a higher percentage of acute respiratory distress than the hospitals in those markets can handle, people unnecessarily die.
Now maybe that price is worth opening up the economy and maybe it isn't for you, but dead people are dead people, so know 100% that if you're for opening things up too quickly or without strict rules and guidelines in place (testing, tracking, etc.), people are going to die and you'll very likely have a real hand in those deaths.
 
Even if the study were statistically and medically sound and positively peer reviewed to the Nth degree, it's still just one study in one tiny area. At best it's an indication that more study is warranted. It's not something intelligent humans use to make any real-world decision on public health policy.
 
Even if the study were statistically and medically sound and positively peer reviewed to the Nth degree, it's still just one study in one tiny area. At best it's an indication that more study is warranted. It's not something intelligent humans use to make any real-world decision on public health policy.

Yet we made real-world public health decisions on unreviewed, wildly speculative, and completely inaccurate projections on the front end, to which people are still holding fast.
 
Yet we made real-world public health decisions on unreviewed, wildly speculative, and completely inaccurate projections on the front end, to which people are still holding fast.

Sure, sometimes you have to make decisions based on limited, un-peer-reviewed information. We often don't have the luxury of sitting back and waiting for a fully vetted, reviewed, and replicated scientific study... as new information becomes available, decision processes can and should be updated and modified. But the new information should be evaluated for quality and statistical validity, and it is hard to see how this information supersedes the previous work. There are obvious study-design and spatial-inference limitations to this Stanford analysis; can you name any obvious statistical flaws or spatial limitations in the previous modeling studies that you are labeling "wildly speculative" and "completely inaccurate"?
 