
Is America the most brainwashed country on Earth?/least scientific belief

What we now call the Big Bang Theory was originally proposed by Georges Lemaître, a Belgian astronomer and university physics professor. He also happened to be a Roman Catholic priest.

I think the worst thing to come out of our contemporary political dialectic is the idea that to have faith means to be anti-science. Practiced with intelligence, an open mind, and a thirst for knowledge, they actually complement each other. Fides et Ratio.

This also isn't a one-way street. Fracking, vaccines, and GMOs are products of the scientific method, but they are often roundly criticized by people one would not categorize as overly dependent on a literal reading of Christian scripture or as right-wing political fanatics.

I think this may have been mentioned here before, but the debate between Bill Nye and Ken Ham is a must see if interested in this subject. Very entertaining.
 
Nice read on the impact factor from today's Scholarly Kitchen - http://scholarlykitchen.sspnet.org/2014/04/28/reinventing-the-impact-factor-for-the-21st-century/

As we move away from a journal economy into an article-based economy, I wonder when the tipping point will finally occur. Journals, and more broadly publishers, still need the impact factor to stand as a proxy for quality. Yet as a metric it is a stilted and backwards affair. Both publishers and authors can juke the system, and it's generally an incubator for bad science. There are plenty of intriguing models in the publishing community, like PLoS, eLife, EMBO, etc., that are trying out open access, post-publication peer review, altmetrics, and so on. Ultimately, though, we still aren't close to the sea change necessary in science, and it's really tough to make these kinds of paradigmatic changes from the top down, whether on the author side or the editorial/publishing side.
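For context, the two-year impact factor being criticized here is just a simple ratio, which is part of why it's so easy to juke. A minimal sketch, with made-up numbers purely for illustration:

```python
# Two-year Journal Impact Factor (JIF) for year Y:
#   citations received in year Y to items published in years Y-1 and Y-2,
#   divided by the number of "citable items" published in Y-1 and Y-2.
# The figures below are hypothetical, not from any real journal.

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year JIF: a citation count divided by a citable-item count."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# e.g. 1200 citations in 2014 to articles from 2012-2013,
# against 400 citable items published in 2012-2013:
print(impact_factor(1200, 400))  # 3.0
```

Because both the numerator (self-citation, review articles) and the denominator (what counts as "citable") can be nudged, the single number hides a lot.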
 
So here's a great example of a finding that could challenge methodology and the way scientists approach mouse models and experiments with lab rats in the future - http://www.nature.com/nmeth/journal/vaop/ncurrent/full/nmeth.2935.html

I can attach or PM the PDF if anyone is interested.

Anyway, the notion that new discoveries challenging generations of thought are a reason not to dogmatically accept settled science is fair. We learn new things every day that challenge current notions. But we are also just refining things that could not have been understood without the methods developed and honed over generations of science.
 
I can write a whole lot more about this at some point, but I'll just make a quick point here. The biggest problem with academic publishing is the whole concept of writing a manuscript and submitting to a specific journal. It makes no sense, particularly with so much overlap with respect to what journals publish.

Submit a manuscript to the Journal of Rocket Science. Wait 3, 6, 9 months, maybe longer, for a rejection. Then submit to the American Rocket Science Journal. Same wait. Reviewers tell you to do basically what you did before it was rejected by JRS. Submit it to Rocket Science Quarterly. Same deal.

So now we're talking about a 2-3 year process between the first manuscript submission and publication. And a lot of people have wasted their time reading it along the way for no real reward, but they do it because they're expected to.

When the article finally gets published in the Rocket Science and Engineering Journal, the publisher gets to sell the product of all that unpaid labor to the university libraries of the same people who did the labor. Well, I really shouldn't call it unpaid labor, because it's part of what the universities pay faculty to do.

So universities pay faculty to write and review manuscripts and edit journals, and then pay publishers for the journals themselves.

The system makes little sense.
 
I don't think hard-science journals have turnaround times as slow as social-science journals, though that's another part of the process that's cumbersome. Not to mention that no two journals have the same submission procedures: you need your figures in a bunch of different formats, there are page limits, different forms to fill out in different ways, etc.

Stuff like ORCID, which is trying to disambiguate author identities, is helpful.
 
my wife usually has a 1-2 mo wait, FWIW.
 
There is a difference between insightful questioning of science and plain ignorance and skepticism. Townie's point about stem cells is that because people didn't understand the research and had some grand moral objection, the funds were diverted away. Yes, other advances came out of that diversion of funds, but science should dictate where it goes, not an ignorant public acting on moral beliefs from a book. How does one help the scientific community? Make the public more science-literate and push for more education.

That's absolute ignorance at its height.

Try Roger Shattuck's coverage of the Human Genome Project and add some historical analysis to your overweening confidence in the morality of a science that has yet to produce anything like a Hippocratic oath. Aldous Huxley, the grandson of Darwin's fiercest defender, presents it beautifully in the "soaring" little book "The Perennial Philosophy."

Shattuck provides an historical context and his survey - "the Wife of Bath effect" - shows the need for caution.

"Forbidden Knowledge" : From Prometheus to Pornography.

A deeply learned study, reviewed and lauded by critics from Le Monde to the New York Times. It will quickly dispel the shortsighted attitude being put forward in this thread. I first came across this thinker while in Paris, reading my bible at that time, the TLS (The Times Literary Supplement).
 
Can vary widely by journal. Our most prominent specialty journal usually lets you know within 6-8 weeks. But yeah, I've been procrastinating terribly with a manuscript after it was initially rejected by a "reach" journal, because I don't feel like making all the stupid formatting, citation, table, graph, etc. changes for another journal. So annoying.
 
And unfortunately, the "minimum threshold" OA journals that allow post-publication peer review, cutting down the time to publication, have taken a beating in the court of public opinion so far. Luckily PLoS has a lot of social/political capital to work with, so I could see this process and its end result improving in the coming months and years.
 
I've read this post several times and am struggling to find the point. First, we get it, you read. I think you've also shown yourself to be powerfully swayed by polemics.

I would add that there are moral codes in science that have been built up over time about what can be done with human subjects and animal subjects, a sort of "jurisprudence of science," so to speak. There are STRICT laws in place about this, and I'm not sure your mid-century reads speak to that well.
 
"Mid century read"? Ok, dum dum...never mind that the book dedicates large parts to the Human Genome Project.

Whatever. You already know everything. No time for polemic, eh? No time for examining epistemological aims...

"In his “best achievement to date” (Harold Bloom), National Book Award- winner Roger Shattuck gives us a “deeply learned, highly intelligent, and beautifully written” (New York Times) study of human curiosity versus the taboo, from Adam and Eve to the Marquis de Sade to biotechnology research."
 
A number of fair points. I've never had a manuscript take longer than about 1.5 years from initial submission to acceptance. In my field, somebody would need to be pretty out of touch with the quality of their paper for it to take close to 3 years to get published. The longest ones I experienced involved submission to 3 journals, plus another paper that needed extensive revisions for publication in Nature. It took close to 8 months to do the required experiments, but it was worth it for a Nature paper, and I don't think that's really the norm for most papers submitted to most hard-science journals. Turnaround time is also reduced because most of us realize the weak spots of our papers, so even before we submit, we have discussed the experiments that will need to be conducted in anticipation of reviewer comments.

I do disagree with the idea that there is no real reward involved in reviewing manuscripts, even if you limit that to ones that eventually get rejected by that particular journal. I gain as much insight into the current and future direction of my field from reviewing manuscripts as I do from going to conferences or even reading published papers. As a guy who focuses on diabetes, I don't get articles for review dealing with basic neuronal development. All of my reviews are at least tangentially related to my work, and since I read them more critically than a published manuscript and get a much more in-depth look at the data than if I see a poster or short talk at a conference, I benefit a great deal from reviewing the papers.

On the flip side, there is next to no way to eliminate the need for experts to peer-review articles. It is unrealistic for an editorial staff, even at a more focused topical journal, to have the depth of knowledge to delve into a submitted article the way somebody who has dedicated years of their life to that small subject can. I suppose the argument could be made that if I weren't reviewing papers I would have more time to read published articles (to gain a similar level of insight), but I would counter that eliminating the majority of expert peer review wouldn't save me any time, because I would then have to spend far more time on every normal article I read, having no idea whether it was even a decent paper to begin with.

Something does need to be done about all the different formatting requirements. Differences in page allotment and figure numbers are a natural byproduct of different journals, but a lot of the differences are just absurd: different fonts or font sizes, different base formats for the figures, different file formats for submission, different forms to fill out before submission, different layouts. Even if you want to publish things in a different manner, I think more of the burden needs to fall on the staff of the journal to make those kinds of format changes and let the researchers focus on getting the data changed around. I think you could easily set up one or two base formats for journals that could be modified in house by those journals to meet their needs. Townie would be far more capable than I am of speaking to the feasibility of that.
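The "one or two base formats" idea boils down to: authors submit one canonical manuscript record, and each journal overlays its own house rules in house. A minimal sketch, where the journal names and the rules themselves are entirely hypothetical:

```python
# Sketch of a "common base format" for submissions. An author submits one
# canonical record; each journal's staff applies its own overrides.
# All journal names and house rules below are made up for illustration.

CANONICAL = {"title": "On Rocket Science", "font_pt": 12, "figure_format": "pdf"}

HOUSE_RULES = {
    "Journal of Rocket Science": {"font_pt": 10, "figure_format": "tiff"},
    "Rocket Science Quarterly": {"font_pt": 11},
}

def apply_house_rules(manuscript, journal):
    """Return a copy of the manuscript reformatted to a journal's requirements."""
    styled = dict(manuscript)                    # never mutate the author's copy
    styled.update(HOUSE_RULES.get(journal, {}))  # overlay journal-specific rules
    return styled

print(apply_house_rules(CANONICAL, "Journal of Rocket Science")["font_pt"])  # 10
```

The point of the sketch is that the overrides live with the journal, not with the author, so resubmitting elsewhere costs the researcher nothing.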
 
Can you fellas take that to a private chat room? We are trying to cover some serious ground here.

Thanks in advance.
 
"Mid century read"? Ok, dum dum...never mind that the book dedicates large parts to the Human Genome Project.

Whatever. You already know everything. No time for polemic,eh? No time for examining epistemological aims...

"In his “best achievement to date” (Harold Bloom), National Book Award- winner Roger Shattuck gives us a “deeply learned, highly intelligent, and beautifully written” (New York Times) study of human curiosity versus the taboo, from Adam and Eve to the Marquis de Sade to biotechnology research."

was talking about Huxley

use your context clues, lec
 
this is what scientists talking about science looks like, lectro
 
To your point about editorial staff expertise first, I would say that editorial staffs are starting to add PhDs (who have increasingly lost funding, I would add; publishing is definitely not most scientists' first choice of career) for exactly these purposes. I know we have 4 on our editorial staff, and they serve to assign reviewers, shepherd manuscripts to more appropriate journals, catalog them by section in journals, etc. They can't do peer review at the level that experts in tiny sub-fields can, though; you're right about that.

As to your latter point, it's one that has been discussed at high levels in publisher meetings for YEARS, I can assure you. The issue at play is that you have a lot of competing interests (commercial vs. non-profit journals, methods journals, open access vs. hybrid vs. traditional, etc.), and publishers can't seem to agree on a template. The actual formatting needed is not a massive editorial burden, I can tell you that much already. Publishers are positioned to take things in almost any format and turn them into their final typeset form; that process, plus distribution, is essentially the main function of a publisher. It's not the what that would be difficult, but the how and the why.
 
Good post, BigTree. I'm definitely not saying we need to eliminate peer review. Not at all. I do think we need to streamline it in a few ways.

First, the editorial staffs need to be an effective line of defense. Over the last year, I've gotten a few papers with very obvious methodological flaws that would be immediately picked up by anybody with reasonable knowledge of the field. Hire a few people to be just that line of defense, and they can provide initial suggestions before the editor has to go begging people to review the paper. I like reviewing, to be honest. But if a journal I don't usually review for sends me very suspect work, I may not be willing to review for them again.

Second, your enthusiasm for reviewing gets at what I'd like to see. Instead of editors trying to hunt down experts, journals should just put submissions online and let potential reviewers select what they want to review. There would be some level of screening, of course, and editors can decide whether they want to accept reviews with potential conflicts of interest. I've reviewed papers written by people I know and vice versa, so I don't think it's that big a deal except in some cases.
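The self-selection model described above needs exactly one automated piece: a conflict-of-interest screen before an editor sees the volunteer. A minimal sketch, with entirely hypothetical names and a deliberately crude overlap check:

```python
# Sketch of reviewer self-selection with a conflict-of-interest screen.
# Submissions are posted openly; reviewers volunteer; volunteers who
# overlap with the author list are flagged out. Names are hypothetical.

submissions = {"ms-001": {"topic": "diabetes", "authors": {"Jones"}}}

# (submission id, volunteer reviewer, reviewer's recent co-authors)
volunteers = [
    ("ms-001", "Smith", {"Doe"}),    # no overlap with the authors
    ("ms-001", "Jones", {"Jones"}),  # self-review: clear conflict
]

def screen(sub_id, collaborators):
    """Accept a volunteer unless their circle overlaps the author list."""
    return not (collaborators & submissions[sub_id]["authors"])

accepted = [name for sid, name, circle in volunteers if screen(sid, circle)]
print(accepted)  # ['Smith']
```

A real screen would of course use richer signals (co-authorship history, shared institutions), but the shape of the workflow is the same: volunteers in, conflicts filtered, editor decides the edge cases.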

I submitted an article for a special edition of a prominent education journal back in October. We were expecting a decision by February at the latest. In mid-February, I got an email from the editors saying they'd have reviews in by early March. Heard nothing in March. Heard nothing by the national conference in April. I emailed the editors again, and one responded that they'd had a tough time getting reviewers for both the main issue and the special issue. My system avoids that problem.

I'll leave it at that. I would be in favor of far more radical changes than that, however. Townie, I assume you're in the biz. I'd like to discuss some ideas with you further. PM me if you're interested.
 