listserv messages

Replied by Holly Falk-Krzesinski on 06/29/2017 - 14:56
Here’re a few references you might start with:
- Stevens, A.J., Jensen, J.J., Wyller, K., Kilgore, P.C., London, E., Zhang, Q., Chatterjee, S.K., Rohrbaugh, M.L. The commercialization of new drugs and vaccines discovered in public sector research (2015) University Technology Transfer: The Globalization of Academic Innovation (book chapter), pp. 102-145. DOI: 10.4324/9781315882482
- Halliday, J. Commercial Aspects of Vaccine Development (2016) Micro- and Nanotechnology in Vaccine Development (book chapter), pp. 411-421. DOI: 10.1016/... [Read more]
Replied by Stephen Fiore on 06/29/2017 - 16:13
In light of the question on vaccines and the subsequent citations, I wanted to recommend a report written by Seth Shulman back in 2002. It’s called “Trouble on the Endless Frontier: Science, Invention and the Erosion of the Technological Commons.” It later evolved into the book “Owning the Future” (see ). Anyway, in the introduction, Shulman makes a powerful point about the changing ecosystem of ideas,... [Read more]

Posted by Kevin N. Dunbar on 06/27/2017 - 20:03
Thank you Steve for pointing out the Guardian article. It is much more important than the pros and cons of open access. The article highlights the last half century or so of why and how publishers such as Elsevier and Wiley have become dominant players in academic publishing. This is particularly important for the Scisip community as many of the posts concern metrics for publication in academic journals.  It is a very interesting article that would be worthy of discussion as it might be very controversial. I expect some will dispute the claims in the article and some will agree with... [Read more]  

Posted by Stephen Fiore on 06/27/2017 - 20:34
Hi Everyone - Because the issues and challenges of Open Access publishing are sometimes discussed in our communities, I wanted to share an impressive article that just came out in The Guardian. It has some important background, history, and insights. For example, they cite a 2005 Deutsche Bank report that refers to the scientific publishing industry as a “bizarre” “triple-pay” system, in which “the state funds most research, pays the salaries of most of those checking the quality of research, and then buys most of the published product”. Below I've cut-and-pasted that ... [Read more]

Posted by Kate Saylor on 06/27/2017 - 10:50
See the announcement below: 12-week science policy fellowship at the National Academies, primarily for graduate students and postdocs in science fields with a demonstrated interest in science policy. -Kate Saylor
---------- Forwarded message ----------
From: NASEM Christine Mirzayan Science & Technology Policy Graduate Fellowship Program <>
Date: Tue, Jun 27, 2017 at 2:37 PM
Subject: Apply by Sept 8: Science & Technology Policy Graduate Fellowship
To:... [Read more]

Posted by Besselaar, P.A.A. van den on 06/27/2017 - 02:47
We used more than 2000 lists made by authors (for assessment) and compared them with WoS data. The WoS lists cannot be produced automatically but need citation. Lists produced by authors are not very reliable:
- authors write "a selection of publications" but there is nothing more in WoS
- authors leave out papers, such as low-cited papers and/or papers in low-impact journals, without saying so
By the way, other performance information listed by authors seems not very reliable either. So the picture emerges that lists made by researchers are 'optimized' for... [Read more]

Posted by Stephen Fiore on 06/26/2017 - 21:42
Hi Everyone - there is another interesting DARPA RFI out, continuing its theme of research on improving research in the social sciences. This comes on the heels of DARPA's recent Ground Truth program ( and the original program that is now titled Next Generation Social Science ( [Read more]
Replied by David Wojick on 06/27/2017 - 08:09
This Confidence Levels RFI strikes me as wrongheaded. If a claim, conclusion, hypothesis, theory, etc., is controversial (as many related to public policy are) then there is no way to assign a Confidence Level. That is typically what the controversy is about.  Even in the general case, expert opinion is likely to vary greatly. Their implied theory of the nature of science and evidence seems oversimplified and wrong. What can they be thinking? David David Wojick, Ph.D.... [Read more]  
Replied by Phillip Phan on 06/28/2017 - 01:00
Seems to me that this problem has already been solved with meta-analysis. Phil   [Read more]  
Replied by David Wojick on 06/28/2017 - 03:13
How is that? How does meta-analysis assign non-controversial confidence levels to controversial claims? David [Read more]
Replied by Phillip Phan on 06/28/2017 - 07:41
A meta-analysis is designed to objectively report the confidence intervals of a body of empirical research on a (controversial or non-controversial) question. Because it involves a thorough review of the published (and done correctly, non-published) research it provides the scientific basis for making a claim of confidence. If there is insufficient evidence to complete an MA, then the question is under-researched and discussions of confidence intervals are premature anyway. If a question continues to be controversial in the light of an unequivocal meta-analytic result,... [Read more]  
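Phil's point about meta-analysis reporting confidence intervals can be made concrete with a minimal fixed-effect (inverse-variance) pooling sketch. The effect sizes and standard errors below are made-up numbers for illustration only, not drawn from any cited study:

```python
import math

# Hypothetical studies: (effect size, standard error) -- invented for illustration
studies = [(0.42, 0.10), (0.35, 0.15), (0.50, 0.12), (0.28, 0.20)]

# Inverse-variance (fixed-effect) pooling: weight each study by 1/SE^2
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect (normal approximation)
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled effect = {pooled:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Note how the pooled standard error is smaller than any single study's, which is the sense in which an MA can support a stronger confidence claim than any one paper.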
Replied by David Wojick on 06/28/2017 - 04:35
I see two issues here. First, how does the MA of a body of literature translate to a CL for each individual claim in every article? Second, are you claiming that every MA result is independent of who does it, such that everyone must get the same result? I doubt this very much, since the MA depends on personal judgement. In controversial cases I can easily see analysts on different sides getting very different results. This is because the weight of evidence is relative to the observer (a principle that I recently formulated after careful study of complex issues). David [Read more]
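David's worry that different analysts get different results can also be illustrated numerically: the same pooling formula applied to two different subsets of a hypothetical literature yields noticeably different point estimates. All numbers below are invented for illustration; the `pool` helper is a sketch of fixed-effect inverse-variance pooling, not any specific MA tool:

```python
import math

def pool(studies):
    """Fixed-effect inverse-variance pooled estimate with a 95% CI."""
    weights = [1 / se**2 for _, se in studies]
    est = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return est, est - 1.96 * se, est + 1.96 * se

# Hypothetical literature: five studies, two with near-null effects
literature = [(0.40, 0.10), (0.45, 0.12), (0.38, 0.15), (0.05, 0.11), (0.02, 0.14)]

# Analyst A retrieves all five studies; analyst B's search misses the two null results
print("all studies:  ", pool(literature))
print("biased subset:", pool(literature[:3]))
```

Running this shows the subset-based estimate is substantially larger than the full-literature estimate, so the "confidence" reported downstream depends directly on the upstream search and selection.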
Replied by Matthew Shapiro on 06/28/2017 - 22:06
What you are suggesting (and really both of your issues are related) is that certain findings will be excluded, intentionally or otherwise. A meta-analysis is as complete as resources allow, so any failing on the part of the meta-analysis author(s) is due to shoddy and/or constrained research and not a selection bias. When the pace of research is expanding rapidly, I suspect that these constraints could lead to crucial omissions of the most recent findings. Matt [Read more]
Replied by Klochikhin, Evgeny on 06/28/2017 - 17:15
Hi Matt, I can't entirely agree with your statement that selection bias in MA is unusual. In fact, some work that we've done on systematic literature reviews shows that a major constraint comes on the part of information retrieval, i.e. incomplete or biased literature search that precedes the meta-analysis per se. Social scientists and economists who conduct SLRs and MAs do not always have good computational (and in fact computer science) resources to estimate how complete their literature search is before implementing the actual analysis. The issue is that databases... [Read more]
Replied by David Wojick on 06/29/2017 - 02:42
I am more concerned about interpretation than selection. When a hypothesis is controversial the proponents and opponents weigh the evidence differently. A good example from the physical sciences (which I am more familiar with) is the debate over wave versus particle theories of light, which lasted over 100 years. Proponents of the wave theory thought certain evidence was telling but the particle proponents disagreed, and vice versa. Many studies and experiments were done. How would MA have handled this? More generally, the scientific frontier is a realm of complex controversy. I see... [Read more]
Replied by Jeffrey Alexander on 06/29/2017 - 04:49
I can see what the proposed program is getting at.  Note that the RFI does not state that respondents are expected to offer methods for assigning quantitative confidence levels, so the term "confidence" is being used very loosely.  In a sense, the RFI seems to be seeking ways to evaluate the degree to which individual studies are trustworthy, especially relative to other studies.To the program manager's credit, he also is looking to "unpack" this notion of confidence--so, for example, one could imagine a system where a certain study is noted as using rigorous methods but produces a... [Read more]  
Replied by Aaron Sorensen on 06/29/2017
Many thanks to David, Phil, Matt, Jeff, and Evgeny for discussing this in a public forum. For me this has been the most engaging debate in recent memory. Is this an important enough issue that it would be worth the effort to write a paper which takes the reader step-by-step through a meta-analysis and calculates how various upstream search strategies, points of view, initial assumptions, etc. would result in differing downstream confidence intervals or even conflicting conclusions? Aaron... [Read more]
Replied by Belter, Christopher (NIH/OD/ORS) [E] on 06/29/2017 - 11:18
Aaron, Such papers have already been written and have been available in the biomedical literature for some time. See, for example:
- Higgins, J. P. T., & Green, S. (Eds.). (2011). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration. Available from
- National Research Council. (2011). Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: The National Academies Press... [Read more]
Replied by Aaron Sorensen on 06/29/2017 - 07:56
Chris, Thanks for this list of papers. I'm wondering if you think they are detailed enough to address the nuances raised in this conversation. In other words, do you think the debate would have been avoided if all participants had previously read the references you cite? Aaron [Read more]