The current study investigated a model incorporating multiple stigma constructs to explain the severity of sexual assault victims' trauma symptoms. Integrating the sexual assault literature with the stigma literature, this study sought to better understand trauma-related outcomes of sexual assault by examining three levels of stigma: cultural, social, and self. Results showed that self-stigma was significantly and positively related to trauma symptom severity. Thus, results revealed that the internalized aspect of stigma served as a mechanism in the relation between sexual assault severity and increased trauma symptom severity, highlighting the importance of assessing self-stigma in women reporting sexual assault experiences.
Replication studies in psychological science sometimes fail to reproduce prior findings. If these studies use methods that are unfaithful to the original study or ineffective in eliciting the phenomenon of interest, then a failure to replicate may be a failure of the protocol rather than a challenge to the original finding. Formal pre-data-collection peer review by experts may address shortcomings and increase replicability rates. We selected 10 replication studies from the Reproducibility Project: Psychology (RP:P; Open Science Collaboration, 2015) for which the original authors had expressed concerns about the replication designs before data collection; only one of these studies had yielded a statistically significant effect (p < .05). Commenters suggested that lack of adherence to expert review and low-powered tests were the reasons that most of these RP:P studies failed to replicate the original effects. We revised the replication protocols and received formal peer review prior to conducting new replication studies. We administered the RP:P and revised protocols in multiple laboratories (median number of laboratories per original study = 6.5, range = 3–9; median total sample = 1,279.5, range = 276–3,512) for high-powered tests of each original finding with both protocols. Overall, following the preregistered analysis plan, we found that the revised protocols produced effect sizes similar to those of the RP:P protocols (Δr = .002 or .014, depending on analytic approach). The median effect size for the revised protocols (r = .05) was similar to that of the RP:P protocols (r = .04) and the original RP:P replications (r = .11), and smaller than that of the original studies (r = .37).
Analysis of the cumulative evidence across the original studies and the corresponding three replication attempts provided very precise estimates of the 10 tested effects and indicated that their effect sizes (median r = .07, range = .00–.15) were 78% smaller, on average, than the original effect sizes (median r = .37, range = .19–.50).
As participant recruitment and data collection over the Internet have become more common, numerous observers have expressed concern regarding the validity of research conducted in this fashion. One growing method of conducting research over the Internet involves recruiting participants and administering questionnaires over Facebook, the world’s largest social networking service. If Facebook is to be considered a viable platform for social research, it is necessary to demonstrate that Facebook users are sufficiently heterogeneous and that research conducted through Facebook is likely to produce results that can be generalized to a larger population. The present study examines these questions by comparing demographic and personality data collected over Facebook with data collected through a standalone website, and data collected from college undergraduates at two universities. Results indicate that statistically significant differences exist between the Facebook data and the comparison datasets, but because 80% of analyses exhibited partial η² < .05, these differences are small in magnitude and of little practical significance. We conclude that Facebook is a viable research platform, and that recruiting Facebook users for research purposes is a promising avenue that offers numerous advantages over traditional samples.
Citation indices are tools used by the academic community for research and research evaluation that aggregate scientific literature output and measure impact by collating citation counts. Citation indices help measure the interconnections between scientific papers but fall short because they fail to communicate contextual information about a citation. The use of citations in research evaluation without consideration of context can be problematic: a citation that presents contrasting evidence to a paper is treated the same as a citation that presents supporting evidence. To solve this problem, we have used machine learning, traditional document ingestion methods, and a network of researchers to develop a “smart citation index” called scite, which categorizes citations based on context. scite shows how a citation was used by displaying the surrounding textual context from the citing paper and a classification from our deep learning model indicating whether the statement provides supporting or contrasting evidence for a referenced work, or simply mentions it. scite has been developed by analyzing more than 25 million full-text scientific articles and currently has a database of more than 880 million classified citation statements. Here we describe how scite works and how it can be used to further research and research evaluation.
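The three-way classification scheme described above (supporting, contrasting, or merely mentioning) can be illustrated with a toy sketch. Note that this is not scite's actual method: the abstract describes a deep learning model trained on millions of full-text articles, whereas the heuristic below uses a handful of hypothetical cue phrases purely to demonstrate the category scheme.

```python
# Toy illustration of scite's three citation-statement categories.
# The cue-phrase lists are invented for this sketch; the real system
# uses a deep learning classifier, not keyword matching.

SUPPORT_CUES = ("consistent with", "in line with", "confirms", "replicates", "supports")
CONTRAST_CUES = ("contradicts", "inconsistent with", "fails to replicate", "contrary to")

def classify_citation_statement(statement: str) -> str:
    """Assign a citation statement to one of three categories:
    'supporting', 'contrasting', or 'mentioning'."""
    text = statement.lower()
    if any(cue in text for cue in CONTRAST_CUES):
        return "contrasting"
    if any(cue in text for cue in SUPPORT_CUES):
        return "supporting"
    # Default: the citation merely mentions the referenced work.
    return "mentioning"

print(classify_citation_statement("Our results are consistent with Smith et al. (2010)."))  # supporting
print(classify_citation_statement("This finding contradicts Jones (2015)."))               # contrasting
print(classify_citation_statement("See Lee (2018) for a review."))                         # mentioning
```

Checking contrast cues before support cues matters here, since a statement like "inconsistent with" would otherwise match the substring "consistent with" and be mislabeled.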
Peer Review
https://publons.com/publon/10.1162/qss_a_00146
Background
Fanconi Anemia (FA) is a rare genetic disorder associated with bone marrow failure (BMF), congenital anomalies, and cancer susceptibility. Stem cell transplantation (SCT) offers a potential cure for BMF or leukemia, but incurs substantial risks. Little is known about factors influencing SCT decision-making.
Objective
The study objective was to explore factors influencing FA patients’ and family members’ decision-making about SCT.
Design
Using a mixed-methods exploratory design, we surveyed U.S. and Canadian FA patients and family members who were offered SCT.
Main variables studied
Closed-ended survey items measured respondents’ beliefs about the necessity, risks, and concerns regarding SCT; multivariate logistic regression was used to examine the association between these factors and the decision to undergo SCT. Open-ended survey items measured respondents’ perceptions of factors important to the SCT decision; qualitative analysis was used to identify emergent themes.
Results
The decision to undergo SCT was significantly associated with greater perceived necessity (OR = 2.81, p = 0.004) and lower concern about harms of SCT (OR = 0.31, p = 0.03). Qualitative analysis revealed a perceived lack of choice among respondents regarding the use of SCT, which was related to physician influence and respondent concerns about patients’ quality of life.
Conclusions
Overall, study results emphasize the importance of the delicate interplay between provider recommendation of a medical procedure and patient/parental perceptions and decision-making. Findings can help providers understand the need to acknowledge family members’ perceptions of SCT decision-making and offer a comprehensive discussion of the necessity, risks, benefits, and potential outcomes.
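The odds ratios reported in the Results section above come from logistic regression, where a reported odds ratio is the exponential of the fitted coefficient (OR = exp(β)). A minimal sketch of that relationship, using only the two odds ratios stated in the abstract (2.81 for perceived necessity, 0.31 for concern about harms):

```python
import math

# OR = exp(beta), so the coefficient implied by a reported odds ratio
# is its natural log. Values taken from the abstract; no study data here.
beta_necessity = math.log(2.81)  # OR > 1: higher perceived necessity raises the odds of choosing SCT
beta_concern = math.log(0.31)    # OR < 1: greater concern about harms lowers the odds

print(round(math.exp(beta_necessity), 2))  # recovers 2.81
print(round(math.exp(beta_concern), 2))    # recovers 0.31
```

The sign of β tells the direction of the association: β > 0 (OR > 1) increases the odds of undergoing SCT, β < 0 (OR < 1) decreases them.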
Wikipedia is a widely used online reference work which cites hundreds of thousands of scientific articles across its entries. The quality of these citations has not been previously measured, and such measurements have a bearing on the reliability and quality of the scientific portions of this reference work. Using a novel technique, a massive database of qualitatively described citations, and machine learning algorithms, we analyzed 1,923,575 Wikipedia articles which cited a total of 824,298 scientific articles in our database, and found that most scientific articles cited by Wikipedia articles are uncited or untested by subsequent studies, and the remainder show a wide variability in contradicting or supporting evidence. Additionally, we analyzed 51,804,643 scientific articles from journals indexed in the Web of Science and found that similarly most were uncited or untested by subsequent studies, while
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.