Automated testing of data integrity in the social sciences

Social science research communities worldwide are working hard to improve their research, analysis, reporting, and publication practices (see, e.g., OpenScienceFramework.org; PsychFileDrawer.org). The issues addressed range from preventing blatant fraud (plagiarism, data fabrication) to increasing the transparency and reproducibility of research, reducing “questionable research practices”, facilitating systematic data archiving and study pre-registration, and promoting replication attempts.
Many of these efforts were spurred by the highly publicized fraud case involving Dutch social psychologist Diederik Stapel. Interestingly, the Levelt Committee, which investigated the allegations of data fabrication, based its main evidence against Stapel on a statistical detection method developed by Wharton psychologist Uri Simonsohn. This recently published method involves, among other things, identifying improbable distributions of scores and improbable (dis)similarities in standard deviations and means across experimental groups. Simonsohn’s method is based on bootstrapping: by simulating replication studies through samples drawn from the original data and comparing their distributions, standard deviations, and means to those of the original data set, discrepancies in the original data can be identified. In his paper, Simonsohn applied the method to several studies by social psychologists Dirk Smeesters and Lawrence Sanna, which has since led to the retraction of those studies.
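The simulation idea can be sketched as follows. This is a minimal illustration, not Simonsohn’s published procedure: it assumes equal group sizes, normally distributed scores, and a given pooled within-group SD, and it tests only one symptom, namely whether the reported group standard deviations are improbably similar to one another. The function name and parameters are our own.

```python
import random
import statistics

def sd_of_sds_pvalue(group_sds, n_per_group, pooled_sd, n_sims=10_000, seed=0):
    """Estimate how often chance alone yields group SDs as similar as those reported.

    group_sds   : reported standard deviations for the k experimental groups
    n_per_group : observations per group (assumed equal across groups)
    pooled_sd   : assumed true within-group SD used to simulate replications
    Returns the fraction of simulated replication studies whose SDs are at
    least as similar (SD-of-SDs at most as small) as the reported ones.
    A very small return value flags suspiciously homogeneous variability.
    """
    rng = random.Random(seed)
    observed = statistics.stdev(group_sds)
    k = len(group_sds)
    hits = 0
    for _ in range(n_sims):
        # Simulate one replication: k groups of n_per_group normal scores,
        # then record how spread out the k sample SDs are.
        sim_sds = [
            statistics.stdev([rng.gauss(0.0, pooled_sd) for _ in range(n_per_group)])
            for _ in range(k)
        ]
        if statistics.stdev(sim_sds) <= observed:
            hits += 1
    return hits / n_sims
```

For example, three reported SDs of 2.00, 2.01, and 1.99 from groups of 15 observations each would be flagged as improbably similar, whereas SDs of 1.2, 2.8, and 2.0 would not.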

The aim of the current project is to build a platform that (1) facilitates the use of and (2) extends Simonsohn’s detection method. The platform would (a) enable the import of data sets, including the background knowledge needed to explore their variables (dependent vs. independent variables, conditions, groups, level of measurement, scale construction, etc.), (b) automatically apply Simonsohn’s detection methods where applicable (assessing the probability of the observed distributions, standard deviations, and means through simulation), and (c) automatically apply detection methods from computational statistics that identify specific patterns (e.g., a lack of randomness). For each imported data set, the platform would assess the probability that it has been fabricated or tampered with.
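One standard pattern test of the kind envisioned in step (c) is the Wald–Wolfowitz runs test, which checks whether a sequence of values alternates above and below its median more (or less) often than genuine randomness would allow. The sketch below is one plausible building block, not a component specified by the project; it uses the usual normal approximation for the number of runs.

```python
import math
import statistics

def runs_test_z(values):
    """Wald-Wolfowitz runs test on a sequence of numeric values.

    Classifies each value as above/below the sample median (ties dropped),
    counts runs of consecutive same-side values, and returns the z-score of
    the run count under randomness. Large |z| suggests a non-random pattern:
    z >> 0 means too much alternation, z << 0 means too much clustering.
    """
    med = statistics.median(values)
    signs = [v > med for v in values if v != med]
    n1 = sum(signs)
    n2 = len(signs) - n1
    if n1 == 0 or n2 == 0:
        raise ValueError("need values on both sides of the median")
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n = n1 + n2
    mu = 2 * n1 * n2 / n + 1                                  # expected runs
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n * n * (n - 1))  # variance of runs
    return (runs - mu) / math.sqrt(var)
```

A perfectly alternating sequence such as 1, 9, 1, 9, … yields a large positive z, while a monotonically increasing sequence yields a large negative z; either extreme would be flagged for closer inspection.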

Supervisors

  • Dr. E. Haasdijk
    Department of Artificial Intelligence
    VU University Amsterdam

Students

  • Gracia Edwards
    Research MSc Social Psychology
  • Arthur Avramiea
    MSc Artificial Intelligence
