Micro-PK research using tRNGs began in the 1960s, when researchers started using quantum states as a source of true randomness. Over the following decades, the body of research data grew (e.g., Schmidt, 1970; Jahn et al., 1980, 1987). A meta-analysis by Radin and Nelson (1989), covering 597 studies conducted up until 1987, found a strong effect supporting micro-PK. This result was confirmed 15 years later in a meta-analysis that included an additional 176 new studies (Radin and Nelson, 2003). However, these meta-analyses included studies using both tRNGs and poorer-quality algorithmically based RNGs. A more recent meta-analysis by Bösch et al. (2006) included only studies using tRNGs. This analysis of 380 studies undertaken between 1961 and 2004 identified a very small and heterogeneous effect that nevertheless indicated a significant deviation from chance (Bösch et al., 2006). A significant negative correlation between sample size and effect size was also found (Bösch et al., 2006). Given the small, heterogeneous effect and this correlation, the authors concluded that the observed effect might have been caused by publication bias (Bösch et al., 2006); other researchers have questioned this interpretation (Radin et al., 2006), and a closer inspection of the Radin and Nelson (1989, 2003) meta-analyses confirms that these criticisms do not apply to their data. Nevertheless, many scientists agree that evidence derived from meta-analyses alone does not provide a convincing argument for the existence of micro-PK effects. In addition, meta-analytic methods have recently been criticized, especially with regard to the impact of heterogeneity (e.g., Ioannidis, 2016). This has led to the suggestion that “a single high-quality, well-reported study can be recommended instead of a statistical synthesis of heterogeneous studies” (Brugha et al., 2012, p. 450). A similar suggestion was made by van Elk et al. (2015).
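The publication-bias argument rests on a small-study effect: if mainly significant results reach publication, smaller studies will tend to show larger published effects, producing exactly the kind of negative sample-size/effect-size correlation reported by Bösch et al. (2006). The following Python sketch, using entirely hypothetical study data and an assumed selective-publication rule (not Bösch et al.'s actual data or analysis), illustrates how such a correlation can arise even when the true hit rate never deviates from chance.

```python
# Illustrative sketch only (hypothetical data, not Bösch et al.'s analysis):
# simulate null studies, apply a selective-publication filter, and check the
# resulting correlation between sample size and published effect size.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_studies = 2_000

n = rng.integers(100, 100_000, size=n_studies)        # trials per study
hits = rng.binomial(n, 0.5)                            # true hit rate is chance
effect = hits / n - 0.5                                # deviation from chance
z = effect / np.sqrt(0.25 / n)                         # per-study z score

# Publication filter: significant positive results are always published,
# everything else only 20% of the time.
published = (z > 1.645) | (rng.random(n_studies) < 0.2)

r, p = spearmanr(n[published], effect[published])
print(f"Spearman r = {r:.2f}, p = {p:.2g}")            # clearly negative under this filter
```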
However, high-quality studies aimed at replicating existing results are scarce in micro-PK research. One example is the Jahn et al. (2000) study, which brought together research teams from the PEAR Lab at Princeton University, the Institut für Grenzgebiete der Psychologie und Psychohygiene in Freiburg, and the Center for Behavioral Medicine at the Justus Liebig University Giessen. They attempted to replicate the Jahn et al. (1987) benchmark study, which involved 97 subjects and data from 2.5 million micro-PK trials. The attempted replication, with 227 participants and over 2 million trials, failed to confirm the original results (Jahn et al., 2000). Another example is the Maier and Dechamps (in press) research, which reported two micro-PK studies using Bayesian methods. The authors found strong evidence supporting micro-PK in Study 1 (BF10 = 66.7). However, in Study 2, a pre-registered, high-quality replication of Study 1, they found strong evidence for the null effect (BF01 = 11.07). The failure of these high-powered studies to replicate earlier results has also raised doubts about the existence of micro-PK.
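For readers less familiar with Bayes factors: BF10 is the ratio of the marginal likelihood of the data under H1 to that under H0, and BF01 is its reciprocal, so BF10 = 66.7 means the data are about 66.7 times more likely under the micro-PK hypothesis, while BF01 = 11.07 means they are about 11 times more likely under the null. The minimal sketch below shows these mechanics for a binomial hit-rate task; the uniform prior and the data are illustrative assumptions, not Maier and Dechamps' actual model or results.

```python
# Minimal sketch of the Bayes-factor logic (assumed uniform prior and
# hypothetical data; not Maier and Dechamps' actual model or results).
from math import lgamma, log, exp

def betaln(a, b):
    """Log of the Beta function."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf10_binomial(hits, n, a=1.0, b=1.0):
    """Bayes factor for H1 (hit rate ~ Beta(a, b)) vs H0 (hit rate = 0.5).
    The binomial coefficient cancels in the ratio and is therefore omitted."""
    log_m1 = betaln(hits + a, n - hits + b) - betaln(a, b)   # marginal likelihood, H1
    log_m0 = n * log(0.5)                                    # likelihood under H0
    return exp(log_m1 - log_m0)

# Hypothetical example: 5,200 hits in 10,000 trials.
bf10 = bf10_binomial(5_200, 10_000)
print(f"BF10 = {bf10:.1f}, BF01 = {1 / bf10:.3f}")  # BF10 well above 1, favoring H1
# Data favoring the null instead push BF10 below 1, i.e., BF01 above 1.
```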