Law, Technology and Humans
Improving the Credibility of Empirical Legal Research: Practical Suggestions for Researchers, Journals and Law Schools
Jason M. Chin
University of Sydney School of Law, Australia
Alexander C. DeHaven
Center for Open Science, United States
Tobias Heycke
GESIS—Leibniz Institute for the Social Sciences, Germany
Alex O. Holcombe
University of Sydney School of Psychology, Australia
David T. Mellor
Center for Open Science, United States
Justin T. Pickett
University at Albany, SUNY, United States
Crystal N. Steltenpohl
University of Southern Indiana, United States
Simine Vazire
University of Melbourne School of Psychological Sciences, Australia
Kathryn Zeiler
Boston University School of Law, United States
Abstract
Keywords: Empirical legal research; open science; open access; reproducibility; meta-research.
Part I: Introduction
... the key problem in our view is the unmet need for a subfield of the law devoted to empirical methods, and the concomitant total absence of articles devoted exclusively to solving methodological problems unique to legal scholarship. Without such articles, and scholars to produce them, most research areas with problems that have not been addressed in other disciplines shall remain unfixed and progress on this front shall remain frozen.[1]
Openness and transparency are central to the operation of many legal systems.[2] These virtues are expressed through mechanisms like opening courtrooms to the public and media and publishing legal decisions.[3] The idea is that an open legal system is more likely to earn credibility by inviting scrutiny.[4] Despite the importance of openness and transparency in law, much empirical legal research (ELR) is still conducted using opaque research practices.[5] This opacity makes it challenging for others to verify and build upon existing findings, therefore threatening the long-term credibility of the field. Compounding this problem is the ‘unmet need’ mentioned in the above epigraph—a lack of published research-on-research (i.e., meta-research) aimed at improving empirical legal methods to make them more credible. This article addresses this need by providing concrete guidance based on our experience and fields of knowledge[6] that researchers and institutions may use to improve the credibility of ELR.
Several fields are undergoing a credibility revolution.[7] In the context of empirical research, credibility refers to a state in which a study’s methodology and results are transparently reported so they can be verified and repeated by others, errors are caught and corrected, and conclusions are updated and well calibrated to the strength of the evidence. In other words, credibility does not mean that findings are perfectly accurate or error-free. Rather, researchers prioritise transparency and calibration so that errors can be easily caught and corrected. A credible field is one where findings are reported with appropriate levels of confidence and certainty and errors are likely to be corrected. Then, when a finding withstands scrutiny and becomes well established, it is very likely to be true.[8] These methods also fall under the label ‘open science’.[9]
Credible research presents many advantages. First, it is reproducible, meaning that its data, methods, and analysis are reported in enough detail that other researchers can verify conclusions and correct them if needed.[10] Second, it is more efficient because reproducible workflows allow other researchers to build upon existing work and test new questions.[11] Third, its findings are replicable, meaning they can be confirmed through testing with new data.[12] Fourth, its claims are well calibrated, so bodies that fund and rely on this research can employ them with more confidence.[13] Finally, it inspires greater public trust.[14] Many of these benefits were encapsulated in a 2018 National Academies of Sciences, Engineering, and Medicine consensus report about openness and transparency in science. The report stated, ‘research conducted openly and transparently leads to better science. Claims are more likely to be credible—or found wanting—when they can be reviewed, critiqued, extended, and reproduced by others’.[15]
These advantages are even greater in applied fields like ELR, where claims may seem more facially persuasive because they carry the weight of systematically collected data. This article defines ELR as systematic attempts to collect and analyse a set of data,[16] thereby excluding ‘traditional normative’ legal scholarship.[17] Consequently, we comment on surveys, systematic content analyses of judicial decisions and other archival sources, vignette studies, behavioural studies, interviews and analogous methods. This focus aligns with recent anecdotal concerns about ELR.[18]
We also consider the implications of the credibility revolution for digital research methods in law and provide guidance to legal researchers using these methods. For instance, we discuss best practices for online survey methods, which are quickly becoming the most common way of measuring attitudes and behaviours in ELR. We further examine research methods that perform quantitative analyses of legal decisions held in online databases and those using text analysis software. Moreover, increased open data practices raise privacy issues in digital research methods (e.g., research methods that study social media activity)[19] that should be carefully managed. Accordingly, we discuss the need to balance the benefits of open data (e.g., a greater opportunity for combining data into larger datasets, data mining) with privacy and respect for the interests of vulnerable groups.[20]
As noted, ELR’s impact regularly extends beyond the pages of law journals. It is regularly cited by courts in the United States.[21] Further, it is relied on by policy-makers and sometimes commissioned and funded by law reform bodies.[22] It is also often publicly funded, so the public has an interest in seeing its methods and data made publicly accessible.[23] A recent example of ELR capturing the public’s interest is a 2019 study finding that female High Court judges are interrupted more often than their male colleagues.[24] The study also proposed substantial reforms, such as implicit bias training (which itself has a limited evidence base).[25] However, its central conclusion was subsequently contradicted by a study with a much larger sample size (15 years of transcripts compared to two).[26] More relevant to the current article, neither the original study nor the replication was conducted using credible methods. The data are not available to be verified (e.g., for detecting coding errors) or added to as more High Court transcripts become available in the coming years (e.g., to see if interruption patterns change as culture changes). Even if those data were available, the researchers’ coding schemes and other methods were not reported in sufficient detail for other researchers to verify the results.
Given the importance of credible ELR,[27] our primary aim is to provide steps that legal researchers (including those using digital research methods) and institutions can take to improve this field.[28] Before discussing these steps, Part II evaluates the current state of credibility in ELR and compares it to reforms in cognate fields. We provide a review of law journal guidelines to gauge the extent to which they promote credible practices, finding that some law journals encourage or require such practices. This finding leads to Part III, which discusses three common ELR methodologies (content analyses of judicial decisions, surveys and qualitative methods) and evaluates how new reforms and technologies from the credibility revolution can be applied and adapted to improve the way empirical legal researchers use these methods. Part IV turns to journals and societies and the steps they may take to promote research credibility. Part V provides some final reflections on the path forward.
Part II. Credibility of Empirical Legal Research
This part contextualises ELR within the broader movement across the social sciences aimed at improving research credibility. These reforms were partly inspired by widespread failures to replicate findings published in prestigious journals.[29] These failures were difficult to ignore because of the high methodological quality and large sample sizes of the replication studies. The results of these replication projects likely contributed to the opinions expressed in a subsequent survey of researchers, in which 52% of respondents said there was a significant crisis in science, 38% said there was a slight crisis and only 3% said there was no crisis.[30]
Subsequent meta-research has helped uncover the reasons why peer-reviewed and published findings are less trustworthy than expected and (as we will see) how to tackle these problems. For instance, self-report surveys of researchers have found widespread use of questionable research practices (QRPs) in many fields (e.g., psychology, economics).[31] QRPs take advantage of undisclosed flexibility in the research process, allowing researchers to make their findings seem cleaner and more persuasive than they are. For example, one widespread QRP is deciding to exclude data after observing the effect of the exclusion on the results. QRPs demonstrably inflate a field’s false positive rate.[32] A related practice is failing to publish entire studies when their results are inconclusive or do not support the research hypothesis. This results in publication bias (sometimes called the ‘file-drawer effect’).[33] Like QRPs, publication bias can make the published literature a poor gauge of the actual strength of a theory or finding.
Both QRPs and publication bias can thrive when researchers do not conduct their work openly and transparently, such as when they do not regularly make their materials and data available.[34] This opacity enables QRPs and prevents mistakes (e.g., coding errors, statistical errors) from being caught and corrected by other researchers. Similarly, preregistration (i.e., creating a public record of a study’s methods before it is conducted) has historically been uncommon in the social sciences, as we will discuss further in section 1(a). This makes it challenging to find studies that were conducted but never published.
1. The Credibility Revolution in Social Sciences
Several fields are increasingly adopting reforms that respond to the issues reviewed above.[35] We will now briefly review some of those reforms and will revisit them in greater detail in Part III, where we consider how they can be leveraged by empirical legal researchers, including those using digital research methods.
(a) Preregistration and Registered Reports
Preregistration, also known as prospective study registration or a pre-analysis plan, became required by law in some jurisdictions for clinical medical research partly due to widespread concerns about the public health implications of publication bias and QRPs (e.g., the non-publication of entire studies unfavourable to the drug manufacturer, partial reporting of results, and outcome switching).[36] Preregistration involves submitting the hypotheses, methods, data collection plan, and analysis plan to a common registry before data collection.[37] As we will see, preregistrations can be drafted broadly to be sensitive to more exploratory methods and analyses (e.g., text mining and analysis). Further, researchers increasingly include videos that cover research methodologies (e.g., data collection processes).[38]
Preregistration makes unreported studies more findable.[39] It can also discourage (or at least make it easier to detect) QRP usage by preserving a record of the methodology as it was before the data were viewed. Researchers following a preregistered analysis plan may also be less likely to confuse prediction with postdiction. In other words, they may be less likely to present an exploratory finding (e.g., one obtained by mining the data for statistical significance) as one they predicted. If the detailed preregistered plans are followed, this preserves the statistical validity of analyses with respect to error control.[40]
The recent State of Social Science (3S) survey found preregistration has been catching on in psychology, economics and political science in the past few years.[41] This survey asked researchers whether and when they first preregistered a study and when they first made their data and materials public (e.g., instruments like surveys, images or sound files presented to participants). The survey showed considerable increases in all three self-reported behaviours over the past several years (Figure 1).
Figure 1. Adoption of Open and Transparent Practices by Social Science Researchers
Notes: Participants were asked the year they first engaged in one of the following practices (if applicable): made their data available online, made their study instruments (i.e., materials) available online and preregistered a study. Participants were researchers in psychology, economics, political science and sociology. The bottom of the shaded regions estimates practice adoption using indirect measures (i.e., searching online databases to calculate the percentage of survey non-respondents who engaged in open practices).
Although recent trends are encouraging, preregistration should not be considered a panacea for all threats to research credibility. Rather, it is an important tool that can be employed as part of a general effort to improve research, along with stronger theories and more precise measurements.[42] Further, preregistration and other open practices should not be seen as a mechanism that mindlessly constrains generative and exploratory research.[43] Rather, they are tools that help distinguish planned and unplanned analyses and encourage greater attention to methodology in the early stages of the research process.
Registered reports (RRs) build upon preregistration. They are a new type of article in which the peer review process is restructured: reviewers evaluate the proposed methods and the justification for the study (i.e., the material that will be preregistered) before the study has been conducted and the results are known.[44] If the editor accepts the proposal, the article is guaranteed publication provided the authors follow through on that plan, which is then preregistered. Publications are therefore selected not on the basis of results but on the research question and method. Like preregistration, RRs can reduce publication bias and QRPs. They can also result in improved methodologies, since reviewers provide criticism and advice regarding the methods before data are collected. A total of 275 journals now accept RRs, a number that has ballooned from under five in 2014.[45]
Figure 2. First Listed Hypothesis Confirmed in Standard and RRs
Notes: The authors compared a sample of standard reports to a sample of RRs. They found that 96.05% of the standard reports confirmed the first listed hypothesis, whereas 43.66% of such hypotheses in the RRs were confirmed.
Recent analyses found RRs are more likely to report null results (Figure 2).[46] This is salutary because the proportion of published literature containing positive results is so high (~95%) that it is almost guaranteed that many of the positive results are false positives.[47] The rate of positive results in the (small) RR literature (about 45% according to two studies) is more realistic and consistent with the rate of positive results in large-scale direct replication projects in the social sciences.[48] Moreover, RRs—possibly due to early-stage reviews that can catch flaws before data collection—have also been rated higher in methodological rigour and several other indicia of research quality.[49]
(b) Open Data, Code and Materials
Journals and researchers are increasingly embracing open data and code. For example, several political science and economics journals now require authors to make their data public (e.g., uploaded to a public repository) and provide the code that translates their data into the published results.[50] Similarly, as was found in the 3S study, researchers increasingly share their data and study materials in several fields.[51]
(c) Badges, Checklists, and Other Reforms
Article badges for practices like preregistration, open materials and open data offer a way to recognise and reward open practices without requiring them.[52] After one journal adopted a badge policy, an analysis found much higher rates of the corresponding open practices (e.g., data sharing) among its published articles.[53] That said, the effect appeared to be stronger when actors in the field were already aware of the benefits of such practices.[54]
Many fields have also created checklists that researchers are required or encouraged to submit along with their manuscripts (e.g., acknowledging that they have reported all their findings and not left some in the file drawer).[55] Some of these checklists have been associated with fuller reporting of, for example, a study’s methodological limitations.[56]
Finally, reforms to the peer review process are spreading.[57] These include open peer review models (peer reviews are published along with the articles), continuing peer reviews (commentaries can be appended to existing publications), and changes in peer review criteria (such as judging articles based on credibility instead of novelty). As we will discuss further, journals are also increasingly adopting guidelines that encourage or require practices like open data and preregistration.[58]
2. Concerns About the Credibility of Empirical Legal Research
Expressed concerns about the credibility of ELR predate much of the discussion above but have produced no lasting reforms or initiatives that we could find. For instance, in 2001, Epstein and King reviewed 231 empirical legal articles (identified by ‘empirical’ in their titles) published in American law journals between 1990 and 2000 and found inference errors in all of them.[59] In many cases, the errors stemmed from the conclusions not being based on reproducible analyses. Over a decade later, little seems to have changed.[60]
ELR faces many of the same challenges found in other social sciences, so it is perhaps unsurprising that questions have been raised about its practices. In many cases, the relationship is direct: ELR practices often borrow from cognate disciplines, like economics and psychology, two fields whose historic practices contributed to their own perceived crises. Like many others, ELR also typically operates in a research environment where there is an incentive to publish frequently, perhaps at the cost of quality and rigour.[61] In such environments, many journals also appear to favour novel and exciting findings without a concomitant emphasis on methodology. This combination of incentives creates an ecosystem in which low credibility research is rewarded and those who engage in more rigorous practices are driven out of the field.
ELR also faces its own set of challenges. Much research in this field is published in generalist law journals that may rarely receive empirical work. Some of these journals are edited by students, many of whom cannot be expected to have the background needed to evaluate empirical methods, and most of whom do not employ peer review in article selection.[62] Student editors may also be swayed by the eminence of authors, a biasing force even when editors rely on peer reviews.[63]
Turning to the authors, many empirical legal researchers possess a primarily legal background. Consequently, they may not have the specialised methodological knowledge required to ensure their work is credible (which many people with extensive training in social science also lack, as the replication crisis has brought to light). Further, as trained advocates, some authors may be culturally inclined towards strong rhetoric that the data may not entirely justify.
3. Are Law Journals Promoting Research Credibility?
Journals represent an important pressure point on research practices because they choose what to publish and control the form in which research is reported (e.g., encouraging or requiring that raw data and code be published along with the typical manuscript narrative). Moreover, traditional practices resulted in false-positive findings being published in leading journals,[64] whereas open practices could deter the questionable research practices that produce these false positives.[65] Accordingly, it is useful to ask, to what extent are ELR journals promoting credible practices?
As mentioned above, many journals in fields outside of law have begun to reform their guidelines to encourage behaviours like posting data and preregistering hypotheses. Much of this was spurred by the development of the Transparency and Openness Promotion (TOP) Guidelines and their goal of ‘promoting an open research culture’.[66] The original TOP Guidelines cover eight standards, each of which can be implemented at three levels of increasing rigour. A ‘0’ indicates that the journal’s policy does not implement the standard, such as a policy that merely encourages data sharing or says nothing about it. Levels 1–3 vary by standard; for data transparency, they correspond to 1) the journal requiring disclosure of whether data are available (e.g., a data availability statement), 2) the journal requiring data sharing where it is ethically feasible, and 3) the journal computationally verifying that the data can be used to reproduce the results presented in the paper.
The Center for Open Science recently developed the TOP Factor,[67] an open database for evaluating and scoring journal policies against the standards set out in the TOP Guidelines. The database was developed to enable the communities served by journals to evaluate a journal’s policies and to compare journals within a discipline more readily. The TOP Factor is intended to help journals receive credit for having more credible practices and, consequently, to motivate further adoption of such guidelines. It is distinct from existing journal metrics, such as the Journal Impact Factor and Altmetrics, in that it evaluates research practices rather than impact. The TOP Factor measures the original eight TOP standards, whether the journal has policies to counter publication bias (e.g., accepting RRs), and whether it awards badges.
We scored journals with the TOP Factor to determine the extent to which they promote research credibility (Table 1).[68] We included three types of journals, selecting them for having a high status within their category. First, we included the 11 Australian generalist law journals that received an A or B rating from the Council of Australian Law Deans, a contentious but influential rating system.[69] Second, we included the top 10 US law journals according to the Washington and Lee Law Journal Rankings (2019), because they have received critical attention from researchers in the US.[70] The Washington and Lee ranking is based on citations from other US law journals and legal decisions. Third, we scored six law journals that we know regularly publish empirical research (Journal of Empirical Legal Studies, Journal of Legal Studies, Journal of Law and Economics, American Law and Economics Review, Journal of Legal Analysis, and Law, Probability, and Risk).
Table 1. Transparency and Openness Policies in Law Journals
| Journal | TOP Factor | Impact Metric | Data Citation | Data Transparency | Analysis Code | Materials Transparency |
|---|---|---|---|---|---|---|
| Journal of Legal Studies | 10 | 0.49 | 1 | 3 | 3 | 3 |
| Journal of Law and Economics | 10 | 0.96 | 1 | 3 | 3 | 3 |
| Yale Law Journal | 4 | 100 | 0 | 2 | 2 | 0 |
| American Law and Economics Review | 4 | 0.96 | 0 | 2 | 2 | 0 |
| Stanford Law Review | 2 | 75.74 | 0 | 2 | 0 | 0 |
| Journal of Legal Analysis | 2 | 1.73 | 2 | 0 | 0 | 0 |
| Harvard Law Review | 0 | 99.23 | 0 | 0 | 0 | 0 |
| Columbia Law Review | 0 | 74.62 | 0 | 0 | 0 | 0 |
| University of Pennsylvania Law Review | 0 | 72.06 | 0 | 0 | 0 | 0 |
| Georgetown Law Journal | 0 | 68.33 | 0 | 0 | 0 | 0 |
| California Law Review | 0 | 67.71 | 0 | 0 | 0 | 0 |
| Notre Dame Law Review | 0 | 63.77 | 0 | 0 | 0 | 0 |
| Supreme Court Review | 0 | 63.14 | 0 | 0 | 0 | 0 |
| University of Chicago Law Review | 0 | 62.17 | 0 | 0 | 0 | 0 |
| Journal of Empirical Legal Studies | 0 | 1.28 | 0 | 0 | 0 | 0 |
| Law Probability and Risk | 0 | 0.68 | 0 | 0 | 0 | 0 |
| Adelaide Law Review | 0 | B | 0 | 0 | 0 | 0 |
| Alternative Law Journal | 0 | B | 0 | 0 | 0 | 0 |
| Federal Law Review | 0 | A | 0 | 0 | 0 | 0 |
| Griffith Law Review | 0 | B | 0 | 0 | 0 | 0 |
| Melbourne University Law Review | 0 | A | 0 | 0 | 0 | 0 |
| Monash University Law Review | 0 | A | 0 | 0 | 0 | 0 |
| Sydney Law Review | 0 | A | 0 | 0 | 0 | 0 |
| University of New South Wales Law Journal | 0 | A | 0 | 0 | 0 | 0 |
| University of Queensland Law Journal | 0 | B | 0 | 0 | 0 | 0 |
| University of Tasmania Law Review | 0 | B | 0 | 0 | 0 | 0 |
| University of Western Australia Law Review | 0 | B | 0 | 0 | 0 | 0 |
Notes: Table 1 includes three groups of journals: the top 10 US law journals according to the Washington and Lee rankings, six journals that regularly publish empirical work, and 11 Australian law journals receiving As or Bs from the Council of Australian Law Deans (CALD). They are evaluated against the factors in the Transparency and Openness Promotion journal guidelines. The TOP Factor is the sum of 10 items (https://osf.io/t2yu5/), each awarded 0–3 based on how stringent the journal’s policy is. These 10 items are data citation, data transparency, analytic code transparency, materials transparency, reporting guidelines, study preregistration, analysis preregistration, replication, publication bias and open science badges. The latter five items are not listed in the table because all journals received 0s for them. The highest possible TOP Factor score is 30. The Impact Metric is the standard status indicator for each group of journals (the Washington and Lee combined score, the journal impact factor measuring citations to articles in the journal, or the CALD grade, which considers various sources but was last compiled in 2009).
Table 1 shows that our results are consistent with existing concerns about the level of methodological rigour that student-edited law journals require and promote (the table in our supplementary materials contains links to the journal websites containing these guidelines). None of the Australian journals scored above 0 on the TOP Factor. Only two of the 10 US student-edited law journals scored above 0: Yale Law Journal (4) and Stanford Law Review (2). Stanford received a 2 for data transparency because it requires the posting of data, subject to narrow exceptions for countervailing reasons. Specifically, Stanford Law Review’s policy is as follows:
At a minimum, empirical works must document and archive all datasets so that third parties may replicate the published findings. These datasets will be published on our website. The Stanford Law Review will make narrow exceptions on a case-by-case basis, particularly if the datasets involve issues of confidentiality and/or privacy.
Yale Law Journal received a 2 for data transparency, having a data policy similar to Stanford’s, and another 2 for applying that rule to analytic code as well.
The six journals we identified as regularly publishing empirical work fared better on the TOP Factor, but there is considerable room to improve. The relatively high scores for some of these journals came from interfacing with economics, a field in which the computational verification of reported findings is becoming more mainstream.[71] For example, the Journal of Legal Studies and Journal of Law and Economics have expressly adopted the data and materials guidelines used by economics journals. The American Law and Economics Review has policies for data and code, but they are not as demanding. The Journal of Legal Analysis has the strongest guidelines for data citations, adopting the Future of Research Communications and e-Scholarship (FORCE11) Joint Declaration of Data Citation Principles.[72] Troublingly, the Journal of Empirical Legal Studies and Law, Probability and Risk both scored 0 overall.
None of the studied journals had joined the 275 journals in other fields that accept RRs. Further, none awarded badges, recommended reporting guidelines or had any policies about replication.
Part III. Guidance for Researchers Seeking to Improve their Research Credibility
This section further elaborates on some of the credibility reforms we discussed in the previous part. First, we will provide some general guidance for empirical legal researchers interested in implementing these reforms. As part of that discussion, we will highlight resources particularly appropriate for social scientific research and widely-used guidelines that can be adapted for ELR methodologies. Following this, we discuss applying these reforms to three mainstream empirical (often digital) legal methodologies: content analyses of judicial decisions, surveys, and qualitative methods. These general recommendations are all subject to qualifications, such as the ethics of sharing certain types of data.
(a) Preregister Your Studies
Empirical legal researchers interested in enhancing their work’s transparency can preregister their studies using platforms like the OSF[73] or the American Economic Association registry.[74] These user-friendly services create a timestamped, read-only description of the project.[75] The registration can be made public immediately or embargoed for a pre-specified period or indefinitely. When reporting the results of preregistered work, authors should follow a few best practices. First, they should include a link to the preregistration so that reviewers and readers can confirm which parts of the study were pre-specified. Second, they should report the results of all pre-specified analyses, not just those that are most significant or surprising. Third, any unregistered parts of the study should be transparently reported, ideally in a separate subsection of the results. These ‘exploratory’ analyses should be presented as preliminary, that is, as testable hypotheses requiring confirmation. Finally, any changes from the pre-specified plan should be transparently reported. These changes (some of which will be trivial, while others may substantially alter the interpretability of the results) can be better evaluated if they are clearly described.
Several templates are available to guide researchers through a preregistration process.[76] Some are quite specific, leading the researcher through several questions about their study’s background, hypotheses, sampling and design. On the other end of the spectrum are those that give researchers free rein to describe the study in as little or as much detail as they like. We are unaware of any templates specifically designed for ELR (although this would be a worthwhile project). However, existing templates are well suited for experimental and observational research. Part III.2 will discuss how preregistration might operate in various ELR paradigms. We also discuss its limitations.
(b) Open Your Data and Analytic Code
Empirical legal researchers wishing to improve their work’s reusability and verifiability and ensure their efforts are not lost to time (as research ages, underlying data and materials become increasingly unavailable because authors are unreachable)[77] have many options open to them. Many free repositories have been developed to assist with research data storage and sharing. Given the availability of these repositories, the fact that the publishing journal does not host data itself (or make open data a requirement) is not a good reason to refrain from sharing. Rather, authors can reference a persistent locator (such as a digital object identifier [DOI]) provided by a repository.
Table 2 displays a selection of repositories that may be of particular interest to empirical legal researchers, along with some key features of those repositories. This is partly based on the work of Oliver Klein and colleagues, who explained the characteristics of data repositories that researchers should consider when deciding which service to use.[78] These include whether the service provides persistent identifiers (e.g., DOIs), whether they enable citations to the data, whether they ensure long-term storage and access to the data, and whether they comply with relevant legislation.
Table 2. A Selection of Data Repositories for Empirical Legal Research
| Name | Website | Highlights |
|---|---|---|
| figshare | | It is free up to 100GB and is often used for sharing data analysis and code. It is hosted in the UK and is a for-profit company. |
| GESIS datorium | | It is free and hosted by the Leibniz Institute for the Social Sciences (Germany), a non-profit organisation. Data can be embargoed, and access can be restricted. |
| Open Science Framework | | It is free and provides forms for preregistration that can be connected to projects. It is operated by the Center for Open Science, with options to store data in multiple countries. The Center for Open Science is a non-profit in the US. |
| The Qualitative Data Repository | | Fees apply (waivers available). It is operated by Syracuse University and focuses on qualitative data. It assists in determining what to share and how. |
| UK Data Service Standard Archiving | | It is free and focuses on large-scale survey data. Data must be sent to the service, which also curates the data (i.e., self-deposits are not possible). The UK Data Service is funded by the Economic and Social Research Council, which is a public body. Data can be embargoed, and access can be restricted. |
| Zenodo | | It is free and operated by the European Organization for Nuclear Research (CERN), a non-profit European research organisation hosted in the EU. Data can be embargoed, and access can be restricted. |
Notes: A list of six open access data repositories that empirical legal researchers may consider using. All but one (figshare) are non-profits. As described below, they provide hosting in various countries and jurisdictions.
Access to data alone already provides a substantial benefit to future research, but researchers can do more.[79] In this respect, researchers should strive to abide by the Global Open (GO) FAIR Guiding Principles, developed by an international group of academics, funders, industry representatives and publishers.[80] These principles are that data should be findable, accessible, interoperable and reusable (i.e., FAIR). We have already touched on findable data (e.g., via a persistent identifier) and accessible data (e.g., via a long-term repository). Interoperable data can be easily combined with other data and used by other systems (e.g., because code explains what the variables mean). Reusable data typically means data with a licence that allows reuse and ‘metadata’ that fully explain the data’s provenance. One helpful practice for improving interoperability and reusability is to associate the data with a ‘codebook’, a file explaining the meaning of each variable. The Center for Open Science maintains a guide to interoperability and reusability relevant to the social sciences.[81]
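To make the codebook idea concrete, the following is a minimal sketch in Python (with hypothetical file names and variables, not drawn from any particular study) of how a researcher might deposit a tidy dataset together with a machine-readable codebook so that the variables remain interpretable and reusable:

```python
import json
import pandas as pd

# Hypothetical coded dataset of judicial decisions (illustrative only).
decisions = pd.DataFrame({
    "case_id": ["2019_HCA_01", "2019_HCA_02"],
    "year": [2019, 2019],
    "expert_excluded": [1, 0],  # 1 = expert evidence excluded, 0 = admitted
    "court_level": ["appellate", "trial"],
})

# A simple codebook: one entry per variable explaining its meaning, type and values.
codebook = {
    "case_id": {"description": "Neutral citation identifier for the decision", "type": "string"},
    "year": {"description": "Year the decision was handed down", "type": "integer"},
    "expert_excluded": {
        "description": "Whether the challenged expert evidence was excluded",
        "type": "integer",
        "values": {"1": "excluded", "0": "admitted"},
    },
    "court_level": {
        "description": "Level of court that decided the case",
        "type": "string",
        "values": ["trial", "appellate"],
    },
}

# Save both files; depositing them together in a repository (with a DOI and an
# open licence) helps satisfy the FAIR principles discussed above.
decisions.to_csv("decisions.csv", index=False)
with open("codebook.json", "w") as f:
    json.dump(codebook, f, indent=2)
```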
In the context of digital research, producing FAIR data is particularly important because it allows other researchers using these methods to build off each other’s work. For example, FAIRness allows researchers to find (e.g., through Google’s dataset search)[82] and leverage (e.g., perform meta-analyses and systematic reviews to understand a finding’s robustness better) multiple datasets, which can enhance the scope of studies relying on digital research methods. It also enables researchers to combine and compare data across jurisdictions to test new hypotheses (e.g., see a project that studied using neuroscience as evidence in court across several jurisdictions[83]). By searching registries (i.e., databases of preregistrations), researchers can find studies whose data was never published in a conventional article, possibly because the results were null.
While FAIRness is important, some have raised questions about the relationship between the increasing accessibility of data and the growth of knowledge and wisdom.[84] While it can be tempting to view the Open Science Framework’s exponential growth (from hundreds of users in 2012 to over 200,000 today)[85] as an unqualified good, we should consider our relationship with data and what needs they fulfil. For instance, collecting more data for data’s sake and requiring openness do not necessarily make for better science; researchers must also have training and education in examining data so they can make accurate inferences from results (see Part IV).[86]
Concerning what should be shared, we recommend starting from the presumption that all raw data will be shared and then identifying any necessary restrictions and barriers.[87] Those restrictions should then be expressly noted in the manuscript. Preregistration can help with this process because it requires researchers to think through data collection before it occurs.
One of the most pressing considerations is privacy, which is especially salient for digital research methods.[88] For instance, several ethical issues arise from collecting and sharing data from online support forums.[89] Albrecht and Citro noted that privacy concerns, especially around personal health data, can rise to the level of human rights issues, particularly when vulnerable and marginalised groups are concerned.[90] Given the increasing amount of data that can be produced about individuals’ health statuses, researchers must mitigate data risks arising from power imbalances between the researcher(s) and their institution(s) and the communities from which they collect data.[91]
Privacy issues can sometimes be managed by seeking consent to release data as part of the consent procedure and by de-identifying data, both of which should be approved by the relevant institutional review board. Concerning consent, useful guidance can be found in the Open Brain Consent project.[92] Note that de-identification may not be possible to the extent necessary for ethical sharing (e.g., when risks of re-identification are high).[93] Additionally, numerous recent changes to laws around data protection should be considered during the research design stage to best enable the ethical sharing of de-identified participant data.[94] Fortunately, some repositories employ qualified personnel who can appropriately limit access to raw data,[95] and best practices exist for sharing data from human research participants.[96]
Analytic code, such as the scripts produced by many statistical software packages, allows readers to understand exactly how researchers produced the reported findings from the data. These packages typically allow authors to annotate the code with plain-language explanations.[97] Sharing code makes it possible for others to verify conclusions.
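As a minimal sketch of what such an annotated script might look like (in Python, with hypothetical file names, variables and a simple t-test standing in for whatever analysis a study actually reports), consider:

```python
import pandas as pd
from scipy import stats

# Load the raw, de-identified survey data deposited alongside the paper.
df = pd.read_csv("survey_raw.csv")

# Pre-specified exclusion: drop respondents who failed the attention check.
df = df[df["attention_check_passed"] == 1]

# Primary analysis (as preregistered): compare confidence ratings between
# the two vignette conditions using Welch's t-test.
control = df.loc[df["condition"] == "control", "confidence"]
treatment = df.loc[df["condition"] == "treatment", "confidence"]
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

# Print exact values so readers can check them against the manuscript.
print(f"n = {len(df)}, t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because each step from raw data to reported statistic is written down and commented, a reader with the data can rerun the script and confirm (or correct) the published numbers.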
(c) Open Your Materials
Open materials also enhance a study’s credibility. Researchers may wish to share materials like interview protocols and scripts, surveys and any image, video or audio files presented to participants.
One of the clearest benefits of open materials is replicability. In other words, future researchers can build off existing work by using the materials (e.g., surveys) in different periods and contexts. For example, a researcher may wish to repeat a survey or interview conducted in the past to test for change over time. Alternatively, future researchers may wish to modify instruments used in the past rather than reinvent the wheel, even if they do not exactly replicate a previous study.
(d) Consult an Existing or Analogous Checklist when Possible
We are unaware of any law journals requiring or encouraging authors to complete checklists when submitting their work for publication. However, there may be existing checklists for ELR methodologies developed by others using the same methodology. For instance, a group of social and behavioural scientists recently created a checklist for conducting transparent research in their field.[98] They used an iterative, consensus-based protocol (i.e., a ‘reactive-Delphi’ process) to ensure the checklist reflected the views of a diverse array of researchers and stakeholders in their field. Empirical legal researchers conducting behavioural research may find this transparency checklist useful when planning their research and preparing it for publication. The Equator Network curates a database of reporting checklists relevant to various research methods and disciplines.[99]
(e) Publish in Open Access Outlets
Publishing in open access journals also helps ensure that other researchers and the public can use the research. When this is not possible, researchers can publish their work as preprints, which are freely available versions of manuscripts shared on a public online repository.[100] Preprints are a promising avenue for allowing researchers to share their work and increase the attention it garners,[101] without the influence of esoteric journal formatting guidelines or editors’ pet interests. It may be beneficial for ELR to consider the responses of journals in other fields. For example, from July 2021 the biology journal eLife will only review papers that have first been posted as preprints.[102] This moves away from the model of journals as deciders of what should be published and towards journals acting as a preprint refereeing service.
Several other outlets exist for outputs that do not fit neatly into traditional categories (e.g., books and articles) but may be of interest to lay audiences and might be more accessible. This type of engagement may be important in improving the public’s view of scientific work and improving trust in science.[103] Examples of such engagement include citizen science, science PR and science communication through social media, blogging and other communication driven largely by the internet and mass media channels.[104]
2. Three Specific Applications of Research Credibility to ELR
(a) Systematic Content Analysis of Judicial Decisions
Content analysis of judicial decisions has been widely used to address important legal issues.[105] As with other methods, credible research practices can be used to strengthen the inferences drawn from the analysis of judicial decisions and help ensure their enduring usefulness and impact.[106] This subsection drills down into two specific ways credible practices can be applied to the empirical analysis of legal authorities: preregistering these studies and using transparent methods.
Preregistration poses a particular challenge when the data are pre-existing, as with the analysis of judicial decisions and other authorities. This is because, in preregistration’s purest form, the hypothesis and methodology should be developed before the researchers have access to the data. If this is not the case, researchers may inadvertently present their hypotheses as independent of the data when they are actually constructing an explanation for what they already knew. Another challenge is that researchers may be tempted to sample and code cases in a way that fits their narrative. For instance, researchers may unconsciously determine that cases are relevant or irrelevant to their sample based on what would produce a more publishable result. While these challenges are significant, they do not mean that preregistration is not worthwhile when systematically analysing judicial decisions. Indeed, other useful methods that rely on pre-existing data, such as systematic reviews and meta-analyses, incorporate preregistration as part of best practice.[107]
One of us has some experience preregistering content analyses of judicial decisions and has found it to be a challenging but useful exercise. For instance, in a recent study, he and colleagues sought to determine whether a widely celebrated Supreme Court of Canada evidence law decision had produced more exclusions of expert witnesses accused of bias than the previous doctrine had.[108] The challenge was that it would have been useful to examine some cases before coding them to understand how long the process would take (e.g., for staffing purposes) and how to set up the coding scheme (e.g., evaluating whether judges advert to different aspects of the new doctrine so coders could unambiguously say courts were relying on these rules). However, they were also aware that it would be easy to change their coding scheme and inclusion criteria based on the data to show a more startling result (e.g., slightly shifting the timeframe might make it seem that the focal case had more or less of an impact). In other words, some type of preregistration was needed, but the standard form seemed too restrictive.
Accordingly, to accommodate the difficulty of pre-deciding the criteria, they established the temporal scope of the search before looking at the cases but read a portion of the cases before finalising the coding scheme. They disclosed this encroachment into the data in the preregistration so readers could adjust their interpretation of the results accordingly. This was useful because they could account for several issues that would have been challenging to anticipate without prior knowledge of the cases. For instance, judges in bench trials would sometimes not exclude a witness but rather say that the witness would be assigned no weight, and it was difficult to determine whether this should be coded as an exclusion. By looking at some of the data, they could anticipate this issue for the bulk of the cases. Had the coding been done completely ad hoc, without any preregistration, it would have been challenging to decide how to code these cases in an unbiased way. They also took the step of disclosing cases that were borderline and required discussion among the authors, although they did not do so as systematically as they would in the future.[109] In the end, they could give what they thought was a credible picture of the target case’s effect, with preregistration helping to reduce the possible influence of bias and highlighting the study’s limitations.
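One way to make such coding decisions transparent is to keep the coding scheme and the per-case calls in a machine-readable log that can be preregistered and later shared. The following is a hypothetical sketch in Python (the codes, cases and file names are invented for illustration, not taken from the study described above):

```python
import csv

# Hypothetical coding scheme: each code and the decision rule attached to it.
CODING_SCHEME = {
    "excluded": "Expert evidence ruled inadmissible under the focal doctrine",
    "admitted_no_weight": "Evidence admitted but expressly given no weight "
                          "(treated as an exclusion under the pre-agreed rule)",
    "admitted": "Evidence admitted and relied upon",
}

# Per-case coding log, flagging borderline cases that required discussion
# among the authors so that readers can re-examine those calls.
rows = [
    {"case_id": "2018_ONSC_123", "code": "admitted_no_weight", "borderline": True,
     "note": "Bench trial; judge admitted the evidence but assigned it no weight."},
    {"case_id": "2018_BCCA_045", "code": "excluded", "borderline": False, "note": ""},
]

with open("coding_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["case_id", "code", "borderline", "note"])
    writer.writeheader()
    writer.writerows(rows)
```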
Notably, preregistration should not be used to inhibit exploratory research. For instance, researchers engaging in linguistic analysis of documents and digital resources (e.g., using Leximancer or NVivo) often do not start with pre-defined hypotheses that can be preregistered. However, this type of exploratory work is just as important as confirmatory research, and preregistration does not inhibit exploration. Rather, it allows researchers to ‘explore their data in any way they like, as long as it is clear that is what they are doing’.[110] More generally, the idea is not that these practices ensure perfectly accurate results. Instead, as with all the transparency-related practices we have discussed, they should help other researchers evaluate the claims being made.[111]
Other transparency and openness efforts can also improve the analysis of judicial decisions. One common tool in systematic reviews (which also use pre-existing data) that legal researchers may leverage is the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram (see Figure 3).[112] These diagrams thoroughly document the process by which an existing body of knowledge is searched. Like legal researchers, systematic reviewers typically start with keyword searches to identify a group of published studies. These are then winnowed down, based on preregistered criteria, to what is ultimately analysed. The resulting PRISMA diagram provides a concise explanation of that process.
Figure 3. PRISMA Flow Diagram
Notes: The PRISMA flow diagram is used in many fields to improve the transparency of the secondary analysis of pre-existing studies. Specifically, it makes clearer why some studies were included in the analysis while others were not. As demonstrated in this figure, it can be adapted by empirical legal researchers to transparently report the authorities included in their analysis and those that were excluded. In this example, the researcher searched two prominent databases, but this can be changed as needed.
To see the value of a PRISMA diagram in the context of ELR, consider a recent series of studies that examined the use of neuroscience in courts across several jurisdictions. The findings showed that reliance on neuroscience evidence has increased, and the researchers detailed its different uses.[113] Subsequent researchers may wish to extend those findings to see whether the discovered trends and uses have changed. They may further want to stand on the shoulders of the earlier researchers and extend the analysis to other jurisdictions. In either case, they would need clear methods to follow to reproduce the searches, exclusion criteria and coding. It would be beneficial to be able to follow a well-understood framework like PRISMA to see exactly what was searched and how the search results were reduced to what was presented in the article.
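In practice, the counts behind a PRISMA-style diagram can be logged while screening decisions so the final figure is exactly reproducible. A rough sketch in Python follows; the database names and numbers are invented for illustration:

```python
# Hypothetical screening log for a PRISMA-style flow diagram (numbers invented).
flow = {
    "records_identified": {"Database A": 412, "Database B": 389},
    "after_duplicates_removed": 601,
    "excluded_on_title_or_headnote": 430,  # e.g., expert evidence not at issue
    "full_text_assessed": 171,
    "excluded_on_full_text": 54,           # e.g., interlocutory or procedural only
    "included_in_analysis": 117,
}

# Simple consistency checks before drawing the diagram.
assert flow["after_duplicates_removed"] - flow["excluded_on_title_or_headnote"] \
    == flow["full_text_assessed"]
assert flow["full_text_assessed"] - flow["excluded_on_full_text"] \
    == flow["included_in_analysis"]

for stage, count in flow.items():
    print(stage, count)
```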
(b) Survey Studies
Legal researchers have used surveys, often conducted online, to address various questions, such as what Australian judicial officers perceive as their greatest challenges and bankruptcy’s impact on debtors.[114] Moreover, almost half of all quantitative studies on topics related to crime and criminal justice use surveys.[115]
Many existing resources aim to improve survey methodology (e.g., sampling and the wording of questions and prompts).[116] However, we are interested in improving the credibility of survey research in law: ensuring that survey findings are reproducible, that any errors are detected and corrected, and that conclusions are calibrated to the strength of the evidence.
One key aspect of credible survey research—one that is regularly breached in law and elsewhere—is data transparency. In the case of surveys, data transparency pertains to reporting and making available to other researchers all key information about the questionnaire and data collection procedures. An excellent guide is the American Association of Public Opinion Research’s (AAPOR) survey disclosure checklist, which recommends that researchers disclose the survey sponsor, data collection mode, sampling frame, field date (or dates of administration) and the exact wording of questions.[117]
Beyond the exact wording of questions, which should be disclosed per AAPOR, we also recommend that researchers make the entire questionnaire publicly available. This permits others to reuse the questions and to replicate the order of questions, which can substantially affect responses.[118]
In terms of sampling, researchers should state explicitly whether sample selection was probabilistic (i.e., random selection) and describe how sampled respondents compare to the population of interest. This is important because researchers are increasingly using online non-random samples recruited via crowdsourcing or opt-in panels,[119] yet they sometimes mislabel these samples as ‘representative’, even when they may not match the general population on important characteristics. Mislabelling these online convenience samples as representative (which suggests they are probabilistic) may lead readers to put more confidence in the generalisability of findings than is warranted.
Another key type of information that researchers using surveys should disclose is the response rate. Reviews of the literature have revealed widespread failure to report response rates and inconsistencies in calculating those rates.[120] Smith concluded that ‘disclosure standards are routinely ignored, and technical standards regarding definitions and calculations have not been widely adopted in practice’.[121] The problem has only worsened with the expansion of online sampling, which further complicates the calculation of response rates. For example, correctly calculating the cumulative response rate for a survey fielded with a probability-based online panel, like NORC’s AmeriSpeak panel, requires considering the initial panel recruitment rate and the panel profiling rate. However, researchers frequently misreport the study-specific completion rate as the response rate.[122] A path forward requires all researchers using survey data to report the response rate and to adhere to the AAPOR’s standard definitions when calculating that rate.[123]
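The AAPOR Standard Definitions set out several response rate formulas, and the right one depends on the design; purely as a hedged illustration, a minimal calculation in the spirit of AAPOR’s RR1 (completed interviews over all eligible and potentially eligible cases) might look like this in Python, with invented disposition counts:

```python
def response_rate_rr1(complete, partial, refusal, non_contact, other, unknown):
    """Sketch of a minimum response rate in the spirit of AAPOR's RR1:
    complete interviews divided by all eligible plus potentially eligible
    cases. Consult the AAPOR Standard Definitions for the formula that
    matches a given design."""
    denominator = complete + partial + refusal + non_contact + other + unknown
    return complete / denominator

# Hypothetical disposition counts for an online survey.
rr1 = response_rate_rr1(complete=850, partial=60, refusal=120,
                        non_contact=300, other=20, unknown=150)
print(f"Response rate (RR1-style): {rr1:.1%}")  # about 56.7%
```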
Finally, survey researchers should document and disclose their selection criteria transparently. Beyond common eligibility criteria, such as adult status and country of residence, online platforms give researchers numerous other options that can affect sample composition, and these options are rarely reported. For example, on Amazon Mechanical Turk, researchers commonly restrict participation to workers with certain reputation scores (e.g., at least 95% approval) or levels of prior online task experience (e.g., 500 or more completed tasks).[124] Such eligibility restrictions can have a profound effect on data quality and sample composition.[125] Therefore, we recommend that researchers using online platforms like Amazon’s Mechanical Turk disclose all employed eligibility restrictions. Additionally, researchers should disclose whether they excluded respondents for quality control reasons, such as speeding through the survey, failing attention checks or participating repeatedly (e.g., duplicate Internet Protocol addresses), and should report how the exclusions affect the findings. Such exclusions, if undisclosed and decided on after looking at their effects on the findings, are QRPs and can produce false-positive findings.[126]
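One way to keep such exclusions transparent is to apply them in code and report the counts; the sketch below (in Python, with hypothetical file and column names) illustrates the idea:

```python
import pandas as pd

df = pd.read_csv("mturk_responses.csv")  # hypothetical raw data file
n_raw = len(df)

# Pre-specified quality-control exclusions (disclose each in the manuscript).
too_fast = df["duration_seconds"] < 120           # speeding through the survey
failed_check = df["attention_check_passed"] == 0  # failed attention check
duplicate_ip = df.duplicated(subset="ip_hash")    # repeat participation

clean = df[~(too_fast | failed_check | duplicate_ip)]

# Report how many respondents each criterion removed; ideally, also rerun the
# main analysis with and without exclusions to show how the results change.
print(f"Raw n = {n_raw}; speeding = {too_fast.sum()}, "
      f"failed checks = {failed_check.sum()}, duplicates = {duplicate_ip.sum()}; "
      f"analytic n = {len(clean)}")
```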
(c) Qualitative Research
Qualitative methods also play an important role in ELR[127] and in the open science movement. These include methods like ethnographies, interviews, focus groups and case studies.[128] While it may be tempting to assume that the reforms we have discussed are inappropriate for seemingly more freeform and exploratory research, that is not always the case. We do not deny that there are fundamental differences between quantitative and qualitative methods. For instance, Lisa Webley noted that qualitative researchers often see their work as more interpretivist than positivist, more inductive than deductive, and at times more interested in socially constructed facts than in a universal meaning.[129] Additionally, many qualitative researchers are sceptical of the modern credibility revolution and open science movement, viewing them as imposing quantitatively focused evaluation criteria on qualitative researchers without understanding the contextual or epistemological differences between these types of research.[130] Sensitivity to these epistemological differences has sometimes been lacking during social science’s credibility revolution.[131] However, even though many reforms are grounded in positivist frameworks, ongoing research seeks to translate some new practices to qualitative research methods.
Before delving into the reforms, we note that we are not the first to highlight the importance of credibility in qualitative legal research. For instance, Allison Christians examined research methodology in a meta-analysis of case studies in international law. She found that not one article explained why the specific case was chosen: ‘In each case, the articles simply identify the event or phenomenon as a “case” without further discussion’.[132] She notes that this raises the possibility of selection bias, whereby the case is not representative or probative of the claim it seeks to support. Further, Christians found that studies did not sufficiently explain why certain material was collected to document the case and why other materials were left out. She suggested, ‘What is missing from the literature and what might make the data even more compelling, is a discussion about the authors’ objectives, processes, and reasoning for collecting and using the data’.[133] Further, when data and materials were relied on, Christians found that these sources were often uncited.[134] Recall that data citation is a TOP guideline that only three journals in our sample addressed (Table 1).
Christians’ observation about thinking through and justifying case selection reinforces the importance of preregistering qualitative studies (when preregistration aligns with the research epistemology).[135] Indeed, preregistration is an increasingly discussed reform in positivist circles within qualitative research. One group recently developed a preregistration form through a consensus-based process (the same process used to create the checklist for behavioural studies mentioned above) in which many researchers in the field participated.[136] Qualitative preregistrations may include details about the research team’s background as it relates to the study, as a form of reflexivity (e.g., attending to the experiences, positions and potential biases the researchers bring to bear on what they are studying). They may further include research questions and aims, planned procedures, sampling plans, data collection procedures, planned evidence criteria and triangulation, auditing and validation plans. As qualitative research tends to be more iterative than quantitative research, preregistrations may be most useful not as a means for researchers to establish ‘objectivity’ but as a means to fully explore the assumptions they make going into their study. This is another tool for reflexivity as the study progresses.
If preregistration does not align with one’s research epistemology, it is still possible to engage in transparent practices so that others might evaluate research decisions and learn from researchers’ practices. Researchers may be interested in maintaining open laboratory notebooks (adapted to qualitative practices)[137] and/or sharing their research materials (e.g., recruitment materials, interview and focus group protocols, fieldnotes and codebooks) on a repository, such as the Open Science Framework. Data may also be shared on the Open Science Framework or the Qualitative Data Repository.[138] However, there are important ethical issues to consider before sharing data. Kristen Monroe outlined several concerns with the Data Access and Research Transparency (DA-RT) and Journal Editors Transparency Statement (JETS) initiatives as they relate to qualitative research.[139] These include space constraints that may hinder fully accounting for qualitative data, participant protection, the time needed to prepare qualitative data for sharing, data collection costs, the right of first usage and a potential chilling effect on certain research topics. Others have outlined concerns surrounding missing layers of interpretation and the importance of consent as an ongoing process.[140]
Researchers should handle datasets involving information from vulnerable populations (e.g., sexual assault survivors or refugees) with care so that participants’ personal information is appropriately protected. Fortunately, many data repositories offer access controls, so researchers may embargo data or set conditions for access if desired (Table 2). Additionally, some researchers have begun including consent language that permits sharing data with other researchers on the condition that participants’ anonymity is preserved, or offering conditional consent, whereby participants agree to take part but their data are not shared with anyone other than the study’s authors.
Part IV: Guidance for Journals and Law Schools
Most researchers readily endorse norms associated with the reforms we have discussed above.[141] Still, in other fields, expressed acceptance of these norms exceeds the associated behaviours (e.g., researchers say sharing materials is important but do not always live up to that ideal).[142] Part III attempted to address one reason behaviours may lag behind norms, namely a lack of concrete guidance aimed at legal researchers. We now address two more factors that affect researcher behaviour: incentives and policies. We draw on general research on these factors but adapt it to the distinctive ecosystem of legal research and teaching.
As discussed above, there is considerable heterogeneity in guidelines among law journals. The most significant advancement in transparency that we found was that two journals had adopted guidelines from economics journals. This suggests that journal editors and boards in the ELR space may be open to adopting new guidelines. However, it is important that the task be as easy as possible and that the guidelines have been tested in similar fields. For that reason, we suggest low-cost and pre-vetted steps that journals may take.
Law journals should consider the sample implementation language for the TOP guidelines curated by the Center for Open Science,[143] which can be adapted to the needs of the specific journal. Similarly, journals may consult the guidelines of other law journals in our sample that have implemented TOP (Table 1). RRs, which have been adopted by journals in fields ranging from psychology to medicine, can be rolled out easily by law journals, with recommended author and reviewer instructions available for reuse.[144] RRs may be especially attractive in law, where early-stage reviews may assist researchers with more limited methodological backgrounds. Looking towards the horizon, it may be time for the field to develop a journal with a philosophy that values methods over results and that regularly publishes work about methodology itself, such as the present article (potential models in psychology are Advances in Methods and Practices in Psychological Science and Collabra: Psychology).
The situation with US student-edited and reviewed law journals is more complicated because, among other reasons, their editorial boards experience a great deal of turnover. Student editors may be less likely to have empirical backgrounds, and, as Part III indicated, the uptake of current guidelines can be improved. Some of these factors are also at play among Australian law journals. However, the landscape at US student-edited journals appears to include an additional biasing factor: editors consider whether an article has been accepted at other eminent journals. Consequently, editors may be hesitant to screen out articles that have received approval from other journals but do not meet high methodological standards.
These unique hurdles are not insurmountable in the long run. One incremental measure may be for these journals to award badges. Recall that badges do not necessarily factor into article acceptance. Instead, they allow authors to signal to others that they have taken steps to improve their work’s credibility and usability.[145] Moreover, some of the current article’s authors are beginning a project to draft sample guidelines designed for law journals that occasionally publish empirical work. Having these ready-made guidelines may make the change less daunting. We plan to circulate them among law journals, providing many of the justifications presented in this article. Please contact the corresponding author if you would like to contribute.
Finally, simply encouraging the submission of replication studies can be an important step toward improvement and reform. The International Review of Law & Economics, a US peer-reviewed journal, recently held two conferences featuring replication studies that were eventually published in special issues on replication in law and economics.[146] As discussed above, replication can be essential in correcting and clarifying existing findings,[147] but such studies remain rare. Without empirical evidence about a field’s replicability, it may be difficult to see the need for reform. Either individual studies or larger efforts to systematically estimate the replicability of a sub-discipline can provide insight into the extent and consequences of these problems.
Law schools and law faculties can also play a role in encouraging credible practices. This naturally begins with hiring, where committees already seem to place some value on empirical research by hiring individuals who do such work.[148] However, it is less clear whether these committees place value on the credibility and rigour of empirical work (as opposed to factors like its surprisingness and ability to draw headlines). If committees do not consider credible practices, their hiring decisions may perpetuate irreproducible research.
Hiring committees may wish to align their search criteria and candidate evaluation with recent work laying out frameworks for basing researcher assessments on the credibility of their work.[149] The Hong Kong Principles, for example, distilled researcher assessment into five factors. They sought to move fields from success indicators (e.g., the esteem of journals and impact factors) to other criteria (e.g., disseminating knowledge through open data and analysing existing but poorly understood work through evidence syntheses and meta-analyses). However, policies that place weight on transparency in hiring should also consider barriers to taking up such practices.[150]
Precedents are available to assist institutions seeking to change their hiring practices. The Center for Open Science maintains a list of job listings that refer to open research practices.[151] For instance, a recent University of Toronto listing stated, ‘Our department embraces the values of open science and strives for replicable and reproducible research. We therefore support transparent research with open data, open material, and pre-registrations. Candidates are asked to describe in what way they have already pursued and/or plan to pursue open science’.[152] These principles may also be applied to tenure standards.
After hiring, more may be done to promote collaboration between those who have specialised empirical training and experience and those who do not.[153] This would be especially useful in law, where legal experts may not be expected to have the specialised knowledge needed to empirically test their ideas. One barrier to such collaboration lies in authorship norms and the concern that methodological work may go unrecognised. In these circumstances, law schools may take note of the move towards a ‘contributorship’ model of authorship, which recognises the various types of work that go into a publication (see the CRediT statement that began this article).[154]
Internal encouragement within schools and faculties can only go so far, especially when the broader environment rewards high-impact publications (where impact is often not directly related to methodological strength). This may be especially problematic in the US, where law school rankings are tied to the eminence of the journals in which the faculty publishes. Still, there are incentives to focus on methodology in the US and abroad.[155] For instance, funders are increasingly concerned with, and sometimes require, open practices.[156]
In producing knowledge for the legal system, empirical legal researchers have a heightened duty to present a full picture of the evidence underlying their results. We are excited about what the next several years hold for better fulfilling that duty. While there are sticking points, like the need for more training, there are also a growing number of models to follow from cognate fields and an energised group of researchers motivated to put them into action. In the past, calls for change in ELR have sometimes gone unheeded, but never have they been made in the context of a large, sustained movement in the rest of the research ecosystem.
CRediT Statement[157]
Bibliography
Aczel, Balazs, Barnabas Szaszi, Alexandra Sarafoglou, Zoltan Kekecs, Šimon Kucharský, Daniel Benjamin, Christopher D. Chambers et al. “A Consensus-Based Transparency Checklist.” Nature Human Behaviour 4, no 1 (2020): 4–6. https://doi.org/10.1038/s41562-019-0772-6.
Albrecht, Kat and Brian Citro. “Data Control and Surveillance in the Global TB Response: A Human Rights Analysis.” Law, Technology and Humans 2, no 1 (2020): 107–123. https://doi.org/10.5204/lthj.v2i1.1487.
Ali, Paul, Lucinda O’Brien and Ian Ramsay. “Bankruptcy and Debtor Rehabilitation: An Australian Empirical Study.” Melbourne University Law Review 40, no 3 (2017): 688–737.
American Association of Public Opinion Research. “Standard Definitions.” https://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf.
American Association of Public Opinion Research. “Survey Disclosure Checklist.” https://www.aapor.org/Standards-Ethics/AAPOR-Code-of-Ethics/Survey-Disclosure-Checklist.aspx.
Anderson, Melissa S., Brian C. Martinson and Raymond De Vries. “Normative Dissonance in Science: Results from a National Survey of US Scientists.” Journal of Empirical Research on Human Research Ethics 2, no 4 (2007): 3–14. https://doi.org/10.1525%2Fjer.2007.2.4.3.
Angrist, Joshua D. and Jörn-Steffen Pischke. “The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con Out of Econometrics.” Journal of Economic Perspectives 24, no 2 (2010): 3–30. https://doi.org/10.1257/jep.24.2.3.
Appleby, Gabrielle, Suzanne Le Mire, Andrew Lynch and Brian Opeskin. “Contemporary Challenges Facing the Australian Judiciary: An Empirical Interruption.” Melbourne University Law Review 42, no 2 (2019): 299–369.
Baker, Monya. “1,500 Scientists Lift the Lid on Reproducibility.” Nature News 533, no 7604 (2016): 452. https://doi.org/10.1038/533452a.
Bannier, Elise, Gareth Barker, Valentina Borghesani, Nils Broeckx, Patricia Clement, Kyrre E. Emblem, Satrajit Ghosh et al. “The Open Brain Consent: Informing Research Participants and Obtaining Consent to Share Brain Imaging Data.” Human Brain Mapping 42, no 7 (2021): 1945–1951. https://doi.org/10.1002/hbm.25351.
Barnett, Kathy. “Citation as a Measure of ‘Impact’: Female Legal Academics at a Disadvantage?” Alternative Law Journal 44, no 4 (2019): 267–274. https://doi.org/10.1177/1037969X19874847.
Bavli, Hillel J. “Credibility in Empirical Legal Analysis.” SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3434095.
Bell, Felicity. “Empirical Research in Law.” Griffith Law Review 25, no 2 (2016): 262–282. https://doi.org/10.1080/10383441.2016.1236440.
Bowrey, Kathy. A Report into Methodologies Underpinning Australian Law Journal Rankings. (Report prepared for the Council of Australian Law Deans, February 2016).
Branney, Peter, Kate Reid, Nollaig Frost, Susan Coan, Amy Mathieson and Maxine Woolhouse. “A Context-Consent Meta-Framework for Designing Open (Qualitative) Data Studies.” Qualitative Research in Psychology 16, no 3 (2019): 483–502. https://doi.org/10.1080/14780887.2019.1605477.
Center for Open Science. “Approved Protected Access Repositories.” https://osf.io/tvyxz/wiki/8.%20Approved%20Protected%20Access%20Repositories/.
Center for Open Science. “Best Practices.” https://help.osf.io/hc/en-us/categories/360001530634-Best-Practices.
Center for Open Science. “OSF Preregistration.” https://osf.io/prereg/.
Center for Open Science. “Registered Reports: Peer Review before Results are Known to Align Scientific Values and Practices.” https://www.cos.io/initiatives/registered-reports.
Center for Open Science. “Templates of OSF Registration Forms.” https://osf.io/zab38/wiki/home/.
Center for Open Science. “The TOP Guidelines were Created by Journals, Funders, and Societies to Align Scientific Ideals with Practices.” https://cos.io/top.
Center for Open Science. “TOP Factor.” http://topfactor.org.
Center for Open Science. “TOPMixedLevelsJournals.gdoc.” https://osf.io/edtxm.
Center for Open Science. “Universities.” https://osf.io/kgnva/wiki/Universities.
Chambers, Chris. “What’s Next for Registered Reports?” Nature 573 (2019): 187–189. https://doi.org/10.1038/d41586-019-02674-6.
Chin, Jason M., Michael Lutsky and Itiel E. Dror. “The Biases of Experts: An Empirical Analysis of Expert Witness Challenges.” Manitoba Law Journal 42 (2019): 21.
Christensen, Garret, Zenan Wang, Elizabeth Paluck, Nicholas Swanson, David Birke, Edward Miguel and Rebecca Littman. “Open Science Practices Are on the Rise: The State of Social Science (3S) Survey.” MetaArXiv. Last modified October 18, 2019. https://doi.org/10.31222/osf.io/5rksu.
Christians, Allison. “Case Study Research and International Tax Theory.” St Louis University Law Journal 55 (2010): 331–367.
Claremont McKenna College’s Program on Empirical Legal Studies (PELS). “Call for Papers: Empirical Legal Studies Replication Conference, Spring 2019.” https://www.cmc.edu/robert-day-school/call-for-papers-empirical-legal-studies-replication-conference-spring-2019.
Diamond, Shari Seidman. “Empirical Legal Scholarship: Observations on Moving Forward.” Northwestern University Law Review 113 no 5 (2019): 1229–1241.
Diamond, Shari Seidman. “Empirical Marine Life in Legal Waters: Clams, Dolphins, and Plankton.” University of Illinois Law Review 2002, no 4 (2002): 803–818.
Dickersin, Kay and Iain Chalmers. “Recognizing, Investigating and Dealing with Incomplete and Biased Reporting of Clinical Research: from Francis Bacon to the WHO.” Journal of the Royal Society of Medicine 104, no 12 (2011): 532–538. https://doi.org/10.1258%2Fjrsm.2011.11k042.
Dillman, Don A., Jolene D. Smyth and Leah Melani Christian. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. New Jersey: John Wiley & Sons, 2014.
Epstein, Lee and Gary King. “The Rules of Inference.” University of Chicago Law Review 69, no 1 (2002): 1–133.
Equator Network. “Enhancing the QUAlity and Transparency Of health Research.” https://www.equator-network.org/.
Fanelli, Daniele. “Negative Results are Disappearing from Most Disciplines and Countries.” Scientometrics 90, no 3 (2012): 891–904. https://doi.org/10.1007/s11192-011-0494-7.
Flake, Jessica K. and Eiko I. Fried. “Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them.” Advances in Methods and Practices in Psychological Science 3, no 4 (2020): 456–465. https://doi.org/10.1177/2515245920952393.
Force11. “Joint Declaration of Data Citation Principles—Final.” https://www.force11.org/datacitationprinciples.
Frankenhuis, Willem E. and Daniel Nettle. “Open Science is Liberating and can Foster Creativity.” Perspectives on Psychological Science 13, no 4 (2018): 439–447. https://doi.org/10.1177/1745691618767878.
Fu, Darwin Y. and Jacob J. Hughey. “Meta-Research: Releasing a Preprint is Associated with More Attention and Citations for the Peer-Reviewed Article.” Elife 8 (2019): e52646. https://doi.org/10.7554/eLife.52646.
Funk, Cary, Meg Hefferon, Brian Kennedy, and Courtney Johnson. “Trust and Mistrust in Americans’ Views of Scientific Experts.” Pew Research Center. https://www.pewresearch.org/science/2019/08/02/trust-and-mistrust-in-americans-views-of-scientific-experts/
Galloway, Mitchell T. “The States Have Spoken: Allow Expanded Media Coverage of the Federal Courts.” Vanderbilt Journal of Entertainment & Technology Law 21, no 3 (2018): 777–822.
Gelman, Andrew. “Ethics and Statistics: Honesty and Transparency are not Enough.” Chance 30, no 1 (2017): 37–39. https://doi.org/10.1080/09332480.2017.1302720.
Gibbons, Michael. “Science’s New Social Contract with Society.” Nature 402, no 6761 (1999): C81–C84. https://doi.org/10.1038/35011576.
Google. “Dataset Search.” https://datasetsearch.research.google.com/.
Gow, James, Colin Moffatt and Jamie Blackport. “Participation in Patient Support Forums May Put Rare Disease Patient Data at Risk of Re-Identification.” Orphanet Journal of Rare Diseases 15, no 1 (2020): 1–12. https://doi.org/10.1186/s13023-020-01497-3.
Hall, Mark A. and Ronald F. Wright. “Systematic Content Analysis of Judicial Opinions.” California Law Review 96, no 1 (2008): 63–122.
Han, SeungHye, Tolani F. Olonisakin, John P. Pribis, Jill Zupetic, Joo Heung Yoon, Kyle M. Holleran, Kwonho Jeong et al. “A Checklist is Associated with Increased Quality of Reporting Preclinical Biomedical Research: A Systematic Review.” PLoS ONE 12, no 9 (2017): e0183591. https://doi.org/10.1371/journal.pone.0183591.
Hardwicke, Tom E., Stylianos Serghiou, Perrine Janiaud, Valentin Danchev, Sophia Crüwell, Steven N. Goodman and John P.A. Ioannidis. “Calibrating the Scientific Ecosystem through meta-Research.” Annual Review of Statistics and Its Application 7 (2020): 11–37. https://doi.org/10.1146/annurev-statistics-031219-041104.
Haven, Tamarinde L. and Leonie Van Grootel. “Preregistering Qualitative Research.” Accountability in Research 26, no 3 (2019): 229–244. https://doi.org/10.1080/08989621.2019.1580147.
Haven, Tamarinde, Timothy Errington, Kristian Gleditsch, Leonie van Grootel, Alan Jacobs, Florian Kern et al. “Preregistering Qualitative Research: A Delphi Study.” SocArXiv. Last modified July 6, 2020. http://doi.org/10.31235/osf.io/pz9jr.
Holcombe, Alex. “Farewell Authors, Hello Contributors.” Nature 571, no 7763 (2019): 147–148. https://doi.org/10.1038/d41586-019-02084-8.
Holcombe, Alex O., Marton Kovacs, Frederik Aust and Balazs Aczel. “Documenting Contributions to Scholarly Articles Using CRediT and Tenzing.” PLoS ONE 15, no 12 (2020): e0244611. https://doi.org/10.1371/journal.pone.0244611.
Horbach, Serge P. J. M. and Willem W. Halffman. “The Changing Forms and Expectations of Peer Review.” Research Integrity and Peer Review 3, no 1 (2018): 1–15. https://doi.org/10.1186/s41073-018-0051-5.
Hunter, Jill and Richard I. Kemp. “Proposed Changes to the Tendency Rule: A Note of Caution.” Criminal Law Journal 41 (2017): 253–260.
Jacobi, Tonja, Zoë Robinson and Patrick Leslie. “Querying the Gender Dynamics of Interruptions at Australian Oral Argument.” University of New South Wales Law Journal Forum 4 (2020): 1–19.
John, Leslie K., George Loewenstein and Drazen Prelec. “Measuring the Prevalence of Questionable Research Practices with Incentives for Truth Telling.” Psychological Science 23, no 5 (2012): 524–532. https://doi.org/10.1177%2F0956797611430953.
Jupyter. “Jupyter.” https://jupyter.org.
Kidwell, Mallory C., Ljiljana B. Lazarević, Erica Baranski, Tom E. Hardwicke, Sarah Piechowski, Lina-Sophia Falkenberg, Curtis Kennett et al. “Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency.” PLoS Biology 14, no 5 (2016): e1002456. https://doi.org/10.1371/journal.pbio.1002456.
Kleck, Gary, Jongyeon Tark and Jon J. Bellows. “What Methods are Most Frequently Used in Research in Criminology and Criminal Justice?” Journal of Criminal Justice 34, no 2 (2006): 147–152. https://doi.org/10.1016/j.jcrimjus.2006.01.007.
Klein, Olivier, Tom E. Hardwicke, Frederik Aust, Johannes Breuer, Henrik Danielsson, Alicia Hofelich Mohr, Hans IJzerman et al. “A Practical Guide for Transparency in Psychological Science.” Collabra: Psychology 4, no 1 (2018): 1–15. https://doi.org/10.1525/collabra.158.
Korobkin, Russell. “Empirical Scholarship in Contract Law: Possibilities and Pitfalls.” University of Illinois Law Review 2002, no 4 (2002): 1033–1066.
Krause, Nicole M., Dominique Brossard, Dietram A Scheufele, Michael A Xenos and Keith Franke. “Trends—Americans’ Trust in Science and Scientists.” Public Opinion Quarterly 83, no 4 (2019): 817–836. https://doi.org/10.1093/poq/nfz041.
Loughland, Amelia. “Female Judges, Interrupted: A Study of Interruption Behaviour during Oral Argument in the High Court of Australia.” Melbourne University Law Review 43, no 2 (2019): 822–851.
Marsella, Anthony J. “Toward a ‘Global-Community Psychology’: Meeting the Needs of a Changing World.” American Psychologist 53, no 12 (1998): 1282–1291. https://doi.org/10.1037/0003-066X.53.12.1282.
Meyer, Michelle N. “Practical Tips for Ethical Data Sharing.” Advances in Methods and Practices in Psychological Science 1, no 1 (2018): 131–144. https://doi.org/10.1177/2515245917747656.
Moher, David, Lex Bouter, Sabine Kleinert, Paul Glasziou, Mai Har Sham, Virginia Barbour, Anne-Marie Coriat et al. “The Hong Kong Principles for Assessing Researchers: Fostering Research Integrity.” PLoS Biology 18, no 7 (2020): e3000737. https://doi.org/10.1371/journal.pbio.3000737.
Monroe, Kristen Renwick. “The Rush to Transparency: DA-RT and the Potential Dangers for Qualitative Research.” Perspectives on Politics 16, no 1 (2018): 141–148. https://doi.org/10.1017/S153759271700336X.
Morse, Stephen J. “Actions Speak Louder than Images: The use of Neuroscientific Evidence in Criminal Cases.” Journal of Law and the Biosciences 3, no 2 (2016): 336–342. https://doi.org/10.1093/jlb/lsw025.
Munafò, Marcus R., Brian A. Nosek, Dorothy V. M. Bishop, Katherine S. Button, Christopher D. Chambers, Nathalie Percie du Sert et al. “A Manifesto for Reproducible Science.” Nature Human Behaviour 1, no 1 (2017): 1–9. https://doi.org/10.1038/s41562-016-0021.
Murray, Ian and Natalie Skead. “‘Who Publishes Where?’: Who Publishes in Australia’s Top Law Journals and Which Australians Publish in Top Global Law Journals.” University of Western Australia Law Review 47, no 2 (2020): 220–282.
National Academies of Sciences, Engineering, and Medicine. “Open Science by Design: Realizing a Vision for 21st Century Research.” (2018). https://doi.org/10.17226/25116.
Necker, Sarah. “Scientific Misbehavior in Economics.” Research Policy 43, no 10 (2014): 1747–1759. https://doi.org/10.1016/j.respol.2014.05.002.
Nettheim, G. “The Principle of Open Justice.” Tasmanian Law Review 8, no 1 (1984): 25–45.
New South Wales Law Reform Commission. Open Justice Court and Tribunal Information: Access, Disclosure, and Publication (Consultation Paper 22, December 2020).
Newman, Greg, Andrea Wiggins, Alycia Crall, Eric Graham, Sarah Newman and Kevin Crowston. “The Future of Citizen Science: Emerging Technologies and Shifting Paradigms.” Frontiers in Ecology and the Environment 10, no 6 (2012): 298–304. https://doi.org/10.1890/110294.
Nosek, Brian A. “Shifting the Research Culture Toward Openness and Reproducibility” (2021). https://osf.io/3ztr9/.
Nosek, Brian A. and Timothy M. Errington. “What is Replication?” PLoS Biology 18, no 3 (2020): e3000691. https://doi.org/10.1371/journal.pbio.3000691.
Nosek, Brian A., Charles R. Ebersole, Alexander C. DeHaven and David T. Mellor. “The Preregistration Revolution.” Proceedings of the National Academy of Sciences 115, no 11 (2018): 2600–2606. https://doi.org/10.1073/pnas.1708274114.
Nosek, Brian A., G. Alter, G. C. Banks, D. Borsboom, S. D. Bowman, S. J. Breckler, S. Buck et al. “Promoting an Open Research Culture.” Science 348, no 6242 (2015): 1422–1425. https://doi.org/10.1126/science.aab2374.
Open Science Collaboration. “Estimating the Reproducibility of Psychological Science.” Science 349, no 6251 (2015): 943. https://doi.org/10.1126/science.aac4716.
Page, Matthew J., David Moher, Patrick M. Bossuyt, Isabelle Boutron, Tammy C. Hoffmann, Cynthia D. Mulrow, Larissa Shamseer et al. “PRISMA 2020 Explanation and Elaboration: Updated Guidance and Exemplars for Reporting Systematic Reviews.” BMJ 372 (2021). https://doi.org/10.1136/bmj.n160.
Paluck, Elizabeth L., Roni Porat, Chelsey S. Clark and Donald P. Green. “Prejudice Reduction: Progress and Challenges.” Annual Review of Psychology 72 (2021): 533–560. https://doi.org/10.1146/annurev-psych-071620-030619.
Pasquali, Matias. “Video in Science: Protocol Videos: The Implications for Research and Society.” EMBO Reports 8, no 8 (2007): 712–716. https://doi.org/10.1038/sj.embor.7401037.
Pechar, Emily, Thomas Bernauer and Frederick Mayer. “Beyond Political Ideology: The Impact of Attitudes Towards Government and Corporations on Trust in Science.” Science Communication 40, no 3 (2018): 291–313. https://doi.org/10.1177%2F1075547018763970.
Peer, Eyal, Joachim Vosgerau, and Alessandro Acquisti. “Reputation as a Sufficient Condition for Data Quality on Amazon Mechanical Turk.” Behavior Research Methods 46, no 4 (2014): 1023–1031. https://doi.org/10.3758/s13428-013-0434-y.
Perrier, Laure, Erik Blondal and Heather MacDonald. “The Views, Perspectives, and Experiences of Academic Researchers with Data Sharing and Reuse: A Meta-Synthesis.” PLoS ONE 15, no 2 (2020): e0229182. https://doi.org/10.1371/journal.pone.0229182.
Pownall, Madeleine, Catherine V. Talbot, Anna Henschel, Alexandra Lautarescu, Kelly Lloyd, Helena Hartmann, Kohinoor M. Darda et al. “Navigating Open Science as Early Career Feminist Researchers.” PsyArXiv. October 13, 2021. https://doi.org/10.31234/osf.io/f9m47.
PRISMA. “PRISMA 2009 Flow Diagram.” http://prisma-statement.org/documents/PRISMA%202009%20flow%20diagram.pdf.
PRISMA. “Registration.” http://www.prisma-statement.org/Protocols/Registration.
Qualitative Data Repository. “The Qualitative Data Repository.” https://qdr.syr.edu.
R Studio. “R Markdown.” https://rmarkdown.rstudio.com.
Rowhani-Farid, Anisa and Adrian G. Barnett. “Badges for Sharing Data and Code at Biostatistics: An Observational Study.” F1000Research 7, no 90 (2018). https://dx.doi.org/10.12688/f1000research.13477.1.
Schapira, Matthieu, Rachel J. Harding and Open Lab Notebook Consortium. “Open Laboratory Notebooks: Good for Science, Good for Society, Good for Scientists.” F1000Research 8 (2019). https://dx.doi.org/10.12688/f1000research.17710.2.
Scheel, Anne M., Mitchell R. M. J. Schijen and Daniël Lakens. “An Excess of Positive Results: Comparing the Standard Psychology Literature with Registered Reports.” Advances in Methods and Practices in Psychological Science 4, no 2 (2021): 1–12. https://doi.org/10.1177%2F25152459211007467.
Scheel, Anne M., Mitchell Schijen and Daniel Lakens. “An Excess of Positive Results: Comparing the Standard Psychology Literature with Registered Reports.” PsyArXiv. February 5, 2020. https://doi.org/10.31234/osf.io/p6e9c.
Sheehan, Kim Bartel and Matthew Pittman. Amazon’s Mechanical Turk for Academics: The HIT Handbook for Social Science Research. Irvine: Melvin & Leigh, Publishers, 2016.
Simmons, Joseph P., Leif D. Nelson and Uri Simonsohn. “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” Psychological Science 22, no 11 (2011): 1359–1366. https://doi.org/10.1177%2F0956797611417632.
Smaldino, Paul E. and Richard McElreath. “The Natural Selection of Bad Science.” Royal Society Open Science 3, no 9 (2016): 160384. https://doi.org/10.1098/rsos.160384.
Smedley, Richard M. and Neil S. Coulson. “A Practical Guide to Analysing Online Support Forums.” Qualitative Research in Psychology 18, no 1 (2021): 76–103. https://doi.org/10.1080/14780887.2018.1475532.
Smith, Tom W. “Developing Nonresponse Standards.” Survey Nonresponse (2001): 27.
Soderberg, Courtney K., Timothy Errington, Sarah Schiavone, Julia Bottesini, Felix Singleton Thorn, Simine Vazire, Kevin Esterling et al. “Initial Evidence of Research Quality of Registered Reports Compared to the Traditional Publishing Model.” MetaArXiv. November 16, 2020. https://doi.org/10.31222/osf.io/7x9vy.
Syed, Moin, and Kate C. McLean. “Disentangling Paradigm and Method Can Help Bring Qualitative Research to Post-positivist Psychology and Address the Generalizability Crisis.” PsyArXiv. March 30, 2021. https://doi.org/10.31234/osf.io/dvekh.
Tamminen, Katherine A. and Zoë A. Poucher. “Open Science in Sport and Exercise Psychology: Review of Current Approaches and Considerations for Qualitative Inquiry.” Psychology of Sport and Exercise 36 (2018): 17–28. https://doi.org/10.1016/j.psychsport.2017.12.010.
The American Economic Association. “The American Economic Association’s Registry for Randomized Controlled Trials.” https://www.socialscienceregistry.org/.
Thompson, Andrew J. and Justin T. Pickett. “Are Relational Inferences from Crowdsourced and Opt-In Samples Generalizable? Comparing Criminal Justice Attitudes in the GSS and Five Online Samples.” Journal of Quantitative Criminology (2019): 1–26. https://doi.org/10.1007/s10940-019-09436-7.
Tyler, Tom R. “Procedural Justice and the Courts.” Court Review 44, no 1–2 (2007): 26.
Uggen, Christopher and Michelle Inderbitzin. “Public Criminologies.” Criminology & Public Policy 9, no 4 (2010): 725–749. https://doi.org/10.1111/j.1745-9133.2010.00666.x.
Van Meter, Heather J. “Revising the DIKW Pyramid and the Real Relationship between Data, Information, Knowledge, and Wisdom.” Law, Technology and Humans 2, no 2 (2020): 69–80. https://doi.org/10.5204/lthj.1470.
van Rooij, Iris and Giosuè Baggio. “Theory Before the Test: How to Build High-Verisimilitude Explanatory Theories in Psychological Science.” Perspectives on Psychological Science (2020): 1745691620970604. https://doi.org/10.1177/1745691620970604.
Vazire, Simine. “Implications of the Credibility Revolution for Productivity, Creativity, and Progress.” Perspectives on Psychological Science 13, no 4 (2018): 411–417. https://doi.org/10.1177%2F1745691617751884.
Vazire, Simine. “Our Obsession with Eminence Warps Research.” Nature News 547, no 7661 (2017): 7. https://doi.org/10.1038/547007a.
Vazire, Simine and Alex O. Holcombe. “Where Are the Self-correcting Mechanisms in Science?” PsyArXiv. August 13, 2020. http://doi.org/10.31234/osf.io/kgqzt.
Vines, Timothy H., Arianne Y. K. Albert, Rose L. Andrew, Florence Débarre, Dan G. Bock, Michelle T. Franklin, Kimberly J. Gilbert et al. “The Availability of Research Data Declines Rapidly with Article Age.” Current Biology 24, no 1 (2014): 94–97. https://doi.org/10.1016/j.cub.2013.11.014.
Weatherall, Kimberlee and Rebecca Giblin. “Inoculating Law Schools Against Bad Metrics.” In Feminist Perspectives on Law, Law Schools and Law Reform: Essays in Honour of Professor Jill McKeough, edited by K. Bowrey. Sydney: Federation Press. Forthcoming. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3772437.
Webley, Lisa. “Qualitative Approaches to Empirical Legal Research.” The Oxford Handbook of Empirical Legal Research (2010): 927–950.
Whitbourn, Michaela. “Female High Court Judges ‘Far More Likely’ to Be Interrupted than Male Peers: Study.” The Sydney Morning Herald, February 5, 2020. https://www.smh.com.au/national/female-high-court-judges-far-more-likely-to-be-interrupted-than-male-peers-study-20200204-p53xjw.html.
Wilkinson, Mark D., Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg et al. “The FAIR Guiding Principles for Scientific Data Management and Stewardship.” Scientific Data 3, no 1 (2016): 1–9. https://doi.org/10.1038/sdata.2016.18.
Williams, Matthew L., Adam Edwards, William Housley, Peter Burnap, Omer Rana, Nick Avis, Jeffrey Morgan and Luke Sloan. “Policing Cyber-Neighbourhoods: Tension Monitoring and Social Media Networks.” Policing and Society 23, no 4 (2013): 461–481. https://doi.org/10.1080/10439463.2013.780225.
Zeiler, Kathryn. “The Future of Empirical Legal Scholarship: Where Might We Go from Here.” Journal of Legal Education 66, no 1 (2016): 78–99.
Primary Sources
Buckley v Valeo [1976] USSC 24; 424 US 1 (1976).
EU General Data Protection Regulation. “Recital 26 EU GDPR.” https://www.privacy-regulation.eu/en/recital-26-GDPR.htm.
[1] Epstein, “The Rules of Inference,” 6; Hall, “Systematic Content Analysis of Judicial Opinions,” 74.
[2] Nettheim, “The Principle of Open Justice,” 25–45.
[3] See Galloway, “The States Have Spoken,” 777–882; New South Wales Law Reform Commission, Open Justice Court and Tribunal Information.
[4] Tyler, “Procedural Justice and the Courts,” 26; ‘Sunlight is ... the best of disinfectants.’: Buckley v Valeo (1976), 67 (quoting Louis D Brandeis).
[5] Epstein, “The Rules of Inference,” 1–133; Zeiler, “The Future of Empirical Legal Scholarship,” 78–99; Bavli, “Credibility in Empirical Legal Analysis.”
[6] These are systematic content analyses of judicial decisions, survey methods, qualitative methods and changing research cultures.
[7] Vazire, “Implications of the Credibility Revolution,” 411–417; Angrist, “The Credibility Revolution in Empirical Economics,” 3–30. See also National Academies of Sciences, Engineering, and Medicine, Committee on Toward an Open Science Enterprise (NASEM), Open Science by Design, 107.
[8] Vazire, “Where Are the Self-Correcting Mechanisms in Science?”
[9] NASEM, Open Science by Design.
[10] Hardwicke, “Calibrating the Scientific Ecosystem,” 16.
[11] Vines, “The Availability of Research Data Declines Rapidly with Article Age,” 94–97.
[12] Nosek, “What is Replication?,” e3000691.
[13] Hardwicke, “Calibrating the Scientific Ecosystem,” 11–37.
[14] Funk, “Trust and Mistrust in Americans’ Views of Scientific Experts,” 24.
[15] NASEM, Open Science by Design, 107.
[16] Korobkin, “Empirical Scholarship in Contract Law: Possibilities and Pitfalls,” 1035.
[17] Diamond, “Empirical Marine Life in Legal Waters: Clams, Dolphins, and Plankton,” 805.
[18] Zeiler, “The Future of Empirical Legal Scholarship,” 78–99.
[19] Williams, “Policing Cyber-Neighbourhoods: Tension Monitoring and Social Media Networks,” 461–481.
[20] Albrecht, “Data Control and Surveillance in the Global TB Response: A Human Rights Analysis,” 107–123.
[21] Zeiler, “The Future of Empirical Legal Scholarship,” fn 34.
[22] Hunter, “Proposed Changes to the Tendency Rule: A Note of Caution,” 253–260; Uggen, “Public Criminologies,” 725–749.
[23] Gibbons, “Science’s New Social Contract with Society,” 81–84.
[24] Loughland, “Female Judges, Interrupted,” 822–851; Whitbourn, “Female High Court Judges ‘Far More Likely’.”
[25] Loughland, “Female Judges, Interrupted,” 844–845; Paluck, “Prejudice Reduction: Progress and Challenges,” 533–560.
[26] Jacobi, “Querying the Gender Dynamics of Interruptions,” 1–19.
[27] On the general importance of ELR and training young researchers, see Bell, “Empirical Research in Law,” 262–282.
[28] Indeed, one barrier to data sharing is a lack of knowledge of how to do it, see Perrier, “The Views, Perspectives, and Experiences of Academic Researchers with Data Sharing and Reuse: A Meta-Synthesis,” 13.
[29] See Open Science Collaboration (OSC), “Estimating the Reproducibility of Psychological Science.”
[30] Baker, “1,500 Scientists Lift the Lid on Reproducibility,” 452.
[31] John, “Measuring the Prevalence of QRPs,” 524–532; Necker, “Scientific Misbehavior in Economics,” 1747–1759.
[32] Simmons, “False-Positive Psychology,” 1359–1366.
[33] Fanelli, “Negative Results are Disappearing,” 891–904.
[34] Hardwicke, “Calibrating the Scientific Ecosystem,” 18.
[35] Christensen, “Open Science Practices are on the Rise.”
[36] Dickersin, “Recognizing, Investigating and Dealing with Incomplete and Biased Reporting of Clinical Research: From Francis Bacon to the WHO,” 532–538.
[37] Nosek, “The Preregistration Revolution,” 2600–2606.
[38] Pasquali, “Video in Science,” 712–716.
[39] This is because registries of studies, many of which may not yet be published, can be searched.
[40] Nosek, “The Preregistration Revolution,” 2600–2606.
[41] Christensen, “Open Science Practices are on the Rise”; Figure 1 is reproduced under a CC-BY Attribution 4.0 international license.
[42] van Rooij, “Theory Before the Test”; Flake, “Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them,” 456–465.
[43] Frankenhuis, “Open Science is Liberating and can Foster Creativity,” 439–447.
[44] Chambers, “What’s Next for Registered Reports?,” 187–189.
[45] Center for Open Science, “Registered Reports.”
[46] Scheel, “An Excess of Positive Results”; Figure 2 is reproduced under a CC-BY Attribution 4.0 international license.
[47] Fanelli, “Negative Results are Disappearing,” 891–904.
[48] OSC, “Estimating the Reproducibility of Psychological Science,” 943.
[49] Soderberg, “Research Quality of Registered Reports.”
[50] Center for Open Science, “The TOP Guidelines.”
[51] Christensen, “Open Science Practices are on the Rise.”
[52] Kidwell, “Badges to Acknowledge Open Practices,” e1002456.
[53] Kidwell, “Badges to Acknowledge Open Practices,” e1002456.
[54] Rowhani-Farid, “Badges for Sharing Data and Code at Biostatistics.”
[55] Equator Network, “Enhancing the QUAlity and Transparency Of health Research”; Aczel, “A Consensus-Based Transparency Checklist.”
[56] See Han, “A Checklist is Associated with Increased Quality of Reporting,” e0183591.
[57] Horbach, “The Changing Forms and Expectations of Peer Review,” 1–15.
[58] Nosek, “Promoting an Open Research Culture,” 1422–1425.
[59] Epstein, “The Rules of Inference,” 1–133.
[60] Zeiler, “The Future of Empirical Legal Scholarship,” 78–99.
[61] In science, see Smaldino, “The Natural Selection of Bad Science,” 160384; In law, see Zeiler, “The Future of Empirical Legal Scholarship,” 79–80, 87–98.
[62] Zeiler, “The Future of Empirical Legal Scholarship,” 78–79.
[63] Vazire, “Our Obsession with Eminence Warps Research,” 7.
[64] OSC, “Estimating the Reproducibility of Psychological Science.”
[65] Simmons, “False-Positive Psychology,” 1362.
[66] Nosek, “Promoting an Open Research Culture.”
[67] Center for Open Science, “TOP Factor.”
[68] The raw data can be found at The Authors, TOP Factor—Law Journals.xls, https://osf.io/58nxr/. The rating rubric can be found at Center for Open Science, TOP-factor-rubric.docx, https://osf.io/t2yu5/. Results are up to date as of December 27, 2020.
[69] See Bowrey, A Report into Methodologies Underpinning Australian Law Journal Rankings; Murray, “‘Who Publishes Where?’: Who Publishes in Australia’s Top Law Journals and Which Australians Publish in Top Global Law Journals,” 220–282.
[70] See Zeiler, “The Future of Empirical Legal Scholarship,” 78–99.
[71] Christensen, “Open Science Practices are on the Rise.”
[72] Force11, “Joint Declaration of Data Citation Principles.”
[73] Center for Open Science, “OSF Preregistration.”
[74] The American Economic Association, “The American Economic Association’s Registry for Randomized Controlled Trials.”
[75] Nosek, “The Preregistration Revolution,” 2600–2606.
[76] Center for Open Science, “Templates of OSF Registration Forms.”
[77] Vines, “The Availability of Research Data Declines Rapidly with Article Age,” 94–97.
[78] Klein, “A Practical Guide for Transparency in Psychological Science,” 6.
[79] NASEM, Open Science by Design, 28.
[80] Wilkinson, “The FAIR Guiding Principles,” 1–9.
[81] Center for Open Science, “Best Practices.”
[82] Google, “Dataset Search.”
[83] Morse, “Actions Speak Louder than Images: The Use of Neuroscientific Evidence in Criminal Cases,” 336–342.
[84] Van Meter, “Revising the DIKW Pyramid and the Real Relationship between Data, Information, Knowledge, and Wisdom,” 69–80.
[85] Nosek, “Shifting the Research Culture Toward Openness and Reproducibility.”
[86] Gelman, “Ethics and Statistics: Honesty and Transparency are not Enough,” 37–39.
[87] Klein, “A Practical Guide for Transparency in Psychological Science,” 2.
[88] Klein, “A Practical Guide for Transparency in Psychological Science,” 4.
[89] Smedley, “A Practical Guide to Analysing Online Support Forums,” 76–103.
[90] Albrecht, “Data Control and Surveillance in the Global TB Response: A Human Rights Analysis,” 107–123.
[91] Marsella, “Toward a ‘Global-Community Psychology’,” 1285–1287.
[92] Bannier, “The Open Brain Consent: Informing Research Participants and Obtaining Consent to Share Brain Imaging Data,” 1945–1951.
[93] Gow, “Participation in Patient Support Forums May Put Rare Disease Patient Data at Risk of Re-Identification,” 1–12.
[94] See, e.g., EU, “General Data Protection Regulation,” R 26.
[95] Center for Open Science, “Approved Protected Access Repositories.”
[96] Meyer, “Practical Tips for Ethical Data Sharing.”
[97] R Studio, “R Markdown”; Jupyter, “Jupyter.”
[98] Aczel, “A Consensus-Based Transparency Checklist.”
[99] Equator Network, “Enhancing the QUAlity and Transparency of Health Research.”
[100] NASEM, Open Science by Design, 66.
[101] Fu, “Meta-Research: Releasing a Preprint is Associated with More Attention and Citations for the Peer-Reviewed Article,” e52646.
[102] Flake, “Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them,” 456–465.
[103] Krause, “Trends—Americans’ Trust in Science and Scientists,” 817–836; Pechar, “Beyond Political Ideology: The Impact of Attitudes Towards Government and Corporations on Trust in Science,” 291–313.
[104] Newman, “The Future of Citizen Science: Emerging Technologies and Shifting Paradigms,” 298–304.
[105] Hall, “Systematic Content Analysis of Judicial Opinions,” 63–122.
[106] Epstein, “The Rules of Inference,” 38–45, urged researchers to use more reproducible practices for this reason. We agree and will try to make this general exhortation more concrete.
[107] Page, “PRISMA 2020 Explanation and Elaboration: Updated Guidance and Exemplars for Reporting Systematic Reviews.”
[108] Chin, “The Biases of Experts,” 21.
[109] Chin, “The Biases of Experts,” fn 85.
[110] Frankenhuis, “Open Science is Liberating and can Foster Creativity,” 441.
[111] Hardwicke, “Calibrating the Scientific Ecosystem Through Meta-Research,” 11–37.
[112] PRISMA, “PRISMA 2009 Flow Diagram.”
[113] Morse, “Actions Speak Louder than Images: The Use of Neuroscientific Evidence in Criminal Cases,” 336–342.
[114] For judicial officers, see Appleby, “Contemporary Challenges Facing the Australian Judiciary,” 299–369; for bankrupts, see Ali, “Bankruptcy and Debtor Rehabilitation,” 688–737.
[115] Kleck, “What Methods are Most Frequently Used in Research in Criminology and Criminal Justice?,” 147–152.
[116] See, e.g., Nardi, “Doing Survey Research.”
[117] American Association of Public Opinion Research, “Survey Disclosure Checklist.”
[118] Dillman, Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method.
[119] Thompson, “Are Relational Inferences from Crowdsourced and Opt-In Samples Generalizable,” 1–26.
[120] Smith, “Developing Nonresponse Standards,” 27.
[121] Smith, “Developing Nonresponse Standards,” 39.
[122] The correct calculation is as follows: cumulative response rate = recruitment rate x panel profiling rate x completion rate.
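To illustrate with hypothetical figures: a recruitment rate of 10%, a panel profiling rate of 50% and a completion rate of 60% would yield a cumulative response rate of 0.10 × 0.50 × 0.60 = 0.03, or 3%.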
[123] American Association of Public Opinion Research, “Standard Definitions.”
[124] Sheehan, Amazon’s Mechanical Turk for Academics.
[125] Peer, “Reputation as a Sufficient Condition for Data Quality on MTurk,” 1023–1031.
[126] Simmons, “False-Positive Psychology,” 1359–1366.
[127] Webley, “Qualitative Approaches to Empirical Legal Research,” 927–950.
[128] Christians, “Case Study Research and International Tax Theory.”
[129] Webley, “Qualitative Approaches to Empirical Legal Research,” 929–931.
[130] Tamminen, “Open Science in Sport and Exercise Psychology,” 17–28.
[131] Syed, “Disentangling Paradigm and Method Can Help Bring Qualitative Research to Post-positivist Psychology and Address the Generalizability Crisis.”
[132] Christians, “Case Study Research and International Tax Theory,” 336–337. See also Webley, “Qualitative Approaches to Empirical Legal Research,” 927–950.
[133] Christians, “Case Study Research and International Tax Theory,” 362.
[134] Christians, “Case Study Research and International Tax Theory,” 359: ‘These authors—perhaps like many legal scholars—used their discussions with these individuals to better understand the studied subject or to construct theories about the studied subject, but they did not cite to the primary source of data—namely, notes from interviews or e-mail correspondence’.
[135] Haven, “Preregistering Qualitative Research,” 229–244.
[136] Haven, “Preregistering Qualitative Research: A Delphi Study.”
[137] Schapira, “Open Laboratory Notebooks.”
[138] Qualitative Data Repository, “The Qualitative Data Repository.”
[139] Monroe, “The Rush to Transparency,” 141–148.
[140] Branney, “A Context-Consent Meta-Framework for Designing Open (Qualitative) Data Studies,” 483–502.
[141] Anderson, “Normative Dissonance in Science: Results from a National Survey of US Scientists,” 3–14.
[142] Anderson, “Normative Dissonance in Science: Results from a National Survey of US Scientists,” 3–14.
[143] Center for Open Science, “TOPMixedLevelsJournals.gdoc.”
[144] Center for Open Science, “Registered Reports.”
[145] Kidwell, “Badges to Acknowledge Open Practices,” e1002456.
[146] Claremont McKenna College’s Program on Empirical Legal Studies (PELS), “Call for Papers.”
[147] Loughland, “Female Judges, Interrupted,” 822–851; Jacobi, “Querying the Gender Dynamics of Interruptions,” 1–19.
[148] Diamond, “Observations on Moving Forward,” 1229.
[149] Moher, “The Hong Kong Principles,” e3000737. In law, see Weatherall, “Inoculating Law Schools Against Bad Metrics”; Barnett, “Citation as a Measure of Impact: Female Legal Academics at a Disadvantage.”
[150] Pownall, “Navigating Open Science as Early Career Feminist Researchers.”
[151] Center for Open Science, “Universities.”
[152] Center for Open Science, “Universities.”
[153] Epstein, “The Rules of Inference,” 120.
[154] Holcombe, “Farewell Authors, Hello Contributors,” 147–148.
[155] Frankenhuis, “Open Science is Liberating and can Foster Creativity,” 439–447.
[156] NASEM, Open Science by Design, 129–30.
[157] The CRediT statement was generated using the tenzing app: Holcombe, “Documenting Contributions to Scholarly Articles Using CRediT and Tenzing.”