JASP is a free, open-source alternative to SPSS that allows you to perform both simple and complex analyses in a user-friendly package. The aim is to allow you to conduct both classical statistics (the stuff with p values) and Bayesian statistics (outlined in section 8), while having the advantage of a drag-and-drop interface that is intuitive to use.
JASP is still in development with new features being added almost on a monthly basis. This means you should constantly be checking their Twitter (@JASPStats) or Facebook (JASPStats) accounts to see if there is a new version available. This guide currently supports the features available in version 0.8.6 (as of February 28, 2018). If this is slightly out of date and there is a new feature you are confused about, feel free to email me and remind me to update it.
Although many universities predominantly use SPSS, it is extremely expensive which means you probably cannot use it unless you are affiliated with a university, and even then the licensing means it is often a nightmare to use on your own computer. JASP is a free, open-source alternative that aims to give you a simple and user-friendly output, making it ideal for students who are still getting to grips with statistics in psychology. Here are just a few benefits of using JASP:
1.3.1 Effect sizes
Effect sizes are one of the most important values to report when analysing data. However, despite many articles and an APA task force (1999...no one ever listens) explaining their importance, SPSS offers only a limited number of effect size options, and many simple effect sizes have to be calculated by hand. JASP, on the other hand, allows you to simply tick a box to get an effect size for each test, and it even provides multiple options for some statistical tests.
1.3.2 Continuously updated output
Imagine you have gone through all of the menus in SPSS, only to realise you forgot to tick one option you wanted included in the output. You would have to go back through the menus, select that option, and rerun the whole analysis, which prints a second output below the first. This looks incredibly messy and takes a lot of time. In JASP, all of the options and results are presented on the same screen. If you want another option to be included, all you have to do is tick a box and the results are updated in seconds.
1.3.3 Minimalist design
For each statistical test, SPSS provides every value you will ever need and more. This can be very confusing when you are getting to grips with statistics, and it is easy to report the wrong value as SPSS also has its own naming conventions (e.g. 'Sig.' instead of p value). In JASP, the aim is minimalism. You start off with the bare-bones result and have the option to select additional information if and when you need it.
1.3.4 Reproducible analyses
The large number of errors reported in psychological research has led to calls to improve the reproducibility of research findings (Munafò et al. 2017). Reproducibility means that you can show someone exactly how you arrived at the results you included in your report. In JASP, you can save your data and analyses together as a .jasp file. This preserves the analyses you performed so you can show yourself (thinking of your future self is probably the most important factor, as even you will probably forget which options you selected) and others months or years after conducting them. In SPSS, you can save the output file, but this relies on you reverse engineering all of the options you selected, leaving room for error if you miss one. SPSS also creates unnecessary barriers to accessing data, as you cannot open .sav files without a valid SPSS licence. Your data would therefore not be accessible to anyone who does not have access to SPSS.
1.4.1 How to download JASP
JASP can be downloaded for free from their website for Windows, OSX, or Linux (if that's your thing). Installing it should be pretty straightforward; just follow the instructions it provides. After installing and opening the program, you will find the "Welcome to JASP" window shown in Figure 1.
Figure 1: The JASP startup window
1.4.2 Entering data in JASP
The first difference you will find between SPSS and JASP is how you enter data. In SPSS, you enter data manually through the Data View screen. In JASP, there is currently no facility to enter data directly; you have to load data that has been created in a different program. JASP currently supports SPSS .sav files (useful if you already have data in SPSS, as you do not need SPSS to view or analyse it), Excel .csv files, and Open Office .ods files. If your data is a normal Excel workbook (.xlsx), you first have to convert it to a .csv file. If you need to create a .csv file from a .xlsx file, here is a useful link that explains how to create one.

One feature in JASP is what they call 'data synchronisation'. Although the data is still hosted as a .csv or a .sav file, you can double click on a data point in JASP and it will open the data file in either Excel, SPSS, or Open Office (the only downside to this is that, if you no longer have access to SPSS, you would not be able to edit the data in a .sav file).
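If you would rather script the conversion than use Excel's Save As dialog, a minimal sketch using Python and pandas (the file name here is just a placeholder) could look like this:

```python
# Convert an Excel workbook (.xlsx) into a .csv file that JASP can open.
# "my-data.xlsx" is a placeholder; swap in the name of your own workbook.
import pandas as pd

data = pd.read_excel("my-data.xlsx")     # reading .xlsx files also requires the openpyxl package
data.to_csv("my-data.csv", index=False)  # index=False stops pandas adding an extra row-number column
```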
Figure 3: An empty window with a data file loaded
2 Guide Organisation
This guide currently covers three basic statistical tests: T-Tests, correlations, and ANOVA. The first part of the guide focuses on how these can be analysed using the classical approach to statistics through Null Hypothesis Significance Testing (NHST). The Bayesian equivalent of the T-Test is then introduced in section 8. Throughout the guide, the aim is to demonstrate how basic analyses that you may be familiar with performing in other statistical packages can be performed in JASP, and to offer some practical recommendations. There are some digressions into topics that are not usually discussed in standard textbooks, such as choosing between the Student and Welch's T-Test, or the different types of effect size for ANOVA. However, the main focus is on the process of performing the analyses, not on the rationale and background to using them. If you are unfamiliar with any of the tests, there are other more comprehensive sources that will act as a guide (e.g. Field 2011; Baguley 2012).

The data for all of the examples are from real published research and were made available on the Open Stats Lab (McIntyre 2016). For the classical examples, all of the analyses you are going to perform are the same as those performed in the original research. We will then take another look at some of the studies to see how they can be analysed using Bayesian statistics. Some of the data sets have been modified slightly to remove or recalculate some variables. The data sets used throughout this guide can be found on the Open Science Framework. To download all of the data sets together, click on the Data folder and select download as zip. Alternatively, you can download whichever data set you need for each example.
3 Independent Samples T-test
The first example we are going to look at is from a study by Schroeder and Epley (2015). The aim of the study was to investigate whether delivering a short speech to a potential employer would be more effective at landing you a job than writing the speech down and having the employer read it themselves. Thirty-nine professional recruiters were randomly assigned to receive a job application speech either as a transcript for them to read, or as an audio recording of the applicant delivering the speech. The recruiters then rated the applicants on perceived intellect, their impression of the application, and whether they would recommend hiring the candidate. All ratings were on a Likert scale ranging from 0 (low intellect, impression, etc.) to 10 (high intellect, impression, etc.).
3.2.1 Loading the data
Firstly, we need to open the data file for this example. Look back at section 1.4 on how to open a .csv file, and open Schroeder-Epley-data.csv from the folder you downloaded whilst reading section 2.
Figure 4: Changing factor labels
3.2.2 Getting a first look at the data
From the window in Figure 3, click on the Descriptives tab > Descriptive Statistics to find the new window in Figure 5. From here, we can take a look at the data by ticking the box ’display boxplots’ and dragging all three of our dependent variables into the white box to the right of the full list of variables. This will fill the table in the far right screen with the data for the three dependent variables and provide you with three boxplots. However, this only provides you with the descriptive statistics for the whole sample. This is not very informative as we had two independent groups: one for those provided the transcripts, and one for those provided the audio recordings.
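The guide goes on to split the descriptives by condition (Figure 6). If you ever want to reproduce that kind of grouped summary outside JASP, a minimal pandas sketch might look like the following (the condition and rating column names are illustrative placeholders, not the exact names in the file):

```python
# Descriptive statistics split by group, mirroring the split-by-condition table JASP produces.
# Column names below are placeholders for the Schroeder-Epley data file.
import pandas as pd

df = pd.read_csv("Schroeder-Epley-data.csv")
ratings = ["Intellect", "Impression", "Hire"]        # placeholder DV column names
print(df.groupby("Condition")[ratings].describe())   # mean, SD, quartiles, etc. per condition
```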
Figure 6: Boxplots and descriptive statistics split by condition
go back through the menus and click a range of options. The new analysis just appears below the old one (which is now greyed out but still visible if you scroll up). We can drag all of the dependent variables into the Dependent Variables box and drag Condition into Grouping Variable. We now have a range of options to click, and the temptation to start looking at T-Tests is almost irresistible. All we have to do is look at a few more tables and then we are ready to go. On the menu below the list of variables, click on Normality and Equality of variances under the Assumption Checks heading, and also click Descriptives under Additional Statistics. You should now get something that looks like Figure 7. Another useful design feature in JASP is that the tables are formatted in APA style, so you can easily copy and paste them providing the variables have appropriate names.

First, we will look at the Shapiro-Wilk test, which assesses the assumption of normality. The idea behind it is that the assumption of normality is the null hypothesis of the test. Therefore, if you get p < .05, the data do not come from a normal distribution and the assumption of normality is violated. This test, and a similar one called the Kolmogorov-Smirnov test, can also be found in SPSS. Although the Shapiro-Wilk test is generally considered to be the better of the two, there are issues with using either, and assessing normality visually, for example using Q-Q plots (not available here in JASP, but they can be produced in SPSS), is highly recommended (if you are interested in learning more, consider reading this). As you selected all three DVs, the Shapiro-Wilk table reports the test for each one and is divided by condition as we have independent groups. As we can see in each row, none of the tests are significant, so we can (tentatively) conclude that the assumption of normality has not been violated in this example.

Secondly, we will look at Levene's test for the assumption of equal variances (homogeneity of variance). You should have come across this test previously; it uses a similar logic to the Shapiro-Wilk test. The null hypothesis is that the variances are equal between the groups, so a sufficiently large difference in variance between the groups will be indicated by a significant result. This test is also heavily criticised for reasons similar to those for the Shapiro-Wilk test, so any conclusion you make should be in conjunction with a careful look at the data using plots (scroll up to the boxplots: is the variance roughly the same for each condition?). In this case, Levene's test suggests that the assumption of equal variances has not been violated and we can continue.
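If you ever want to cross-check these assumption tests outside JASP, a minimal sketch using SciPy could look like this (again, the condition and rating column names are placeholders):

```python
# Shapiro-Wilk within each group and Levene's test across groups, as a cross-check of the JASP tables.
# Column names are illustrative placeholders for the Schroeder-Epley data file.
import pandas as pd
from scipy import stats

df = pd.read_csv("Schroeder-Epley-data.csv")
audio = df.loc[df["Condition"] == "audio", "Intellect"]
transcript = df.loc[df["Condition"] == "transcript", "Intellect"]

print(stats.shapiro(audio))                             # normality within the audio group
print(stats.shapiro(transcript))                        # normality within the transcript group
print(stats.levene(audio, transcript, center="mean"))   # center="mean" gives the classic Levene's test
```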
Figure 7: Assessing normality and homogeneity of variance
This is the moment you have been waiting for. After all the visualising and checking, you want to finally look at some inferential statistics. We can stay on the same analysis page as Figure 7, as most of the results are already there; we were just ignoring them temporarily and need a few more options. On the menu section, Student should be selected under Tests by default, but we also want to select Welch. Under Additional Statistics, we also want to select Mean difference and Effect size. If you really want to tidy things up, you could always untick both of the boxes under Assumption Checks. Remember that JASP updates automatically, so you can select the information when and if you need it. You should have a window that looks like Figure 8.

Looking at the Independent Samples T-Test table, we have all the information we want (and, in contrast to SPSS, nothing more than we need). We have both a Student T-Test (this produces the same result as SPSS) and a Welch T-Test (Welch's T-Test should arguably be the default option, but see appendix section 11.1 for more information). Remember what the boxplots and descriptive statistics showed us: participants who were provided with audio recordings gave higher ratings than those provided with transcripts. We can now support this using the T-Test results for intellect, impression, and hiring recommendation, but we will only go through the intellect result here. In published articles, T-Tests should be reported in the standard format of: t(df) = t statistic, p value, effect size. For intellect, we would write the Student T-Test result up as t(37) = 3.53, p = .001, Cohen's d = 1.13.

As we selected the mean difference, we also get the unstandardised (or simple) effect size between the two conditions, which simply tells us the difference between the intellect means in our two conditions. It shows that those in the transcript condition rated the applicant 1.99 points lower on our scale, on average, than those in the audio recording condition. This makes sense in our example, but if another study were performed using a different scale, the mean differences in the two studies would not be comparable. This is where Cohen's d comes in. It is a standardised effect size that expresses the difference between two conditions in terms of standard deviations. In our example, those in the transcript condition rated the applicant 1.13 standard deviations lower on average than those in the audio recording condition. As this is a standardised unit, we would be able to compare it to other studies that used a different scale. To interpret this result, we can look at the guidelines Cohen (1988) originally suggested: effects can be considered small (±0.2), medium (±0.5), or large (±0.8). However, this was only ever meant as a heuristic, and it is important that you compare the effects to those typically found in your own area of research.
singer. We are therefore interested in whether the infants increased the proportion of time spent looking at the singer who sang the familiar song after they sang, in comparison to before they sang to the infants.
The first thing we need to do is load a new data file. Go back to the folder you downloaded at the beginning of section 2 and open Mehr-study1-data.csv. Think about the process we went through in the first example to explore the data, and then about how you are going to analyse it. The conditions of interest are called 'Baseline_Proportion_Gaze_to_Singer' and 'Test_Proportion_Gaze_to_Singer'. As this study uses a repeated measures design, we will need a Paired Samples T-Test, but the rest of the procedure is the same as for the Independent Samples T-Test.
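For readers who like to verify results outside JASP, here is a minimal sketch of both kinds of T-Test, plus Cohen's d, using SciPy. The condition and rating column names for the Schroeder and Epley file are placeholders; the two gaze columns are the ones named above.

```python
# Cross-check of the Student, Welch, and paired T-Tests, plus Cohen's d from the pooled SD.
import numpy as np
import pandas as pd
from scipy import stats

# Independent samples example (Schroeder & Epley). Column names here are placeholders.
df1 = pd.read_csv("Schroeder-Epley-data.csv")
audio = df1.loc[df1["Condition"] == "audio", "Intellect"]
transcript = df1.loc[df1["Condition"] == "transcript", "Intellect"]
print(stats.ttest_ind(audio, transcript))                   # Student T-Test (equal variances assumed)
print(stats.ttest_ind(audio, transcript, equal_var=False))  # Welch T-Test (no equal-variance assumption)

# Cohen's d: mean difference divided by the pooled standard deviation.
n1, n2 = len(audio), len(transcript)
pooled_sd = np.sqrt(((n1 - 1) * audio.var(ddof=1) + (n2 - 1) * transcript.var(ddof=1)) / (n1 + n2 - 2))
print((audio.mean() - transcript.mean()) / pooled_sd)

# Paired samples example (Mehr et al.), using the column names given above.
df2 = pd.read_csv("Mehr-study1-data.csv")
print(stats.ttest_rel(df2["Test_Proportion_Gaze_to_Singer"],
                      df2["Baseline_Proportion_Gaze_to_Singer"]))
```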
5 Pearson’s Correlation
Now that we have seen how to run T-Tests, the next test to perform is a simple correlation between two continuous variables. Correlations allow us to assess the degree of relationship between two variables. The first example we are going to work with is from Beall, Hofer, and Schaller (2016), who investigated whether the outbreak of infectious diseases can influence voting behaviour. They were specifically interested in the emergence of the Ebola virus and whether it was associated with support for a more conservative candidate over a liberal candidate in the US Federal elections.

There are two variables of interest that we are going to investigate: the frequency of Google searches for the Ebola virus, and political support for either a conservative or liberal candidate in the 2014 US Federal elections. The first variable is called Daily.Ebola.Search.Volume and is the search volume for particular topics in a geographical region. The topic with the highest search volume on a particular day is scored 100, and all other topics are expressed as a percentage of that value. Therefore, the closer the value is to 100, the more people Googled the Ebola virus on a specific day. The second variable is called Voter.Intention.Index. This was calculated by subtracting the percentage of voters who intended to support a liberal candidate in the election from the percentage of voters who intended to support a conservative candidate. Therefore, positive values indicate greater support for conservative candidates and negative values indicate greater support for liberal candidates.
Start by loading Beall-Hofer-Shaller-data.csv into JASP. We are going to start again by looking at the descriptive statistics. Enter both variables listed above into the empty white box seen in Figure 5 and used in the previous examples.
We can now start to think about whether a parametric correlation is appropriate for our data. We want both variables to be measured on a continuous scale, we want the measurements to come in pairs, and we want there to be no outliers. We will consider each of these in turn. Both of the variables are measured on a continuous scale. Remember to check that JASP has correctly worked out what type of data each variable is by looking at the top of the columns in the data view screen, as seen in Figure 3. We want both variables to have a little ruler icon at the top of the column, which they both should have.
Next, we want both variables to be in pairs. Here we run into a small problem. If you looked closely when you opened the data, or when you were exploring the descriptive statistics, you might have noticed that we have 65 rows of data for the Ebola search index but only 24 rows for the voter intention index. This makes sense, as the data is split into days and the voter intention index is based on polling data. We do not have polling data for every day, so the correlation will only be based on the rows where we have both a voter intention index and an Ebola Google search volume. This leaves us with 24 complete pairs of data to analyse.

Finally, we want there to be no outliers. Outliers are extremely problematic for correlations, as they can bias the r value. We can look back at the boxplots you created during the tasks in section 5.2.1. The data look fine, so we are all set to go ahead and calculate some correlations.
We will be using a different analysis tab than the one we used for the T-Tests. Firstly, click on Regression > Correlation Matrix to open a new analysis window. The next thing we want to do is drag both of the variables into the empty white box, as we did for the descriptive statistics. This will fill the table with two numbers: one for Pearson's r (the correlation coefficient) and one for the p value. We also want to tick the box to plot the correlation matrix so we can visualise the relationship. This is incredibly important for correlations, as radically different distributions can produce similar correlation values. This will produce the screen you can see in Figure 9.

One of the first things you might notice in comparison to SPSS is that you only get one set of numbers in the correlation table. SPSS gives you the full matrix of every combination of your variables (including correlating each variable with itself). For two variables, JASP just provides you with what you need to know: the correlation coefficient and the p value. Simple. We could write the result up like this: "The correlation between daily Ebola search volume and voter intention was small and non-significant, Pearson's r(22) = .17, p = .430." We have the (22) after r as that is the degrees of freedom for a correlation. JASP does not provide it directly, but it is the number of observations in the analysis minus two (we only have 24 matching pairs of data, so 24 − 2 = 22).
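As a quick sanity check on the degrees of freedom and the handling of the missing polling days, the same correlation could be computed outside JASP with a sketch like this (the two column names are the ones given above; the file name is as written in this section):

```python
# Pearson's r on the complete pairs only, mirroring how JASP drops the days without polling data.
import pandas as pd
from scipy import stats

df = pd.read_csv("Beall-Hofer-Shaller-data.csv")
pairs = df[["Daily.Ebola.Search.Volume", "Voter.Intention.Index"]].dropna()  # keep the 24 complete pairs

r, p = stats.pearsonr(pairs["Daily.Ebola.Search.Volume"], pairs["Voter.Intention.Index"])
print(r, p, len(pairs) - 2)  # r, p value, and the degrees of freedom (n - 2)
```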
Figure 9: Correlation matrix and scatter plot
Figure 10: Descriptive statistics and displaying a simple line plot
Looking at the Q-Q plot for our data, there are some outliers in the top right corner and the data is skewed, as it snakes around the line (skew could also be investigated by creating histograms for each condition in descriptive statistics, and this would lead you to approximately the same decision). One option would be to try to transform the data to make it more normal. A common "get out of jail free card" that people use when running an ANOVA is that it is "robust to violations of parametric assumptions". This is only partially true: it applies when the sample size is equal in each group (as it is in this case), and even then it requires the sample to be large enough to be approximately normal. For our purposes in demonstrating how to use ANOVA in JASP, we will continue with the data in its present form, but note that if you were analysing these data for real, it would be more appropriate to explore transforming the data or to consider a non-parametric equivalent.
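If you did want to explore a transformation, here is a minimal sketch of the idea using made-up counts rather than the real data; a log transform is a common first attempt for positively skewed counts:

```python
# Illustration only: a log transform to reduce positive skew, checked with Shapiro-Wilk.
# The counts below are invented; substitute the skewed variable from your own data set.
import numpy as np
from scipy import stats

intrusions = np.array([0, 1, 1, 2, 2, 3, 4, 5, 7, 9, 12, 15])  # illustrative skewed counts
transformed = np.log1p(intrusions)                             # log(1 + x) copes with zero counts

print(stats.shapiro(intrusions))    # before the transform
print(stats.shapiro(transformed))   # after the transform
```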
6.2.2 Main ANOVA results
Now it is time to look at the main ANOVA results. The main table can be viewed in Figure 12. The only additional options that have been selected are the measures of effect size that JASP provides. Eta squared (η²) tells you how much of the variance in your DV is explained by your IV. You can also select partial eta squared (ηp²), which in the case of a one-way ANOVA is exactly the same as eta squared. SPSS only provides partial eta squared as a measure of effect size. The difference between the two measures appears when you add additional factors to create a factorial ANOVA. Eta squared tells you how much variance is explained by your IV as a proportion of the total variance. Therefore, in a factorial ANOVA, factor 1 may explain 7% of the variance in your DV and factor 2 may explain 12%; the remainder is explained by the interaction and error, but together they add up to 100%. However, this may not be entirely useful for comparing values across studies, as each study will have a different amount of total variance. Partial eta squared is the amount of variance explained by the IV relative to the variance explained by that IV plus its associated error (the variance it cannot explain), with all the other IVs partialled out (hence partial eta squared). This explains why the two are the same when you only have one IV, as there is only the variance explained by that one IV and the error. We also have omega squared (ω²), which is slightly smaller than eta squared. This provides the same information, the proportion of variance explained by the IV, but it also corrects for the bias due to it being an estimate from a sample rather than the population. As the sample size increases, the difference between eta squared and omega squared decreases. As there are only 18 participants in each group, the effect size is 27% smaller in this example.
Figure 11: Assessing ANOVA parametric assumptions using Levene’s test and Q-Q plots
There is even a partial version of omega squared, but unfortunately it is not available in JASP. Further information on different types of effect size can be found in Lakens (2013). As we only have one IV, the most appropriate choice is to report omega squared, as it is less biased by being estimated from a sample.

After that little digression, we can focus on what the table is telling us for the main ANOVA results. This is very straightforward: there is a significant effect of condition on the number of intrusive memories. We can also see that condition explains 10.4% of the variance in the number of intrusions if we use the less biased omega squared. This could be written as "there was a significant effect of condition on the number of intrusive memories, F(3, 68) = 3.80, p = .01, ω² = 0.10". This is all well and good, but we typically do not want to stop there: what we really want to know is how our conditions differed, and which one resulted in the lowest number of intrusive memories. We have two options for this: we can use planned contrasts if we had specific predictions, or we can use post-hoc tests to examine every possible comparison.
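For reference, the three effect sizes discussed above have standard definitions in terms of the ANOVA sums of squares (these are the textbook formulas, e.g. Lakens 2013, not anything specific to JASP):

$$\eta^2 = \frac{SS_{\text{effect}}}{SS_{\text{total}}}, \qquad \eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}, \qquad \omega^2 = \frac{SS_{\text{effect}} - df_{\text{effect}} \times MS_{\text{error}}}{SS_{\text{total}} + MS_{\text{error}}}$$

You can see from these formulas why eta squared and partial eta squared coincide for a one-way ANOVA (with one IV, the total variance is just the effect plus its error), and why omega squared always comes out a little smaller than eta squared.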
6.2.3 Planned contrasts
If we have specific predictions and we only want to examine a subset of all the possible comparisons, we can use planned contrasts. You may remember that SPSS provides an option to set up your own contrasts using coefficients, or you can select from a range of existing contrasts. There is slightly less flexibility in JASP, as you can only select from a range of contrasts and cannot define your own. For this example, one possibility would have been to predict that the combination of both treatments would be the most effective (as per the original article), and you could use this as the standard against which to compare all the other conditions. For this to work, we need to slightly rearrange the data, which can be done by double clicking on the Condition header to rename the factor labels as in Figure 4. Both treatments is currently in second place, but we can rearrange this by selecting it and clicking the up arrow to move it into first place. If we then select the Simple contrast under Contrasts (click none and select simple), it uses both treatments as the condition against which all the other conditions are compared.

Looking back at Figure 12, we can see that the both treatments condition results in significantly fewer intrusions than both the control condition and reactivation on its own. However, it does not produce significantly fewer intrusions than Tetris on its own. This could be written as "planned contrasts showed that both treatments produced fewer intrusive memories than both the control condition (t(34) = 3.04, p = .003) and reactivation in isolation (t(34) = 2.78, p = .007). On the other hand, there was not a significant difference
Figure 13: Main ANOVA results and post-hoc tests
the amount of rotation (in degrees) they were able to move their head. They then created two artificial conditions in which their actual movements resulted in either less movement through the goggles (understated visual feedback) or more movement (overstated visual feedback). Therefore, through the participants' eyes, they perceived either more or less movement without realising it was being manipulated. This was measured as a proportion of the movement they were able to perform during the baseline measure. Therefore, values less than one mean that they moved less than at baseline, and values greater than one mean that they moved more than at baseline.
Time to open another data file: open Harvie-2015-data.csv from the folder you downloaded in section 2. In this study we have one repeated measures IV with three levels (visual feedback). As this is a repeated measures experiment, these levels are three separate columns in our data set. The columns we need for our IV are called Understated_Visual_Feedback, Accurate_Visual_Feedback, and Overstated_Visual_Feedback. If you look closely at the Accurate_Visual_Feedback column, all the values are 1, as the other two conditions are expressed as a proportion of this condition. We have one DV (degree of rotation as a proportion of the baseline measurement), which is what each of the IV columns contains, so we do not need a separate DV column and grouping column like we did for the independent groups ANOVA.
7.2.1 Descriptive statistics and parametric assumptions
Similar to the last section, we are not going to repeat the procedure for getting descriptive statistics, as you should be pretty good at it by now. We can focus on getting the ANOVA window up and running to assess the parametric assumptions. Click on ANOVA > Repeated Measures ANOVA to get an empty window like Figure 14. Instead of directly dragging our variables into the spaces like in the previous examples, we need to do a bit of specifying whenever we use the repeated measures ANOVA window. Under Repeated Measures Factors, we need to tell JASP what our IV is (or IVs, if you have a factorial repeated measures ANOVA) and how many levels it has. Double click on RM Factor 1 and give it a name; I've called it Visual Feedback. Then click on Level 1 and rename it Understated, Level 2 can be Normal, and Level 3 can be Overstated. Notice how Repeated Measures Cells changes as we specify our IV and its levels. This is where we now need to drag our variables.
Figure 14: Empty window for a Repeated Measures (RM) ANOVA
Highlight each variable and click on the arrow to the left of Repeated Measures Cells to place each variable into its respective cell (make sure the names match). You should now have a window that looks like the left side of Figure 15.

We can now focus on examining whether the parametric assumptions have been met. We need to worry about whether the residuals are normally distributed and whether we have sphericity (are the variances of the differences between each pair of levels approximately equal?). In contrast to the independent groups ANOVA, we do not have the option to look at a Q-Q plot, so it is difficult to assess the assumption of normality in JASP (hopefully it will be included in one of the next releases). JASP does, however, provide the ability to assess whether the assumption of sphericity has been met. In this case, it even tells us that the assumption has been violated before we explicitly ask for Mauchly's test, as a note under the Within Subjects Effects table. If we select Sphericity tests under Assumption Checks, we get the table displayed in Figure 15. As we already knew from the note, sphericity has been violated and we will have to apply a Greenhouse-Geisser correction to the degrees of freedom. This is easily done: just select Greenhouse-Geisser as the only option under Sphericity Corrections. In keeping with the minimalist design, we get just two rows of output with our corrected results, instead of the pages of repeated values that SPSS insists upon.
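For context (this is the standard correction rather than anything JASP-specific), Greenhouse-Geisser works by estimating a sphericity index $\hat{\varepsilon}$, which lies between $1/(k-1)$ and 1, and multiplying both degrees of freedom by it, so the corrected test is reported as

$$F\bigl(\hat{\varepsilon}(k-1),\; \hat{\varepsilon}(k-1)(n-1)\bigr)$$

where $k$ is the number of levels and $n$ is the number of participants. The F statistic itself does not change; only its degrees of freedom, and therefore the p value, are adjusted, which is why the corrected row in JASP shows the same F with smaller (often fractional) degrees of freedom.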
7.2.2 Main ANOVA results
We are now very close to looking at the main results; we just need to select a few options. As always, we want a measure of effect size, and we will report omega squared as a less biased measure of the proportion of variance explained, as we only have one IV (remember the differences from section 6.2.2). We can also select some planned contrasts: following the predictions from the paper, the authors expected understated visual feedback to produce greater rotation than normal feedback, and overstated visual feedback to produce less rotation than normal feedback. With a bit of trial and error, the repeated option under Contrasts provides us with these two comparisons. You should now have a window that looks like Figure 16.

There is a significant effect of visual feedback on the degree of rotation. Looking at the omega squared value, visual feedback explains 20% of the variance in degree of rotation. This could be written up as "there was a significant effect of visual feedback on the degree to which participants were able to rotate their necks, F(1.58, 74.26) = 18.91, p < .001, ω² = .20". We can then follow this up with our planned comparisons to investigate how each manipulation compared to the baseline level of rotation. With the aid of the plot in Figure 15, we can see that understated visual feedback resulted in a greater
Figure 16: Main RM ANOVA results and planned contrasts
The primary aim of JASP is to make Bayesian statistics accessible to everyone. Bayesian analyses can be performed in other software packages such as R, but this requires writing code, which is not immediately accessible if you have no programming experience. JASP therefore provides access to Bayesian statistics while removing some of the computational barriers that were previously in place. However, the full range of Bayesian statistics is not currently available in JASP, and if you require specialist models beyond ANOVA, you will need to explore packages such as R, Stan, or WinBUGS. Two of the examples from earlier in the guide will now be repeated to show you how they can be analysed from a Bayesian perspective.
The first task will be to reanalyse the Mehr et al. (2016) study from example two. Part of their study was to test whether the gaze time of the child was initially the same towards the researcher who was going to sing the familiar song and the researcher who was going to sing the unfamiliar song. This was important, as the children should not be gravitating towards one particular researcher before they start singing. However, the conclusion that there was no difference in gaze time was based on a non-significant paired samples T-Test. This perpetuates the fallacy that a large p value indicates there is little or no effect and that the null hypothesis should be accepted. As Dienes (2014) highlights, this conclusion cannot be made using p values alone, and it is an area where Bayesian statistics can be particularly useful. We will now reanalyse this example using a Bayesian Paired Samples T-Test to see whether there is support for the null hypothesis relative to an alternative hypothesis, or whether the data were simply insensitive to detect a difference.
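Before running the analysis, it is worth being clear about what the key number means. A Bayes factor is simply the ratio of how well the data are predicted under one hypothesis relative to another (this is the standard definition, as in Dienes 2014, rather than anything JASP-specific):

$$BF_{01} = \frac{p(\text{data} \mid H_0)}{p(\text{data} \mid H_1)}, \qquad BF_{10} = \frac{1}{BF_{01}}$$

So, for example, $BF_{01} = 3$ would mean the data are three times more likely under the null hypothesis than under the alternative, whereas values close to 1 mean the data do not discriminate between the two hypotheses, i.e. the data are insensitive.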
8.3.1 Reanalysis one: performing a Bayesian paired samples t-test
Go back to the data folder and open ’Mehr-study1-data.csv’. Using the T-Tests tab, select the Bayesian Paired Samples T-Test as opposed to the Paired Samples T-Test used in example two. This should open a new set of menu options and an empty table on the right. Similar to the procedure for a regular T-Test, drag the two variables Familiarization_Gaze_to_Familiar and Familiarization_Gaze_to_Unfamiliar into the white box with the little ruler in the bottom right corner to have a screen looking like Figure 17.
Figure 17: JASP window for a Bayesian Paired Samples T-Test
8.3.2 Menu options in JASP
Although the menu options and table layout look similar to when you performed a regular paired samples T-Test, there will be some new options and terminology that need outlining first.