
Lab Manual Supplement

Benjamin Lehmann

Version: July 6, 2017


Part 1

Thinking (and writing) like a scientist

Many lab courses focus on teaching you specific laboratory techniques. The introductory physics labs have a very different mission. We will use very simple equipment, and I will not be grading you on how well you perform the measurements. Instead, we will focus heavily on the scientific process behind each experiment.

1.1 The scientific method

At some point in your life, you probably learned about the scientific method, which is usually phrased something like this:

  1. Ask a question;
  2. Form a hypothesis to answer it;
  3. Test the hypothesis with an experiment;
  4. Analyze the results;
  5. Refine your hypothesis and repeat.

This is an idealized version of the process, but it captures the important points. What we will focus on in this class are the details of steps 3 and 4: how do we devise an experiment to test a hypothesis? How do we analyze the results, and how do we draw conclusions about the hypothesis from the data?

First, let's review the vocabulary we need in order to go through this process systematically. An observation is any data at all about the real world. Often, we will start with an observation that is qualitative, meaning that it is not described in terms of numbers. For example, you can observe that the sky is blue, and that is a perfectly legitimate observation. However, to understand the underlying mechanism behind an observation, it is useful to rephrase observations in a quantitative form, describing them mathematically. For example, instead of saying that the sky is blue, we can measure the wavelengths of the light coming from the sky.

If we notice that observations follow a predictable pattern, it can be useful to formalize that pattern as a law. A law allows us to make a prediction based on the pattern that we have previously observed, but it does not contain any claims about the underlying nature of the phenomenon. Early in the quarter, you learned or will learn about Newton's laws, which are good examples of laws.


After a hypothesis makes many successful predictions—that is, predictions which survive experimental tests—it may then become the scientific consensus. This means that a strong majority of scientists in the field believe that the model is the best available description of nature. It does not necessarily mean that scientists believe the model to be fundamentally correct or complete, and some scientists may still disagree with the rest.

Taken together, a set of related hypotheses and results which are well-tested in a particular set of conditions is called a theory. Notice the difference between a theory and a law: a law describes a phenomenon, but a theory explains the mechanism behind it. A law is not "stronger" than a theory—and in fact, many laws can be derived from an underlying theory. In physics, a theory need not be fundamentally correct, but it should provide an effective description of nature in a specific set of conditions. For example, classical mechanics is a theory which underlies Newton's laws. Classical mechanics is known to be inconsistent with data in certain conditions, but it remains a very useful tool for most everyday situations.

Be aware that terminology varies between fields; for instance, a mathematical "theory" refers to a collection of definitions and theorems, which are proven results guaranteed to be correct. Neither of these uses of the word "theory" is especially related to the everyday usage of the word. Let's review the vocabulary:

After observing a phenomenon that we do not understand, we propose a hypothesis or model to explain it. We rephrase the model in a quantitative form, so that we can generate quantitative predictions. If the model is falsifiable, as it needs to be, we can think of an observation which could falsify it. Then we perform an experiment to make that measurement quantitatively. We compare the predicted results to the measured results, and determine whether the model is consistent with the data or excluded by the data. If our model, other related hypotheses, and their consequences have strong explanatory power, the combined framework may be considered a theory. In time, the theory may become the scientific consensus.

We have one problem left: how do we decide whether the predictions of a model are consistent with experimental data? If you have ever taken a lab course before, you know that the predicted numbers will never agree perfectly with the measured numbers. This is not something we can ever escape completely—every experiment is subject to sources of error. To decide whether or not a model is excluded by a dataset, we need to determine how much deviation between theory and experiment is acceptable. This is where the concept of uncertainty comes in.

1.2 Understanding uncertainty

Suppose that a model predicts a value of 30.0 m/s^2 for the acceleration of some object, and when we perform the experiment, we measure a value of 32.4 m/s^2. Intuitively, if the experiment is only precise to within 5 m/s^2, we cannot exclude our model. But if the experiment is precise to within 0.001 m/s^2, then maybe we can. How can we find out?

The first thing to understand is that every measurement is subject to some random variation. If we repeat the experiment several times, we will probably get results that are close to 32.4 m/s^2, but we should not be surprised to see 32.2 m/s^2, or maybe 33.9 m/s^2. This variation may have many different sources, including the precision of the measuring instrument and small differences in the initial conditions of each experiment. Regardless of the sources, this inherent variation means that we cannot say exactly what the true value is. Instead, we will only be able to make a probabilistic statement about the real value of what we are trying to measure.


[Figure 1.1 is a histogram: x axis "Measured value [m/s^2]", y axis "Number of measurements".]

Figure 1.1: Data from an experiment repeated 10,000 times, grouped into intervals of width 0.25 m/s^2. More measurements fell between 32.0 and 32.25 m/s^2 than into any other single interval, but many more measured values fell into the combined interval between 31 and 33 m/s^2.

In most situations, if we do the experiment many, many times, and plot the results, we will see a shape like that shown in fig. 1.1. In the figure, we have taken 10,000 measurements, grouped them into intervals, and counted how many measurements landed in each interval. (For example, 236 measurements gave a value between 30.0 and 30.25 m/s^2, so the gray bar between those two numbers on the x axis extends up to 236 on the y axis.) This kind of plot is called a histogram. If you look at the figure, you will see that the shape is centered near 32.0 m/s^2, but many measurements are a little distance away from that center. The further we go from the center, the lower the count of measurements. This shape is called a normal distribution, and it shows up in almost every experiment.^2 (You may have also heard it called a bell curve.)

Looking at the figure a little more closely, we can see that some of our measurements give results that seem far away from the central value—some as far out as 28 or 36 m/s^2. Indeed, if our data follows a normal distribution, then there is some very small probability of getting a result arbitrarily far away from the center. We could get a measurement of 100 or even 1000 m/s^2—it's simply that the probability of measuring such a value is exceedingly small. The shape of the plot tells us what measurement values are most probable. In other words, the normal distribution is one example of a probability distribution. The important consequence is that when we compare our model to our experimental data, we need to keep in mind that even if our model is very accurate, there is some probability that measurements will greatly disagree with our predictions. Likewise, even if our model is very inaccurate, there is some probability that measurements will closely align with our predictions by chance alone!

As we make more and more measurements, the probability of getting one measurement that lies far away from the center keeps on going up. But at the same time, as we gather more data, the center value becomes more and more sharply defined, as you can see in fig. 1.2. You can estimate where the center lies by taking the mean of the measurements. The width of the peak is characterized by a quantity called the variance. A lower variance corresponds to a narrower peak. Even though the center becomes easier to locate as we take more measurements, the variance is a property of the measurement process itself, so it does not shrink as we collect more data.

^2 There are fascinating mathematical reasons for this fact. To find out more, look up the central limit theorem.
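If you would like to see how a histogram like fig. 1.1 arises, the short Python sketch below simulates an experiment of this kind. (This is an illustration only, not part of the course materials; the true value and spread are made-up numbers chosen to resemble the figure.)

    import numpy as np

    rng = np.random.default_rng(0)
    # Simulate 10,000 repetitions: each measurement is a "true" value of
    # 32.0 m/s^2 plus normally distributed random error with sigma = 1.0 m/s^2.
    measurements = rng.normal(loc=32.0, scale=1.0, size=10_000)

    # Group the measurements into intervals of width 0.25 m/s^2 and count
    # how many land in each one -- exactly what a histogram displays.
    edges = np.arange(28.0, 36.25, 0.25)
    counts, _ = np.histogram(measurements, bins=edges)
    print(counts.max())  # height of the tallest bar, near the center at 32.0

Plotting counts against edges (for example with matplotlib's bar function) reproduces the bell-curve shape of fig. 1.1.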


Score        0.0     1.0     2.0     3.0     4.0           5.0
Probability  1.000   0.317   0.046   0.0027  6.33 × 10^-5  5.73 × 10^-7

Table 1.1: Standard normal table. Each of the probabilities shown is the probability of obtaining a standard score at least as great as that in the first row of the table, assuming that the deviation can be in either direction.
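You can reproduce table 1.1 yourself. Assuming SciPy is available, the probability of a deviation of at least z standard deviations in either direction is 2(1 − Φ(z)), where Φ is the cumulative distribution function of the standard normal distribution:

    from scipy.stats import norm

    # Two-sided tail probability: P(|Z| >= z) = 2 * (1 - Phi(z)).
    for z in [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]:
        p = 2 * norm.sf(z)  # sf(z) = 1 - cdf(z), the "survival function"
        print(f"{z:.1f}  {p:.3g}")

The printed probabilities (1, 0.317, 0.0455, 0.0027, 6.33e-05, 5.73e-07) match the table up to rounding.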

Since the uncertainty is denoted by the Greek letter σ (sigma), this result is just the answer from step 1 written in terms of σ. For example, if our measured value is 31.96 ± 0.09 m/s^2 and our predicted value is 32.0 m/s^2, then the difference in step 1 is 0.04 m/s^2. In step 2, we divide this difference by 0.09 m/s^2 to express it as 0.44σ. This means that the standard score (or Z-score) is 0.44. Now we can look up this value of the standard score in a table. We find that the probability of getting a result at least this distant from the mean is about 0.66. Thus, it's completely plausible that our measurement could occur by random chance even if the model is correct, so we conclude that the model is consistent with the data. If our prediction had been 31.5 m/s^2, then the standard score would be 5.1. Our table tells us that the probability of a measurement at least this far from the prediction is less than 6 × 10^-7, which is less than one in a million! In this case, we can reasonably reject the hypothesis. We say that this model is excluded at 5.1σ.

At this point, you may object that our decision to reject a hypothesis with one standard score and not to reject another with a different standard score is arbitrary. After all, there is always a possibility that a hypothesis is correct; we can say that our data is extremely unlikely in such a case, but it is not impossible. The objection is sound—there really is some degree of subjectivity to the choice of threshold past which we reject a hypothesis. The best we can do is to agree on a threshold before we do the experiment, and weigh every experiment in the context of the threshold chosen. A result that crosses the threshold is said to be statistically significant.

Different fields have different conventions for statistical significance. For example, in the social sciences, a standard score of 2.0 is often considered significant, corresponding to a probability of about 1/20 of measuring the observed values given the proposed hypothesis; in particle physics, the standard score must be at least 5.0 to be considered significant. This matters. Remember, if you do an experiment one hundred times, getting one result that has a probability of 1/100 is not unusual! Case in point: last year, the LHC detected a signal^4 that deviated from established theory at 3.4σ. Physicists spent lots of time trying to figure out what undiscovered particles might be responsible. But after more data was collected, the signal disappeared; its appearance was nothing but random chance. (This is also why doctors don't order hundreds of tests for every patient—if a typical false-positive rate is 1%, then the average patient would appear to have several diseases.)

All of the uncertainties that we have talked about so far have random effects on the measurements, and this is one important class of errors. But when we evaluate the uncertainties in an experiment, we also need to consider sources of error which always push the measurement in one direction. For example, suppose that the instrument we are using to measure acceleration in our experiment is poorly calibrated, and the numbers it reports are always high by 0.2 m/s^2. This kind of non-random error is called a systematic error. When designing an experiment, we work to eliminate as much systematic error as we can. Handling systematic error is complicated. We will do it in a simplified way, described in appendix A.

Now that we understand the role of uncertainty in testing our hypotheses, we still need to know how to estimate the uncertainties inherent in our experiments. There are mathematically sophisticated approaches to this problem, and some people spend a large part of their careers understanding the errors in a large experiment. In this class, our estimates of uncertainty will be equal parts science and art: we will try to list all of the sources of error in the experiment and characterize their impact on the results. We will discuss this in greater depth in the appendix.

^4 To read more about this, look up the LHC diboson excess.
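As a sketch of the whole procedure, here is the worked example from the text in Python (again assuming SciPy):

    from scipy.stats import norm

    measured, sigma = 31.96, 0.09  # measured value and its uncertainty [m/s^2]

    for predicted in (32.0, 31.5):
        z = abs(measured - predicted) / sigma  # steps 1 and 2: the standard score
        p = 2 * norm.sf(z)  # probability of a result at least this far out
        print(f"prediction {predicted}: {z:.2f} sigma, p = {p:.2g}")

    # prediction 32.0: 0.44 sigma, p = 0.66    -> consistent with the data
    # prediction 31.5: 5.11 sigma, p = 3.2e-07 -> excluded at about 5 sigma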

1.3 Scientific voice

There is one very important part of the scientific method which we have not mentioned yet: any results need to be shared with the scientific community. In practice, this usually takes place in the form of papers published in scientific journals. Writing these articles requires a different set of skills from much of the other writing we do in life. Now that we understand the scientific process more deeply, let's discuss the implications for scientific language.

The most important things to keep track of in scientific writing are the claims you are making and their scope or breadth. A claim is something you are arguing for. It should have strong support, but there can still be some gaps for later work to fill; you should acknowledge these gaps and evaluate their significance. On the other hand, it is very important to claim nothing more than what you can support—for example, if you perform an experiment showing that the sky is blue, you should not claim that the sky is blue on every planet. As a result, most scientific papers make claims that are extremely specific (i.e., with narrow scope). Since the entire scientific method revolves around testing claims, a good paper is very direct about the claims that it makes.

Of course, we know now that the scientific method can never prove that a model is correct. This shapes the way that we explain our claims: for instance, we cannot say that our study "proves" something. Instead, we need to use the language of statistics. You will acquire an intuition for this kind of writing by reading. Let's begin with a simple pair of examples:

Example 1 (wrong). Newton's second law is one of the most fundamental assumptions underlying Newtonian mechanics. We perform an experiment to prove that the second law is true. A known force is applied to a cart on a track, and the resulting acceleration is measured. In an analysis of 10 trials, the measured acceleration matches the prediction, so we conclude that the second law is correct.

Example 2 (better). Newton's second law is one of the most fundamental assumptions underlying Newtonian mechanics. We perform an experiment to test the second law. A known force is applied to a cart on a track, and the resulting acceleration is measured. In an analysis of 10 trials, the measured acceleration is consistent with the prediction at 0.5σ. Our results provide strong evidence for the validity of the second law at velocities up to 5 m/s.

The first example makes a statement that is too strong for multiple reasons: apart from uncertainty in the experiment, it is not clear what assumptions the author is making. Additionally, since it does not explain how strong the result actually is, it is impossible for the reader to weigh the significance of the claim. The second example fixes these problems by explaining the findings in terms of probability rather than making absolute claims.

Another way to articulate claims without making them too strong is to use qualifiers, which are words that emphasize or weaken a statement. For example, familiar qualifiers from everyday speech are kind of, pretty much, and not really. We don't use these particular phrases in scientific writing, and it's not just because they're informal. The much more serious problem with qualifiers like these is that they are too imprecise to add anything to the argument. You can use qualifiers to explain or highlight a conclusion that another scientist would find to be evident from your data, but they shouldn't be vacuous. For instance, in example 2, we said that our results provide "strong" evidence for the second law; that qualifier is justified because the claim is backed by a quantitative statement in the same sentence.

Ask yourself whether you really need each word that you write. Lots of words in the first example can be omitted with no ill effects. The result is cleaner writing that's easier to read. Often, wordy writing follows from imprecise language—so if you find yourself writing a lot, ask yourself whether you're using the right words.

Finally, when writing a scientific article, think carefully about your readers. Who are they, and what will they think as they read each sentence? In this class, we will practice writing for an audience of fellow scientists, and scientists are notoriously picky about holes in an argument. You should imagine that your work is being read by someone who really wants to prove you wrong. The best way to write for a reader like that is to be careful, precise, and measured. Remember, a scientific article is an argument, and everything that you know about making a convincing argument applies.


Example start (better). We perform an experiment to test conservation of momentum using gliders on an air track.

This tells the reader everything they need to know succinctly, without too much detail.

The argument in your report should be clear and well-supported, but your results do not need to match your expectations. It is absolutely fine to perform an experiment, find that the results are inconsistent with your hypothesis, and communicate that in your report. Do not assume that your outside knowledge must be correct and the experiment faulty! If your results are inconsistent with other things you know, then do your best to explain the difference. Are there sources of error in your experiment that you have not fully accounted for? Are there assumptions involved in your prediction that might not apply? Whatever you think, do not alter your conclusions to reflect what you expected. That includes saying things like the following:

Example (wrong). Our results are inconsistent with conservation of momentum, but we know that conservation of momentum should hold, so our experiment must be in error.

In this class, we will always take the position that we are the first people to perform each experiment. If you have outside knowledge of the “correct” value of a measurement, you should set it aside—it should not enter your report unless you have direct experimental evidence for it. That said, if you think your measurement should come out a different way, do call me over to take a look. There may be something wrong that I can help you to fix. Finally, let’s come back to something we said before: a prediction is not a guess, and this class is not meant to test how well you can guess at the outcome of an experiment. This class is meant to teach you to perform good experiments and interpret their results. Understanding your predictions and comparing them with data is central to that process. Do not make guesses in your reports; if you have no basis to make a prediction, then don’t make a prediction. If you think your results are inconsistent with a prediction, analyze the implications. Don’t just say that you were wrong and call it a day. And if something went wrong with a calculation, fix it!

2.2 Report components

The layout of your report is up to you, but a good report will address all of the points listed below in some form. Not every experiment is the same; sometimes it won’t be easy to write an abstract, or a formal “predictions” section won’t be appropriate. I will do my best to give you guidance as needed during the labs, but you can use your best judgment and ask me if you have questions.

  1. Introduction. The introduction to your report should explain what you are going to do and why. Typically, the introduction should start off with motivation: why are we motivated to perform this experiment? What problem is it going to solve? Then you can discuss, in broad terms, how you will solve that problem.
  2. Methods. Tell your reader how the experiment works. There are two criteria for the minimum level of detail to include: first, a reader with your level of technical background should be able to understand how to do the same basic experiment, even if they have to use their judgment to fill some gaps. Second, you should introduce any components of the experiment that you need to reference later, particularly if they relate to an assumption you make in your predictions or analysis. Do not write a play-by-play account of every move you make, and never copy from the lab manual.
  3. Predictions. Generally, your experiment will test a hypothesis. To do that, you need to make quantitative predictions from the hypothesis that you can compare with data. Explain your predictions to the reader; you can use mathematics as appropriate, but much of this should take the form of complete sentences. In particular, you should clearly explain the assumptions you are making in your prediction.
  4. Observations. Record all of the data that you will need for your analysis. This can include single numbers, tables, or plots. Do not spend too much time thinking about how to present your data—just make sure they appear.
  5. Analysis. Compare your predictions with your data. You should analyze the uncertainty in your experiment, and use that uncertainty to determine whether your observations are consistent with the hypothesis. You should explain the implications for physics, and highlight any assumptions you are making. This is the most important section of your report.
  6. Conclusions. Briefly remind the reader of the big picture of your experiment, and sum up your results. This should not take more than a paragraph.
  7. Abstract. An abstract is a single paragraph that summarizes the entire report. It differs from the conclusion in that the conclusion assumes the reader has already read the rest of the report. An abstract assumes that the reader has seen nothing else at all about your work. In scientific papers, the abstract comes first, and often another scientist will use the abstract to decide whether to read the rest of the paper. We will usually write abstracts at the end simply because you need to know the results of the experiment first. If you prefer, you are welcome to leave space at the beginning of your report and write your abstract there instead.

2.3 Logistics

In the classroom: Bring your prelabs to the front of the room as you enter. We will spend the first half hour of each class reviewing the relevant material together, and then you will have the remaining time to collect data and write your report. I will stop you every now and then to talk about certain questions as a class, and we will usually discuss the major points of the analysis by the end of the second hour. Generally, you should not leave early even if you think you are finished, since you might miss something important.

Computers: You are welcome to type your reports and email them to me. However, I have the same expectations for typed reports as I do for handwritten reports. If you need to include graphs and you aren't able to make them digitally, you can submit them on paper and refer to them in your typed work. All materials related to a lab need to be handed in to me by the end of that class.

Missing labs: If you cannot avoid missing a lab, please let me know as soon as possible, and we may be able to work something out. If you miss two labs, you will not pass the course.

Grades: Your grades will be based on your reports and your prelabs. Both will be evaluated on a scale from 0 to 4. Prelabs will count for 19% of your grade, and reports will count for 79%. The remaining 2% will be based on a short reading quiz to be given during the second class.

Contacting the instructor: If you need to contact me, send an email to blehmann@ucsc.edu. Please write [6L] in the subject line to make sure that I see it.

Appendix A

Quantifying uncertainty

This appendix is provided as a quick reference for statistical calculations. You are encouraged to print it and bring it with you to the lab. I have not included explanations of where these results come from, and they have been formatted for the conventions of our course only. Please ask me if you have questions.

Throughout this appendix, X stands for the set of measured values. We will assume that X contains N measurements, which we will label x_1, x_2, ..., x_N. We will also say that the values in X are measurements of a variable called x. (This is not necessarily position—it could stand for anything.) For concrete examples, we will use the following dataset:

i     1       2       3       4       5
x_i   11.24   13.39   13.40   11.39   11.76        (A.1)

A.1 Mean

Definition. The mean of measurements of a variable x, denoted by 〈x〉, is defined by

〈x〉 = (1/N) ∑_{i=1}^{N} x_i = (x_1 + x_2 + · · · + x_N) / N        (A.2)

What is it? The mean tells you where the center of your measurements lies. This corresponds to the location of the peak in a histogram of your data. As you take more data points, the mean will become more precise.

When should I use it? The mean is one of the numbers you should report for a measurement, along with the standard error. Take the mean to combine separate measurements of the same quantity.

When should I avoid it? Do not take the mean of numbers that do not represent the same quantity, or which are derived using different methods. Avoid taking the mean if you do not have more than two data points.

Example. For our dataset, N = 5, so 〈x〉 = (1/5)(11.24 + 13.39 + 13.40 + 11.39 + 11.76). You can now calculate 〈x〉 = 12.24.
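As a quick check, here is the same calculation in Python (a sketch using NumPy for convenience):

    import numpy as np

    x = np.array([11.24, 13.39, 13.40, 11.39, 11.76])
    print(np.mean(x))  # 12.236, which rounds to 12.24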


A.2 Sample standard deviation and variance

Definition. The sample variance is defined by

σ_x^2 = (1/(N − 1)) ∑_{i=1}^{N} (x_i − 〈x〉)^2        (A.3)

where 〈x〉 is the mean. The sample standard deviation is the square root of the sample variance:

σ_x = √[ (1/(N − 1)) ∑_{i=1}^{N} (x_i − 〈x〉)^2 ]        (A.4)

What is it? The sample standard deviation, or just “standard deviation,” is a measure of the spread of a set of values. It corresponds to the width of the peak in a histogram of your data (see e.g. fig. 1.1). The standard deviation is the square root of the variance; the standard deviation has the same units as the data, while the variance does not.

When should I use it? The standard deviation can be used to characterize the precision of your experiment. It tells you how far apart individual measurements will probably be. You should think of the variance as an ingredient that you use to calculate the standard deviation—we won’t use it on its own.

When should I avoid it? Do not use the sample standard deviation as the error on your measurement. Use the standard error of the mean instead.

Example. We computed the average in the previous example, finding 〈x〉 = 12.24. We still have N = 5, so N − 1 = 4, and the variance is

σ_x^2 = (1/4) [ (11.24 − 12.24)^2 + (13.39 − 12.24)^2 + (13.40 − 12.24)^2
              + (11.39 − 12.24)^2 + (11.76 − 12.24)^2 ]        (A.5)

      = (1/4) [ (−0.996)^2 + (1.154)^2 + (1.164)^2 + (−0.846)^2 + (−0.476)^2 ]        (A.6)

      = 1.16        (A.7)

(The differences in eq. (A.6) are computed from the unrounded mean, 〈x〉 = 12.236.)

To find the standard deviation, we take the square root of the variance:

σ_x = √1.16 = 1.08        (A.8)
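You can verify both results in Python. Note the ddof=1 argument, which tells NumPy to divide by N − 1 (the sample variance) rather than by N:

    import numpy as np

    x = np.array([11.24, 13.39, 13.40, 11.39, 11.76])
    print(np.var(x, ddof=1))  # 1.1552..., which rounds to 1.16
    print(np.std(x, ddof=1))  # 1.0748...; the 1.08 above comes from taking
                              # the square root of the already-rounded 1.16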