
















Describes principles for introductory physics experiments and course policies.
Typology: Lecture notes
Many lab courses focus on teaching you specific laboratory techniques. The introductory physics labs have a very different mission. We will use very simple equipment, and I will not be grading you on how well you perform the measurements. Instead, we will focus heavily on the scientific process behind each experiment.
1.1 The scientific method
At some point in your life, you probably learned about the scientific method, which is usually phrased something like this:

1. Make an observation.
2. Formulate a hypothesis that explains the observation.
3. Devise an experiment to test the hypothesis.
4. Analyze the data and draw conclusions about the hypothesis.
This is an idealized version of the process, but it captures the important points. What we will focus on in this class are the details of steps 3 and 4: how do we devise an experiment to test a hypothesis? How do we analyze the results, and how do we draw conclusions about the hypothesis from the data? First, let’s review the vocabulary we need in order to go through this process systematically.

An observation is any data at all about the real world. Often, we will start with an observation that is qualitative, meaning that it is not described in terms of numbers. For example, you can observe that the sky is blue, and that is a perfectly legitimate observation. However, to understand the underlying mechanism behind an observation, it is useful to rephrase observations in a quantitative form, describing them mathematically. For example, instead of saying that the sky is blue, we can measure the wavelengths of the light coming from the sky.

If we notice that observations follow a predictable pattern, it can be useful to formalize that pattern as a law. A law allows us to make a prediction based on the pattern that we have previously observed, but it does not contain any claims about the underlying nature of the phenomenon. Early in the quarter, you learned or will learn about Newton’s laws, which are good examples of laws:

1. An object remains at rest or in motion at constant velocity unless acted on by a net external force.
2. The acceleration of an object is proportional to the net force applied to it: F = ma.
3. For every action, there is an equal and opposite reaction.
After a hypothesis makes many successful predictions—that is, predictions which survive experimental tests—it may then become the scientific consensus. This means that a strong majority of scientists in the field believe that the model is the best available description of nature. It does not necessarily mean that scientists believe the model to be fundamentally correct or complete, and some scientists may still disagree with the rest.

Taken together, a set of related hypotheses and results which are well-tested in a particular set of conditions is called a theory. Notice the difference between a theory and a law: a law describes a phenomenon, but a theory explains the mechanism behind it. A law is not “stronger” than a theory—and in fact, many laws can be derived from an underlying theory. In physics, a theory need not be fundamentally correct, but it should provide an effective description of nature in a specific set of conditions. For example, classical mechanics is a theory which underlies Newton’s laws. Classical mechanics is known to be inconsistent with data in certain conditions, but it remains a very useful tool for most everyday situations.

Be aware that terminology varies between fields; for instance, a mathematical “theory” refers to a collection of definitions and theorems, which are proven results guaranteed to be correct. Neither of these uses of the word “theory” is especially related to the everyday usage of the word. Let’s review the vocabulary:
After observing a phenomenon that we do not understand, we propose a hypothesis or model to explain it. We rephrase the model in a quantitative form, so that we can generate quantitative predictions. If the model is falsifiable, as it needs to be, we can think of an observation which could falsify it. Then we perform an experiment to make that measurement quantitatively. We compare the predicted results to the measured results, and determine whether the model is consistent with the data or excluded by the data. If our model, other related hypotheses, and their consequences have strong explanatory power, the combined framework may be considered a theory. In time, the theory may become the scientific consensus.
We have one problem left: how do we decide whether the predictions of a model are consistent with experimental data? If you have ever taken a lab course before, you know that the predicted numbers will never agree perfectly with the measured numbers. This is not something we can ever escape completely—every experiment is subject to sources of error. To decide whether or not a model is excluded by a dataset, we need to determine how much deviation between theory and experiment is acceptable. This is where the concept of uncertainty comes in.
1.2 Understanding uncertainty
Suppose that a model predicts a value of 30.0 m/s^2 for the acceleration of some object, and when we perform the experiment, we measure a value of 32.4 m/s^2. Intuitively, if the experiment is only precise to within 5 m/s^2, we cannot exclude our model. But if the experiment is precise to within 0.001 m/s^2, then maybe we can. How can we find out?

The first thing to understand is that every measurement is subject to some random variation. If we repeat the experiment several times, we will probably get results that are close to 32.4 m/s^2, but we should not be surprised to see 32.2 m/s^2, or maybe 33.9 m/s^2. This variation may have many different sources, including the precision of the measuring instrument and small differences in the initial conditions of each experiment. Regardless of the sources, this inherent variation means that we cannot say exactly what the true value is. Instead, we will only be able to make a probabilistic statement about the real value of what we are trying to measure.
Figure 1.1: Data from an experiment repeated 10,000 times, grouped into intervals of width 0.25 m/s^2. The horizontal axis shows the measured value [m/s^2]; the vertical axis shows the number of measurements in each interval. More measurements fell between 32.0 and 32.25 m/s^2 than into any other single interval, but many more measured values fell into the combined interval between 31 and 33 m/s^2.
In most situations, if we do the experiment many, many times, and plot the results, we will see a shape like that shown in fig. 1.1. In the figure, we have taken 10,000 measurements, grouped them into intervals, and counted how many measurements landed in each interval. (For example, 236 measurements gave a value between 30.0 and 30.25 m/s^2, so the gray bar between those two numbers on the x axis extends up to 236 on the y axis.) This kind of plot is called a histogram. If you look at the figure, you will see that the shape is centered near 32.0 m/s^2, but many measurements are a little distance away from that center. The further we go from the center, the lower the count of measurements. This shape is called a normal distribution, and it shows up in almost every experiment^2. (You may have also heard it called a bell curve.)

Looking at the figure a little more closely, we can see that some of our measurements give results that seem far away from the central value—some as far out as 28 or 36 m/s^2. Indeed, if our data follows a normal distribution, then there is some very small probability of getting a result arbitrarily far away from the center. We could get a measurement of 100 or even 1000 m/s^2—it’s simply that the probability of measuring such a value is exceedingly small. The shape of the plot tells us what measurement values are most probable. In other words, the normal distribution is one example of a probability distribution.

The important consequence is that when we compare our model to our experimental data, we need to keep in mind that even if our model is very accurate, there is some probability that measurements will greatly disagree with our predictions. Likewise, even if our model is very inaccurate, there is some probability that measurements will closely align with our predictions by chance alone! As we make more and more measurements, the probability of getting at least one measurement that lies far away from the center keeps on going up.
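The histogram-building procedure described above is easy to simulate. Here is a minimal Python sketch; the center (32.0) and spread (1.0) of the simulated distribution are illustrative assumptions, not values taken from the experiment:

```python
import random
from collections import Counter

random.seed(0)

# Simulate 10,000 repetitions of the measurement (assumed center 32.0,
# assumed spread 1.0).
measurements = [random.gauss(32.0, 1.0) for _ in range(10_000)]

# Group the values into intervals ("bins") of width 0.25, as in fig. 1.1.
width = 0.25
counts = Counter(int(x / width) for x in measurements)

# The most populated interval lies near the center of the distribution.
peak_bin = max(counts, key=counts.get)
print(f"Peak interval: [{peak_bin * width:.2f}, {(peak_bin + 1) * width:.2f})")
```

With the random seed fixed, the most populated interval lands near the assumed center, just as the peak of fig. 1.1 sits near 32.0 m/s^2.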
But at the same time, as we gather more data, the center value becomes more and more sharply defined, as you can see in fig. 1.2. You can estimate where the center lies by taking the mean of the measurements. The width of the peak is characterized by a quantity called the variance. A lower variance corresponds to a narrower peak. Even though the center becomes easier to locate as we take more measurements, the variance is a property of the measurement process itself: it does not shrink as we gather more data. What shrinks is our uncertainty about where the center lies.
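To see this sharpening concretely, the following Python sketch repeats a simulated experiment many times at several sample sizes and looks at how much the mean itself scatters. The simulated center and spread are assumptions for illustration:

```python
import random
import statistics

random.seed(1)

def mean_of_sample(n):
    # Draw n simulated measurements (assumed center 32.0, assumed spread 1.0)
    # and return their mean.
    return statistics.fmean(random.gauss(32.0, 1.0) for _ in range(n))

# Repeat the whole "experiment" 200 times at each sample size and measure
# how much the mean itself scatters: it narrows roughly as 1/sqrt(n).
scatter = {}
for n in (10, 100, 1000):
    means = [mean_of_sample(n) for _ in range(200)]
    scatter[n] = statistics.stdev(means)
    print(n, round(scatter[n], 3))
```

The spread of the individual measurements stays fixed at the assumed value, but the scatter of the mean shrinks as the sample grows, which is exactly the sharpening shown in fig. 1.2.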
^2 There are fascinating mathematical reasons for this fact. To find out more, look up the central limit theorem.
Score        0.0    1.0    2.0    3.0     4.0          5.0
Probability  1.000  0.317  0.046  0.0027  6.33 × 10^-5  5.73 × 10^-7
Table 1.1: Standard normal table. Each of the probabilities shown is the probability of obtaining a standard score at least as great as that in the first row of the table, assuming that the deviation can be in either direction.
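If you have Python at hand, the entries of table 1.1 can be reproduced with the standard library: the probability of a standard score at least z in magnitude, with the deviation allowed in either direction, is erfc(z/√2).

```python
import math

# Two-sided tail probability of a standard normal distribution:
# P(|Z| >= z) = erfc(z / sqrt(2)). This reproduces the entries of table 1.1.
for z in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0):
    p = math.erfc(z / math.sqrt(2))
    print(f"{z:.1f}  {p:.3g}")
```

This is handy when your standard score falls between the table's entries.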
by the Greek letter σ (sigma), this result is just the answer from step 1 written in terms of σ. For example, if our measured value is 31.96 ± 0.09 m/s^2 and our predicted value is 32.0 m/s^2, then the difference in step 1 is 0.04 m/s^2. In step 2, we divide this difference by 0.09 m/s^2 to express it as a standard score of 0.04/0.09 ≈ 0.44σ.
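A quick sketch of this calculation in Python, using the numbers from the example above:

```python
import math

# Worked example from the text: measured 31.96 +/- 0.09 m/s^2 against a
# predicted 32.0 m/s^2.
measured, sigma, predicted = 31.96, 0.09, 32.0

z = abs(measured - predicted) / sigma  # standard score: 0.04 / 0.09, about 0.44
p = math.erfc(z / math.sqrt(2))        # chance of a deviation at least this large
print(f"z = {z:.2f} sigma, P = {p:.2f}")
```

A deviation this small is very probable under the model, so the data give us no reason to exclude it.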
^4 To read more about this, look up the LHC diboson excess.
understanding the errors in a large experiment. In this class, our estimates of uncertainty will be equal parts science and art: we will try to list all of the sources of error in the experiment and characterize their impact on the results. We will discuss this in greater depth in the appendix.
1.3 Scientific voice
There is one very important part of the scientific method which we have not mentioned yet: any results need to be shared with the scientific community. In practice, this usually takes place in the form of papers published in scientific journals. Writing these articles requires a different set of skills from much of the other writing we do in life. Now that we understand the scientific process more deeply, let’s discuss the implications for scientific language.

The most important things to keep track of in scientific writing are the claims you are making and their scope or breadth. A claim is something you are arguing for. It should have strong support, but there can still be some gaps for later work to fill; you should acknowledge these gaps and evaluate their significance. On the other hand, it is very important to claim nothing more than what you can support—for example, if you perform an experiment showing that the sky is blue, you should not claim that the sky is blue on every planet. As a result, most scientific papers make claims that are extremely specific (i.e., with narrow scope). Since the entire scientific method revolves around testing claims, a good paper is very direct about the claims that it makes.

Of course, we know now that the scientific method can never prove that a model is correct. This shapes the way that we explain our claims: for instance, we cannot say that our study “proves” something. Instead, we need to use the language of statistics. You will acquire an intuition for this kind of writing by reading. Let’s begin with a simple pair of examples:
Example 1 (wrong). Newton’s second law is one of the most fundamental assumptions underlying Newtonian mechanics. We perform an experiment to prove that the second law is true. A known force is applied to a cart on a track, and the resulting acceleration is measured. In an analysis of 10 trials, the measured acceleration matches the prediction, so we conclude that the second law is correct.
Example 2 (better). Newton’s second law is one of the most fundamental assumptions underlying Newtonian mechanics. We perform an experiment to test the second law. A known force is applied to a cart on a track, and the resulting acceleration is measured. In an analysis of 10 trials, the measured acceleration is consistent with the prediction at 0.5σ. Our results provide strong evidence for the validity of the second law at velocities up to 5 m/s.
The first example makes a statement that is too strong for multiple reasons: apart from uncertainty in the experiment, it is not clear what assumptions the author is making. Additionally, since it does not explain how strong the result actually is, it is impossible for the reader to weigh the significance of the claim. The second example fixes these problems by explaining the findings in terms of probability rather than making absolute claims.

Another way to articulate claims without making them too strong is to use qualifiers, which are words that emphasize or weaken a statement. For example, familiar qualifiers from everyday speech are kind of, pretty much, and not really. We don’t use these particular phrases in scientific writing, and it’s not just because they’re informal. The much more serious problem with qualifiers like these is that they are too imprecise to add anything to the argument. You can use qualifiers to explain or highlight a conclusion that another scientist would find to be evident from your data, but they shouldn’t be vacuous. For instance, in example 2, we said that our results provide “strong evidence” for the second law, a qualifier justified by the quantitative agreement at 0.5σ.
Ask yourself whether you really need each word that you write. Lots of words in the first example can be omitted with no ill effects. The result is cleaner writing that’s easier to read. Often, wordy writing follows from imprecise language—so if you find yourself writing a lot, ask yourself whether you’re using the right words. Finally, when writing a scientific article, think carefully about your readers. Who are they, and what will they think as they read each sentence? In this class, we will practice writing for an audience of fellow scientists, and scientists are notoriously picky about holes in an argument. You should imagine that your work is being read by someone who really wants to prove you wrong. The best way to write for a reader like that is to be careful, precise, and measured. Remember, a scientific article is an argument, and everything that you know about making a convincing argument applies.
Example start (better). We perform an experiment to test conservation of momentum using gliders on an air track.
This tells the reader everything they need to know succinctly, without too much detail. The argument in your report should be clear and well-supported, but your results do not need to match your expectations. It is absolutely fine to perform an experiment, observe that it is inconsistent with your hypothesis, and communicate that in your report. Do not assume that your outside knowledge must be correct, and the experiment faulty! If your results are inconsistent with other things you know, then do your best to explain the difference. Are there sources of error in your experiment that you have not fully accounted for? Are there assumptions involved in your prediction that might not apply? Whatever you think, do not alter your conclusions to reflect what you expected. That includes saying things like the following:
Example (wrong). Our results are inconsistent with conservation of momentum, but we know that conservation of momentum should hold, so our experiment must be in error.
In this class, we will always take the position that we are the first people to perform each experiment. If you have outside knowledge of the “correct” value of a measurement, you should set it aside—it should not enter your report unless you have direct experimental evidence for it. That said, if you think your measurement should come out a different way, do call me over to take a look. There may be something wrong that I can help you to fix. Finally, let’s come back to something we said before: a prediction is not a guess, and this class is not meant to test how well you can guess at the outcome of an experiment. This class is meant to teach you to perform good experiments and interpret their results. Understanding your predictions and comparing them with data is central to that process. Do not make guesses in your reports; if you have no basis to make a prediction, then don’t make a prediction. If you think your results are inconsistent with a prediction, analyze the implications. Don’t just say that you were wrong and call it a day. And if something went wrong with a calculation, fix it!
2.2 Report components
The layout of your report is up to you, but a good report will address all of the points listed below in some form. Not every experiment is the same; sometimes it won’t be easy to write an abstract, or a formal “predictions” section won’t be appropriate. I will do my best to give you guidance as needed during the labs, but you can use your best judgment and ask me if you have questions.
2.3 Logistics
In the classroom: Bring your prelabs to the front of the room as you enter. We will spend the first half hour of each class reviewing the relevant material together, and then you will have the remaining time to collect data and write your report. I will stop you every now and then to talk about certain questions as a class, and we will usually discuss the major points of the analysis by the end of the second hour. Generally, you should not leave early even if you think you are finished, since you might miss something important.
Computers: You are welcome to type your reports and email them to me. However, I have the same expectations for typed reports as I do for handwritten reports. If you need to include graphs and you aren’t able to make them digitally, you can submit them on paper and refer to them in your typed work. All materials related to a lab need to be handed in to me by the end of that class.
Missing labs: If you cannot avoid missing a lab, please let me know as soon as possible, and we may be able to work something out. If you miss two labs, you will not pass the course.
Grades: Your grades will be based on your reports and your prelabs. Both will be evaluated on a scale from 0 to 4. Prelabs will count for 19% of your grade, and reports will count for 79%. The remaining 2% will be based on a short reading quiz to be given during the second class.
Contacting the instructor: If you need to contact me, send an email to blehmann@ucsc.edu. Please write [6L] in the subject line to make sure that I see it.
This appendix is provided as a quick reference for statistical calculations. You are encouraged to print it and bring it with you to the lab. I have not included explanations of where these results come from, and they have been formatted for the conventions of our course only. Please ask me if you have questions.
Throughout this appendix, X stands for the set of measured values. We will assume that X contains N measurements, which we will label x_1, x_2, ..., x_N. We will also say that the values in X are measurements of a variable called x. (This is not necessarily position—it could stand for anything.) For concrete examples, we will use the following dataset:
i     1      2      3      4      5
x_i   11.24  13.39  13.40  11.39  11.76
A.1 Mean
Definition. The mean of measurements of a variable x, denoted by 〈x〉, is defined by
〈x〉 = (1/N) Σ_{i=1}^{N} x_i = (x_1 + x_2 + · · · + x_N)/N.
What is it? The mean tells you where the center of your measurements lies. This corresponds to the location of the peak in a histogram of your data. As you take more data points, the mean will become more precise.
When should I use it? The mean is one of the numbers you should report for a measurement, along with the standard error. Take the mean to combine separate measurements of the same quantity.
When should I avoid it? Do not take the mean of numbers that do not represent the same quantity, or which are derived using different methods. Avoid taking the mean if you do not have more than two data points.
Example. For our dataset, N = 5, so 〈x〉 = (1/5)(11.24 + 13.39 + 13.40 + 11.39 + 11.76). You can now calculate 〈x〉 = 12.24.
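The same calculation in Python:

```python
# Mean of the appendix's example dataset.
data = [11.24, 13.39, 13.40, 11.39, 11.76]
mean = sum(data) / len(data)
print(round(mean, 2))  # -> 12.24
```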
A.2 Sample standard deviation and variance
Definition. The sample variance is defined by
σ_x^2 = (1/(N − 1)) Σ_{i=1}^{N} (x_i − 〈x〉)^2,
where 〈x〉 is the mean. The sample standard deviation is the square root of the sample variance:
σ_x = √[ (1/(N − 1)) Σ_{i=1}^{N} (x_i − 〈x〉)^2 ].
What is it? The sample standard deviation, or just “standard deviation,” is a measure of the spread of a set of values. It corresponds to the width of the peak in a histogram of your data (see e.g. fig. 1.1). The standard deviation is the square root of the variance; the standard deviation has the same units as the data, while the variance does not.
When should I use it? The standard deviation can be used to characterize the precision of your experiment. It tells you how far apart individual measurements will probably be. You should think of the variance as an ingredient that you use to calculate the standard deviation—we won’t use it on its own.
When should I avoid it? Do not use the sample standard deviation as the error on your measurement. Use the standard error of the mean instead.
Example. We computed the mean in the previous example, finding 〈x〉 = 12.24. We still have N = 5, so N − 1 = 4, and the variance is

σ_x^2 = (1/4)[(11.24 − 12.24)^2 + (13.39 − 12.24)^2 + (13.40 − 12.24)^2 + (11.39 − 12.24)^2 + (11.76 − 12.24)^2] ≈ 1.155.

To find the standard deviation, we take the square root of the variance:

σ_x = √1.155 ≈ 1.07.
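This calculation can be checked in Python, whose standard-library statistics module uses the same N − 1 definitions as the formulas above:

```python
import statistics

data = [11.24, 13.39, 13.40, 11.39, 11.76]

# statistics.variance and statistics.stdev both divide by N - 1,
# matching the sample definitions used in this appendix.
var = statistics.variance(data)
std = statistics.stdev(data)
print(round(var, 2), round(std, 2))  # -> 1.16 1.07
```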