5 Error Sources in Ohm’s Law Experiment [How to avoid them]

The practical observations in an Ohm’s law experiment never exactly match the theoretical readings.

In fact, you can never make practical values match the theoretical calculations perfectly.

However, you can take some precautions so that the values match closely.

Today you’ll learn about the five error sources responsible for misleading readings. You’ll learn how to keep yourself and your equipment safe by avoiding common blunders, and how to obtain reasonably accurate readings. Let’s start by understanding the types of errors.

Scientific measurement and instrumentation errors are often classified into three types:

  • Personal errors: mistakes made by the user due to inexperience or carelessness.
  • Systematic errors: faults in the instrument itself, or errors caused by environmental conditions.
  • Random errors: accidental errors whose cause is unknown. (We’ll set these aside here.)

What is a personal error? [Don’ts of Ohm’s law]

Generally, a personal error is an outright mistake made by the experimenter. For example, you might misread or skip a digit while taking observations. In an Ohm’s law experiment, you can commit a personal error by:

Connecting the circuit incorrectly

The ammeter is used to measure current. It is always connected in series with the circuit; connecting it in parallel can damage the instrument, since an ammeter has very low internal resistance.

The voltmeter measures the potential difference between two points. It is connected in parallel across the component; connecting it incorrectly will yield wrong readings.

Taking readings incorrectly

Wrong measurements usually result from careless handling. Take the readings carefully to avoid these errors.

Systematic errors

Tolerance values of resistors

Carbon and metal film resistors are the most popular class of resistors employed in labs. Such resistors have a tolerance value that ranges from 0.05% to 20%. The last (rightmost) band of a carbon resistor indicates the tolerance of its resistance: a silver band indicates a tolerance of 10%, a gold band 5%, and a brown band 1%. A larger tolerance means your resistance, and thus the measured voltage and current, can deviate further from the theoretical value.

You have two choices to bypass this error.

Use a brown [1%] or grey [0.05%] band resistor, which has a low tolerance value and thus introduces less error.

Measure the actual resistance first and base your theoretical formula calculations on this measured value.
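
Both precautions can be sketched in a few lines. The values below (a nominal 1 kΩ resistor, a 5 V supply, and a measured resistance of 985 Ω) are illustrative assumptions, not readings from the experiment:

```python
def tolerance_range(nominal_ohms, tolerance):
    """Resistance range implied by the tolerance band (e.g. 0.05 for 5%)."""
    delta = nominal_ohms * tolerance
    return nominal_ohms - delta, nominal_ohms + delta

def theoretical_current(volts, measured_ohms):
    """Ohm's law, I = V / R, based on the measured resistance."""
    return volts / measured_ohms

# A nominal 1 kOhm resistor with a gold (5%) band may legitimately
# read anywhere between these limits:
low, high = tolerance_range(1000, 0.05)
print(low, high)                            # 950.0 1050.0

# Base the expected current on the resistance you actually measured:
current = theoretical_current(5.0, 985.0)   # 985 Ohm is an assumed reading
print(round(current * 1000, 3), "mA")       # 5.076 mA
```

Comparing this theoretical current against the ammeter reading gives you a fairer error estimate than using the nominal resistance.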

Quality of Multimeter

Your multimeter is the tool that actually measures the electrical quantities. Low-quality multimeters not only yield wrong observations; they can also be dangerous. Use a good-quality, properly rated meter.

Variable DC Power Supply

A variable power supply displays its output voltage on its front panel. Over time, the accuracy of its internal components degrades, and the supply may display wrong values. Such cases are common in general labs where supplies have been used thousands of times.

Use your multimeter to confirm the actual voltage coming out of the power supply.

Let’s summarize our results:

  • Personal errors: connecting the ammeter or voltmeter incorrectly, or taking readings carelessly.
  • Systematic errors: resistor tolerance, the quality of your multimeter, and drift in the power supply’s displayed voltage.


Series And Parallel Circuits Lab Sources Of Error

Whether you’re a hobbyist working on a home science project or a professional investigating the intricate workings of electricity, studying series and parallel circuits is essential. But before you can observe and measure the behavior of such networks, it’s important to understand the potential sources of error that can affect your results.

From inaccurate readings to incorrect circuit design, the potential sources of error in series and parallel circuit experiments are vast. Some errors can be attributed to faulty equipment, while others stem from the user’s inexperience or lack of technical knowledge. For instance, if you don’t properly connect a resistor or capacitor in a parallel circuit, you could end up with inaccurate readings. Other sources of error include misreading or misunderstanding circuit diagrams, or making incorrect assumptions about the behavior of components within the circuit. Additionally, if you run an experiment for a long period of time, voltage fluctuations may cause a failure, or you may burn out a component by applying too high a voltage.

To reduce or eliminate errors in your experiments, carefully follow the directions provided with your equipment and thoroughly double-check your setup. Make sure your tools are in proper working order, and consult a qualified technician if you’re uncertain how to proceed.

Although sources of error in series and parallel circuits can produce unreliable results, understanding them and taking precautions to reduce them can help ensure your experimental objectives are achieved. By following the guidelines outlined in this article, you can ensure the accuracy of your measurements and obtain reliable results.



Sources of Error in Science Experiments

All science experiments contain error, so it's important to know the types of error and how to calculate it. (Image: NASA/GSFC/Chris Gunn)

Science labs usually ask you to compare your results against theoretical or known values. This helps you evaluate your results and compare them against other people’s values. The difference between your results and the expected or theoretical results is called error. The amount of error that is acceptable depends on the experiment, but a margin of error of 10% is generally considered acceptable. If there is a large margin of error, you’ll be asked to go over your procedure and identify any mistakes you may have made or places where error might have been introduced. So, you need to know the different types and sources of error and how to calculate them.

How to Calculate Absolute Error

One method of measuring error is by calculating absolute error, which is also called absolute uncertainty. This measure of accuracy is reported using the units of measurement. Absolute error is simply the difference between the measured value and either the true value or the average value of the data.

absolute error = measured value – true value

For example, if you measure gravity to be 9.6 m/s² and the true value is 9.8 m/s², then the absolute error of the measurement is 0.2 m/s². You could report the error with a sign, so the absolute error in this example could be -0.2 m/s².

If you measure the length of a sample three times and get 1.1 cm, 1.5 cm, and 1.3 cm, then the absolute error is +/- 0.2 cm or you would say the length of the sample is 1.3 cm (the average) +/- 0.2 cm.

Some people consider absolute error to be a measure of how accurate your measuring instrument is. If you are using a ruler that reports length to the nearest millimeter, you might say the absolute error of any measurement taken with that ruler is to the nearest 1 mm or (if you feel confident you can see between one mark and the next) to the nearest 0.5 mm.
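
The two forms of absolute error above translate directly into code. This is just a sketch of the article’s definitions (difference from a true value, and the largest deviation from the average), using the gravity and length numbers from the examples:

```python
def absolute_error(measured, true_value):
    """Absolute error relative to a known true value."""
    return abs(measured - true_value)

def mean_and_spread(measurements):
    """Average of repeated measurements and the largest deviation from it."""
    mean = sum(measurements) / len(measurements)
    spread = max(abs(m - mean) for m in measurements)
    return mean, spread

# Gravity example: measured 9.6 m/s^2 vs. the true 9.8 m/s^2
print(round(absolute_error(9.6, 9.8), 2))        # 0.2

# Length example: three readings of the same sample, in cm
mean, spread = mean_and_spread([1.1, 1.5, 1.3])
print(round(mean, 2), "+/-", round(spread, 2))   # 1.3 +/- 0.2
```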

How to Calculate Relative Error

Relative error is based on the absolute error value. It compares how large the error is to the magnitude of the measurement. So, an error of 0.1 kg might be insignificant when weighing a person, but pretty terrible when weighing an apple. Relative error is expressed as a fraction, decimal value, or percent.

Relative Error = Absolute Error / True Value

For example, if your speedometer says you are going 55 mph when you’re really going 58 mph, the absolute error is 3 mph, and the relative error is 3 mph / 58 mph, or about 0.05, which you can multiply by 100% to give 5%. Relative error may be reported with a sign. In this case, the speedometer is off by -5% because the recorded value is lower than the true value.
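
A sketch of the same calculation, kept signed to match the speedometer convention above (the reading is low, so the error comes out negative):

```python
def relative_error(measured, true_value):
    """Signed relative error: (measured - true) / true."""
    return (measured - true_value) / true_value

# Speedometer reads 55 mph while the true speed is 58 mph
rel = relative_error(55, 58)
print(round(rel * 100, 1), "%")   # -5.2 %
```

The article rounds this to 5%; the unrounded value is closer to 5.2%.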

Because the absolute error definition is ambiguous, most lab reports ask for percent error or percent difference.

How to Calculate Percent Error

The most common error calculation is percent error, which is used when comparing your results against a known, theoretical, or accepted value. As you can probably guess from the name, percent error is expressed as a percentage. It is the absolute (no negative sign) difference between your value and the accepted value, divided by the accepted value, and multiplied by 100% to give the percent:

% error = |accepted – experimental| / accepted × 100%
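
The formula is a one-liner in code; here it is applied to the gravity numbers used earlier in the article:

```python
def percent_error(experimental, accepted):
    """|accepted - experimental| / accepted * 100%."""
    return abs(accepted - experimental) / abs(accepted) * 100

# Gravity example: experimental 9.6 m/s^2 against the accepted 9.8 m/s^2
print(round(percent_error(9.6, 9.8), 1))   # 2.0
```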

How to Calculate Percent Difference

Another common error calculation is called percent difference. It is used when you are comparing one experimental result to another. In this case, no result is necessarily better than another, so the percent difference is the absolute value (no negative sign) of the difference between the values, divided by the average of the two numbers, multiplied by 100% to give a percentage:

% difference = |experimental value – other value| / average × 100%
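
And the companion calculation for comparing two experimental results. The 10.2 and 9.8 values below are assumed purely for illustration:

```python
def percent_difference(value_a, value_b):
    """|a - b| / average(a, b) * 100%."""
    average = (value_a + value_b) / 2
    return abs(value_a - value_b) / average * 100

# Two runs of the same experiment (hypothetical results)
print(round(percent_difference(10.2, 9.8), 1))   # 4.0
```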

Sources and Types of Error

Every experimental measurement, no matter how carefully you take it, contains some amount of uncertainty or error. You are measuring against a standard, using an instrument that can never perfectly duplicate the standard, plus you’re human, so you might introduce errors based on your technique. The three main categories of errors are systematic errors, random errors, and personal errors. Here’s what these types of errors are, with common examples.

Systematic Errors

Systematic error affects all the measurements you take. All of these errors will be in the same direction (greater than or less than the true value) and you can’t compensate for them by taking additional data.

Examples of Systematic Errors

  • If you forget to calibrate a balance, or you’re off a bit in the calibration, all mass measurements will be high or low by the same amount. Some instruments require periodic calibration throughout the course of an experiment, so it’s good to make a note in your lab notebook to see whether the calibration appears to have affected the data.
  • Another example is measuring volume by reading a meniscus (parallax). You likely read a meniscus exactly the same way each time, but it’s never perfectly correct. Another person taking the reading may take the same reading, but view the meniscus from a different angle, thus getting a different result. Parallax can occur in other types of optical measurements, such as those taken with a microscope or telescope.
  • Instrument drift is a common source of error when using electronic instruments. As the instruments warm up, the measurements may change. Other common systematic errors include hysteresis or lag time, either relating to instrument response to a change in conditions or to fluctuations in an instrument that hasn’t reached equilibrium. Note that some of these systematic errors are progressive: data becomes better (or worse) over time, making it hard to compare data points taken at the beginning of an experiment with those taken at the end. This is why it’s a good idea to record data sequentially, so you can spot gradual trends if they occur. It’s also why it’s good to start data collection with different specimens each time (if applicable), rather than always following the same sequence.
  • Not accounting for a variable that turns out to be important is usually a systematic error, although it could be a random error or a confounding variable. If you find an influencing factor, it’s worth noting in a report and may lead to further experimentation after isolating and controlling this variable.

Random Errors

Random errors are due to fluctuations in the experimental or measurement conditions. Usually these errors are small. Taking more data tends to reduce the effect of random errors.

Examples of Random Errors

  • If your experiment requires stable conditions, but a large group of people stomp through the room during one data set, random error will be introduced. Drafts, temperature changes, light/dark differences, and electrical or magnetic noise are all examples of environmental factors that can introduce random errors.
  • Physical errors may also occur, since a sample is never completely homogeneous. For this reason, it’s best to test using different locations of a sample or take multiple measurements to reduce the amount of error.
  • Instrument resolution is also considered a type of random error because the measurement is equally likely to be higher or lower than the true value. An example of a resolution error is taking volume measurements with a beaker as opposed to a graduated cylinder. The beaker will have a greater amount of error than the cylinder.
  • Incomplete definition can be a systematic or random error, depending on the circumstances. What incomplete definition means is that it can be hard for two people to define the point at which the measurement is complete. For example, if you’re measuring length with an elastic string, you’ll need to decide with your peers when the string is tight enough without stretching it. During a titration, if you’re looking for a color change, it can be hard to tell when it actually occurs.
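
The claim above that taking more data reduces the effect of random errors can be illustrated with a small simulation. The noise model and numbers here are assumptions for demonstration, not from the text; the point is only that the average of many noisy readings typically lands closer to the true value:

```python
import random

TRUE_VALUE = 100.0

def averaged_measurement(n, noise_sd=1.0):
    """Average n readings, each perturbed by random (Gaussian) noise."""
    readings = [TRUE_VALUE + random.gauss(0, noise_sd) for _ in range(n)]
    return sum(readings) / len(readings)

random.seed(1)
for n in (3, 30, 300):
    error = abs(averaged_measurement(n) - TRUE_VALUE)
    print(n, "readings -> error", round(error, 3))
```

Running it shows the error shrinking (on average) as the number of readings grows, roughly in proportion to one over the square root of n.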

Personal Errors

When writing a lab report, you shouldn’t cite “human error” as a source of error. Rather, you should attempt to identify a specific mistake or problem. One common personal error is going into an experiment with a bias about whether a hypothesis will be supported or rejected. Another common personal error is lack of experience with a piece of equipment, where your measurements may become more accurate and reliable once you know what you’re doing. Another type of personal error is a simple mistake, where you might have used an incorrect quantity of a chemical, timed an experiment inconsistently, or skipped a step in a protocol.


Bias and Sources of Error

Tomato plants grow in a greenhouse (Goldlocki, Wikimedia Commons)


Learn how scientists identify and minimize bias and sources of error to produce accurate results.

In a Fair Test, it is important to work very hard not only to control the variables, but also to minimize sources of the experimenter’s bias and error. Bias and errors can result in inaccurate results from an experimental inquiry.

How can bias occur in experimental inquiries?

In an inquiry, bias occurs when a person influences the results, which in most cases is not intentional. People generally find what they expect to find, because they look for specific things and inadvertently overlook others, which leads to biased results. This type of bias happens to us all, and it is very difficult to control, even for scientists! The background knowledge and prior learning we bring to a situation affect how we interpret the information we receive. Students in particular may want their hypotheses to be proven true by their observations, and so may unintentionally introduce biases into their experimental inquiries.

Randomization

One type of bias involves giving preference to a certain group or part of the data. For example, students may hypothesize that one treatment (e.g., giving plants more water) will make plants grow taller than another treatment (e.g., giving plants less water). When students then choose seeds to grow for this experiment, they may unconsciously choose the largest seeds for the treatment that will receive more water and the smallest seeds for the treatment that will receive less water. In this case, the students may be unconsciously trying to give the seeds a head start by choosing the larger seeds.

How to avoid this type of bias:

Have the students  randomly  choose seeds used in the treatments. Through the process of  randomization , objects or individuals are randomly assigned to experimental groups. In this way, the experimenter does not show preference (or bias) to any one group.
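
A minimal sketch of randomization, using hypothetical seed labels: shuffling the seeds before splitting them into groups removes any chance of favouring one treatment.

```python
import random

seeds = [f"seed_{i:02d}" for i in range(1, 21)]   # 20 hypothetical seeds
random.shuffle(seeds)                              # random order, no preference

more_water = seeds[:10]    # first half assigned to the "more water" group
less_water = seeds[10:]    # second half assigned to the "less water" group

print(more_water)
print(less_water)
```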

Blind Experiments

Sometimes other subconscious biases can occur during experimental inquiries. Some of this bias can be minimized using what is known as a  Blind Experiment  or Blind test. Blind experiments are often used when an object or group generates some sort of feeling in the experimenter. For example, on the  Fair Test  page there was reference to two types of seeds called “Big Beauties” and “Small Wonders.” Given these names, students may not look after the plants in the same way. They may try to look after the “Big Beauties” more carefully or give them some sort of unfair advantage.

How to avoid this type of bias:  

Have objects or individuals used in experiments labelled by letter (e.g., A, B) or number (e.g., 1, 2) rather than by name, in the same manner as the seeds used in the Tomatosphere™ Seed Investigation. In this way, the experimenters (students) will not know which group each member belongs to and will be more likely to treat the groups in the same way.

How can errors occur in experimental inquiries?

No one is perfect! That is why errors can happen even in the most carefully designed experimental inquiries. Knowing the types of errors that are common in experimental inquiries can help students to minimize them.

These types of errors are also known as “blunders” or “miscalculations” and they happen to everyone. In experimental inquiries, these types of errors can occur due to:

  • incorrect reading of instructions (e.g., 50 mL vs. 500 mL, sugar vs. salt, etc.);
  • incorrect measuring (e.g., inches instead of cm, °F instead of °C, voltage instead of current, etc.);
  • incorrect reading or use of instruments (e.g., meniscus, calipers, thermometer, etc.);
  • incorrect calculations (e.g., dividing instead of multiplying, using the wrong formula, etc.); and
  • incorrect recording (e.g., transposing numbers, putting values in the wrong place on a chart, etc.).

How to avoid these types of errors:

Encourage students to be careful when they read instructions, take measurements from instruments such as rulers or thermometers, perform calculations, and record observations. When working in groups, encourage students to check and confirm each other’s measurements and calculations. They should also practice taking readings or using equipment prior to doing so in an experimental situation.

Other types of mistakes are possible as well. They include such things as:

  • accidents (e.g., knocking over a plant pot and having to scoop the contents back in, giving a plant too much water, etc.); and
  • not following directions (e.g., watering some plants every day and other plants every other day, forgetting to water some of the plants, etc.).

Encourage students to undertake experimental inquiries carefully and to the best of their abilities. Having a clearly defined plan and well laid out work area for inquiries will help minimize accidents.

Students helping to water a plant

Shown is a colour photograph of a group of five young children, about five years of age, helping to water a plant. 


Experimental Errors

In addition to mistakes, there are other types of non-human errors that are related to the accuracy and precision of measurements. These are known as  Experimental Errors . Experimental error is the difference between a measurement and its accepted value. There are two main types of experimental error – Systematic Error and Random Error.

Systematic Errors

Systematic Errors affect the accuracy of a measurement (for more, see the Precision and Accuracy backgrounder). Errors of this type tend to result in measurements that are consistently too high or consistently too low: for example, a digital scale that reads 102 grams for a 100 g standard weight, a clock that is running slow, or a thermometer that has a crack in it.

Systematic errors can be difficult to detect, but the more students understand how to use and calibrate given tools and take observations effectively, the more these types of errors can be reduced. Also, encourage students to use a “gut check” – e.g., “this scale reads 102 g and I know that the weight should be 100 g, so the scale needs to be calibrated.”
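
If the scale’s error really is a constant offset, as in the 102 g example, the calibration correction is just a subtraction. This sketch assumes a simple constant-offset error, which is not always the case for real instruments:

```python
STANDARD_MASS = 100.0          # g, known calibration weight
scale_reading = 102.0          # g, what the scale shows for that weight

offset = scale_reading - STANDARD_MASS   # systematic offset: +2.0 g

def corrected(raw_reading):
    """Remove the constant calibration offset from a raw reading."""
    return raw_reading - offset

print(corrected(102.0))   # 100.0  (the standard weight, as it should be)
print(corrected(57.5))    # 55.5
```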

Random Errors

Random Errors affect the precision of a measurement (for more, see the Precision and Accuracy section). Unlike systematic errors, which tend to give results that are always either too high or too low, random errors can give results that are sometimes too high and sometimes too low.

For example, a student is using an ammeter to measure current in a given circuit. The results the student gets are 0.16 A, 0.15 A, 0.17 A, and 0.14 A. Clearly, there is a wide range of results, which should not be the case if no variables were changed when the readings were taken (i.e., mathematically, the readings should all have been about 0.15 A). The causes of random errors can be difficult to predict, and it can take time to figure out their source. Random errors tend to be common when studying living things, as individuals can be affected by changes to the environment, such as variation in temperature, humidity, and light, and there is natural variation as a result of their genetic makeup.

Random errors, also known as random variation, are an intrinsic part of any measurement. For example, the ammeter readings above have a percent average deviation of 6.7% (which, for ammeters, isn’t all that bad), so the final reading could be reported as 0.15 A ± 0.01 A (6.7% of 0.15 A).
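
The ammeter figures can be checked in a few lines. Note that the exact mean of the four readings is 0.155 A, which gives a percent average deviation near 6.5%; the 6.7% quoted above comes from rounding the mean to 0.15 A first:

```python
readings = [0.16, 0.15, 0.17, 0.14]   # ammeter readings, in amperes

mean = sum(readings) / len(readings)
avg_dev = sum(abs(r - mean) for r in readings) / len(readings)

print(round(mean, 3), "A")                  # 0.155 A
print(round(avg_dev, 3), "A")               # 0.01 A
print(round(100 * avg_dev / mean, 1), "%")  # percent average deviation
```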

Random errors can be minimized by checking to make sure equipment, such as electrical measuring devices, are in good working order. It can also be minimized by using materials from the same source (e.g., seeds from the same package), collecting data at the same time of day, etc.

How are Bias and Sources of Error Minimized in Tomatosphere™?

The Tomatosphere™ Seed Investigation has been designed to minimize some of the types of bias and errors described above.

Did you know? The Tomatosphere™ Seed Investigation is a Single-blind experiment. This means that the researchers at Tomatosphere™ know which seeds are which, but that the classes participating in the inquiry do not (until they have submitted results).

The Seed Investigation is also a Blind Test. In any given year, educators and students will not know which seeds have been to space (or treated with space-like conditions) and which have not, since they receive two packages of seeds labelled with different letters. Once educators submit their results to Tomatosphere™, they find out which seeds are which. This avoids any bias students may have towards the ‘space’ or ‘non-space’ plants.

Tomatosphere seed packages

Shown is a colour photograph of two packets of Tomatosphere seeds.


Also, within a class and the Seed Investigation as a whole, many samples are collected. This large  sample size  helps to minimize random errors generated by the small environmental differences to which the seeds are subjected.

For more information, see the  Sample Size and Reproducibility  backgrounder.

Guided Practice

Have students read the following examples and identify which type of error is occurring as well as what could be done to prevent the error.

  • A balance scale reads 0.25 g even when there is no mass on it, but students use the scale the way it is.
  • The student who is assigned to watering the Tomatosphere™ plants forgets to do so for one week.
  • A student uses a measuring spoon to carry water to the plants (but sometimes the water spills a bit…)
  • Students measure the dry mass of three 8 week old Tomatosphere™ plants and get measurements of 131 g, 136 g, 127 g.

In their own words, have students explain the difference between “bias” and “error” as well as between “systematic errors” and “random errors”.

Have students think about and record the errors that they know, or believe, to have occurred during an experimental inquiry in a log such as a science journal.

When students plan inquiries, have them list potential errors and sources of bias that might occur as well as ways they could prevent them as part of their method.

Answers:

  • Type of error: Systematic error. How to avoid: Calibrate the scale. The scale should read 0 when there is no mass on it, and it should measure a standard mass correctly.
  • Type of error: Mistake (not following directions). How to avoid: Clearly outline tasks, such as watering, to students, and have students check off a chart when they have completed tasks. How would you deal with forgetting to water for one week? If only one plant had not been watered, students could use data from the other plants; if all the plants were not watered, the experiment should be done over again.
  • Type of error: Mistake (accident). How to avoid: Move the water source closer to the plants, or use a measuring device that is less likely to spill when carried. The accessibility and location of materials can be optimized to reduce accidents.
  • Type of error: Random error. How to avoid: Measure the dry mass of more plants if possible. Variations in dry mass are affected by how well the water is removed from the plants before taking measurements.

Research Bias This article by Explorable explains design bias, selection/sampling bias, procedural bias, measurement bias, interviewer bias, response bias, and reporting bias.

Avoiding Bias This page by 3Rs-Reduction.co.uk explores multiple ways to avoid statistical bias.

Experimental Design This page from Yale University explains the different elements of experimentation.

Experimental Errors and Uncertainty This PDF from the University of Rochester explains how to use refined experimental methods to avoid errors in measurement.

Random Errors This entry in AQA Science Glossary defines random errors using the example of Aaron’s Table.

Systematic Errors This entry in AQA Science Glossary defines systematic errors using the example of a wrongly calibrated instrument.

Minimizing Systematic Error This page from Cornell University explores different ways to avoid error in experimentation.

Identifying Potential Reasons for Inconsistent Experiment Results This video (5:03 min.) by Study.com explains how planning for error and inconsistent conditions can minimize inconsistent results.


Monkey Physics Blog

For all things physics: how to write sources of error.

Sources of Error

Sources of Error in Physics

This article will help you:

  • learn how to identify sources of error for a physics experiment
  • recognize common mistakes that students make in physics lab reports
  • see examples of how to describe sources of error

What Are Sources of Error?

In everyday English, the words “error” and “mistake” may seem similar.

However, in physics, these two words have very different meanings:

  • An error is something that affects results and could not plausibly have been avoided (given the conditions of the experiment) or accounted for.
  • A mistake is something that affects results and reasonably should have been avoided.

We will see examples of each in the remainder of this article.

Common Incorrect Answers

Part of learning how to write a good sources of error section includes learning what not to do.

Following are some common incorrect answers that students tend to include in their sources of error section.

  • Human error. The problem with this phrase is that it’s way too vague. It may be okay if the nature of the error is human in origin (provided that it’s an inherent error and not a mistake), but it’s not okay to express the error in vague terms. Advice: don’t write the phrase “human error” anywhere on your lab report.
  • Round-off error. The problem with this is that students almost never have enough precision in their answers for round-off error to be significant. Even a cheap calculator provides at least 8 figures, whereas most first-year physics experiments yield results where only 2-3 of those digits are significant figures. You can definitely find more significant sources of error to describe instead.
  • Incorrect technique. If you used equipment incorrectly or followed the procedures incorrectly, these are mistakes, not sources of error. A source of error is something that you could not plausibly expect to avoid.
  • Incorrect calculations. If there are mistakes in your calculations, these are not sources of error. Students are expected to avoid calculation mistakes; some make them anyway, but mistakes are not sources of error.
  • Accidental problem. If an accident occurred during the experiment that could plausibly be avoided by repeating the experiment, it is not a source of error. For example, if you perform the Atwood’s machine lab and the two masses collide, it’s a mistake to keep the data. Simply redo the experiment, taking care to release the masses so that they don’t collide.
  • Lab partner. Don’t blame your lab partner (at least, not in your report); it isn’t a valid source of error.

Sources of Error: What to Look for

When you identify and describe a source of error, keep the following points in mind:

  • It should sound like an inherent problem that you couldn’t plausibly avoid .
  • It should be significant compared to other sources of error.
  • It needs to actually affect the results . For example, when a car rolls down an inclined plane, its mass cancels out in the equation for acceleration (a = g sin θ), so it would be incorrect to cite an improperly calibrated scale as a source of error.
  • You should describe the source of error as precisely as possible. Try not to sound vague .
  • Unless otherwise stated by the lab manual or your instructor, you should describe the source of error in detail . In addition to identifying the source of the error, you can describe how it impacts the results, or you might suggest how the experiment might be improved (but only suggest improvement sparingly—not every time you describe a source of error), for example.
  • The error should be consistent with your results. For example, if you measure gravitational acceleration in a free fall experiment to be larger than 9.81 m/s², it would be inconsistent to cite air resistance as a source of error (because air resistance would cause the measured acceleration to be less than 9.81 m/s², not larger).
  • Try not to sound hypothetical . It’s better if it sounds like your source of error is based on observations that you made during lab. For example, saying that a scale might not be calibrated properly sounds hypothetical. If instead you say that you measured the mass to be 21.4 g on one scale, but 20.8 g on another scale, you’ve established that there is a problem with the scales (but note that this is a 3% error: if your percent error is much larger than 3%, there is a more significant source of error involved). On the other hand, if you get 20.43 g on one scale, but 20.45 g on another scale, this error is probably insignificant—not worth describing (since the percent error is below 0.1%).
  • Sound scientific and objective . Avoid sounding dramatic, like “the experiment was a disaster” or “there were several sources of error.” (There might indeed be several sources of error, but usually only 1-2 are dominant and the others are relatively minor. But when you say “several sources of error,” it makes the experiment seem far worse than it probably was.)
  • Demonstrate good analysis skills, applying logic and reasoning . This is what instructors and TA’s hope to read when they grade sources of error: an in-depth, well-reasoned analysis.
  • Be sure to use the terminology properly. You can’t expect to earn as much credit if you get words like velocity and acceleration, or force and energy, confused in your writing.
  • Follow instructions . You wouldn’t believe how many students lose points, for example, when a problem says to “describe 2 sources of error,” but a student lists 5 or only focuses on 1. Surely, if you’re in a physics class, you’re capable of counting. 🙂
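The percent-error comparison in the two-scale example above can be made concrete with a short Python sketch (the 21.4 g and 20.8 g readings come from the text; everything else is standard arithmetic):

```python
# Percent difference between two scale readings, as in the example above:
# 21.4 g on one scale vs. 20.8 g on another.
m1 = 21.4  # grams, scale A
m2 = 20.8  # grams, scale B

mean = (m1 + m2) / 2
percent_diff = abs(m1 - m2) / mean * 100
print(f"{percent_diff:.1f}%")  # about 2.8%, i.e. roughly the 3% quoted above
```

If your overall percent error is much larger than this number, the scale discrepancy is not your dominant source of error.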

Examples of Sources of Error

Here are some concrete examples of how to identify and describe sources of error.

(1) A car rolls down an incline. You measure velocity and time to determine gravitational acceleration. Your result is 9.62 m/s².

A possible source of error is air resistance. This is consistent with your results: Since the accepted value of gravitational acceleration is 9.8 m/s² near earth’s surface, and since air resistance results in less acceleration, your result (9.62 m/s²) is consistent with this source of error. If you place the car on a horizontal surface and give it a gentle push, you will see it slow down and come to rest, which shows that there is indeed a significant resistive force acting on the car.

(2) You set up Atwood’s machine with 14 g and 6 g masses. Your experimental acceleration is 3.74 m/s², while your theoretical acceleration is 3.92 m/s².

A possible source of error is the rotational inertia of the pulley. The pulley has a natural tendency to rotate with constant angular velocity, which must be overcome in order to accelerate the pulley. It turns out (most textbooks do this calculation in a chapter on rotation) that if the mass of the pulley is significant compared to the sum of the two masses, its rotational inertia will have a significant impact on the acceleration. In this example, the sum of the masses is 20 g (since 14 + 6 = 20). If the pulley has a mass of a few grams, this could be significant.

One way to reduce the effect of the pulley is to use larger masses. If you use 35 g and 15 g instead, the sum of the masses is 50 g and the pulley’s rotational inertia has a smaller effect. (If you do this lab, add up the masses used. Only describe this as a possible source of error if the pulley’s estimated mass seems significant compared to the sum of the masses.)
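The pulley discussion above can be checked numerically. Here is a Python sketch assuming a uniform-disk pulley; the 2 g pulley mass is a hypothetical value for illustration, not a measurement from the lab:

```python
# Atwood's machine: effect of the pulley's rotational inertia.
# For a uniform-disk pulley of mass M, I = (1/2) M R^2, so the string
# effectively drags along an extra mass of M/2 (the R^2 cancels out).
g = 9.8  # m/s^2

def atwood_accel(m1, m2, pulley_mass=0.0):
    """Acceleration in m/s^2; masses may stay in grams since only ratios matter."""
    return (m1 - m2) * g / (m1 + m2 + pulley_mass / 2)

print(atwood_accel(14, 6))        # ideal theory: 3.92
print(atwood_accel(14, 6, 2.0))   # with the assumed 2 g pulley: ~3.73
print(atwood_accel(35, 15, 2.0))  # larger masses shrink the effect: ~3.84
```

Note that a 2 g disk pulley already brings the prediction close to the measured 3.74 m/s², which is exactly the kind of consistency check a good sources-of-error section should make.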

(3) You fire a steel ball using a projectile launcher. You launch the ball five times horizontally. Then you launch the ball at a 30° angle, missing your predicted target by 6.3 cm.

A possible source of error is inconsistency in the spring mechanism. If that’s all you write, however, it will sound hypothetical. If instead you noticed variation in your five horizontal launches, which had a standard deviation (something you can calculate) of 4.8 cm, you can establish that the variation in the spring’s launches is significant compared to the distance (of 6.3 cm) by which you missed the target.
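The standard-deviation calculation mentioned above is a one-liner. A Python sketch; the five distances below are hypothetical, since the article reports only their standard deviation:

```python
import statistics

# Hypothetical horizontal launch distances in cm (the article reports only
# their standard deviation, about 4.8 cm, not the raw values).
distances = [152.0, 158.5, 147.2, 155.9, 149.6]

spread = statistics.stdev(distances)  # sample standard deviation (N - 1)
print(round(spread, 1))  # comparable in size to the 6.3 cm miss
```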

You Still Have to Think

I can’t list every possible source of error for every possible experiment—this article would go on forever.

You need to apply reasoning skills.

I have shown you what not to do.

I have shown you what to look for in a source of error.

And I have given you concrete examples for specific cases.

Be confident. You can do it. 🙂

Copyright © 2017.

Chris McMullen, Ph.D.

  • Essential Physics Study Guide Workbook (3 volume series, available at the calculus or trig level)
  • 100 Physics Examples (3 volume series, available at the calculus or trig level)
  • The Improve Your Math Fluency series of Math Workbooks

Learn The Types

Learn About Different Types of Things and Unleash Your Curiosity

Understanding Experimental Errors: Types, Causes, and Solutions

Types of Experimental Errors

In scientific experiments, errors can occur that affect the accuracy and reliability of the results. These errors are often classified into three main categories: systematic errors, random errors, and human errors. Here are some common types of experimental errors:

1. Systematic Errors

Systematic errors are consistent and predictable errors that occur throughout an experiment. They can arise from flaws in equipment, calibration issues, or flawed experimental design. Some examples of systematic errors include:

– Instrumental Errors: These errors occur due to inaccuracies or limitations of the measuring instruments used in the experiment. For example, a thermometer may consistently read temperatures slightly higher or lower than the actual value.

– Environmental Errors: Changes in environmental conditions, such as temperature or humidity, can introduce systematic errors. For instance, if an experiment requires precise temperature control, fluctuations in the room temperature can impact the results.

– Procedural Errors: Errors in following the experimental procedure can lead to systematic errors. This can include improper mixing of reagents, incorrect timing, or using the wrong formula or equation.

2. Random Errors

Random errors are unpredictable variations that occur during an experiment. They can arise from factors such as inherent limitations of measurement tools, natural fluctuations in data, or human variability. Random errors can occur independently in each measurement and can cause data points to scatter around the true value. Some examples of random errors include:

– Instrument Noise: Instruments may introduce random noise into the measurements, resulting in small variations in the recorded data.

– Biological Variability: In experiments involving living organisms, natural biological variability can contribute to random errors. For example, in studies involving human subjects, individual differences in response to a treatment can introduce variability.

– Reading Errors: When taking measurements, human observers can introduce random errors due to imprecise readings or misinterpretation of data.

3. Human Errors

Human errors are mistakes or inaccuracies that occur due to human factors, such as lack of attention, improper technique, or inadequate training. These errors can significantly impact the experimental results. Some examples of human errors include:

– Data Entry Errors: Mistakes made when recording data or entering data into a computer can introduce errors. These errors can occur due to typographical mistakes, transposition errors, or misinterpretation of results.

– Calculation Errors: Errors in mathematical calculations can occur during data analysis or when performing calculations required for the experiment. These errors can result from mathematical mistakes, incorrect formulas, or rounding errors.

– Experimental Bias: Personal biases or preconceived notions held by the experimenter can introduce bias into the experiment, leading to inaccurate results.

It is crucial for scientists to be aware of these types of errors and take measures to minimize their impact on experimental outcomes. This includes careful experimental design, proper calibration of instruments, multiple repetitions of measurements, and thorough documentation of procedures and observations.
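The distinction between systematic and random errors can be illustrated with a small simulation. A Python sketch (the bias and noise values here are arbitrary choices for illustration):

```python
import random

random.seed(1)  # reproducible illustration
true_value = 100.0
bias = 1.5   # systematic error: the same offset in every reading
noise = 0.8  # random error: standard deviation of the scatter

readings = [true_value + bias + random.gauss(0, noise) for _ in range(1000)]

mean = sum(readings) / len(readings)
spread = (sum((r - mean) ** 2 for r in readings) / (len(readings) - 1)) ** 0.5
print(round(mean - true_value, 2))  # close to the bias: averaging can't remove it
print(round(spread, 2))             # close to the noise: repetition reveals it
```

Note the asymmetry: repeating the measurement many times beats down the random scatter, but the systematic offset survives averaging and can only be removed by calibration.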

Chapter 3

Experimental Errors and Error Analysis

This chapter is largely a tutorial on handling experimental errors of measurement. Much of the material has been extensively tested with science undergraduates at a variety of levels at the University of Toronto.

Whole books can and have been written on this topic but here we distill the topic down to the essentials. Nonetheless, our experience is that for beginners an iterative approach to this material works best. This means that the users first scan the material in this chapter; then try to use the material on their own experiment; then go over the material again; then ...

Mathematica provides functions to ease the calculations required by propagation of errors; those functions are introduced in Section 3.3 and summarized in Section 3.5.

3.1 Introduction

3.1.1 The Purpose of Error Analysis

For students who only attend lectures and read textbooks in the sciences, it is easy to get the incorrect impression that the physical sciences are concerned with manipulating precise and perfect numbers. Lectures and textbooks often contain phrases like: "The acceleration due to gravity is 9.8 m/s²."

For an experimental scientist this specification is incomplete. Does it mean that the acceleration is closer to 9.8 than to 9.9 or 9.7? Does it mean that the acceleration is closer to 9.80000 than to 9.80001 or 9.79999? Often the answer depends on the context. If a carpenter says a length is "just 8 inches" that probably means the length is closer to 8 0/16 in. than to 8 1/16 in. or 7 15/16 in. If a machinist says a length is "just 200 millimeters" that probably means it is closer to 200.00 mm than to 200.05 mm or 199.95 mm.

We all know that the acceleration due to gravity varies from place to place on the earth's surface. It also varies with the height above the surface, and gravity meters capable of measuring the variation from the floor to a tabletop are readily available. Further, any physical quantity such as g can only be determined by means of an experiment, and since a perfect experimental apparatus does not exist, it is impossible even in principle to ever know g perfectly. Thus, the specification of g given above is useful only as a possible exercise for a student. In order to give it some meaning it must be changed to something like:

Two questions arise about the measurement. First, is it "accurate," in other words, did the experiment work properly and were all the necessary factors taken into account? The answer to this depends on the skill of the experimenter in identifying and eliminating all systematic errors. These are discussed in Section 3.4.

The second question regards the "precision" of the experiment. In this case the precision of the result is given: the experimenter claims the precision of the result is within 0.03 m/s².

1. The person who did the measurement probably had some "gut feeling" for the precision and "hung" an error on the result primarily to communicate this feeling to other people. Common sense should always take precedence over mathematical manipulations.

2. In complicated experiments, error analysis can identify dominant errors and hence provide a guide as to where more effort is needed to improve an experiment.

3. There is virtually no case in the experimental physical sciences where the correct error analysis is to compare the result with a number in some book. A correct experiment is one that is performed correctly, not one that gives a result in agreement with other measurements.

4. The best precision possible for a given experiment is always limited by the apparatus. Polarization measurements in high-energy physics require tens of thousands of person-hours and cost hundreds of thousands of dollars to perform, and a good measurement is within a factor of two. Electrodynamics experiments are considerably cheaper, and often give results to 8 or more significant figures. In both cases, the experimenter must struggle with the equipment to get the most precise and accurate measurement possible.

3.1.2 Different Types of Errors

As mentioned above, there are two types of errors associated with an experimental result: the "precision" and the "accuracy". One well-known text explains the difference this way:

" " E.M. Pugh and G.H. Winslow, p. 6.

The object of a good experiment is to minimize both the errors of precision and the errors of accuracy.

Usually, a given experiment has one or the other type of error dominant, and the experimenter devotes the most effort toward reducing that one. For example, in measuring the height of a sample of geraniums to determine an average value, the random variations within the sample of plants are probably going to be much larger than any possible inaccuracy in the ruler being used. Similarly for many experiments in the biological and life sciences, the experimenter worries most about increasing the precision of his/her measurements. Of course, some experiments in the biological and life sciences are dominated by errors of accuracy.

On the other hand, in titrating a sample of HCl acid with NaOH base using a phenolphthalein indicator, the major error in the determination of the original concentration of the acid is likely to be one of the following: (1) the accuracy of the markings on the side of the burette; (2) the transition range of the phenolphthalein indicator; or (3) the skill of the experimenter in splitting the last drop of NaOH. Thus, the accuracy of the determination is likely to be much worse than the precision. This is often the case for experiments in chemistry, but certainly not all.

Question: Most experiments use theoretical formulas, and usually those formulas are approximations. Is the error of approximation one of precision or of accuracy?

3.1.3 References

There is extensive literature on the topics in this chapter. The following lists some well-known introductions.

D.C. Baird, Experimentation: An Introduction to Measurement Theory and Experiment Design (Prentice-Hall, 1962)

E.M. Pugh and G.H. Winslow, The Analysis of Physical Measurements (Addison-Wesley, 1966)

J.R. Taylor, An Introduction to Error Analysis (University Science Books, 1982)

In addition, there is a web document written by the author that is used to teach this topic to first year Physics undergraduates at the University of Toronto. The following hyperlink points to that document.

3.2 Determining the Precision

3.2.1 The Standard Deviation

In the nineteenth century, Gauss' assistants were doing astronomical measurements. However, they were never able to exactly repeat their results. Finally, Gauss got angry and stormed into the lab, claiming he would show these people how to do the measurements once and for all. The only problem was that Gauss wasn't able to repeat his measurements exactly either!

After he recovered his composure, Gauss made a histogram of the results of a particular measurement and discovered the famous Gaussian or bell-shaped curve.

Many people's first introduction to this shape is the grade distribution for a course. Here is a sample of such a distribution.

We use a standard package to generate a Probability Distribution Function (PDF) of such a "Gaussian" or "normal" distribution. The mean is chosen to be 78 and the standard deviation is chosen to be 10; both the mean and standard deviation are defined below.

We then normalize the distribution so the maximum value is close to the maximum number in the histogram and plot the result.

Finally, we look at the histogram and plot together.

We can see the functional form of the Gaussian distribution by giving the parameters symbolic values.

In this formula, x̄ is the mean and σ is the standard deviation. The definition of σ is as follows:

σ = √( (1/N) Σᵢ (xᵢ − x̄)² )

Here N is the total number of measurements and xᵢ is the result of measurement number i.

The standard deviation is a measure of the width of the peak, meaning that a larger value gives a wider peak.

If we look at the area under the curve from −σ to +σ, we find that this area is 68 percent of the total area. Thus, any result chosen at random has a 68% chance of being within one standard deviation of the mean. We can show this by evaluating the integral. For convenience, we choose the mean to be zero.

Now, we evaluate this numerically and multiply by 100 to find the percent.
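The original Mathematica evaluation did not survive extraction; the same check can be sketched in Python, using the fact that the integral of the Gaussian PDF from −kσ to +kσ equals erf(k/√2):

```python
import math

# Fraction of a normal distribution lying within k standard deviations of
# the mean: the integral of the Gaussian PDF from -k*sigma to +k*sigma
# equals erf(k / sqrt(2)).
for k in (1, 2, 3):
    frac = math.erf(k / math.sqrt(2))
    print(f"within {k} sigma: {100 * frac:.1f}%")
# prints 68.3%, 95.4%, 99.7%
```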

The only problem with the above is that the measurement must be repeated an infinite number of times before the standard deviation can be determined exactly. If N is less than infinity, one can only estimate the standard deviation. For a finite set of measurements, the best estimate is

s = √( (1/(N − 1)) Σᵢ (xᵢ − x̄)² )

The major difference between this estimate and the definition is the N − 1 in the denominator. This is reasonable since if N = 1 we know we can't determine the spread at all.

Here is an example. Suppose we are to determine the diameter of a small cylinder using a micrometer. We repeat the measurement 10 times along various points on the cylinder and get the following results, in centimeters.

The number of measurements is the length of the list.

The average or mean is now calculated.

Then the standard deviation is calculated to be 0.00185173.

We repeat the calculation in a functional style.
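As a sketch of the same calculation in Python, with hypothetical readings (only the first value, 1.6515 cm, survives in this text, so these numbers will not reproduce the 0.00185 figure; the calculation itself is the point):

```python
import statistics

# Hypothetical micrometer readings in cm; only the first value (1.6515)
# appears in the surrounding text.
diameters = [1.6515, 1.6516, 1.6500, 1.6512, 1.6514,
             1.6517, 1.6508, 1.6520, 1.6515, 1.6483]

n = len(diameters)
mean = statistics.mean(diameters)
s = statistics.stdev(diameters)  # estimate with N - 1 in the denominator
print(n, round(mean, 5), round(s, 5))
```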

Note that the package, which is standard with Mathematica, includes functions to calculate all of these quantities and a great deal more.

We close with two points:

1. The standard deviation has been associated with the error in each individual measurement. Section 3.3.2 discusses how to find the error in the estimate of the average.

2. This calculation of the standard deviation is only an estimate. In fact, we can find the expected error in the estimate itself.

As discussed in more detail in Section 3.3, this means that the true standard deviation probably lies within a corresponding range around the estimated value.

Viewed in this way, it is clear that the last few digits in the numbers above are not significant. A function that adjusts these significant figures based on the error is discussed further in Section 3.3.1.

3.2.2 The Reading Error

There is another type of error associated with a directly measured quantity, called the "reading error". Referring again to the example of Section 3.2.1, the measurements of the diameter were performed with a micrometer. The particular micrometer used had scale divisions every 0.001 cm. However, it was possible to estimate the reading of the micrometer between the divisions, and this was done in this example. But, there is a reading error associated with this estimation. For example, the first data point is 1.6515 cm. Could it have been 1.6516 cm instead? How about 1.6519 cm? There is no fixed rule to answer the question: the person doing the measurement must guess how well he or she can read the instrument. A reasonable guess of the reading error of this micrometer might be 0.0002 cm on a good day. If the experimenter were up late the night before, the reading error might be 0.0005 cm.

An important and sometimes difficult question is whether the reading error of an instrument is "distributed randomly". Random reading errors are caused by the finite precision of the experiment. If an experimenter consistently reads the micrometer 1 cm lower than the actual value, then the reading error is not random.

For a digital instrument, the reading error is ± one-half of the last digit. Note that this assumes that the instrument has been properly engineered to round a reading correctly on the display.

3.2.3 "THE" Error

So far, we have found two different errors associated with a directly measured quantity: the standard deviation and the reading error. So, which one is the actual real error of precision in the quantity? The answer is both! However, fortunately it almost always turns out that one will be larger than the other, so the smaller of the two can be ignored.

In the diameter example being used in this section, the estimate of the standard deviation was found to be 0.00185 cm, while the reading error was only 0.0002 cm. Thus, we can use the standard deviation estimate to characterize the error in each measurement. Another way of saying the same thing is that the observed spread of values in this example is not accounted for by the reading error. If the observed spread were more or less accounted for by the reading error, it would not be necessary to estimate the standard deviation, since the reading error would be the error in each measurement.

Of course, everything in this section is related to the precision of the experiment. Discussion of the accuracy of the experiment is in Section 3.4.

3.2.4 Rejection of Measurements

Often when repeating measurements one value appears to be spurious and we would like to throw it out. Also, when taking a series of measurements, sometimes one value appears "out of line". Here we discuss some guidelines on rejection of measurements; further information appears in Chapter 7.

It is important to emphasize that the whole topic of rejection of measurements is awkward. Some scientists feel that the rejection of data is never justified unless there is evidence that the data in question is incorrect. Other scientists attempt to deal with this topic by using quasi-objective rules such as Chauvenet's criterion. Still others, often incorrectly, throw out any data that appear to be incorrect. In this section, some principles and guidelines are presented; further information may be found in many references.

First, we note that it is incorrect to expect each and every measurement to overlap within errors. For example, if the error in a particular quantity is characterized by the standard deviation, we only expect 68% of the measurements from a normally distributed population to be within one standard deviation of the mean. Ninety-five percent of the measurements will be within two standard deviations, 99% within three standard deviations, etc., but we never expect 100% of the measurements to overlap within any finite-sized error for a truly Gaussian distribution.

Of course, for most experiments the assumption of a Gaussian distribution is only an approximation.

If the error in each measurement is taken to be the reading error, again we only expect most, not all, of the measurements to overlap within errors. In this case the meaning of "most", however, is vague and depends on the optimism/conservatism of the experimenter who assigned the error.

Thus, it is always dangerous to throw out a measurement. Maybe we are unlucky enough to make a valid measurement that lies ten standard deviations from the population mean. A valid measurement from the tails of the underlying distribution should not be thrown out. It is even more dangerous to throw out a suspect point indicative of an underlying physical process. Very little science would be known today if the experimenter always threw out measurements that didn't match preconceived expectations!

In general, there are two different types of experimental data taken in a laboratory and the question of rejecting measurements is handled in slightly different ways for each. The two types of data are the following:

1. A series of measurements taken with one or more variables changed for each data point. An example is the calibration of a thermocouple, in which the output voltage is measured when the thermocouple is at a number of different temperatures.

2. Repeated measurements of the same physical quantity, with all variables held as constant as experimentally possible. An example is the measurement of the height of a sample of geraniums grown under identical conditions from the same batch of seed stock.

For a series of measurements (case 1), when one of the data points is out of line the natural tendency is to throw it out. But, as already mentioned, this means you are assuming the result you are attempting to measure. As a rule of thumb, unless there is a physical explanation of why the suspect value is spurious and it is no more than three standard deviations away from the expected value, it should probably be kept. Chapter 7 deals further with this case.

For repeated measurements (case 2), the situation is a little different. Say you are measuring the time for a pendulum to undergo 20 oscillations and you repeat the measurement five times. Assume that four of these trials are within 0.1 seconds of each other, but the fifth trial differs from these by 1.4 seconds (i.e., more than three standard deviations away from the mean of the "good" values). There is no known reason why that one measurement differs from all the others. Nonetheless, you may be justified in throwing it out. Say that, unknown to you, just as that measurement was being taken, a gravity wave swept through your region of spacetime. However, if you are trying to measure the period of the pendulum when there are no gravity waves affecting the measurement, then throwing out that one result is reasonable. (Although trying to repeat the measurement to find the existence of gravity waves will certainly be more fun!) So whatever the reason for a suspect value, the rule of thumb is that it may be thrown out provided that fact is well documented and that the measurement is repeated a number of times more to convince the experimenter that he/she is not throwing out an important piece of data indicating a new physical process.
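The three-standard-deviation comparison for the pendulum scenario can be sketched in Python; the timing values below are hypothetical, chosen only to match the described pattern of four close trials and one far outlier:

```python
import statistics

# Hypothetical pendulum timings (s) for 20 oscillations: four trials within
# 0.1 s of each other plus one suspect value 1.4 s away, as described above.
trials = [40.1, 40.0, 40.2, 40.1, 41.5]

good = trials[:4]
mean = statistics.mean(good)
s = statistics.stdev(good)
z = abs(trials[4] - mean) / s  # how many standard deviations out?
print(round(z, 1))  # far beyond the three-sigma rule of thumb
```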

3.3 Propagation of Errors of Precision

3.3.1 Discussion and Examples

Usually, errors of precision are probabilistic. This means that the experimenter is saying that the actual value of some parameter is within a specified range. For example, if the half-width of the range equals one standard deviation, then the probability is about 68% that over repeated experimentation the true mean will fall within the range; if the half-width of the range is twice the standard deviation, the probability is 95%, etc.

If we have two variables, say x and y, and want to combine them to form a new variable z, we want the error in the combination to preserve this probability.

The correct procedure to do this is to combine errors in quadrature, which is the square root of the sum of the squares.

For simple combinations of data with random errors, the correct procedure can be summarized in three rules. Δx, Δy, and Δz will stand for the errors of precision in x, y, and z, respectively. We assume that x and y are independent of each other.

Note that all three rules assume that the error, say Δx, is small compared to the value of x.

If

z = x * y

or

z = x / y

then

Δz/z = sqrt[ (Δx/x)^2 + (Δy/y)^2 ]

In words, the fractional error in z is the quadrature of the fractional errors in x and y.

If

z = x + y

or

z = x - y

then

Δz = sqrt[ (Δx)^2 + (Δy)^2 ]

In words, the error in z is the quadrature of the errors in x and y.

If

z = x^n

then

Δz/z = |n| Δx/x

or equivalently

Δz = |n| x^(n-1) Δx

The package includes functions to combine data using each of the above rules.

Imagine we have pressure data, measured in centimeters of Hg, and volume data, measured in arbitrary units. Each data point consists of a {value, error} pair.

We calculate the pressure times the volume.

In the above, the values of the pressure and the volume have been multiplied and the errors have been combined using Rule 1.

There is an equivalent form for this calculation.

Consider the first element of the volume data: {11.28156820762763, 0.031}. The error means that the true value is claimed by the experimenter to probably lie between 11.25 and 11.31. Thus, all the significant figures presented to the right of 11.28 for that data point really aren't significant. The significant-figure adjustment function will adjust the volume data.

Notice that by default, the adjustment keeps the two most significant digits in the error. This can be controlled with an option.

For most cases, the default of two digits is reasonable. As discussed in Section 3.2.1, if we assume a normal distribution for the data, then the fractional error in the determination of the standard deviation σ depends on the number of measurements N, and can be written as follows.

Δσ/σ = 1 / sqrt[ 2 (N - 1) ]

Thus, using this as a general rule of thumb for all errors of precision, the estimate of the error is only good to about 10% (i.e., one significant figure), unless N is greater than 51. Nonetheless, keeping two significant figures handles cases such as 0.035 vs. 0.030, where some significance may be attached to the final digit.
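As a quick numeric check of this rule of thumb, the fractional error of an estimated standard deviation from N normally distributed points, 1/sqrt[2(N - 1)], can be evaluated directly (a minimal Python sketch):

```python
import math

def frac_err_in_sigma(n):
    # Fractional error in a standard deviation estimated from n normally
    # distributed measurements: 1 / sqrt(2 (n - 1)).
    return 1.0 / math.sqrt(2 * (n - 1))

print(frac_err_in_sigma(51))  # 0.1, i.e. the 10% threshold quoted above
print(frac_err_in_sigma(10))  # about 0.24 for a typical small data set
```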

You should be aware that when a datum is massaged by the adjustment function, the extra digits are dropped.

By default, the combination functions apply the significant-figure adjustment just described; this behavior is controlled with an option.

The number of digits can be adjusted.

To form a power, say

z = x^2

we might be tempted to just multiply the data by itself. However, Rule 1 assumes that the two factors are independent, and x is certainly not independent of itself. The correct result follows from Rule 3, which the package implements as a dedicated power function.

Finally, imagine that for some reason we wish to form a more general combination z = f(x, y). We might be tempted to compute it with simple arithmetic on the data, but the rules above do not directly apply. In general, if

z = f(x, y)

then the error is

Δz = sqrt[ (∂f/∂x)^2 (Δx)^2 + (∂f/∂y)^2 (Δy)^2 ]

Here is an example solving such a combination for the pressure and volume data. We shall use p and v below to avoid overwriting the data symbols. First we calculate the total derivative.

Next we form the error.

Now we can evaluate the error expression using the pressure and volume data to get a list of errors.

Next we form the list of {value, error} pairs.

A single convenience function combines these steps, applying the default significant-figure adjustment.

This general-purpose function can be used in place of the other functions discussed above.

In this example, the function will be somewhat faster.

There is a caveat in using this general-purpose function. The expression must contain only symbols, numerical constants, and arithmetic operations. Otherwise, the function will be unable to take the derivatives of the expression necessary to calculate the form of the error. The other functions have no such limitation.
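The total-derivative recipe described above can be sketched numerically in Python (an illustration only, written for this article; the partial derivatives are estimated by central differences rather than taken symbolically):

```python
import math

def propagate(f, values, errors, h=1e-6):
    # General rule: (dz)^2 = sum_i (df/dx_i)^2 (dx_i)^2, with each partial
    # derivative estimated by a central difference.
    z = f(*values)
    var = 0.0
    for i, (v, dv) in enumerate(zip(values, errors)):
        step = h * max(abs(v), 1.0)
        up, dn = list(values), list(values)
        up[i], dn[i] = v + step, v - step
        deriv = (f(*up) - f(*dn)) / (2.0 * step)
        var += (deriv * dv) ** 2
    return z, math.sqrt(var)

# For f(p, v) = p * v this reproduces Rule 1 from the previous section.
z, dz = propagate(lambda p, v: p * v, [6.0, 11.0], [0.2, 0.03])
print(z, dz)  # 66.0 and about 2.207
```

Because the derivatives are taken numerically, this sketch has no restriction to arithmetic-only expressions, but it trades away the exactness of the symbolic approach.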

3.3.1.1 Another Approach to Error Propagation: The Data and Datum Constructs

A Data object wraps a list of {value, error} pairs.

Data[{{789.7, 2.2}, {790.8, 2.3}, {791.2, 2.3}, {792.6, 2.4}, {791.8, 2.5},
{792.2, 2.5}, {794.7, 2.6}, {794., 2.6}, {794.4, 2.7}, {795.3, 2.8},
{796.4, 2.8}}]

The Data wrapper can be removed.

{{789.7, 2.2}, {790.8, 2.3}, {791.2, 2.3}, {792.6, 2.4}, {791.8, 2.5},
{792.2, 2.5}, {794.7, 2.6}, {794., 2.6}, {794.4, 2.7}, {795.3, 2.8}, {796.4, 2.8}}

The reason why the output of the previous two commands has been formatted this way is that, by default, such pairs are typeset using ± for output.

A similar construct can be used with individual data points.

Datum[{70, 0.04}]

Just as for Data, the typesetting of Datum uses ±.

The Data and Datum constructs provide "automatic" error propagation for multiplication, division, addition, subtraction, and raising to a power. Another advantage of these constructs is that the built-in rules know how to combine data with constants.

The rules also know how to propagate errors for many transcendental functions.

This rule assumes that the error is small relative to the value, so we can approximate.

The transcendental functions which can accept Data or Datum arguments are listed in the package's documentation.

We have seen that the Data and Datum constructs are typeset using ±. The ± notation can be used directly in input, and provided its arguments are numeric, errors will be propagated.

One may typeset the ± into the input expression, and errors will again be propagated.

The ± input mechanism can combine terms by addition, subtraction, multiplication, division, raising to a power, and addition and multiplication by a constant number. The rules used for ± apply only to numeric arguments.

This makes the ± form more limited than the Data and Datum constructs.
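To see what such automatic propagation involves, here is a toy Python analogue of a {value, error} construct (a hypothetical class written for this article, assuming independent errors that are small relative to the values):

```python
import math

class Datum:
    # A toy {value, error} pair that propagates errors in quadrature
    # for +, -, and *.
    def __init__(self, value, error):
        self.value, self.error = value, error

    def __add__(self, other):
        return Datum(self.value + other.value,
                     math.sqrt(self.error ** 2 + other.error ** 2))

    def __sub__(self, other):
        return Datum(self.value - other.value,
                     math.sqrt(self.error ** 2 + other.error ** 2))

    def __mul__(self, other):
        v = self.value * other.value
        frac = math.sqrt((self.error / self.value) ** 2
                         + (other.error / other.value) ** 2)
        return Datum(v, abs(v) * frac)

    def __repr__(self):
        return f"{self.value} ± {self.error}"

total = Datum(70, 0.04) + Datum(70, 0.03)
print(total)  # ≈ 140 ± 0.05
```

Operator overloading is what makes the propagation "automatic": the quadrature rules fire whenever two wrapped values are combined, with no extra bookkeeping by the user.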

3.3.1.2 Why Quadrature?

Here we justify combining errors in quadrature. Although the arguments given are not proofs in the usual pristine mathematical sense, they are correct and can be made rigorous if desired.

First, you may already know about the "Random Walk" problem, in which a player starts at the point x = 0 and at each move steps either forward (toward +x) or backward (toward -x). The choice of direction is made randomly for each move by, say, flipping a coin. If each step covers a distance s, then after N steps the expected most probable distance of the player from the origin can be shown to be

d = s sqrt[N]

Thus, the distance goes up as the square root of the number of steps.

Now consider a situation where N measurements of a quantity x are performed, each with an identical random error Δx. We find the sum of the measurements.

Each individual error Δx is equally likely to be +Δx as -Δx, and the cancellation which results is essentially random. Thus, the expected most probable error in the sum goes up as the square root of the number of measurements:

Δ(sum) = sqrt[N] Δx

This is exactly the result obtained by combining the errors in quadrature.

Another similar way of thinking about the errors is that in an abstract linear error space, the errors span the space. If the errors are probabilistic and uncorrelated, the errors in fact are linearly independent (orthogonal) and thus form a basis for the space. Thus, we would expect that to add these independent random errors, we would have to use Pythagoras' theorem, which is just combining them in quadrature.
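The random-walk argument can be verified empirically. This Python sketch (illustrative, with an arbitrary fixed seed) measures the RMS of a sum of N independent errors, each equally likely to be +Δx or -Δx:

```python
import math
import random

def rms_sum_error(n, dx, trials=20000, seed=1):
    # Empirical RMS of the sum of n errors, each equally likely +dx or -dx.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(dx if rng.random() < 0.5 else -dx for _ in range(n))
        total += s * s
    return math.sqrt(total / trials)

# Quadrature predicts sqrt(n) * dx = 10 for n = 100, dx = 1.
print(rms_sum_error(100, 1.0))  # close to 10
```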

3.3.2 Finding the Error in an Average

The rules for propagation of errors, discussed in Section 3.3.1, allow one to find the error in an average or mean of a number of repeated measurements. Recall that to compute the average, first the sum of all the measurements is found, and the rule for addition of quantities allows the computation of the error in the sum. Next, the sum is divided by the number of measurements, and the rule for division of quantities allows the calculation of the error in the result (i.e., the error of the mean).

In the case that the error in each measurement has the same value, the result of applying these rules for propagation of errors can be summarized as a theorem.

Theorem: If the measurement of a random variable x is repeated N times, and the random variable has standard deviation Δx, then the standard deviation in the mean is

Δx / sqrt[N]

Proof: One makes N measurements, each with error errx.

{x1, errx}, {x2, errx}, ... , {xn, errx}

We calculate the sum.

sumx = x1 + x2 + ... + xn

We calculate the error in the sum.

errsum = sqrt[ errx^2 + errx^2 + ... + errx^2 ] = sqrt[N] errx

This last line is the key: by repeating the measurements N times, the error in the sum only goes up as sqrt[N].

The mean is the sum divided by the number of measurements.

mean = sumx / N

Applying the rule for division (N is an exact constant, with no error of its own) we get the following.

errmean = errsum / N = sqrt[N] errx / N = errx / sqrt[N]

This completes the proof.

The quantity errx / sqrt[N] is often called the standard deviation of the mean, or the error of the mean.

Here is an example. In Section 3.2.1, 10 measurements of the diameter of a small cylinder were discussed. The mean of the measurements was 1.6514 cm and the standard deviation was 0.00185 cm. Now we can calculate the mean and its error, adjusted for significant figures.

Note that presenting this result without significant figure adjustment makes no sense.

The above number implies that there is meaning in the one-hundred-millionth part of a centimeter.
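The arithmetic of this example is easy to reproduce (a minimal Python sketch):

```python
import math

mean = 1.6514     # cm, mean of the 10 diameter measurements
sigma = 0.00185   # cm, standard deviation of the measurements
n = 10

error_of_mean = sigma / math.sqrt(n)
print(f"{mean:.4f} ± {error_of_mean:.4f} cm")  # 1.6514 ± 0.0006 cm
```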

Here is another example. Imagine you are weighing an object on a "dial balance" in which you turn a dial until the pointer balances, and then read the mass from the marking on the dial. You find m = 26.10 ± 0.01 g. The 0.01 g is the reading error of the balance, and is about as good as you can read that particular piece of equipment. You remove the mass from the balance, put it back on, weigh it again, and get m = 26.10 ± 0.01 g. You get a friend to try it and she gets the same result. You get another friend to weigh the mass and he also gets m = 26.10 ± 0.01 g. So you have four measurements of the mass of the body, each with an identical result. Do you think the theorem applies in this case? If yes, you would quote m = 26.100 ± 0.01/sqrt[4] = 26.100 ± 0.005 g. How about if you went out on the street and started bringing strangers in to repeat the measurement, each and every one of whom got m = 26.10 ± 0.01 g. So after a few weeks, you have 10,000 identical measurements. Would the error in the mass, as measured on that $50 balance, really be the following?

0.01 / sqrt[10000] = 0.0001 g

The point is that these rules of statistics are only a rough guide and in a situation like this example where they probably don't apply, don't be afraid to ignore them and use your "uncommon sense". In this example, presenting your result as m = 26.10 ± 0.01 g is probably the reasonable thing to do.

3.4 Calibration, Accuracy, and Systematic Errors

In Section 3.1.2, we made the distinction between errors of precision and accuracy by imagining that we had performed a timing measurement with a very precise pendulum clock, but had set its length wrong, leading to an inaccurate result. Here we discuss these types of errors of accuracy. To get some insight into how such a wrong length can arise, you may wish to try comparing the scales of two rulers made by different companies — discrepancies of 3 mm across 30 cm are common!

If we have access to a ruler we trust (i.e., a "calibration standard"), we can use it to calibrate another ruler. One reasonable way to use the calibration is that if our instrument measures a quantity as x and the calibration standard records it as X, then we can multiply all readings of our instrument by X/x. Since the correction is usually very small, it will practically never affect the error of precision, which is also small. Calibration standards are, almost by definition, too delicate and/or expensive to use for direct measurement.

Here is an example. We are measuring a voltage using an analog Philips multimeter, model PM2400/02. The result is 6.50 V, measured on the 10 V scale, and the reading error is decided on as 0.03 V, which is 0.5%. Repeating the measurement gives identical results. It is calculated by the experimenter that the effect of the voltmeter on the circuit being measured is less than 0.003% and hence negligible. However, the manufacturer of the instrument only claims an accuracy of 3% of full scale (10 V), which here corresponds to 0.3 V.

Now, what this claimed accuracy means is that the manufacturer of the instrument claims to control the tolerances of the components inside the box to the point where the value read on the meter will be within 3% times the scale of the actual value. Furthermore, this is not a random error; a given meter will supposedly always read too high or too low when measurements are repeated on the same scale. Thus, repeating measurements will not reduce this error.

A further problem with this accuracy is that while most good manufacturers (including Philips) tend to be quite conservative and give trustworthy specifications, there are some manufacturers who have the specifications written by the sales department instead of the engineering department. And even Philips cannot take into account that maybe the last person to use the meter dropped it.

Nonetheless, in this case it is probably reasonable to accept the manufacturer's claimed accuracy and take the measured voltage to be 6.5 ± 0.3 V. If you want or need to know the voltage better than that, there are two alternatives: use a better, more expensive voltmeter to take the measurement or calibrate the existing meter.

Using a better voltmeter, of course, gives a better result. Say you used a Fluke 8000A digital multimeter and measured the voltage to be 6.63 V. However, you're still in the same position of having to accept the manufacturer's claimed accuracy, in this case (0.1% of reading + 1 digit) = 0.02 V. To do better than this, you must use an even better voltmeter, which again requires accepting the accuracy of this even better instrument and so on, ad infinitum, until you run out of time, patience, or money.

Say we decide instead to calibrate the Philips meter using the Fluke meter as the calibration standard. Such a procedure is usually justified only if a large number of measurements were performed with the Philips meter. Why spend half an hour calibrating the Philips meter for just one measurement when you could use the Fluke meter directly?

We measure four voltages using both the Philips and the Fluke meter. For the Philips instrument we are not interested in its accuracy, which is why we are calibrating the instrument. So we will use the reading error of the Philips instrument as the error in its measurements and the accuracy of the Fluke instrument as the error in its measurements.

We form lists of the results of the measurements.

We can examine the differences between the readings either by dividing the Fluke results by the Philips or by subtracting the two values.

The second set of numbers is closer to the same value than the first set, so in this case adding a correction to the Philips measurement is perhaps more appropriate than multiplying by a correction.

We form a new data set of {value, correction} pairs.

We can guess, then, that for a Philips measurement of 6.50 V the appropriate additive correction is 0.11 ± 0.04 V, where the estimated error is a guess based partly on a fear that the meter's inaccuracy may not be as smooth as the four data points indicate. Thus, the corrected Philips reading can be calculated.
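Numerically, the corrected reading combines the 0.03 V reading error with the 0.04 V correction error in quadrature (a Python sketch of this step):

```python
import math

reading, reading_err = 6.50, 0.03        # Philips reading and reading error (V)
correction, correction_err = 0.11, 0.04  # estimated additive correction (V)

value = reading + correction
error = math.sqrt(reading_err ** 2 + correction_err ** 2)
print(f"{value:.2f} ± {error:.2f} V")  # 6.61 ± 0.05 V
```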

(You may wish to know that all the numbers in this example are real data and that when the Philips meter read 6.50 V, the Fluke meter measured the voltage to be 6.63 ± 0.02 V.)

Finally, a further subtlety: Ohm's law states that the resistance R is related to the voltage V across and the current I through the resistor according to the following equation.

V = IR

Imagine that we are trying to determine an unknown resistance using this law and are using the Philips meter to measure the voltage. Essentially the resistance is the slope of a graph of voltage versus current.

If the Philips meter is systematically reading all voltages too high by a constant offset, that systematic error of accuracy shifts the intercept of the graph but has no effect on the slope, and therefore no effect on the determination of the resistance R. (A multiplicative scale error, by contrast, would bias the slope proportionally.) So in this case and for this measurement, we may be quite justified in ignoring the inaccuracy of the voltmeter entirely and using the reading error to determine the uncertainty in the determination of R.
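A short least-squares sketch in Python (with illustrative numbers and a hypothetical 1 kΩ resistor) shows which kinds of systematic voltmeter error affect the fitted slope:

```python
def slope(xs, ys):
    # Least-squares slope of y versus x (intercept fitted implicitly).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

current = [0.001, 0.002, 0.003, 0.004]      # A
true_v = [i * 1000.0 for i in current]      # exact V = I R with R = 1 kΩ
offset_v = [v + 0.1 for v in true_v]        # meter reads 0.1 V too high
scaled_v = [v * 1.02 for v in true_v]       # meter reads 2% too high

print(slope(current, offset_v))  # still ~1000: a constant offset leaves R unchanged
print(slope(current, scaled_v))  # ~1020: a scale error biases R by 2%
```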

3.5 Summary of the Error Propagation Routines


Ten Common Mistakes in Circuit Analysis


Circuit analysis is a tricky subject, and it’s easy to make certain mistakes, especially when you’re first starting out. You can reduce your odds of making these common mistakes by reviewing the following list.

Failing to label voltage polarities and current directions

When you analyze any circuit, the first step is to properly label the voltage polarities and current direction for each device in the circuit. Circuit labels serve as reference marks for what’s happening in the circuit.

If your answers come out negative, that doesn’t mean your answers are wrong. It just means your answers are opposite in direction to your reference marks.

Making common math errors in circuit design

Simple arithmetic and algebraic errors can cost you when designing circuits. If your calculation is off by one decimal place, your circuit won’t work as designed. Basic trigonometric errors are problematic too, because if you don’t get the trig right, you’ll mess up important calculations involving imaginary and complex numbers.

And when you’re working with first- and second-order circuits, you need to avoid calculus errors, too. Always pay attention to your calculations and double-check your math.

Making incorrect assumptions about open and short circuits

You can’t assume the voltage across an open circuit is zero just because the current through an open circuit is zero. Likewise, you can’t assume the current through a short circuit is zero just because the voltage across the short circuit is zero.

Instead, remember that an open circuit has infinite resistance, having any voltage value across the open circuit but no current through it. The short circuit is like a piece of wire having zero resistance, so any amount of current can flow through it.

Forgetting that the voltage across a current source can be any value

Some students assume that the voltage across a constant current source is zero, but that’s not the case. The voltage across a current source can be any value.

Students who make this mistake often forget that in terms of resistance, a constant current source is a device having infinite resistance (like an open circuit). And an open circuit can have any voltage across it. In fact, using a constant current source with a value of zero is equivalent to replacing the current source with an open circuit.

Misidentifying series and parallel device connections

Beginning circuit analysis students frequently struggle to identify the difference between series and parallel connections of devices. To avoid making this mistake, commit the following to memory:

Series connections: For two devices to be connected in series, only those two devices can share a common junction point (or node). If three or more devices share a node, the devices aren’t in series. All devices connected in series share a common current.

Parallel connections: Devices connected in parallel must share two common junction points (or nodes). All devices connected in parallel share a common voltage.
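These two rules translate directly into the familiar resistance combination formulas, sketched here in Python with illustrative values:

```python
def series(*resistances):
    # Devices in series share a common current; resistances add.
    return sum(resistances)

def parallel(*resistances):
    # Devices in parallel share a common voltage; conductances add.
    return 1.0 / sum(1.0 / r for r in resistances)

print(series(100, 220))    # 320
print(parallel(100, 100))  # 50.0
```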

Simplifying circuits incorrectly

Simplifying complex circuits can make your analysis easier (plus you can try different loads to meet circuit requirements). But if you mess up the simplification, you inevitably lose terminals, resulting in a circuit that isn’t equivalent.

Make sure you don’t lose the terminals of interest as you simplify the circuit. If you’re interested in a specific pair of terminals, one approach is to convert the circuit so that all elements are connected in parallel or all elements are connected in series at the pair of terminals.

Applying transformed circuits incorrectly

When you do source transformation, remember that the two circuits are equivalent only at the terminals of interest. If you forget this, you can misinterpret the actual output value.

Verify your transformed circuit with other circuit analysis techniques derived from Kirchhoff's voltage and current laws, such as voltage and current divider techniques. Or you can simply apply Kirchhoff's laws directly to check your results.

Formulating node voltage equations incorrectly

The purpose of node-voltage analysis is to find the voltages across the devices in a circuit. A common mistake is attempting to write a nodal equation through a voltage source when none of its terminals are connected to ground. This approach isn’t valid because a basic nodal equation can’t be written through a voltage source. Instead, you must treat the voltage source for this special case as one node (known as a supernode ).

Another common pitfall with node-voltage analysis concerns how students visualize a node in a circuit diagram as one specific point in a circuit. When a node has three or more branches, students tend to make multiple nodes out of a single node.

Verify your work in node-voltage analysis when you only have independent sources by looking at the symmetry of the matrix of conductances along the diagonal. If it’s not symmetric, then you did something incorrectly. Also remember that when you have dependent sources, you may or may not have symmetry along the matrix diagonal.

Setting up mesh current equations incorrectly

You can’t form a mesh current equation when your mesh includes a current source, because the voltage across a current source can be any value. You need to avoid (or bypass) the current source so it’s not part of the circuit mesh equations. If the current source is part of two meshes, then you can form one supermesh to write the mesh current equation.

For consistency, make your mesh currents point in the same (clockwise) direction. You’ll end up with a matrix of resistances that’s symmetric, with positive elements along the main diagonal and negative off-diagonal elements. Better yet, if your resulting matrix has these characteristics, then you’ll know that you’ve likely formulated the mesh current equations correctly.
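As an illustration of that symmetry check, here is a two-mesh example in Python (hypothetical component values chosen for this sketch):

```python
# Clockwise mesh currents i1, i2; R1 only in mesh 1, R3 only in mesh 2,
# R2 shared between the meshes; source Vs drives mesh 1.
R1, R2, R3, Vs = 2.0, 4.0, 6.0, 12.0

# Mesh equations: (R1 + R2) i1 - R2 i2 = Vs ;  -R2 i1 + (R2 + R3) i2 = 0
A = [[R1 + R2, -R2],
     [-R2, R2 + R3]]

# Sanity check: symmetric, positive diagonal, negative off-diagonal.
assert A[0][1] == A[1][0] and A[0][0] > 0 and A[1][1] > 0

# Solve the 2x2 system by Cramer's rule.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
i1 = (Vs * A[1][1] - (-R2) * 0.0) / det
i2 = (A[0][0] * 0.0 - (-R2) * Vs) / det
print(i1, i2)  # about 2.727 A and 1.091 A
```

If the assembled matrix had failed the symmetry check, that would flag a mistake in forming the mesh equations, exactly as the text suggests.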

Giving your final answer in terms of a dependent variable

Your solution of an output must be a function of an independent source, not a dependent source, even when your circuit has dependent sources. This means you need to eliminate the dependent variable from your answer by writing it in terms of an independent variable.

About This Article

This article is from the book:

  • Circuit Analysis For Dummies

About the book author:

John M. Santiago Jr., PhD, served in the United States Air Force (USAF) for 26 years. During that time, he held a variety of leadership positions in technical program management, acquisition development, and operation research support. While assigned in Europe, he spearheaded more than 40 international scientific and engineering conferences/workshops.



Discover the intricacies and implications of sources of error in experiments, specifically in the context of engineering. This comprehensive guide elaborates on what these errors mean, how they influence results, and strategies to mitigate their impact. Dive deep into the world of experimental errors, with detailed exploration of common examples, their potential consequences and effective solutions. Whether you're undertaking your first engineering experiment or you're a seasoned professional, understanding these errors is crucial to ensuring your data remains valid, reliable and accurate. Let's embark on this educational journey, delving into the art of identifying and managing sources of error in experiments.

Sources of Error in Experiments


Understanding the Meaning of Sources of Error in Experiments

In the context of an experiment, 'error' refers to the deviation of a measured or calculated value from the true value. The accumulation of these errors, if not properly addressed, can significantly impact the accuracy and validity of experimental results.

Explaining What Sources of Error in Experiments Mean in Engineering

  • Systematic Errors
  • Random Errors

'Systematic errors' are errors that are consistent and repeatable, and typically result from faulty equipment or a flawed experimental design.

If your scale is not properly zeroed and instead always reads 0.01 units above the actual value, then even if you measure a quantity of exactly 10 units perfectly, your recorded measure will be 10.01 units. This is a clear illustration of a systematic error.

The Role of These Sources of Error in Engineering Experiments

Here's an interesting aspect to consider: while errors in experiments are often seen as problematic, they sometimes lead to new discoveries. A classic example is the serendipitous discovery of penicillin by Alexander Fleming, resulting from a contamination (an ‘error’) in his experiment.

Common Examples of Sources of Error in Experiments

Classic instances of sources of error in experiments examples.

  • Parallax error: This occurs when a scale or pointer is viewed from an angle rather than straight on, so the reading appears shifted against the scale and the true value becomes uncertain.
  • Reading error: Taking the wrong value from the scale of the measuring instrument results in a reading error. This can be as straightforward as misreading the markings on a ruler or thermometer.
  • Instrument precision error: Some instruments have limitations on how precisely they can measure. For instance, a typical school laboratory ruler can only measure to the nearest millimetre, while far more precise measurements may be required.

Instrument precision error is a classic systematic error: it is consistent and repeatable, and is caused by the limitations of the equipment. Parallax and reading errors are personal errors, though they behave systematically when the same flawed reading technique is repeated throughout an experiment.

Frequently Encountered Types of Errors in Engineering Experiments

  • Environmental fluctuations: Changes in environmental conditions such as temperature, humidity, pressure, or electromagnetic interference can all lead to errors in measurements.
  • Instrumental drift: Many instruments display a change in response over time, leading to an error known as 'drift'. It's often seen in electronic components where properties can change due to warming up.
  • Misuse of equipment: Incorrect use of equipment or using the wrong equipment for a specific measure can result in large errors in experimental data.
  • Sample contamination: When carrying out chemical or biological experiments, contamination of samples can significantly alter results.

Consider a circuit with a nominal 10V supply that drifts by 0.1V over a one-hour experiment. If you are measuring a voltage drop over a 1kΩ resistor using Ohm’s law (\( V = IR \)), the potential error due to drift would be an incorrect calculation of current by 0.1mA. As measurements become more precise, this kind of drift can have a significant impact on results.
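The arithmetic of this drift example can be checked in a few lines of Python (same numbers as above):

```python
drift_v = 0.1    # V, supply drift over the one-hour experiment
r_ohms = 1000.0  # Ω, the 1 kΩ resistor

# Ohm's law: I = V / R, so the drift maps to a current error of 0.1 mA.
current_error_ma = drift_v / r_ohms * 1000.0
print(current_error_ma)  # ~0.1 mA
```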

Diving into Sources of Error in Experiments and Their Impact

Sources of error in experiments and their potential consequences.

  • For instance, consider a case where a faulty weighing scale always reads 2 grams less than the actual mass. If you are using this scale to measure a chemical necessary for a series of chemical reactions, then irrespective of careful execution, the results will always be skewed due to this error.
  • A classical example of a random error might occur in laboratory experiments involving the measurement of temperature. Day-to-day fluctuations in room temperature may cause slight variations in the measured temperature of a liquid, leading to inconsistent, and thus, unreliable results.
  • An example might be if you inaccurately record the temperature as 800 degrees Celsius instead of 80 degrees Celsius. No matter how impeccably the rest of the experiment is performed, this blunder will translate into an error that could mask the real results.

How Errors in Experiments Affect Outcomes and Data Integrity

For simplicity, assume the reaction rate is linearly proportional to the amount of catalyst used, as described by the equation \[ R = kC \], where \( R \) is the reaction rate, \( k \) is the rate constant, and \( C \) is the concentration of the catalyst. If the scale you're using to measure the catalyst is off by the said 2 grams, this error will influence the value of \( k \) you determine from measurements. In effect, you’ve introduced an error in an experiment that was otherwise impeccably planned and executed.

Evaluating and Reducing Sources of Error in Engineering Experiments

Strategies for minimising sources of error in experiments.

Calibration is the process of sequentially adjusting the output or indication on a measurement instrument to match the values represented by a reference standard over the entire measurement range of the instrument.
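As a minimal illustration of that definition (all readings below are made up), a two-point linear calibration maps raw instrument indications onto reference-standard values:

```python
# Two-point linear calibration: fit a gain and offset so the instrument's raw
# indications match two reference-standard values (all numbers assumed).
ref_lo, ref_hi = 0.0, 100.0   # reference standard values
raw_lo, raw_hi = 1.2, 98.6    # instrument indications at those points

gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
offset = ref_lo - gain * raw_lo

def calibrate(reading: float) -> float:
    """Convert a raw indication into a calibrated value."""
    return gain * reading + offset

print(f"{calibrate(raw_lo):.1f} {calibrate(raw_hi):.1f}")  # 0.0 100.0
```

A real calibration would use several points spanning the whole measurement range, as the definition above requires; two points only pin down a straight line.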

Tools and Techniques to Keep Errors Minimal in Engineering Experiments

Analog devices represent data using a physical quantity that can take on any value within a range, while digital devices represent data as discrete values.

Solutions for Common Sources of Error in Experiments

Suggested problem-solving approaches for sources of error in experiments.

Systematic Errors: These occur due to predictable and consistent factors which cause the measured value to deviate from the true value. They result in a bias in the data.

Random Errors: These are unpredictable fluctuations that arise from variables in the experiment that are outside of control. Unlike systematic errors, they cannot be pinpointed to any specific factor and thus, add uncertainty to the experimental results.

Blunders: These are avoidable and usually arise due to misconceptions, carelessness or oversight. These are not inherent in the experimental procedure but entirely depend on human factors and thus, different from the first two types.
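The distinction between the first two types can be demonstrated with a short simulation (bias and noise values assumed): averaging many repeats suppresses random error but leaves systematic error untouched.

```python
# Simulation (assumed numbers): systematic bias shifts every reading the same
# way; random error scatters readings around the (biased) mean.
import random

random.seed(0)
true_value = 80.0    # e.g. the liquid temperature, deg C
bias = -2.0          # systematic: instrument always reads low
noise_sd = 0.5       # random: unpredictable scatter per reading

readings = [true_value + bias + random.gauss(0.0, noise_sd) for _ in range(1000)]
mean = sum(readings) / len(readings)
print(f"mean of 1000 repeats = {mean:.2f} (true value {true_value})")
# The mean settles near true_value + bias (78.0), not near 80.0: averaging
# removes random error but cannot remove systematic bias.
```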

Methods to Counteract Frequent Sources of Error in Engineering Experiments

Sources of error in experiments - key takeaways.

  • Sources of Error in Experiments can lead to valuable findings, as in the case of penicillin discovered by Alexander Fleming due to a 'contamination error'.
  • Common errors in experiments include Parallax error, Reading error and Instrument precision error, all of which, if overlooked, can yield misleading results.
  • Engineering experiments often encounter more specific errors such as Environmental fluctuations, Instrumental drift, Misuse of equipment, and Sample contamination.
  • Systematic errors (consistent and repeatable), random errors (unpredictable and arising from variables outside an experimenter's control), and blunders (human errors resulting from carelessness or lack of knowledge) can significantly impact the accuracy of experimental results.
  • Strategies for minimising Sources of Error include calibration of equipment, performing repetitions of the experiment, conducting a pre-experimental design analysis, error estimation, and carrying out blind trials.

Flashcards: Sources of Error in Experiments

What is the meaning of 'error' in the context of an experiment?

In an experiment, 'error' refers to the deviation of a measured or calculated value from the true value. If not addressed, these can impact the accuracy and validity of experimental results.

What are the three primary sources of errors in engineering experiments?

The three primary sources of errors in engineering experiments are Systematic Errors, Random Errors, and Blunders.

How do the sources of errors affect engineering experiments and why is understanding them crucial?

Errors determine accuracy and reliability of experimental results. Identifying them helps improve data accuracy, experimental design, and encourages meticulousness, thereby eliminating potential blunders.

What is a parallax error in an experiment?

A parallax error occurs when a reading is taken at an angle to the measurement scale rather than straight on, so the pointer or meniscus appears displaced relative to the scale and the true reading becomes uncertain.

What could cause instrumental drift in engineering experiments?

Instrumental drift can occur when many instruments display a change in response over time, often seen in electronic components where properties can change due to warming up.

What does a sample contamination error indicate?

Sample contamination, an error that can occur during chemical or biological experiments, significantly alters results by contaminating the test samples.


What Are Common Sources of Error in an Electric Field Mapping Lab?

  • Thread starter mich_v87
  • Start date Oct 11, 2005
  • Oct 17, 2005

Sorry for the late reply. In case you're still interested, here are my comments. When looking for sources of error in an experiment, one good thing to do is to look for simplifying assumptions in your theory. Your theoretical model most likely assumes the following: 1.) You are using an ideal galvanometer. 2.) Your wires and circuit elements have zero resistance (except for resistors of course). In addition to that you can get errors by using nominal values of circuit element ratings, instead of measuring the ratings. For instance, in your theoretical calculations did you use "6 V" as your source voltage? Or did you take a voltmeter and actually measure the potential difference across the terminals? If you did the former, then you probably picked up some error from that.  


Related to What Are Common Sources of Error in an Electric Field Mapping Lab?

What is an error source in an experiment?

An error source in an experiment refers to any factor or condition that can cause a deviation from the expected or true value of a measurement or observation. This can include human error, equipment malfunction, environmental factors, and other variables that may affect the outcome of the experiment.

Why is it important to identify error sources in an experiment?

Identifying error sources in an experiment is crucial because it allows scientists to understand the potential limitations or uncertainties in their data. By identifying and minimizing these errors, scientists can increase the accuracy and reliability of their results.

What are some common types of error sources in experiments?

Some common types of error sources in experiments include random errors, systematic errors, and human errors. Random errors can occur due to chance and are typically small and unpredictable. Systematic errors, on the other hand, are consistent and can be caused by faulty equipment or biased measurements. Human errors can also occur due to mistakes in measurement or calculation.

How can you minimize error sources in an experiment?

To minimize error sources in an experiment, scientists can use proper experimental design, carefully calibrate equipment, and follow standardized procedures. They can also repeat the experiment multiple times and use statistical analysis to identify and account for any errors.

What should you do if you encounter unexpected error sources in an experiment?

If unexpected error sources are encountered in an experiment, it is important to document and analyze them to understand their impact on the results. Scientists should also consider repeating the experiment or adjusting their methods to minimize these errors in future experiments.



  • Open access
  • Published: 11 November 2022

Error, reproducibility and uncertainty in experiments for electrochemical energy technologies

  • Graham Smith (ORCID: orcid.org/0000-0003-0713-2893)
  • Edmund J. F. Dickinson (ORCID: orcid.org/0000-0003-2137-3327)

Nature Communications, volume 13, Article number: 6832 (2022)


Subjects: Electrocatalysis · Electrochemistry · Materials for energy and catalysis

The authors provide a metrology-led perspective on best practice for the electrochemical characterisation of materials for electrochemical energy technologies. Such electrochemical experiments are highly sensitive, and their results are, in practice, often of uncertain quality and challenging to reproduce quantitatively.

A critical aspect of research on electrochemical energy devices, such as batteries, fuel cells and electrolysers, is the evaluation of new materials, components, or processes in electrochemical cells, either ex situ, in situ or in operation. For such experiments, rigorous experimental control and standardised methods are required to achieve reproducibility, even on standard or idealised systems such as single crystal platinum 1 . Data reported for novel materials often exhibit high (or unstated) uncertainty and often prove challenging to reproduce quantitatively. This situation is exacerbated by a lack of formally standardised methods, and practitioners with less formal training in electrochemistry being unaware of best practices. This limits trust in published metrics, with discussions on novel electrochemical systems frequently focusing on a single series of experiments performed by one researcher in one laboratory, comparing the relative performance of the novel material against a claimed state-of-the-art.

Much has been written about the broader reproducibility/replication crisis 2 and those reading the electrochemical literature will be familiar with weakly underpinned claims of “outstanding” performance, while being aware that comparisons may be invalidated by measurement errors introduced by experimental procedures which violate best practice; such issues frequently mar otherwise exciting science in this area. The degree of concern over the quality of reported results is evidenced by the recent decision of several journals to publish explicit experimental best practices 3 , 4 , 5 , reporting guidelines or checklists 6 , 7 , 8 , 9 , 10 and commentary 11 , 12 , 13 aiming to improve the situation, including for parallel theoretical work 14 .

We write as two electrochemists who, working in a national metrology institute, have enjoyed recent exposure to metrology: the science of measurement. Metrology provides the vocabulary 15 and mathematical tools 16 to express confidence in measurements and the comparisons made between them. Metrological systems and frameworks for quantification underpin consistency and assurance in all measurement fields and formal metrology is an everyday consideration for practical and academic work in fields where accurate measurements are crucial; we have found it a useful framework within which to evaluate our own electrochemical work. Here, rather than pen another best practice guide, we aim, with focus on three-electrode electrochemical measurements for energy material characterisation, to summarise some advice that we hope helps those performing electrochemical experiments to:

avoid mistakes and minimise error

report in a manner that facilitates reproducibility

consider and quantify uncertainty

Minimising mistakes and error

Metrology dispenses with nebulous concepts such as performance and instead requires scientists to define a specific measurand (“the quantity intended to be measured”) along with a measurement model (“the mathematical relation among all quantities known to be involved in a measurement”), which converts the experimental indicators into the measurand 15 . Error is the difference between the reported value of this measurand and its unknowable true value. (Note this is not the formal definition, and the formal concepts of error and true value are not fully compatible with measurement concepts discussed in this article, but we retain it here—as is common in metrology tuition delivered by national metrology institutes—for pedagogical purposes 15 ).

Mistakes (or gross errors) are those things which prevent measurements from working as intended. In electrochemistry the primary experimental indicator is often current or voltage, while the measurand might be something simple, like device voltage for a given current density, or more complex, like a catalyst’s turnover frequency. Both of these are examples of ‘method-defined measurands’, where the results need to be defined in reference to the method of measurement 17 , 18 (for example, to take account of operating conditions). Robust experimental design and execution are vital to understand, quantify and minimise sources of error, and to prevent mistakes.

Contemporary electrochemical instrumentation can routinely offer a current resolution and accuracy on the order of femtoamps; however, one electron looks much like another to a potentiostat. Consequently, the practical limit on measurements of current is the scientist’s ability to unambiguously determine what causes the observed current. Crucially, they must exclude interfering processes such as modified/poisoned catalyst sites or competing reactions due to impurities.

As electrolytes are conventionally in enormous excess compared to the active heterogeneous interface, electrolyte purity requirements are very high. Note, for example, that a perfectly smooth 1 cm² polycrystalline platinum electrode has on the order of 2 nmol of atoms exposed to the electrolyte, so that irreversibly adsorbing impurities present at the part per billion level (nmol mol⁻¹) in the electrolyte may substantially alter the surface of the electrode. Sources of impurities at such low concentration are innumerable and must be carefully considered for each experiment; impurity origins for kinetic studies in aqueous solution have been considered broadly in the historical literature, alongside a review of standard mitigation methods 19 . Most commercial electrolytes contain impurities and the specific ‘grade’ chosen may have a large effect; for example, one study showed a three-fold decrease in the specific activity of oxygen reduction catalysts when preparing electrolytes with American Chemical Society (ACS) grade acid rather than a higher purity grade 20 . Likewise, even 99.999% pure hydrogen gas, frequently used for sparging, may contain more than the 0.2 μmol mol⁻¹ of carbon monoxide permitted for fuel cell use 21 .
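The order-of-magnitude claim for exposed surface sites can be reproduced with a typical polycrystalline Pt site density (about 1.5 × 10¹⁵ atoms per cm², an assumed figure not from the text), and compared against the ppb impurity budget of a modest, assumed electrolyte volume:

```python
# Back-of-envelope check: exposed Pt atoms on a smooth 1 cm^2 electrode vs
# the amount of a 1 nmol/mol (ppb) impurity in 100 mL of aqueous electrolyte.
# Site density and electrolyte volume are assumed, illustrative figures.
N_A = 6.022e23          # Avogadro constant, 1/mol
site_density = 1.5e15   # Pt surface atoms per cm^2 (typical, assumed)
area = 1.0              # electrode area, cm^2

surface_nmol = site_density * area / N_A * 1e9
print(f"exposed Pt atoms: {surface_nmol:.1f} nmol")      # ~2.5 nmol

water_mol = 100.0 / 18.0                 # mol of H2O in ~100 g of water
impurity_nmol = water_mol * 1e-9 * 1e9   # at 1 nmol per mol of solvent
print(f"1 ppb impurity in 100 mL: {impurity_nmol:.1f} nmol")
# Comparable magnitudes: a ppb-level adsorbing impurity can plausibly
# cover a substantial fraction of the electrode surface.
```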

The most insidious impurities are those generated in situ. The use of reference electrodes with chloride-containing filling solutions should be avoided where chloride may poison catalysts 22 or accelerate dissolution. Similarly, reactions at the counter electrode, including dissolution of the electrode itself, may result in impurities. This is sometimes overlooked when platinum counter electrodes are used to assess ‘platinum-free’ electrocatalysts, accidentally resulting in performance-enhancing contamination 23 , 24 ; a critical discussion on this topic has recently been published 25 . Other trace impurity sources include plasticisers present in cells and gaskets, or silicates from the inappropriate use of glass when working with alkaline electrolytes 26 . To mitigate sensitivity to impurities from the environment, cleaning protocols for cells and components must be robust 27 . The use of piranha solution or similarly oxidising solution followed by boiling in Type 1 water is typical when performing aqueous electrochemistry 20 . Cleaned glassware and electrodes are also routinely stored underwater to prevent recontamination from airborne impurities.

The behaviour of electronic hardware used for electrochemical experiments should be understood and considered carefully in interpreting data 28 , recognising that the built-in complexity of commercially available digital potentiostats (otherwise advantageous!) is capable of introducing measurement artefacts or ambiguity 29 , 30 . While contemporary electrochemical instrumentation may have a voltage resolution of ~1 μV, its voltage measurement uncertainty is limited by other factors, and is typically on the order of 1 mV. As passing current through an electrode changes its potential, a dedicated reference electrode is often incorporated into both ex situ and, increasingly, in situ experiments to provide a stable well defined reference. Reference electrodes are typically selected from a range of well-known standardised electrode–electrolyte interfaces at which a characteristic and kinetically rapid reversible faradaic process occurs. The choice of reference electrode should be made carefully in consideration of chemical compatibility with the measurement environment 31 , 32 , 33 , 34 . In combination with an electronic blocking resistance, the potential of the electrode should be stable and reproducible. Unfortunately, deviation from the ideal behaviour frequently occurs. While this can often be overlooked when comparing results from identical cells, more attention is required when reporting values for comparison.

In all cases where conversion between different electrolyte–reference electrode systems is required, junction potentials should be considered. These arise whenever there are different chemical conditions in the electrolyte at the working electrode and reference electrode interfaces. Outside highly dilute solutions, or where there are large activity differences for a reactant/product of the electrode reaction (e.g. pH for hydrogen reactions), liquid junction potentials for conventional aqueous ions have been estimated in the range <50 mV 33 . Such a deviation may nonetheless be significant when onset potentials or activities at specific potentials are being reported. The measured potential difference between the working and reference electrode also depends strongly on the geometry of the cell, so cell design is critical. Fig.  1 shows the influence of cell design on potential profiles. Ideally the reference electrode should therefore be placed close to the working electrode (noting that macroscopic electrodes may have inhomogeneous potentials). To minimise shielding of the electric field between counter and working electrode and interruption of mass transport processes, a thin Luggin-Haber capillary is often used and a small separation maintained. Understanding of shielding and edge effects is vital when reference electrodes are introduced in situ. This is especially applicable for analysis of energy devices for which constraints on cell design, due to the need to minimise electrolyte resistance and seal the cell, preclude optimal reference electrode positioning 32 , 35 , 36 .

figure 1

a Illustration (simulated data) of primary (resistive) current and potential distribution in a typical three-electrode cell. The main compartment is cylindrical (4 cm diameter, 1 cm height), filled with electrolyte with conductivity 1.28 S m⁻¹ (0.1 M KCl(aq)). The working electrode (WE) is a 2 mm diameter disc drawing 1 mA (≈32 mA cm⁻²) from a faradaic process with infinitely fast kinetics and redox potential 0.2 V vs the reference electrode (RE). The counter electrode (CE) is connected to the main compartment by a porous frit; the RE is connected by a Luggin capillary (green cylinders) whose tip position is offset from the WE by a variable distance. Red lines indicate prevailing current paths; coloured surfaces indicate isopotential contours normal to the current density. b Plot of indicated WE vs RE potential (simulated data). As the Luggin tip is moved away from the WE surface, ohmic losses due to the WE-CE current distribution lead to variation in the indicated WE-RE potential. Appreciable error may arise on an offset length scale comparable to the WE radius.
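The trend in panel b can be sketched analytically using Newman's primary-distribution result for a disc electrode, R_u(d) ≈ arctan(d/a) / (2πκa), for a Luggin tip on the disc axis at distance d. This assumes an idealised semi-infinite electrolyte, so values will differ somewhat from the finite simulated cell in the caption; the conductivity, disc size, and current are the caption's numbers.

```python
# Ohmic error vs Luggin tip offset for a disc working electrode, using the
# primary-distribution approximation R_u(d) = atan(d/a) / (2*pi*kappa*a)
# (idealised on-axis geometry; an assumption, not the caption's simulation).
import math

kappa = 1.28   # electrolyte conductivity, S/m (0.1 M KCl, from the caption)
a = 1e-3       # working-electrode disc radius, m (2 mm diameter)
I = 1e-3       # drawn current, A

for d_mm in (0.1, 0.5, 1.0, 5.0):
    R_u = math.atan(d_mm * 1e-3 / a) / (2 * math.pi * kappa * a)  # ohms
    print(f"tip offset {d_mm:4.1f} mm: ohmic error ~ {I * R_u * 1e3:5.1f} mV")
```

Consistent with the caption's conclusion: the error becomes appreciable once the offset is comparable to the electrode radius, and saturates toward the full-cell uncompensated resistance at large offsets.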

Quantitative statements about fundamental electrochemical processes based on measured values of current and voltage inevitably rely on models of the system. Such models have assumptions that may be routinely overlooked when following experimental and analysis methods, and that may restrict their application to real-world systems. It is quite possible to make highly precise but meaningless measurements! An often-assumed condition for electrocatalyst analysis is the absence of mass transport limitation. For some reactions, such as the acidic hydrogen oxidation and hydrogen evolution reactions, this state is arguably so challenging to reach at representative conditions that it is impossible to measure true catalyst activity 11 . For example, ex situ thin-film rotating disk electrode measurements routinely fail to predict correct trends in catalyst performance in morphologically complex catalyst layers  as relevant operating conditions (e.g. meaningful current densities) are theoretically inaccessible. This topic has been extensively discussed with some authors directly criticising this technique and exploring alternatives 37 , 38 , and others defending the technique’s applicability for ranking catalysts if scrupulous attention is paid to experimental details 39 ; yet, many reports continue to use this measurement technique blindly with no regard for its applicability. We therefore strongly urge those planning measurements to consider whether their chosen technique is capable of providing sufficient evidence to disprove their hypothesis, even if it has been widely used for similar experiments.

The correct choice of technique should be dependent upon the measurand being probed rather than simply following previous reports. The case of iR correction, where a measurement of the uncompensated resistance is used to correct the applied voltage, is a good example. When the measurand is a material property, such as intrinsic catalyst activity, the uncompensated resistance is a source of error introduced by the experimental method and it should carefully be corrected out (Fig.  1 ). In the case that the uncompensated resistance is intrinsic to the measurand—for instance the operating voltage of an electrolyser cell—iR compensation is inappropriate and only serves to obfuscate. Another example is the choice of ex situ (outside the operating environment), in situ (in the operating environment), and operando (during operation) measurements. While in situ or operando testing allows characterisation under conditions that are more representative of real-world use, it may also yield measurements with increased uncertainty due to the decreased possibility for fine experimental control. Depending on the intended use of the measurement, an informed compromise must be sought between how relevant and how uncertain the resulting measurement will be.
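The iR-correction case discussed above amounts to one line of arithmetic; the resistance, potential, and current below are assumed, illustrative values.

```python
# Minimal iR-correction sketch (numbers assumed). Appropriate when the
# measurand is a material property (e.g. intrinsic catalyst activity);
# inappropriate when R_u is intrinsic to the measurand, such as an
# electrolyser cell's operating voltage.
R_u = 5.0          # measured uncompensated resistance, ohms
E_applied = 1.80   # applied potential vs reference, V
I = 0.020          # measured (anodic) current, A

E_corrected = E_applied - I * R_u   # potential actually at the interface
print(f"iR-corrected potential: {E_corrected:.2f} V")  # 1.70 V
```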

Maximising reproducibility

Most electrochemists assess the repeatability of measurements, performing the same measurement themselves several times. Repeats, where all steps (including sample preparation, where relevant) of a measurement are carried out multiple times, are absolutely crucial for highlighting one-off mistakes (Fig.  2 ). Reproducibility, however, is assessed when comparing results reported by different laboratories. Many readers will be familiar with the variability in key properties reported for various systems e.g. variability in the reported electrochemically active surface area (ECSA) of commercial catalysts, which might reasonably be expected to be constant, suggesting that, in practice, the reproducibility of results cannot be taken for granted. As electrochemistry deals mostly with method-defined measurands, the measurement procedure must be standardised for results to be comparable. Variation in results therefore strongly suggests that measurements are not being performed consistently and that the information typically supplied when publishing experimental methods is insufficient to facilitate reproducibility of electrochemical measurements. Quantitative electrochemical measurements require control over a large range of parameters, many of which are easily overlooked or specified imprecisely when reporting data. An understanding of the crucial parameters and methods for their control is often institutional knowledge, held by expert electrochemists, but infrequently formalised and communicated e.g. through publication of widely adopted standards. This creates challenges to both reproducibility and the corresponding assessment of experimental quality by reviewers. The reporting standards established by various publishers (see Introduction) offer a practical response, but it is still unclear whether these will contain sufficiently granular detail to improve the situation.

figure 2

The measurements from laboratory 1 show a high degree of repeatability, while the measurements from laboratory 2 do not. Apparently, a mistake has been made in repeat 1, which will need to be excluded from any analysis and any uncertainty analysis, and/or suggests further repeat measurements should be conducted. The error bars are based on an uncertainty with coverage factor ~95% (see below) so the results from the two laboratories are different, i.e. show poor reproducibility. This may indicate differing experimental practice or that some as yet unidentified parameter is influencing the results.

Besides information typically supplied in the description of experimental methods for publication, which, at a minimum, must detail the materials, equipment and measurement methods used to generate the results, we suggest that a much more comprehensive description is often required, especially where measurements have historically poor reproducibility or the presented results differ from earlier reports. Such an expanded ‘supplementary experimental’ section would additionally include any details that could impact the results: for example, material pre-treatment, detailed electrode preparation steps, cleaning procedures, expected electrolyte and gas impurities, electrode preconditioning processes, cell geometry including electrode positions, detail of junctions between electrodes, and any other fine experimental details which might be institutional knowledge but unknown to the (now wide) readership of the electrochemical literature. In all cases any corrections and calculations used should be specified precisely and clearly justified; these may include determinations of properties of the studied system, such as ECSA, or of the environment, such as air pressure. We highlight that knowledge of the ECSA is crucial for conventional reporting of intrinsic electrocatalyst activity, but is often very challenging to measure in a reproducible manner 40 , 41 .

To aid reproducibility we recommend regularly calibrating experimental equipment and doing so in a way that is traceable to primary standards realising the International System of Units (SI) base units. The SI system ensures that measurement units (such as the volt) are uniform globally and invariant over time. Calibration applies to direct experimental indicators, e.g. loads and potentiostats, but equally to supporting tools such as temperature probes, balances, and flow meters. Calibration of reference electrodes is often overlooked even though variations from ideal behaviour can be significant 42 and, as discussed above, are often the limit of accuracy on potential measurement. Sometimes reports will specify internal calibration against a known reaction (such as the onset of the hydrogen evolution reaction), but rarely detail regular comparisons to a local master electrode artefact such as a reference hydrogen electrode or explain how that artefact is traceable, e.g. through control of the filling solution concentration and measurement conditions. If reference is made to a standardised material (e.g. commercial Pt/C) the specified material should be both widely available and the results obtained should be consistent with prior reports.

Beyond calibration and reporting, the best test of reproducibility is to perform intercomparisons between laboratories, either by comparing results to identical experiments reported in the literature or, more robustly, through participation in planned intercomparisons (for example ‘round-robin’ exercises); we highlight a recent study applied to solid electrolyte characterisation as a topical example 43 . Intercomparisons are excellent at establishing the key features of an experimental method and the comparability of results obtained from different methods; moreover they provide a consensus against which other laboratories may compare themselves. However, performing repeat measurements for assessing repeatability and reproducibility cannot estimate uncertainty comprehensively, as it excludes systematic sources of uncertainty.

Assessing measurement uncertainty

Formal uncertainty evaluation is an alien concept to most electrochemists; even the best papers (as well as our own!) typically report only the standard deviation between a few repeats. Metrological best practice dictates that a reported value is stated as a best estimate of the measurand, an interval, and a coverage factor (k), which together give the probability of the true value lying within that interval. For example, “the turnover frequency of the electrocatalyst is 1.0 ± 0.2 s−1 (k = 2)” [16] means that the scientist (having assumed normally distributed error) is 95% confident that the turnover frequency lies in the range 0.8–1.2 s−1. Reporting results in such a way makes it immediately clear whether the measurements reliably support the stated conclusions, and enables meaningful comparisons between independent results even if their uncertainties differ (Fig. 3). It also encourages honesty and self-reflection about the shortcomings of results, prompting the development of improved experimental techniques.
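As a minimal sketch of this reporting convention (the helper name and formatting are our own, not from the text): the expanded uncertainty is simply the combined standard uncertainty multiplied by the coverage factor k.

```python
def report_measurement(best_estimate, combined_std_uncertainty, k=2):
    """Format a result as 'value ± U (k = ...)', where the expanded
    uncertainty U is the combined standard uncertainty times the
    coverage factor k (for normally distributed error, k = 2 gives
    ~95% coverage, k = 1 gives ~68%)."""
    expanded = k * combined_std_uncertainty
    return f"{best_estimate} ± {expanded} (k = {k})"

# The turnover-frequency example from the text: u_c = 0.1 s^-1, k = 2
print(report_measurement(1.0, 0.1, k=2))  # → 1.0 ± 0.2 (k = 2)
```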

figure 3

a Complete reporting of a measurement includes the best estimate of the measurand, an uncertainty, and the probability that the true value falls within the reported uncertainty. Here, the percentages indicate that a normal distribution has been assumed. b Horizontal bars indicate 95% confidence intervals from uncertainty analysis. The confidence intervals of measurements 1 and 2 overlap when using k = 2, so it is not possible to say with 95% confidence that the result of measurement 2 is higher than that of measurement 1, but it is possible to say this with 68% confidence, i.e. k = 1. Measurement 3 has a lower uncertainty, so it is possible to say with 95% confidence that its value is higher than that of measurement 2.

Constructing such a statement and performing the underlying calculations often appears daunting, not least as there are very few examples for electrochemical systems, with pH measurements being one example to have been treated thoroughly [44]. However, a standard process for uncertainty analysis exists, as briefly outlined graphically in Fig. 4. We refer the interested reader to both accessible introductory texts [45] and detailed step-by-step guides [16, 46]. The first steps in the process are to state precisely what is being measured—the measurand—and to identify likely sources of uncertainty. Even this qualitative effort is often revealing. Precision in the definition of the measurand (and how it is determined from experimental indicators) clarifies the selection of measurement technique and helps to assess its appropriateness. For example, where the measurand relates only to an instantaneous property of a specific physical object, e.g. the current density of a specific fuel cell at 0.65 V following a standardised protocol, we ignore all variability in construction, device history etc. and no error is introduced by the sample. By contrast, when the measurand is a material property, such as the turnover frequency of a catalyst material with a defined chemistry and preparation method, variability related to the material itself and to sample preparation will often introduce substantial uncertainty into the final result. In electrochemical measurements, errors may arise from a range of sources including the measurement equipment, fluctuations in operating conditions, or variability in materials and samples. Identifying these sources leads to the design of better-quality experiments. In essence, the subsequent steps in the calculation of uncertainty quantify the uncertainty introduced by each source of error and, by using a measurement model or a sensitivity analysis (i.e. an assessment of how the results are sensitive to variability in input parameters), propagate these to arrive at a final uncertainty on the reported result.
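The propagation step can be sketched numerically with a Monte Carlo draw. The measurement model below (a resistance inferred from a voltage and a current) and its input uncertainties are illustrative assumptions of ours, not values from the text; the point is that variability assigned to each input propagates through the model to a standard uncertainty on the result.

```python
import random
import statistics

# Hypothetical measurement model: R = V / I.
# Nominal values and input uncertainties are illustrative assumptions.
def measure_once():
    V = random.gauss(5.00, 0.02)    # volts, u(V) = 0.02 V
    I = random.gauss(0.100, 0.001)  # amps,  u(I) = 0.001 A
    return V / I

random.seed(0)  # reproducible draw
samples = [measure_once() for _ in range(100_000)]
best = statistics.mean(samples)
u = statistics.stdev(samples)  # propagated standard uncertainty
print(f"R = {best:.2f} ± {2 * u:.2f} ohm (k = 2)")
```

With these inputs the analytic propagation gives a relative uncertainty of √((0.02/5)² + (0.001/0.1)²) ≈ 1.1%, i.e. u ≈ 0.54 Ω on a 50 Ω result, which the Monte Carlo estimate reproduces.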

figure 4

Possible sources of uncertainty are identified, and their standard uncertainty or probability distribution is determined by statistical analysis of repeat measurements (Type A uncertainties) or other evidence (Type B uncertainties). If required, uncertainties are then converted into the same unit as the measurand and adjusted for sensitivity, using a measurement model. Uncertainties are then combined, either analytically using a standard approach or numerically, to generate an overall estimate of uncertainty for the measurand (as indicated in Fig. 3a).
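A minimal numeric sketch of this combination step, with an illustrative Type A input (repeat readings) and Type B input (a quoted instrument tolerance treated as a rectangular distribution); the readings, the tolerance, and the unit sensitivity coefficients are our own assumptions for illustration.

```python
import math
import statistics

# Type A: standard uncertainty from statistical analysis of repeats
# (standard deviation of the mean); readings are illustrative.
repeats = [1.02, 0.98, 1.01, 0.99, 1.00]
u_a = statistics.stdev(repeats) / math.sqrt(len(repeats))

# Type B: a quoted instrument tolerance of ±0.03, assumed rectangular,
# converted to a standard uncertainty by dividing by sqrt(3).
u_b = 0.03 / math.sqrt(3)

# Combine in quadrature (uncorrelated sources, unit sensitivity
# coefficients), then expand with k = 2 as indicated in Fig. 3a.
u_c = math.sqrt(u_a**2 + u_b**2)
print(f"{statistics.mean(repeats):.2f} ± {2 * u_c:.2f} (k = 2)")
```

Here the Type B contribution dominates, which is itself a useful diagnostic: repeating the measurement more times would barely reduce the combined uncertainty.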

Generally, given the historically poor understanding of uncertainty in electrochemistry, we promote increased awareness of uncertainty reporting standards and a focus on reporting measurement uncertainty with a level of detail appropriate to the claim made, or to the scientific utilisation of the data. For example, where the primary conclusion of a paper relies on demonstrating that a material has the ‘highest ever X’ or that ‘X is bigger than Y’, it is reasonable for reviewers to ask authors to quantify how confident they are in their measurement and statement. Additionally, where uncertainties are reported, even simply as error bars on numerical or graphical data, the method by which the uncertainty was determined should be stated, even if that method is consciously simple (e.g. “error bars indicate the sample standard deviation of n = 3 measurements carried out on independent electrodes”). Unfortunately, we are aware of only sporadic and incomplete efforts to create formal uncertainty budgets for electrochemical measurements of energy technologies or materials, though work is underway in our group to construct these for some exemplar systems.

Electrochemistry has undoubtedly thrived without significant interaction with formal metrology; we do not urge an abrupt revolution whereby rigorous measurements become devalued if they lack additional arcane formality. Rather, we recommend using the well-honed principles of metrology to illuminate best practice and increase transparency about the strengths and shortcomings of reported experiments. From rethinking experimental design, to participating in laboratory intercomparisons and estimating the uncertainty on key results, the application of metrological principles to electrochemistry will result in more robust science.

Climent, V. & Feliu, J. M. Thirty years of platinum single crystal electrochemistry. J. Solid State Electrochem . https://doi.org/10.1007/s10008-011-1372-1 (2011).

Nature Editors and Contributors. Challenges in irreproducible research collection. Nature https://www.nature.com/collections/prbfkwmwvz/ (2018).

Chen, J. G., Jones, C. W., Linic, S. & Stamenkovic, V. R. Best practices in pursuit of topics in heterogeneous electrocatalysis. ACS Catal. 7 , 6392–6393 (2017).


Voiry, D. et al. Best practices for reporting electrocatalytic performance of nanomaterials. ACS Nano 12 , 9635–9638 (2018).


Wei, C. et al. Recommended practices and benchmark activity for hydrogen and oxygen electrocatalysis in water splitting and fuel cells. Adv. Mater. 31 , 1806296 (2019).


Chem Catalysis Editorial Team. Chem Catalysis Checklists Revision 1.1 . https://info.cell.com/introducing-our-checklists-learn-more (2021).

Chatenet, M. et al. Good practice guide for papers on fuel cells and electrolysis cells for the Journal of Power Sources. J. Power Sources 451 , 227635 (2020).

Sun, Y. K. An experimental checklist for reporting battery performances. ACS Energy Lett. 6 , 2187–2189 (2021).

Li, J. et al. Good practice guide for papers on batteries for the Journal of Power Sources. J. Power Sources 452 , 227824 (2020).

Arbizzani, C. et al. Good practice guide for papers on supercapacitors and related hybrid capacitors for the Journal of Power Sources. J. Power Sources 450 , 227636 (2020).

Hansen, J. N. et al. Is there anything better than Pt for HER? ACS Energy Lett. 6 , 1175–1180 (2021).


Xu, K. Navigating the minefield of battery literature. Commun. Mater. 3 , 1–7 (2022).

Dix, S. T., Lu, S. & Linic, S. Critical practices in rigorously assessing the inherent activity of nanoparticle electrocatalysts. ACS Catal. 10 , 10735–10741 (2020).

Mistry, A. et al. A minimal information set to enable verifiable theoretical battery research. ACS Energy Lett. 6 , 3831–3835 (2021).

Joint Committee for Guides in Metrology: Working Group 2. International Vocabulary of Metrology—Basic and General Concepts and Associated Terms . (2012).

Joint Committee for Guides in Metrology: Working Group 1. Evaluation of measurement data—Guide to the expression of uncertainty in measurement . (2008).

International Organization for Standardization: Committee on Conformity Assessment. ISO 17034:2016 General requirements for the competence of reference material producers . (2016).

Brown, R. J. C. & Andres, H. How should metrology bodies treat method-defined measurands? Accredit. Qual. Assur . https://doi.org/10.1007/s00769-020-01424-w (2020).

Angerstein-Kozlowska, H. Surfaces, Cells, and Solutions for Kinetics Studies . Comprehensive Treatise of Electrochemistry vol. 9: Electrodics: Experimental Techniques (Plenum Press, 1984).

Shinozaki, K., Zack, J. W., Richards, R. M., Pivovar, B. S. & Kocha, S. S. Oxygen reduction reaction measurements on platinum electrocatalysts utilizing rotating disk electrode technique. J. Electrochem. Soc. 162 , F1144–F1158 (2015).

International Organization for Standardization: Technical Committee ISO/TC 197. ISO 14687:2019(E)—Hydrogen fuel quality—Product specification . (2019).

Schmidt, T. J., Paulus, U. A., Gasteiger, H. A. & Behm, R. J. The oxygen reduction reaction on a Pt/carbon fuel cell catalyst in the presence of chloride anions. J. Electroanal. Chem. 508 , 41–47 (2001).

Chen, R. et al. Use of platinum as the counter electrode to study the activity of nonprecious metal catalysts for the hydrogen evolution reaction. ACS Energy Lett. 2 , 1070–1075 (2017).

Ji, S. G., Kim, H., Choi, H., Lee, S. & Choi, C. H. Overestimation of photoelectrochemical hydrogen evolution reactivity induced by noble metal impurities dissolved from counter/reference electrodes. ACS Catal. 10 , 3381–3389 (2020).

Jerkiewicz, G. Applicability of platinum as a counter-electrode material in electrocatalysis research. ACS Catal. 12 , 2661–2670 (2022).

Guo, J., Hsu, A., Chu, D. & Chen, R. Improving oxygen reduction reaction activities on carbon-supported ag nanoparticles in alkaline solutions. J. Phys. Chem. C. 114 , 4324–4330 (2010).

Arulmozhi, N., Esau, D., van Drunen, J. & Jerkiewicz, G. Design and development of instrumentations for the preparation of platinum single crystals for electrochemistry and electrocatalysis research Part 3: Final treatment, electrochemical measurements, and recommended laboratory practices. Electrocatal 9 , 113–123 (2017).

Colburn, A. W., Levey, K. J., O’Hare, D. & Macpherson, J. V. Lifting the lid on the potentiostat: a beginner’s guide to understanding electrochemical circuitry and practical operation. Phys. Chem. Chem. Phys. 23 , 8100–8117 (2021).

Ban, Z., Kätelhön, E. & Compton, R. G. Voltammetry of porous layers: staircase vs analog voltammetry. J. Electroanal. Chem. 776 , 25–33 (2016).

McMath, A. A., Van Drunen, J., Kim, J. & Jerkiewicz, G. Identification and analysis of electrochemical instrumentation limitations through the study of platinum surface oxide formation and reduction. Anal. Chem. 88 , 3136–3143 (2016).

Jerkiewicz, G. Standard and reversible hydrogen electrodes: theory, design, operation, and applications. ACS Catal. 10 , 8409–8417 (2020).

Ives, D. J. G. & Janz, G. J. Reference Electrodes, Theory and Practice (Academic Press, 1961).

Newman, J. & Balsara, N. P. Electrochemical Systems (Wiley, 2021).

Inzelt, G., Lewenstam, A. & Scholz, F. Handbook of Reference Electrodes (Springer Berlin, 2013).

Cimenti, M., Co, A. C., Birss, V. I. & Hill, J. M. Distortions in electrochemical impedance spectroscopy measurements using 3-electrode methods in SOFC. I—effect of cell geometry. Fuel Cells 7 , 364–376 (2007).

Hoshi, Y. et al. Optimization of reference electrode position in a three-electrode cell for impedance measurements in lithium-ion rechargeable battery by finite element method. J. Power Sources 288 , 168–175 (2015).


Jackson, C., Lin, X., Levecque, P. B. J. & Kucernak, A. R. J. Toward understanding the utilization of oxygen reduction electrocatalysts under high mass transport conditions and high overpotentials. ACS Catal. 12 , 200–211 (2022).

Masa, J., Batchelor-McAuley, C., Schuhmann, W. & Compton, R. G. Koutecky-Levich analysis applied to nanoparticle modified rotating disk electrodes: electrocatalysis or misinterpretation. Nano Res. 7 , 71–78 (2014).

Martens, S. et al. A comparison of rotating disc electrode, floating electrode technique and membrane electrode assembly measurements for catalyst testing. J. Power Sources 392 , 274–284 (2018).

Wei, C. et al. Approaches for measuring the surface areas of metal oxide electrocatalysts for determining their intrinsic electrocatalytic activity. Chem. Soc. Rev. 48 , 2518–2534 (2019).

Lukaszewski, M., Soszko, M. & Czerwiński, A. Electrochemical methods of real surface area determination of noble metal electrodes—an overview. Int. J. Electrochem. Sci. https://doi.org/10.20964/2016.06.71 (2016).

Niu, S., Li, S., Du, Y., Han, X. & Xu, P. How to reliably report the overpotential of an electrocatalyst. ACS Energy Lett. 5 , 1083–1087 (2020).

Ohno, S. et al. How certain are the reported ionic conductivities of thiophosphate-based solid electrolytes? An interlaboratory study. ACS Energy Lett. 5 , 910–915 (2020).

Buck, R. P. et al. Measurement of pH. Definition, standards, and procedures (IUPAC Recommendations 2002). Pure Appl. Chem. https://doi.org/10.1351/pac200274112169 (2003).

Bell, S. Good Practice Guide No. 11. The Beginner’s Guide to Uncertainty of Measurement. (Issue 2). (National Physical Laboratory, 2001).

United Kingdom Accreditation Service. M3003 The Expression of Uncertainty and Confidence in Measurement  4th edn. (United Kingdom Accreditation Service, 2019).


Acknowledgements

This work was supported by the National Measurement System of the UK Department of Business, Energy & Industrial Strategy. Andy Wain, Richard Brown and Gareth Hinds (National Physical Laboratory, Teddington, UK) provided insightful comments on the text.

Author information

Authors and Affiliations

National Physical Laboratory, Hampton Road, Teddington, TW11 0LW, UK

Graham Smith & Edmund J. F. Dickinson


Contributions

G.S. originated the concept for the article. G.S. and E.J.F.D. contributed to drafting, editing and revision of the manuscript.

Corresponding author

Correspondence to Graham Smith .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Communications thanks Gregory Jerkiewicz for their contribution to the peer review of this work.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Smith, G., Dickinson, E.J.F. Error, reproducibility and uncertainty in experiments for electrochemical energy technologies. Nat Commun 13 , 6832 (2022). https://doi.org/10.1038/s41467-022-34594-x


Received : 29 July 2022

Accepted : 31 October 2022

Published : 11 November 2022

DOI : https://doi.org/10.1038/s41467-022-34594-x





Error Analysis in Measurement of Electrical Conductivity

  • Conference paper
  • First Online: 28 July 2022


  • Sahiba Bano,
  • Ashish Kumar,
  • Bal Govind,
  • Komal &
  • D. K. Misra

Part of the book series: Lecture Notes in Electrical Engineering ((LNEE,volume 906))


Technology development primarily needs accurate and precise measurement of any parameter with stated uncertainty. Thermoelectric technology depends upon the performance of material which is gauged by figure of merit (ZT). The ZT comprises three interrelating parameters, namely Seebeck coefficient ( α ), electrical conductivity ( σ ) and thermal conductivity ( κ ). Out of these three parameters, this review article focuses on only electrical conductivity and errors associated with its measurement. Factors causing uncertainty in the electrical conductivity measurement and solutions to minimize them are also discussed for general readership.





Author information

Authors and Affiliations

CSIR-National Physical Laboratory, Dr. K.S. Krishnan Marg, New Delhi, 110012, India

Sahiba Bano, Ashish Kumar, Bal Govind,  Komal & D. K. Misra

Academy of Scientific & Innovative Research (AcSIR), CSIR-NPL Campus, New Delhi, 110012, India


Corresponding author

Correspondence to Sahiba Bano .

Editor information

Editors and Affiliations

CSIR-NPL, New Delhi, India

Sanjay Yadav

Maharaja Surajmal Institute of Technology, New Delhi, India

K.P. Chaudhary

Ajay Gahlot

Yogendra Arya

Aman Dahiya

Naveen Garg

Rights and permissions


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Bano, S., Kumar, A., Govind, B., Komal, Misra, D.K. (2023). Error Analysis in Measurement of Electrical Conductivity. In: Yadav, S., Chaudhary, K., Gahlot, A., Arya, Y., Dahiya, A., Garg, N. (eds) Recent Advances in Metrology . Lecture Notes in Electrical Engineering, vol 906. Springer, Singapore. https://doi.org/10.1007/978-981-19-2468-2_17

Download citation

DOI : https://doi.org/10.1007/978-981-19-2468-2_17

Published : 28 July 2022

Publisher Name : Springer, Singapore

Print ISBN : 978-981-19-2467-5

Online ISBN : 978-981-19-2468-2

eBook Packages : Engineering Engineering (R0)



Types of errors in measurement – sources and corrections

Error is a part of measurement. In an experiment, the aim is to get an accurate result, but practically it is very difficult to achieve 100% accuracy in a measurement. When the measured value differs from the actual value, there is an error in that measurement. That means if the accuracy is not 100%, then there is definitely an error. In another article, we have discussed Accuracy and Precision in a measurement. In this article, we are going to discuss different types of errors in measurement, their sources and corrections.

What are Errors in measurement?

Types of errors in measurement.

The detailed explanation of these errors is given below.

Systematic Errors in measurement

Sources of systematic error

Types of systematic error

Based on their causes, systematic errors are of three types –

Instrumental error

Observational error

Environmental error

Some experiments (especially chemical experiments) depend on environmental conditions like pressure, temperature, humidity, etc. to give good results. Clearly, the environment can introduce errors into an experiment. These are the environmental errors.

Corrections of Systematic Error

Random errors in measurement

Sources of random error

The sources of random error in an experiment are –

Corrections of Random error

As the random error is unpredictable, it is almost impossible to correct the random errors in an experiment.

Gross Error or Human Error

This is all from the article on different types of errors in measurement, their sources and corrections . If you have any doubt on this topic you can ask me in the comment section.


What are sources of error in an experiment to verify Ohm's law?


Sources of error in experiments to verify Ohm's law can be as simple as temperature or pressure. These errors can also be caused by the length and diameter of the conductor used in the experiment.

Sources of errors in an experiment to verify Ohm's law can include inaccuracies in measuring instruments, variations in temperature affecting the resistance of the material, improper connection of wires leading to resistance, and errors in the material's properties affecting its conductivity. Additionally, stray electrical interference or fluctuations in the power supply can also introduce errors in the experiment.


There are three sources of resistance in a parallel circuit. Two of them are rated at 20 ohms, the other at 10 ohms. What is the circuit's total resistance?

In a parallel circuit, the total resistance is calculated as the reciprocal of the sum of the reciprocals of each individual resistance. So, for three resistances of 20 ohms, 20 ohms, and 10 ohms, the total resistance will be 1 / (1/20 + 1/20 + 1/10) = 1 / (0.05 + 0.05 + 0.1) = 1 / 0.2 = 5 ohms.
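The reciprocal-sum rule used above can be checked with a few lines of Python (the function name is our own, for illustration):

```python
def parallel_resistance(*resistances):
    """Total resistance of parallel resistors: 1/R_total = sum of 1/R_i."""
    return 1 / sum(1 / r for r in resistances)

print(parallel_resistance(20, 20, 10))  # → 5.0
```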

What are Ohms the measurement of?

Ohms are the unit of measurement for electrical resistance. It indicates how much a material resists the flow of electric current.

What is the equivalent of 1 ohm in watts?

There is no direct conversion between ohms and watts as they are different units of measurement. Ohms measure resistance while watts measure power. However, you can calculate power (in watts) using the formula P = V^2 / R, where V is voltage in volts and R is resistance in ohms.
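A sketch of that formula in Python (names are illustrative):

```python
def power_watts(volts, ohms):
    """Power dissipated in a resistance: P = V**2 / R."""
    return volts**2 / ohms

print(power_watts(10, 1))  # → 100.0 (10 V across 1 ohm)
print(power_watts(12, 6))  # → 24.0  (12 V across 6 ohms)
```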

What is the conductance of a wire that has a resistance of 400 ohms?

The conductance of a wire is the reciprocal of its resistance. Therefore, for a wire with a resistance of 400 ohms, the conductance would be 1/400 siemens, or 0.0025 siemens.

A length of circular wire has a resistance of 1 ohm. If its diameter were doubled, what would its resistance be?

If the diameter of the circular wire is doubled, the resistance will decrease by a factor of four, resulting in a resistance of 0.25 ohms. Resistance is inversely proportional to the cross-sectional area of the wire, which is affected by the diameter.
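The inverse dependence on cross-sectional area can be verified numerically; the resistivity and dimensions below are arbitrary illustrative values, not from the question:

```python
import math

def wire_resistance(resistivity, length, diameter):
    """R = rho * L / A for a circular cross-section, A = pi * d**2 / 4."""
    area = math.pi * diameter**2 / 4
    return resistivity * length / area

# Arbitrary illustrative values (copper-like resistivity, 1 m of 1 mm wire)
r1 = wire_resistance(1.7e-8, 1.0, 1e-3)
r2 = wire_resistance(1.7e-8, 1.0, 2e-3)  # same wire, diameter doubled
print(r1 / r2)  # → 4.0: doubling the diameter quarters the resistance
```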

imp

Top Categories

Answers Logo


Understanding and dealing with high frequency error sources in oscilloscope calibration.


Paul Roberts, Fluke Precision Measurement Ltd, Norwich, UK.

Oscilloscope calibration has traditionally been considered part of the DC & LFAC workload. Nowadays the average scope found on a development engineer’s or test technician’s bench is a 1GHz bandwidth device, and bandwidths of several GHz are common. The processes and equipment used to calibrate these higher bandwidth scopes have also increased in frequency, taking oscilloscope calibration into the realm of RF & Microwave. Metrologists and calibration technicians increasingly have to deal with a variety of high frequency considerations such as VSWR and matching errors. Understanding how these sources of error in oscilloscope calibration influence the results and how oscilloscope calibration methods address them can simplify the task of scope calibration, reduce errors and ease the burden of uncertainty analysis.

Introduction

Calibrating the vertical (Y) channel accuracy of oscilloscopes is usually performed with DC or low frequency squarewave signals. Bandwidth is generally tested with high frequency sinewaves. Nowadays bandwidths of 1GHz and 50Ω inputs are common, where previously the average scope was likely to have a 250MHz bandwidth and only a high impedance (1MΩ) input. At these higher frequencies, loading, impedance matching and VSWR effects can have a significant impact on the measurement.

Testing the vertical channel pulse response is another measurement involving high frequency signals, in this case fast risetime pulses which contain significant high frequency content. Vertical channel risetime is usually tested by applying a high speed edge and observing the displayed transition time from 10% to 90% of the step size. Pulse aberrations (pre-shoot and overshoot) are often also measured. Impedance mis-matching between the pulse source and the scope input can cause reflections which will introduce unwanted pulse shape distortions and influence results.

The design of oscilloscope calibrators minimises the impact of these high frequency effects, but calibration technicians and metrologists must be aware of their impact, particularly in the context of laboratory accreditation and performing measurement uncertainty analyses.

Impedance matching basics

High frequency signal sources, including oscilloscope calibrators, have 50Ω outputs and are calibrated in terms of the level developed across a correctly terminated load. Errors from nominal or variation with frequency of either the source output impedance or load impedance will have an effect on the signal level developed across the load – in this case the scope input.

If a source of voltage VS and output impedance RS is connected to a load of impedance RL, the voltage developed across the load is VL = VS × RL/(RS + RL). In practice, the impedances are not purely resistive, particularly at high frequency, and are not expressed directly in terms of resistance, capacitance and inductance, but in terms of VSWR (voltage standing wave ratio) or return loss (which is related to VSWR). These are high frequency terms which describe how much the actual impedance deviates from the nominal (50Ω) impedance. (In practice, they relate the incident and reflected signals at a mis-match.) If either the source impedance or the load impedance is extremely close to the nominal 50Ω, the impact of deviation from nominal (mis-match) of the other impedance is minimised. By careful design, the source impedance of the oscilloscope calibrator can be made very close to a perfect 50Ω (low VSWR). However, its value should still be considered when assessing the impact of the load (scope input) VSWR.
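The voltage-divider relation can be sketched as follows (function name ours); it shows both the ideal matched case and how a small load-impedance error shifts the developed level:

```python
def load_voltage(v_source, r_source, r_load):
    """VL = VS * RL / (RS + RL): signal level developed across the load."""
    return v_source * r_load / (r_source + r_load)

# A correctly terminated 50-ohm source delivers half its EMF to the load
print(load_voltage(1.0, 50.0, 50.0))  # → 0.5
# A load 5% above nominal raises the developed voltage by roughly 2.4%
print(load_voltage(1.0, 50.0, 52.5))
```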

The impact of mis-match on signal level accuracy can be assessed by considering the VSWR of source and load. These parameters will usually appear in the equipment specifications.

Uncertainty analysis considerations

The expression for mis-match error that frequently appears in RF & microwave theory literature derives the error in terms of power. Oscilloscopes are calibrated in terms of voltage, so the mis-match error should be expressed in terms of voltage. If the errors are small, the power error can simply be halved to obtain an equivalent voltage error without significant loss of accuracy (voltage is proportional to the square root of power, so the sensitivity coefficient relating voltage to power is ½). Alternatively, the mis-match error expression can be written directly in terms of voltage.

Typical oscilloscope 50Ω input VSWR can be 1.5 up to 1GHz. The higher bandwidth (4 – 6GHz) instruments are generally better, achieving VSWR of 1.1 up to 2GHz, rising to 1.3 at 4 – 6GHz, with some examples exhibiting typical VSWR of 2.0 at 4GHz. On their lower (more sensitive) ranges, VSWR can be worse because there is less ‘padding’ from attenuators (which tend to improve matching) before the input stage. For example, the Tektronix TDS6604 VSWR is specified as typically 1.3 at 6GHz for >100mV/div and 2.5 at 6GHz for <100mV/div.

Oscilloscope calibrators are designed to provide low VSWR outputs, and typical values (in this case for the Fluke 9500B with 9560 6GHz active head) are <1.1 up to 550MHz, <1.2 for 550MHz – 3GHz and <1.35 for 3GHz – 6GHz. The effect of calibrator and UUT VSWR on mis-match error is shown in the chart below, using the listed values for calibrator VSWR. The chart shows the worst case errors for particular UUT VSWR values. Note that the plots are slightly asymmetric – the amount by which a given mis-match can reduce the signal amplitude is slightly more than the amount by which it can increase the amplitude. The larger figure would usually be used as a worst case plus-or-minus contribution for uncertainty analysis purposes. (9500B users should note that the effect of UUT input VSWR of up to 1.6 is included in its published specifications.)

When performing uncertainty analyses, the effect of mismatch on amplitude accuracy should be treated as one of the type B (systematic) contributions. Because of the nature of source and load mis-match errors, the contribution has a U-shaped distribution (the majority of other type B contributions are treated as having rectangular or normal distributions). The uncertainty due to mis-match should be calculated from the VSWR information and divided by root two to express it as a standard uncertainty for combination with the other uncertainty contributions.
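
A minimal numeric sketch of this calculation (Python; the VSWR values are illustrative, not from any particular instrument), using the common small-error approximation in which the worst-case voltage mismatch limit is the product of the source and load reflection coefficient magnitudes:

```python
import math

def mismatch_std_uncertainty(vswr_source, vswr_load):
    """Worst-case voltage mismatch limit g = Gamma_S * Gamma_L, treated as
    a U-shaped distribution, so the standard uncertainty is g / sqrt(2)."""
    gamma_s = (vswr_source - 1.0) / (vswr_source + 1.0)
    gamma_l = (vswr_load - 1.0) / (vswr_load + 1.0)
    limit = gamma_s * gamma_l          # worst-case fractional voltage error
    return limit, limit / math.sqrt(2.0)

# Illustrative values: calibrator output VSWR 1.2, scope input VSWR 1.5.
limit, u_std = mismatch_std_uncertainty(1.2, 1.5)
print(f"limit = ±{limit*100:.2f} %, standard uncertainty = {u_std*100:.2f} %")
# limit = ±1.82 %, standard uncertainty = 1.29 %
```

The √2 divisor is the standard treatment for a U-shaped distribution, consistent with the text above.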

[Chart: uncertainty (V) arising from UUT mismatch vs frequency]

Oscilloscope bandwidth testing

The impact of mismatch on signal level accuracy can be evaluated as described above. It should then be applied to the bandwidth measurement, but the manner in which it is applied depends on the bandwidth test method.

A common technique for bandwidth testing recommended by the oscilloscope manufacturers in the verification procedures documented in their product manuals is to determine the relative display amplitude at the nominal bandwidth frequency. Usually this involves measuring the relative amount by which the displayed amplitude falls, expressed in dB, at the nominal bandwidth frequency with respect to the amplitude at a lower reference frequency and confirming the change is <3dB. The signal amplitude uncertainty can be calculated as described above and applied directly – remembering to convert the uncertainty from a linear (%) ratio to dB if the result is expressed in dB.
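
Converting a linear voltage uncertainty to dB is a one-line calculation. The sketch below (Python, with an illustrative ±1.8% voltage uncertainty) shows the conversion, including the slight asymmetry of the dB limits:

```python
import math

def ratio_to_db(ratio):
    """Convert a voltage ratio to dB (20*log10 for voltage quantities)."""
    return 20.0 * math.log10(ratio)

# An illustrative ±1.8 % voltage uncertainty maps to slightly
# asymmetric dB limits around 0 dB.
print(round(ratio_to_db(1.018), 3))  # 0.155
print(round(ratio_to_db(0.982), 3))  # -0.158
```

The larger of the two magnitudes would typically be carried forward as the plus-or-minus dB contribution.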

An alternative method is to determine the frequency at which the signal amplitude falls by 3dB relative to a lower reference frequency (to 70.7% in voltage terms). Conversion from an amplitude uncertainty to a frequency uncertainty must be performed by considering the slope of the scope frequency response. There is generally a factor of two relation between frequency and voltage uncertainty for typical oscilloscope roll-off characteristics, but this will depend on particular scope design characteristics. A simple determination can be made by deviating the frequency above and below the measured 3dB point by equal amounts (a few percent) and measuring the change in displayed amplitude. Modern oscilloscopes with readout features ease the task of determining the value of such a small amplitude change. If the roll-off is smooth, without excessive ‘peaking’ close to the 3dB point, the amplitude changes for equal frequency changes above and below the 3dB point should be similar, and can be used to determine the amplitude-to-frequency conversion factor to be applied to the amplitude uncertainty figure.
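
The slope determination described above can be sketched numerically. The measurement values below are hypothetical, purely for illustration: stepping the frequency ±2% around the 3dB point is assumed to change the displayed amplitude by about 0.10dB each way.

```python
# Hypothetical measured data (not from any particular scope):
freq_step_pct = 2.0        # frequency deviation applied around the 3 dB point (%)
amp_change_db = 0.10       # resulting displayed amplitude change (dB)

# Local slope of the roll-off, in dB per percent of frequency.
slope_db_per_pct = amp_change_db / freq_step_pct

# A 0.16 dB amplitude uncertainty (e.g. from the mismatch analysis)
# then corresponds to a frequency uncertainty of:
amp_uncertainty_db = 0.16
freq_uncertainty_pct = amp_uncertainty_db / slope_db_per_pct
print(round(freq_uncertainty_pct, 2))   # 3.2  (percent of the 3 dB frequency)
```

A steeper measured roll-off (larger slope) would shrink the frequency uncertainty for the same amplitude uncertainty.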

Oscilloscope pulse response testing

The effect of mis-match on the fast edge signals used for pulse response testing will be to cause reflections. The waveform displayed on the oscilloscope will be the combined effect of the edge from the oscilloscope calibrator and the lower amplitude reflection from the scope input. The magnitude of the reflection will depend on the calibrator source and oscilloscope input VSWR, and the timing of the reflection will depend on the effective transmission line length between the source and load. The effect is minimised by the design of the scope calibrator which provides a low VSWR source, particularly for the high speed pulses available from calibrators with active head designs. Low source VSWR will ensure the majority of any reflected signal is absorbed by the source, rather than being re-reflected again.
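
The timing of a reflection can be estimated from the electrical length of the interconnection, since the reflection from the scope input must travel back to the source and return. A rough sketch (Python, assuming a typical coaxial cable velocity factor of about 0.7; the 1m length is illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def round_trip_delay(cable_length_m, velocity_factor=0.7):
    """Delay after the incident edge at which a reflection re-appears at the
    scope input: one round trip of the cable (two extra transits)."""
    return 2.0 * cable_length_m / (C * velocity_factor)

# For a 1 m cable the re-reflection arrives roughly 9.5 ns after the edge.
print(f"{round_trip_delay(1.0) * 1e9:.1f} ns")   # 9.5 ns
```

This is why reflections tend to disturb the later, aberration region of the displayed pulse rather than the initial transition.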

If reflections occur they generally impact the observed pulse aberrations rather than the observed risetime because the reflections appear back at the scope input after a delay, and therefore influence a later part of the displayed pulse shape. Generally, this is the part of the waveform where the oscilloscope aberrations specifications are least stringent and therefore any reflections have minimal impact on the test outcome.

[Figure: screenshots showing the effect of reflection when viewing a 150ps falling edge transition in a 20GHz bandwidth. Left: deliberate mismatch and cable delay. Middle: mismatch with no delay. Right: reflection absorbed by a good source match. Vertical scale 2%/div, horizontal scale 500ps/div.]

Excessive reflections may be an indication of poor VSWR caused by damage to the scope input, for example from an accidental overload during usage.

Conclusions

The design of dedicated oscilloscope calibration solutions minimises the effect of impedance mismatches, but calibration technicians and metrologists should be aware of their impact:

  • The effect of scope input VSWR can easily be assessed and included as an uncertainty contribution when making bandwidth tests.
  • Mis-match effects should be considered as having a U-shaped distribution for uncertainty analysis purposes, and the worst-case limit divided by √2 when converting to a standard uncertainty.
  • Mis-match effects can also influence pulse testing results by causing reflections, and excessive observed aberrations or anomalies may be indicative of scope input damage.

References

‘Microwave Datamate’, IFR reference data booklet, ref no 46891/861.

Fundamentals of RF and Microwave Power Measurements, Agilent Technologies, Application Note 1449-3.

Tektronix CSA7000 Series, TDS7000 Series, & TDS 6000 Series Instrument User Manual, TDS3000 Series User Manual.

Fluke 9500B Operation and Performance Handbook.

Calibration of Oscilloscopes, European cooperation for Accreditation of Laboratories, EAL-G30.
