For example, if you were asked “3 divided by 2 rounded to three significant digits” how could “1.50” be a sufficient answer, when the ‘0’ is ostensibly insignificant? How could any answer past two significant digits be meaningful when the correct answer only has two?
Comments
The 0 itself shows a meaningful measurement, indicating that the actual value lies somewhere between 1.495 and 1.505. If you just stated 1.5, that could imply an actual value anywhere between 1.45 and 1.55.
When the software you are entering the result into requires an expected number of digits. This is especially true for a lot of accounting software.
1.5 can be 1.48, or 1.52, or anything between 1.45 and 1.55. 1.50 is exactly 1.50.
It can be significant when you don’t know how the calculation was made.
For example, if someone measured something as “2”, then it could have been anywhere from 1.5 to 2.49999, whereas if it’s “2.0” the error bars are 1.95 to 2.04999.
For the first one, the error could have been as high as 33% (true value 1.5, reported as 2), whereas for the second one it’s a limit of just over 2.5% error.
“1.5” might be 1.51, 1.52, 1.53, or 1.54 rounded to 1.5.
“1.50” is specifically not 1.51, 1.52, 1.53, or 1.54.
It demonstrates that you’re not rounding to two significant digits. Without the 0, it leaves the question of what that digit would have been had it been there.
If you just say 1.5, how do I know if you mean 1.50 or 1.54 rounded?
The number of significant digits tells you how accurate it is and where any rounding is occurring.
It basically tells you how precisely the value is known. It’s specifically a zero at the second decimal, not a rounding artifact or something.
E.g. two significant figures (1.5) means the number is somewhere between about 1.45 and 1.54; the last digit (“5”) is rounded to the nearest tenth. But three significant figures (1.50) means the number is somewhere between 1.495 and 1.504.
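To get a concrete feel for those implied ranges, here's a small Python sketch. The `implied_interval` helper is made up for illustration, not a standard function:

```python
# Sketch only: implied_interval is a made-up helper, not a standard function.
def implied_interval(s: str) -> tuple[float, float]:
    """Bounds implied by the written decimal precision of s."""
    places = len(s.split(".")[1]) if "." in s else 0
    half = 0.5 * 10 ** (-places)   # half of the last written decimal place
    x = float(s)
    return (x - half, x + half)

lo, hi = implied_interval("1.5")
print(round(lo, 6), round(hi, 6))   # 1.45 1.55
lo, hi = implied_interval("1.50")
print(round(lo, 6), round(hi, 6))   # 1.495 1.505
```

The key point: the helper works on the *string*, because "1.5" and "1.50" are the same float but different claims about precision.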
Whole numbers typically can have as many significant figures as necessary for whatever given calculation, because they’re a count and not a measurement.
If I give you six basketballs and tell you to divide them in two, you’re not limited to one significant figure, because the objects themselves are countable quantities.
So constants and whole items have limitless significant digits, but measurements will not have more significant digits than the limitation of the measurement device.
If you only write 1.5, that could indicate anything from 1.45 to 1.54 that’s been rounded, but 1.50 specifies that the number is much closer to exactly 1.5.
For that example I don’t think it is. But I think it’s something like this: if I say 1.5, it could actually be anything from 1.45 to 1.54 that I rounded to 1.5. If I say 1.50, that accuracy is that much more guaranteed and known.
Don’t quote me. Trying to remember what my math prof said about this.
Significant digits and theoretical infinities are fun!
It simply signifies that it wasn’t rounded from 1.46, 1.51, 1.54 etc., but actually 1.50.
It denotes the precision. It doesn’t make sense in your example of pure math because you’re given numbers. But in reality, most numbers come from somewhere, for example a measurement. You need to take care to keep and communicate precision, and to not overstate it either. Your precision can never be better than your initial measurement’s.
Math is a language. It’s not about what you understand, it’s about sharing what you understand.
Decimals indicate precision. Let’s say you measure exactly 1.50 miles but drop the 0. Show someone that number and they have no idea you measured exactly 1.50; they’ll think it was about 1.5.
It’s significant because you were asked to provide it.
It’s about precision. I work with machines and have to measure tolerances. All of the measurements keep the same number of significant digits even if they are zero. If you’re measuring in thousandths, an answer in hundredths will not suffice.
Sig digs are a way of expressing precision. If I tell you I took a real-world measurement and got 1.2 meters, that doesn’t mean the thing is exactly 1.2 meters long. If we get out precise measurement tools and re-measure it, it may be 1.2097576 meters, and if we did do that, you wouldn’t say that I lied to you because, to the closest tenth, it is 1.2 meters.
If I measured it to the centimeter, though, I wouldn’t tell you I measured it to be 1.2 meters, I would have told you to the closest centimeter, so I would’ve said it’s 1.21 meters. I will add the number of decimals that indicate the precision of my measurement.
So … what if I measured this item to the millimeter? What is the closest millimeter to the thing’s actual length of 1.2097576 meters? It’s 1.2 to the tenth meter, 1.21 to the centimeter. Is it 1.209 to the millimeter?
No, that can’t be right, we should round up because the next digit is a 7, so the answer is 1.210 meters. This is so because if we were measuring in millimeters, we wouldn’t say 1209 mm and we wouldn’t say 1211 mm, we would say 1210 mm. But there’s a problem here because 1210 mm only has three sig digs, so to indicate that we mean to include the zero to the left of the decimal point as a sig dig, we should place a decimal point: 1210. mm.
If we want to report this in meters, we just move that decimal over three places: 1.210 m.
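The walkthrough above maps directly onto fixed-point formatting; a quick sketch rounding that same 1.2097576 m reading to each device's resolution:

```python
# The same reading rounded to each device's resolution via fixed-point formatting.
length_m = 1.2097576
print(f"{length_m:.1f} m")   # 1.2 m   (nearest decimeter)
print(f"{length_m:.2f} m")   # 1.21 m  (nearest centimeter)
print(f"{length_m:.3f} m")   # 1.210 m (nearest millimeter; trailing zero kept)
```

Note that the `.3f` format keeps the trailing zero automatically, which is exactly the point of writing 1.210 rather than 1.21.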
Your example doesn’t really illustrate why significant figures are used. They represent a calculation made based on real measurements. The outcome of your calculation can’t be more accurate than the measurements themselves.
For example, say I want to calculate the average speed of an object traveling 15 m over 8 seconds. That would give you a speed of 15/8 m/s, or 1.875 m/s. But you can’t report that, because it has more significant figures than either of the measurements. You have to round to the fewest significant figures among your measurements (here one, from the 8 s), making the answer 2 m/s.
When adding measurements, it’s a bit different. There it’s the number of decimals that matters. If I measure a distance of 1.8 m with one ruler, and a distance of 0.08 m with a more accurate ruler, and add those two together, I can’t say it is 1.88 m. I can’t claim to be that accurate if one of my rulers is only accurate to 1 decimal. So the answer is 1.9 m.
edit: forgot about your actual question. If I measure 15.00 m and 8.0 s in that first example, that indicates I’ve used more accurate measuring devices, so my end result can also be more accurate (1.9 m/s). That’s what the zeroes at the end are for.
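A sketch of that rounding in Python; `round_sig` is a hypothetical helper (there is no sig-fig rounding built in):

```python
# round_sig is a hypothetical helper, not a Python built-in.
from math import floor, log10

def round_sig(x: float, n: int) -> float:
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - floor(log10(abs(x))))

speed = 15 / 8               # 1.875 m/s, more digits than the inputs justify
print(round_sig(speed, 1))   # 2.0 -> report "2 m/s" (the 8 has one sig fig)
print(round_sig(speed, 2))   # 1.9 -> "1.9 m/s" if we measured 15.00 m and 8.0 s
```

The helper shifts the rounding position by the number's magnitude, so it works for 1.875 and 1875 alike.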
In addition to what others have said, there is also a style to this that indicates the precision in a particular way: N.NNN x 10^(n). For example, 5.000 x 10^(8) instead of 500,000,000. Written this way it indicates that only the three zeros after the 5 are known or significant, rather than all eight if the number is written out.
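Python's scientific-notation formatting shows the same idea; a small sketch:

```python
# Scientific notation makes the count of significant figures explicit.
value = 5.000e8
print(f"{value:.3e}")    # 5.000e+08 -> four sig figs, trailing zeros shown
print(f"{value:,.0f}")   # 500,000,000 -> sig figs are ambiguous written out
```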
In your example, it doesn’t. 3 and 2 are pure (integer) numbers and the answer is exactly 1.5.
Significant figures really come into play when you are measuring something that isn’t countable with integers. Example: we need 2 L of soda. It costs time and money to be extra precise. If the soda company says “anything between 1.95 and 2.05 L is ok”, then close enough is fine. It may be 2.016 or 2.013, but we don’t care; it is close enough. It costs a lot more for extra precision.
The last digit is one that you are never really confident about. If I get a number 1.5 when measuring, it could be anything between 1.45 and 1.54.
If I say 1.50 by measuring, I am saying I am confident the actual value is somewhere between 1.495 and 1.505. This is more accurate, but more difficult to measure.
There’s a joke in math that 1+1 = 3 for large values of 1. Especially when you are in the early stages of learning calculus, math teachers often tell it.
If you have 1.4 but round that number to a whole number, you’d round down to 1. If you have 1.6, you’d round up to 2. So for “large values of 1” like 1.4, you can see that 1.4 + 1.4 would be 2.8, which when rounded to a whole number is 3.
If you’re strictly doing theoretical math, the difference between 1.5 and 1.50 is not as important. But what if you’re measuring something, like distance? 1.5 miles means your measured distance has an uncertainty of about a twentieth of a mile either way (anywhere from 1.45 miles to 1.55 miles). 1.50 implies a higher degree of accuracy.
In school the issue of significant figures is usually taught in chemistry classes.
A good example is pi.
Usually you use 3.14 for pi, but the more digits you use, the more accurate you are.
NASA, for example, uses 15 digits of pi for space travel; 2 digits wouldn’t be safe.
When you’re doing any calculation that involves measured data, it’s important to know how precise those measurements are. If you write down your data as “1.5” or “1.50” or “1.5000”, that implies that your measurements were more and more precise, down to hundredths or ten-thousandths instead of tenths. That’s what’s meant by “significant digits”.
Like, if you drive 18 miles to work and it takes you around 35 minutes, you wouldn’t say you averaged 30.857142 mph. Your measurements weren’t that exact. Your measured data only had two significant digits, so you have to state your answer in two significant figures too, just 31. On the other hand, if you said you traveled exactly 18.0 miles in 35 minutes and 00 seconds, now your final answer can be much more precise.
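The commute arithmetic above, sketched in Python:

```python
# 18 miles in 35 minutes; both inputs have two significant figures.
miles, minutes = 18, 35
mph = miles / (minutes / 60)   # 30.857142... mph, unjustifiably precise
print(round(mph))              # 31 -> rounded to match the two-sig-fig inputs
```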
It’s a simple statement that it was not rounded to 1.5; it is accurate to 1.50. 1.5 implies it could have been 1.51 and the .01 was dropped. In some cases that amount of accuracy is important.
Let’s say you measured something using a ruler and found it was 15 cm long. If you wrote it in meters, it would be 0.15m.
If I read that, I would know that the true value is more than 14.5cm and less than 15.5cm. That’s the range for error on your measurement using your tools.
Now redo it with a measurement of 10cm. If you write 0.1m, then when I read it, I might assume that it’s from 0.05m to 0.15m based on your ability to measure it. If you instead write 0.10m, then I know it’s between 0.095m and 0.105m true value.
Think of it this way: if 1.51 means something, and 1.59 means something, why not 1.50?
It’s one digit more of accuracy that happens to be at zero instead of 1 or 9.
In the example of 1.5 vs. 1.50: 1.5 can mean anything from 1.45 to 1.54, where as 1.50 means anything from 1.495 to 1.504
This is why the trailing zero is significant, it indicates the precision of measurement.
Precision.
You got something that when you measure it is like “1.49647…”, but what you’re using cannot consistently and reliably measure that far. So you round it to 3 significant digits, which happens to be 1.50. That’s just the result of rounding it off.
Otherwise, at 2 sig figs it would be roughly 1.5, and that .5 could really have been closer to a 6 or a 4 that was rounded off to a 5.
In the example you gave, the trailing zero is not significant as the inputs are exact values. Exactly 3 divided by exactly 2 is exactly 1.5.
However, if the 3.00 represents grams of a solute and 2.00 represents liters of a solvent, then you would say you have 3.00 g / 2.00 L = 1.50 g/L, as the 3 and 2 are not exact but represent an estimate to three significant figures. It could be that you have 3.002 g or 2.998 g, but your equipment is not precise enough to tell them apart.
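A minimal sketch of that concentration calculation, with the formatting carrying the trailing zero:

```python
mass_g = 3.00      # three sig figs: somewhere in 2.995..3.005 g
volume_L = 2.00    # three sig figs: somewhere in 1.995..2.005 L
conc = mass_g / volume_L
print(f"{conc:.2f} g/L")   # 1.50 g/L -> the trailing zero is earned here
```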
Think of any extra place after the decimal as zooming in, even if it’s a zero.
A steel cube could be 1.5mm wide from far away but 1.53423114mm if you zoom in.
But if the number is 1.5000000 then you know there’s nothing extra even if you zoom in close.
“3 divided by 2 rounded to three significant digits” really should have been written as “3.00 divided by 2.00”. Then your answer is 1.50.
Whereas “3 divided by 2” is 2, to one significant figure.
If something is 1.500 m it’s accurate to the nearest mm, 1.50 m is accurate to the nearest cm, and 1.5 m is accurate to the nearest dm.
1.50 means you want 1.50, as opposed to 1.51 or 1.49.
It denotes the accuracy of the measurement I think.
It indicates the precision of the measurement.
“1.5” has two digits of precision, and represents a measurement that’s good to roughly one part in a hundred.
“1.50” has three digits of precision, and represents a measurement that’s good to roughly one part in a thousand.
If the zeroes aren’t significant, you’d say 1.5 instead of 1.50 or 1.5000.
Significant digits aren’t a thing in math, only in science and engineering. “But wait, it’s all about math and numbers??” No, it’s about what those numbers represent
Your question sounds a lot like something from a homework assignment in which case the numbers and what they represent really is irrelevant; the only relevant thing is that you can prove to your teacher what “two significant digits” means.
IRL, if I give you a 3ft section of pipe and tell you to cut it in half, are you going to give me a section that measures 1.54′ and another that measures 1.46′? Maybe that’s ok, or maybe I need 2 sections of pipe that are 1.50′ to fit in whatever we’re building. If I’m standing right there you could just ask, but what if you’re working in a fab shop and I just send you a drawing? You better hope I note what the measuring significance is.
Or maybe you’re baking bread; the recipe calls for 3g of yeast but you’re only making a half loaf. That’s a sig fig question even if you don’t realize it. Do you get a scale with 2-decimal accuracy, or do you trust a scale that measures to 0.5g enough to differentiate between 1.5 and 2? Does it actually matter in this case if you use 1.5g of yeast vs 2g?
In isolation sig figs are useless, but when you’re talking about physical properties of real things significant digits/figures become relevant.
The zero means we know it is not somewhere between 1.46 and 1.54. We know it is 1.50. The zero is not insignificant as it contains information about what that digit is.
ELI5: If mom says I need to split the candy bar with my younger brother and only specifies 0.5, then perhaps I could give him only 46% of the bar while I enjoy 54% of the bar. That means my piece is 17% larger than his.
1.5 can be anywhere between 1.45 and 1.55.
1.50 can be anywhere between 1.495 and 1.505
It tells you the precision of the measurement.
When you are a machinist, it defines the accuracy. 1.1″ is a slop fit. Your cat can eyeball that. If the fit is 1.1000 that requires proper equipment and tools. 1.100 is standard for normal, lazy, machining. 1.1000 is a precision fit. 1.10000 becomes difficult.
It’s all about precision. 1.1 is not equal to 1.1001.
I think it’s helpful to consider this for a whole number. If someone tells you there were 100 people at an event, we would typically think only the 1 is significant in that number. There were around 100 people there (could be 92, could be 121).
If someone says there were 101 people there, then all three digits are significant. There were exactly 101 people there.
If I know for certain that exactly 100 people were at an event, then all three digits in 100 are significant.
You have to actually think about what significant digits are. They are indicating the accuracy of a real measurement.
So let’s say you have a 5 meter stick with markings at every mm. You measure something and it is exactly half a meter. You could write that as .5 m, but this does not convey the significance, because you actually know the correct measurement down to the mm.
So you would write it as .5000 m, or 500.0 mm. This tells another person looking at the measurement “I used a tool that measures down to the mm and estimated the final digit”. Now you’re actually conveying information. If you take away the zeroes and just write .5 m you’re telling another person “I used a tool that measures down to the meter and actually just guessed.”
This sounds to me like chemistry math or something similar. Rounding is assumed beyond the final significant digit. In chemistry, basically, a given device is rated to measure to X significant digits. For this example, 1.5 vs 1.5000000 implies a level of accuracy in your measurement. In your specific example of dividing and listing an answer out, I’m unsure exactly where this would come up, unless the source of all your digits was implied to be measurements taken from devices with a similar specified accuracy.
Those decimal places or zeros represent a level of precision. If I tell you something is 1.5 feet long it could be anywhere from 1.46 to 1.54 feet long. So it could be almost half an inch shorter or longer. That’s not a big deal for some situations but it could be a very big deal or other situations. If I tell you something is 1.50 feet long, it could be anywhere from 1.495 feet to 1.504 feet – only 0.06″ longer or shorter. That’s a much smaller difference, right?
And getting more and more precise can be pretty expensive because it requires good tools to measure and create or trim something to a very precise length. It’s not very expensive for me to get a piece of steel that’s 4″ long but it’s very very expensive to get a piece of steel 4.000″ long. As a young engineer I know once discovered after sending some drawings to a machinist. He thought three decimal places looked much nicer so he drew out a fairly simple part with every dimension to three decimal places. The machine shop called up asking if the company really wanted to spend thousands of dollars on that piece (no engineers were fired in learning this lesson)
Significance gives information on the accuracy of the tool/device used to obtain the measurement. The zero on the end is significant because it lets you know that we are confident that it should be zero and not another value.
Rounding.
How many decimals are exactly correct?
The last decimal is always rounded.
That’s it.
So every place is important, even when the last one rounds to 0
In measurements it implies accuracy. If I tell you I measured something at 5 mm, or at 5.000 mm you can immediately gather that for the first measurement I probably used a normal ruler but for the second a set of calipers for example accurate to 1 micrometer.
This confused me in high school and wasn’t properly explained until I got a job where I was taking measurements down to the .0001.
So say I measure something that has to be between .5998 and .6000 inches thick. And when I take the measurement, with a tool capable of measuring down to the .0001, I get .6000 exactly. I could write it up as .6 and call it a day. The thing is, .6 could mean that I measured with a tool only capable of measuring to that first decimal place. It could mean that it’s .6283, which is way out of limits. Every 0 I add is me saying that I know for a fact that the measurement is exact, down to that decimal place. In reality it could be .60003527, but my limits are fine with that extra .00003527; they only call for accuracy down to the .0001. So I added those extra three 0s.
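A quick sketch of how that same reading looks at different written precisions (values from the example above):

```python
reading = 0.6              # the gauge actually read .6000 exactly
print(f"{reading:.4f}")    # 0.6000 -> every written place was truly measured
print(f"{reading:.1f}")    # 0.6    -> could be hiding an out-of-limit .6283
```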
It’s important in things like science, engineering and manufacturing. The last digit carries the uncertainty; the ones preceding it are certain. So if your number is 1.50, the .5 is certain and your tolerance would be in the .0X location. If you said your answer was 1.5, the uncertainty is in the .5 position, which means there is a possibility, due to whatever tolerance is in your measuring device, that this number is not accurate.
Depending on the application and scenario, the number of digits beyond the decimal are incredibly important indicators of precision and required tolerance or accuracy you must maintain. Your tooling needs to be an order of magnitude greater in its precision than whatever you’re trying to measure. So adding an extra 0 lets people know the number preceding it is accurate and not subject to any uncertainty.
Imagine an old school balance. On one side you have an object you’re weighing and on the other side standard weights. If you weigh the object and use 10 0.1 pound weights you could actually say the object weighs 1.0 pounds. The 0 at the end is significant because that is the degree of accuracy you are measuring to.
As someone who has a 30 year career in precision machining, the number of decimals tells me just how important that measurement/dimension is. And how precise i need to be.
I am capable of grinding a piece of steel within .00005 inches of the blueprint if it says something like 1.2500.
But if the blueprint tells me it needs to be 1.25 inches, I’ll just rip that shit.
I work in a CNC shop so when reading blueprints if a dimension is called out .01 vs .010 they have different tolerances. .xx usually is +/- .01 and .xxx is +/-.005
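That callout convention can be sketched as a simple lookup. The tolerance values here are the ones quoted in that comment, for one particular shop, not an industry standard:

```python
# Map decimal places in a dimension callout to its implied tolerance
# (values from the shop convention above; assumed, not a standard).
tolerance_by_places = {2: 0.01, 3: 0.005}

def callout_tolerance(dim: str) -> float:
    places = len(dim.split(".")[1])   # count digits after the decimal
    return tolerance_by_places[places]

print(callout_tolerance(".01"))    # 0.01
print(callout_tolerance(".010"))   # 0.005
```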
Significant figures come more from the uncertainty in what you are putting in than from what the output is.
Take your example of 3/2, when you think about it you are thinking 3.0000000000…/2.0000000000…, but in real world measurements 3/2 could be 3.49999999…/2.49999999999…
2.500000000…/2.499999999….
3.499999999…/1.50000000…
2.5000000000…/1.5000000000..
Or literally any pair of numbers spanning those ranges: (2.5000000 to 3.4999999)/(1.5000000 to 2.4999999).
Significant figures are basically saying I don’t know what the 4th Significant figure is, so I don’t care about any value past it
1.5 = 1.50
people smoke crack in the comments.
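A quick sketch of the range argument from a few comments up. The endpoint values are illustrative; with "3" and "2" each good to one significant figure, the quotient can land anywhere in a wide interval:

```python
# "3" and "2" at one sig fig could be anything in [2.5, 3.5) and [1.5, 2.5).
num_lo, num_hi = 2.5, 3.4999999
den_lo, den_hi = 1.5, 2.4999999
ratios = [num_lo / den_hi, num_lo / den_lo, num_hi / den_hi, num_hi / den_lo]
print(round(min(ratios), 3), round(max(ratios), 3))   # 1.0 2.333
```

So the "exact" 1.5 answer is only one point in a range stretching from about 1 to about 2.3.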
You’ve got other good answers, let me add my own.
1.5 represents a number that is between 1.45 and 1.54. In other words, your magical measuring thingy has ten marks between 1 and 2.
1.50 represents a number that is between 1.495 and 1.504. In other words, your magical measuring thingy has ten marks between 1.45 and 1.55.
That extra ‘significant digit’ is like ‘zooming in’ on the measuring needle.
It’s the difference between accuracy and precision. Accuracy tells you how close an answer is to the true value. Precision tells you how finely that answer is specified.
You don’t “round to X significant digits”; you aren’t understanding correctly how they are used.