“Scientific Notation” – Variants on the Standard

In this series, we’ve been reviewing mathematical notation, with an eye toward how each notation helps or hinders student learning. In the prior posts, we started to explore function notation and played with variations. I’m not done with that topic, but will interrupt the sequence for a post on scientific notation.

Each line in the picture represents a number, and one line differs from the next only in where the decimal point is placed.  Some of the digits are grayed out.  I’m using the grayed-out digits as a shorthand for indicating equivalent ways of writing the same value.  The number in the top row, for example, is most commonly written as .03412 (at least in the United States; in some countries, writing the 0 before the decimal point is standard – there it would be written as 0.03412).  But the common form is not the only one: the number .03412 can also be written as .034120 or .0341200 or 0000.03412000, etc.  For our purposes in this post, it is useful to think of the version with all the grayed-out stuff present as the “real” or “full” version of the number, and the version without the grayed-out stuff as the “common” or “abbreviated” version of that same number.  It also works to think of the grayed-out digits as “invisible” digits of the number.

Looking at these “full” versions, it is accurate to say that one line differs from the next only in the placement of the decimal point.  Looking at the “common” versions, it is not quite accurate to say that.  From one line to the next, some of the grayed-out digits have to be made solid, or some of the invisible digits have to be made visible.  In general, all digits between the left-most non-zero digit and the right-most non-zero digit need to be visible in the standard notation, as well as all the zeros between the decimal point and the rest of the number.  In addition, in some countries, at least one digit needs to be shown to the left of the decimal point, and at least one digit needs to be shown to the right of the decimal point, though this does not hold true in the United States.
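
This equivalence is easy to check mechanically. As a quick illustration (Python here, purely for demonstration – any language that parses decimal numbers would do), the “full” and “abbreviated” forms all denote the same value:

```python
# Ways of writing the number from the top row of the picture.
# The grayed-out (invisible) leading and trailing zeros do not change the value.
forms = [".03412", "0.03412", ".034120", ".0341200", "0000.03412000"]

# Parsing every form yields one and the same number.
values = {float(s) for s in forms}
print(values)  # a single value: all five strings name the same number
```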

One way to introduce scientific notation is to think of a calculator with a narrow display window.  For example’s sake, let’s assume that the window can only show 5 digits.

In these 5 digits, only the following rows of the picture above could be represented directly:

.03412
.3412
3.412
34.12
341.2
3412
34120
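
Counting only the digit characters (in this hypothetical display, the decimal point takes no digit slot), each of these rows indeed fits. A quick check, in Python for illustration:

```python
# The rows listed above; the 5-digit window counts digits only,
# not the decimal point.
rows = [".03412", ".3412", "3.412", "34.12", "341.2", "3412", "34120"]

for s in rows:
    n = sum(ch.isdigit() for ch in s)
    print(s, "uses", n, "digits")
    assert n <= 5  # every row fits the 5-digit window
```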

Calculators that can handle scientific notation are able to represent both larger and smaller numbers in the same 5-digit width by using an additional symbol, usually “E”, followed by one or more digits.

In the picture above, the numbers on the right are as before; on the left we’ve shown the corresponding number in scientific notation.  Note that all the numbers on the left fit in the 5 digits our hypothetical calculator window is capable of showing us.  Let’s look at the number on the left in the last row.  Here is one way to read it: “The number I’m showing you (3.412) is not quite the real number; the real number is seven rows lower.”  And if you start from the highlighted number on the right, 3.412, and go down seven rows, you get 34120000.  Another way of saying this: “The number I’m showing you (3.412) is not quite the real number; the real number has the decimal point moved seven places to the right.”  The “E7” construct tells you that the real decimal point is seven places to the right of where it is shown.  What the calculator is relying on is that the user can make the necessary adjustment easily and fill in the zeros appropriately.

If we now look at the number on the left in the top row, we can read it as follows: “The number I’m showing you (3.412) is not quite the real number; the real number has the decimal point moved two places to the left.”  The “E-2” construct tells you that the real decimal point is two places to the left of where it is shown.  Again, the calculator relies on the user to make these adjustments.
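
These two readings can be verified directly, since most programming languages accept the calculator’s E form as-is. A small check in Python; `shift_decimal` is a hypothetical helper written just for this illustration:

```python
from decimal import Decimal

# “E7” says: the real decimal point is seven places to the right.
assert 3.412e7 == 34120000

# “E-2” says: the real decimal point is two places to the left.
assert 3.412e-2 == 0.03412

def shift_decimal(digits: str, places: int) -> Decimal:
    """Move the decimal point of `digits` by `places` spots
    (positive = right, negative = left). Hypothetical helper."""
    return Decimal(digits).scaleb(places)

print(shift_decimal("3.412", 7))   # the “real” number from the last row
print(shift_decimal("3.412", -2))  # the “real” number from the top row
```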

Almost all real-life calculators will show a number in scientific notation only if the more familiar form doesn’t fit in the window.  This hybrid approach is more practically useful, but doesn’t look as regular:
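
This hybrid behavior can be sketched as a small function. The sketch below (Python, with a made-up `show` helper) assumes a 5-digit window and ignores rounding and negative numbers, which a real calculator would also handle:

```python
def show(x, width: int = 5) -> str:
    """Sketch of a hybrid calculator display: use the familiar form
    when it fits in `width` digits, otherwise fall back to E notation.
    Hypothetical helper; real calculators also round and handle signs."""
    plain = repr(x)
    digits = sum(ch.isdigit() for ch in plain)
    if digits <= width:
        return plain
    # Too wide for the window: fall back to scientific notation
    # with `width` significant digits.
    return format(x, f".{width - 1}E")

print(show(341.2))      # fits the window: familiar form
print(show(34120000))   # too wide: shown as 3.4120E+07
print(show(0.0003412))  # too many digits: shown as 3.4120E-04
```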

I propose that in school settings, teachers freely use scientific notation as introduced above, consistent with how almost all calculators show it.  The essence of this notation is not even that it can show very large numbers and very small numbers in a limited space – though that is clearly the motivation for it – but that it shows a number.  To see it as a number, all you need to do is accept the new symbol E as part of a number, just as the decimal point is part of a number, and just as commas can make a number more easily scanned (as in 3,000,000 for three million) – it is still just a number.

In contrast, the way scientific notation is often introduced in school textbooks is as an expression: instead of the calculator’s way of showing 3.412E7, the textbook will show $3.412 \times 10 ^ 7$.  It is true, of course, that when you evaluate this expression you end up with the same result of 34120000 (or 34,120,000).  But for students seeing this for the first time, it is unnecessarily confusing.  I’ve seen plenty of kids take $3.412 \times 10 ^ 7$ to their calculator to turn this expression into a single number!  (Which, of course, usually doesn’t work, in the sense that the calculator will not show the usual form of the result, but give it back in the 3.412E7 format.)  And the reverse – taking a number like 34,120,000 and being told to write it as $3.412 \times 10 ^ 7$ – makes even less sense to the students: they already have the answer, so why would they want to turn it back into an expression that then needs to be calculated?  Many of these students never get that scientific notation is an alternative way to write the number, and that it was never intended to be treated as an expression to be calculated.

In my experience, most of these same students don’t have the same confusion with the calculator format.  Since the calculator format (also used in any number of computer languages) is in no way inferior to the standard textbook format, it can be used throughout the classroom.  The teacher can simply note that there are people who were taught to write $3.412 \times 10 ^ 7$, that it is useful to be aware of this standard, but that it amounts to an older way of writing 3.412E7.  This is a statement students can check on their calculator, if they so choose.
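
The conversion in both directions is something students can verify on any machine. In Python, for example (the format code `.3E` is Python’s, not the calculator’s):

```python
# Reading the calculator/computer form: it is already a number, not an
# expression waiting to be evaluated.
n = float("3.412E7")
print(n)  # 34120000.0

# Writing 34,120,000 back in the E form, with 3 digits after the point.
print(format(34120000, ".3E"))  # 3.412E+07
```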