In our normal decimal number system for representing quantities, we can usually tell pretty easily which numbers are larger. For example, if you see 412 and 409 together, you don’t have to do a lot of work to tell which is larger. You could subtract them, but you don’t even have to. Most likely, you use some variant of the following scheme. You put the numbers one below the other, lined up at the right (more precisely, lined up by place value).
You then compare them starting at the left, skipping past all the positions where the top and bottom numbers have the same digits, until you reach the first digit position where they differ. That digit position tells you all you need to know; the digit positions further to the right don't matter. You can just compare the two digits in the first (leftmost) digit position where the numbers differ at all.
4 1 2
4 0 9
Whichever number has the largest digit in that position is the larger number. Since ‘1’ is larger than ‘0’, 412 is larger than 409.
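This comparison scheme can be sketched in a few lines of code. A minimal sketch in Python (the function name is ours; it assumes non-negative numerals written without leading zeros):

```python
def compare_decimals(a: str, b: str) -> int:
    """Compare two decimal numerals; return -1, 0, or 1.

    Assumes non-negative numerals with no leading zeros, so a longer
    numeral always denotes a larger quantity.
    """
    # Lining up by place value: more digits means a larger number.
    if len(a) != len(b):
        return -1 if len(a) < len(b) else 1
    # Skip the positions where the digits agree; the first (leftmost)
    # differing digit decides the comparison.
    for da, db in zip(a, b):
        if da != db:
            return -1 if da < db else 1
    return 0

print(compare_decimals("412", "409"))  # 1, since '1' beats '0' in the tens place
```

Note that when the lengths are equal, at most one pass from the left is needed, just as in the pencil-and-paper scheme.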
It is enormously useful that our number system makes it so easy to judge two numbers for relative size. It gives us a real sense of quantity.
However, things tend to break down when numbers get very large. Suppose you want to know which of the following two numbers is larger: 10000000000000000000000001 or 1000000000000000000000002. You would think they are easy to compare. Yet I venture to guess that when you actually try, you will find yourself counting the zeros over and over. We’re almost more interested in how many digits the number has than in what those digits are. Another way of saying the same thing is that when we look at the digit 1 on the left of the number, we want to know how far away the rightmost digit is. Being sure of that distance matters not only to the person reading the number, but just as much to the person who wrote it in the first place. We could make this an explicit part of our representation, e.g. as follows: 25@10000000000000000000000001 and 24@1000000000000000000000002, where the number to the left of the ‘@’ symbol tells us the distance from the leftmost digit to the rightmost digit of the number that follows. Though this information is redundant, redundant information isn’t always bad. On a check, we write the amount in digits as well as fully written out, and we consider that worthwhile as protection against mistakes and fraud. Yet the information doesn’t have to be redundant: we could now write 25@10…01 and still know as much about the number as we did before, with far less writing. When we write 25@10…01 it is clear that the 25 is no longer advice; it is an essential part of the number.
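The ‘@’ notation makes the comparison explicit: the distance usually decides by itself. A sketch of how one might compare two numbers written this way (parse_at and larger_at are hypothetical names for this ad-hoc notation):

```python
def parse_at(s: str):
    """Split a numeral like '25@10...01' into (distance, digit string)."""
    distance, digits = s.split("@")
    return int(distance), digits

def larger_at(a: str, b: str) -> str:
    """Return whichever '@'-numeral denotes the larger quantity.

    The distance alone usually decides; only on a tie do we fall back
    to the digits (same length, so lexicographic comparison works).
    """
    da, ga = parse_at(a)
    db, gb = parse_at(b)
    if da != db:
        return a if da > db else b
    return a if ga >= gb else b

print(larger_at("25@10000000000000000000000001",
                "24@1000000000000000000000002"))  # the 25@... number wins
```

No zero-counting is needed: 25 versus 24 settles it immediately.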
There is a more precise way of describing the 25 than calling it the distance between the extreme digits of 10000000000000000000000001, and that is: you get the number you’re looking for by taking 1.0000000000000000000000001 and moving the decimal point 25 places to the right. You may see it done this way on calculators, which might show the number as 1.0000000000000000000000001E25, again showing this one number in two pieces: one piece gives the digits of the number, the other tells you how far to the right the decimal point should be from where it is shown. Now, a typical calculator may not be able to show you so many decimal places. It will round, the same way it rounds what it shows you when you ask for 1 / 7. Rounded, the calculator may show 1.00000E25 or 1.0000E25 or 1.000000E25. Note that the number of digits shown has nothing to do with the size of the number, only with the nature of the rounding. The numbers 1.00000E25, 1.0000E25, and 1.000000E25 all represent the same quantity.
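Python’s own formatting machinery illustrates this nicely: the precision you ask for controls only how the mantissa is rounded, never the exponent. A small demonstration (the choice of precisions is ours):

```python
x = 10000000000000000000000001

# Three different rounding precisions, one quantity: only the number
# of mantissa digits shown changes; the exponent stays at 25.
print(f"{x:.4E}")   # 1.0000E+25
print(f"{x:.5E}")   # 1.00000E+25
print(f"{x:.6E}")   # 1.000000E+25
```

All three lines describe the same number; only the rounding differs.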
The representation we have described here is known as scientific notation, and it isn’t only for large numbers. You can use the same notation for modest-size numbers. What we normally write as 1234.56 could just as well be written as 1.23456E3. On calculators, though, you will typically not see 1.23456E3. Most calculators use a hybrid display, automatically switching between normal decimal notation (like 1234.56) and scientific notation (like 1.0000E25), depending on whether the decimal representation fits on the calculator’s screen.
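A calculator’s hybrid display can be mimicked in a few lines. A sketch (the display function and the 10-character screen width are our assumptions):

```python
def display(x: float, width: int = 10) -> str:
    """Show x in plain decimal notation if it fits in `width`
    characters; otherwise fall back to scientific notation."""
    plain = f"{x:g}"   # 'g' gives a compact plain form when possible
    if "e" not in plain and len(plain) <= width:
        return plain
    return f"{x:.4E}"

print(display(1234.56))   # 1234.56
print(display(1e25))      # 1.0000E+25
```

The switch happens purely on grounds of screen space, not on anything about the quantity itself.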
Scientific notation is a representational system for numbers, complete in its own right. As such, we can wonder what it would take to multiply two numbers in scientific notation, or what it would take to add two numbers in scientific notation. Of course, we could always convert into the usual decimal number system before doing the operations and then convert the result back into scientific notation. But what would it take to do the operation itself while staying in scientific notation? In this post, we will only offer a glimpse.
Let’s say we want to multiply 1.2E5 and 2E3. The result is 2.4E8. We get this by multiplying the 1.2 part by the 2 part, and adding the numbers to the right of the E’s together.
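Representing each number as a (mantissa, exponent) pair, the multiplication rule is a single line. A sketch (sci_mul is our name for it):

```python
def sci_mul(m1: float, e1: int, m2: float, e2: int):
    """Multiply two scientific-notation numbers: multiply the
    mantissas, add the exponents."""
    return m1 * m2, e1 + e2

print(sci_mul(1.2, 5, 2, 3))   # (2.4, 8), i.e. 2.4E8
```

If the mantissa product leaves the range from 1 up to 10 (say 5E3 times 4E2, giving 20E5), one extra step renormalizes it, shifting the decimal point and bumping the exponent to get 2E6.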
To add 1.2E5 and 2E3, we first rewrite 2E3 as .02E5 and now add 1.2E5 to .02E5 by adding the 1.2 and .02 parts, while keeping the common E5 part, resulting in 1.22E5.
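Addition, as described above, first aligns the exponents and then adds the mantissas. A sketch (sci_add is our name; floating-point rounding is ignored here):

```python
def sci_add(m1: float, e1: int, m2: float, e2: int):
    """Add two scientific-notation numbers by rewriting the one with
    the smaller exponent to share the larger exponent."""
    if e1 < e2:                    # make (m1, e1) the larger-exponent pair
        m1, e1, m2, e2 = m2, e2, m1, e1
    # 2E3 becomes 0.02E5: shift the mantissa right by the exponent gap.
    return m1 + m2 * 10.0 ** (e2 - e1), e1

print(sci_add(1.2, 5, 2, 3))   # roughly (1.22, 5), i.e. 1.22E5
```

The common exponent E5 is kept, and only the mantissas are added, exactly as in the worked example.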
These steps may seem complicated at first, yet this is roughly what calculators and computers do all the time. For very large numbers, it works like a charm.