Decimal arithmetic needs more-gradually-signified precision of the last digit

In computers, n bits (binary digits) yield 2^{n} states, but in public they yield 2^{n+1}−1 states
by including trailing blanks (tail-trit 0,1,_), i.e. suppressed trailing zeroes... But-and-furthermore,
computer numbers may be integers, absolutely precise, or 'reals', significantly precise (the topic herein)...
the binary representation of public, less-precise states thus requires a bit more recorded (qualitatively, and
approximately quantitatively) to be 'precise'; and public states of binary fractions run from imprecise, 1-bit, to
maximally precise within implementation limits, and the least-precise public state, {0,1} in some unit-place, is
its own-unit-precision, with the next and all higher bits fully precise (and more-precise representations
maintain more information re cumulated precision-spread-and-shape, and thus need even more bits)... and
other, higher-based radices afford fewer public states, and fewer digital subfactors near two, and are thus not
exactly translated to public binary-fraction nor significance representations... And, statistically, adjacent numbers
overlap semi-indistinguishably by quantum-triangular or normal distribution (*)...
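
That 2^{n+1}−1 count can be checked by brute enumeration; a sketch in Python (my own framing, assuming a 'public state' is any bit-string of length 0..n, since the written length itself carries the precision):

```python
def public_states(n):
    """All binary-fraction strings of length 0..n; the empty string is 'blank'.
    Trailing zeroes are NOT stripped here: '10' and '1' are distinct public
    states, because the written length itself signifies the precision."""
    states = ['']  # the blank, zero-length state
    for length in range(1, n + 1):
        states += [format(i, '0{}b'.format(length)) for i in range(2 ** length)]
    return states

for n in range(1, 6):
    assert len(public_states(n)) == 2 ** (n + 1) - 1
print(len(public_states(4)))  # 2^5 - 1 = 31
```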

* (n.b. σ√2π = 1 step in the natural-normal e^{−πx²}, a
fairly-triangular distribution with a fairly-rectangular-but-cosine-wiggly cumulative top, ~0.086·cos(2πx),
not flat—normal is 'un-ideal' for computing 'discrete-reals-doing-their-own-quantum-thing'...
and, reducing the wiggle by overlapping its half-offset keeps ~7 ppm for precision-statistics, but only for a
narrower center, not energetically equational... more discussion shall be available on
measurement and precision... the normal distribution always seemed
extreme for any approximately-measurable thing in the finite cosmos....)

* (n.b. #2. for rounding purposes adjacent fractions are indistinguishable, next-adjacent are distinguishable, albeit this breaks transitivity.)

In precise public fractions the lowest digit (blank thereafter) indicates both its specific value and, implicitly, its quantum-interval-spread (distance to the next specific value, statistically plural-nexts), and, incrementally, the subprogress of the higher-order digits:

- in classical arithmetic the least-significant-bit (binary) represents ±½, that is, consecutive numbers span uniform intervals that abut but do not overlap (albeit in interval theory, vs set theory, consecutive intervals are closed by their common zero-width endpoints, the same point; e.g. the circumference of a circle is an interval from-any-start-point-and-once-around-to-the-start-point-again-doubly-included, and likewise its radius is a closed interval, both endpoints included, and π [pi] is an arithmetic limit, of its own open-definition, infinitesimally-precise not zero-width-precise, but not-so in set theory; cf. algebraic vs point-set topology);
- in statistical arithmetic the least-significant-bit (binary) represents ±1 triangularly or approximately-natural-normally, that is, consecutive numbers span their intervals and overlap with approximately-normal-approximately-triangular distributions, whence e.g. binary bbb0 ± 1 overlaps bbb1 ± 1 about 21%-25% [simply];
- (and further-from-consecutive numbers diminish that overlap to 0 triangular, and 0.0019 normal at 1-step, etc.);
- e.g. .0000, .0001, .0010, .0011, .0100, .0101, etc., are consecutive, adjacent-by-each-other (faintly overlapping-beyond if normal), cumulatively spanning...
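
The adjacent-overlap figures can be checked numerically; a sketch (function names are mine), taking the triangular density 1−|x| on ±1 and the natural-normal e^{−πx²}, with overlap measured as the integral of the pointwise minimum of the two shifted densities:

```python
import math

def tri_overlap(d):
    """Overlap of two triangular densities f(x) = 1-|x| (support +/-1, peak 1),
    centers d apart: 2 * integral_{d/2}^{1} (1-x) dx = (1 - d/2)^2."""
    return (1 - d / 2) ** 2 if d < 2 else 0.0

def normal_overlap(d):
    """Overlap of two natural-normal densities e^{-pi x^2}, centers d apart:
    twice the tail beyond the midpoint, i.e. erfc((d/2) * sqrt(pi))."""
    return math.erfc((d / 2) * math.sqrt(math.pi))

print(round(tri_overlap(1), 3))     # 0.25 : adjacent, triangular
print(round(normal_overlap(1), 3))  # 0.21 : adjacent, normal
print(tri_overlap(2))               # 0.0  : next-adjacent, triangular
```

This reproduces the "about 21%-25%" adjacent overlap, and the exactly-zero next-adjacent triangular overlap.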

Ideally, then, an ideal would-or-should be to have—

- all digits add-subtract-etc. precisely as-usual;
- but the last-least-digit has significance: in decimal, evens {0,2,4,6,8}±2, halves {0,5}±5, full {0}±10;
- so we need 3 kinds of zeros: binary 0 [±2], quinary ∅/⊘ [±5], decimal/denary ±/⊗ [±1 implicit/±10];
- and thus the last-digit is itself-to-its-precision while the next-last-and-others are just themselves...
- and then: binary + binary → binary; binary + quinary → quinary; binary|quinary + decimal → decimal....
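
The three-kinds-of-zeros combining rule can be sketched as coarsest-spread-wins; one consistent reading (the names and the spread-encoding here are mine, purely illustrative):

```python
# Three kinds of decimal last-digit zero, per the list above, encoded by their
# spread in units of the last place:
#   BINARY  : evens {0,2,4,6,8} +/- 2
#   QUINARY : halves {0,5}      +/- 5
#   DECIMAL : full {0}          +/- 10
BINARY, QUINARY, DECIMAL = 2, 5, 10

def combine(a, b):
    """binary + binary -> binary; binary + quinary -> quinary;
    binary|quinary + decimal -> decimal: the coarsest spread wins."""
    return max(a, b)

assert combine(BINARY, BINARY) == BINARY
assert combine(BINARY, QUINARY) == QUINARY
assert combine(QUINARY, DECIMAL) == DECIMAL
print('combining rule: coarsest spread wins')
```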

But we should probably also have a rule for summing collections of numbers to the same last-place, which, in elementary-school decimal, kept significant digits according to the least-significant place among them; and, as significance of independent numbers adds energetically, the decimal digit would track 100 before needing representation of significant-loss-of-significance... But here we're pushing the limits of significance toward 'binary', which doesn't collect so well (tracking 4×6×4 ≈ 100 total, but only by substeps, which might work approximately if well-ordering significant cumulative sub-overflows, but not very generalized): so, here, an odd last-digit might be most-precise, albeit by ±2, that is, significant in the next-higher bit, and adding like-significant last-places steps to one-place-less-significant...

[ongoing revisions]

So, similarly, expanding on this utility, we can do public radix-4 (quaternary, quartal, quatrits, quits, nibs, nibbles, 'twobits', 'dibits'), applying a bit of cleverness:

- as radix-4 has 5-choices for a last-digit-or-terminating-blank, not-quite radix-2×2's 7-choices, we push the statistical spread up-the-bits, slightly;
- and, as 0-standalone is least-significant-of-all-numbers, we choose last-digit-0 to indicate the next-last is fully-significant normal-triangular ±1, i.e. 0±4;
- and thence, the odd-digits {1,3}±2 indicate a bit-more-precision, a step 2× finer;
- leaving {2}±4, which we'll use as a halfstep-offset alternate for {0};
- (and, next-more-precise ±1 is represented by the next-step-last-digit-0; Da capo);
- (note, as a mnemonic: each digit precision {except 0} is doublescale its set-smallest);
- n.b. the final digit represents its own specific value as an offset;
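
A minimal lookup sketch of that radix-4 last-digit rule (the function name is mine):

```python
def quat_precision(d):
    """Spread (in units of the last digit's place) signified by a last
    radix-4 digit, per the list above."""
    assert d in (0, 1, 2, 3)
    if d == 0:
        return 4   # 0: the next-last digit is fully significant, i.e. 0 +/- 4
    if d in (1, 3):
        return 2   # odd digits: a step 2x finer
    return 4       # 2: the half-step-offset alternate for {0}

print([quat_precision(d) for d in range(4)])  # [4, 2, 4, 2]
```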

And similarly-again, expanding on this utility, we can do public radix-8 (octal, octonary, octits, 'threebits', 'tribits'), applying that combined-bit of cleverness:

- as radix-8 has 9-choices for a last-digit-or-terminating-blank, not quite radix-2×2×2 15-choices, we push the statistical spread up-the-bits, slightly;
- and, as 0-standalone is least-significant-of-all-numbers, we choose last-digit-0 to indicate the next-last is fully-significant normal-triangular ±1 i.e. 0±8;
- and thence the odd-digits {1,3,5,7}±2 indicate the most-precision, a step 4× finer;
- and, the odd-even-digits {2,6}±4 indicate the in-between-precision step 2× finer;
- leaving {4}±8 which we'll use as a halfstep-offset alternate for {0};
- (and, next-more-precise ±1 is represented by the next-step-last-digit-0; Da capo);
- (note, as a mnemonic: each digit precision {except 0} is doublescale its set-smallest);
- n.b. the final digit represents its own specific value as an offset;
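
The radix-8 list, and in fact every power-of-two radix treated this way, reduces to one rule: spread = 2 × (lowest set bit of the digit), with digit 0 meaning ± the radix. A sketch of that generalization (my own, not stated as such in the text above):

```python
def pow2_precision(d, radix):
    """Spread signified by last digit d in radix = 2**k, per the schemes above:
    +/- 2 * (largest power of two dividing d); digit 0 means the next-last
    digit is fully significant, i.e. +/- radix."""
    assert radix & (radix - 1) == 0 and 0 <= d < radix
    if d == 0:
        return radix
    return 2 * (d & -d)  # d & -d isolates the lowest set bit of d

print([pow2_precision(d, 8) for d in range(8)])  # [8, 2, 4, 2, 8, 2, 4, 2]
```

For radix 8 this reproduces the list exactly: odds ±2, {2,6} ±4, {4} ±8, and 0 meaning ±8.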

And, we can of-course carry-on-similarly and do public radix-16 (hexadecimal, 'sedenary', 'fourbits', 'quadbits'), and-so-on-powers-of-2, applying all that cleverness...

But—the next-important-challenge is to come-up with a similar construction for public radix-10 (decimal), with scale-factors of 2×2.5×2 (fairly-nice and nicely-ordered but not-so-easily represented), and, similarly-again, expanding on this utility, with all that cleverness, and more:

- as radix-10 has 11-choices for a last-digit-or-terminating-blank, we push the statistical spread up-the-bits, slightly;
- and, as 0-standalone is least-significant-of-all-numbers, we choose last-digit-0 to indicate the next-last is fully-significant normal-triangular ±1 i.e. 0±10;
- and, thence the odd-digits {1,3,5,7,9}±2 indicate the most-precision step 5× finer;
- and, now, of the nonzero-even-digits {2,4,6,8} we choose an in-between-precision step 2× finer, {2.5,7.5}±5 represented-half-off by {2,8}, 2⇒2.5, 8⇒7.5;
- leaving {4,6} 'never-used-in-the-last-digit', (unless weirdly 4 ⇒ 5±10 as an alternate for {0})…
- (and, next-more-precise ±1 is represented by the next-step-last-digit-0; Da capo);
- (note, as a mnemonic: each digit precision {except 0} is doublescale its set-smallest);
- n.b. the final digit represents its own specific value as an offset;
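
A lookup sketch of this first radix-10 assignment (the function name and the None-for-never-used convention are mine):

```python
def decimal_scheme1(d):
    """(represented value, spread) for a last decimal digit under the first
    radix-10 scheme above; {4,6} are never used in the last digit."""
    if d == 0:
        return (0, 10)   # next-last digit fully significant
    if d % 2 == 1:
        return (d, 2)    # odds: the most-precision step, 5x finer
    if d == 2:
        return (2.5, 5)  # half-off representation: 2 => 2.5 +/- 5
    if d == 8:
        return (7.5, 5)  # half-off representation: 8 => 7.5 +/- 5
    return None          # {4, 6}: never used in the last digit

print(decimal_scheme1(3))  # (3, 2)
print(decimal_scheme1(8))  # (7.5, 5)
```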

But that isn't so nice at the intermediate step—so, we consider the flip-set of digits, for public radix-10:

- and, thus, the even-decimal-digits {0,2,4,6,8}±2 indicate the most-precision step 5× finer, ('easy' to remember but not doublescale);
- and, as 5 is half-of-radix-10, we choose last-digit-5 to indicate the next-last is one-bit-more/fully-significant normal-triangular ±1 i.e. 5±10, ('easy' to remember but offset without an alternate);
- and, {3,7} for near-bit-stepping 2.5× finer with in-between precision ±0.5 i.e. {3,7}±5 (offset, and half-off-representation: 3 ⇒ 2.5 ± 5, and 7 ⇒ 7.5 ± 5);
- and, leaving {1,9} 'never-used-in-the-last-digit'...
- (and, next-more-precise ±1 is represented by the next-step-last-digit-5; Da capo);
- (note, as a mnemonic: even-digit precision is even ±2; and odds are 10 ×½, ×¼, ±2× i.e. odd-5±10, odds-of-2.5±5);
- n.b. the final digit represents its own specific value as an offset;
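
And the flip-set assignment, sketched the same way (again, naming and None-convention are mine):

```python
def decimal_flip(d):
    """(represented value, spread) for a last decimal digit under the
    flip-set radix-10 scheme above; {1,9} are never used in the last digit."""
    if d % 2 == 0:
        return (d, 2)    # even digits: the most-precision step, 5x finer
    if d == 5:
        return (5, 10)   # next-last digit fully significant
    if d == 3:
        return (2.5, 5)  # half-off representation: 3 => 2.5 +/- 5
    if d == 7:
        return (7.5, 5)  # half-off representation: 7 => 7.5 +/- 5
    return None          # {1, 9}: never used in the last digit

print(decimal_flip(4))  # (4, 2)
print(decimal_flip(5))  # (5, 10)
```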

Arithmetic then proceeds formally, converting the last digit as-needed, precision spread cumulating
more-finely-gradually for this near-binary progression than for classical decimal, and looks-about-right:
0.{5} means 1-bit-more-precision, 0.{3,7} 1-more, and 0.{0,2,4,6,8} last-1.322-more; log_{2}(10)
≈ 3.322... But, while evens add to get evens (closed), {5} doesn't add to get {0}, and {3,7} don't add
to get {0,5}...
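
The bit-accounting can be checked numerically: per decimal place the flip-set ladder of spreads runs 20 → 10 → 5 → 2 (the previous place's evens ±2 being ±20 in this place's units), and the three steps must total log_2(10):

```python
import math

# Bit-steps down the flip-set ladder of spreads: 20 -> 10 (digit 5),
# 10 -> 5 (digits {3,7}), 5 -> 2 (the even digits).
steps = [math.log2(20 / 10), math.log2(10 / 5), math.log2(5 / 2)]
print([round(s, 3) for s in steps])  # [1.0, 1.0, 1.322]
print(round(sum(steps), 3))          # 3.322, i.e. log2(10)
```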

But, then-again, public decimal could be more like power-of-two radices, at the expense of overlapping off-representations, by {0,5}±10, {3,8}±5, and {1,2,4,6,7,9}±2 nicely quadrilaterally symmetric...

And—finally—representing precision in public hexadecimal assignments (radix-16): the last (non-zero) hexadecimal digit indicates its own precision and offset:

- {0,8}±16, one half the higher digit's unit, is coarsely precise;
- {4,C}±8, two odd quarters, are a bit-more-precise;
- {2,6,A,E}±4, four odd eighths, are 2-bits-more-precise;
- {1,3,5,7,9,B,D,F}±2, eight odd sixteenths, are 3-bits-most-precise;
- And {0} when not-suppressed is equally precise as the offset-8;
- And a next hexadecimal-digit gains the 4th next bit of precision... Da capo.
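
The four hexadecimal precision classes above fall out of a single lowest-set-bit rule; a sketch grouping the sixteen digits (naming mine):

```python
def hex_precision(d):
    """Spread for a last hex digit, per the list above:
    +/- 2 * (lowest set bit of d); 0 means +/- 16."""
    return 16 if d == 0 else 2 * (d & -d)

# Group the sixteen digits by the precision they signify.
groups = {}
for d in range(16):
    groups.setdefault(hex_precision(d), []).append(format(d, 'X'))

print(groups[16])  # ['0', '8']
print(groups[8])   # ['4', 'C']
print(groups[4])   # ['2', '6', 'A', 'E']
print(groups[2])   # ['1', '3', '5', '7', '9', 'B', 'D', 'F']
```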

This is as gradual as possible, without altogether using more bits for precision-and-shaping;

QEF, QEI, QED....

And beyond, there's the issue of significance indicated by the numeric magnitude,

i.e. usually, log(1+(number/least significant place)), e.g. log(1+(99999/1)) = 5 digits:

e.g. 100000 has slightly-more significance than 99999, but one-extra digit-place,

e.g. 99999 has 9× significance of 11111, both in 5 digits,

and-which has 1.1111× significance of 10000-nearly-9999,

ergo by transitivity 99999 has 10× significance of 9999;
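
The significance formula can be exercised numerically (the function name and defaults are mine; 'lsp' is the least-significant place):

```python
import math

def digits_of_significance(number, lsp=1, base=10):
    """Significance as defined above: log_base(1 + number/lsp)."""
    return math.log(1 + number / lsp, base)

print(round(digits_of_significance(99999), 6))      # 5.0 digits
print(round(digits_of_significance(11111), 3))      # ~4.046 digits
print(round(digits_of_significance(1, base=2), 6))  # 1.0, one full bit
print(digits_of_significance(0))                    # 0.0, 'no significance'
```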

But—the notion of significant digits is slightly anomalous:

e.g. log(1+(99999±0.5/1)) ≈ 5±0.000002 digits significance;

e.g. log(1+(1/1)) ≈ 0.3 'strangely' significant compared to the digit-1 in 100000,

but-yet binary log_{2}(the same) ≡ log_{2}(10)·log(the same) = 1
the same at the radix level;

e.g. log(1+(0/1)) = 0 which is no-significance yet should be something, because—

e.g. if 0 ≡ 0.0 ≡ 0±0.5, and—because—only 'blank' should be 'no-significance'...

So—we need somehow to extend the last digit for 'blank', an 11th in decimal, a 3rd in binary,

(i.e. a dual-pinning paradigm: 'blank' and 'zero' should each be identical in all radices)

e.g. letting 'blank' = −1, log_{11}(2+decimal digit) ⇒ log_{11}(2+'blank') = log_{11}(1) = 0,
no significance...
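
A sketch of that dual-pinning idea (the blank = −1 encoding is as above; the function name is mine):

```python
import math

def place_significance(d):
    """Single-decimal-place significance log_11(2 + d), with 'blank' encoded
    as -1, so blank -> log_11(1) = 0 and digit 9 -> log_11(11) = 1."""
    return math.log(2 + d, 11)

print(place_significance(-1))            # 0.0 for 'blank': no significance
print(round(place_significance(9), 6))   # 1.0: one full place
print(round(place_significance(0), 3))   # ~0.289: zero still carries something
```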

Meanwhile—alternatively, log(1+(n=0/0)) = log(1+(n=1/1)) ≈ 0.3 would be almost right...

TBD... (maybe the sign [±] should also be considered as of binary significance)...

NOTES, COMMENTS, PRIORS, ALTERNATIVES:

Thus, in computers, the least significant non-blank bit designates its precision, and, offset {0,1};

This coincides with computer-compact-probabilities where +0.5 is presumed
(e.g. .00005-.99995 are 4-not-5-digit-probabilities);

Note decimal {4} is more-multiplicative in appearance: between 3.16 the half-factor-of-ten and 5.0 the half-of-ten;

A premise discovery under the title,

'Majestic Service in a Solar System'

Nuclear Emergency Management