guarded significant figures/precision

decimal arithmetic needs more-gradually-signified precision of the last digit

In computers, n bits (binary digits) yield 2^n states, but in public they yield 2^(n+1)−1 states, by including the trailing blank (a tail-trit: 0, 1, _) of suppressed trailing zeroes... And furthermore, computer numbers may be integers, absolutely precise, or 'reals', significantly precise (the topic herein)... The binary representation of public, less-precise, states thus requires a bit more recorded (qualitatively, and approximately quantitatively) to be 'precise'; and public states of binary fractions run from imprecise, 1-bit, to maximally precise within implementation limits, and the least-precise public state, {0,1} in some unit-place, is its own-unit-precision, with the next and all higher bits fully precise (and more-precise representations maintain more information re cumulated precision-spread-and-shape, and thus need even more bits)... And other, higher-based radices afford fewer public states and fewer digital subfactors near two, and are thus not exactly translated to public binary fraction nor significance representations... And, statistically, adjacent numbers overlap semi-indistinguishably by quantum-triangular or normal distribution (*)...
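A minimal counting sketch (Python; my illustration, not from the original): enumerating bit-strings of length 0..n, the absent tail being the trailing blank, recovers the 2^(n+1)−1 public states:

    # Count public states of an n-bit binary fraction: every bit-string of
    # length 0..n (suppressed trailing zeroes = trailing blanks) is its own
    # public state, distinct in precision even where equal in value.
    def public_states(n):
        states = ['']                     # the fully-blank state
        for length in range(1, n + 1):
            states += [format(k, '0' + str(length) + 'b')
                       for k in range(2 ** length)]
        return states

    for n in range(1, 5):
        s = public_states(n)
        assert len(s) == 2 ** (n + 1) - 1
        print(n, len(s))                  # 1 3 / 2 7 / 3 15 / 4 31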

* (n.b. σ·√(2π) = 1 step in the natural-normal e^(−πx²), a fairly-triangular distribution with a fairly-rectangular-but-cosine-wiggly cumulative top, ~1 + 0.086·cos(2πx), not flat—normal is 'un-ideal' for computing 'discrete-reals-doing-their-own-quantum-thing'... and, reducing the wiggle by overlapping its half-offset keeps it to ~7 ppm for precision-statistics, but only for a narrower center, not energetically-equational... more discussion shall be available on measurement and precision... the normal distribution always seemed extreme for any approximately-measurable thing in the finite cosmos....)
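A numeric check (Python, illustrative): by Poisson summation, the unit-step comb of e^(−πx²) has top 1 + 2e^(−π)·cos(2πx), wiggle-amplitude 2e^(−π) ≈ 0.086, and the half-offset overlap flattens it to 2e^(−4π) ≈ 7 ppm:

    import math

    def comb(x, step):
        # Sum of natural-normal e^(-pi*t^2) copies centered every 'step'.
        return sum(math.exp(-math.pi * (x - n * step) ** 2)
                   for n in range(-60, 61))

    amp = (comb(0.0, 1.0) - comb(0.5, 1.0)) / 2
    print(amp)                            # ~0.0864 = 2*exp(-pi)

    amp2 = (comb(0.0, 0.5) - comb(0.25, 0.5)) / 2
    mean2 = (comb(0.0, 0.5) + comb(0.25, 0.5)) / 2
    print(amp2 / mean2)                   # ~7.0e-06, i.e. ~7 ppm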

* (n.b. #2. for rounding purposes adjacent fractions are indistinguishable, next-adjacent are distinguishable, albeit this breaks transitivity.)

In precise public fractions the lowest digit (blank thereafter) indicates both its specific value and, implicitly, its quantum-interval-spread (the distance to the next specific value; statistically, the plural nexts), and, incrementally, the subprogress of the higher-order digits:
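For instance (a minimal decoding sketch, my own illustration, taking the spread as one unit of the last recorded place):

    # A public binary fraction as a digit-string: the last non-blank bit
    # fixes both the value and the quantum spread, one unit of its place.
    def decode(bits):                     # e.g. '101' means 0.101b
        value = sum(int(b) / 2 ** (i + 1) for i, b in enumerate(bits))
        spread = 2.0 ** -len(bits) if bits else 1.0   # blank: unit spread
        return value, spread

    print(decode('1'))      # (0.5, 0.5)  the least-precise public state
    print(decode('10'))     # (0.5, 0.25) same value, one bit more precise
    print(decode('101'))    # (0.625, 0.125)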

Ideally, then, an ideal would-or-should be to have—

But we should probably also have a rule for summing collections of numbers to the same last-place, which in elementary-school decimal kept significant digits according to the least-significant place among them; and, as significance of independent numbers adds energetically (in quadrature, spreads growing as √N), a decimal digit would track ~100 summands before needing representation of significant-loss-of-significance... But here we're pushing the limits of significance toward 'binary', which doesn't collect so well (tracking 4×6×4 ≈ 100 total, but only by substeps, which might work approximately if well-ordering the significant cumulative sub-overflows, but is not very generalized): so, here, an odd last-digit might be most-precise, albeit by ±2, that is, significant in the next-higher bit, and adding like-significant last-places steps to one-place-less-significant...
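A rough numeric illustration (Python; assuming independent ±half-step uncertainties adding in quadrature, which is the 'energetic' rule):

    import math

    # Summing N numbers each +/- 0.5 in the last place: the cumulated
    # spread is ~0.5*sqrt(N), so places are lost only gradually.
    for N in (2, 4, 10, 100):
        spread = 0.5 * math.sqrt(N)
        print(N, round(spread, 3),
              'decimal places lost:', round(math.log10(2 * spread), 2),
              'bits lost:', round(math.log2(2 * spread), 2))
    # N=100 costs one full decimal place; N=4 one full bit; the school
    # rule 'one place less per like-significant addition' is pessimistic.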

[ongoing revisions]

So, similarly, expanding on this utility, we can do public radix-4 (quaternary, quartal, quatrits, quits, nibs, nibbles, 'twobits', 'dibits'), applying a bit of cleverness (counted in the sketch after radix-16, below):

And similarly-again, expanding on this utility, we can do public radix-8 (octal, octonary, octits, 'threebits', 'tribits'), applying that combined-bit of cleverness:

And, we can of-course carry-on-similarly and do public radix-16 (hexadecimal, 'sedonary', 'fourbits', 'quadbits'), and-so-on-powers-of-2, applying all that cleverness...
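A generic counting sketch (Python; my reading of the construction, not the author's tables): in radix 2^b the last digit can carry its trailing blanks at bit granularity, so the public states of d digits are just the public states of d·b bits, regrouped b bits per digit:

    # Public states of d digits in radix 2^b, counted as public binary
    # states of up to d*b bits (the last, partial digit carries the
    # trailing blanks).
    def public_digit_states(d, b):
        n = d * b
        return 2 ** (n + 1) - 1

    for b, name in ((2, 'radix-4'), (3, 'radix-8'), (4, 'radix-16')):
        print(name, [public_digit_states(d, b) for d in (1, 2, 3)])
    # radix-4  [7, 31, 127]     vs 4, 16, 64 fully-precise states
    # radix-8  [15, 127, 1023]  vs 8, 64, 512
    # radix-16 [31, 511, 8191]  vs 16, 256, 4096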

But—the next important challenge is to come up with a similar construction for public radix-10 (decimal), with scale-factors of 2×, 2.5×, 2× (fairly-nice and nicely-ordered, but not so easily represented), and, similarly-again, expanding on this utility, with all that cleverness, and more:

But that isn't so nice at the intermediate step—so we consider the flip-set of digits, for public radix-10:

Arithmetic then proceeds formally, converting the last digit as-needed, precision spread cumulating more finely-gradually for this near-binary progression than for classical decimal, and it looks about right: a last digit of {5} means 1-bit-more precision, {3,7} 1-bit-more again, and {0,2,4,6,8} the last 1.322-more; log2(10) ≈ 3.322... But, while evens add to get evens (closed), {5} doesn't add to get {0}, and {3,7} don't add to get {0,5}...
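Tabulating (Python, illustrative; the last layer-width is log2(2.5) ≈ 1.322), with the additive-closure checks:

    import math

    # Cumulative precision bits signified by the last decimal digit.
    layers = [(1.0, {5}), (2.0, {3, 7}),
              (2.0 + math.log2(2.5), {0, 2, 4, 6, 8})]
    for bits, digits in layers:
        print(round(bits, 3), sorted(digits))     # totals log2(10) ~ 3.322

    evens = {0, 2, 4, 6, 8}
    print({(a + b) % 10 for a in evens for b in evens} <= evens)  # True
    print({(a + b) % 10 for a in (3, 7) for b in (3, 7)})         # {0, 4, 6}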

But, then-again, public decimal could be more like power-of-two radices, at the expense of overlapping off-representations, by {0,5}±10, {3,8}±5, and {1,2,4,6,7,9}±2 nicely quadrilaterally symmetric...
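Tabulated (illustrative):

    # Overlapping last-digit spreads for the power-of-two-like decimal.
    spread = {0: 10, 5: 10, 3: 5, 8: 5, 1: 2, 2: 2, 4: 2, 6: 2, 7: 2, 9: 2}
    for d in range(10):
        print('%d : +/-%2d covers [%3d, %2d]' % (d, spread[d],
                                                 d - spread[d], d + spread[d]))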

And—finally—representing precision in public hexadecimal assignments (radix-16): the last (non-zero) hexadecimal digit indicates its own precision and offset:
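A decoding sketch (Python; my inference from the binary rule in the notes below, not a table from the original): reading the trailing zero-bits of the last non-zero hex digit as trailing blanks, the digit names its own center and spread, in 1+2+4+8 = 15 gradual states:

    # Last non-zero hex digit: trailing zero-bits read as trailing blanks,
    # so the digit names its own center value and +/- spread.
    def last_hexit_precision(d):          # d in 1..15
        t = (d & -d).bit_length() - 1     # count of trailing zero-bits
        return d, 1 << t                  # (center, spread in digit-units)

    for d in range(1, 16):
        center, spread = last_hexit_precision(d)
        print('%X : %2d +/- %d' % (d, center, spread))
    # 8 -> 8 +/- 8 (1 significant bit); {4,C} +/- 4; {2,6,A,E} +/- 2;
    # odd digits +/- 1 (all four bits significant)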

This is as gradual as possible, without altogether using more bits for precision-and-shaping;

QEF, QEI, QED....

And beyond, there's the issue of significance indicated by the numeric magnitude,
i.e. usually, log(1+(number/least significant place)), e.g. log(1+(99999/1)) = 5 digits:
e.g. 100000 has slightly,-more-significance than 99999, but one-extra digit-place,
e.g. 99999 has 9× significance of 11111, both in 5 digits,
and-which has 1.1111× significance of 10000-nearly-9999,
ergo by transitivity 99999 has 10× significance of 9999;
But—the notion of significant digits is slightly anomalous:
e.g. log(1+(99999±0.5/1)) ≈ 5±0.000002 digits significance;
e.g. log(1+(1/1)) ≈ 0.3 'strangely' significant compared to the digit-1 in 100000,
but-yet binary log2(the same) ≡ log2(10)*log(the same) = 1 the same at the radix level;
e.g. log(1+(0/1)) = 0 which is no-significance yet should be something, because—
e.g. if 0 ≡ 0.0 ≡ 0±0.5, and—because—only 'blank' should be 'no,-significance'...
So—we need somehow to extend the last digit for 'blank': an 11th digit-value in decimal, a 3rd in binary,
(i.e. a dual-pinning paradigm: 'blank' and 'zero' should each be identical in all radices)
e.g. letting 'blank' = −1, log_11(2 + decimal digit) ⇒ log_11(2 + 'blank') = log_11(1) = 0, no significance...
Meanwhile, alternatively, log(1+(n=0/0)) = log(1+(n=1/1)) ≈ 0.3 would be almost right...
TBD... (maybe the sign [±] should also be considered as of binary significance)...
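Numerically (Python, illustrative), those significance measures:

    import math

    def digits(n, lsp=1):                    # decimal significance
        return math.log10(1 + n / lsp)

    print(digits(99999), digits(100000))     # 5.0 vs ~5.0000043
    print((1 + 99999) / (1 + 11111))         # ~9.0: 9x the significance
    print(digits(1), math.log2(1 + 1))       # ~0.301, but 1.0 in binary
    print(digits(0))                         # 0.0, yet 0 = 0.0 = 0 +/- 0.5

    # the 'blank' = -1 extension: log_11(2 + digit)
    for d in (-1, 0, 9):
        print(d, math.log(2 + d) / math.log(11))   # blank -> 0.0 ... 9 -> 1.0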

NOTES, COMMENTS, PRIORS, ALTERNATIVES:

Thus, in computers, the least significant non-blank bit designates its precision, and, offset {0,1};
This coincides with computer-compact-probabilities where +0.5 is presumed (e.g. .00005-.99995 are 4-not-5-digit-probabilities);
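E.g. (a sketch of that presumption):

    # 4-digit compact probabilities with +0.5 presumed in the 5th place:
    # stored k in 0..9999 denotes (k + 0.5)/10000, i.e. .00005 .. .99995.
    def stored_to_prob(k):
        return (k + 0.5) / 10000.0

    print(stored_to_prob(0), stored_to_prob(9999))   # 5e-05 0.99995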

Note decimal {4} is more-multiplicative in appearance: between 3.16 the half-factor-of-ten and 5.0 the half-of-ten;

A premise discovery under the title,

Grand-Admiral Petry
'Majestic Service in a Solar System'
Nuclear Emergency Management

© 1996,2019ed GrandAdmiralPetry@Lanthus.net