[under construction]
In the development of interconnected node-relayed communications, data [messages] are qualified by path-history, and security is a probability-calculated concern. But the short data 'messages' within a single CPU would be encumbered by such long path-histories, compounded by the CPU's multi-variable and recursive arithmetic processes - and albeit this is the sort of advanced proof-of-program we desire, and it is feasible for proofing small programs within any single CPU, the run-time cost becomes explosive. Programs do get very large - beyond program-proofing, and beyond periodic interim flow-process-checking on sample data ... we need full hardware-checking: a short digital-signature, a path-history modulus, tied to each datum; and we need processes which are inherently checked at each stage, and at stage-to-stage transfers.
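As a minimal sketch of that idea in C - the tagged_datum struct, the stage name, and the one-byte modulus are illustrative assumptions, not a hardware design - each datum carries a short signature which every stage re-verifies on receipt and re-derives after its work:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical sketch: a datum carries a short modulus "signature"
     * that is checked at every stage-to-stage transfer. */
    typedef struct {
        uint32_t value;
        uint8_t  tag;    /* value reduced by a small modulus */
    } tagged_datum;

    enum { TAG_MOD = 251 };   /* an assumed choice: largest prime below 2^8 */

    static uint8_t tag_of(uint32_t v) { return (uint8_t)(v % TAG_MOD); }

    /* Each stage checks the incoming tag, transforms, and re-tags. */
    static int stage_double(tagged_datum *d) {
        if (tag_of(d->value) != d->tag) return -1;   /* corrupted in transfer */
        d->value *= 2;
        d->tag = tag_of(d->value);
        return 0;
    }

    int main(void) {
        tagged_datum d = { 21, tag_of(21) };
        if (stage_double(&d) == 0)
            printf("value=%u tag=%u\n", d.value, d.tag);
        return 0;
    }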
In very large program code-bases we can also bounds-check the data as well as the processes, but we'd like the hardware to provide compounded assistance: in effect, a meta-layer check on all softer, layered checking. In essence, we wish every program, level, interface, and implementation element to do its own maximally efficient digital-signature checking - this is maximum-effort fail-safe TEMPEST engineering.
Checking every process-step by a parallel, redundant, signature-similar process-step catches most single-point-failures at their occurrence. In integer arithmetic we learned [modulo] checking by nines-and-elevens [99] - rather than reduplicating the full arithmetic process, which would be time better spent doing other work, we use the simplified digital-signature afforded by modulo-arithmetic, which efficiently catches most obvious discrepancies of singly or simply missed or transposed digits. But computer processes are not all integer-additive: logic functions are already about as simple as possible, most literal functions are simple, and precise numeric functions are truncated of their lesser digits.
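For illustration: casting out nines-and-elevens together is equivalent to checking modulo 99. A small C sketch of verifying a multiplication this way (the figures are arbitrary examples):

    #include <stdio.h>

    /* Checking a * b = c without redoing the full multiplication:
     * verifying modulo 99 is the same as casting out nines and elevens. */
    static int check_mul(unsigned long a, unsigned long b, unsigned long c) {
        return (a % 99) * (b % 99) % 99 == c % 99;
    }

    int main(void) {
        printf("%d\n", check_mul(1234, 5678, 7006652)); /* correct result: 1 */
        /* 7006562 transposes two digits of the true product - a slip that
         * casting out nines alone would miss, but the elevens side catches: */
        printf("%d\n", check_mul(1234, 5678, 7006562)); /* caught: 0 */
        return 0;
    }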
. . . . . . .
[under construction]
In less integrally precise program data-functions and routines, layering the control of the transfer of data between levels of processing - data-keyboard-entry, data-interpretation, data-signification, data-authorization, data-encryption, data-packaging, data-Input/Output/Processing, data-transmission - which may be separate hardwares - allows simple watching for inept or sabotaged program-code.
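A minimal sketch of such layered watching, in C - the layer names and the Fletcher-16 checksum are illustrative assumptions, not a prescription - where each level re-checks the signature handed to it before doing its own work:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical layered-transfer check: each layer re-verifies the
     * checksum from the previous layer, works, then re-signs the data. */
    static uint16_t fletcher16(const uint8_t *buf, size_t len) {
        uint16_t a = 0, b = 0;
        for (size_t i = 0; i < len; i++) {
            a = (a + buf[i]) % 255;
            b = (b + a) % 255;
        }
        return (uint16_t)((b << 8) | a);
    }

    typedef int (*layer_fn)(uint8_t *buf, size_t len);

    /* Stand-ins for the real levels of processing. */
    static int layer_entry(uint8_t *buf, size_t len)     { (void)buf; (void)len; return 0; }
    static int layer_interpret(uint8_t *buf, size_t len) { (void)buf; (void)len; return 0; }
    static int layer_transmit(uint8_t *buf, size_t len)  { (void)buf; (void)len; return 0; }

    int main(void) {
        uint8_t msg[] = "keyboard input";
        layer_fn layers[] = { layer_entry, layer_interpret, layer_transmit };
        uint16_t sum = fletcher16(msg, sizeof msg);

        for (size_t i = 0; i < sizeof layers / sizeof layers[0]; i++) {
            if (fletcher16(msg, sizeof msg) != sum) {   /* corrupted in transfer */
                fprintf(stderr, "checksum mismatch before layer %zu\n", i);
                return 1;
            }
            layers[i](msg, sizeof msg);
            sum = fletcher16(msg, sizeof msg);          /* re-sign for next layer */
        }
        puts("all layer transfers verified");
        return 0;
    }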
. . .
[under construction]
In [some] current technologies, individual computer files are coded with permissions: this metaware level of checking supports shared-information systems - owner, group, and public being one common taxonomy. But a closer look, even to the operation for TEMPEST, suggests the software deserves its own level(s) - less emphasis on the so-called privileged public and group, and more on responsibilities, the who's-who in security:
The typical document retrieved by the public is readable {ASCII, HTML, XML/javascript, program-source, cgi, executable/java-source, binary-download, etc.} on the public person's browser. But cgi-scripts, which run on the server for the public benefit, must be separately permitted at one level higher. The owner has administration cgi-scripts, disallowed the public, at yet a higher level-threshold - and higher still are the trusted-but-verified browsers, and other [hardware-direct] executed program applications the owner runs to build and maintain his system. And at the top is the system software {DOS, etc.}, which basically must have access to all files - even the 'garbage'.
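A minimal C sketch of such an ascending who's-who of levels - the enum names are hypothetical labels for the thresholds just described, not an existing permission system:

    #include <stdio.h>

    /* Hypothetical ascending permission thresholds: each file carries a
     * minimum level, and a requester may access it only at or above it. */
    typedef enum {
        LEVEL_PUBLIC = 0,     /* readable documents: ASCII, HTML, downloads */
        LEVEL_PUBLIC_CGI,     /* cgi-scripts run on the server for the public */
        LEVEL_OWNER_CGI,      /* administration scripts, disallowed the public */
        LEVEL_OWNER_APPS,     /* trusted-but-verified browsers, build tools */
        LEVEL_SYSTEM          /* system software: access to all files */
    } perm_level;

    static int may_access(perm_level requester, perm_level file_min) {
        return requester >= file_min;
    }

    int main(void) {
        printf("%d\n", may_access(LEVEL_PUBLIC, LEVEL_OWNER_CGI)); /* 0: denied */
        printf("%d\n", may_access(LEVEL_SYSTEM, LEVEL_OWNER_CGI)); /* 1: allowed */
        return 0;
    }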
. . .
[under construction]
A premise discovery under the title,