# Empirical Uncertainty Measurement

External events introduce uncertainty in the volatile hardware states of the processor.

Quantifying the amount of uncertainty is difficult, since no method exists to determine the entropy of a source that furnishes random data of several thousand bits (i.e., the unmonitorable hardware states after each OS interrupt).

Below is the algorithm we used to empirically measure the average uncertainty introduced by an OS interrupt on the I-cache + branch predictor (does not work for Itanium):

```c
int Entrop[SIZEENTROPY];
int A = 0, K = 0;

while (NBINTERRUPT < NMININT) {
    /* loop body, unrolled XX times: */
    if (A == 0) A++; else A--;
    Entrop[K] = (Entrop[K] << 5) ^ (Entrop[K] >> 27) ^
                HARDTICK() ^ (Entrop[(K + 1) & (SIZEENTROPY - 1)] >> 31);
    K = (K + 1) & (SIZEENTROPY - 1);
    /* ... body repeated XX times ... */
}
```

HARDTICK() is a function that reads the hardware clock counter, computes the difference from the last read value, and increments the NBINTERRUPT counter whenever this difference exceeds a threshold, indicating that an interrupt occurred between two successive calls to the function.

Throughout the loop, HARDTICK() is called many times, and the value it reads is combined through exclusive-ORs and shifts into the Entrop array. The unrolling factor XX is chosen so that the loop body fits in the instruction cache (e.g., 16 Kbytes on the Pentium III or UltraSparc II) without overflowing the branch prediction structures.
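As a self-contained sketch of this unrolling, the loop body can be expressed as a macro repeated XX times. Here the HARDTICK stub, the tiny NMININT, and XX = 4 are all illustrative choices so the sketch runs on its own; the real harness reads the hardware clock and sizes XX to the I-cache.

```c
#include <stdint.h>

#define SIZEENTROPY 65536
#define NMININT 4               /* tiny value so the sketch terminates quickly */

static int Entrop[SIZEENTROPY];
static int A, K;
int NBINTERRUPT;

/* Stub standing in for the real clock read: it counts calls and
 * reports a fake "interrupt" every 1000 calls. */
static int HARDTICK(void)
{
    static unsigned calls;
    if (++calls % 1000 == 0)
        NBINTERRUPT++;
    return (int)(calls * 2654435761u);
}

/* One copy of the loop body from the algorithm above. */
#define BODY() do {                                                  \
        if (A == 0) A++; else A--;                                   \
        Entrop[K] = (Entrop[K] << 5) ^ (Entrop[K] >> 27) ^           \
                    HARDTICK() ^                                     \
                    (Entrop[(K + 1) & (SIZEENTROPY - 1)] >> 31);     \
        K = (K + 1) & (SIZEENTROPY - 1);                             \
    } while (0)

void gather(void)
{
    while (NBINTERRUPT < NMININT) {
        BODY(); BODY(); BODY(); BODY();   /* XX copies; only 4 shown */
    }
}
```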

The while loop runs until the number of encountered interrupts reaches NMININT. SIZEENTROPY is the size of the table used for gathering the values read from the hardware clock counter. The flow of instructions executed by the loop body of the algorithm is completely deterministic. Therefore, in the absence of operating system interrupts, the contents of the instruction cache and of the branch predictor should be completely predictable: we checked that the iterations of the loop just after an interrupt present much more uncertainty than iterations that occur long after the last interrupt.

The Entrop array gathers (part of) the uncertainty injected into the instruction cache and the branch predictor by operating system interrupts. Its content is recorded in a file, and the array is then reinitialized to zero.
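This recording step might be sketched as follows; the function name, file handling, and append mode are illustrative assumptions, not taken from the original harness.

```c
#include <stdio.h>
#include <string.h>

#define SIZEENTROPY 65536
int Entrop[SIZEENTROPY];

/* Append the Entrop array to a file as raw words, then clear it for
 * the next round of gathering. Returns 0 on success, -1 on error. */
int record_and_reset(const char *path)
{
    FILE *f = fopen(path, "ab");
    if (!f)
        return -1;
    size_t n = fwrite(Entrop, sizeof Entrop[0], SIZEENTROPY, f);
    fclose(f);
    memset(Entrop, 0, sizeof Entrop);   /* reinitialize to zero */
    return n == SIZEENTROPY ? 0 : -1;
}
```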

Fixing SIZEENTROPY to 65536, we determined the threshold for NMININT above which the content of the Entrop array can be CONSISTENTLY considered random, i.e., it cannot be distinguished from truly random numbers by our battery of tests (DieHard + ent + FIPS 140 + the NIST suite). By consistently, we mean that we obtained such results under many different workloads.
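The validation relied on the full DieHard, ent, FIPS 140 and NIST batteries. As a toy illustration of the kind of check involved, a monobit test in the style of FIPS 140-1 can be sketched as follows (the function name is ours):

```c
#include <stdint.h>

/* FIPS 140-1 style monobit test: count the ones in a 20,000-bit
 * (2,500-byte) sample and require the count to fall strictly between
 * 9,654 and 10,346. This is only one of the many checks in the
 * batteries named above. */
int monobit_ok(const uint8_t buf[2500])
{
    int ones = 0;
    for (int i = 0; i < 2500; i++)
        for (int b = 0; b < 8; b++)
            ones += (buf[i] >> b) & 1;
    return ones > 9654 && ones < 10346;
}
```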

### Uncertainty extracted from I-cache + branch predictors for supported platforms

| Processor/OS | UII Solaris | UIII Solaris | Itanium Linux | PIII Linux | PIII Windows |
| --- | --- | --- | --- | --- | --- |
| NMININT | 32 | 32 | 256 | 64 | 128 |
| Av. unc. per OS int. | 64 Kbits | 64 Kbits | 8 Kbits | 32 Kbits | 16 Kbits |

| Processor/OS | G4 MacOS 10 | Athlon Linux/Windows | P4 Linux | P4 Windows |
| --- | --- | --- | --- | --- |
| NMININT | 32 | 64 | 64 | 64 |
| Av. unc. per OS int. | 64 Kbits | 32 Kbits | 32 Kbits | 32 Kbits |