February 4, 2026

Proof of Work — Ethereum – Palludallu

✔ Uses a cryptographic hash function not as the proof of work itself, but as a generator of pointers into a shared data set, which makes the work I/O bound.

✔ Difficult to optimize via ASIC design, and difficult to outsource to nodes that do not hold the full data set.

✔ Derived from three operations: hash, shift and modulo.

✔ We define the following:

1. nonce: 64 bits. A new nonce is created for each attempt.
2. get_txid(T): returns the txid (the hash of a transaction) of transaction number T from block B.
3. block_height: the current height of the blockchain, which increases with each new block.

✔ The target is then compared with final_output, and smaller values are accepted as proofs.

✔ The initial hash output is used to independently and uniformly select 64 transactions from the blockchain. At each of the 64 steps, hash_outputA is shifted right by one bit to obtain a new number, shifted_A.

✔ A block is chosen by computing shifted_A modulo the total number of blocks, and a transaction is chosen by computing shifted_A modulo the number of transactions within that block.

✔ These txids are also shifted by the same amount as the shifted_A that selected them. Once the 64 txids have been retrieved, they are all XORed together and used, along with the original nonce, as the input to the final hash function.

✔ The original nonce, shifted up into the most significant bits, is needed in the final XOR because very small sets of transactions may not contain enough permutations of txids to satisfy the proof-of-work inequality.

✔ In fact, this algorithm only becomes I/O bound as the blockchain expands in size. In the extreme case of a blockchain with only 1 block and 1 transaction, the entire 64-iteration process can be omitted, and the nonce for final_output can be iterated rapidly, since the txids will always be the same.
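The scheme described above can be sketched in Python. This is a toy model under stated assumptions: SHA-256 stands in for the unspecified hash function, and `blocks` is a hypothetical in-memory stand-in for the blockchain (a list of blocks, each a list of 32-byte txids).

```python
import hashlib
import struct

def attempt_proof(nonce, blocks, target):
    """One attempt at the txid-pointer proof of work sketched above."""
    # Initial hash of the 64-bit nonce (SHA-256 as a stand-in).
    hash_output_a = int.from_bytes(
        hashlib.sha256(struct.pack("<Q", nonce)).digest(), "big")

    acc = 0
    shifted_a = hash_output_a
    for i in range(64):
        shifted_a >>= 1                          # shift right one bit per step
        block = blocks[shifted_a % len(blocks)]  # pick a block...
        txid = int.from_bytes(block[shifted_a % len(block)], "big")  # ...then a tx
        acc ^= txid >> (i + 1)  # txid shifted by the same amount as shifted_A

    # Fold the original nonce into the most significant bits.
    acc ^= nonce << 192
    final_output = int.from_bytes(
        hashlib.sha256(acc.to_bytes(32, "big")).digest(), "big")
    return final_output < target                 # smaller values are accepted
```

Note how, with a single block containing a single transaction, the loop always yields the same accumulator, which is exactly the degenerate case the previous point describes.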

Ethash algorithm:

The proof-of-work algorithm that Ethereum implements:

✔ The Ethash algorithm relies on a pseudorandom dataset initialized by the current blockchain length. This dataset is called the DAG; it is regenerated every 30,000 blocks (roughly every 5 days) and continues to grow in size as the blockchain grows.
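The regeneration schedule can be sketched as follows. This is a sketch, not the production implementation: the 30,000-block epoch length comes from the text above, and real Ethash derives each epoch's DAG from a seed chain hashed with Keccak-256, for which Python's `sha3_256` stands in here.

```python
import hashlib

EPOCH_LENGTH = 30_000  # blocks per DAG epoch, per the text above

def dag_epoch(block_number: int) -> int:
    # The DAG is regenerated whenever this value changes.
    return block_number // EPOCH_LENGTH

def dag_seed(block_number: int) -> bytes:
    # Each epoch's DAG is derived from a seed: 32 zero bytes hashed once
    # per elapsed epoch (sha3_256 stands in for Keccak-256).
    seed = b"\x00" * 32
    for _ in range(dag_epoch(block_number)):
        seed = hashlib.sha3_256(seed).digest()
    return seed
```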

✔ In order for a node to add a block to the Ethereum blockchain, the output produced during the hashing process must be a value below a certain threshold.

Ethash ASIC-resistance:

✔ Proof-of-work mining with Ethereum's algorithm, Ethash, requires retrieving pieces of random data from the DAG, hashing randomly selected transactions from any block on the blockchain, and then returning the result of the hashing process.

✔ Thus, in order to mine on Ethereum, an individual must store the entire DAG so that they can fetch data and compute over selected transactions.

✔ The result of this mining structure is that a miner spends more time reading the DAG than executing the computations fetched from it. This is an intentional design choice aimed at making mining on Ethereum resistant to ASICs (application-specific integrated circuits).

✔ The requirement to hold a large amount of memory during mining means that entities such as mining farms gain little benefit from loading terabytes of memory into their mining devices.

✔ Large-scale miners receive little benefit from doing so, because smaller miners can also purchase terabytes of memory, and the energy cost of that memory is comparable for large-scale and small miners alike.

What is memory hard?

Memory hardness means that performance is limited by how fast the computer can move data around in memory, rather than by how fast it can perform calculations.

✔ Every mixing operation requires a 128-byte read from the DAG. Hashing a single nonce requires 64 mixes, resulting in 128 bytes × 64 = 8 KB of memory read.
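That arithmetic can be checked directly; the hashrate below is an assumed figure, purely to illustrate the memory bandwidth that mixing implies.

```python
PAGE_BYTES = 128       # one DAG page read per mixing operation
MIXES_PER_NONCE = 64   # mixes needed to hash a single nonce

bytes_per_nonce = PAGE_BYTES * MIXES_PER_NONCE
print(bytes_per_nonce)                   # 8192 bytes, i.e. 8 KB per nonce

# Illustrative only: DAG-read bandwidth implied by an assumed 30 Mh/s rig.
hashrate = 30e6
print(bytes_per_nonce * hashrate / 1e9)  # ~245.8 GB/s of reads
```

The bandwidth figure makes the point of the next paragraph concrete: reads on this scale, not arithmetic, dominate the workload.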

✔ Since fetching the DAG pages from memory is much slower than the mixing computation, we’ll see almost no performance improvement from speeding up the mixing computation.

✔ The best way to speed up the Ethash hashing algorithm is therefore to speed up the 128-byte DAG page fetches from memory. Thus, we consider the Ethash algorithm to be memory hard, or memory bound.

✔ The primary reason for constructing a new proof-of-work function, instead of using an existing one, was to attack the problem of mining centralisation, where a small group of hardware companies or mining operations acquires a disproportionately large amount of power to impact or manipulate the network.

✔ The Preprocessed Header (derived from the latest block) and the Current Nonce (the current guess) are combined using a SHA3-like algorithm to create the initial 128-byte mix, called Mix 0.

✔ The Mix is used to compute which 128 byte page from the DAG to retrieve, represented by the Get DAG Page block.

✔ The Mix is combined with the retrieved DAG page. This is done using an Ethereum-specific mixing function to generate the next mix, called Mix 1.

✔ Steps 2 & 3 are repeated 64 times, finally yielding Mix 64.

✔ Mix 64 is post-processed, yielding a shorter, 32-byte Mix Digest.

✔ Mix Digest is compared against the predefined 32 byte Target Threshold.

✔ If Mix Digest is less than or equal to Target Threshold, then the Current Nonce is considered successful and will be broadcast to the Ethereum network.

✔ Otherwise, Current Nonce is considered invalid, and the algorithm is rerun with a different nonce (either by incrementing the current nonce, or picking a new one at random).
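The steps above can be sketched end to end in Python. This is a toy model under stated assumptions: `sha3_256` stands in for Ethereum's Keccak variants, a small list of 128-byte pages stands in for the DAG, and a plain XOR stands in for the FNV-based mixing function that Ethash actually uses.

```python
import hashlib

PAGE_BYTES = 128
ROUNDS = 64

def ethash_attempt(header: bytes, nonce: int, dag: list, target: int) -> bool:
    """Toy walk-through of the mining loop described above."""
    # Step 1: preprocessed header + current nonce -> initial 128-byte Mix 0.
    seed = hashlib.sha3_256(header + nonce.to_bytes(8, "little")).digest()
    mix = (seed * 4)[:PAGE_BYTES]

    for _ in range(ROUNDS):  # Step 4: repeat steps 2 and 3, 64 times.
        # Step 2: use the mix to decide which DAG page to fetch.
        page = dag[int.from_bytes(mix[:4], "little") % len(dag)]
        # Step 3: combine mix and page (XOR stands in for the real
        # FNV-based mixing function).
        mix = bytes(a ^ b for a, b in zip(mix, page))

    # Step 5: post-process Mix 64 into a 32-byte Mix Digest.
    digest = hashlib.sha3_256(mix).digest()
    # Steps 6-8: accept the nonce iff the digest meets the target.
    return int.from_bytes(digest, "big") <= target
```

A mining loop is then just repeated attempts with fresh nonces, e.g. `while not ethash_attempt(header, nonce, dag, target): nonce += 1`.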

Published at Sat, 23 Nov 2019 16:20:58 +0000
