I am a researcher and currently work for the Ethereum Foundation, where I lead an effort to expand the use of formal verification to secure codebases that are critical to Ethereum. I am also interested in the interaction between formal verification and AI, in this and other contexts. This follows on from my past research on how to make systems more open, decentralised, and transparent while preserving user privacy, and on the relation between these social goals and a system's underlying technical mechanisms.
I obtained a PhD with a thesis on the design and use of transparency enhancing technologies based on cryptographic logs, under the supervision of Steven Murdoch in the Information Security Research Group. Before this, I obtained a BSc in Theoretical Physics from Queen Mary University of London and an MASt in Mathematics from the University of Cambridge, where I also spent a summer working on the formal verification of mathematics using Isabelle.
I am currently leading a project aiming to formally verify zkEVMs.
My research on transparency has primarily focused on log-based transparency enhancing technologies.
VAMS, which would allow for publicly verifiable audits of data access requests made for medical or law enforcement purposes, is an example of this. It shows how either Merkle trees or blockchains can be used to log such requests for auditors, and how auditors can publish their audits (e.g., statistics about these requests) in such a way that anyone can verify them, without compromising the privacy of the people whose data was requested.
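The core idea of such log-based designs can be illustrated with a Merkle tree: each logged request becomes a leaf, a short root commits to the whole log, and an inclusion proof lets anyone check that a given request is in the log without seeing the other entries. The sketch below is a minimal, hypothetical illustration of that general technique, not the VAMS implementation; all names and the request data are made up.

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256 as the tree's hash function (an assumption for this sketch).
    return hashlib.sha256(data).digest()

def _next_level(level):
    # Duplicate the last node when a level has an odd number of nodes.
    if len(level) % 2:
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def inclusion_proof(leaves, index):
    # Collect the sibling hash at each level, noting whether it sits
    # to the left of the node being proven.
    proof, level, i = [], [h(leaf) for leaf in leaves], index
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = i ^ 1
        proof.append((level[sibling], sibling < i))
        level = _next_level(level)
        i //= 2
    return proof

def verify(leaf, proof, root):
    # Recompute the path from the leaf to the root using the proof.
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Hypothetical logged requests; the root is what the log operator publishes.
requests = [b"request-1", b"request-2", b"request-3", b"request-4"]
root = merkle_root(requests)
proof = inclusion_proof(requests, 2)
assert verify(b"request-3", proof, root)
```

The privacy property comes from the fact that the proof reveals only sibling hashes, not the other logged requests, while the published root still binds the operator to one fixed log.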
Transparency is important because when an error appears to have been made by either a system or a user, it is not always clear which is the more likely culprit, especially when disputes examine a single instance rather than evaluating the system as a whole. More generally, transparency makes it possible to contest the norms enforced by a system, rather than simply verify that the system has not made an error.
My research on decentralisation and decentralised systems (e.g., cryptocurrencies) has primarily focused on incentives and on game-theoretic approaches to modelling decentralisation and security. In particular, it shows that when long-term incentives for decentralisation are considered, decentralisation can be reliably maintained, although increasing it is difficult.
Looking at specific systems, work that I have been a part of has also shown how to implement smart contracts that make it possible to bribe miners in cryptocurrencies without requiring any trust between the briber and the bribee. Because blocks from one cryptocurrency can be verified within another, bribes paid out on Ethereum can, for example, be made to Bitcoin miners, showing that no cryptocurrency can fully insulate its incentives from other systems.
My co-authors and I have also looked at Ethereum's transaction fee mechanism (EIP-1559) and shown that it may be rational for miners to deviate from the honest mining strategy, mining empty blocks to lower the base fee of future blocks.
alexander.hicks (at) ethereum.org