The capabilities of AI are rapidly approaching those of humans and have already surpassed them in many niche areas. The recent rise of Large Language Models (LLMs) shows that these models are becoming increasingly versatile and seemingly more “generally intelligent”; a large part of that impression comes from the fact that they have mastered the primary interface with humans: language. Although it is not immediately evident that this breakthrough will lead to Artificial General Intelligence (AGI) in the short term, we now have models that are trained to convincingly mimic highly capable humans in digital interactions[1]. Some might argue that the age of AI has begun.
Fig. 1
The performance of deep learning models is improving at an accelerating pace, reaching superhuman levels on benchmarks with ever-increasing speed. Language models have recently made significant advances, performing in the top 20% of humans on the majority of conventional assessments. The rate of progress becomes especially clear when comparing GPT-3.5 to GPT-4, released only a few months apart.
Access to increasingly powerful models is becoming available to individuals in ways that are impossible to control. The Stable Diffusion image generation model and software to generate deep fakes are open source, and Meta’s LLaMA language model has been leaked and can be run on a laptop.
A significant short-term consequence is that, until recently, text-based interaction was deemed sufficient to establish and prove humanness (the famous Turing Test). Bots have been prevalent on social media platforms and involved in manipulation campaigns[2] for quite some time. However, in most instances it was feasible to differentiate them from authentic human users. Modern AI has now either passed the Turing Test or is very close to doing so, which will make it impossible to determine humanness based on intelligence alone. Furthermore, recent impersonations using deep fakes[3] have demonstrated that even video-based attestation of humanness is becoming increasingly unreliable. Consequently, there is no longer a reliable method to verify humanness online.
However, proving humanness in the digital domain will be an essential and likely inevitable tool for empowering individuals, especially in this new chapter of human history. While there are various approaches to how a proof of personhood (PoP) might end up being implemented, it is crucial that such an important infrastructure prioritizes privacy, self-sovereignty, inclusivity and decentralization in order to benefit and protect individuals.
In response to this challenge, the Worldcoin project has initiated an open identity protocol called World ID. The advancement of AI has rendered data that can be digitally generated insufficient for attesting one’s human status. As a result, the focus needs to shift toward verifying humanness through real-world attestations. To evaluate human status based on unique physical characteristics (i.e. biometrics), Tools for Humanity—a technology company contributing to the Worldcoin project—has supported the design of a custom, open-source hardware device. The state-of-the-art device, which uses several neural networks to validate liveness and uniqueness without storing any image data, issues an AI-safe PoP credential on World ID. While there are many ways to attest humanness (and different applications have different requirements), the current state of technology suggests that this approach is the only scalable, fraud resistant and inclusive mechanism for establishing global PoP.
Through the PoP credential, the World ID protocol empowers everyone to prove their humanness online without requiring a third party. The protocol leverages zero-knowledge proofs to maximize privacy and will eventually be governed by the people through World ID itself. Today, initial versions of the hardware device, a mobile client and a deployment mechanism have been implemented, all of which will gradually become decentralized. Applications can interact with this proof on the protocol through the recently launched SDK. The protocol itself is permissionless and designed to eventually support a diverse set of credentials that can be attested by anyone. World ID will be compatible with the verified credentials standard, allowing for the representation of the diversity of an individual's social interactions (soulbound tokens, intersectional social data, etc.).
So far, more than 1.4M people have participated. If successful, World ID will become the largest network of real humans on the internet, accessible to all as a public good.
The need for proof of personhood
Advanced generative AI introduces the need for two mechanisms to improve online fairness, social interaction and trust: (1) Limiting the number of accounts each individual can create to protect against sybil attacks. This is particularly relevant to enabling digital and decentralized governance[4] and fairly distributing scarce resources like universal basic income (UBI), social welfare and subsidies; and (2) preventing AI-generated content, which is virtually indistinguishable from human-created content, from being used to deceive or spread disinformation at scale.
While there is no silver bullet[5], proof of personhood addresses both challenges.
Authentication of accounts via PoP provides natural rate limiting[6], which essentially eliminates sybil attacks. Naturally, people could use their credentials to authenticate bots, but the scale is very limited. For instance, creating 1,000 bot accounts would require finding 1,000 human users willing to consistently verify their authenticity.
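To make this rate limiting concrete, the sketch below (Python, with hypothetical names such as `nullifier_hash` and `SignupRateLimiter`) shows how an application could cap accounts per person, assuming a PoP verifier that already checked a zero-knowledge proof and returned an anonymous, application-scoped identifier for each verified human:

```python
# Minimal sketch of PoP-based rate limiting; all names are hypothetical.
# Assumes an upstream verifier has already checked a zero-knowledge proof of
# personhood and returned `nullifier_hash`: an anonymous, application-scoped
# identifier that is stable for one person but unlinkable across applications.

class SignupRateLimiter:
    def __init__(self, max_accounts_per_person: int = 1):
        self.max_accounts = max_accounts_per_person
        self.accounts_per_nullifier: dict[str, int] = {}

    def try_register(self, nullifier_hash: str) -> bool:
        """Allow a new account only if this person is still below the limit."""
        used = self.accounts_per_nullifier.get(nullifier_hash, 0)
        if used >= self.max_accounts:
            return False  # same person attempting another account: rejected
        self.accounts_per_nullifier[nullifier_hash] = used + 1
        return True


limiter = SignupRateLimiter(max_accounts_per_person=1)
assert limiter.try_register("person-a-nullifier") is True   # first account succeeds
assert limiter.try_register("person-a-nullifier") is False  # duplicate is rejected
```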
Distinguishing between human-created and AI-generated content is more difficult. It’s important to note that content generated or co-authored by AI is not necessarily undesirable—likely the opposite, in fact. It only becomes a problem if it is used to spread disinformation in a credible way and at scale.
Fundamentally, intelligence checks will no longer be effective discriminators for humanity. PoP empowers users to opt for interactions exclusively with authenticated accounts or verified content. Similar to how users can filter various content types on social media (e.g. "Following" and "For You" pages), PoP facilitates filtering for content or accounts that have been confirmed as human. Additionally, it enables the implementation of reputation systems that disincentivize the spread of inauthentic information, regardless of whether it is AI-generated or not. This can also help discourage individuals from authenticating bots.
Ultimately, PoP can be viewed as a fundamental building block for digital identity. One way to visualize this is outlined in Figure 2. The first layer, proof of personhood, establishes an individual's humanness and uniqueness. The second layer, digital authentication, ensures that only the legitimate owner of an identity can use it. This layer addresses the question, "Are you who you say you are?" Finally, digital identity verification, the third layer, focuses on answering "Who are you?".
Fig. 2
Proof of personhood and authentication are the basis of digital identity. Without them, many identity use cases can be easily exploited by attackers, especially in a world where humans become hard to distinguish from AI. If implemented in a secure and private manner by leveraging zero-knowledge proofs and selective disclosure of information, digital identity can significantly empower individuals. Please note that the layers shown in the figure serve only as a visual aid and do not represent a standardized definition of digital identity.
A solid foundation through proof of personhood and digital authentication is crucial to mitigate attacks. World ID seeks to establish a permissionless protocol for digital identity as public infrastructure by starting with the first two layers.
A world with proof of personhood
Upon establishing a global network of genuine, unique human identities, numerous possibilities arise for enhancing various aspects of society.
Redistribute wealth created by systems in the age of AI: As AI advances, fairly distributing access and some of the created value through UBI will play an increasingly vital role in counteracting the concentration of economic power[7]. To ensure that each individual registers only once and to guarantee equitable distribution, a global proof of personhood protocol is needed.
Advanced spam filters: By exclusively processing messages verified by humans, proof of personhood lays the foundation for advanced spam filtering, decluttering Twitter timelines and obviating the need for CAPTCHAs or browser DDoS protection, providing a smooth user experience.
Reputation systems: Proof of personhood enables the creation of reputation scores by effectively preventing the creation of multiple accounts. This makes global, frictionless and uncollateralized lending possible which particularly benefits those without access to traditional financial systems.
Governance: Digital collective decision-making presents significant challenges. Web3 projects often rely on token-based governance (one-token, one-vote), which may exclude some individuals and disproportionately favor those with greater economic power. As of today, only a few projects have explored actual democratic voting, such as Optimism’s Citizens’ House. A reliable, sybil-resistant mechanism simplifies the implementation of democratic governance models like “one-human, one-vote” and expands the design space for novel structures based on unique human identities, such as quadratic voting. The Worldcoin project, by utilizing proof of personhood, enables the protocol to be truly governed by the people for the first time. This ensures that the project benefits everyone. While the precise governance structure, such as direct voting or the election of representatives, warrants thorough consideration, this represents a paradigm shift that enables true decentralization. Another particularly important application is AI. To ensure that the benefits of AI are shared among all people, rather than being restricted to a privileged few, enabling inclusive participation in its governance is essential.
Authentication: Biometric-based authentication can be a part of solving the growing issue of digital identity theft, which may result in severe consequences for affected individuals. In 2021 alone, data breaches impacted 300 million people.
Equitable distribution of scarce resources: Crucial elements of modern society, including subsidies and social welfare, can be rendered more equitable by employing proof of personhood. This is particularly pertinent in developing economies, where social benefit programs confront the issue of resource capture—fake identities employed to acquire more than a person’s fair share of resources. In 2021, India saved $5 billion in subsidy programs by implementing a biometric-based system that reduced fraud. A decentralized proof of personhood protocol can extend similar benefits to any project or organization globally, allowing for greater value-sharing with users and incentive alignment.
Looking back, it will be difficult to believe that there was a time when there was no simple, privacy-preserving means to authenticate on the internet. This blog post outlines the need for a custom biometric device by exploring various methods of establishing proof of personhood and digital authentication.
Proof of personhood requirements
The security requirements for PoP mechanisms can vary depending on the specific application and the risk of potential fraud. Further, some applications might opt to accept multiple PoP mechanisms for registration, allowing broader coverage, as long as they are comfortable with users registering multiple times using different PoP credentials. However, for high-stakes use cases such as subsidies, global and equitable airdrops and UBI, a single, highly secure and inclusive mechanism to prevent multiple registrations is needed. Depending on the context, voting and reputation scores may also necessitate more stringent security measures.
When evaluating approaches to build PoP on a global scale, there are several important considerations:
Privacy. Above all, privacy cannot be compromised in the name of convenience. All interactions should be anonymous by default and support multiple profiles across different platforms that aren’t publicly linked with each other.
Self-sovereignty. Users must always be in control of their accounts, their data and how it is shared.
Fraud resistance. The mechanism must be able to prevent duplicate registrations. An unreliable mechanism that allows for multiple registrations would severely restrict the design space of possible applications, eliminating trust in use cases such as democratic governance, reputation systems and fair distribution of scarce resources (e.g. UBI, government subsidies, etc.).
Inclusivity and scalability. A global PoP should include everyone. This means the mechanism should be able to distinguish between billions of people. There should be a feasible path to implementation at a global scale and people should be able to participate regardless of nationality, race, gender or economic means.
Decentralization. PoP is foundational infrastructure that should not be controlled by a single entity to maximize resilience and integrity.
Continuity. Once a proof of personhood is granted, it is crucial to ensure that it is difficult to sell or steal, yet easy to recover. While the primary need for recovery is self-explanatory, both recovery and authentication help to deter selling PoP credentials and ensure that only the legitimate owner can utilize them for authentication purposes. Despite these precautions, it is important to acknowledge that they do not entirely safeguard against collusion or other attempts to bypass the one-person-one-proof principle. To address these challenges, innovative ideas in mechanism design and the attribution of social relationships will be necessary.
Potential proof of personhood mechanisms
There are different mechanisms to establish global PoP. The following table compares different approaches and their effectiveness in addressing the requirements outlined above.
Fig. 3
An overview of proof of personhood mechanisms reveals that biometrics is the only method that can fulfill all essential requirements, provided the system is implemented appropriately.
Online accounts
The simplest attempt to establish PoP at scale involves using existing accounts such as email, phone numbers and social media. This method fails, however, because one person can have multiple accounts on each kind of platform. The (in)famous CAPTCHAs, which are commonly used to prevent bots, are also ineffective here, because any human can pass any number of them. Even the most recent implementations are of limited use: virtually all major providers have switched from “label the traffic lights” challenges to so-called silent CAPTCHAs (e.g. reCAPTCHA v3), which essentially rely on an internal reputation system.
In conclusion, current methods for deduplicating existing online accounts (i.e. ensuring that individuals can only register once), such as account activity analysis, lack the necessary fraud resistance to withstand substantial incentives. This has been demonstrated by large-scale attacks targeting even well-established financial services operations.
Official ID verification (KYC)
Online services often request proof of ID (usually a passport or driver's license) to comply with Know your Customer (KYC) regulations. In theory, this could be used to deduplicate individuals globally, but it fails in practice for several reasons.
KYC services are simply not inclusive on a global scale; more than 50% of the global population does not have an ID that can be verified digitally. Further, it is hard to build KYC verification in a privacy-preserving way: when using KYC providers, sensitive data needs to be shared with them. This can be addressed using zkKYC and NFC-readable IDs. The relevant data can be read by the user's phone and verified locally, as it is signed by the issuing authority. Proving unique humanness can then be achieved by submitting a hash derived from the information on the user's ID, without revealing any private information. The main drawback of this approach is that NFC-readable IDs are considerably less prevalent than regular IDs.
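As a rough sketch of that deduplication idea (not a production zkKYC design: field names and values are placeholders, and both the issuer-signature check over the NFC-readable data and the zero-knowledge wrapper are omitted), the service could be given only a salted hash of stable ID attributes:

```python
import hashlib

def id_commitment(document_number: str, date_of_birth: str,
                  issuing_country: str, app_salt: bytes) -> str:
    """Derive an opaque identifier from stable ID fields (illustrative only).

    The verifying service only ever sees this hash, so it can reject duplicate
    registrations without learning the underlying personal data. Field choice
    and salting are placeholders, not a production design.
    """
    payload = "|".join([document_number, date_of_birth, issuing_country]).encode()
    return hashlib.sha256(app_salt + payload).hexdigest()

registered: set[str] = set()
commitment = id_commitment("L898902C3", "1974-08-12", "UTO", app_salt=b"example-app")
already_enrolled = commitment in registered  # duplicate check without raw ID data
registered.add(commitment)
```

Note that if the same person is later issued a new document with a different number, the hash changes as well, a limitation closely related to the matching problems discussed below.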
Where NFC-readable IDs are not available, ID verification can be prone to fraud, especially in emerging markets. IDs are issued by states and national governments, with no global system for verification or accountability. Many verification services (i.e. KYC providers) rely on data from credit bureaus that is accumulated over time, hence stale, without the means to verify its authenticity with the issuing authority (i.e. governments), as there are often no APIs available. Fake IDs, as well as real data to create them, are easily available on the black market. Additionally, due to their centralized nature, corruption at the level of the issuing and verification organizations cannot be eliminated.
Even if the authenticity of provided data can be verified, it is non-trivial to establish global uniqueness among different types of identity documents: fuzzy matching between documents of the same person is highly error-prone. This is due to changes in personal information (e.g. address), and the low entropy captured in personal information. A similar problem arises as people are issued new identity documents over time, with new document numbers and (possibly) personal information. Those challenges result in large error rates both falsely accepting and rejecting users. Ultimately, given the current infrastructure, there is no way to bootstrap global PoP via KYC verification due to a lack of inclusivity and fraud resistance.
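To illustrate why the fuzzy matching mentioned above is so error-prone, the toy example below uses Python's difflib as a stand-in for a real matcher; records belonging to the same person can score lower than records belonging to two different people, so no single threshold avoids both false accepts and false rejects:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1], standing in for a fuzzy matcher."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# The same person across two documents (transliterated name, new address)
# can score *lower* than two different people sharing similar attributes:
same_person = similarity("Mohammed Al-Hussein, 12 Park Rd",
                         "Muhammad Alhussain, 48 Hill St")
different_people = similarity("Maria Garcia, 1990-05-01",
                              "Mario Garcia, 1990-05-01")
print(same_person, different_people)  # no threshold separates the cases cleanly
```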
Web of Trust
The underlying idea of a “web of trust” is to verify identity claims in a decentralized manner.
For example, in the classic web of trust employed by PGP, users meet for in-person “key signing parties” to attest (via identity documents) that keys are controlled by their purported owners. More recently, projects like Proof of Humanity are building webs of trust for Web3. These allow decentralized verification using face photos and video chat, avoiding the in-person requirement.
Because these systems rely heavily on individuals, however, they are susceptible to human error and vulnerable to sybil attacks. Requiring users to stake money for their own verification, or for every new user they verify, can increase security. However, doing so increases friction, as users are penalized for mistakes and are therefore disincentivized to verify others. Further, this decreases inclusivity, as not everyone might be willing or able to lock funds. There are also concerns related to privacy (e.g. publishing face images or videos) and susceptibility to fraud using e.g. deep fakes, which make these mechanisms fail to meet some of the design requirements mentioned above.
Social graph analysis
The idea of social graph analysis is to use information about the relationships between different people (or the lack thereof) to infer which users are real.
For example, one might infer from a relationship network that users with more than 5 friends are more likely to be real users. Of course, this is an oversimplified inference rule, and projects and concepts in this space, such as EigenTrust, Bright ID and soulbound tokens (SBTs)[8] propose more sophisticated rules. Note that SBTs aren’t designed to be a proof of personhood mechanism but are complementary for applications where proving relationships rather than unique humanness is needed. However, they are sometimes mentioned in this context and are therefore relevant to discuss.
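To illustrate the flavor of such rules, here is a toy trust-propagation sketch loosely inspired by EigenTrust (it is not the actual algorithm of any project mentioned above): trust flows from a small pre-trusted set through the relationship graph, so an isolated sybil cluster accumulates little global trust.

```python
import numpy as np

def eigentrust_like(local_trust: np.ndarray, pretrusted: np.ndarray,
                    alpha: float = 0.15, iters: int = 50) -> np.ndarray:
    """Toy global-trust computation via trust propagation (illustrative only).

    local_trust[i, j] encodes how much peer i trusts peer j. Each row is
    normalized so every peer distributes one unit of trust, and the result is
    biased toward a small set of pre-trusted peers to resist sybil clusters.
    """
    C = local_trust / local_trust.sum(axis=1, keepdims=True)
    t = pretrusted.astype(float)
    for _ in range(iters):
        t = (1 - alpha) * (C.T @ t) + alpha * pretrusted
    return t

# Peers 0 and 1 trust each other and are pre-trusted; peers 2 and 3 form an
# isolated sybil cluster that only trusts itself.
local = np.array([[0, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
pre = np.array([0.5, 0.5, 0.0, 0.0])
print(eigentrust_like(local, pre))  # trust stays concentrated on peers 0 and 1
```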
Underlying all of these mechanisms is the observation that social relations constitute a unique human identifier if it is hard for a person to create another profile with sufficiently diverse relationships. If it is hard enough to create additional relationships, each user will only be able to maintain a single profile with rich social relations, which can serve as the user's PoP. One key challenge with this approach is that the required relationships are slow to build on a global scale, especially when relying on parties like employers and universities. It is a priori unclear how easy it is to convince institutions to participate, especially initially, when the value of these systems is still small. Further, it seems inevitable that in the near future AI (possibly assisted by humans acquiring multiple “real world” credentials for different accounts) will be able to build such profiles at scale. Ultimately, these approaches require giving up the notion of a unique human entirely, accepting the possibility that some people will be able to own multiple accounts that appear to the system as individual unique identities.
Therefore, while valuable for many applications, the social graph analysis approach also does not meet the fraud resistance requirement for PoP laid out above.
Biometrics
Each of the systems described above fails to effectively verify uniqueness on a global scale. The only mechanism that satisfies all PoP requirements, including reliably differentiating people in untrusted environments at scale, is biometrics. In fact, the Indian government has proven the effectiveness of biometrics by implementing the Aadhaar system to deduplicate enrollments in social welfare programs, saving $5B per year by reducing fraud. Importantly, while it may sound counterintuitive, biometric systems can be implemented in a highly privacy-preserving manner, as no images need to be saved. Even decentralizing the verification system is possible.
Biometric modalities
Different systems have different requirements. Authenticating a user via Face ID as the rightful owner of a phone is very different from verifying billions of people as unique. The main differences in requirements relate to accuracy and fraud resistance. With Face ID, biometrics are essentially being used as a password, with the phone performing a single 1:1 comparison against a saved identity template to determine if the user is who they claim to be. Establishing global uniqueness is much more difficult: the biometrics have to be compared against (eventually) billions of previously registered users in a 1:N comparison. If the system is not accurate enough, an increasing number of users will be incorrectly rejected.
Fig. 4
When it comes to biometrics, there are two different modes to consider. The simpler mode is 1:1 authentication, which involves comparing a user's template against a single previously enrolled template. This is commonly used in technologies such as Face ID, which compares an individual's identity against a single facial template. However, for global proof of personhood, 1:N verification is required. This mode involves comparing a user's template against a large set of templates to ensure that there are no duplicate registrations.
The error rates, and therefore the inclusivity of the system, are largely determined by the statistical characteristics of the biometric features being used. Iris biometrics outperform other biometric modalities: false match rates beyond one in a trillion[9] were already achieved two decades ago[10], even without recent advancements in AI. This is several orders of magnitude more accurate than the current state of the art in face recognition. Moreover, the structure of the iris exhibits remarkable stability over time[11].
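The practical consequence of these false match rates can be estimated with a back-of-the-envelope calculation, shown below using the approximation from reference [9] and illustrative (assumed, not measured) FMR values:

```python
# Back-of-the-envelope scaling of deduplication errors with enrollment size.
# The probability that a *new, legitimate* person falsely matches at least one
# of N enrolled templates is 1 - (1 - FMR)**N, roughly N * FMR for small FMR
# (see reference [9]). The FMR values below are illustrative assumptions only.

def false_match_probability(fmr: float, n_enrolled: int) -> float:
    return 1.0 - (1.0 - fmr) ** n_enrolled

for label, fmr in [("face-like FMR ~1e-6", 1e-6), ("iris-like FMR ~1e-12", 1e-12)]:
    for n in (1_000_000, 1_000_000_000):
        p = false_match_probability(fmr, n)
        print(f"{label}, N={n:>13,}: P(wrongly flagged as duplicate) = {p:.4%}")
```

Under these assumptions, a face-recognition-grade FMR would wrongly flag most legitimate new users as duplicates at a scale of a billion enrollments, while an iris-grade FMR keeps that probability well below one percent.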
Fig. 5
An overview of different biometric modalities reveals that iris biometrics is the only modality that can fulfill all essential requirements. While each modality has its advantages and disadvantages, iris biometrics stands out as the most reliable and accurate method for verification of humanness and uniqueness on a global scale.
Furthermore, the iris is hard to modify. Fingerprints, by contrast, are easy to modify through cuts, and imaging them accurately can be difficult, as the ridges and valleys can wear off due to manual labor. Moreover, using all ten fingerprints for deduplication, or combining different biometric modalities, is vulnerable to combinatorial attacks in which existing identities are combined to create new ones (e.g. by combining fingerprints from different people). DNA sequencing could in theory provide high enough accuracy, but DNA reveals a lot of additional private information about the user (at least to the party that runs the sequencing); it is also hard to scale from a cost perspective, and implementing reliable liveness detection is hard. Facial biometrics offer significantly better liveness detection than DNA sequencing, but their accuracy is much lower than that of iris biometrics. This would result in a growing number of erroneous collisions as the number of registered users increases: even under optimal conditions, at a global scale of billions of people, a double-digit percentage of legitimate new users would be rejected, compromising the inclusivity of the system. Nevertheless, even iris biometrics aren't perfect; there will always be a small error rate. Whether a system that doesn't reject even a single person at the scale of all of humanity can be built is an open research question, but there are reasons to believe it can be achieved.
Verification hardware
In terms of the biometric verification itself, the fastest and most scalable path would be to use smartphones. However, there are two key challenges with this approach. First, smartphone cameras are insufficient for iris biometrics due to their low resolution across the iris, which decreases accuracy. Further, imaging in the visible spectrum can result in specular reflections on the lens covering the iris and low reflectivity of brown eyes (most of the population) introduces noise.
Second, the achievable security bar is very low. For PoP, the important part is not identification (i.e. “Is someone who they claim they are?”), but rather proving that someone is not part of the set yet (i.e. “Is this person already registered?”). A successful attack on a PoP system does not necessitate the attacker’s impersonation of an existing individual, which is a challenging requirement that would be needed to unlock someone's phone. It merely requires the attacker to look different from everyone who has registered so far. Phones are missing multi-angle and multi-spectral cameras as well as active illumination to detect so-called presentation attacks (i.e. spoof attempts) with high confidence. A widely-viewed video demonstrating an apparently effective method for spoofing Samsung’s iris recognition illustrates how straightforward such an attack could be in the absence of capable hardware.
Further, a trusted execution environment would need to be established in order to ensure that registrations originated from legitimate devices (not emulators). While some smartphones contain dedicated hardware for performing such actions (e.g., the Secure Enclave on the iPhone, or the Titan M chip on the Pixel), most smartphones worldwide do not have the hardware necessary to verify the integrity of the execution environment. Without those security features, basically no security can be provided and spoofing the image capture as well as the enrollment request is straightforward for a capable attacker. This would allow anyone to generate an arbitrary number of synthetic registrations. Therefore, custom hardware is required.
Recovery and authentication
In addition to the initial registration process (the deduplication step), biometrics can enable continuity. Even within a decentralized system, recovery can be designed so that individuals can effortlessly regain access to their PoP through their biometrics. Moreover, biometrics can serve as proof of ownership, a concept often encountered in everyday situations: when verifying someone's identity, the examiner not only inspects the authenticity of the ID but also confirms the person presenting it matches the photo on the ID. Facial recognition performed locally on the user’s phone, similar to Face ID, can be utilized to authenticate users and ensure that only the legitimate owner of the PoP credential can use it to authenticate. Secure and seamless proof of ownership can be achieved by implementing local zero-knowledge proofs on the user's device, which use signed image data from the custom biometric device to extend the security of trusted hardware to the user's phone.
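The provenance check described above can be sketched as follows. This is a minimal illustration using Ed25519 signatures, not the actual protocol of the hardware device; key management and the zero-knowledge proof itself are omitted.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- On the biometric device (simplified): sign the captured payload. ---
device_key = Ed25519PrivateKey.generate()   # in practice held in secure hardware
device_pubkey = device_key.public_key()
captured_payload = b"hash of the biometric data captured at verification time"
signature = device_key.sign(captured_payload)

# --- On the user's phone: check provenance before using the payload, e.g. as
# --- a private input to a locally generated zero-knowledge proof of ownership.
try:
    device_pubkey.verify(signature, captured_payload)
    payload_is_trusted = True    # provably produced by the known device key
except InvalidSignature:
    payload_is_trusted = False   # reject data not attributable to trusted hardware
```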
Making proof of personhood a reality
In line with the rationale presented in this blog post, the Worldcoin project has established a proof of personhood mechanism based on a custom hardware device using iris biometrics as it is the only approach to ensure inclusivity (i.e. everyone can sign up regardless of their location or background) and fraud resistance, promoting fairness for all participants. The hardware device issues AI-safe proof of personhood credentials. The issuance of the credential is privacy-preserving, as no images need to be saved. Using the credential reveals no information about the individual, as the protocol employs zero-knowledge proofs. Further details can be found in the dedicated privacy blog post. Future development by the community can enable continuity by allowing users to authenticate themselves, ensuring that only they can use their credentials. Additionally, a recovery process that solely relies on biometrics, without requiring any memorization, can render the credentials impossible to lose.
The proof of personhood credential forms the foundation for the World ID protocol, an open, permissionless identity protocol that empowers individuals to prove claims about themselves (i.e., credentials which can be issued by anyone) in a self-sovereign manner. The protocol also plans to support verified credentials, and its dependencies will be entirely decentralized.
Moreover, the Worldcoin project proposes an initial implementation of a mobile client for the protocol and a deployment mechanism through independent operators. Both aspects will be elaborated upon in upcoming blog posts.
Utilizing proof of personhood, Worldcoin is initiating a global identity and financial network as a public utility, giving ownership to individuals irrespective of nationality or background and accelerating the transition to a future that welcomes and benefits every person on the planet. The identity layer will enable humans to distinguish other humans from advanced AI online and lay the foundation for a global, digital identity that empowers individuals and enables organizations. Combined with the financial layer, it makes it possible to distribute wealth and to build the infrastructure for AI-financed, global, non-state UBI. Given the current pace of progress, the latter may be necessary sooner rather than later, and it is important to have the infrastructure in place when needed.
To date, over 1.4 million people have participated in the first, small-scale phase of the protocol's inception. The project will soon transition to the next phase.
References
- 1."participants in the study could only distinguish between human or AI text with 50-52% accuracy, about the same random chance as a coin flip."; https://www.pnas.org/doi/10.1073/pnas.2208839120
- 2.
- 3.Facial geometry mapping and audio style transfer facilitate digital impersonation via audio and video deep fakes. Notably, in 2022, Vitali Klitschko, the mayor of Kyiv, was impersonated by a deep fake. The attacker managed to conduct video conferences with several mayors of European capitals for up to 15 minutes. Some of the attacks were only uncovered days after the call.
- 4.Democratic voting (one person one vote), quadratic voting, square root voting or any other weighting that requires the notion of a unique person see https://vitalik.ca/general/2021/08/16/voting3.html. Note that this is unlikely to replace the need to nevertheless prove relationships for certain applications.
- 5.
- 6.
- 7.
- 8.
- 9.Note that the FPR (the false positive rate of a 1:N comparison against everyone who is enrolled), which can be approximated as N*FMR (the false match rate of each individual 1:1 comparison), increases with the number of enrolled individuals. Therefore, a false match rate far beyond one in one billion is required to reach a global scale.
- 10.
- 11.Although minor aging effects can be observed, they are negligible and considerably less significant compared to the variability within the image capture process (source). Employing optimized hardware and software can improve image capture consistency.
Disclaimer
The above content speaks only as of the date indicated. Further, it is subject to risks, uncertainties and assumptions, and so may be incorrect and may change without notice. A full disclaimer can be found in our Terms of Use and Important User Information can be found on our Risks page.