
Nominating and Validator Selection on Polkadot

July 1, 2021 in NPoS, Polkadot, Research, Staking, Validators

By Jonas Gehrlein, Web3 Foundation Research Scientist

Introduction

The Polkadot network and its wild cousin Kusama are decentralized systems run by a large number of servers (i.e., nodes) that cannot be controlled by a single entity. This permissionless design offers more freedom and inclusion than centralized systems such as Google and Facebook. Whereas those centralized systems base their trustworthiness on their authority, decentralized systems draw security from rigorous economic incentive schemes. Proof-of-stake networks emerged as a prominent variant of this concept. They require nodes participating in consensus, often called validators, to put monetary resources at risk as a security deposit.

A key question is how such a network determines who may act as a validator. Here, Polkadot and Kusama utilize the Nominated Proof-of-Stake (NPoS) protocol to calculate the active set of validators based on the total amount of nominated stake backing them. This includes both the validator's own stake and that of other token holders who back them (nominators). Specifically, this protocol partitions all the nominators' and validators' stake into a fixed number of stake pools – one per validator – so that the stake pool sizes are as evenly distributed as possible. In contrast to other proof-of-stake systems, this advanced mechanism guarantees proportional representation[1] of minorities’ preferences, which improves fairness and further democratizes the process. It also empowers the users of the network to take part in the regular election of validators and thereby have a direct influence on the active set.
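The on-chain election uses an optimized, Phragmén-based method, but the balancing idea can be pictured with a much simpler, hypothetical sketch: walk over the nominators and assign each bond to whichever of their approved validators currently has the smallest pool. The names, numbers, and greedy heuristic below are illustrative assumptions only; the real algorithm can also split a single bond across several validators and then rebalances the result.

```rust
use std::collections::HashMap;

/// A nominator approves a set of validators and bonds some stake.
/// Purely illustrative types; not the on-chain data structures.
struct Nomination {
    stake: u128,
    approves: Vec<&'static str>,
}

/// Toy heuristic: give each bond to whichever approved validator currently
/// has the least backing. The actual NPoS election uses a Phragmén-based
/// method that can also split bonds and rebalance, so treat this only as an
/// illustration of the "as even as possible" goal.
fn assign_evenly(
    nominations: &[Nomination],
    validators: &[&'static str],
) -> HashMap<&'static str, u128> {
    let mut pools: HashMap<&'static str, u128> =
        validators.iter().map(|v| (*v, 0u128)).collect();

    for n in nominations {
        // Pick the approved validator with the currently smallest pool.
        let target = n
            .approves
            .iter()
            .filter(|v| pools.contains_key(*v))
            .min_by_key(|v| pools[*v])
            .copied();
        if let Some(t) = target {
            *pools.get_mut(t).unwrap() += n.stake;
        }
    }
    pools
}

fn main() {
    let validators = ["alice", "bob", "charlie"];
    let nominations = [
        Nomination { stake: 100, approves: vec!["alice", "bob"] },
        Nomination { stake: 60, approves: vec!["bob", "charlie"] },
        Nomination { stake: 40, approves: vec!["alice", "charlie"] },
    ];
    for (validator, pool) in assign_evenly(&nominations, &validators) {
        println!("{validator}: backed by {pool}");
    }
}
```

Even this toy version shows the intended effect: bonds drift towards under-backed candidates, so the pools end up more even than if every nominator simply backed the single most popular validator.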

A complementary part of this election algorithm is an economic incentive scheme that rewards validators who follow the protocol correctly and honestly and punishes malicious (or neglectful) behavior. To motivate nominators to select only suitable and trustworthy validators, they share the same consequences as their validators. That means that they get rewarded for the validator's good behavior and fined ("slashed") for bad behavior. These key concepts ensure that, by the wisdom of the crowd and market forces, the nominators contribute to securing the network by curating the active set of validators. The votes of nominators can thereby be regarded as affirmations of the credibility of the respective validators. In the long run, competent, trustworthy, and honest validators will remain active, while unsuitable validators will lose backing and will eventually be removed from the active set.

Nominating and validator selection are crucial parts of users' interaction with the protocol. This post presents background information on nominating and things to consider when selecting validators. The aim is to provide a perspective on building and maintaining a trusting relationship with validators and to illustrate some of the trade-offs that nominators face when selecting suitable validators.

Nominating means trusting

Staking rewards do not come for free. One part of the reward compensates nominators for the effort they put into researching available validators – both active and inactive – and making a well-informed and diversified selection. Ideally, this selection includes the maximum number of selectable validators, which benefits the decentralization of the network and gives the nominator the highest chance that at least one of their nominated validators is active, so that they receive staking rewards continuously.

Another part of the nominator's reward is a form of compensation for locking their tokens to maintain the security of the system, which means taking the risk of getting slashed. This resembles one of the fundamental principles for efficient systems in economics: risk and reward go hand-in-hand. In essence:

Nominating can best be described as trusting validators to such an extent that a nominator is willing to bet their stake on the expectation that those validators will act in their interest.

But what exactly are the interests of nominators, and what behavior or criteria should they expect from validators? First, nominators must expect their selected validators to act according to the rules of the network. Breaking those rules includes malicious behavior, where an adversarial validator (or cartel of validators) runs modified software in an attempt to double-spend or build a competing fork.

Second, nominators must expect validators to be sufficiently competent to maintain their hardware and handle the infrastructure, preventing downtime and unintended equivocations and thereby guaranteeing availability and constant block production. Third, nominators must expect their validators to honor their implicit agreement not to increase the commission significantly without giving notice, and to pay out the accumulated rewards frequently.

Building trust

After establishing what behavior or criteria a nominator should expect from validators, it’s important to have a closer look at the trusting part of the definition above and ask: how can this trust be built and maintained? There are several ways that nominators can gain trust in validators and, accordingly, that validators can signal their trustworthiness.

Communication-based trust

Transparent communication and active engagement between nominators and validators play an important role in a trusting relationship between the two actors. A good starting point for this is the on-chain identity, which, if set by a validator, can reveal more detailed information such as websites and social media groups, and can serve as a good general point of contact. This enables nominators to actively engage with their validators, ask questions, and see if their vision and values align.

Ideally, validators signal their competency by communicating with the community and sharing insights about their operations and infrastructure. For example, information about where nodes are hosted and which security procedures are in place can be used by nominators to compare different validators and make informed nominations. Eventually, hosting meetups and events (in-person or digitally) between validators and their (future) nominators is a desirable goal to build long-term relationships.

Reputation-based trust

Another source of trust can be a good overall reputation. On the one hand, that can include the off-chain behavior of validators as active community members: contributing improvements to codebases, providing educational resources, and actively helping nominators in community channels. Moreover, some entities run their validation services across multiple PoS blockchains, which can further increase their credibility.

On the other hand, historic on-chain metrics contribute to the good reputation of a validator: a consistent validating history without slashes, around-average block production (era points), no drastic commission changes, and reliable uptime. However, among all the criteria worth researching, historic metrics are probably the hardest to track and require the most effort.
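Since such a track record is tedious to assemble by hand, a nominator (or a tool built for them) might reduce a validator's history to a few simple numbers. The per-era record below and the three summary signals – slash count, average era points while active, and the largest single-era commission jump – are assumptions made for this sketch, not on-chain data structures.

```rust
/// Illustrative per-era record for one validator; the fields are assumptions
/// for this sketch, not an on-chain data structure.
struct EraRecord {
    active: bool,
    era_points: u32,
    was_slashed: bool,
    commission_permill: u32, // commission expressed in parts per million
}

/// Reduce a history to three signals: number of slashes, average era points
/// while active, and the largest single-era commission increase.
fn summarize(history: &[EraRecord]) -> (usize, f64, i64) {
    let slashes = history.iter().filter(|e| e.was_slashed).count();

    let active: Vec<&EraRecord> = history.iter().filter(|e| e.active).collect();
    let avg_points = if active.is_empty() {
        0.0
    } else {
        active.iter().map(|e| e.era_points as f64).sum::<f64>() / active.len() as f64
    };

    let max_commission_jump = history
        .windows(2)
        .map(|w| w[1].commission_permill as i64 - w[0].commission_permill as i64)
        .max()
        .unwrap_or(0);

    (slashes, avg_points, max_commission_jump)
}

fn main() {
    let history = [
        EraRecord { active: true, era_points: 5_200, was_slashed: false, commission_permill: 30_000 },
        EraRecord { active: false, era_points: 0, was_slashed: false, commission_permill: 30_000 },
        EraRecord { active: true, era_points: 6_100, was_slashed: false, commission_permill: 50_000 },
    ];
    let (slashes, avg_points, jump) = summarize(&history);
    println!("slashes: {slashes}, avg era points: {avg_points:.0}, max commission jump: {jump} ppm");
}
```

Whether a given commission jump or era-points average is acceptable remains a judgment call; the point is only that the raw history becomes comparable across validators.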

Economic-based trust

Even without knowing much about a validator’s identity or history, observing some economic signals of trustworthiness is possible. One important metric is the validator’s self-nomination in the form of self-stake / own stake. This metric indicates that the validator takes the same costly bet on their own success as the rest of their nominators. Additionally, the signaling value of self-stake is amplified by the fact that it can only be used to self-nominate, and yield would potentially be wasted if the validator is not elected (this argument especially holds for waiting validators).

By self-electing, a validator incurs the same risk of slashing as their nominators, which increases the incentive to behave properly. From a game-theoretic perspective, and in contrast to other factors such as communication (which has to be considered cheap talk[2]), self-stake is a costly signal and represents, without a doubt, the “skin in the game” of validators. Related to this, the overall backing by other nominators (i.e., total stake minus the self-stake) can be regarded as a measure of aggregated trust towards a validator. However, potential distortions caused by herding and information cascades mean that each nominator should still do their own research.

Another valuable economic component is the commission, i.e., the service fee validators charge on nominators' rewards. Interestingly, observing active validators with a substantial commission could improve their trustworthiness, because it means that they are competitive enough to be in the active set despite potentially offering lower rewards. In addition, validators with a higher commission, all other things being equal, have more skin in the game, as they forgo higher earning potential if they misbehave.
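To make these economics concrete, here is a small worked sketch of how one era's payout to a single validator's pool might be divided, assuming the commonly described rule that the validator first takes the commission off the top and the remainder is shared pro rata by stake (with the validator's self-stake treated like any other backing). The numbers are invented for illustration; consult current documentation for the exact payout mechanics.

```rust
/// Split one era's payout for a single validator pool.
/// Assumed rule of thumb: the validator takes `commission` off the top,
/// and the remainder is distributed pro rata to stake, with the
/// validator's own stake treated like any other backer's stake.
fn split_payout(
    pool_payout: f64,
    commission: f64, // e.g. 0.05 for 5%
    own_stake: f64,
    nominator_stakes: &[f64],
) -> (f64, Vec<f64>) {
    let total_stake: f64 = own_stake + nominator_stakes.iter().sum::<f64>();
    let after_commission = pool_payout * (1.0 - commission);

    let validator_share = pool_payout * commission + after_commission * own_stake / total_stake;
    let nominator_shares = nominator_stakes
        .iter()
        .map(|s| after_commission * s / total_stake)
        .collect();

    (validator_share, nominator_shares)
}

fn main() {
    // 100 DOT payout, 5% commission, 1,000 DOT self-stake, two nominators.
    let (validator, nominators) = split_payout(100.0, 0.05, 1_000.0, &[3_000.0, 6_000.0]);
    println!("validator gets {validator:.2}, nominators get {nominators:?}");
}
```

In this example the validator keeps 14.50 DOT (5 DOT of commission plus the pro-rata share on their own stake) and the nominators receive 28.50 and 57.00 DOT; raising the commission shifts the split towards the validator while lowering every nominator's return.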

These rules of thumb are not completely separate from one another and can overlap in many instances. Ideally, nominators should aim for a good balance between different signals of trust and weigh them according to personal preferences and experiences.

Validator selection

Now that we know what behavior nominators should expect from their validators and how a trusting relationship can be built, the final step is to ask how to make a good validator selection. Unfortunately, there’s no one-size-fits-all recommendation. As seen above, there are several characteristics a validator can have, expressed to varying extents in on-chain variables and off-chain behavior. A “perfect balance” cannot be determined in general, and especially not for each individual, as personal preferences differ. Thus:

Validator selection is the evaluation of trade-offs based on a nominator’s personal preferences and beliefs.

To illustrate this point, let us have a look at a few examples of frequent trade-offs when evaluating validators:

  • Reputation vs. skin in the game: The perceived reputation of a validator (i.e., name and standing in the community) could be a valuable signal of their competency and trustworthiness. However, this often conflicts with the amount of self-stake that such a validator has. Some nominators trust the reputation of that validator, while others put more weight on the skin in the game and opt for alternatives with higher self-stake.

  • Total stake vs. performance: Nominators are paid according to the relative share of their stake to the total stake of a validator. This effectively means that nominating a validator with a high total stake reduces expected staking rewards (ignoring commission and block-production), while high total backing could be a valuable signal of generally high trust by the community. Some individuals put more emphasis on the aggregated trust, while others look for higher payoffs by backing validators with less total stake.

  • Operator size vs. risk of super-linear slashing: The number of nodes a single entity runs on the network could be a signal of high professionalism in operations and processes and make nominating those larger operators attractive. However, super-linear slashing explicitly scales with the number of validators that fail simultaneously, and such correlated failures are arguably more probable when multiple nodes run on the same or similar infrastructure and processes (see the sketch after this list). The sweet spot between operator size and the higher risk of super-linear slashing is impossible to generalize and must be determined by every individual nominator.
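To illustrate why correlated failures matter, the sketch below evaluates the slash-fraction curve commonly cited for equivocation offenses, min((3k/n)², 1), where k is the number of validators offending at the same time and n is the size of the active set. Treat the formula, the set size, and the numbers as assumptions of this illustration; the live runtime parameters are authoritative.

```rust
/// Commonly cited super-linear slash fraction for simultaneous offenses:
/// min((3k / n)^2, 1), with k offenders out of n active validators.
/// Treat this as an illustrative assumption; the live runtime parameters
/// are authoritative.
fn slash_fraction(offenders: u32, active_set: u32) -> f64 {
    let x = 3.0 * offenders as f64 / active_set as f64;
    (x * x).min(1.0)
}

fn main() {
    let n = 297; // size of an active set, purely for illustration
    for k in [1, 5, 10, 30, 100] {
        println!(
            "{k:>3} offender(s) out of {n}: {:.2}% of stake slashed",
            100.0 * slash_fraction(k, n)
        );
    }
}
```

With these assumed numbers, a single isolated offender loses a negligible fraction of stake, while a hundred simultaneous offenders would be slashed entirely – which is precisely what makes backing many nodes on shared infrastructure riskier.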

It’s easy to imagine many more trade-offs between the various available criteria. Just as a single entity cannot set the price of an asset in a market, it’s left to the invisible hand, guided by the nominators, to aggregate information and find the optimal set of active validators. It is also important that nominators frequently check on and update their nominations, because many crucial criteria of validators are prone to change, which could lead to a mismatch with the original evaluation. Moreover, new validators may become available that better align with the nominator’s preferences. A crucial part of the security of the network depends on the active participation of nominators.

Assisting Tools

A crucial insight from behavioral economics is that the cognitive resources of humans are limited[3], which often leads to deviations from the behavior predicted by rational-choice theory. The ever-growing number of active and waiting validators imposes a large cognitive load on nominators, who must gather information, observe the history of validators, and evaluate trade-offs to eventually make an informed choice.

Additionally, behavioral biases often lead humans to act contrary to their economic incentives and thus result in suboptimal nomination behavior. For example, humans are prone to making errors in intertemporal choices[4], which could lead to procrastinating[5] over the research necessary in the short term, even though it would prevent potential network issues in the long term. Another problem might be caused by herding behavior[6], whereby nominators blindly follow a larger group of other nominators without informing themselves. Even a small initial fraction of nominators who behave suboptimally could lead to a cascading effect[7], with other nominators following them.

Therefore, tools are necessary to aid the decision process and make it less costly and more enjoyable. These tools must guide while respecting that there is no single best recommendation and that nominators need to be able to express their preferences.

A prominent platform for selecting validators is polkadot.js/apps, which lists all active and waiting validators under the “targets” tab and provides important individual on-chain metrics. Additionally, individual validators can be analyzed with respect to their historic behavior by inserting their addresses in the “validator stats” tab.

Another platform used to guide the selection process is the Validator Resource Center, which is currently running in a beta version for the Kusama network. This tool includes more sophisticated metric calculations, offers the option to import the current on-chain nominations, and analyzes critical on-chain events, such as an increase in commission or a decrease in self-stake for nominated validators. A principle of the platform is to make the selection process as easy as possible while providing maximal freedom of choice to nominators. It also offers additional filtering techniques based on the nominator’s preferences.

Helping nominators make informed decisions makes the whole process more enjoyable and less costly, and reduces the likelihood that they will neglect their duty to actively monitor their nominations and curate the active set.

Conclusion

The security of the network relies on the active participation of nominators in researching and selecting suitable validators and updating their selection frequently. In aggregation, this leads to a well-maintained active set of validators and a healthy network. This task is generously rewarded by the network in the form of staking rewards. To guide this process, nominators should have access to tools that help with the decision process and that are designed to value the individual preferences of the nominators.


  1. https://en.wikipedia.org/wiki/Proportional_representation ↩︎

  2. https://en.wikipedia.org/wiki/Cheap_talk ↩︎

  3. Simon H.A. (1990) Bounded Rationality. In: Eatwell J., Milgate M., Newman P. (eds) Utility and Probability. The New Palgrave. Palgrave Macmillan, London. ↩︎

  4. Richard Thaler (1981) Some empirical evidence on dynamic inconsistency. In: Economics Letters vol 8, issue 3, pp 201-207. ↩︎

  5. https://en.wikipedia.org/wiki/Present_bias#Procrastination ↩︎

  6. Ramsey M. Raafat, Nick Chater and Chris Frith (2009) Herding in humans. In: Trends in Cognitive Sciences vol 13, no 10, pp 420-428. ↩︎

  7. Lisa R. Anderson and Charles A. Holt (1997) Information Cascades in the Laboratory. In: The American Economic Review vol 87, no 5, pp 847-862. ↩︎
