No. Realistically, the only condition that can cause a slashing event is if you run your validator's keys on two nodes at the same time (such as a failover / redundancy setup, where your backup node accidentally turns on while your main node is still running). Don't let this happen, and you won't get slashed. Slashing cannot occur from being offline for maintenance.
Yes, but with small penalties. See "I'm worried about downtime".
If your validator proposes a block, then some of those rewards are immediately available to you in the form of priority fees and MEV (if you are using an MEV-Boost relay).
To withdraw your full validator balance (not just the skimmed rewards), you will need to exit your validator and wait in the withdrawal queue. This process is different for each client; details for each can be found here: How to exit a validator.
Ethereum Foundation Withdrawals FAQ: https://notes.ethereum.org/@launchpad/withdrawals-faq
Validator withdrawals are processed in a round-robin fashion. Starting from validator 0 at the Capella upgrade, with each block, the consensus layer sweeps through the validator set in validator index order until it has found 16 withdrawals to include. The next block proposer will pick up where the previous proposer left off in the validator set and scan for 16 further withdrawals, and so on. If every validator were eligible for a withdrawal, and if the beacon chain is performing perfectly, then a full sweep of 980,000 validators would take 8.5 days ("sweep delay"). For more info, see Ben Edgington's eth2book. The queue and estimated withdrawal time can be seen on validatorqueue.com
No. You can generate your exit message and submit it using someone else's Beacon Chain client.
Beaconcha.in has built a resource exactly for this: https://beaconcha.in/tools/broadcast
As a validator you are rewarded for proposing and attesting to blocks that are included in the chain. On the other hand, you can be penalized for being offline, and for behaving maliciously, for example by attesting to invalid or contradicting blocks.
The key concept is the following:
Rewards are given for actions that help the network reach consensus.
Minor penalties are given for inadvertent actions (or inactions) that hinder consensus.
And major penalties (or slashings) are given for malicious actions.
In other words, you maximize your rewards by providing the greatest benefit to the network as a whole.
Disk IOPS are very important if you want your node to operate to its true potential.
Low disk IOPS can cause many different issues such as missed attestations, missed block proposals, failure to get in sync with the network as well as failure to keep up with the network if already in sync.
If you are using Ubuntu, IOPS can be measured using this software. Before running tests, make sure your node services are stopped, otherwise they will interfere with the results.
1) Quite a lot! A former Teku developer wrote an article in which they ran 5,000 validators on a single machine. There are a few other factors you will need to take into account, such as CPU, RAM and bandwidth considerations.
If you already have a validator client, a consensus client and an execution client, then it is as easy as importing the new keys into the validator client. It will pick up and start performing the validator duties for the new validator(s) right away.
2) No, you do not need multiple consensus clients running to run multiple validators. A single consensus client can run multiple validators.
Beacon nodes pick the highest reward (local or remote) if it is above the `min-bid` value. If the highest reward (local or remote) is below the `min-bid` value, then the local block will be selected.
There are also circuit breakers in beacon nodes that select a local payload when certain network conditions are met, such as many recently missed slots.
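As a rough sketch of that decision (illustrative names only, not any client's actual configuration API):

```python
def select_block_source(local_reward: float, builder_bid: float,
                        min_bid: float, circuit_breaker_tripped: bool) -> str:
    """Illustrative local-vs-builder payload choice for a beacon node."""
    if circuit_breaker_tripped:                   # e.g. many recent missed slots
        return "local"
    if max(local_reward, builder_bid) < min_bid:  # best option is below min-bid
        return "local"
    return "remote" if builder_bid > local_reward else "local"
```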
Pre-signed exit messages only remain valid for two hard forks. After that, you will need to generate new ones.
This comes from https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/beacon-chain.md#get_domain and specifically the line:
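The line in question is the fork-version selection inside `get_domain` (reproduced here for convenience; see the linked spec for the full function):

```python
# Operations signed before the last fork fall into the previous_version
# bucket, producing a different domain than the current fork's.
fork_version = state.fork.previous_version if epoch < state.fork.epoch else state.fork.current_version
```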
An exit message signed at any epoch before the last hard fork is lumped into a "previous version" bucket and given that fork version. That means that if your exit message was signed two fork versions ago, the verification function has the wrong fork version, hence the wrong domain, hence the wrong signing root, hence the wrong signature, hence it fails to verify.
Each key-pair associated with a validator requires locking 32 ETH to be activated, which represents your initial balance as well as your initial and maximum voting power for any validator.
The best thing to do is to exit your validator as soon as it is practical to do so. Even in the case of an encrypted machine that is physically stolen where you can safely assume the thief won't ever be able to gain access, it is simply not worth the thought or the risk of being slashed at some point in the future.
If your validator keys are ever compromised, or you even suspect them of being compromised, exiting the validator and spinning up new ones is the best course of action you can take to protect yourself.
Once your ETH is secured, further investigations and actions can be taken to prevent or mitigate this from occurring again.
Staking on Ethereum gives you many options to participate. This can be overwhelming - no doubt. We all have been there!
Take it step by step. First learn about the options you have and choose what you are most comfortable with. There is no need to rush things and risk your precious sleep while doing so.
If you choose "Solo Home Staking" and want to run your own validator, decide between the different hardware options (e.g. an Intel NUC) and follow a staking guide on testnet first. Search for Goerli Testnet Staking Guides. Take notes, find out what happens when you disconnect the power cable of your validator, how to update, etc. All in all - get confident with your node before staking on Ethereum Mainnet.
And - You don't have to face problems on your own.
Feel free to ask us any question and join our community on Discord.
Validators that participate in securing the beacon chain and execute "duties" get rewarded for this by new issuance of ETH. In addition, validators receive priority fees paid by users, and optionally MEV, Maximal Extractable Value.
You can view a validator's reward for proposed blocks by looking at the fee recipient address on etherscan.io↗ under Produced Blocks.
See a detailed explanation here: How does my validator earn ETH?
Yes, the deposit/source address is shown on the validator. It’s not used for anything in the protocol though. The consensus layer actually has no record of which address a validator's deposit was made from but it is in the history of the execution layer as all transactions are.
The deposit/source address can be seen on beaconcha.in under Deposits -> Ethereum Deposits -> From Address.
No. If you miss your block proposal, the slot that should have contained your block will be empty. Other than the lost rewards from missing the block proposal, there are no penalties or slashing that occurs from a missed block proposal.
Missing some attestations is completely normal and extremely low-cost. The penalty for missing an attestation is exactly the same as the reward for a successful one. So, with around 240 attestations per day per validator, missing one or two is still a successful attestation rate of over 99%!
No. There is no advantage to having more than 32 ETH staked.
Depositing more than 32 ETH to a single set of keys does not increase rewards potential, nor does accumulating rewards above 32 ETH, as each validator is limited to an effective balance of 32 ETH. This means that staking is done in 32 ETH increments, each with its own set of keys and balance.
Setting a withdrawal address when creating your validator keys is an important step when setting up your validator. Until a withdrawal address is set, you will not be able to claim your beacon chain rewards or withdraw your ETH.
The Staking Deposit CLI can set a withdrawal address during deposit JSON creation (a 0x01 address). If a user opts not to do this - usually simply by omission - then it sets the hash of the withdrawal public key instead (a 0x00 address).
And that's it. Once your validator uses 0x01 credentials, the withdrawal address is fixed and can't be changed. In the current design, skimming is automatic, and so are full withdrawals: a full withdrawal simply happens after the exit is completed.
A tool to export the withdrawal key will likely not be created, and it’d also not be very useful. You need the withdrawal key at most twice:
Once to generate the signing key (only if no withdrawal address was set at that time).
Once more to sign a message to set one.
In both cases the key can be generated inside the CLI tool, be used for its purpose, and then be discarded again without ever being written to disk.
Calculating staking taxes can be both difficult and tiresome but it is an important thing to do. Luckily an amazing tool exists to simplify this process.
This will give you a rundown of the rewards your validators have accrued. Always double check with a local tax agent before filing to ensure they have been calculated in an appropriate manner for the jurisdiction that you are a tax resident of.
A validator is a virtual entity that lives on the Beacon Chain, represented by a balance, public key, and other properties, and participates in consensus of the Ethereum network.
If there's a catastrophic failure of your validator and you lose your validator keys, don't panic! These can be easily recovered as long as you still have your validator seed phrase / mnemonic. Simply follow the same steps you used when you first generated your validator keys, and install them on a new validator machine.
Be 100% certain that any previous machines will not come back online as this will lead to a slashing event.
If you lose your seed phrase, the one used to generate the validator keys, then unfortunately your staked ETH is most likely unrecoverable.
However, if you had set a withdrawal address, then the validator keys are enough to sign a voluntary-exit, which causes a withdrawal to that address. There is also a special case if you have a pre-signed voluntary-exit message, but that's likely only used by staking services and only noted here for completeness.
In the event that you can't recover your validator or you decide you want to stop staking, you have the option to exit your validator. Exiting a validator is a one way process. For details on how to exit your validator, check out our guide.
A node operator is the human being who makes sure the client software is running appropriately, maintaining hardware as needed.
A validator client is the software that acts on behalf of the validator by holding and using its private key to make attestations about the state of the chain. A single validator client can hold many key pairs, controlling many validators.
You can think of the deposit contract as a transfer of funds from an Ethereum account to a proof-of-stake validator account. It specifies who is staking, who is validating, how much is being staked, and who can withdraw the funds.
Setting up your own validator for "Solo Home Staking" is not difficult.
You can follow step-by-step staking guides, which don't take much time at all. See also time commitment.
There are pre-configured hardware options like Dappnode↗ or Avado↗ which can make things easier and eliminate the need to interact with the command line interface or Linux in general. You can also install the open-source Dappnode software ↗ on your own hardware to have a more intuitive staking experience.
The majority of the time commitment for staking is the initial learning and setup. It will probably take a day or two of tinkering to get it all figured out (maybe more, and that's okay!). Once you get going you're looking at updating once a month or so (ten minutes) and responding to outages, which are rare.
The most common cause of this issue is when node operators have failed to upgrade their node software prior to the network upgrade taking place. This can cause the client software to follow a forked chain and require a resync to get the node operating again on the correct chain.
Ensure that you are running a client version that is supported post network upgrade. Please check the GitHub release notes for the respective clients you are using to verify which version you should be running.
If you were running an older version post network upgrade, then most likely your local database will need to be deleted and resynced. This is more commonly true for execution clients than it is for consensus clients.
If you download the beaconcha.in app or sign up for an account, you can configure email alerts for new client releases so you won't run into this issue.
Two other methods are to follow the projects on Github so you can be emailed when a new release is published, or to join the client team Discord servers where new software releases are announced.
The answer to this question very much depends on how much ETH you have at your disposal. You should certainly top up if your balance is close to 16 ETH. This is to ensure you don’t get exited out of the validator set (which automatically happens if your balance falls below 16 ETH). At the other end of the spectrum, if your balance is closer to 31 ETH, it’s probably not worth adding the extra ETH required to get back to 32.
There are many great resources out there to help you monitor your setup, a few are linked below.
All of these services will help you see things such as attestation performance, block proposals and total ETH accrued from staking.
It is critical that you set your validator withdrawal address to an address that you have created yourself and have full control over.
This is typically defined as: A wallet address where you have the private keys and the ability to both send and receive transactions.
If you do not have the private key for the wallet (for example, an address on an exchange), do not set that as your validator withdrawal address, as there is no guarantee that the third party will give you your rewards, or will even exist in the near future to keep giving you your rewards.
Always remember: Not your keys, not your coins.
Advanced setups such as setting the withdrawal address to a multisig are also supported, but only recommended for advanced users.
You may notice that your block proposals and ETH withdrawals are not appearing in your wallet transactions. Do not panic, this is both expected and normal. They will not appear there as both of these are not transactions.
If you enter your wallet address into a website such as https://etherscan.io/ you will see a "Produced Blocks" tab and a "Withdrawals" tab, where further detailed information can be found. Please note that these tabs will only appear if that wallet has had a block proposal or a validator withdrawal occur.
Whichever wallet you use will show your up-to-date ETH balance even if the transaction list is empty.
A not so frequently asked question but it has come up a few times! Some smart plugs only switch themselves back on after power failure when internet access has been restored.
However, if your router or other critical network equipment is plugged into a smart plug with this "feature", it will not power back on until the internet connection is restored, but the internet can't be restored until the router powers back on.
If you use a smart plug, it may be worthwhile to run a few tests to see if yours does this, otherwise you may find this out the hard way while attending Devcon on the other side of the world where it can only be fixed with manual action on-site...
As a validator, you'll need to have funds at stake so you can be penalized for behaving dishonestly. In other words, to keep you honest, your actions need to have financial consequences.
This question is commonly asked by Linux users - a detailed answer can be found here.
Each 32 ETH deposit activates one set of validator keys. These keys are used to sign off on the state of the network. The lower the ETH requirement, the more resulting signatures must be saved by the network. 32 ETH was chosen as a balance between enabling as many people as possible to stake without inhibiting decentralization by bloating the size of each block with signatures.
Limiting the maximum stake to 32 ETH per validator encourages decentralization of power as it prevents any single validator from having an excessively large vote on the state of the chain. It also limits the amount of ETH that can be exited from staking at any given time, as the number of validators that can exit in a given time period is limited. This helps protect the network against certain attacks.
Although a validator's vote is weighted by the amount it has at stake, each validator's voting weight starts at, and is capped at, 32 ETH. It is possible to drop below this with poor node performance, but it is not possible to rise above it. Any balance above 32 ETH on a single validator will simply be withdrawn to your withdrawal address.
Never blindly trust any links when depositing ETH into the staking contract.
Always verify the deposit contract address from MULTIPLE sources:
Once you have your validator machine setup running both an Execution Layer client and Consensus Layer client you are ready to start the deposit process.
Staking deposits are processed through the ethereum.org launchpad:
https://launchpad.ethereum.org/en/overview ↗
The following screenshots show the deposit process.
Download CLI app
Download Key Gen GUI app
Build from source
No ETH is required to run a full node! 🥳
🍎 Contribute to the health of the Ethereum network.
Check all transactions on the network are valid for yourself.
Don't trust, verify.
📡 Broadcast your transactions.
Remove reliance on 3rd parties and intermediaries such as Infura or Alchemy.
Increase censorship resistance through decentralization.
Running a full node has very similar hardware requirements to running a validating node. The only difference is that validating nodes also run a validator client.
If you follow any of the staking guides and complete all the steps apart from the deposit process (which requires 32 ETH) then you will be running a full node!
"Welcoming First, Knowledgeable Second"
An unbiased, open-source collection of useful information and concepts about Ethereum staking. If you're looking to get started staking on Ethereum or simply to learn more about how the network is secured through validators, you've come to the right place!
Yes! This is a living documentation site, meaning we need the community's help to maintain and update the content. Any contribution, from writing whole sections and translations to correcting spelling and grammar mistakes will be greatly appreciated.
Use this GitBook invite link to suggest edits and new content directly on GitBook:
Supported by a GitBook Community License ♥️
Slashing is a scary word. But what exactly is it, how can it happen and how worried should you be?
TLDR: Realistically, the only condition that can cause a slashing event is if you run your validator's keys on two nodes at the same time (such as a failover / redundancy setup, where your backup node accidentally turns on while your main node is still running). Don't let this happen, and you won't get slashed.
Slashing cannot occur from being offline for maintenance.
Slashing is a term used to describe the response of the Ethereum network to a validator acting against the rules of the network. Validators perform a number of duties (e.g. attestations and block proposals).
If someone wanted to attack the Ethereum network they could propose multiple blocks or attest to multiple conflicting blocks. To disincentivize attacks on the network, in a Proof of Stake (PoS) system validators have something at stake, which is currently 32 ETH per validator. When a validator breaks the rules of the network, two things will happen:
The validator has some amount of ETH taken from that initial 32 ETH staked balance.
The validator is force exited and removed from the validator set.
The amount of ETH taken as a penalty varies with the state of the network. If a small number of validators are slashed simultaneously, then a rough estimate of the slashing penalty is 1 or 2 ETH. In an incredibly rare Black Swan event, when a large portion of the network is simultaneously offline or breaking the rules (e.g. in a coordinated attack), the slashing penalty can be up to and including 100% of the stake.
When your validator is force exited and the stake is withdrawn, you are able to re-stake your remaining ETH (if you still have the 32 required) after going through both the deposit process and the activation queue again.
Yes! Withdrawals are now enabled on Ethereum.
You can earn GitPOAPs by contributing directly to the EthStaker Knowledge Base and by asking a question that leads to content being created.
To suggest changes or add new content please visit our GitBook, or if you have any questions please join our Discord.
Staking is the act of depositing 32 ETH to activate validator software. As a validator you’ll be responsible for storing data, processing transactions, and adding new blocks to the blockchain. This will keep Ethereum secure for everyone and earn you new ETH in the process.
Rewards are given for actions that help the network reach a consensus. You'll get rewards for running software that properly batches transactions into new blocks and checks the work of other validators because that's what keeps the chain running securely.
The network gets stronger against attacks as more ETH is staked, as it then requires more ETH to control a majority of the network. To become a threat, you would need to hold the majority of validators, which means you'd need to control the majority of ETH in the system–that's a lot!
Stakers don't need energy-intensive computers to participate in a Proof of Stake (PoS) system - just a home computer or (in the future) a smartphone. This will make Ethereum better for the environment.
Most impactful
Full control
Full rewards
Trustless
Solo staking on Ethereum is the gold standard for staking. It provides full participation rewards, improves the decentralization of the network, and never requires trusting anyone else with your funds.
Those considering solo staking should have at least 32 ETH and a dedicated computer connected to the internet ~24/7. Some technical know-how is helpful, but easy-to-use tools now exist to help simplify this process.
Your 32 ETH
Your validator keys
Entrusted node operation
If you don't want or don't feel comfortable dealing with hardware but still want to stake your 32 ETH, staking-as-a-service options allow you to delegate the hard part while you earn native block rewards.
These options usually walk you through creating a set of validator credentials, uploading your signing keys to them, and depositing your 32 ETH. This allows the service to validate on your behalf.
This method of staking requires a certain level of trust in the provider. To limit counter-party risk, the keys to withdrawing your ETH are usually kept in your possession.
Stake any amount
Earn rewards
Keep it simple
Popular
Several pooling solutions now exist to assist users who do not have or feel comfortable staking 32 ETH.
Many of these options include what is known as 'liquid staking' which involves an ERC-20 liquidity token that represents your staked ETH.
Liquid staking enables easy and anytime exiting and makes staking as simple as a token swap. This option also allows users to hold custody of their assets in their own Ethereum wallet.
Pooled staking is not native to the Ethereum network. Third parties are building these solutions, and they carry their own risks.
Least impactful
Highest trust assumptions
Many centralized exchanges provide staking services if you are not yet comfortable holding ETH in your own wallet. They can be a fallback to allow you to earn some yield on your ETH holdings with minimal oversight or effort.
The trade-off here is that centralized providers consolidate large pools of ETH to run large numbers of validators. This can be dangerous for the network and its users as it creates a large centralized target and point of failure, making the network more vulnerable to attack or bugs.
If you don't feel comfortable holding your own keys, that's okay. These options are here for you. In the meantime, consider checking out the ethereum.org wallets page, where you can get started learning how to take true ownership of your funds. When you're ready, come back and level up your staking game by trying one of the self-custody pooled staking services offered.
As you may have noticed, there are many ways to participate in Ethereum staking. These paths target a wide range of users and ultimately are each unique and vary in terms of risks, rewards, and trust assumptions. Some are more decentralized, battle-tested, and/or risky than others. We provide some information on popular projects in the space but always do your own research before sending ETH anywhere.
You shouldn't worry about downtime, but understanding what happens when your validator is offline can help you to gain confidence as a solo home staker.
The Ethereum network is designed with solo home stakers in mind. This means that the protocol is very forgiving if a validator has downtime or is offline.
If a validator is offline and not executing its duties, it will be penalized at a rate slightly lower than the rewards for the same period of time.
You start your solo staking home validator with 32 ETH.
Everything is going well and after a few months, your validator balance is 32.5 ETH.
Then... your validator goes offline! 🚨
If this happens for real, check out the "My validator is offline! What do I do?" guide.
As soon as your validator is no longer participating in the network it will start leaking ETH.
When you are offline, the penalty for each missed attestation might be around -0.000011 ETH (slightly less in magnitude than the reward for a successful attestation).
For a normal successful attestation, you might be rewarded with 0.000014 ETH.
If you have a catastrophic failure and you are not able to get your validator back online for 5 days, then it will take about 5 days of being back online to get back to the same balance as when the failure occurred.
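To make the recovery math concrete, here is a minimal sketch using the approximate per-attestation figures from this page (actual values vary with the size of the validator set):

```python
reward_per_attestation = 0.000014  # ETH per successful attestation (approximate)
penalty_per_missed     = 0.000011  # ETH per missed attestation (approximate)
attestations_per_day   = 225       # one per epoch; 32 slots * 12 s = 384 s per epoch

days_offline = 5
leaked = days_offline * attestations_per_day * penalty_per_missed
days_to_recover = leaked / (attestations_per_day * reward_per_attestation)
print(f"Leaked ~{leaked:.4f} ETH; ~{days_to_recover:.1f} days online to earn it back")
# -> leaked ~0.0124 ETH, recovered in roughly 4 days: slightly less than the
#    downtime, because the penalty rate is a bit below the reward rate
```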
If you are offline, you will not be able to produce a block. But how often do block proposals occur for a single validator? Currently, on average, a validator will propose a block every 2-3 months.
So, in this example scenario, even if you are offline for 5 days, there's only a small chance you would miss a block proposal. But what happens if you miss a block proposal?
If you miss your block proposal, the slot that should have contained your block will be empty. Other than the lost rewards from missing the block proposal, there are no penalties or slashing that occurs from a missed block proposal.
No. Realistically, the only condition that can cause a slashing event is if you run your validator's keys on two nodes at the same time (such as a failover / redundancy setup, where your backup node accidentally turns on while your main node is still running). Don't let this happen, and you won't get slashed. Slashing cannot occur from being offline for maintenance.
If you can't recover your validator or decide you want to stop staking, you have the option to exit your validator from the network. Exiting a validator is a one-way process. For details on how to exit your validator, check out our guide.
Being a solo validator is an important responsibility to ensure the long-term health of the Ethereum network. At EthStaker our goal is to help as many people as possible #stakefromhome ↗ and this information is provided to show that downtime and being offline is not something to be overly worried about.
How often a validator receives block proposals, and is selected to be part of a sync committee, is entirely random. As long as you do not see missed proposals, there is absolutely nothing you can do to increase the frequency.
True randomness can feel quite odd. A validator not getting a proposal for 9 months is perfectly normal. A validator getting two proposals in a week is entirely normal. Over a large enough set, this evens out, but for a handful of validators, randomness can indeed feel unsettling.
To see the latest statistics on block proposal frequency, take a look at Lucky Staker ↗.
The tool ethdo ↗ by attestant.io ↗ can be used to query the current average frequency of proposals and sync committees.
As of early 2024, it is roughly one proposal every 4 months and sync committee participation every 5 years.
No, it's random. There is nothing you can do to increase your chances at proposals, short of running more validators.
Missing some attestations is completely normal and extremely low-cost. The penalty for missing an attestation is exactly the same as the reward for a successful one. So, with around 240 attestations per day per validator, missing one or two is still a successful attestation rate of over 99%!
Missed attestations and low validator effectiveness have two categories of causes: some are under your control as a staker, and some are outside of your control.
Even with a perfect setup, you might occasionally miss an attestation or vote incorrectly during one, lowering your effectiveness. Causes that are outside of your control are often related to network propagation, or to some of your peers being late in performing their own duties.
To go on a deep dive and learn everything about the attestation duty, timings, effectiveness and network propagation, check out these great articles.
Understanding Attestation Misses ↗ by Adrian Sutton
Exploring ETH2: Attestation Inclusion ↗ by Adrian Sutton
Defining Attestation Effectiveness ↗ by Jim McDonald
As a staker, you cannot do much about the causes that are outside of your control. What you can do is work on elements of your setup that are under your control to maximize your rewards. Even if you have a setup that was performing well before the merge, it's possible that with the additional work being introduced, some overlooked part of your setup might be the cause of additional misses or lower effectiveness since the merge happened. That's why you should double check all these items.
Make sure your clients are up-to-date. Client updates often include optimizations and improvements that will help perform your duties on time.
Make sure your machine consistently has enough resources (CPU, RAM, disk, etc). Using a dedicated machine can help. If your clients are starved of any of these resources, it will likely be a cause for more misses and lower effectiveness.
Make sure your time is properly in sync. The beacon chain protocol is quite time sensitive. chrony is a good tool to improve your time sync. On Ubuntu or Debian derivatives, installing chrony is often as easy as `sudo apt install chrony`. On Windows, you can use these instructions ↗ to improve your time sync.
Make sure you have good internet latency, bandwidth and quality. For home validators, it's unrealistic to ask for a dedicated ISP or internet connection for your validator, but make sure your other network uses don't interfere too much with your validator. In case of doubt, see if you can get a better plan from your provider or check if there is an alternative provider in your area that can improve your internet.
Make sure you consistently have enough peers. Monitoring your clients' peer count is not a bad idea if you have the technical ability.
Make sure you have properly configured open ports that permit incoming connections. Not only can this improve your setup's networking health and your peer count, it will also improve the Ethereum network's health as a whole. To test if your ports are open, you can use the StakeHouse open ports checker. Calling `curl https://eth2-client-port-checker.vercel.app/api/checker?ports=30303,9000` should return a result that includes 30303 and 9000 in the `open_ports` field if those ports are open from the Internet. 30303 is the default P2P port for Geth and 9000 is the default P2P port for many consensus clients. Adjust these values if you use custom ports or clients with different defaults; consult your client documentation for details.
Once you have those in place, there is little more you can do to help. There might be some marginal benefit in connecting with more peers at the cost of higher resource usage, especially bandwidth. Under normal circumstances, the default peer count from your clients should be good. Monitoring internet quality with tools like those from pingman ↗ can help pinpoint the cause of some of these missed attestations if they are network related, but it will likely still be out of your control.
The penalty for missing attestations is exactly the same as the reward for a successful one. Any downtime penalty will be recovered in the same amount of uptime.
Proposing a block is rare. Depending on the size of the validator pool, a single validator will on average only propose a block every few months. If you are unlucky enough to be offline at the time that your validator is asked to propose a block, that's also ok.
The Ethereum network is robust and designed to handle these situations. If you miss your block proposal, the slot that should have contained your block will be empty. Other than the lost rewards from missing the block proposal, there are no penalties or slashing that occurs from a missed block proposal.
If you have a large number of validators or want to minimize your downtime, consider running a second consensus and execution client pair and adding this endpoint to your already running validator client. This will ensure that if one of your client pairs goes down, your validator client will automatically fallback to the other one.
Doing this is supported by all validator client software, as each client has slashing protection to ensure that you do not attest twice, so you will not get slashed.
Do not, however, configure a second validator service. By doing so, your risk of slashing goes up massively. You should only ever have your validator keys in one place at any one time. Missing out on a few days of rewards may seem bad at the time, but getting slashed and bleeding ETH while you wait out the 5-week exit period post-slashing is a lot worse!
Having a second consensus/execution client pair is great, but having a second validator client is not.
TODO
A full node is one that runs both an Execution Client and a Consensus Client.
Here is a simple breakdown of what is required to run a full Ethereum node:
A stable Internet connection. The longer you stay online, the better your rewards. A spotty Internet connection will hurt your returns.
At least 10Mbps of bandwidth both up and down. A full node usually takes around 8Mbps to 10Mbps up & down of network traffic, depending on your configuration. You'll also need to take into account other traffic happening on this network (downloads, video calls, streaming, gaming, etc).
No data cap is imposed by your ISP. Running a full node will take a lot of data - as much as over 2 TB per month of on-chain data alone. This can be mitigated somewhat with a few settings tweaks to the ETH clients, but as a rule of thumb, don't run a full node if your Internet plan comes with a monthly data cap.
Stable electricity. For the same reason as needing a stable Internet connection, you also want to have reliable power. This can be mitigated with a large UPS (backup battery) to deal with short blackouts.
A computer with sufficient specs. This is pretty flexible because it really depends on what Execution and Consensus client you use, and what settings you configure them with. The computer can be a local machine, or it can be a Virtual Private Server (VPS) hosted in the cloud. Read below for some more information on those two options, and how to decide which is best for you.
The following are considered minimum requirements to run a full node:
Linux or macOS Operating System
Quad-core CPU (or dual-core hyperthreaded); both `x64` and `arm64` are supported
32 GB of RAM (preferably DDR4)
4 TB of free SSD Disk Space
A spinning platter hard drive is not fast enough to handle the constant random reads and writes that blockchain activity requires. You MUST use a solid-state drive. A list of tested SSDs can be found here.
Recommendations:
The ideal setup, and best practice is to have a dedicated computer for staking. Try to limit additional processes running on your staking box. Especially if it is something that is connecting to the outside world. Every extra process and every file being downloaded is another opportunity for an exploit.
Use Linux, it's easy! For the foreseeable future, Linux will receive better support from both the client teams and the community at large. If you choose Linux you will have access to more guides and more technical support from the community at large. Linux is lightweight, stable, secure, and it doesn't force you to restart for updates every other day.
Use a minority client! It is both good for the health of Ethereum and good for the health of your money.
A battery backup (UPS) is strongly recommended! Plug your modem and router into it also. Many ISPs have generators to support emergency services communications, meaning the internet continues to work during a power outage as long as your equipment is powered. Your ISP may be the same. Aside from blackouts, not having your computer shut down on every momentary power flicker is a nice quality-of-life improvement when staking from home.
Everything here applies to both solo staking and being a minipool node operator with Rocket Pool ↗.
Take a look at the hardware examples page for detailed explanations of real solo home staking setups.
Price: Lower cost.
Performance: Running an execution and consensus node on a Raspberry Pi is possible, specifically with Nimbus, which was designed to run on devices like a Raspberry Pi. Being able to run Ethereum nodes on low-powered hardware is great for decentralization and an honorable goal. However, running a validator is different. I maintain that the Pi's lack of processing power and memory is a risk in some situations, such as a period with no finalization. The reward of saving a few hundred dollars versus more powerful hardware does not come close to outweighing the risk of extended downtime due to a lack of processing power or memory.
Power Usage: Approximately 8 watts.
Price: Lower cost.
CPU: For staking on Mainnet, a CPU that scores at least 6000 on Passmark is strongly recommended. For initial sync times, single-thread performance matters more than having many cores.
Memory: Unless you go with an extremely bare-bones OS, 16GB is the minimum amount of RAM recommended for Mainnet.
Storage: An SSD is required. You do not need to worry about SATA vs NVMe; either will be fast enough. Buying one with a high terabytes-written (TBW) spec will help with longevity. A 2TB or bigger drive is recommended.
Caveats: Stability and uptime are essential to maximize your profits. If you are using an older desktop consider replacing the PSU and the fans. Buying a titanium or platinum-rated PSU will help save on the monthly power bill as well.
If you are planning on staking with an older laptop, consider that they have reduced capacity to deal with heat due to their form factor, and in rare cases, running while plugged in 24/7 can cause issues with the battery. If you do choose to stake with a laptop, try using one that far exceeds the CPU requirements as running a laptop at nearly full load 24/7 is not advisable. You will probably be fine, but generally speaking, laptops are not designed with that in mind.
If you are buying brand new, there is not much value in paying the price premium for a portable form factor, screen, keyboard, and trackpad. Once you get your staking machine set up, you do not need any of these features. You can just remote into the staking machine from your normal computer. The low profile form factor will actually be a downside when taking thermal performance into account. Laptops typically do not include an ethernet port now, which means you will be relying on WiFi. WiFi is very reliable now, but you can't beat the simplicity and reliability of a cable.
This is likely the simplest option and it will be easy to upgrade and service in the future.
Price: Medium price.
Power Usage: Probably around 30 watts.
This is essentially the same as using a prebuilt desktop. However, building your own gives you the option of choosing a case you like the look of, and buying higher-quality parts. For those of you who have never built a computer, it is easier than Lego because they only go together one way. Also, you won’t get any weird proprietary parts that will be difficult to replace should they ever fail. Unfortunately with prebuilt computers, concessions are sometimes made with components like the PSU to assuage the accountants and boost margins. Style points for adding a RAID card!
Price: Medium price.
Power usage: 20-25ish watts.
NUCs are super cute, and their small form factor gives them a very high significant-other approval factor. Unfortunately that does come with a bit of a price premium and slightly less performance than the larger desktop option. However, these are minor drawbacks. This is probably the best option for most people.
Price: Higher price.
Power Usage: It's bad. A modern server runs around 100 watts. If you get an older one, expect to be up around 150 watts.
Enterprise servers are jam packed with features, and are specifically designed to do exactly what a validator is trying to do. Run 24/7/365. They have redundant power supplies in case one breaks, they mostly have 2 CPUs, so in the unlikely event of one going bad, you can pop it out and restart with just one. They have built in RAID cards so you can have redundant storage. They have hot swappable drive trays, so if one of your drives goes bad, you don't even need to shut down. All of the components are high quality and built to last. You also get monitoring and maintenance tools that are not included in consumer gear like iDRAC and iLo. I would definitely caution that while servers are great for staking, you probably want to be the type of person who is willing to go into the weeds a bit and geek out. There is some research required to know what you are looking for before you go out and buy a server and there is a possibility you run into a weird technical issue that you will have to troubleshoot.
Price: Medium price
Performance: The DAppNodeXtreme is a good option if you are looking for a custom built OS with an easy UX. A DAppNode box is just a NUC pre-configured with their software. If you are confident enough to install an OS by yourself, you can save a bit of money by purchasing a normal NUC and installing DAppNode yourself. You can also install the DAppNode OS on any computer. If you don’t want to mess around with installing operating systems and want an easy UX, buying a DAppNode box is a convenient and simple way to get started.
Power Usage: 20-25ish watts.
Avado is an easy home-staking solution for people with limited technical knowledge or limited time. The Avado boxes are pre-configured computers with a user-friendly UI that allows you to use and manage the device from anywhere in the world.
Using an AVADO is convenient, secure and true to the spirit of decentralization.
Price: Medium price.
Performance: Definitely upgrade to 16GB of memory. The CPU will be more than fast enough, with a 15,108 Passmark score. Make sure you have a plan to get up to 2TB or more of storage; the internal memory and storage are integrated into the motherboard and require soldering and advanced technical knowledge to upgrade.
Power Usage: Slightly less than the NUC, but not enough to make any real difference.
It's not possible to run Linux on the new ARM architecture this uses. It is more expensive than the NUC and also falls short on upgradeability and ease of service, but for the macOS fans out there this is a great option that will work very well.
Price: Anywhere from $20 - $50 per month.
Performance: You can buy as much as you can afford.
If you live somewhere that is prone to natural disaster or has an unstable power grid or internet connection but still want to solo stake, this is a good option. You can also consider using a DVT protocol.
If you do have stable power and internet, running your own hardware will be a cheaper/more profitable solution long term. You need to evaluate the pros/cons of this for your own situation. Remember that if one of the VPS providers goes down, it will mean all of the people using that VPS service to host will also go down, and the inactivity penalties will be much larger than if you have uncorrelated downtime yourself.
Reward payments are automatically processed for active validator accounts with a maxed out effective balance of 32 ETH.
Any balance above 32 ETH earned through rewards does not actually contribute to principal, or increase the weight of this validator on the network, and is thus automatically withdrawn as a reward payment every few days. Aside from providing a withdrawal address one time, these rewards do not require any action from the validator operator. This is all initiated on the consensus layer, thus no gas (transaction fee) is required at any step.
Providing a withdrawal address is required before any funds can be transferred out of a validator account balance.
Users looking to exit staking entirely and withdraw their full balance back must also sign and broadcast a "voluntary exit" message with validator keys which will start the process of exiting from staking. This is done with your validator client and submitted to your beacon node, and does not require gas.
The process of a validator exiting from staking takes variable amounts of time, depending on how many others are exiting at the same time. Once complete, this account will no longer be responsible for performing validator network duties, is no longer eligible for rewards, and no longer has their ETH "at stake". At this time the account will be marked as fully “withdrawable”.
Once an account is flagged as "withdrawable", and withdrawal credentials have been provided, there is nothing more a user needs to do aside from wait. Accounts are automatically and continuously swept by block proposers for eligible exited funds, and your account balance will be transferred in full (also known as a "full withdrawal") during the next sweep.
Whether a given validator is eligible for a withdrawal or not is determined by the state of the validator account itself. No user input is needed at any given time to determine whether an account should have a withdrawal initiated or not—the entire process is done automatically by the consensus layer on a continuous loop.
When a validator is scheduled to propose the next block, it is required to build a withdrawal queue of up to 16 eligible withdrawals. This is done by starting with validator index 0 (at the Capella upgrade), determining if there is an eligible withdrawal for this account per the rules of the protocol, and adding it to the queue if there is. The validator set to propose the following block will pick up where the last one left off, progressing in order indefinitely.
Think about an analogue clock. The hand on the clock points to the hour, progresses in one direction, doesn’t skip any hours, and eventually wraps around to the beginning again after the last number is reached. Now instead of 1 through 12, imagine the clock has 0 through N (the total number of validator accounts that have ever been registered on the Beacon Chain, over 900,000 as of Jan 2024). The hand on the clock points to the next validator that needs to be checked for eligible withdrawals. It starts at 0, and progresses all the way around without skipping any accounts. When the last validator is reached, the cycle continues back at the beginning.
Checking an account for withdrawals
While a proposer is sweeping through validators for possible withdrawals, each validator being checked is evaluated against a short series of questions to determine if a withdrawal should be triggered, and if so, how much ETH should be withdrawn.
Has a withdrawal address been provided? If no withdrawal address has been provided, the account is skipped and no withdrawal initiated.
Is the validator exited and withdrawable? If the validator has fully exited, and we have reached the epoch where their account is considered to be "withdrawable", then a full withdrawal will be processed. This will transfer the entire remaining balance to the withdrawal address.
Is the effective balance maxed out at 32? If the account has withdrawal credentials, is not fully exited, and has rewards above 32 waiting, a partial withdrawal will be processed which transfers only the rewards above 32 to the user's withdrawal address.
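Putting the sweep and this checklist together, here is a simplified sketch of the logic (helper names are illustrative; the canonical version is `get_expected_withdrawals` in the consensus specs, which also caps how many validators are scanned per block):

```python
MAX_EFFECTIVE_BALANCE = 32 * 10**9        # Gwei (32 ETH)
MAX_WITHDRAWALS_PER_PAYLOAD = 16

def expected_withdrawals(validators, balances, next_index, epoch):
    """Round-robin sweep collecting up to 16 eligible withdrawals."""
    withdrawals = []
    n = len(validators)
    for offset in range(n):
        i = (next_index + offset) % n     # wrap around, like the clock hand
        v, balance = validators[i], balances[i]
        if not v.has_withdrawal_address:          # 0x00 credentials: skip
            continue
        if v.is_exited_and_withdrawable(epoch):   # full withdrawal
            withdrawals.append((i, balance))
        elif balance > MAX_EFFECTIVE_BALANCE:     # partial: skim rewards above 32
            withdrawals.append((i, balance - MAX_EFFECTIVE_BALANCE))
        if len(withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
            break
    return withdrawals
```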
There are only two actions that are taken by validator operators during the course of a validator's life cycle that influence this flow directly:
Provide withdrawal credentials to enable any form of withdrawal
Exit from the network, which will trigger a full withdrawal
This approach to staking withdrawals avoids requiring stakers to manually submit a transaction requesting a particular amount of ETH to be withdrawn. This also means there is no gas (transaction fee) required, and withdrawals also do not compete for existing execution layer block space.
A maximum of 16 withdrawals can be processed in a single block. At that rate, 115,200 validator withdrawals can be processed per day (assuming no missed slots). As noted above, validators without eligible withdrawals will be skipped, decreasing the time to finish the sweep.
Expanding this calculation, we can estimate the time it will take to process a given number of withdrawals:
| Number of validators | Time to complete a full sweep |
| --- | --- |
| 800,000 | 7.0 days |
| 900,000 | 7.8 days |
| 1,000,000 | 8.7 days |
| 1,100,000 | 9.6 days |
As you can see, this slows down as more validators are on the network. An increase in missed blocks could slow this down proportionally, but this will generally represent the slower side of possible outcomes.
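These estimates are straightforward to reproduce (up to rounding):

```python
SLOTS_PER_DAY = 24 * 60 * 60 // 12                           # 7,200 slots per day
WITHDRAWALS_PER_SLOT = 16
WITHDRAWALS_PER_DAY = SLOTS_PER_DAY * WITHDRAWALS_PER_SLOT   # 115,200 per day

for n in (800_000, 900_000, 1_000_000, 1_100_000):
    print(f"{n:>9,} validators: {n / WITHDRAWALS_PER_DAY:.1f} days per full sweep")
# -> about 6.9, 7.8, 8.7 and 9.5 days (the table above rounds slightly differently)
```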
Here is a list of example home-staking setups created by the EthStaker community:
Validators that participate in securing the Beacon Chain and execute "duties" are rewarded with newly issued ETH, credited to the validator balance. In addition, validators receive priority fees paid by users, and optionally MEV, Maximal Extractable Value. The current APY can be seen on the official launchpad ↗.
| Duty | Frequency | Reward |
| --- | --- | --- |
| Attestation | Once per epoch (~6.5 minutes) | 0.000014 ETH* |
| Block proposal | Every 2-3 months on average** | 0.02403 ETH* |
| Sync committee | Every 2 years on average** | 0.11008 ETH* |
| Slashing report | Very rarely included in Block Proposals | Up to 0.0625 ETH |
| Priority fees | Included in every Block Proposal containing transactions | Typically 0.01 to 0.1 ETH; very rarely 1+ ETH |
| MEV | Included in every Block Proposal containing transactions | Typically 0.01 to 0.1 ETH; very rarely 1+ ETH |
*Varies based on the total number of validators in the network. Estimate shown approximated for 435,000 active validators.
**These are subject to randomness; there can be "dry spells" multiple times longer than the average without being given one.
ETH on the consensus layer is not liquid, as it is being staked. Balances above 32 ETH will be automatically skimmed if a withdrawal credential has been set, but to access the principal 32 ETH you will need to exit the validator.
Rewards provided on the Execution layer are liquid and can be accessed instantly.
If the validator is offline and not executing its duties, it will be penalized at a rate slightly lower than the rewards for the same period of time.
Validators are penalized for small amounts of ETH if they are offline and fail to perform their assigned duties. This is called leaking. If a validator violates one of the core rules of the Beacon chain and appears to be attacking the network, it may get slashed. Slashing is a forceful exit of your validator without your permission, accompanied by a relatively large fine that removes some of your validator's ETH balance.
Realistically, the only condition that can cause a slashing is if you run your validator's keys on two nodes at the same time (such as a failover / redundancy setup, where your backup node accidentally turns on while your main node is still running). Don't let this happen, and you won't get slashed. Slashing cannot occur from being offline for maintenance.
Below is a table that shows the penalties that can happen to a validator:
| Penalty | Amount |
| --- | --- |
| Missed attestation | -0.000011 ETH* per attestation (9/10 the value of a normal attestation reward) |
| Missed block proposal | 0 |
| Missed sync committee | -0.00047 ETH* per epoch (-0.1 ETH total if offline for the whole sync committee) |
| Slashing | At least 1/32 of your balance, up to your entire balance in extreme circumstances |
*Varies based on the total number of validators in the network. Estimate shown approximated for 435,000 active validators.
As a rule of thumb, if you're offline for X hours (and you aren't in a sync committee), then you'll make all of your leaked ETH back after X hours once you're back online and attesting.
Stores everything kept in a full node and builds an archive of historical states.
Archive nodes are required if you want to query something like an account balance at a particular block.
This data runs to tens of terabytes (more than 20TB for Geth), which makes archive nodes less attractive for most users, but they can be handy for services like block explorers, wallet vendors, and chain analytics.
Syncing clients in any mode other than archive will result in pruned blockchain data. This means there is no archive of all historical states, but a full node is able to build them on demand.
Archive nodes aren't required to participate in block validation and can theoretically be built from scratch by simply replaying the blocks from genesis.
Votes by validators which confirm the validity of a block. At designated times, each validator is responsible for publishing different attestations that formally declare a validator's current view of the chain, including the last finalized checkpoint and the current head of the chain.
Every active validator creates one attestation per epoch (~6.5 minutes), consisting of the following components:
| Component | Description |
| --- | --- |
| Committee | A bitlist of validators where the position maps to the validator index in their committee. The value (0/1) indicates whether the validator signed the data (i.e. whether they are active and agree with the block proposer). |
| Slot | The slot number that the attestation references. |
| Index | A number that identifies which committee the validator belongs to in a given slot. |
| Chain head vote (beacon_block_root) | The root hash of the block the validator sees at the head of the chain (the result of applying the fork-choice algorithm). |
| Source | Part of the finality vote indicating what the validators see as the most recently justified block. |
| Target | Part of the finality vote indicating what the validators see as the first block in the current epoch. |
| Signature | A BLS signature that aggregates the signatures of individual validators. |
An important component related to effectiveness is the chain head vote. This is a vote the validator makes about what it believes is the latest valid block in the chain at the time of attesting. The structure of a chain head vote consists of the following components:
Slot - Defines where the validator believes the current chain head to be.
Hash - Defines what the validator believes the current chain head to be.
The combination of the two uniquely defines a point on the blockchain. By combining enough of these chain head votes, the Ethereum network reaches consensus about the state of the chain.
Source (ethereum.org) ↗ Source (Attestant) ↗
Although the data in each attestation is relatively small, it mounts up quickly with tens of thousands of validators. As this data will be stored forever on the blockchain, minimizing it is important, and this is done through a process known as attestation aggregation.
Aggregation takes multiple attestations that have all chosen to vote with the same committee, chain head vote, and finality vote, and merges them together into a single aggregate attestation.
An aggregate attestation differs in two ways from a simple attestation. First, there are multiple validators listed. Second, the signature is an aggregate signature made from the signatures of the matching simple attestations. Aggregate attestations are very efficient to store, but introduce additional communications and computational burdens.
If every validator were required to aggregate all attestations, the number of messages needed to pass every attestation to every validator would quickly overload the network. Equally, if aggregating were purely optional, validators would not bother to spend their own resources doing so. Instead, a subset of validators is chosen by the network to carry out aggregation duties. It is in their interest to do a good job, as aggregate attestations with higher numbers of validators are more likely to be included in the blockchain, so the validator is more likely to be rewarded.
Validators that carry out this aggregation process are known as aggregators.
A major part of the work of the beacon chain is storing and managing the registry of validators – the set of participants responsible for running the Ethereum Proof of Stake (PoS) system.
This registry is used to:
Assign validators their duties.
Finalize checkpoints.
Perform protocol-level random number generation (RNG).
Progress the beacon chain.
Vote on the head of the chain for the fork choice.
A block is a bundled unit of information that includes an ordered list of transactions and consensus-related information. Blocks are proposed by Proof of Stake (PoS) validators, at which point they are shared across the entire peer-to-peer network, where they can easily be independently verified by all other nodes. Consensus rules govern what contents of a block are considered valid, and any invalid blocks are disregarded by the network. The ordering of these blocks and the transactions therein creates a deterministic chain of events, with the end representing the current state of the network.
A validator chosen by the Beacon Chain to propose the next block. There can only be one valid block per slot.
Proposed: The block was proposed by a validator.
Scheduled: Validators are currently submitting data.
Missed/skipped: The proposer didn't propose the block within the given time frame.
In order to understand this, let us look at the diagram below, where "1, 2, 3, ..., 9" represent the slots.
Validator at slot 1 proposes the block “a”.
Validator at slot 2 proposes “b”.
Slot 4 is being skipped because the validator didn’t propose a block (e.g.: offline).
At slot 5/6 a fork occurs: Validator(5) proposes a block, but validator(6) doesn’t receive this data (e.g.: the block didn’t reach them fast enough). Therefore Validator(6) proposes its block with the most recent information it sees from validator(3).
The fork choice rule ↗ is the key here - It decides which of the available chains is the canonical one.
The canonical chain is the chain which is agreed to be the 'main' chain and not a fork.
The latest block received by a validator. This does not necessarily mean it is the head of the canonical chain.
The Beacon Chain has a tempo divided into slots (12 seconds) and epochs (32 slots). The first slot in each epoch is a checkpoint. When a supermajority of validators attests to the link between two checkpoints, they can be justified and then when another checkpoint is justified on top, they can be finalized.
An implementation of Ethereum software that verifies transactions in a block. These can be consensus layer clients or execution layer clients. Each validator needs both an execution layer client and a consensus layer client.
A group of at least 128 validators is assigned to validate blocks in each slot. One of the validators in the committee is the aggregator, responsible for aggregating the signatures of all other validators in the committee that agree on an attestation. Not to be confused with sync committees.
Ethereum's consensus layer is the network of consensus clients.
The Deposit contract is the gateway to Ethereum Proof of Stake (PoS) and is managed through a smart contract on Ethereum. The smart contract accepts any transaction with a minimum amount of 1 ETH and valid input data. Ethereum beacon nodes listen to the deposit contract and use the input data to credit each validator.
More info on the deposit contract
The average time it takes for a validator's attestations to be included in the chain.
Check out our page explaining validator effectiveness in more detail
1 Epoch = 32 Slots. An epoch consists of 32 slots (12 seconds each) and takes approximately 6.4 minutes. Epochs play an important role when it comes to the validator queue and finality.
Ethereum's execution layer is the network of execution clients.
In Ethereum Proof of Stake (PoS), at least two-thirds of the validators have to be honest. If there are two competing epochs and one-third of the validators decide to be malicious, they will receive a penalty, while honest validators will be rewarded.
In order to determine whether an epoch has been finalized, validators have to agree on the latest two epochs in a row; all previous epochs can then be considered finalized.
If fewer than 66.6% of the total possible votes (the participation rate) are cast in a specific epoch, the epoch cannot be justified. As mentioned in "Finalization", two justified epochs in a row are required to reach finality. As long as the chain cannot reach this state, it has finality issues.
During finality issues, the validator entry queue will be halted and new validators will not be able to join the network, however, inactive validators with less than 16 ETH balance will be exited from the network. This leads to more stability in the network and a higher participation rate, allowing the chain to eventually finalize.
A change in protocol causing the creation of an alternative chain or a temporal divergence into two potential block paths. Also see hard fork
Stores and maintains the full blockchain data on disk. It serves blockchain data upon request and helps support the network by participating in block validation and by verifying all blocks and states. All states can be derived from a Full node.
The first block in a blockchain, used to initialize a particular network and its cryptocurrency.
A hard fork occurs when an update is being pushed to the Ethereum network and the new version of the software forks from the old version. Usually requires operators to update their validator software to stay on the correct side of the fork. Also see fork
The validator has made a timely vote for the correct head block.
If the Beacon Chain has gone more than four epochs without finalizing, an emergency protocol called the "inactivity leak" is activated. The ultimate aim of the inactivity leak is to create the conditions required for the chain to recover finality. Finality requires a 2/3 majority of the total staked ether to agree on source and target checkpoints. If validators representing more than 1/3 of the total stake go offline or fail to submit correct attestations, then it is not possible for a 2/3 supermajority to finalize checkpoints. The inactivity leak lets the stake belonging to the inactive validators gradually bleed away until they control less than 1/3 of the total stake, allowing the remaining active validators to finalize the chain. However large the pool of inactive validators, the remaining active validators will eventually control >2/3 of the stake. The loss of stake is a strong incentive for inactive validators to reactivate as soon as possible!
The inclusion distance of a slot is the difference between the slot in which an attestation is made and the lowest slot number of the block in which the attestation is included. For example, an attestation made in slot s and included in the block at slot s + 1 has an inclusion distance of 1. If instead the attestation was included in the block at slot s + 5 the inclusion distance would be 5.
The value of an attestation to the Ethereum network is dependent on its inclusion distance, with a low inclusion distance being preferable. This is because the sooner the information is presented to the network, the more useful it is.
To reflect the relative value of an attestation, the reward given to a validator for attesting is scaled according to the inclusion distance. Specifically, the reward is multiplied by 1/d, where d is the inclusion distance.
The input data, also called the deposit data, is a user-generated, 842-character-long sequence. It represents the validator public key and the withdrawal public key, which were signed by the validator private key. The input data needs to be added to the transaction to the deposit contract in order for the deposit to be identified by the Beacon Chain.
More info about the deposit process
66.6% of the total validators need to attest in favour of a block's inclusion in the canonical chain. This condition upgrades the block to "justified". Justified blocks are unlikely to be reverted, but they can be under certain conditions.
When another block is justified on top of a justified block, it is upgraded to "finalized". Finalizing a block is a commitment to include the block in the canonical chain.
An Ethereum client that does not store a local copy of the blockchain, or validate blocks and transactions. It offers the functions of a wallet and can create and broadcast transactions.
MEV, or "maximal extractable value", is a controversial topic. Node operators can extract MEV by accepting blocks built by "searchers", via a small side program called "mev-boost ↗" by Flashbots. In this case, the consensus layer client (such as Nimbus, Teku, etc.) will, when asked to procure a block to propose, get blocks from MEV relays via mev-boost as well as from the execution layer client (such as Besu, Geth, etc.), and then choose whichever block from the relay pays best. The local execution layer client's block is only used when no block is offered by the relay (e.g. the relay is down) or when the relay's bid is lower than the locally set min-bid value. If a blinded block from a relay has already been signed, then a local block will not be built, so double signing does not occur. For this reason, when a validator signs a block from the relay, it is trusting that the relay will submit that signed block to the network.
Rewards from MEV are paid to the same suggested fee recipient address as priority fees.
When an Ethereum node receives a transaction, it is not instantly added to a block. The transaction is held in a waiting area or a buffer zone.
The transaction goes through a number of levels of verification - checking whether the output is greater than the input, whether the signature is valid, and so on - and only then is it added to a block. If the transaction fails any of these validations, it is not added. The mempool's role is to hold a transaction while it goes through these checks: it is simply kept in this waiting area. As soon as the transaction is confirmed, it is removed from the mempool and added to a block. The mempool is not a master reference shared universally by all nodes; there is no "one" mempool. Each node configures its own rules for its mempool. In fact, a node can be the first to receive a transaction yet never propagate it to the rest of the network.
Any instance of Ethereum client software that is connected to other computers also running Ethereum software, forming a network. A node doesn’t necessarily need a validator but a validator requires a node. Running a node by itself does not generate any revenue but does contribute to the robustness of the network.
A person who maintains a validator
The participation rate is the percentage of validators that are online and performing their duties.
If the validator set is 1,000 validators, and 250 validators are offline or rarely making proposals or attestations, then it could be estimated that the participation rate is 75%.
Other nodes running Ethereum clients that connect to each other over a peer-to-peer network. Communication between peers is how the Ethereum network remains decentralized as there is no single point of failure.
Almost all transactions on Ethereum set a priority fee ↗ to incentivize block proposers to include the transaction as a higher priority than others. The higher the fee relative to other transactions currently waiting in the mempool, the more likely the transaction is to be included in the next block. This fee is paid to the block proposer. All of the priority fees in a block are aggregated and paid in a single state change directly to the suggested fee recipient set by the block proposer. This address could be a hardware wallet, a software wallet, or even a multi-sig contract.
A secret number that allows Ethereum users to prove ownership of an account or contracts, by producing a digital signature.
A method by which a cryptocurrency blockchain protocol aims to achieve distributed consensus. PoS asks users to prove ownership of a certain amount of cryptocurrency (their "stake" in the network) in order to be able to participate in the validation of transactions.
A number, derived via a one-way function from a private key, which can be shared publicly and used by anyone to verify a digital signature made with the corresponding private key.
Demonstrating cryptographically that a message or transaction was approved by the holder of a specific private key.
If your validator commits a slashable offense it will be force exited from the validator pool and will have ETH deducted depending on the circumstances of the event. Typically, this will be 1-2 ETH but could be significantly more ↗.
This is not something to be overly worried about, there are simple steps you can take to make sure that you don't invoke a slashing event.
There are three ways a validator can be slashed, all of which amount to the dishonest proposal or attestation of blocks:
Double proposal: Signing two different blocks for the same slot.
Double voting: Signing two different attestations in one epoch.
Surround votes: Attesting to a block that "surrounds" another one (effectively changing history).
The slasher is its own entity but requires a beacon node to receive attestations. To find malicious activity by validators, the slasher iterates through all received attestations until a slashable offense is found. Found slashings are broadcast to the network, and the next block proposer adds the proof to their block. The block proposer receives a reward for slashing the malicious validator. However, the whistleblower (the slasher) does not receive a reward.
32 Slots = 1 Epoch. A slot is a time period of 12 seconds in which a randomly chosen validator can propose a block. The total number of validators is split into committees, and one or more individual committees are responsible for attesting to each slot. One validator from the committee is chosen to be the aggregator, while the other 127 validators attest. After each epoch, the validators are mixed and merged into new committees. Each slot may or may not have a block in it, as a validator could miss their proposal (e.g. they may be offline or submit their block too late). There is a minimum of 128 validators per committee.
An operator who runs a validator on the Ethereum network without a protocol between their validator and the Beacon Chain.
The validator has made a timely vote for the correct source checkpoint.
Someone who has deposited ETH into a validator to secure the network. This can be someone who runs a validator (an operator) or someone who deposited their ETH into a pool, where someone else is the operator of the validator.
A command-line tool used to generate validator keys and deposit data files.
The fee recipient is an Ethereum address nominated by a Beacon Chain validator to receive tips from user transactions and MEV.
A sync committee is a randomly selected group of validators that refreshes every ~27 hours. Its purpose is to add its members' signatures to valid block headers. Sync committees allow light clients to keep track of the head of the blockchain without needing to access the entire validator set. For any given validator, selection occurs every ~2 years on average; however, there can be "dry spells" multiple times longer than the average. So if your validator is selected... congratulations! 🥳
The validator has made a timely vote for the correct target checkpoint.
A node in a Proof of Stake (PoS) system responsible for storing data, processing transactions, and adding new blocks to the blockchain. To activate validator software, you need to stake 32 ETH. A validator's job is to propose blocks and sign attestations, and it has to be online for at least 50% of the time in order to have positive returns. A validator is run by an operator (a human) on hardware (a computer) and is paired with a node (many thousands of validators can run on one node).
Refers to pending validators. The deposit has been recognized by the Beacon Chain at the timestamp of “Eligible for activation”. If there is a queue of pending validators ↗, an estimated timestamp for activation is calculated.
Every validator receives a unique index based on when they are added from the validator queue.
The current balance is the amount of ETH held by the validator as of now, while the effective balance is a value calculated from the current balance. It is used to determine the size of the rewards or penalties a validator receives. The effective balance can never be higher than 32 ETH.
In order to increase the effective balance, the validator requires "effective balance + 1.25 ETH". In other words, if the effective balance is 20 ETH, a current balance of 21.25 ETH is required in order to have an effective balance of 21 ETH. The effective balance adjusts downward once the current balance drops 0.25 ETH below the threshold, as seen in the examples below.
Here are examples of how the effective balance changes:
If the Current balance is 32.00 ETH – the Effective balance is 32.00 ETH.
If the Current balance dropped from 22 ETH to 21.76 ETH – Effective balance will be 22.00 ETH.
If the Current balance increases to 22.25 and the effective balance is 21 ETH, the effective balance will increase to 22 ETH.
32 ETH has been deposited to the ETH1 deposit-contract and this state will be kept for around 7 hours. This offers security in case the Ethereum chain gets attacked.
Waiting for activation on the Beacon Chain.
Before validators enter the validator queue, they need to be voted in by other active validators. This occurs every 4 hours.
Currently attesting and proposing blocks.
The validator will stay active until:
Its balance drops below 16 ETH (ejected).
It exits voluntarily.
It gets slashed.
The Validator has been malicious and will be slashed and kicked out of the system.
A Penalty is a negative reward (e.g. for going offline). A Slashing is a large penalty (≥ 1/32 of balance at stake) and a forceful exit ... . - Justin Drake
Ejected: The validator balance fell below a threshold and was kicked out by the network.
Exited: Voluntary exit; the withdrawal key holder has the ability to withdraw the current balance of the corresponding validator.
The number of currently active validators securing the Ethereum network. The current validator pool can be seen here ↗.
The validator queue is a first-in-first-out queue for activating and exiting validators on the Beacon Chain.
Up to 327,680 active validators in the network, 4 validators can be activated per epoch. For every additional 65,536 (4 × 16,384) active validators, the activation rate goes up by one per epoch.
5 validators per epoch requires 327680 active validators, allowing 1125 validators per day.
6 validators per epoch requires 393216 active validators, allowing 1350 validators per day.
7 validators per epoch requires 458752 active validators, allowing 1575 validators per day.
8 validators per epoch requires 524288 active validators, allowing 1800 validators per day.
9 validators per epoch requires 589824 active validators, allowing 2025 validators per day.
10 validators per epoch requires 655360 active validators, allowing 2250 validators per day.
The number of activations scales with the number of active validators; the per-epoch limit is the size of the active validator set divided by 65,536.
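As a back-of-the-envelope sketch of this scaling rule (the activation churn limit max(4, active_validators / 65536) and 225 epochs per day come from the consensus spec; the script itself is illustrative):

```bash
# Estimate validator activations per epoch and per day
# for a given number of active validators.
active=500000
churn=$(( active / 65536 ))    # churn limit quotient from the spec
(( churn < 4 )) && churn=4     # minimum of 4 activations per epoch
echo "Activations per epoch: $churn"
echo "Activations per day:   $(( churn * 225 ))"   # 225 epochs per day
```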
Exiting validators works in the same way, with the amount of validators that can exit the Beacon Chain per day rate limited to preserve the stability of the network.
The Seed Phrase or Mnemonic is a set of words (usually 12, 18 or 24 words long) used to generate your validator keys. Your mnemonic is the backup for your validator keys and is the ONLY way to withdraw your ETH, and no one can help you recover your mnemonic if you lose it.
An address that can be optionally set when creating a validator key that will be used to withdraw staked ETH. If this address is not set at the time of key creation it can be set at the time of withdrawal instead. For more information about setting the withdrawal address on key creation, see our FAQ answer.
This is a simple NUC that I set up for my home staking validator. It was very easy to build - around 10 minutes to unpack everything, slot in the RAM and SSD, and turn it on.
I decided on the larger form factor for the NUC (there's a normal and a slim version) to avoid any problems with restricted airflow and overheating. I'm also not constrained on space so I didn't mind having a slightly larger form factor on my shelf.
A 2TB SSD is the right amount for me, as I won't need to upgrade it within the next 1-2 years, and by that time there may be improvements in the protocol and/or clients that allow for smaller states, needing less storage.
You don't need 64GB of RAM; 16GB would have been fine, but I wanted extra in case I needed it in the future.
Total cost: £1165 (October 2022)
The machine only needed three parts:
To open the NUC, simply unscrew the four retaining screws, and detach the ribbon cable.
The ribbon cable has a small plastic retainer that can be unclipped by hand.
With the ribbon cable removed, the NUC will look like this:
The first component to insert is the SSD. There is a retaining screw that needs to be removed before the SSD is inserted (1).
The SSD is placed in the slot that says "NVMe ONLY" (2). It can only fit one way because of the little notch, so there's nothing to worry about.
Replace the SSD retaining screw (1).
The SSD in place should look like this:
Insert the RAM into the slots. Again, they can only fit one way because of the little notch.
The finished setup should look like this:
Replace the NUC base plate and secure the four retaining screws... and that's it!
All you need to do now is plug in the power cable and press the power button to turn it on 🥳
To install the validator software, check out the Linux installation guide.
This is a custom-built desktop that I put together for solo staking. I decided on high-quality components at a good price, so that they will not become obsolete quickly.
There are 6 main things to keep in mind:
Motherboard External site link ↗
Microprocessor External site link ↗
RAM memory External site link ↗
Hard Drive External site link ↗
Power supply External site link ↗
Cabinet External site link ↗
I decided on a mini-ITX Z690I motherboard, which is a small form factor; the board is modern enough to house Intel 12th-generation microprocessors and will give me many years of hard work.
An Intel Core i5-12400 microprocessor with integrated graphics; the cost difference between the versions with and without integrated graphics is very small, plus you save on buying a graphics card.
16GB of DDR5 RAM that reaches 6000 MHz, with RGB that looks great in your build.
A 2TB SSD, PCI Express 3.0, NV1 M.2 NVMe.
The EVGA Supernova 750 GM power supply is very important for the build. I decided on this EVGA for the mini-ITX because it is quiet and modular (you save on cables you don't need), its fan only activates when needed, and its SFX form factor is smaller.
Finally, the Cooler Master NR200P SFF mini-ITX case: removable panels with great access for modifying the hardware components, including the fans.
It is certainly a wise decision to build your own node. These are quality components without a doubt, though you could save more by buying a different motherboard or another type of RAM.
$280 USD - Motherboard Z690I AORUS ULTRA LITE LGA 1700/ Intel Z690/ Mini-ITX/ DDR5/ Dual M.2/ PCIe 3.0/ USB 3.2 Gen2X2 Type-C/2.5 GbE LAN
$225 USD - Microprocessor Intel Core i5-12400 - S-1700 - 2.50GHz - 6-Core - 18MB Smart Cache (12th Generation - Alder Lake)
$125 USD - RAM memory 16GB DDR5 XPG ECC CL40 XMP 6000 MHz RGB
$145 USD - Hard Drive SSD 2TB PCI Express 3.0 NV1 M.2 NVMe
$115 USD - Power supply EVGA Supernova 750 GM, 80 Plus Gold 750W, Fully Modular, Eco Mode with FDB Fan, SFX Form Factor
$125 USD - Cabinet Cooler Master NR200P SFF - Mini-ITX
TOTAL: $1015 USD (September 2022)
Greetings #StakeFromHome #Mexico
Ethereum is a peer-to-peer network with thousands of nodes that must be able to communicate with one another using standardized protocols. The "networking layer" is the stack of protocols that allow those nodes to find each other and exchange information. This includes "gossiping" information (one-to-many communication) over the network as well as swapping requests and responses between specific nodes (one-to-one communication). Each node must adhere to specific networking rules to ensure they are sending and receiving the correct information.
There are two parts to the client software (execution clients and consensus clients), each with its own distinct networking stack. As well as communicating with other Ethereum nodes, the execution and consensus clients have to communicate with each other. This page gives an introductory explanation of the protocols that enable this communication.
Execution clients gossip transactions over the execution-layer peer-to-peer network. This requires encrypted communication between authenticated peers. When a validator is selected to propose a block, transactions from the node's local transaction pool are passed to the consensus client via a local RPC connection and packaged into Beacon blocks.
Consensus clients then gossip Beacon blocks across their own p2p network. This requires two separate p2p networks: one connecting execution clients for transaction gossip and one connecting consensus clients for block gossip.
This tutorial will run you through the steps you need to take to setup a VPN server at home, allowing you to securely connect back in and manage your staking machine.
If you're ever out and about and away from home for long periods of time, then this tutorial is for you!
Question - Why should I set up a VPN server? It is easier to simply forward ports and connect in.
Answer - A VPN server introduces another layer of security. To connect to your node via SSH, you first have to connect to your VPN and then SSH into your node. This introduces another barrier and makes intrusions much more difficult.
You will need a public static IP address or a DNS for your home network.
This tutorial assumes you have Ubuntu Server 22.04 LTS installed and running. If not, no stress! You can follow the below link for a tutorial on how to do that.
For security purposes, I recommend you set up the VPN server on a separate machine that is not your staking machine. This can even be a VM.
With all of that out of the way - Let's dive right into it!
Please login to the machine and authenticate as superuser using the below command
sudo -i
Now execute the below command to ensure the OS and packages are up to date. Please upgrade any that aren't.
apt-get update && apt-get upgrade
Please execute the below command.
apt-get install ca-certificates wget net-tools gnupg
Execute the below three commands
This command will load the OpenVPN public GPG key.
This command will add the OpenVPN access server repository to your machine.
This command will check all configured repositories for updates.
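The three commands below are a sketch based on OpenVPN's published Access Server quick-start for Ubuntu 22.04 ("jammy"); double-check the official OpenVPN documentation for the exact commands for your release:

```bash
# 1. Load the OpenVPN public GPG key.
wget -qO - https://as-repository.openvpn.net/as-repo-public.gpg | apt-key add -
# 2. Add the OpenVPN Access Server repository to your machine.
echo "deb http://as-repository.openvpn.net/as/debian jammy main" > /etc/apt/sources.list.d/openvpn-as-repo.list
# 3. Check all configured repositories for updates.
apt-get update
```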
Now the fun begins! To install, execute the below command
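Assuming the repository added above was configured successfully:

```bash
apt-get -y install openvpn-as
```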
Hooray, you've just installed OpenVPN Access Server! Pay attention to this part of the output; it contains valuable information.
The Admin UI is for making changes to the server config and adding users.
Browse to your Admin UI URL. You'll receive a certificate warning, you can safely ignore this and continue. Once completed, you'll see the below UI.
Please login, read and accept the EULA and we are ready to go!
We need to make a few network changes, for this please navigate to Configuration > Network Settings
Please find the "Multi-Daemon Mode" section, and edit both ports away from the default ports. This is for security purposes. These ports can be the same number. I picked 9514, but this is an example only, I recommend choosing your own ports.
Please don't navigate away from the "Network Settings" page for now. But you will need to open the below URL in a new tab.
Copy your IP address from this website and paste it in the "Hostname or IP Address" field located at the top of the "Network Settings" page. This will already be populated with your private IP address, you must overwrite it with your public one.
This step is optional but for security purposes I heavily recommend it.
We are going to configure the admin UI and the client UI to run on different ports because we only need to publicly access the client UI.
On the same page "Network Settings", please scroll down to the bottom and find "Client Web Server" and toggle the "Use a different IP address or port" setting.
Now we can change which port we want the client web server to run on, you can make this any port of your choosing. I chose 9515.
From here, please click "Save Settings" and then "Update Running Server"
Once the running server has been updated, you may need to refresh your browser and log back into the admin UI.
This step is also optional, but for security purposes I also heavily recommend it.
We are going to require that all user accounts set up and use 2FA, so in the worst-case scenario where someone gets hold of or otherwise guesses your user credentials, they still won't be able to gain access.
Please navigate to Authentication > Settings
Please find the "TOTP Multi-Factor Authentication" setting and toggle it
Once the setting has been changed, you must again click "Save Settings" down the bottom and then "Update running server" as shown in the bottom of the last example (Step 5.3).
If you are an advanced user - you may have set up your OpenVPN server on a different subnet than your Ethereum nodes/validators.
If that is the case, you will need to browse to "Configuration > VPN Settings" and add in a static route for your validator network.
If they are not on separate subnets, please continue onto step 6.
Please navigate to "User management" > "User Permissions".
From here, you can add a new user. Please type out a username and tick the "Allow Auto-login" box, then select the "More Settings" box.
You can now set a password for the account in the new options that appear when you click "More Settings".
Once done, please "Save Settings" and "Update Running Server" again.
If you are using a local firewall (which you should be), you may need to unblock the local ports depending on how you have it configured.
Almost there!
For this step you will need to login to your router and forward ports to the machine running OpenVPN access server. The exact workflow is router dependent, so please search online for instructions and include your router make and model in the search.
You will need to forward two ports, both TCP/UDP.
The port(s) you entered for "Multi-Daemon Mode" (I used 9514)
The port you entered for "Client Web Server" (I used 9515)
Once completed, please browse to the below website and enter in your ports to check if they are forwarded correctly.
If you've made it this far, then congrats! You will be pleased to know that all the hard stuff is out of the way.
Please complete this step on the device you want to set up remote access.
NOTE: If you enabled MFA as per step 5.4, then after logging in you will be prompted to setup a 2FA credential. You can use something like Google Authenticator or Authy. I heavily recommend enabling this as the extra security is definitely worth it.
Please select the OS you are using.
From here you can select a client for your device.
If on Windows or Mac, it will automatically download the OpenVPN client software and guide you through the rest of the process.
If on Linux, Android or iOS, it will take you to an external page with further instructions.
Please download the "autologin profile".
Once done, you will have to import the profile into the OpenVPN software. The software itself (Windows or Mac) or external pages (Linux, Android or iOS) will show you how to do this.
Lucky you, I saved the easiest step for last.
If using a laptop or desktop, please connect to another network such as a mobile hotspot.
If using a phone, please disconnect from your WiFi and ensure you are connected to your telco's internet.
Go back to the OpenVPN software and hit connect and you'll be connected in just a matter of seconds.
If the connection was successful, you will see it now matches your home IP as you are now connected to your home internet connection. From here you can securely SSH into your Ethereum nodes/validators.
If you can connect to your home network but are unable to SSH into your servers, you may need to tweak the firewall on your Ethereum node to accept incoming SSH connections from the IP address of your OpenVPN server.
No. At least not easily. To get this to work, you would need to write your own iptables rules.
No. If you can access the server within your local network and download and setup your user profile, then you won't need to access the client UI externally.
However if you aren't within your local network and you need to redownload a user profile (For example if you are travelling and your phone/laptop dies and you get a new one) then you won't be able to login to the portal and download a new user profile.
To protect your server from brute-force SSH connection attempts, you can install fail2ban. This program will monitor incoming connections and block IP addresses that try to log in with faulty credentials repeatedly.
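As a minimal sketch (the `jail.local` path is the conventional fail2ban override file; the values shown are illustrative defaults):

```bash
# Install fail2ban and create a minimal jail for the SSH daemon.
sudo apt install fail2ban
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled = true
# Change this if you use a non-standard SSH port.
port = 22
# Failed attempts allowed before the offending IP is banned.
maxretry = 5
# How long the ban lasts, in seconds.
bantime = 3600
EOF
```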
If you're using a non-standard SSH port (anything other than the default `22`), then you will need to change the `port` value in the config file above. You can also change the `maxretry` setting, which is the number of attempts fail2ban will allow before locking the offending address out.
Save the file and restart the service.
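On Ubuntu, for example:

```bash
sudo systemctl restart fail2ban
```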
Congratulations! You've successfully improved the security of your node 🥳
It is very important that you forward ports to both your Ethereum execution and consensus clients; this ensures you are able to peer with other nodes efficiently.
Your execution and consensus client will still run without port forwarding, but you may notice it will take a very long time to find peers, and you may struggle to connect with a meaningful number of them too. So it is in your best interest (and heavily recommended) to have them forwarded.
Below is a list of the default ports that each client is set to listen on.
If your node is running and the port(s) have been forwarded, you can use an online tool to check whether the ports are forwarded correctly.
For more information on how to actually forward the ports, it is best to search online for instructions; make sure to include your router make and model, as the specific steps will vary depending on your router.
It is also recommended to configure your node with a static IP. This can be hardcoded on the machine itself or set as a reservation on your DHCP server (which is usually your router).
If your node gets a dynamic IP address assigned to it, there is always the chance that your machine may get assigned a different IP address and your port forwarding will no longer work as they will be pointing to the old IP.
This is optional. You only need to consider this section if you run a node at home and would like to connect to it from outside of your home network.
Tailscale requires the use of an SSO identity provider; ensure you are aware of and comfortable with the additional risks associated with this before proceeding. For details, see Tailscale's documentation.
If you would like to log into your home network remotely, such as while on vacation or on a business trip, the most common route is to use a Virtual Private Network server. This will allow you to connect to your node via SSH and view your monitoring dashboards from anywhere in the world, all without exposing your SSH port to the internet.
Many node operators use Tailscale as their VPN server of choice for this. Tailscale is an open-source P2P VPN tunnel and hosted endpoint discovery service. It takes care of authentication, publication, and the NAT traversal required to establish an end-to-end encrypted path between your machine and your node without sending any sensitive traffic to a centralized server. It is a very powerful tool.
We will briefly cover a basic configuration of it, but feel free to consult Tailscale's documentation for more details.
First, create a free Tailscale account. Tailscale requires the use of an SSO identity provider such as Google, GitHub, Okta, Microsoft, etc.
It is recommended that you enable 2FA (Two Factor Authentication) on whichever identity provider you choose for added security.
Next, follow Tailscale's installation instructions to install Tailscale on your client - the machine you want to connect to your network with. For example, this could be a laptop or your phone. Note that it is not your validator node!
Once completed, you should see your computer as 'connected' on the Tailscale admin console.
Now, install Tailscale on your node:
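Tailscale's one-line installer works on most distros (it detects your package manager; review the script first if you prefer):

```bash
curl -fsSL https://tailscale.com/install.sh | sh
```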
Finally, authenticate and connect your machine to your Tailscale network on your node:
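On first run this prints a login URL that ties the machine to your Tailscale account:

```bash
sudo tailscale up
```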
You’re connected! You can find your Tailscale IPv4 address by running:
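The standard Tailscale CLI command for this is:

```bash
tailscale ip -4
```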
You should now be able to `exit` the SSH session to your node on your client, and SSH into your node again through Tailscale using `ssh <user>@<node-name> -p <ssh-port>`.
The following steps will modify your firewall rules. You must have at least 2 SSH sessions open to your node machine before proceeding - one for modifying the configuration and testing it afterward, and one that will stay logged in as a backup in case your changes break SSH so you can revert them!
Run these commands on the node machine.
Allow access to all incoming ssh connections over Tailscale.
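A sketch assuming Tailscale's default interface name (`tailscale0`) and the default SSH port 22; adjust both to your setup:

```bash
sudo ufw allow in on tailscale0 to any port 22 proto tcp
```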
You may also remove access to the SSH port to completely lock down your node. Note that you will not be able to log in from the local network as Tailscale will become the only way to log in. Only run the following command if you are okay with this.
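For example, if SSH was originally allowed with a rule for port 22:

```bash
sudo ufw delete allow 22/tcp
```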
Once you’ve set up firewall rules to restrict all non-Tailscale connections, restart UFW and SSH:
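On Ubuntu that is:

```bash
sudo ufw reload
sudo systemctl restart ssh
```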
Now, confirm that everything is working as expected. `exit` from one of your current SSH sessions (but remember to keep the second one open as a backup). Next, connect to the node machine via SSH using the Tailscale IP address or hostname: `ssh <user>@<node-name> -p <ssh-port>`
If it works, you did everything right and can now safely log into your home network while abroad!
If you've previously port-forwarded your node's SSH port in your router, you can now remove it.
The Client UI is for your devices; you'll be able to download user profiles/certificates here.
If you are one of the lucky ones that had to do , then you may also need to add your Ethereum node/validator subnet to the user account too.
These are the ports you set in and + port 943 (The default admin port). You can change the admin UI port if you wish, but as it doesn't have external access it is not really necessary.
Open the client web UI - You can use either your public or private IP for this step. In my case, I navigated to
Once in, login using the user account you created in . If successful, you will see the screen in the below step.
Now check your IP address at >
Now check your IP address again at >
So leaving the client UI exposed to the internet with MFA switched on means the security is top notch.
Extra reading:
Now, install Tailscale on your node. First, add Tailscale's package signing key and repository:
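A sketch of Tailscale's documented apt repository setup for Ubuntu 22.04 ("jammy"); substitute your own release name:

```bash
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/jammy.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/jammy.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
sudo apt-get update && sudo apt-get install -y tailscale
```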
You should now see your node machine added to the Tailscale admin console. You may also change the name of the node machine and disable key expiry through the dashboard.
If you would like to access your node using its name rather than its IP address, you can do so by enabling MagicDNS in the Tailscale settings.
If you have UFW configured, you can now add a rule to allow incoming connections over the Tailscale interface.
This documentation was adapted from the
Execution clients:

| Client | Default port |
| --- | --- |
| Besu | 30303 TCP/UDP |
| Erigon | 30303 TCP/UDP, 30304 TCP/UDP |
| Geth | 30303 TCP/UDP |
| Nethermind | 30303 TCP/UDP |

Consensus clients:

| Client | Default port |
| --- | --- |
| Lighthouse | 9000 TCP/UDP |
| Lodestar | 9000 TCP/UDP |
| Nimbus | 9000 TCP/UDP |
| Prysm | 12000 UDP, 13000 TCP |
| Teku | 9000 TCP/UDP |
Validator effectiveness can be thought of as how useful an attestation is to the network, considering both block production and inclusion distance.
How are attestations included on the Ethereum network? The process is roughly as follows:
This is a simplified view of attesting, but can be a useful starting point.
Every attesting validator generates an attestation with the data it has available about the state of the chain.
The attestation is propagated around the Ethereum network to relevant aggregators.
Every relevant aggregator that receives the attestation aggregates it with other attestations that have the same claims.
The aggregated attestation is propagated around the Ethereum network to all nodes.
Any validator that is proposing a block and has yet to see the aggregated attestation on the chain adds the aggregated attestation to the block.
Whenever an attestation has an inclusion distance greater than 1 it is important to understand why. There are a number of possible reasons:
A validator may have problems that result in delayed attestation generation. For example, it may have out-of-date information regarding the state of the chain, or the validator may be underpowered and take a significant amount of time to generate and sign the attestation. Regardless of the reason, a delayed attestation has a potential knock-on effect for the rest of the steps in the process.
Once an attestation has been generated by a validator it needs to propagate across the network to the aggregators. The nature of this process means that early propagation is critical to ensure that it is received by an aggregator in time for integration into the aggregated attestation before broadcasting. Validators should attempt to ensure they are connected to enough varied peers to ensure fast propagation to aggregators.
An aggregator can delay the attestation aggregation process. Most commonly this is because the node is already overloaded by generating attestations, but the speed of the aggregation algorithm can also cause significant delays when there is a large number of validators that need to be aggregated.
Similar to attestation propagation delay, the aggregation attestation needs to make its way around the network and can suffer the same delays.
For an attestation to become part of the chain it needs to be included in a block. However, block production is not guaranteed. A block may not be produced because a validator is offline, or is out of sync with the rest of the network and so produces a block with invalid data that is rejected by the chain. Without a block there is no way to include the attestation in the chain at that slot, resulting in a higher than optimal inclusion distance.
Block production failure has a second impact, which is that it increases the total number of attestations that are eligible for inclusion in the next block that is produced. If there are more attestations available than can fit in a block the producer is likely to include the attestations that return the highest reward, which will be those with the lowest inclusion distance. This can result in attestations that miss their optimal block also missing subsequent blocks due to being less and less attractive to include.
The fact that block production is out of the validator's control (except for the blocks the validator itself produces) requires the definition of the term earliest inclusion slot: the first slot greater than the attestation slot in which a valid block is produced. This takes into account the fact that attestations cannot be included in blocks that do not exist, and is no reflection on the effectiveness of the validator.
It is possible for a malicious actor to refuse to include any given attestations in their aggregates, or to refuse to include attestations in their blocks. The former is mitigated by having multiple aggregators for each attestation group, and the latter by the cost of excluding an aggregated attestation. Ultimately, however, if the cost of excluding an attestation from a block is compensated for monetarily, or is considered to have a higher value politically, there is nothing an attesting validator can do to force inclusion by a block-producing validator.
Attestation effectiveness can be thought of as how useful an attestation is to the network, considering both block production and inclusion distance. It is formally defined as:
effectiveness = (earliest inclusion slot - attestation slot) / (actual inclusion slot - attestation slot)
Effectiveness is represented as a percentage value. An attestation that fails to be included with the maximum inclusion distance of 32 is considered to have an effectiveness of 0. Here are some example effectiveness calculations:
| Attestation slot | Earliest inclusion slot | Actual inclusion slot | Effectiveness |
| --- | --- | --- | --- |
| 5 | 6 | 6 | 100% |
| 5 | 6 | 7 | 50% |
| 5 | 6 | 8 | 33.3% |
| 5 | 7 | 7 | 100% |
| 5 | 7 | 8 | 66.7% |
| 5 | 7 | 9 | 50% |
Attestation effectiveness for a single attestation is interesting but not very useful by itself. Aggregating effectiveness over multiple attestations, both over time and multiple validators, gives a better view of the overall effectiveness of a group of validators. Aggregate effectiveness can be calculated as a simple average of individual attestation effectiveness, for example a 7-day trailing average across all validators in a given group.
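As a quick sketch of the formula in action (the slot values are taken from the table above):

```bash
# Compute attestation effectiveness from the three slot numbers.
attestation=5; earliest=7; actual=8
awk -v a="$attestation" -v e="$earliest" -v i="$actual" \
    'BEGIN { printf "effectiveness = %.1f%%\n", 100 * (e - a) / (i - a) }'
# Prints: effectiveness = 66.7%
```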
mevboost.org ↗ - Tracker with real-time stats for MEV-Boost relays and block builders.
MEV-Explore ↗ - Dashboard and live transaction explorer for MEV transactions.
A community maintained list of checkpoint sync endpoints can be found here: Ethereum Beacon Chain checkpoint sync endpoints ↗
The endpoint maintained by EthStaker can be found here
Checkpoint sync, also known as weak subjectivity sync, creates a superior user experience for syncing a beacon node. It's based on assumptions of weak subjectivity↗, which enables syncing the Beacon Chain from a recent weak subjectivity checkpoint instead of from genesis. Checkpoint sync makes the initial sync time significantly faster, with similar trust assumptions as syncing from genesis.
In practice, this means your node connects to a remote service to download recent finalized states and continues verifying data from that point. The third-party providing the data needs to be trusted to provide the correct information about the finalized state and should be picked carefully.
You must verify the `slot` and `state_root` against a known trusted source. This can be a friend, someone from the community that you know, or any other source that you trust. There is a maintained list of publicly hosted checkpoint sync endpoints here↗, but it is recommended to use your own trusted source first if possible.
You will need to know the IP & Port of your beacon node.
Option A:
Check your consensus client logs
Find the slot number.
Find the state_root value.
Option B:
Open `http://YOUR_NODE_IP:YOUR_NODE_PORT/eth/v1/beacon/headers/finalized` in your browser.
Find the slot number.
Find the state_root value.
Option C:
Install curl and jq.
In a new terminal window run:
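A sketch using the same standard Beacon API endpoint as Option B (`YOUR_NODE_IP` and `YOUR_NODE_PORT` are placeholders for your beacon node's address):

```bash
# Query the finalized header and print its slot and state_root.
curl -s http://YOUR_NODE_IP:YOUR_NODE_PORT/eth/v1/beacon/headers/finalized \
  | jq '.data.header.message | {slot, state_root}'
```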
If the `slot` and `state_root` from your node match the `slot` and `state_root` from (multiple) other sources, then it's a match, congratulations 🎉. If it's not a match, you should start from scratch by wiping your beacon node and starting from the top.
This is a list of high quality staking guides that are recommended on ethereum.org ↗:
It's critically important for the health and long-term sustainability of Ethereum that there is a diverse and balanced client ecosystem. Multiple, independently developed and maintained clients exist because client diversity makes the network more resilient to attacks and bugs. Multiple clients are a strength unique to Ethereum - other blockchains rely on the infallibility of a single client. However, it is not enough simply to have multiple clients available; they have to be adopted by the community, with the total active nodes distributed relatively evenly across them.
Please consider running a minority client to support Ethereum.
More information on client diversity can be found on ethereum.org ↗
Even if you have a large SSD installed, you may only have 200GB of total available space (the Ubuntu Server default), depending on whether or not you have an LVM configured for your disk.
This can cause the system to run out of disk space when syncing.
The error message is similar to:
Fatal: Failed to register the Ethereum service: write /var/lib/goethereum/geth/chaindata/383234.ldb: no space left on device
To address this issue, assuming you have an SSD that is larger than 200GB, expand the space allocation for the LVM by following these steps:
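A sketch of the usual steps on a default Ubuntu Server install (assumes the default `ubuntu-vg`/`ubuntu-lv` volume names and an ext4 filesystem; check yours with `sudo lvdisplay` first):

```bash
# Extend the logical volume to use all remaining free space.
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
# Grow the ext4 filesystem to fill the extended volume.
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
# Verify the new size.
df -h
```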
Congratulations! You're now using all available disk space on your staking machine 🥳
If you have multiple machines, make sure to stagger the `Unattended-Upgrade::Automatic-Reboot-Time` so they don't all restart at exactly the same time!
Automatic security updates are helpful when you are not able to access your machine but want critical security updates to be applied automatically.
Update system packages again.
Restart the machine.
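For example:

```bash
sudo apt update && sudo apt upgrade
sudo reboot
```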
Congratulations! You've successfully enabled `unattended-upgrades` on your staking machine 🥳
The Lighthouse validator client includes a mechanism to protect its validators against accidental slashing, known as the slashing protection database. This database records every block and attestation signed by validators, and the validator client uses this information to avoid signing any slashable messages.
For more info about Execution clients and Validator clients start here: Validator clients explained 👀
The Ethereum community maintains multiple open-source execution clients (previously known as 'Eth1 clients', or just 'Ethereum clients'), developed by different teams using different programming languages. This makes the network stronger and more diverse. The ideal goal is to achieve diversity without any client dominating to reduce any single points of failure.
Geth
Besu
Nethermind
Erigon
Go Ethereum (Geth for short) is one of the original implementations of the Ethereum protocol. Currently, it is the most widespread client with the biggest user base and variety of tooling for users and developers. It is written in Go, fully open source and licensed under the GNU LGPL v3.
Hyperledger Besu is an enterprise-grade Ethereum client for public and permissioned networks. It runs all of the Ethereum Mainnet features, from tracing to GraphQL, has extensive monitoring and is supported by ConsenSys, both in open community channels and through commercial SLAs for enterprises. It is written in Java and is Apache 2.0 licensed.
Besu's extensive documentation will guide you through all details on its features and setups.
Nethermind is an Ethereum implementation created with the C# .NET tech stack, licensed with LGPL-3.0, running on all major platforms including ARM. It offers great performance with:
An optimized virtual machine.
State access.
Networking and rich features like Prometheus/Grafana dashboards, seq enterprise logging support, JSON RPC tracing, and analytics plugins.
Erigon, formerly known as Turbo-Geth, started as a fork of Go Ethereum oriented toward speed and disk-space efficiency. Erigon is a completely re-architected implementation of Ethereum, currently written in Go but with implementations in other languages under development. Erigon's goal is to provide a faster, more modular, and more optimized implementation of Ethereum. It can perform a full archive node sync using around 2TB of disk space, in under 3 days.
For more info about Execution clients and Validator clients start here: Validator clients explained 👀
Consensus clients run the Beacon Chain and provide a proof-of-stake (PoS) consensus mechanism to execution clients.
Consensus clients all follow the same specification ↗. If a client doesn't follow this spec it won't be able to come to consensus with the rest of the network.
Lighthouse
Lodestar
Nimbus
Prysm
Teku
Lighthouse is a consensus client implementation written in Rust under the Apache-2.0 license. It is maintained by Sigma Prime and has been stable and production-ready since Beacon Chain genesis. It is relied upon by various enterprises, staking pools and individuals. It aims to be secure, performant and interoperable in a wide range of environments, from desktop PCs to sophisticated automated deployments.
Lodestar is a production-ready consensus client implementation written in Typescript under the LGPL-3.0 license. It is maintained by ChainSafe Systems and is the newest of the consensus clients for solo-stakers, developers and researchers. Lodestar consists of a beacon node and validator client powered by JavaScript implementations of Ethereum protocols. Lodestar aims to improve Ethereum usability with light clients, expand accessibility to a larger group of developers and further contribute to ecosystem diversity.
Nimbus is a consensus client implementation written in Nim under the Apache-2.0 license. It is a production-ready client in use by solo-stakers and staking pools. Nimbus is designed for resource efficiency, making it easy to run on resource-restricted devices and enterprise infrastructure with equal ease, without compromising stability or reward performance. A lighter resource footprint means the client has a greater margin of safety when the network is under stress.
Prysm is a full-featured, open source consensus client written in Go under the GPL-3.0 license. It features an optional webapp UI and prioritizes user experience, documentation, and configurability for both stake-at-home and institutional users.
Teku is one of the original Beacon Chain genesis clients. Alongside the usual goals (security, robustness, stability, usability, performance), Teku specifically aims to comply fully with all the various consensus client standards.
Teku offers very flexible deployment options. The beacon node and validator client can be run together as a single process, which is extremely convenient for solo stakers, or nodes can be run separately for sophisticated staking operations. In addition, Teku is fully interoperable with Web3Signer↗ for signing key security and slashing protection.
Teku is written in Java and is Apache 2.0 licensed. It is developed by the Protocols team at ConsenSys that is also responsible for Besu and Web3Signer.
Step 1: Download the deposit command line interface app↗ for your operating system.
Please make sure that you are downloading from the official Ethereum Foundation GitHub account by verifying the url: https://github.com/ethereum/staking-deposit-cli/releases/
Step 2: Generate deposit keys using the Ethereum Foundation deposit tool
For security, we recommend you disconnect from the internet to complete this step.
Decompress the file you just downloaded.
Use the terminal to move into the directory that contains the deposit executable.
Run the following command to launch the app.
./deposit new-mnemonic --chain mainnet
Please make sure you have set `--chain mainnet` for Mainnet, otherwise the deposit will be invalid.
Now follow the instructions presented to you in the terminal window to generate your keys.
TODO
TODO
These tools can be used as an alternative to the Staking Deposit CLI ↗ to help with key generation.
Linux
macOS
Windows
GUI
Linux
Windows
CLI
Browser
GUI
✅ OPEN SOURCE
✅ AUDITED
❌ BUG BOUNTY
✅ BATTLE TESTED
✅ PERMISSIONLESS
✅ SELF CUSTODY
✅ OPEN SOURCE
✅ AUDITED
❌ BUG BOUNTY
✅ BATTLE TESTED
✅ PERMISSIONLESS
✅ SELF CUSTODY
✅ OPEN SOURCE
❌ AUDITED
❌ BUG BOUNTY
✅ BATTLE TESTED
✅ PERMISSIONLESS
✅ SELF CUSTODY
Connecting remotely to a staking machine, whether it's hosted by a cloud provider (AWS, etc.) or running in your home is most often achieved using SSH (Secure Shell).
SSH is a command line tool that allows direct access to a remote machine. This tutorial will cover:
This tutorial won't cover the networking setup required to get a static IP, hostname and/or VPN as those are covered in other tutorials.
While SSH on its own is a great tool, there are some limitations that can be frustrating when connecting over a poor internet connection. For example, if the internet drops even for a second (if you're in a moving car or train) or you change WiFi networks, the SSH connection will be closed.
When you installed Linux on your staking machine the installation options should have asked if you would like to install SSH during the setup process.
To check if SSH is installed on your staking machine run the command:
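```bash
# Prints the installed OpenSSH version (assumes the standard OpenSSH tooling)
ssh -V
```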
If SSH is installed you should see a response showing the installed version:
If you get an error or don't see the version output then it's likely that the SSH server is not installed. When you want to install any new packages to your Linux system it's best practice to make sure that your current packages are up to date for security purposes:
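```bash
# On Ubuntu/Debian
sudo apt update && sudo apt upgrade
```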
Then install openssh-server:
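```bash
sudo apt install openssh-server
```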
If you are using UFW as your firewall and have restricted incoming and outgoing connections then you will need to add the SSH port to allow remote connections (replacing <SSH_PORT> with the configured SSH port - the default port is 22):
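```bash
sudo ufw allow <SSH_PORT>/tcp
```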
Once you have confirmed SSH is installed on your staking machine, you can connect from a different machine using the command:
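```bash
ssh <username>@<ip_address>
```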
For example: ssh eridian@186.204.70.208
This command attempts to connect with your user's username at the specific IP address (or Host Name) of your staking machine.
You may get a prompt saying something like "You haven't connected to this machine before, do you want to trust it?" to which you should submit Yes as the response.
At this point, if everything is configured correctly, you should be prompted to input your password. This is the password for your user account on the staking machine.
If you are using a different port for your SSH connection then you can specify the port when connecting using:
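```bash
ssh -p <SSH_PORT> <username>@<ip_address>
```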
Benefits of using Mosh:
If you have an intermittent internet connection (e.g. a mobile connection or you're in a moving vehicle) a standard SSH connection will fail whenever the connection is lost. The connection must then be manually re-established, which can be annoying if it happens often and you are using additional security steps such as 2FA. Mosh allows connections to be dropped and automatically re-established when the internet signal reconnects.
Mosh uses a predictive interface for typing commands into the console. Standard SSH only shows the typed command once it has returned from the remote server. If you have a slow connection, this can be perceived as a laggy/slow interface. Mosh displays the text as you type commands, giving a much nicer user experience.
Limitations of using Mosh:
A limitation you will notice when using Mosh is that you can't scroll back up the terminal history. This is due to the way Mosh only renders the current screen, which has some performance advantages but can be frustrating if you miss something and can't scroll back to see it.
The Mosh package should be installed on both sides of the connection. That means both your staking machine and the machine you want to connect from (e.g. your everyday computer) will need Mosh↗ installed.
Install Mosh on your staking machine:
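```bash
sudo apt install mosh
```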
If you are using UFW, allow the Mosh ports through the firewall:
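```bash
# Mosh uses UDP ports 60000-61000 by default
sudo ufw allow 60000:61000/udp
```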
Mosh uses the same connection method as SSH, so once it is installed and the ports have been allowed it should be as simple as connecting with the command:
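```bash
mosh <username>@<ip_address>
```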
If you have changed the default SSH port you can specify the port used by Mosh using the command:
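```bash
# Tell Mosh to start its SSH connection on the custom port
mosh --ssh="ssh -p <SSH_PORT>" <username>@<ip_address>
```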
The Blink Shell↗ mobile app for iOS allows you to connect to your staking machine using both SSH and Mosh.
On your device (iPhone or iPad) open the Blink Shell app and type:
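Blink's built-in config command opens the settings screen (if your version differs, check the app's documentation):

```
config
```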
Keys & Certificates can be added if you are using an SSH key for your connections:
Hosts can be configured so you have an alias command (e.g. ssh validator) that you can use with preconfigured settings.
iCloud sync can be turned off if you don't want your SSH keys and passwords to be stored in iCloud.
Auto Lock is a useful feature to add additional security to your portable device.
And that's it! You can now connect to your home staking validator remotely from your iOS device 🗺️
For additional security, SSH keys can be used alongside or instead of your username/password authentication when connecting to your staking machine.
Follow the instructions here to generate SSH keys: https://linuxconfig.org/how-to-generate-and-manage-ssh-keys-on-linux
The default port configured is 22 for SSH connections. If you want to change the default port for any reason (e.g. due to port forwarding on your router or the port being used by another service) follow these steps:
Open the /etc/ssh/sshd_config file and locate the line:
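```
#Port 22
```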
Uncomment that line (by removing the leading # character) and change the value to an appropriate port number (for example, 22000):
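```
Port 22000
```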
Save the change.
Restart the SSH server:
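```bash
# The service may be named "sshd" instead of "ssh" on some distributions
sudo systemctl restart ssh
```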
To confirm the port has been updated correctly run:
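```bash
# Lists listening sockets; assumes the ss utility (iproute2) is installed
sudo ss -tlpn | grep ssh
```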
The result should show the new port number:
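The output should look roughly like this (illustrative values):

```
LISTEN  0  128  0.0.0.0:22000  0.0.0.0:*  users:(("sshd",pid=1234,fd=3))
```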
To install Linux on a physical machine, here are the steps to follow:
Download a Linux distribution image onto your everyday computer.
Flash a USB with the distribution image.
Boot your staking machine from the USB.
Select the right options for your installation.
There are lots of Linux distributions available. If you are an experienced Linux user then you will already know which distribution you want to use based on your skills and ability. However, if you are a new Linux user or just want to keep things simple, then the recommended Linux distribution is Ubuntu Linux.
There are two types of distribution that you can choose:
Desktop: https://ubuntu.com/download/desktop↗
Desktop comes with a graphical interface that is similar to Windows or macOS desktops. For staking machines, the desktop version isn't ideal as it comes with additional overhead that isn't required, but it can be easier for new users who feel more comfortable with a graphical interface.
Server: https://ubuntu.com/download/server↗
Server is a command line only interface. This can feel intimidating at first, but when following solo staking guides you will simply be copying and pasting commands, so it's not too difficult. You can remotely connect to your staking machine securely using protocols like SSH, but the easiest way to get started is to directly connect a keyboard and monitor. SSH can always be used later.
There are lots of tools available to flash USB drives with disk images. One that is open source and works across multiple platforms is https://www.balena.io/etcher↗. Simply select the Linux distribution image you downloaded previously, select the USB, then Flash!
This step should be as easy as inserting the USB that you flashed with the disk image in the previous step and then turning on your staking machine. In some cases, you may need to force the machine to boot from the USB rather than any currently installed OS. This can be done by editing the BIOS boot order and allowing booting from USB. Google is the best place to find information about booting from a USB if you do encounter any problems at this stage.
Once you have booted from a USB you will be presented with an installation menu. Use the arrow keys (up and down) to move the selection and use the return key (enter) to select the option.
After selecting Try or Install Ubuntu Server you will see a screen like this. You don't need to do anything at this point, the system is just starting up.
Once the system has started you will be presented with the installation wizard. The first step is to select the language.
Select the keyboard layout.
Select the installation type you want to use. For this, select Ubuntu Server.
Select a network. If your staking machine uses an ethernet cable for a direct network connection (recommended) then this option should already be populated. If using WiFi, select your network and enter its details.
Select a proxy if required. If you are using a standard home network and don't know what this option means, don't worry, just leave it blank.
Select where you want to download the updates for the operating system from. This location can be selected based on your geographic location so that the downloads are faster. But it's easier to just select the default option that's pre-populated.
Select the storage configuration. As your staking machine is most likely a dedicated machine, selecting Use an entire disk is the best option. Don't worry about encryption as you want your machine to be able to automatically restart, and encrypted disks make that process much more complex.
You'll be shown a summary screen of the storage configuration. Linux by default may not use the entire available disk space. In the screenshot above the local storage size is shown as 1.171 Terabytes, but the confirmation screen below only shows 100GB being used.
To use all the available disk space, use the arrow keys to highlight the ubuntu-lv row and hit the return/enter key to select Edit. Enter the Max value shown next to Size in the input field, then save the update.
After confirming the storage settings you will be presented with an additional confirmation screen to make sure that you're ready to completely format and wipe any existing data on the storage disk. That's what we want, so select Continue.
Setting up the user profile is important as it's how you will access the machine, both directly and remotely. Select a name for your user and the name for your server that will appear on your local network. Your username is used to log in to the machine and the password protects your user account.
At this point it's a good idea to set up the SSH server so you don't have to install it manually later. If you never intend to SSH into your staking machine and only connect to it directly with a keyboard and monitor then you don't need this option. For information on SSH connections see the tutorial Connect with SSH.
This screen might be displayed asking you to select or deselect popular snaps. Don't worry about this page, it might even be empty for you. Simply move on to the next screen.
At this point, the installation will begin using all the configuration settings you've provided. This can take a few minutes (10 or more) depending on your hardware and configuration. You don't need to do anything, just wait until it completes. At the end of the installation process, you will need to reboot your machine. Select Reboot Now and it will ask you to remove the installation device (the USB you used during the installation).
Once the system reboots you'll see startup information similar to the output below. Wait until that completes and you'll be shown a login screen.
This is the login screen for your validator machine. The name of this machine is eridian-validator.
Enter the username you created during the installation. You will then be prompted for your password. As you type your password nothing will be shown on the command line (so it will look like it's not working!) but don't worry, this is for security and the typing is working.
And... you're in!
Congratulations! You've successfully installed Ubuntu Linux server on your staking machine 🥳
At this point, you are now "on the command line" and can start to work through many of the solo staking guides.
A common reason for Geth to fail can be an unexpected shutdown of a validator machine. Geth uses RAM for temporary memory, and during a graceful shutdown some important information is written to disk. However, during an unexpected shutdown there isn't time to write to disk (e.g. due to a sudden loss of power), so important data is lost. This loss of data leads to a corruption of the chaindata folder, requiring a resync.
Standard location of the chaindata folder:
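A common default, assuming Geth's default --datadir (your guide or service file may use a different path):

```bash
~/.ethereum/geth/chaindata
```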
Standard location of the ancient folder:
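Under the same default datadir assumption:

```bash
~/.ethereum/geth/chaindata/ancient
```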
Good news! The required resync can be made much faster than a full resync simply by keeping the ancient folder. The ancient folder contains files that are not corrupted during an unexpected shutdown.
Stop Geth.
Move the ancient folder.
Delete the chaindata directory and recreate it.
Move the ancient folder back to the now empty chaindata directory.
Change the ownership of the chaindata directory to the Geth user.
Start Geth.
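As a rough sketch of the whole sequence above, assuming a systemd service named geth, Geth's default datadir, and a service user named geth (all hypothetical; adjust to your setup):

```bash
sudo systemctl stop geth
mv ~/.ethereum/geth/chaindata/ancient ~/.ethereum/geth/
rm -rf ~/.ethereum/geth/chaindata
mkdir ~/.ethereum/geth/chaindata
mv ~/.ethereum/geth/ancient ~/.ethereum/geth/chaindata/
sudo chown -R geth:geth ~/.ethereum/geth/chaindata
sudo systemctl start geth
```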
Congratulations! You've successfully started a Geth resync 🥳
If the ancient folder does not exist, that's not a problem. It just means you will need to resync Geth from scratch, which will take a bit longer.
Stop Geth.
Delete the chaindata directory and recreate it.
Confirm the ownership and permissions for the chaindata directory are set to the Geth user.
Start Geth.
Congratulations! You've successfully started a Geth resync 🥳
Two-factor authentication involves requiring a second security measure in addition to your password or SSH key, usually on a separate device from your primary one.
For example, you may be familiar with logging into a website such as a crypto exchange using both a password and a Google Authenticator code (or an SMS code). This two-step process is an example of two-factor authentication.
SSH can also be configured to require a Google Authenticator code, which means that an attacker that somehow compromised your SSH key and its passphrase would still need the device with the authenticator app on it (presumably your phone). This adds an extra layer of security to your system.
It is strongly recommended that you open a second terminal with an SSH connection to your node, just in case you misconfigure something. This way, you will have a backup that is still connected in case you lock yourself out, so you can easily undo your mistakes.
If you do manage to lock yourself out, you will need to physically access your node via its local monitor and keyboard to log in and repair the misconfiguration.
Start by installing Google Authenticator (or a compatible equivalent) on your phone if you don't already have it. For Android users, consider Aegis, which is an open-source alternative that supports password locking and convenient backups.
Next, install the Google Authenticator module on your node with this command:
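```bash
sudo apt install libpam-google-authenticator
```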
Now tell PAM (pluggable authentication modules) to use this module. First, open the config file:
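```bash
sudo nano /etc/pam.d/sshd
```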
Find @include common-auth (it should be at the top) and comment it out by adding a # in front of it, so it looks like this:
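```
#@include common-auth
```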
Next, add these lines to the top of the file:
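As commonly used with the libpam-google-authenticator module:

```
# Require a Google Authenticator code on SSH login
auth required pam_google_authenticator.so
```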
Then save and exit the file with Ctrl+O, Enter, and Ctrl+X.
Now that PAM knows to use Google Authenticator, the next step is to tell sshd to use PAM. Open the sshd config file:
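```bash
sudo nano /etc/ssh/sshd_config
```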
Now change the line KbdInteractiveAuthentication no to KbdInteractiveAuthentication yes so it looks like this:
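```
KbdInteractiveAuthentication yes
```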
(Older versions of SSH call this option ChallengeResponseAuthentication instead of KbdInteractiveAuthentication.)
Add the following line to the bottom of the file, which indicates to sshd that it needs both an SSH key and the Google Authenticator code:
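```
AuthenticationMethods publickey,keyboard-interactive
```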
Every option added to AuthenticationMethods will be required when you log in, so you can choose e.g. 2FA and password, or a combination of all three methods:
publickey (SSH key)
password (password)
keyboard-interactive (2FA verification code)
Then save and exit the file with Ctrl+O, Enter, and Ctrl+X.
Now that sshd is set up, we need to create our 2FA codes. In your terminal, run:
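```bash
google-authenticator
```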
First, it will ask you about time-based tokens. Say y to this question:
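The prompt should look something like this:

```
Do you want authentication tokens to be time-based (y/n) y
```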
You will now see a big QR code on your screen; scan it with your Google Authenticator app to add it. You will also see your secret and a few backup codes looking like this:
Record the emergency scratch codes somewhere safe in case you need to log into the machine but don't have your 2FA app handy. Without the app, you will no longer be able to SSH into the machine!
Finally, it will ask you for some more parameters; the recommended defaults are as follows:
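Paraphrased (the exact wording differs between versions), the usual recommendations are:

```
Update your ~/.google_authenticator file?                 y
Disallow multiple uses of the same authentication token?  y
Increase the time window for valid codes?                 n
Enable rate-limiting?                                     y
```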
Once you're done, restart sshd so it grabs the new settings:
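```bash
# On some Ubuntu versions the service is named "ssh" instead of "sshd"
sudo systemctl restart sshd
```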
When you try to SSH into your server with your SSH keys, you should now also be asked for a 2FA verification code, but not for a password.
This is a reminder that you should never have your validator keys configured across multiple machines at the same time, because if the same validator key is active twice across the network it will get slashed. They should only ever be configured to run in one place at one time!
With that out of the way, let's get into it. There are a few things that you can do to minimize your potential downtime.
There will always be situations where you have downtime; it is inevitable when running a validator, so please don't chase a perfect attestation record. There are, however, some things you can do to minimize downtime.
The ideas below may or may not be feasible depending on how many validators you are running. Please weigh up the pros and cons yourself and decide what is appropriate in your circumstances.
This will ensure abrupt shutdowns don't occur, potentially saving your hardware from breaking, or your DB/OS from corrupting and needing a resync/reinstall. More information can be found about this on the
Either on the same machine but on a different SSD, or on an entirely separate machine, running different consensus/execution client software. A separate machine is more important if you are running a sizeable number of validators; otherwise, it may be overkill.
It is perfectly safe to run multiple nodes for redundancy, just not multiple validators.
The benefit of doing this is that you won't have any downtime should one of the client pairs go offline or become corrupted, or should the SSD where it sits break and require manual maintenance to bring back online. You'll be able to fix the broken node in your own time while the validator happily uses the other configured beacon node and continues performing its duties.
You can even take it a step further and have your validator client on a separate SSD (for example, with your OS) and have it point to your beacon nodes, both of which would also be on separate SSDs, leaving fewer points of failure all around.
It can be useful to have a spare SSD ready to be swapped out in case of hardware failure. You will be able to immediately start the process to recover your nodes/validators and when that is done you can then buy a replacement drive at your own leisure.
If you travel around a lot, you could even have it plugged into your machine on standby ready to go, meaning your node could be recovered remotely, unless, of course, the drive that fails is your OS drive.
There will be times when you are offline and missing attestations; do not stress or panic when this happens and focus on getting yourself back online. If, for example, you are offline for 4 hours, it will take about 4 hours of being online to get back to where you started in terms of validator balance.
For more information about downtime see our helper posts:
Exiting a validator requires a signed message to be sent from your validator client. The exit process is different for each client; these links are for each specific client:
This page will show you how to configure your execution client to serve HTTP RPC requests.
This will allow you to interact directly with the Ethereum network using your own node. No need to use a 3rd party service like Infura anymore!
You will need to add the following flags to your execution client.
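As an illustration for Geth (a sketch only; Besu and Erigon use the differently named flags noted below, and the "*" wildcards must be paired with firewall rules as described):

```bash
--http --http.addr 0.0.0.0 --http.port 8545 --http.api eth,net,web3 --http.corsdomain "*" --http.vhosts "*"
```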
Please note, configuring your --http.corsdomain as per the above example will allow anyone to use your node as an RPC endpoint. Please ensure this is also paired with the appropriate firewall rule(s) to prevent this from happening.
This will indicate your Geth node is ready for RPC connections
Please note, configuring your --rpc-http-cors-origins as per the above example will allow anyone to use your node as an RPC endpoint. Please ensure this is also paired with the appropriate firewall rule(s) to prevent this from happening.
This will indicate your Besu node is ready for RPC connections
Please note, configuring your --http.vhosts as per the above example will allow anyone to use your node as an RPC endpoint. Please ensure this is also paired with the appropriate firewall rule(s) to prevent this from happening.
This will indicate your Erigon node is ready for RPC connections
The below example will show you how to use your RPC endpoint with Metamask as it is one of the most commonly used wallets.
The specific details will vary depending on your local setup. I am running Geth on the same machine as my Metamask installation, so I am using 127.0.0.1 as the IP address.
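With default Geth settings, the custom network fields would look something like this (8545 is Geth's default HTTP RPC port, Chain ID 1 is Ethereum mainnet, and the network name is up to you):

```
Network name:  My Ethereum Node
New RPC URL:   http://127.0.0.1:8545
Chain ID:      1
Currency:      ETH
```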
If your RPC is unavailable or otherwise inaccessible, it may show an error when you enter the Chain ID and won't allow you to save the network.
Success! Now you can use Metamask as you normally would with the added benefit of accessing the Ethereum network through your own node 🥳
When migrating validator keys, take your time, do not rush!
There are many scenarios where you need to move the validator keys from one machine to another; here are some examples:
⬆️ Upgrading hardware.
🔧 Recovering from a hardware failure.
☁️ Migrating from a cloud hosting service to a home staking machine.
In any of these cases, the procedure should be the same. The most important thing to remember is that the penalty for being offline is very low, so do not optimize for minimum downtime. A slashing event caused by incorrect key migration will incur a penalty equivalent to MONTHS of simply being offline.
🚨 Do not rush 🚨
Source: Where the keys are coming from. Target: Where the keys are being migrated to.
Stop the validator client on the source machine.
Stop the validator client on the target machine.
Wait a MINIMUM of 2 finalized epochs before continuing.
Copy the validator keys to the target machine either through intermediate storage (e.g. a USB) or directly from source to target machine (e.g. scp, rsync, etc.; see the example after these steps). If the validator keys have been lost due to a hardware failure, they can be regenerated from your mnemonic seed phrase.
Delete the keys from the source machine. This ensures that even if the source machine restarts unexpectedly, the validator signing keys won't exist so cannot be used by the validator client.
If available, export any slashing protection data from the source machine and import it on the target machine.
Turn off the source machine and be 100% sure it cannot be restarted.
Start the validator client on the target machine.
Import the validator keys.
Check the validator client logs to confirm everything is working correctly.
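For the copy step, a direct transfer might look like this (hypothetical paths and addresses):

```bash
scp -r ~/validator_keys <username>@<target_ip>:~/validator_keys
```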
Congratulations! You've successfully migrated your validator keys between two machines 🥳
Now you will need a wallet that allows you to add custom RPC endpoints. You can find a list of wallets with this feature.
It's crucial to note that the recommendations and strategies presented in the following pages go above and beyond what a small-scale or solo staker would typically require. If you're staking from home with just a few validators, don't be overwhelmed; you don't need an entire incident response team or complex resource scaling strategies!
Welcome to the comprehensive guide written for scaled node operators in the Ethereum staking ecosystem. If you're an operator running a significant number of validators (approximately 100 or more), this guide is tailored for you.
The content in this guide is valuable to anyone involved in Ethereum staking but is particularly useful for those who have moved beyond solo or small-scale operations. Larger operators have unique challenges and opportunities that require specialized knowledge and strategies. From security protocols to resource scaling, the practices discussed in this section aim to offer critical insights that can materially improve the efficiency and security of large-scale staking operations.
While the added complexity might seem daunting, the benefits of optimized resource management, heightened security, and streamlined update processes can translate into significant advantages for large-scale operators. The increased cost and effort of implementing these practices are often justified by the higher stakes involved, both literally and figuratively.
This guide covers a range of topics, from incident response and security protocols to updates and resource scaling. Each section provides a deep dive into the best practices, actionable strategies, and things to look out for when operating at scale.
If you're responsible for a large-scale Ethereum staking operation, the following pages will equip you with the knowledge you need to operate efficiently, securely, and profitably. So let's dive in and explore what it means to be a scaled node operator in the evolving world of Ethereum staking.
If you are already running an Execution Client (EC) e.g. Geth and a Beacon Node (BN) e.g. Lighthouse you can connect your Obol DVT node to them. This allows you to reuse existing hardware and continue to run your solo staking validator alongside Obol DVT validators.
On your existing Beacon Node, ensure these flags are added so the Charon client can communicate directly.
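For example, if your Beacon Node is Lighthouse, its REST API needs to be enabled and reachable from other machines/containers (a sketch; check your client's documentation for the equivalent flags):

```bash
--http --http-address 0.0.0.0 --http-port 5052
```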
There are three steps needed to change the configuration:
Copy the sample .env file.
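Assuming the repository ships a sample file named .env.sample (check the repo for the exact filename):

```bash
cp .env.sample .env
```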
Uncomment (remove the #) the line CHARON_BEACON_NODE_ENDPOINTS and add your existing Beacon Node IP address, for example "http://192.168.1.8:5052". As the Charon client is running inside a Docker container you can't use localhost; even though it might be running on the same physical machine, it requires the IP address of the host machine.
Any uncommented section will automatically override the same section in docker-compose.yml when run with docker-compose up. This allows you to edit the variables used by Docker without changing docker-compose.yml, which could be modified in future updates.
Edit the newly copied file docker-compose.override.yml and uncomment (remove the #) the following lines:
```yaml
services:
  geth:
    profiles: [disable]
  lighthouse:
    profiles: [disable]
```
You are now ready to start the Obol tutorial for creating an ENR and getting your new DVT validator set up!
To serve as a validator, both the CL and EL need to be up to date with the network. There are a couple of techniques for checking this.
All this health check data should feed into a monitoring tool of your choice.
Most nodes expose health check APIs that return HTTP 5xx if the node is not syncing properly, and HTTP 200 OK if everything is okay. That is the simplest and most basic version of the health check.
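For consensus clients, the standard Beacon API exposes such an endpoint (assuming a local node on port 5052):

```bash
# 200 = synced, 206 = syncing/optimistic, 503 = not ready
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5052/eth/v1/node/health
```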
One more strategy is based on the timestamp of the latest block.
For the EL that is the response to eth_getBlockByNumber("latest", false), which has a field called timestamp. By knowing the timestamp of the block and the block production rate (1 block per 12 seconds), it is possible to see how "old" the node's current block is.
Since block proposals are sometimes missed, it doesn't make sense to keep this threshold too tight, but if the block is more than 5 minutes old, it makes sense to mark the node as "unhealthy" and notify your monitoring system.
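A minimal version of this check, assuming a local EL node on port 8545 and jq installed:

```bash
# The timestamp field is returned as a hex string (seconds since the epoch)
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest", false],"id":1}' \
  http://localhost:8545 | jq -r '.result.timestamp'
```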
Finally, there is the case where blocks are being synced but you are on the wrong fork. This can be detected on the EL very easily by using the block hash returned from eth_getBlockByNumber("latest", false). You can compare these hashes across your nodes and against external sources of truth.
As a scaled Ethereum staking provider, you're responsible for a significant part of the network's overall health and security. This guide provides you with targeted information on what to prioritize when incidents happen, ensuring that you can react effectively.
Monitor performance and error metrics such as missed attestations, node latency, and validator performance to identify issues early. Implement alerts for any anomalies in these metrics.
Set predefined thresholds for raising an alarm. For example, if more than 5% of validators are underperforming or if you observe an unusual surge in network requests, it should immediately trigger an alarm.
Incident response example from GatewayFM
At one point, one of our machines went offline at one of our new bare metal providers, and we could only ask for help using the website support ticket because we didn't have all the contacts yet. The communication was done via email and there weren't many updates from their side. After one hour our CEO called our account manager and we managed to create a Slack channel with their engineers, and things went quickly after that. In the end, they had to go to the data center physically and examine the machine. By the time the machine started again, our service had been offline for 4 hours. In retrospect, it could have been longer if we hadn't had the Slack channel.
Lesson learned: always ask for a direct support/communication channel (phone, Slack, etc.) when onboarding a new data center provider.
Initial Assessment: Determine the scope of the problem. Is it affecting one validator, multiple validators, or is it a network-wide issue?
Isolate the Issue: Segregate the affected validators to prevent the issue from spreading.
Consult Logs: Review system logs for any error messages or anomalies that could point to the root cause.
Communication: Notify your internal team. Transparency and quick communication are vital, especially if the issue impacts more than your operations.
Message Channels and Forums: While it's sensitive information, sharing what you suspect is an attack on public channels like Discord or Reddit can be valuable for corroborating with others.
Social Media: Use X or other platforms to alert the community; however, be very cautious and responsible with the language you use to prevent unnecessary panic.
Network Peers: If you're part of any coalitions or partnerships with other node operators, inform them so that they can also take precautionary measures.
Security Team: Alert your internal security team first for an initial assessment.
Ethereum Foundation Security: They have a responsible disclosure process for vulnerabilities.
GitHub: If the vulnerability is in an open-source tool, you may also open a confidential issue on the respective GitHub repository.
Private Communication Channels: For less immediate vulnerabilities, reach out to trusted peers in the industry via secure, private channels to verify the issue before going public.
What to look for first?
Is the node up and running? Is the validator client up and running? CPU/RAM/Disk space okay?
Read the logs. Are there enough peers? Is the number of validators found by the validator client as you expected?
Is your node in sync / is it syncing? If so, is it on the right fork? Take the eth/v1/beacon/headers/head API and check it against any public block explorer or in a community.
Is the network finalizing? The eth/v1/beacon/headers/finalized API should be moving every 6.4 minutes (one epoch).
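For example, against a local Beacon API on port 5052:

```bash
# The finalized slot should advance roughly once per epoch (6.4 minutes)
curl -s http://localhost:5052/eth/v1/beacon/headers/finalized | jq '.data.header.message.slot'
```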
Being a scaled node operator comes with the responsibility of ensuring the network's security and efficiency. Adequate preparation and knowing precisely what to focus on when issues arise will make your incident response effective and timely. Always remember, in times of incidents, swift action and clear communication are key.
Alerts should be carefully selected. It’s easy to set thresholds on every possible monitored metric and add an alarm to it... 🚨🚨🚨
But that leads to fatigue, distractions, and eventually ignoring alerts, see this article for more details.
Alerts should never be ignored, even if you think you have an idea what caused them. If you receive an alert, follow up on it and determine if it was a false positive, overly cautious, or a valid alert. If it is not a valid alert, update the process to ensure that it does not happen again in the future.
Multiple layers of alerting can help to reduce single points of failure and provide confidence in the metrics when something does go wrong. In an Ethereum validator example, alerting that says a validator machine is not contactable would be useful, but not enough on its own. Couple this with monitoring of the Beacon Chain to get alerts when a validator is missing attestations and you'll be much more confident that something is wrong.
For good tips on alerting in general, see “My Philosophy on Alerting”.
Relevance and Specificity: A good alert is highly relevant to the system's overall health and performance. It should target specific conditions that are indicative of real issues, avoiding general or vague triggers. This specificity helps in the quick identification and resolution of problems.
Actionable Information: The alert should provide enough information for immediate action. It needs to be clear about what the problem is, where the problem is, and ideally, suggest potential solutions or next steps. This reduces the time spent in diagnosing the issue.
Prioritization: Alerts should be categorized based on severity and impact. Critical alerts that require immediate attention should be distinguishable from lower-priority ones. This prioritization helps in managing and responding to alerts effectively.
Minimization of False Positives: A good alert system minimizes false positives. Frequent false alarms can lead to alert fatigue, causing important alerts to be overlooked. Regular review and adjustment of alert thresholds are necessary to maintain their accuracy and effectiveness.
Contextual Information: Providing context, such as historical data or related events, can be invaluable in understanding the significance of an alert. This helps in assessing the severity and potential impact of the issue.
Integration with Response Systems: Effective alerts are integrated with incident response systems or protocols. This integration ensures that alerts trigger appropriate responses, whether it’s notifying the right team or initiating automated remediation processes.
Continuous Improvement: A good alert system is not static. It evolves based on feedback and changes in the system it monitors. Regular reviews and updates to the alerting criteria and thresholds are essential for maintaining its effectiveness.
Escalation Pathways: There should be clear escalation pathways for alerts that are not addressed within a certain timeframe. This ensures that critical issues do not remain unresolved due to oversight or unavailability of initial responders.
By adhering to these principles, an alerting system can be both effective and efficient, providing timely and useful information to maintain the health and performance of the system it monitors.
There is a practice in every cloud service called "being on-call". It means that at any moment in time there is a person responsible for reacting to alerts, regardless of when they happen.
An example of the on-call policy can be found in this GitLab On-Call Handbook.
Not all alerts necessitate an on-call response. While on-call duties are a critical component in operations, especially in cloud services, they involve individuals being ready to respond to alerts at any hour, including nights and weekends. This responsibility can be demanding and exhausting, hence the importance of regularly rotating staff in these positions.
The necessity for an alert to trigger an on-call response depends on several factors:
Severity of the Issue: Critical issues that can cause significant downtime or data loss should trigger on-call alerts. These require immediate attention to prevent or mitigate major impacts.
Urgency of Response Needed: If the issue can wait until regular working hours without causing significant harm, it may not need to be an on-call alert.
Frequency of the Alert: If an alert is triggered frequently but doesn't always require immediate action, it may be better handled during normal hours to avoid unnecessary disruptions.
Availability of Automated Responses: Some alerts can be resolved through automated processes, reducing the need for immediate human intervention.
In summary, while on-call alerts are crucial for addressing critical issues promptly, not every alert warrants an immediate on-call response. The decision should be based on the severity, urgency, frequency, and the possibility of automation in handling these alerts.
Running Ethereum staking services efficiently requires a robust and well-thought-out alerting system. This system should ensure maximum uptime and quick response to any issues that might affect staking operations. Here are key alerts necessary for managing Ethereum staking services:
Validator Node Downtime: Alerts for any downtime or unresponsiveness in validator nodes are crucial. Since continuous validator duties are essential for staking, any downtime can lead to missed rewards and penalties.
Missed Attestations: Monitoring and alerting for missed attestations are important. Validators need to constantly attest to the state of the blockchain, and missed attestations can indicate issues with network connectivity, hardware, or software.
Hardware Health Monitoring: Alerts related to the physical health of the servers, such as CPU temperature, disk space, and memory usage, help in preempting hardware failures that could affect staking operations.
Network Connectivity Issues: Since Ethereum staking relies heavily on network performance, alerts should be set up for any network connectivity issues or significant changes in latency.
Software Updates and Forks: Alerts for pending updates or upcoming forks in the Ethereum network are essential to ensure that the staking operation remains compatible and secure.
Security Breaches or Anomalies: Implementing alerts for any security breaches or unusual activities is crucial for protecting the staking infrastructure from malicious attacks. This could include alerting for access to staking key infrastructure.
Performance Metrics: Alerts based on performance metrics like changes in validator success rate, or participation rate, can give early warnings about potential issues in the staking process.
Regulatory Compliance: If applicable, alerts concerning regulatory compliance are important to ensure that the staking operations adhere to evolving regulations in different jurisdictions.
Smart Contract Events: For services using smart contracts, alerts for specific contract events like withdrawals, deposits, or contract updates are necessary.
Delays in Attestations (Node Suboptimal Performance): Alerts for delays in attestations are crucial. Attestation delays can be a symptom of suboptimal performance of validator nodes. These delays might be due to various factors such as software issues, overloaded systems, or network congestion. Setting up alerts for these delays helps in identifying and rectifying performance issues before they lead to missed attestations and potential penalties.
Long Block Processing Times: Monitoring block processing times is important, as prolonged processing times can indicate deeper issues within the staking infrastructure. Alerts for unusually long block processing times can signal problems such as inadequate hardware resources or network bottlenecks. Early detection and resolution of these issues are vital to maintaining the efficiency and reliability of the staking service.
By monitoring these aspects and setting appropriate alerts, Ethereum staking services can operate reliably, maintain security, and optimize their staking rewards while minimizing risks and penalties.
Security is non-negotiable when you're running Ethereum validators at scale. This guide focuses on best practices specifically tailored for scaled Ethereum staking providers. The objective is to offer a comprehensive security framework that goes beyond typical measures used by solo stakers.
Maintaining a precise inventory of all active servers is critical. Each unidentified machine is a potential security risk.
To mitigate these risks, employ automated tools that continuously monitor your network, identifying and flagging any new devices that appear without authorization. Such tools not only save time but also ensure that your security measures are not reliant on fallible manual processes. They can alert you to the presence of rogue or unauthorized hardware, enabling a swift response to potential security breaches. By keeping an up-to-date inventory, you can also ensure that all machines are running the necessary security software and updates, thereby reducing the surface area for potential attacks.
Multi-Factor Authentication (MFA) adds an additional layer of security, reducing the risk of unauthorized access.
MFA requires users to provide multiple credentials to authenticate themselves, which significantly decreases the likelihood of successful intrusions. Traditional username and password combinations can often be compromised, but with MFA, an attacker would need to obtain both the user's password and the second factor, which could be a temporary code from an app or a token from a hardware device. The use of hardware-based authentication methods, such as security keys, is particularly effective as they are resistant to phishing attacks and provide a high-security level. These devices can be required to be present during login attempts, ensuring that even if login credentials are compromised, access to the server is not possible without the physical key. Employing MFA on all administrative access points adds depth to your security strategy, acting as a critical line of defense for protecting your Ethereum staking operations from unauthorized access and potential security breaches.
SSH keys are more secure than passwords and should be used for secure shell access.
Employing SSH private keys instead of passwords for secure shell access is a cornerstone of secure system administration. SSH keys offer a more robust security posture because they are cryptographic keys that are almost impossible to decipher by brute force methods, unlike passwords which can often be guessed or cracked with enough time. To enhance the security offered by SSH keys, it's important to rotate them regularly, much like passwords, to mitigate the risks associated with key exposure over time. Additionally, these keys should be stored securely, using a key vault—a dedicated storage system designed to manage digital keys and secrets. Key vaults typically offer heightened security measures such as limited access, audit logs, and sometimes, automatic rotation of keys. By centralizing key storage, you can manage access more effectively and respond quickly if keys need to be revoked, ensuring that your secure shell environment remains a strong link in your Ethereum staking infrastructure's security chain.
A Bastion or Jump Host serves as an intermediary between your local machine and critical infrastructure.
This host should be highly secured, monitored, and only accessible via MFA.
"Defence in Depth" model
A Bastion or Jump Host is a hardened and closely monitored entry point to your network that acts as the single, auditable gateway through which all access to critical infrastructure must pass. The primary function of a Bastion Host is to provide a strong security layer that separates sensitive internal systems from external threats. By funneling all traffic through a Bastion Host, you significantly reduce the attack surface of your network by limiting the number of access points to the essential infrastructure. This also means that if an attack occurs and a Bastion Host becomes overwhelmed with requests and crashes, it does not impact the application server.
To ensure the Bastion Host provides effective security, it should have stringent access controls in place, including the use of Multi-Factor Authentication (MFA) to verify the identity of users before granting access to internal networks. It's also vital that the Bastion Host is monitored continuously for any suspicious activities or unauthorized access attempts. Logs should be maintained meticulously and reviewed regularly to detect potential security incidents promptly. The Bastion Host itself should be updated and patched without delay to protect against vulnerabilities. By centralizing access to critical infrastructure through a highly secured and monitored Bastion Host, organizations can create a controlled environment that adds a significant layer of security to their network management practices.
Start with a 'deny all' default firewall rule and open only those ports necessary for operations.
Implementing a robust firewall configuration is a fundamental aspect of network security, particularly for infrastructure managing Ethereum validators. The initial stance of any firewall should be to deny all incoming and outgoing traffic by default. This 'default deny' posture ensures that only traffic that has been explicitly permitted is allowed to pass through the firewall, minimizing potential points of entry for attackers.
From this secure baseline, carefully control and limit the exceptions by opening only the ports necessary for your operations. Essential services should be the only ones with open ports.
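With UFW, for example, that baseline might look like this (the ports shown are illustrative defaults; open only what your own clients actually need):

```bash
sudo ufw default deny incoming
sudo ufw allow 30303/tcp   # example: execution client P2P port
sudo ufw allow 9000/tcp    # example: consensus client P2P port
sudo ufw enable
```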
For remote administrative access, such as SSH (Secure Shell) for Unix-based systems or RDP (Remote Desktop Protocol) for Windows, it's a best practice to ensure that these services are not exposed directly to the internet. Instead, access should be funneled through a secure, MFA-enabled Virtual Private Network (VPN). By doing so, you create a secure tunnel for remote connections, and the use of MFA adds an additional layer of security, ensuring that even if VPN credentials are compromised, attackers still require the second authentication factor, dramatically reducing the likelihood of unauthorized access. This approach aligns with the principle of least privilege, ensuring that only authenticated and authorized users can access the network infrastructure necessary to perform their duties.
IP-based DDoS attacks can incapacitate your network.
IP-based Distributed Denial of Service (DDoS) attacks are designed to overwhelm your network with traffic, which can disrupt operations and potentially lead to significant downtime. These attacks can be volumetric, flooding your bandwidth; protocol-based, targeting network layer protocols; or application-layer attacks, disrupting specific functions of services.
To protect against such threats, it is imperative to implement DDoS protection services that can detect and mitigate these types of attacks. These services often employ a variety of tactics, such as traffic analysis, rate limiting, and filtering, to distinguish between legitimate user traffic and malicious data packets aiming to disrupt service. They can absorb and scrub the traffic, allowing only legitimate requests to pass through to your network.
In addition to employing external DDoS mitigation services, it's crucial to regularly monitor your network traffic for anomalies. Establishing a baseline of normal traffic patterns allows you to quickly identify unusual spikes or patterns that could indicate a DDoS attack in progress. By promptly detecting these irregularities, you can act quickly to investigate and address the issue before it escalates into a full-blown attack that could incapacitate your critical infrastructure. Regular monitoring, coupled with a robust DDoS mitigation service, forms a strong defense against the disruptive and potentially destructive forces of DDoS attacks.
Your Engine API is an attack surface that needs to be minimized.
The Engine API, which serves as a critical interface for interacting with Ethereum clients, represents a significant aspect of your network's attack surface. It's essential to restrict and control access to this API to prevent unauthorized manipulation or information disclosure that could be exploited by attackers.
Filtering access to the Engine API is a fundamental security practice. This typically involves configuring firewalls or other network security tools to allow connections only from specific, trusted IP addresses or networks. By doing so, you limit the potential for unauthorized access and reduce the risk of malicious entities exploiting the API.
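As a small example with UFW (hypothetical addresses; 8551 is the default Engine API port):

```bash
# Allow the Engine API only from a trusted consensus-layer host, deny everyone else
sudo ufw allow from 192.168.1.9 to any port 8551 proto tcp
sudo ufw deny 8551/tcp
```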
On top of stringent access controls, it's also crucial to implement robust authentication mechanisms. The use of API tokens is a common and effective method. These tokens should be unique to each user or service that interacts with the Engine API, ensuring that only authenticated requests are processed. The tokens act as an additional layer of security, as they can be revoked or rotated regularly to maintain tight control over access. Moreover, it's important to transmit these tokens securely, typically using HTTPS to encrypt the data in transit and prevent interception by attackers.
By carefully managing who can access the Engine API and requiring secure, authenticated requests, you can significantly reduce the vulnerabilities in your Ethereum staking operation and enhance your overall security posture.
Using VLANs can effectively segregate different types of traffic and reduce the attack surface.
Virtual Local Area Network (VLAN) segmentation is a powerful network design strategy that enhances security by segregating traffic into distinct, isolated segments based on function, application, or data sensitivity. By using VLANs, you can control traffic flow within your network, effectively minimizing the attack surface by ensuring that devices or services only have access to the network resources they require to function.
When grouping related servers and services into VLANs, the aim is to apply the principle of least privilege at the network level. For example, you might group all payment processing servers in one VLAN, while servers handling internal communications might reside in another. This separation ensures that if one segment of the network is compromised, the breach is contained and doesn't automatically spread to other parts of the network.
Furthermore, it's critical to limit inter-VLAN routing. While some communication between VLANs is necessary for certain operations, it should be strictly controlled. Access Control Lists (ACLs) or firewall rules should be implemented to allow only the necessary traffic to pass between VLANs, and all such traffic should be monitored and logged for security purposes. By restricting and scrutinizing the traffic allowed to cross VLAN boundaries, you can further protect sensitive data and services from potential attacks originating from less secure areas of the network.
Hardening the Operating System can reduce the number of vulnerabilities.
Follow guidelines like NIST SP 800-123 for detailed steps.
Use hardening playbooks that automate many of these processes.
Operating System (OS) hardening is a critical process that involves configuring the OS to protect against a wide array of vulnerabilities. The goal is to eliminate as many security risks as possible by removing non-essential applications, closing unused network ports, and disabling unnecessary services. By hardening the OS, you reduce the attack surface that could be exploited by malicious actors.
Adhering to established hardening guidelines, such as those provided by the National Institute of Standards and Technology (NIST) can offer a comprehensive set of instructions for securing your servers. These guidelines encompass a variety of best practices, from setting up user permissions to applying the latest security patches.
Moreover, leveraging hardening playbooks, particularly those that can be automated with configuration management tools like Ansible, Puppet, or Chef, can significantly streamline the hardening process. These playbooks are designed to automate the enforcement of security policies and the application of configurations that align with best practice guidelines, ensuring a consistent and repeatable hardening process across all your systems.
Such playbooks not only save time and reduce the possibility of human error but also provide documentation of your security stance and facilitate rapid adjustments in response to new threats. Regularly updating these playbooks to reflect the evolving security landscape is a crucial aspect of maintaining hardened systems. By implementing these best practices in OS hardening, you can create a solid foundation for the overall security of your Ethereum validators and other critical infrastructure.
Endpoint Detection and Response (EDR), Security Information and Event Management (SIEM), and Network Detection and Response (NDR) are powerful tools for monitoring and responding to security events.
EDR is designed to provide real-time monitoring and response capabilities at the endpoint level, which includes workstations, servers, and mobile devices. It helps in detecting, investigating, and mitigating suspicious activities on these endpoints. EDR solutions are particularly valuable for identifying and responding to malware infections, ransomware attacks, and other threats that might bypass traditional antivirus solutions. They often use behavioral analysis and machine learning to identify anomalies that indicate a security incident.
SIEM systems are comprehensive solutions that aggregate and analyze data from various sources across your network, including logs from firewalls, routers, servers, and other network devices. They provide a holistic view of the security state of your organization's IT infrastructure. SIEM tools are adept at correlating events from different sources, identifying patterns indicative of a security incident, and generating alerts for further investigation. They also play a crucial role in compliance reporting by centralizing log data for audit purposes.
NDR focuses on the network level, monitoring network traffic to detect and respond to anomalies that could indicate a security threat. NDR tools analyze network traffic patterns to identify malicious activities such as DDoS attacks, network reconnaissance, and lateral movement within the network. By continuously monitoring network traffic, NDR systems can quickly identify and mitigate threats that could otherwise go unnoticed.
In practice:
EDR should be used for granular visibility and response capabilities on individual endpoints, which are often the target of attacks.
SIEM should be employed for overarching security event management, correlation, and reporting across the entire IT ecosystem.
NDR is ideal for gaining visibility into network traffic and behaviors, allowing for early detection and response to network-based threats.
Integrating these systems allows for a more comprehensive security posture, as each tool complements the others, providing layers of defense against a wide range of cyber threats.
Security at scale is an ongoing commitment that involves continual assessment and evolution. By implementing these practices, scaled Ethereum staking providers can not only secure their own operations but also contribute to the overall security of the Ethereum network.
Resource scaling is a pivotal aspect of managing large-scale Ethereum staking operations. Knowing when and how to scale your resources can mean the difference between seamless operation and a bottlenecked, inefficient network. This guide aims to help you make informed decisions on scaling your Ethereum staking setup.
Scaling costs vary based on several factors:
Infrastructure Providers: Costs depend on the chosen provider's pricing model, reliability, and performance.
Node Client Software Requirements: Different software may require varying levels of computational power, impacting the cost.
Beacon Node Redundancy Setup: The number of beacon nodes per validator client (VC) affects redundancy and cost. More nodes offer better fault tolerance but increase expenses.
Key Distribution: The method of distributing keys across VCs influences both security and cost.
Peer-to-Peer (P2P) Max Peer Number: Higher numbers can enhance network robustness but might lead to increased bandwidth usage and thus higher costs.
Subnet Subscription Settings: These settings can impact egress traffic, affecting the cost, especially if your provider charges based on bandwidth usage.
Scaling introduces various risks:
Concentration Risk: Assigning too many keys to a single VC or a group of beacon nodes increases the risk. Hardware failures, software bugs, or misconfigurations can lead to significant losses.
Network Security Risk: As you scale, the risk of attacks like DDoS increases. Appropriate security measures are essential to mitigate this.
Operational Risk: Large-scale operations require sophisticated management. Mistakes in scaling can lead to inefficiencies or operational failures.
Balancing performance and cost is challenging:
Diminishing Returns: The cost of performance improvements increases as you approach the peak of the performance curve. Beyond a certain point, the additional rewards may not justify further investment.
Network Efficiency: Efficient scaling can significantly improve transaction processing speed and network reliability.
Hardware and Software Optimization: Optimal performance often requires investment in high-quality hardware and continual software updates.
During non-finality periods in Ethereum staking, the blockchain may experience delays in finalizing blocks, leading to increased storage requirements as more data accumulates. Ensuring that your system can adapt quickly by extending the filesystem is crucial to avoid disruptions.
Best Practices
Maintain Reserve Storage Capacity: Allocate additional storage space beyond your current needs to accommodate potential data accumulation during non-finality periods. For example, if your current operations require 1 TB of storage, consider having an additional 20-30% reserve, i.e., an extra 200-300 GB, to handle unexpected increases in data.
Use Scalable File Systems: Opt for file systems that support rapid resizing. File systems like ZFS or Btrfs are designed for scalability and can be expanded quickly without significant downtime. This flexibility allows for immediate response to increased storage demands.
Scalable Cloud Storage Solutions: Cloud-based storage solutions like AWS EBS or Google Persistent Disk offer high scalability. These services allow you to increase storage capacity on the fly, often with minimal disruption to operations. For instance, AWS EBS allows you to modify the volume size, performance, or both while the volume is in use, enabling quick adjustments to storage needs.
Automated Monitoring and Scaling: Implement automated monitoring systems to track storage usage and trigger scaling actions when predefined thresholds are reached (a minimal sketch of this approach follows this list). This proactive approach ensures you're always prepared for unexpected increases in data storage requirements.
Regular Audits and Cleanups: Periodically review your storage utilization and perform cleanups of unnecessary data. This practice not only frees up space but also helps maintain an efficient and organized filesystem, reducing the risk of running out of storage unexpectedly.
Decentralized Storage Options: Consider leveraging decentralized storage solutions like IPFS or Filecoin as part of your storage strategy. These can offer redundant and cost-effective alternatives for storing large volumes of blockchain data.
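Building on the automated-monitoring item above, here is a minimal sketch of a threshold check that extends a Btrfs filesystem online. The mount point, threshold, and growth step are assumptions; adapt and test them carefully before automating anything that modifies filesystems.

```bash
#!/usr/bin/env bash
# Hypothetical sketch: extend a Btrfs filesystem when usage exceeds a threshold.
# Assumes the underlying volume has free space to grow into (e.g. a cloud disk
# that has already been enlarged). All names below are illustrative.
MOUNT_POINT="/var/lib/ethereum"   # where chain data lives (assumption)
THRESHOLD=80                      # percent usage that triggers an extension
GROW_BY="+100G"                   # how much to grow per trigger

usage=$(df --output=pcent "$MOUNT_POINT" | tail -1 | tr -dc '0-9')

if [ "$usage" -ge "$THRESHOLD" ]; then
  echo "Usage at ${usage}%, extending ${MOUNT_POINT} by ${GROW_BY}"
  # Btrfs can be resized online without unmounting.
  sudo btrfs filesystem resize "$GROW_BY" "$MOUNT_POINT"
else
  echo "Usage at ${usage}%, no action needed"
fi
```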
By following these best practices, you can effectively manage your Ethereum staking operation's storage needs, ensuring smooth and uninterrupted functionality even during periods of non-finality.
A 1:1 CL:EL setup implies having an equal number of consensus layer clients and execution layer clients. This section also explores other potential combinations, such as using different ratios of CL and EL clients, and discusses the pros and cons of each. Additionally, the use of Vouch and Dirk, which are validator management tools, will be considered in these setups.
Best Practices
1:1 CL:EL Setup
Pros:
Balanced Performance: Ensures a well-rounded performance with equal focus on validating blocks and executing transactions.
Simplicity: Easier to manage and monitor due to the symmetry in the setup.
Cons:
Resource Intensive: Requires significant resources as each EL client needs to be paired with a CL client.
Costly: Higher operational costs due to the need for more hardware and energy.
Other Combinations (e.g., 2:1 CL:EL)
Pros:
Efficiency in Specialized Tasks: Allows for more focus on either consensus or execution, depending on the ratio.
Resource Optimization: Can be more resource-efficient if the workload is heavier on one layer than the other.
Cons:
Complex Management: More complex to manage and optimize due to the uneven distribution of clients.
Potential Bottlenecks: Imbalance can lead to bottlenecks in the layer with fewer clients.
Using Vouch and Dirk
Vouch:
Pros: Enhances security by managing slashing risks; automates and streamlines validator duties.
Cons: Adds an additional layer of complexity; requires understanding and proper configuration to be effective.
Dirk:
Pros: Increases security by isolating key management; beneficial for setups with multiple validators.
Cons: Requires a secure and reliable network setup; may introduce latency in signing operations.
Recommendations
Evaluate Your Specific Needs: Choose a CL:EL ratio that aligns with your operational needs and goals. For instance, a higher number of EL clients may be beneficial if transaction processing is a priority.
Resource Allocation: Assess your available resources to determine the most efficient setup.
Security and Management Tools: Consider implementing tools like Vouch and Dirk to enhance security and efficiency, especially in larger or more complex setups.
Regular Performance Assessments: Continuously monitor the performance and adjust the CL:EL ratio as needed based on network demands and operational changes.
In conclusion, the choice between a 1:1 CL:EL setup and other combinations should be guided by your specific requirements, resource availability, and the need for efficiency and security in your Ethereum staking operations. Using tools like Vouch and Dirk can further optimize the process, though they require careful management and understanding.
Archive nodes in the Ethereum network store the complete history and state of the blockchain. They are essential for specific use cases that require access to the entire history of transactions and states, such as detailed auditing, historical queries, or for development purposes.
Best Practices
Assess Necessity: Operate an archive node only if your specific use case demands access to historical state data. Common scenarios include deep blockchain analytics, development of complex dApps that require historical data, or conducting thorough audits.
Resource Planning: Be prepared for significant resource demands. Archive nodes require a large amount of storage space, often several terabytes, to store the entire Ethereum blockchain history. Additionally, they demand considerable computational power for processing and maintaining this data (an example launch command follows this list).
Regular Data Pruning: Implement strategies for data pruning where appropriate. While an archive node stores everything, regular maintenance and pruning of irrelevant or non-essential data can optimize performance and storage efficiency.
Robust Hardware and Networking: Invest in high-quality hardware with ample storage capacity and fast processing capabilities. Additionally, ensure a robust and high-bandwidth network connection to handle the data throughput required by an archive node.
Backup and Redundancy: Maintain regular backups and consider setting up redundant systems. Given the critical nature of the data stored in archive nodes, having backup systems can prevent data loss due to hardware failure or other issues.
Security Measures: Implement stringent security protocols. Archive nodes are valuable targets due to the comprehensive data they hold. Employ firewalls, intrusion detection systems, and regular security audits to protect against unauthorized access and cyber attacks.
Regular Updates and Maintenance: Keep the node software up-to-date and perform regular system checks. This practice ensures optimal performance and security, and keeps the node in sync with the latest protocol changes in the Ethereum network.
Cost-Benefit Analysis: Regularly evaluate the cost versus the benefits of running an archive node. Considering the high resource requirements, ensure that the value it adds to your operations justifies the expense.
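For reference against the resource-planning item above, the example below sketches how an archive node is typically launched with Geth; the data directory is an illustrative assumption, and other clients have equivalent options.

```bash
# Sketch: running Geth as an archive node. Verify flags against `geth --help`
# for your version; expect storage requirements in the multiple-terabyte range.
geth --syncmode=full \
     --gcmode=archive \
     --datadir=/data/ethereum/archive
```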
By adhering to these best practices, you can effectively manage an Ethereum archive node, ensuring it serves its intended purpose without unnecessary resource expenditure or operational risk.
Scaling your Ethereum staking operation is a critical step that should be approached strategically and systematically. It's vital to recognize when scaling is necessary and to understand how to do it effectively while mitigating risks.
Best Practices
Metrics Monitoring: Continuously monitor key performance indicators such as CPU, memory, and storage utilization, as well as network latency. These metrics provide valuable insights into the current state of your system and indicate when scaling is necessary.
Thresholds for Scaling Actions: Establish clear thresholds for each metric that, when exceeded, trigger a scaling action. For example, if CPU utilization consistently exceeds 80%, it might be time to scale up processing power (a minimal monitoring sketch follows this list).
Cost-Benefit Analysis: Before scaling, conduct a thorough cost-benefit analysis. Consider the financial implications, potential performance improvements, and the overall impact on your operation. Ensure that the advantages of scaling justify the investment required.
Scalability Testing: Regularly conduct load testing to simulate increased demand on your system. This helps identify potential bottlenecks and provides a realistic assessment of how your setup will perform under scaled conditions.
Intensified Post-Scaling Monitoring: After scaling, increase your monitoring efforts to quickly identify and address any issues. This includes tracking the same metrics as before, but with a closer eye on how they change post-scaling.
Flexible and Scalable Architecture: Design your system architecture to be inherently scalable. This can involve using cloud-based services that allow easy scaling, adopting microservices architecture, or using containerization technologies like Docker or Kubernetes.
Regular Reviews and Adjustments: Periodically review your scaling strategy and make adjustments as needed. This could be in response to changes in Ethereum's protocol, shifts in staking rewards, or advancements in technology.
Redundancy and Failover Plans: Ensure that your scaling includes redundancy and failover mechanisms to maintain operations during unexpected issues or peaks in demand.
Security Considerations: As you scale, it's crucial to reinforce your security measures. Scaling often introduces new vulnerabilities, so it's important to conduct security audits and implement enhanced security protocols as part of the scaling process.
User Experience and Performance: Keep an eye on the user experience and overall system performance. Scaling should ultimately result in a smoother, more efficient operation for both administrators and end-users.
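As promised in the thresholds item above, here is a minimal sketch of a CPU-threshold check that could run from cron. It is an illustration only; in practice a metrics stack such as Prometheus with alerting rules is the more common implementation.

```bash
#!/usr/bin/env bash
# Hypothetical sketch: flag when average CPU utilization exceeds a threshold.
THRESHOLD=80

# Sample aggregate CPU counters from /proc/stat twice, 5 seconds apart.
read -r _ user nice system idle rest < /proc/stat
sleep 5
read -r _ user2 nice2 system2 idle2 rest2 < /proc/stat

busy=$(( (user2 + nice2 + system2) - (user + nice + system) ))
total=$(( busy + (idle2 - idle) ))
usage=$(( 100 * busy / total ))

if [ "$usage" -ge "$THRESHOLD" ]; then
  echo "CPU at ${usage}% - consider scaling up" | logger -t scaling-check
fi
```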
By following these best practices, you can scale your Ethereum staking setup effectively, ensuring improved performance and reliability while maintaining control over costs and risks.
Resource scaling requires foresight, preparation, and a good understanding of the operational intricacies of Ethereum staking. By keeping these best practices in mind, you can ensure that your large-scale Ethereum staking operation is both robust and agile, capable of adapting to the ever-evolving landscape of Ethereum staking.
For large-scale node operators managing 100s of validators, migrating validators to new servers is a crucial process that requires meticulous planning and execution. This section provides detailed guidelines and best practices for a seamless migration.
Before commencing the migration of Ethereum validators, a comprehensive pre-migration checklist is imperative. This checklist serves as the foundation for a successful transition, ensuring that all critical data is secured and that the new server environments are primed for a seamless takeover. This phase focuses on safeguarding validator keys, securing beacon node data, meticulously documenting server configurations, and ensuring software version consistency.
Validator Keys Backup: Validator keys are the cornerstone of your Ethereum staking operation. Securing these keys is paramount to maintaining control over your validators. The backup process must be robust, secure, and foolproof.
Implement hardware security modules (HSMs) to store validator keys. These devices offer enhanced security features, making them ideal for protecting high-value cryptographic keys.
Establish a regular schedule for updating and verifying backups. This includes checking the integrity of the backups and ensuring they are uncorrupted and accessible.
Diversify the storage of backups by using multiple physical and geographical locations. This approach mitigates risks associated with natural disasters, power outages, or other localized incidents.
Execution and Beacon Node Data Backup: Beacon nodes play a critical role in maintaining network consensus. Backing up beacon node data and the associated execution layer data ensures that you can quickly recover from hardware failures or other disruptions without compromising the network's integrity or your validators' performance.
Automate the backup process for the blockchain database and beacon node configuration files. Automation reduces human error and ensures consistent, timely backups.
Leverage incremental backup techniques to efficiently manage storage space while keeping the data up-to-date.
Regularly test recovery procedures to ensure that backups can be restored without issues, thereby minimizing downtime during unplanned disruptions.
Server Configuration Details: Accurate documentation of server configurations ensures that the new servers can be set up identically to the old ones, minimizing the risk of configuration-related issues post-migration.
Use "infrastructure as code" (IaC) methodologies to codify server configurations. This approach allows for automated, error-free deployment of server environments.
Maintain a version-controlled repository for all IaC scripts, ensuring an audit trail of changes and an easy rollback mechanism.
Document every detail of the network setup, including configurations, firewall rules, and custom settings, to facilitate a precise replication on the new servers.
Software Version Consistency: Consistent software versions across old and new servers are crucial to avoid compatibility issues during and after migration. This consistency is vital for both the operating system and the Ethereum staking client software.
Adopt containerization technologies like Docker, or use virtualization to create consistent software environments that are easily replicable across different servers.
Establish a procedure for regularly updating and maintaining a catalog of all software versions in use. This catalog should include not only the main staking client software but also any auxiliary tools and dependencies.
Document interdependencies between software components to ensure that updates or changes to one component do not adversely affect others, thereby maintaining a stable and predictable server environment.
If a backup isn't checked and verified, then it's not a backup; it's a Schrödinger's backup, where the data is simultaneously restorable and not restorable. You'll only find out when it's too late, so you MUST test your backups!
The migration process is a critical phase in the transition of Ethereum validators to new servers. This process involves meticulously preparing the new servers, transferring all essential data securely, and conducting thorough testing to ensure operational integrity and performance. Each step in this process is designed to minimize downtime and risk, ensuring a smooth and secure transition.
Preparation of New Servers: The first step in the migration process is to prepare the new servers. This involves ensuring that the hardware is up to the task and that all systems are correctly configured and ready for deployment.
Perform a detailed hardware performance assessment to confirm that the new servers meet or exceed the specifications of the current servers. This assessment should include checks on processing power, memory, storage capacity, and network capabilities.
Develop and implement automated deployment scripts. These scripts should handle the installation and configuration of the necessary software, including the Ethereum staking clients and any related dependencies.
Thoroughly check network connectivity and security settings. Ensure that the new servers comply with the organization’s security protocols and that they can connect to the necessary network infrastructure without issues.
Data Transfer: Safely transferring data from the old to the new servers is a delicate task. It requires a methodical approach to ensure data integrity and security.
Use a staged transfer process. Begin with non-critical data to test the transfer mechanism's integrity and reliability.
Employ secure and encrypted transfer protocols such as SFTP or SCP. This ensures that data is protected during transit and reduces the risk of interception or corruption (a sketch follows this list).
In cases where the dataset is too large for practical network transfer, consider using physical transfer methods. This could involve shipping encrypted hard drives, ensuring data security while moving large volumes of data.
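The sketch below illustrates the encrypted transfer and the hash verification described in this section, using rsync over SSH and sha256sum. Hostnames and paths are placeholder assumptions.

```bash
# Hypothetical example: transfer beacon node data to a new server over SSH,
# then verify integrity against checksums taken on the source.
OLD_DATA="/var/lib/beacon"
NEW_HOST="new-server.example.com"

# Generate checksums on the source before transfer.
find "$OLD_DATA" -type f -exec sha256sum {} + > /tmp/beacon.sha256

# Encrypted, resumable transfer over SSH.
rsync -avz --progress "$OLD_DATA/" "$NEW_HOST:$OLD_DATA/"
rsync -avz /tmp/beacon.sha256 "$NEW_HOST:/tmp/"

# On the new server, verify every file against the original hashes.
ssh "$NEW_HOST" "sha256sum --check --quiet /tmp/beacon.sha256"
```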
Testing on New Servers: After transferring data, it is crucial to test the new servers extensively. These tests confirm the integrity and performance of the migrated systems.
Conduct a series of comprehensive tests, including load testing and failover simulations. Load testing checks the servers’ performance under high usage, while failover simulations test the resilience of the system in case of a server failure.
Verify the integrity of the transferred data by comparing it against the original hashes. This step is vital to ensure that no data corruption occurred during the transfer process.
Perform integration tests to check how the new servers interact with the rest of the network. It’s essential to ensure that this integration is seamless and does not negatively impact the ongoing operations of the network or other servers.
After the migration of Ethereum validators to new servers, it's crucial to ensure that the old servers are taken offline correctly. This stage is vital to prevent any conflicts that might arise from having validators running simultaneously on both old and new servers, which could lead to issues like accidental slashing.
Network Disconnection:
The initial step in decommissioning the old servers is to isolate them from the network. This action prevents any communication between the old servers and the Ethereum network, thereby mitigating the risk of simultaneous validator operations.
Physically disconnect the network cables or disable the network interfaces of the old servers. This step should be done methodically to ensure that no server remains inadvertently connected to the network.
Implement a protocol to check and double-check that all old servers are disconnected. This might involve a physical inspection or a network scan to confirm the absence of signals from the old servers.
Destroy Old Servers:
Completely eliminating the risk of the old servers inadvertently coming back online is a critical safety measure. This step involves permanently disabling or destroying the old servers.
In cloud environments, securely terminate the instances hosting the old validators. Ensure that all data is securely wiped and that the instances cannot be reactivated accidentally.
For physical servers, perform a secure formatting of the drives. This might include using specialized software to overwrite all data, ensuring that no residual data can be recovered and that the servers cannot be mistakenly restarted.
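For the overwrite step, coreutils' shred is one commonly used tool. The device name below is a placeholder; this operation is destructive and irreversible, so triple-check the target before running it.

```bash
# Overwrite the target drive once, then finish with zeros (destructive!).
sudo shred --verbose --iterations=1 --zero /dev/sdX   # /dev/sdX is a placeholder
```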
Monitoring Tools: Continuous monitoring is essential to confirm that the old validators are fully offline and no longer part of the network. This reassurance is crucial for the overall security and integrity of the validator operation.
Employ network monitoring tools to keep an eye on the network traffic. These tools should be configured to detect any unauthorized or unexpected activity that might indicate the old servers are still operational.
Set up alerts to notify the team immediately if any activity from the old servers is detected. This prompt response mechanism ensures that any potential issues can be addressed swiftly, reducing the risk of network conflicts or security breaches.
Regularly review and analyze the network logs for a period post-migration to ensure that there are no traces of the old validators. This thorough analysis helps in confirming the success of the migration process and the decommissioning of the old servers.
Do not blindly trust monitoring tools! Implement "fail-safe" processes.
Processes should be designed to be "fail-safe" such that if something does go wrong, the outcome is a safe mode of failure. Scenarios often happen like this:
Admin 1: "The old servers are offline, the monitoring tool says they are shut down."
Admin 2: "Ok great, I'll turn on the new servers then."
X Post: "BREAKING! Slashing incident occurred!"
Admin 1: "Oh, an automated system noticed the old servers were off and restarted them, I didn't expect that..."
This is not a scenario anyone wants to experience, and implementing a fail-safe process can avoid these types of issues.
Admin 1: "I've turned off the old servers, physically disconnected them from the network, isolated the network they were on, formatted their drives and confirmed the servers are offline and not reachable."
Admin 2: "Ok great, I'll turn on the new servers then."
X Post: "BREAKING! First uneventful day in crypto recorded. Nothing bad happened."
Implementing a rolling migration strategy is a prudent approach to transferring Ethereum validators to new servers. This strategy involves moving validators in controlled batches, allowing for more effective monitoring and reducing the overall risk of the migration process. By adopting this approach, you can ensure a smoother transition, with opportunities to adjust the strategy based on the performance and feedback from each batch.
Batch Migration: Migrating validators in batches, instead of all at once, provides a more manageable and less risky process. This approach allows for focused attention on each batch, ensuring a higher success rate with minimal disruption to the network.
Plan the migration in several phases, dividing the validators into logical groups. These groups can be based on their function, performance, or other relevant criteria.
Ensure each batch is small enough to manage effectively but large enough to provide meaningful insights into the migration process.
Develop a detailed schedule for each batch, including specific timelines and checkpoints, to maintain control over the migration process.
Test Batch:
Starting the migration with a test batch is a crucial step. This initial batch serves as a pilot, helping to identify any unforeseen issues or challenges that might arise during the migration.
Monitor the test batch closely, paying particular attention to performance metrics and potential issues.
Use the insights gained from this test batch to refine and optimize the process for subsequent batches.
Monitoring and Verification: Continuous monitoring and verification after each batch migration are essential to ensure the validators are operating correctly on the new servers.
Implement robust monitoring tools to track the performance and behavior of the newly migrated validators.
Verify that each batch of validators is correctly interacting with the Ethereum network and performing as expected.
Conduct a post-migration review for each batch, documenting any issues and the steps taken to resolve them.
Iterative Approach: An iterative approach to migration allows for continuous improvement of the process. Feedback and performance data from each batch should inform adjustments and refinements for subsequent batches.
After each batch, gather feedback from the team involved in the migration process. This feedback should cover technical, operational, and performance aspects.
Analyze performance data to identify any trends or recurring issues.
Adjust the migration plan and strategy based on this analysis, applying lessons learned to future batches to increase efficiency and reduce potential risks.
The post-migration phase is crucial in solidifying the success of the Ethereum validators' transfer to new servers. This phase involves rigorous data validation, continuous performance monitoring, having redundancy plans in place, and thorough documentation and reporting. Each aspect is essential to ensure the integrity and optimal performance of the validators in their new environment.
Data Validation: Post-migration, it's imperative to verify that all data transferred to the new servers is intact and accurate. This step ensures that the validators operate based on complete and uncorrupted data.
Perform detailed checks to compare the transferred data with the original datasets. This can include hash checks, record counts, and sample data verification.
Validate the operational status of each validator, ensuring they are actively and correctly participating in the network.
Employ automated scripts to scan through data and flag any inconsistencies or missing elements for further investigation.
Performance Monitoring: Continuous monitoring of the validators' performance on the new servers is essential to promptly identify and address any issues.
Set up comprehensive monitoring systems to track various performance metrics such as block production, attestation effectiveness, and hardware resource utilization.
Establish alerts for any deviations from expected performance benchmarks.
Regularly review performance data to identify trends or patterns that may require intervention or optimization.
Redundancy Plans: Despite thorough planning and execution, unforeseen issues can arise. Having a redundancy or rollback plan is crucial for quick recovery without significant impact.
Keep the old servers in a standby mode for a predetermined period after migration. This acts as a safety net in case a rollback is necessary.
Document and rehearse the rollback process to ensure a swift and efficient response if needed.
Regularly update and test backup systems even after migration to ensure they are ready for use in any emergency rollback scenario.
Documentation and Reporting: Detailed documentation and reporting of the entire migration process are vital for future reference and for understanding the migration's impact.
Create comprehensive reports detailing each step of the migration process, including strategies used, challenges encountered, and solutions implemented.
Document any performance improvements or operational enhancements observed post-migration.
Share these insights with relevant teams and stakeholders to inform future migrations and to contribute to the organization’s knowledge base.
Migrating validators to new servers at scale is a complex process that requires careful planning and execution. Following these guidelines and best practices will help ensure a smooth transition with minimal downtime or risk to your validators. Always prioritize the security and integrity of your validator keys and data during the migration process.
In the realm of large-scale Ethereum staking operations, effective monitoring is crucial for maintaining network health and validator performance. This section introduces the standard beacon API and metrics, pivotal in overseeing a vast array of validators. We'll also explore the potential of open-sourced dashboards, which could revolutionize how these metrics are visualized and managed.
The Beacon API serves as a gateway to the Ethereum beacon chain, providing standardized, accessible data crucial for monitoring validator performance and overall network health. This API offers detailed insights into various metrics like validator uptime, proposed blocks, missed attestations, and more. The API's standardized format ensures data can be easily integrated into various monitoring tools and dashboards, allowing for a cohesive and comprehensive view of the network's status.
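For example, any beacon node that implements the standard Beacon API can be queried directly over HTTP. The commands below assume a node listening on the default REST port 5052; the validator index is purely illustrative.

```bash
# Node health: HTTP 200 = healthy, 206 = syncing.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5052/eth/v1/node/health

# Sync status of the beacon node.
curl -s http://localhost:5052/eth/v1/node/syncing | jq .

# Status and balance of a single validator by index (12345 is illustrative).
curl -s http://localhost:5052/eth/v1/beacon/states/head/validators/12345 | jq .
```

Because these endpoints are standardized, the same queries work across consensus clients, which is what makes them suitable as the foundation for shared dashboards and monitoring tooling.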
Customizable Views: One of the key features of these dashboards would be the ability to customize views according to specific needs. Operators could tailor dashboards to focus on metrics most relevant to their operations, such as real-time validator performance, network participation rates, or epoch summaries.
Advanced Features: Imagine dashboards equipped with features like real-time data visualization, which would allow operators to see network changes as they happen. Historical data analysis tools could enable operators to identify trends over time, providing insights into long-term performance and network health. The integration of customizable alerts would mean operators can be immediately notified of potential issues, allowing for prompt response to maintain network integrity.
Community Contribution: The open-source nature of these dashboards encourages community collaboration, leading to continuous improvements and innovations. This communal effort can significantly advance the way Ethereum staking operations are monitored at scale.
How Long to Store Data: The decision on the duration of data storage for monitoring Ethereum validators is pivotal. It influences not just operational analysis but also compliance and resource allocation.
Factors Influencing Storage Duration: Key factors include regulatory compliance requirements, which may dictate minimum storage periods for certain types of data; the capacity for data storage, as extensive data can demand significant storage resources; and the practical utility of historical data in identifying trends and making informed decisions.
Strategic Planning for Data Retention: Operators need to balance the need for comprehensive historical data with practical considerations of storage capacity and management.
Proactive Monitoring: Proactive monitoring transcends the reactive nature of alerts. It involves a continuous, comprehensive review of the system’s health and performance metrics. Regularly schedule system health checks, analyze performance trends, and anticipate potential issues before they escalate. This approach helps in maintaining optimal system performance and preventing downtime.
Balancing Alerts: The efficacy of alert systems can be compromised by an overload of notifications, leading to 'alert fatigue'. Striking a balance is crucial. Prioritize and categorize alerts based on severity and impact. This helps in ensuring that critical issues are addressed promptly and less critical alerts do not cause unnecessary distractions.
Regular Audits and Updates:
The dynamic nature of Ethereum's network and validators necessitates regular audits and updates of the monitoring system. Schedule periodic reviews of the monitoring setup. Update alert parameters and monitoring tools to keep pace with network changes and evolving operational needs.
Documentation and Training:
Comprehensive documentation and proper training are essential for the effective use of monitoring tools and understanding alerts. Maintain detailed documentation of all monitoring procedures, alert systems, and operational guidelines. Conduct regular training sessions for team members to ensure they are adept at using these tools and responding to alerts.
Community Involvement:
Engagement with the broader Ethereum community is a valuable resource for staying abreast of best practices and emerging trends in monitoring technologies. Actively participate in community forums, attend webinars, and collaborate with other operators. Sharing experiences and insights can lead to enhanced monitoring strategies and a more resilient Ethereum staking ecosystem.
Effective monitoring at scale is an evolving discipline that requires a blend of robust technology, strategic planning, and continuous learning. By leveraging standard APIs, implementing essential alerts, and following best practices, large-scale node operators can maintain high performance and security in their Ethereum staking operations.
As described earlier, the only way to receive rewards from the Beacon chain, or the initial 32 ETH deposit upon a validator exit, is for a validator to have set a withdrawal address, changing their Withdrawal Credentials from 0x00 to 0x01.
It is possible upon validator creation to specify a withdrawal address and, if you have done so, there is no need to update your credentials. In fact, once your credentials have been set to 0x01 it will not be possible to change them in the future. This is why it is imperative that when you choose a withdrawal address, you choose one that you have full control over such as a hardware wallet. It is heavily recommended to NOT choose a wallet on an exchange or third party where you do not control the private keys.
Please note: If at any point you are confused as to what to do, please ask the EthStaker community for guidance. There are no stupid questions and we always strive to be welcoming first and knowledgeable second.
eth-docker users: There is a standalone guide if you use eth-docker. The following guide can be considered a companion, as the steps are very similar.
Choose an address you have full control over: Hardware wallets are preferred and exchange wallets MUST NOT BE USED. You may think it is clever to withdraw to a hot wallet or an exchange to avoid extra transaction fees, but you are risking not only your rewards but also the initial 32 ETH deposit.
Once your withdrawal credentials have changed from 0x00 to 0x01, they cannot be changed in the future.
Your mnemonic is required to change your credentials: Your funds will be locked indefinitely as long as your withdrawal credentials are 0x00. Without your mnemonic, it will not be possible to update your credentials. Except for the extremely rare case of controlling the withdrawal private key and keystores, there is no alternative, so if you are having trouble locating your mnemonic, search thoroughly and retrace your steps.
Go offline! Security is important: When making this change, you will be exposing your mnemonic so it is heavily encouraged to perform this action offline. Failing to do so could result in the theft of the mnemonic and your validators.
All Beacon chain rewards and the initial deposit will go to the specified address automatically without user interaction: The address specified will be the only location rewards and the initial deposit can go once set. If the specified address becomes compromised, it is advised to work with a White Hat group to recover your funds.
Do not throw away your mnemonic after updating your credentials: Even after the credentials are changed, you are still advised to hold on to your mnemonic as it can be used to regenerate your keystore files if those files become lost. Your mnemonic can be passed on to your heirs.
There are two primary tools that are used to make the credential change and both have different requirements. Look at both options and choose based on your situation. Normally if you have multiple validators associated with a single mnemonic, ethdo is the preferred approach.
Wagyu Key Gen: A GUI application that provides the functionality available with the Ethereum Staking CLI tool. If you are a non-technical user, this is a perfect choice. It is easy to use and less error-prone than attempting the Staking CLI directly. There is a guide available for this tool.
ethdo: An extremely powerful CLI tool that is ideal for technical users or those who have multiple validators associated with the same mnemonic. This tool has also proved to be very effective for users who have run into issues with Wagyu Key Gen, normally due to misunderstandings. Due to the technical barrier of this tool, the rest of the guide will be centered around how to use it.
In order for ethdo to make the necessary changes, there are a number of things you will need:
Offline Preparation File: This is a file that contains information on all validators in the Beacon chain such as the public key, validator index, and current credentials (called the Beacon chain state). This data is required for the tool to make the necessary signature. To generate this file, you can run the following command on your Beacon chain client:
./ethdo validator credentials set --prepare-offline
An execution layer address: In order to receive funds, you need to specify an execution layer address that you fully control. This would preferably be a hardware wallet such as a Ledger, but above all you want to choose an address with the highest security. After an address is set, if the corresponding wallet were to be compromised, you would have a high likelihood of losing your rewards and initial deposit.
A USB flash drive: The machine you are going to perform this change on will not have access to the internet and thus you will not be able to download or upload anything directly. In order to get the necessary information to the machine and the results from the machine, you will need a flash drive to store said information and results. On this flash drive you should put the ethdo CLI tool, the offline-preparation.json file, and the address you wish to set.
Before you start this process, understand that until you move your results to an online computer and submit them, you cannot mess up this step. Take a breath and relax; we'll get through it together.
Warning: It's worth repeating - Please go through the effort of performing this operation on a fully offline computer. A user reported that they made the change online on their work computer and they received a message from their IT staff with their mnemonic included. Luckily for that individual, whoever discovered the mnemonic was a good person and warned them. Please take security seriously.
In the terminal, we are going to copy the contents of the flash drive to your local device to avoid any permission issues. To do so, you first need to locate your drive contents. Usually running these commands will find it:
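A typical sequence on an Ubuntu live session looks like the following; the volume name is an assumption and will differ on your system:

```bash
# List block devices and their mount points.
lsblk

# Removable media is typically auto-mounted under /media/<user>/.
ls /media/"$USER"/
cd /media/"$USER"/<YOUR_USB_NAME>   # replace with the name shown by ls
```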
Once found, you can run pwd and copy the resulting location along with the USB drive name.
Now to copy the contents, we can navigate back to our home directory and run a command to copy the contents and fix the permissions:
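Assuming the layout above, a command along these lines copies everything home and makes ethdo executable:

```bash
cd ~
# <PWD_RESULT> is the USB path you noted earlier.
cp -r <PWD_RESULT>/. ~/
chmod +x ~/ethdo
```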
Where <PWD_RESULT> is the result of your pwd command above; something similar to /media/ubuntu/<your USB drive name>.
At this point, running ./ethdo should result in the CLI executing and providing a list of all the commands it has to offer.
Now with your address and mnemonic on hand, you can run the following command and fill in the necessary information:
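Based on ethdo's documented usage, the command should be close to the following sketch; all values are placeholders, and the flags are worth verifying against ./ethdo validator credentials set --help for your version:

```bash
# Values below are placeholders; run this only on the offline machine.
./ethdo validator credentials set \
  --offline \
  --mnemonic="abandon abandon ... art" \
  --withdrawal-address=0x0123456789abcdef0123456789abcdef01234567
```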
Where the withdrawal-address is the execution address you FULLY CONTROL and the mnemonic is the 24-word phrase that was used or created when you made your validators.
In order to submit the operation, we need to transfer the change-operations.json file to a computer with a connection to the internet. Copy the change-operations.json file to your USB through the file explorer or the following command:
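Assuming the same placeholder path as earlier:

```bash
cp ~/change-operations.json <PWD_RESULT>/
```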
The destination, again, should be similar to the <PWD_RESULT> location found earlier.
You can now safely shut off the air-gapped computer.
Plug in the USB to a computer you are comfortable with that has an internet connection.
Open the change-operations.json file and look for an attribute called to_execution_address. That is the address your Beacon chain rewards will go to. Be absolutely sure this is the address you specified and have full control over. Sending a test transaction to and from the address is advised.
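Assuming a beacon node exposing the standard Beacon API on the default REST port 5052, the operation can be submitted with a sketch like:

```bash
curl -X POST \
  -H "Content-Type: application/json" \
  -d @change-operations.json \
  http://<IP>:5052/eth/v1/beacon/pool/bls_to_execution_changes
```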
Where <IP> is the address of your node, most likely localhost.
After you have set a withdrawal address, changing your credentials from 0x00 to 0x01, it is NOT possible to change them again. If you would like to have your rewards and deposit go to a different address, you will need to exit the validator and reenter with the desired address specified.
The only way to submit a valid credential change is if the operation is signed with the mnemonic you created the validators with. This is normally a 24-word phrase. Your deposit account is irrelevant unless you specified your deposit mnemonic when creating your validators.
We are sorry to say no. Beacon chain rewards and the initial deposit can only go to the set address, or will remain locked indefinitely if the credentials are 0x00. During the development of the Beacon chain and at the time of its launch, it was expected that 0x00 credentials could be used to handle withdrawals directly, but due to changes in development plans, it was decided that an execution layer address would need to be specified, which forced the 0x00 to 0x01 change.
ethdo works by searching the Beacon chain state for the public keys corresponding to the mnemonic provided. It attempts 1024 different indices (also known as paths or positions) before failing. If you use a third party such as StakeFish or Staked.Us, it is possible the public key will not match. You can get around this by using your private key instead. Please follow these instructions and get in contact with the EthStaker community if you have issues:
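ethdo's account derivation subcommand should be close to the following sketch (the mnemonic is a placeholder); it prints the key for the 0th path:

```bash
./ethdo account derive \
  --mnemonic="abandon abandon ... art" \
  --path="m/12381/3600/0/0/0" \
  --show-private-key
```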
This will output the private key of the 0th path. Copy that value and try this command:
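Again a sketch based on ethdo's documented flags; both values are placeholders, so verify with --help before running:

```bash
./ethdo validator credentials set \
  --private-key=0x... \
  --withdrawal-address=0x0123456789abcdef0123456789abcdef01234567
```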
When you have set a withdrawal address, the credentials have a pattern of 0x01 followed by 22 zeros and then your address without the 0x prefix. So if your address was 0x123456789abcdeedcba987654321012345789abc, your credentials would be 0x010000000000000000000000123456789abcdeedcba987654321012345789abc after the change was successful.
Rolling out software upgrades is a common operational task, but one that is often overlooked as a service scales up. Deploying new releases in time is particularly important when running Ethereum, or blockchain infrastructure in general, because:
New upgrades and forks are only supported by new releases; failing to upgrade in time will result in nodes that stop syncing or produce invalid blocks. In the worst-case scenario, it will cause validators to go offline.
New releases might contain security patches. Applying them promptly hardens the network and your systems, preventing financial or data loss.
While understanding the importance of timely software upgrades is crucial, it's also essential to recognize the challenges associated with implementing them effectively. In the next section, we'll discuss the difficulties of maintaining up-to-date software at scale and best practices for overcoming these obstacles.
Managing software upgrades becomes increasingly complex as the number of nodes and clients in a network grows. When operating at scale, challenges such as breaking changes in configurations and newly introduced bugs can significantly impact the upgrade process. Furthermore, updating in production environments is more difficult due to the need to minimize downtime.
Apart from technical issues, several non-technical challenges can hinder the upgrade process:
De-prioritization: Software upgrades are often treated as nice-to-have rather than essential, leading to outdated software and potential security vulnerabilities.
Different versioning schemas: Inconsistent versioning schemes across clients can complicate the upgrade process and increase the likelihood of errors.
Too many software releases to track: Monitoring and managing numerous software releases that require regular updates can be a daunting task, especially in large-scale environments with multiple nodes and clients.
While these challenges can seem overwhelming, adopting effective strategies, processes, and best practices can streamline the software upgrade process. In the following sections, we'll discuss potential solutions to address the challenges faced by Ethereum validator operators during software upgrades.
If you haven't already, consider tracking your deployments and configurations in a Git repository or adopting GitOps practices for streamlined version control and consistency.
A software inventory is a comprehensive record of all software components within your system, including client implementations, versions, and configurations. Maintaining an up-to-date software inventory is crucial for successful software upgrades, as it helps you better plan for and execute upgrades while minimizing the risk of unexpected issues. The following approaches can help you achieve a well-organized software inventory:
Metrics: Many clients expose version information through their Prometheus metrics, making it simple to create a Grafana panel that displays version information for all your workloads. By visualizing this data, you can easily monitor software versions and identify any discrepancies that may require attention.
In-house solution: Develop a custom in-house solution tailored to your specific needs, such as a simple shell script (a minimal sketch follows this list). Creating a solution that fits your organization's requirements allows for seamless integration with your existing systems and processes.
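As an illustration of such an in-house approach, here is a hypothetical sketch that polls a list of hosts over SSH and records the execution client version. Host names are placeholders, the script assumes SSH key access, and the version command differs per client.

```bash
#!/usr/bin/env bash
# Hypothetical software inventory sketch. Assumes Geth is the execution
# client on every host; adapt the command per client in a real deployment.
HOSTS=(node1.example.com node2.example.com)

for host in "${HOSTS[@]}"; do
  version=$(ssh "$host" "geth version" 2>/dev/null | awk '/^Version:/ {print $2}')
  printf '%s\tgeth\t%s\n' "$host" "${version:-unknown}"
done
```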
To keep your software up to date, it's essential to stay informed about the latest releases and improvements. Most software projects publish new releases on GitHub, making it an ideal starting point to subscribe to new release events for all the clients you use. There are several tools and services available to help you accomplish this:
GitHub's built-in subscription feature: Use the native subscription feature on GitHub to receive notifications about new releases from your followed repositories.
Alternatively, you can develop a custom bot tailored to your requirements for tracking new releases and staying informed about the latest updates.
Proper planning is crucial for the smooth execution of software upgrades. By incorporating software upgrade planning into your project management process, you can ensure timely implementation, address the risk of de-prioritization, and mitigate potential technical issues by allowing more time to research and prepare.
Ensure that all important data is backed up in advance, and have a detailed, repeatable process for this. A backup is only truly a backup if you are actually able to restore it, so test the full end-to-end process; do not just assume that because you took a backup, it will work.
Make sure you talk about software upgrades in the planning session: For example, if your team uses Scrum and conducts planning every two weeks, allocate 20 minutes just to go through all the new software releases. This practice will help ensure that upgrades are prioritized and completed within the designated time frame.
Check for deadlines and breaking changes: When you go through the releases, make sure you check for any deadlines associated with software upgrades, such as hard forks or security patches, and plan accordingly. Also, examine the release notes for breaking changes that may require additional work or adjustments to your existing configurations. Keep in mind that some changes, like database upgrades, might be irreversible and must be rolled out with care.
Automation can significantly streamline the software upgrade process, minimizing the risk of human error and reducing the time required to complete the upgrade. Implementing automation in your upgrade process can increase efficiency and ensure that updates are consistently applied. Here are some example tasks that can benefit from automation:
Automated OS Security Patches and Updates: Use automated systems for scheduling and deploying patches; this reduces human error and ensures timely updates. Classify patches based on security risk and functionality impact, and prioritize critical security patches to mitigate vulnerability risks. Implement patches in stages, starting with a small, controlled group to identify potential issues before wider deployment. Test patches in a separate environment to ensure compatibility and functionality without affecting live systems. Maintain detailed records of all patches for compliance, documenting the patch, affected systems, and deployment dates.
Automated pull requests: Utilize tools or scripts that automatically create pull requests when new software releases are detected, updating the deployment definition accordingly. This approach ensures that your system stays up-to-date with the latest software versions and reduces the manual effort required to initiate updates.
Automated rollout and rollback: Use tools like Argo Rollouts to define acceptance criteria and roll out new versions automatically. This method is particularly useful if you require several hours to confirm the success of each deployment. Additionally, these tools often provide built-in rollback capabilities, ensuring that your system can quickly recover from any issues encountered during the upgrade process.
Incremental rolling upgrades: When running Ethereum validators, unless a critical update is required, not all nodes need to be updated at the same time. To avoid problems with bugs in new software versions, incrementally upgrading nodes during automated upgrades can improve resiliency.
By incorporating automation into your software upgrade strategy, you can greatly improve the overall efficiency and reliability of your update process, ensuring that your Ethereum nodes remain secure and up-to-date.
A key principle of DevOps is to treat servers like "cattle, not pets". This means that infrastructure should not be seen as a collection of unique snowflakes, but as replaceable units. When an update doesn't work as expected, e.g. a database gets corrupted, rather than creating exotic new scripts on the fly and performing heroics in an attempt to fix it, processes should be in place that demand the servers be shut down and recreated.
Important data, such as validator keys, should already be securely backed up. If you're ever in a situation where an update breaks a server and you lose critical data, it is not the fault of the update but of important steps missing from your processes!
Keeping software up-to-date requires commitment and is an essential part of operating a secure and stable blockchain infrastructure, such as Ethereum. By understanding the challenges associated with software upgrades and implementing effective solutions, such as maintaining an accurate software inventory, staying informed about new releases, planning for upgrades, and automating the upgrade process, you can ensure timely and successful software updates.
Updates at scale are a complex yet critical component of maintaining a large-scale Ethereum staking operation. By following these guidelines and continuously refining your processes, you can achieve a balance of security, performance, and reliability in your infrastructure.
The information in the Scaled Node Operators section has been written and reviewed by Igor Mandrigin and Gateway.fm, a leading large scale Ethereum staking infrastructure provider.
Vouch: A slashing protection and validator orchestration tool.
Dirk: A remote signer for ETH2 validators, emphasizing security.
Monitoring tools can fail. Plan for scenarios where things don't work as expected and the tools that should alert you to those anomalies fail as well.
Existing open-source projects can act as a starting point for such dashboards.
Check out the dedicated alerting page for more detailed information on alerting.
The ethdo tool: Note that you may not have ethdo on your machine and will need to download it.
If you do not have access to your validator or are using a third party, you can ask the community for a version.
The validator mnemonic: When you generated your validator, you created or provided a mnemonic. If you do not have ownership of this key or have lost it, you will not be able to continue further and make the necessary signature.
Offline air-gapped machine: Because you are going to be exposing the mnemonic to sign this operation, it is recommended to use an offline machine to perform the operation. There are numerous guides on how to create an air-gapped machine.
With the Live USB that you created during the preparation step, plug it into your offline machine and boot the computer. You will want to choose the "Try Ubuntu without installing" option. If you get stuck, there are guides you can follow. Once the computer is running, be sure to shut off all network capabilities by looking in the upper right of the screen for the network icon. Clicking on the icon will give you the option to turn off the connection to the internet.
Now that you have the offline operating system running, plug in your USB that has ethdo, your address, and the offline-preparation.json file. You will likely see a notification appear that the device was detected; clicking on that device will open the file explorer to that device.
At this point, we are going to be operating in a Terminal, which will allow us to execute the ethdo CLI and create the operation. You can open the Terminal by clicking on Activities in the top left and then typing terminal. If you get stuck, you can follow a guide.
This should output a change-operations.json file with your changes. If you run into any issues, please view the FAQ or ask us on Discord. We are always happy to help in any way we can.
Once you have verified the address, you can submit using Beaconcha.in's broadcast tool, or you can transfer the file to your Beacon chain node and run the submission command shown earlier (the POST to /eth/v1/beacon/pool/bls_to_execution_changes).
At this point, the submission process and propagation should be near instantaneous. Look up your validator on a Beacon chain explorer such as beaconcha.in and see if the withdrawal credentials have been updated. When viewing a validator, there is a Deposits section which should note the change of your credentials.
At this point, your credentials have been updated and you will automatically receive your Beacon chain rewards as described above.
Nope. You are all set! You will periodically receive your rewards as defined above.
Then follow the same steps.
Third-party tools: Leverage third-party tools to help manage your software inventory. There are many open-source tools available for generating inventory reports. If you're using Kubernetes, you can use or to help you track client versions in your deployments.
NewReleases.io: A handy service that allows you to track new releases across various platforms, including GitHub.
Other alternatives: Explore other tools for monitoring new software releases and choose the one that best suits your needs.
Last updated: November 28, 2022
This Privacy Policy describes Our policies and procedures on the collection, use and disclosure of Your information when You use the Service and tells You about Your privacy rights and how the law protects You.
We use Your Personal data to provide and improve the Service. By using the Service, You agree to the collection and use of information in accordance with this Privacy Policy.
The words of which the initial letter is capitalized have meanings defined under the following conditions. The following definitions shall have the same meaning regardless of whether they appear in singular or in plural.
For the purposes of this Privacy Policy:
Account means a unique account created for You to access our Service or parts of our Service.
Company (referred to as either "the Company", "We", "Us" or "Our" in this Agreement) refers to EthStaker, r/ethstaker.
Cookies are small files that are placed on Your computer, mobile device or any other device by a website, containing the details of Your browsing history on that website among its many uses.
Country refers to: California, United States
Device means any device that can access the Service such as a computer, a cellphone or a digital tablet.
Personal Data is any information that relates to an identified or identifiable individual.
Service refers to the Website.
Service Provider means any natural or legal person who processes the data on behalf of the Company. It refers to third-party companies or individuals employed by the Company to facilitate the Service, to provide the Service on behalf of the Company, to perform services related to the Service or to assist the Company in analyzing how the Service is used.
Usage Data refers to data collected automatically, either generated by the use of the Service or from the Service infrastructure itself (for example, the duration of a page visit).
Website refers to EthStaker Knowledge Base, accessible from https://ethstaker.gitbook.io/ethstaker-knowledge-base
You means the individual accessing or using the Service, or the company, or other legal entity on behalf of which such individual is accessing or using the Service, as applicable.
While using Our Service, We may ask You to provide Us with certain personally identifiable information that can be used to contact or identify You. Personally identifiable information may include, but is not limited to:
Usage Data
Usage Data is collected automatically when using the Service.
Usage Data may include information such as Your Device's Internet Protocol address (e.g. IP address), browser type, browser version, the pages of our Service that You visit, the time and date of Your visit, the time spent on those pages, unique device identifiers and other diagnostic data.
When You access the Service by or through a mobile device, We may collect certain information automatically, including, but not limited to, the type of mobile device You use, Your mobile device unique ID, the IP address of Your mobile device, Your mobile operating system, the type of mobile Internet browser You use, unique device identifiers and other diagnostic data.
We may also collect information that Your browser sends whenever You visit our Service or when You access the Service by or through a mobile device.
We use Cookies and similar tracking technologies to track the activity on Our Service and store certain information. Tracking technologies used are beacons, tags, and scripts to collect and track information and to improve and analyze Our Service. The technologies We use may include:
Cookies or Browser Cookies. A cookie is a small file placed on Your Device. You can instruct Your browser to refuse all Cookies or to indicate when a Cookie is being sent. However, if You do not accept Cookies, You may not be able to use some parts of our Service. Unless you have adjusted Your browser setting so that it will refuse Cookies, our Service may use Cookies.
Web Beacons. Certain sections of our Service and our emails may contain small electronic files known as web beacons (also referred to as clear gifs, pixel tags, and single-pixel gifs) that permit the Company, for example, to count users who have visited those pages or opened an email and for other related website statistics (for example, recording the popularity of a certain section and verifying system and server integrity).
Cookies can be "Persistent" or "Session" Cookies. Persistent Cookies remain on Your personal computer or mobile device when You go offline, while Session Cookies are deleted as soon as You close Your web browser. You can learn more about cookies in the TermsFeed article on the subject.
We use both Session and Persistent Cookies for the purposes set out below:
Necessary / Essential Cookies
Type: Session Cookies
Administered by: Us
Purpose: These Cookies are essential to provide You with services available through the Website and to enable You to use some of its features. They help to authenticate users and prevent fraudulent use of user accounts. Without these Cookies, the services that You have asked for cannot be provided, and We only use these Cookies to provide You with those services.
Cookies Policy / Notice Acceptance Cookies
Type: Persistent Cookies
Administered by: Us
Purpose: These Cookies identify if users have accepted the use of cookies on the Website.
Functionality Cookies
Type: Persistent Cookies
Administered by: Us
Purpose: These Cookies allow us to remember choices You make when You use the Website, such as remembering your login details or language preference. The purpose of these Cookies is to provide You with a more personal experience and to avoid You having to re-enter your preferences every time You use the Website.
For more information about the cookies we use and your choices regarding cookies, please visit our Cookies Policy or the Cookies section of our Privacy Policy.
The Company may use Personal Data for the following purposes:
To provide and maintain our Service, including to monitor the usage of our Service.
To manage Your requests: To attend and manage Your requests to Us.
The Company will retain Your Personal Data only for as long as is necessary for the purposes set out in this Privacy Policy. We will retain and use Your Personal Data to the extent necessary to comply with our legal obligations (for example, if we are required to retain your data to comply with applicable laws), resolve disputes, and enforce our legal agreements and policies.
The Company will also retain Usage Data for internal analysis purposes. Usage Data is generally retained for a shorter period of time, except when this data is used to strengthen the security or to improve the functionality of Our Service, or We are legally obligated to retain this data for longer time periods.
Your information, including Personal Data, is processed at the Company's operating offices and in any other places where the parties involved in the processing are located. It means that this information may be transferred to, and maintained on, computers located outside of Your state, province, country or other governmental jurisdiction where the data protection laws may differ from those in Your jurisdiction.
Your consent to this Privacy Policy followed by Your submission of such information represents Your agreement to that transfer.
The Company will take all steps reasonably necessary to ensure that Your data is treated securely and in accordance with this Privacy Policy and no transfer of Your Personal Data will take place to an organization or a country unless there are adequate controls in place including the security of Your data and other personal information.
You have the right to delete or request that We assist in deleting the Personal Data that We have collected about You.
Our Service may give You the ability to delete certain information about You from within the Service.
You may update, amend, or delete Your information at any time by signing in to Your Account, if you have one, and visiting the account settings section that allows you to manage Your personal information. You may also contact Us to request access to, correct, or delete any personal information that You have provided to Us.
Please note, however, that We may need to retain certain information when we have a legal obligation or lawful basis to do so.
Under certain circumstances, the Company may be required to disclose Your Personal Data if required to do so by law or in response to valid requests by public authorities (e.g. a court or a government agency).
The Company may disclose Your Personal Data in the good faith belief that such action is necessary to:
Comply with a legal obligation
Protect and defend the rights or property of the Company
Prevent or investigate possible wrongdoing in connection with the Service
Protect the personal safety of Users of the Service or the public
Protect against legal liability
The security of Your Personal Data is important to Us, but remember that no method of transmission over the Internet, or method of electronic storage is 100% secure. While We strive to use commercially acceptable means to protect Your Personal Data, We cannot guarantee its absolute security.
Our Service may contain links to other websites that are not operated by Us. If You click on a third party link, You will be directed to that third party's site. We strongly advise You to review the Privacy Policy of every site You visit.
We have no control over and assume no responsibility for the content, privacy policies or practices of any third party sites or services.
We may update Our Privacy Policy from time to time. We will notify You of any changes by posting the new Privacy Policy on this page.
We will let You know via email and/or a prominent notice on Our Service, prior to the change becoming effective and update the "Last updated" date at the top of this Privacy Policy.
You are advised to review this Privacy Policy periodically for any changes. Changes to this Privacy Policy are effective when they are posted on this page.
If you have any questions about this Privacy Policy, You can contact us:
By visiting this page on our website: https://www.reddit.com/r/ethstaker/
The EthStaker Knowledge Base aims to be an unbiased, open-source collection of useful information and concepts related to Ethereum staking. To ensure a positive and productive environment for all contributors and users, we have established a code of conduct. By participating in the EthStaker Knowledge Base, you agree to abide by these guidelines.
Be Respectful and Inclusive:
Treat all participants with kindness, respect, and empathy, regardless of their background, experience, or perspectives. The EthStaker Knowledge Base welcomes individuals from all walks of life and seeks to foster an inclusive community that values diverse ideas and perspectives.
Maintain a Professional Tone:
Ensure that your contributions to the Knowledge Base are clear, concise, and professional. Avoid using offensive language, personal attacks, or engaging in any form of harassment. Constructive criticism is encouraged, but always provide it in a respectful and helpful manner.
Contribute Accurate and Unbiased Information:
Ensure that the information you contribute is accurate, up-to-date, and unbiased. Refrain from promoting specific projects, products, or services that could lead to conflicts of interest. Instead, provide objective comparisons and analyses that empower users to make informed decisions.
Acknowledge and Attribute Sources:
When using information, ideas, or concepts from other sources, provide proper attribution and credit. This not only maintains the integrity of the Knowledge Base but also recognizes and respects the work of others in the community.
Prioritize Collaboration and Open Communication:
The EthStaker Knowledge Base thrives on collaboration and open communication. Share your ideas, ask questions, and engage with other contributors in a constructive and transparent manner. By working together, we can create a comprehensive and valuable resource for the Ethereum staking community.
Protect User Privacy:
Respect the privacy of all users and contributors. Do not share personal information or contact details without explicit permission. Any communication related to the EthStaker Knowledge Base should occur through public channels, unless otherwise agreed upon by all parties involved.
Report and Address Issues:
If you encounter any issues or violations of this code of conduct, please report them to the EthStaker Knowledge Base moderators. We are committed to maintaining a safe and welcoming environment for all participants, and will take appropriate action to address any concerns or misconduct.
By adhering to this code of conduct, we can create a positive and productive environment that fosters collaboration, innovation, and the advancement of knowledge within the Ethereum staking community. Thank you for your commitment to maintaining the integrity and values of the EthStaker Knowledge Base.
It is strongly recommended that you connect your node/validator to a UPS. Doing so will ensure that it does not abruptly switch off should there be a sudden loss of power.
There are many potential issues that an abrupt shutdown can cause, such as:
Database corruption
This will require you to (depending on the severity of the corruption) delete the stored DB and resync it from scratch. Depending on the speed of your SSD, this could put you offline for a while.
OS corruption
This will require an OS reinstallation. You will then need to reconfigure the machine, install an execution and consensus client, set up your validators, then get both clients in sync.
Hardware failure
If your hard drive fails, you will need to source and install a new one, then complete the above steps (Install an OS and sync the clients).
Even if a power outage doesn't damage any physical components, you might still be at risk. When power is restored, you could experience a power surge that overloads and fries components, meaning more downtime while you investigate which components are damaged and then have them replaced.
Depending on the UPS model and the OS you are using, you can configure your UPS to gracefully shut down the connected computer(s) once the battery level falls below a certain point. This is incredibly effective at protecting your data and hardware.
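As an illustration, on Ubuntu with an APC UPS connected over USB, the apcupsd package can trigger a clean shutdown automatically. The thresholds below are example values, not recommendations, and other UPS brands typically use Network UPS Tools (NUT) instead:

```bash
# Sketch assuming Ubuntu and an APC UPS connected via USB.
sudo apt install apcupsd

# Key settings in /etc/apcupsd/apcupsd.conf:
#   UPSTYPE usb        # how the UPS is attached
#   DEVICE             # leave empty for USB auto-detection
#   BATTERYLEVEL 50    # shut down when the battery falls below 50%
#   MINUTES 10         # ...or when fewer than 10 minutes of runtime remain

sudo systemctl enable --now apcupsd
apcaccess status   # verify the daemon can see the UPS
```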
My UPS (1600VA/960W) cost me roughly $200 USD and provides around an hour of power to all the connected devices. I have both my node and router connected, so in the event that there is a power outage, the node will still be online working away. I've had a few short power outages since becoming a validator so it has definitely come in handy!
If things switch off while you are sleeping or not at home, the below steps are very useful in having things start back up automatically.
In your BIOS there will most likely be a power setting, in there you should be able to find an option to have your computer switch back on once power is restored.
If you cannot find the setting, you may need to check your motherboard's user guide.
NOTE - To enter your BIOS, you will need to press a specific button after switching on the machine (and before the OS loads). The most common keys are "DEL", "F1", "F2", "F10".
This does vary between motherboards, and if you are unsure what yours is, check the POST information shown on screen when the PC starts, or check your motherboard's user guide. Or you can do what I do, which is to spread your hands out on the keyboard, press all of the aforementioned keys at once, and hope one of them works.
If you are using a hypervisor to host your nodes/validators, you should set the VMs to automatically start once the host has booted. This also saves you from manually starting them whenever you restart the host computer.
It is common practice to configure your execution node, consensus node and consensus validators as services and set them to automatically start once the OS has booted. This can be done with systemd.
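A minimal sketch of what such a unit might look like, assuming a hypothetical binary path and user (Somer Esat's guides, referenced elsewhere in this knowledge base, contain complete, client-specific examples):

```bash
# Hypothetical unit for an execution client; adjust the binary, flags,
# and user to match your own setup.
sudo tee /etc/systemd/system/execution.service > /dev/null <<'EOF'
[Unit]
Description=Execution client
Wants=network-online.target
After=network-online.target

[Service]
User=execution
ExecStart=/usr/local/bin/geth --datadir /var/lib/geth
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now execution.service   # start now and on every boot
```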
Doing the above three steps will help to minimise downtime.
If you are using a NUC, you may notice that the fan is quite loud and can be uncomfortable on the ears. This is due to a setting called turbo boost which is enabled by default.
Now, this isn't a best practice, so please don't take it as such. Instead, this should be viewed as a quality-of-life option if your NUC fan is very loud and is disturbing the peace.
Effective key management is crucial for Ethereum validators, especially at scale. Poor key management has historically been a primary reason for validators being slashed. Understanding and implementing robust key management practices is essential to maintain both the integrity and security of validator operations.
Best Practices
Systematic Tracking of Keys: Maintaining an organized record of all keys is vital. Utilize key management software to track the status and usage of each key. Regular audits and checks are necessary to ensure accuracy.
Backup Strategies: Implement a robust backup protocol for keys. Backups should be stored securely, ideally in physically different locations to mitigate risks like natural disasters or local hardware failures.
Risks of Hot Standbys: While hot standbys offer quick recovery, they can be vulnerable to attacks if not properly secured. Ensure these systems are as secure as the primary systems and limit their exposure to potential threats.
Use of Web3Signer: Incorporate Web3Signer for secure and scalable key management. It separates key management from validator duties, reducing the risk of key exposure and facilitating ease of management across multiple validators.
Integration with Secure Vaults: Store keys in secure vaults, such as hardware security modules (HSMs), which offer robust protection against physical and digital threats. Ensure these vaults are accessible only to authorized personnel and have multi-factor authentication (MFA) mechanisms.
Audit and Compliance: Regularly audit key management practices and maintain logs for compliance purposes. This not only ensures adherence to best practices but also aids in identifying potential areas of improvement.
Training and Awareness: Educate your team on the importance of key security and the potential risks of mismanagement. Regular training sessions can help maintain a high level of awareness and vigilance.
What You Need to Know
Tracking and managing a large number of keys in Ethereum validator operations is critical. This ensures that each key is accounted for and its status is known, thus reducing the risk of unauthorized access or loss.
Best Practices
Implement Key Management Software: Use specialized software designed for key management. This software should provide a clear overview of all keys, their statuses, and usage history.
Regular Audits: Conduct periodic audits to verify the status and security of each key. This can be a mix of automated systems and manual checks.
Example: Consider using a tool like HashiCorp's Vault, which allows you to securely manage keys and perform automated audits. Implementing automated alerts for any irregularities in key usage can provide early warning signs of potential security breaches.
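To make the Vault example concrete, here is a hedged sketch using Vault's CLI; the mount path, key names, and metadata fields are hypothetical, and the audit log is what enables the automated alerting mentioned above:

```bash
# Hypothetical layout: a KV v2 secrets engine dedicated to validator keys.
vault secrets enable -path=validators kv-v2

# Store a keystore together with tracking metadata.
vault kv put validators/keys/validator-042 \
    keystore=@keystore-042.json \
    status=active owner=ops-team

# Enable file-based audit logging so every key access is recorded.
vault audit enable file file_path=/var/log/vault_audit.log
```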
Suggestions
Keep a detailed log for each key, noting when it was used and by whom.
Regularly review and update your key management policies to adapt to new threats and technological advancements.
What You Need to Know
Backups are essential for key management, providing a safety net in case of key loss or corruption.
Best Practices
Secure and Redundant Storage: Store backups in multiple, physically secure locations. Use encryption to protect backup data.
Regular Testing: Periodically test backup keys to ensure they work as expected.
Example: Create encrypted backups of keys and store them in different geographical locations. For instance, one copy could be in a bank safe deposit box, while another is in a secure cloud storage service.
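As a rough sketch using standard tools (the paths are placeholders, and passphrase handling and storage destinations are left to your own process):

```bash
# Bundle the keystores, encrypt symmetrically with AES-256, and checksum.
tar -czf keystores-backup.tar.gz /var/lib/validator/keystores
gpg --symmetric --cipher-algo AES256 keystores-backup.tar.gz
sha256sum keystores-backup.tar.gz.gpg > keystores-backup.tar.gz.gpg.sha256

# Remove the unencrypted bundle once the encrypted copy is verified.
shred -u keystores-backup.tar.gz

# Restore test (do this regularly, on a machine that never signs):
gpg --decrypt keystores-backup.tar.gz.gpg | tar -tz
```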
Suggestions
Develop a clear, written procedure for backup and restoration processes.
Never assume that a backup will work until it has been tested!
What You Need to Know
Hot standbys can be a double-edged sword: they offer quick recovery but can be vulnerable if not properly managed.
Best Practices
Secure Environment: Ensure that hot standby systems are as secure as the primary system. They should be in a controlled environment with limited access.
Regular Updates and Patches: Keep the software on hot standbys up-to-date to protect against vulnerabilities.
Example: If using a hot standby server for key management, it should be in a locked, climate-controlled server room with access restricted to authorized personnel only, and with all the same security measures as the primary server.
Suggestions
Regularly test the security of hot standby systems to ensure they are not vulnerable to attacks.
Limit network exposure of hot standbys and monitor them for unusual activities.
What You Need to Know
Web3Signer provides a secure and flexible way to manage keys for Ethereum validators, separating key management from operational duties.
Best Practices
Integration with Existing Systems: Integrate Web3Signer with your existing infrastructure to streamline key management processes.
Secure Configuration: Ensure that Web3Signer is configured securely, with access controls and audit logging.
Example: Use Web3Signer to manage validator keys while keeping the keys in secure hardware wallets. This way, the keys are not exposed to the internet and are protected against online attacks.
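As a rough sketch, a Web3Signer instance for mainnet with a slashing-protection database might be launched as follows; the hostnames and credentials are placeholders, and you should check the Web3Signer documentation for the options your version supports:

```bash
# Hypothetical invocation; validator clients then point their remote-signer
# URL at this host instead of holding keys locally.
web3signer \
  --key-store-path=/var/lib/web3signer/keys \
  --http-listen-host=10.0.0.5 \
  --http-listen-port=9000 \
  eth2 \
  --network=mainnet \
  --slashing-protection-db-url="jdbc:postgresql://db.internal/web3signer" \
  --slashing-protection-db-username=web3signer \
  --slashing-protection-db-password=changeme   # use a secrets manager in production
```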
Suggestions
Regularly update Web3Signer to the latest version to benefit from security updates and new features.
Train staff on using Web3Signer effectively and securely.
What You Need to Know
Using secure vaults like HSMs (Hardware Security Modules) is essential for storing sensitive keys securely.
Best Practices
Restricted Access: Limit physical and digital access to the vaults to authorized personnel only.
Multi-Factor Authentication: Implement MFA for accessing the vaults.
Example: Employ HSMs to store the master keys and use them to generate and manage subordinate keys. HSMs can be configured to allow access only after multiple authentication factors are verified.
Suggestions
Regularly audit the physical and network security of the vaults.
Consider using vaults that offer tamper-evident features and logging capabilities.
What You Need to Know
Regular audits ensure that key management practices are up to standard and comply with relevant regulations.
Best Practices
Regular Internal and External Audits: Conduct both internal reviews and external audits to ensure compliance with best practices.
Maintain Detailed Logs: Keep comprehensive logs of key management activities.
Example: Engage a third-party security firm to conduct an annual audit of your key management practices. They can provide an unbiased view and suggest improvements.
Suggestions
Use automated tools to maintain logs and facilitate audits.
Stay updated with industry standards and regulations related to key management.
What You Need to Know
Human error can often be a weak link in key management. Regular training and awareness programs can mitigate this risk.
Best Practices
Regular Training Sessions: Conduct periodic training for staff on key management best practices and security protocols.
Awareness Campaigns: Keep staff informed about the latest security threats and the role of key management in mitigating these threats.
Example: Organize quarterly training workshops for staff, covering topics like key security, threat scenarios, and the importance of following protocols.
Suggestions
Use real-world case studies and examples in training to highlight the importance of proper key management.
Encourage a culture of security awareness within the organization.
For operators running hundreds of Ethereum validators, maintaining high uptime is not just a goal; it's a necessity. In this context, understanding and effectively managing failover and synchronization mechanisms are crucial.
What is Failover?
Failover is a resilience strategy employed to ensure the continuous operation of the beacon nodes. In the event of a failure or downtime in the primary node, failover mechanisms automatically switch operations to a standby node to maintain uninterrupted service. Beacon nodes are pivotal in maintaining the network's consensus by aggregating and disseminating information about validators. Their uninterrupted operation is crucial for the consistent performance of validators.
Implementing Failover:
Infrastructure Setup: This involves setting up secondary nodes that are always in sync with the primary node, ready to take over instantly in case of a failure.
Automated Monitoring and Switching: Implement systems that continuously monitor the health of the primary beacon node and automate the switch to the standby node when anomalies are detected (a minimal health-check sketch follows this list).
Regular Testing: Regularly test failover mechanisms to ensure they work seamlessly when needed. This includes simulating failures and monitoring the switch-over process.
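As an illustration of the monitoring point above, the standard beacon node API exposes a health endpoint that a watchdog can poll. This is a minimal sketch: the URL is a placeholder, and the failover action itself is left as a stub, since production setups typically use a load balancer or orchestrator for the actual switch.

```bash
#!/usr/bin/env bash
# Poll the primary beacon node's standard health endpoint:
# 200 = healthy, 206 = syncing, 503 = unhealthy.
PRIMARY="http://primary-bn.internal:5052"   # placeholder address

status=$(curl -s -o /dev/null -w '%{http_code}' \
  --max-time 5 "$PRIMARY/eth/v1/node/health")

if [ "$status" != "200" ]; then
  echo "Primary unhealthy (HTTP $status), initiating failover" >&2
  # Stub: repoint validator clients, update DNS, or flip the load balancer.
fi
```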
For Ethereum node operators managing a large number of validators, high uptime is crucial. It is essential to eliminate as many single points of failure as possible. While performance is important, ensuring that validators remain online and operational at all times is paramount.
Threshold Signing Setup: Threshold signing involves distributing the signing responsibility among multiple entities. A transaction or a block is only valid when a certain number of these entities (the threshold) agree and provide their signatures.
Best Practices:
Distribute signers across different physical and network environments to reduce the risk of simultaneous failures.
Regularly test the threshold mechanism to ensure it functions correctly under various scenarios.
Keep the threshold number optimal to balance between security and efficiency.
Active/Passive (Stand-by) Client Setup: This setup involves having one active validator client and one or more passive clients. The passive clients remain in sync and are ready to take over immediately if the active client fails.
Slashing Protection:
It's critical to implement slashing protection mechanisms to prevent the validator from being penalized due to accidental double-signing, which can occur if both active and passive clients become active simultaneously. Utilize built-in slashing protection features in client software and maintain a robust system for tracking and managing validator keys.
Multiple Beacon Node Connections:
Some validator clients support connecting to multiple beacon nodes. This reduces reliance on a single beacon node and adds an extra layer of redundancy. Configure the validator clients to connect to several beacon nodes, ideally hosted in different locations or by different providers.
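For example, some clients accept a comma-separated list of beacon nodes. In Lighthouse's validator client this looks roughly like the following (the URLs are placeholders; other clients have their own equivalents):

```bash
# The VC falls back through the listed beacon nodes in order.
lighthouse vc \
  --beacon-nodes "http://bn-local:5052,http://bn-dc2.internal:5052,http://bn-provider.example:5052"
```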
Geographical Distribution:
Hosting infrastructure in multiple data centers, zones, or regions can safeguard against regional outages.
Balancing Factors: Operators must balance the cost, performance, and uptime benefits. Cross-region redundancy increases uptime but can add latency and costs.
Best Practices:
Evaluate critical points in the infrastructure and determine which components would benefit most from geographical redundancy.
Regularly review and update the disaster recovery and failover plans to ensure they are effective across different regions.
Distributed Validator Technology (DVT) offers a way to distribute the responsibilities of a single validator across multiple nodes. It could potentially provide extra redundancy with less slashing risk. Experiment with DVT in a controlled environment to assess its impact on redundancy and slashing risks. Closely monitor the performance and reliability of DVT setups and compare them with traditional setups. The current solutions available are Obol and SSV.
Achieving high uptime for large-scale Ethereum validator operations involves a multi-faceted approach that includes redundancy at various levels, careful infrastructure planning, and innovative technologies like threshold signing and DVT. Balancing cost, performance, and uptime is key, and regular testing and updates to the setup are crucial to maintain optimal operation.
This is a living documentation site, meaning we need the community's help to maintain and update the content. Any contribution, from writing whole sections and translations to correcting spelling and grammar mistakes, will be greatly appreciated.
Use the GitBook invite link to suggest edits and new content directly on GitBook.
You can earn GitPOAPs by contributing directly to the EthStaker Knowledge Base (a contributor↗) and by asking a question that leads to content being created (a supporter↗).
To suggest changes or add new content please visit our EthStaker Github↗ or if you have any questions please join our Discord↗.
Please create a pull request for any changes you want to make and we'll review it as soon as possible.
Please use these notes when writing for this knowledge base to maintain a standardized format.
Use relative links ↗ to navigate between different files within this knowledge base.
[Other file link](other_file.md)
→ Other file link
Use anchor links to headings within the same file.
[Anchor link](#heading-anchor)
→ Anchor link
Combine relative links to other files with anchor links.
[Other file anchor link](other_file.md#heading-anchor)
→ Other file anchor link
Show when a link is referencing an external site by adding the ↗ icon at the end of the link.
[External site link ↗](https://example.com)
→ External site link ↗
Create an image that's also a link.
[![image-text](https://some.site/your-image.jpg)](https://some.site/your-link.html)
The tables of contents are created using the VSCode extension Markdown All in One ↗. Use the command Create Table of Contents (in the VS Code Command Palette ↗) to insert a new table of contents; it will then be updated automatically whenever the file is saved.
Add the tag <!-- omit in toc --> to any headings you do not want included in the Table of Contents.
When adding items to the Glossary and FAQ it's important that they remain in alphabetical order so it's easier to navigate. As there is no native way to achieve this in Markdown, you can use this bash script to reorder the headings.
Create a new file named alphabetical-ordering.sh (this file name has been added to the .gitignore file, so it won't be committed).
Edit the new file with your preferred text editor.
Run the script.
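The script itself is maintained in the repository; as a rough sketch of the approach (not the original script), something like the following reorders the ## sections of a file alphabetically:

```bash
#!/usr/bin/env bash
# Rough sketch: split a markdown file at each "## " heading, print the
# preamble, then the sections sorted by their heading. Assumes GNU
# coreutils and fewer than 100 sections (csplit's 2-digit suffix).
set -euo pipefail
tmp=$(mktemp -d)
csplit --quiet --prefix="$tmp/sec-" "$1" '/^## /' '{*}'
cat "$tmp/sec-00"                    # everything before the first heading
for f in "$tmp"/sec-*; do
  [ "$f" = "$tmp/sec-00" ] && continue
  printf '%s\t%s\n' "$(head -n1 "$f")" "$f"
done | sort -f | cut -f2 | while read -r f; do cat "$f"; done
rm -rf "$tmp"
```

Usage: ./alphabetical-ordering.sh glossary.md > glossary-sorted.md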
This was a quick script, so if you have any improvements please update it here!
Somer Esat has written great guides with service-file examples that can be referenced. There are three examples in that guide: "geth.service", "lighthousebeacon.service" and "lighthousevalidator.service".
To switch this option off, disable Turbo Boost in your NUC's BIOS (typically under the performance or processor settings).
The information in the Scaled Node Operators section has been written and reviewed by Igor Mandrigin and Gateway.fm, a leading large scale Ethereum staking infrastructure provider.
A firewall is a security mechanism that monitors both incoming and outgoing network connections and can either accept or reject traffic based on a set of configurable rules. It is heavily recommended to have one configured to improve the security of your node/validator setup.
There are two kinds of firewalls:
Software firewalls are run on the individual machine and protect it from other devices within the local network that it sits on.
It is recommended to have all traffic dropped by default and to set up individual rules allowing it where required, that way traffic can only enter the machine where it is explicitly allowed.
For example, if you run your execution node and consensus node on different machines, you can set up a firewall rule on your execution node to only allow traffic on port 8551 from the IP address of your consensus node.
If you are running Ubuntu Server, a firewall is already installed by default via the ufw package; you just have to configure and enable it.
If you are running Geth and Prysm on different machines, you could set up a config along the lines of the sketches below (these assume the default ports: 30303 for Geth p2p, 13000/12000 for Prysm p2p, and 8551 for the Engine API).
Execution (Assuming Geth with IP 192.168.1.50)
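A sketch of rules for the execution machine, assuming Geth's default p2p port; outbound traffic is locked down in the spirit of the description below, so you may need extra outbound rules (e.g. 80/443 for OS updates):

```bash
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow out 53/udp        # DNS
sudo ufw allow out 123/udp       # NTP time sync
sudo ufw allow 30303/tcp         # Geth p2p (inbound)
sudo ufw allow 30303/udp         # Geth discovery (inbound)
sudo ufw allow out 30303/tcp     # dial peers (most listen on 30303)
sudo ufw allow out 30303/udp
sudo ufw allow from 192.168.1.51 to any port 8551 proto tcp  # Engine API, consensus node only
sudo ufw enable
```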
Consensus (Assuming Prysm with IP 192.168.1.51)
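And a matching sketch for the consensus machine, assuming Prysm's default p2p ports (13000/TCP, 12000/UDP):

```bash
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow out 53/udp        # DNS
sudo ufw allow out 123/udp       # NTP time sync
sudo ufw allow 13000/tcp         # Prysm p2p (inbound)
sudo ufw allow 12000/udp         # Prysm discovery (inbound)
sudo ufw allow out 13000/tcp     # dial peers
sudo ufw allow out 12000/udp
sudo ufw allow out to 192.168.1.50 port 8551 proto tcp  # Engine API on the execution node
sudo ufw enable
```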
Very secure! No traffic in or out unless it is strictly Ethereum related. A full list of external ports by execution and consensus client can be found here.
From here additional ports can be unblocked such as SSH, the consensus HTTP API (If you are also running your validator on another machine) or the execution RPC API (If you wish to interact with the Ethereum network using your own node).
Hardware firewalls are run on dedicated devices (Usually your router) and can manage traffic both within networks and between networks.
One way to really fortify your setup is to configure a dedicated subnet on your router solely for your nodes/validators and have the firewall drop all traffic from other subnets destined for this subnet (also known as blocking all RFC 1918 traffic).
Should your regular everyday computer (Or any other device on your network) become compromised, the infiltrator won't even know about your nodes as they are sitting on another subnet that is completely blocked off.
Maximal Extractable Value (MEV) has become a pivotal aspect of blockchain operation, especially for large-scale node operators. While MEV offers potential rewards, it also introduces unique challenges in a high-volume environment. This guide delves into the critical considerations for implementing MEV at scale.
Challenge: Each beacon node typically aligns with a specific set of builders, complicating the sharing of beacon nodes across multiple validator clients (VCs) if they require different relays to be used.
Proposed Solution: Implementing a builder API proxy can streamline this process. This proxy would act as an intermediary, routing each validator client's requests to the appropriate relays. While this solution is still in the conceptual phase, it promises to simplify the architecture for large-scale operations.
Strategy: Some validator clients allow the configuration of different beacon nodes for distinct operations. This flexibility means operators can designate dedicated beacon nodes specifically for block proposing.
Advantage: This approach enables more efficient resource allocation, ensuring that critical tasks like block proposing are handled by specialized, optimized nodes.
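As one hedged example, recent versions of Lighthouse's validator client expose a flag for exactly this; other clients have their own mechanisms:

```bash
# Regular duties use the shared beacon nodes; block proposals prefer a
# dedicated, MEV-optimized node (hostnames are placeholders).
lighthouse vc \
  --beacon-nodes "http://bn-shared-1:5052,http://bn-shared-2:5052" \
  --proposer-nodes "http://bn-proposer.internal:5052"
```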
Critical Factor: Node response time is a decisive element in MEV, especially for operators serving clients with specific requirements, such as producing only OFAC-compliant blocks.
Compliance and Efficiency: The need to use OFAC-compliant MEV relays means that nodes cannot revert to producing non-MEV blocks if those vanilla blocks are not also OFAC-compliant. Consequently, operators should prioritize availability and compliance over profitability in these cases.
Optimization: Ensuring high availability and quick response times, while adhering to compliance requirements, is crucial. This may involve investing in robust infrastructure and optimizing network connectivity.
MEV at scale presents a complex landscape for large-scale node operators. Balancing efficiency, compliance, and profitability requires a nuanced approach, blending innovative technical solutions with strategic operational planning. As the MEV landscape evolves, so too must the strategies employed by those at the forefront of blockchain technology.
The information in the Scaled Node Operators section has been written and reviewed by Igor Mandrigin and Gateway.fm, a leading large scale Ethereum staking infrastructure provider.
Welcome to the EthStaker Knowledge Base Content Contribution Ideas page! We're excited to have you join our community and help grow our knowledge base. In order to make the process of contributing more accessible, we've compiled a list of topics and ideas for new content that you can contribute to. Our goal is to make Ethereum staking more accessible, informative, and engaging for everyone.
If you're interested in contributing to one of these topics or have your own idea, please reach out to us on the #knowledge-base Discord channel. Our team and community members are ready to help you get started, provide guidance, and answer any questions you might have.
Staking Tutorials
Step-by-step guides for setting up a validator node
How to stake using popular wallets and tools
Troubleshooting common staking issues
Validator performance optimization tips
Security & Privacy
Best practices for securing validator keys
Privacy considerations for Ethereum stakers
Staking Economics
Understanding the dynamics of staking rewards and APR
The role of Ethereum staking in DeFi
Community & Ecosystem
Profiles of popular staking services and pools
Staking events, meetups, and conferences
How to Contribute:
Join the EthStaker Discord server: discord.com/invite/ethstaker
Head over to the #knowledge-base channel.
Express your interest in contributing to a specific topic or idea or pitch your own idea.
Collaborate with our team and community members to gather resources, guidance, and support.
Create your content, following the EthStaker Knowledge Base content guidelines.
Submit your content for review.
We're excited to have you on board and look forward to working together to make Ethereum staking more accessible and engaging for everyone. Let's build a stronger, more knowledgeable Ethereum staking community, one contribution at a time!