`client` (Optional)

If true, only ever be a DHT client. If false, be a DHT client until told to be a DHT server via `setMode`.

Default: `false`
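For a node that should never be promoted to DHT server, this can be set at construction time. A minimal sketch, assuming the option name `client` on `KadDHTInit` and the `kadDHT` factory from `@libp2p/kad-dht`:

```typescript
import { createLibp2p } from 'libp2p'
import { kadDHT } from '@libp2p/kad-dht'

// Sketch: a client-only DHT. A real node also needs transports,
// connection encrypters and stream muxers configured.
const node = await createLibp2p({
  services: {
    dht: kadDHT({
      client: true // never switch to server mode via setMode
    })
  }
})
```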
`datastorePrefix` (Optional)

The datastore prefix to use.

Default: `"/dht"`
`initialQuerySelfInterval` (Optional)

During startup the self-query runs at a shorter interval to ensure the containing node can respond to queries quickly. Set that interval here, in ms.

Default: `1000`
`kBucketSize` (Optional)

How many peers to store in each kBucket. Once there are more than this number of peers for a given prefix in a kBucket, the node will start to ping existing peers to see if they are still online; if they are offline they will be evicted and the new peer added.

Default: `20`
`kBucketSplitThreshold` (Optional)

The threshold at which a kBucket will be split into two smaller kBuckets. KBuckets will not be split once the maximum trie depth is reached (controlled by the `prefixLength` option), so one can replicate go-libp2p's accelerated DHT client by (for example) setting `kBucketSize` to `Infinity` and `kBucketSplitThreshold` to 20.

Default: `kBucketSize`
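As the description suggests, go-libp2p's accelerated DHT client can be approximated by making buckets unbounded in size while still splitting them at a fixed threshold. A sketch using the `kadDHT` factory from `@libp2p/kad-dht`, with the values taken from the example above:

```typescript
import { kadDHT } from '@libp2p/kad-dht'

// Sketch: never evict peers on bucket size, split buckets at 20 peers
const dht = kadDHT({
  kBucketSize: Infinity,
  kBucketSplitThreshold: 20
})
```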
`logPrefix` (Optional)

The logging prefix to use.

Default: `"libp2p:kad-dht"`
`maxInboundStreams` (Optional)

How many parallel incoming streams to allow on the DHT protocol per connection.

Default: `32`
`maxOutboundStreams` (Optional)

How many parallel outgoing streams to allow on the DHT protocol per connection.

Default: `64`
`metricsPrefix` (Optional)

The metrics prefix to use.

Default: `"libp2p_kad_dht"`
`network` (Optional)

Dynamic network timeout settings for sending messages to peers.
`pingNewContactConcurrency` (Optional)

How many peers to ping in parallel when deciding if they should be added to the routing table or not.

Default: `10`
`pingNewContactMaxQueueSize` (Optional)

How large the queue of peers waiting to be pinged is allowed to grow.

Default: `100`
`pingNewContactTimeout` (Optional)

Settings for how long to wait, in ms, when pinging DHT peers to decide if they should be added to the routing table or not.
`pingOldContactConcurrency` (Optional)

How many peers to ping in parallel when deciding if they should be evicted from the routing table or not.

Default: `10`
`pingOldContactMaxQueueSize` (Optional)

How large the queue of peers waiting to be pinged is allowed to grow.

Default: `100`
`pingOldContactTimeout` (Optional)

Settings for how long to wait, in ms, when pinging DHT peers to decide if they should be evicted from the routing table or not.
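The ping-related options can be tuned together. A hedged sketch, assuming the `pingNewContact*`/`pingOldContact*` option names on `KadDHTInit` and showing the documented defaults explicitly:

```typescript
import { kadDHT } from '@libp2p/kad-dht'

// Sketch: new contacts are pinged before being added to the routing
// table, old contacts are pinged before being evicted from it
const dht = kadDHT({
  pingNewContactConcurrency: 10,   // parallel pings for candidate peers
  pingNewContactMaxQueueSize: 100, // bound the candidate-ping backlog
  pingOldContactConcurrency: 10,   // parallel pings for existing peers
  pingOldContactMaxQueueSize: 100  // bound the eviction-check backlog
})
```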
`prefixLength` (Optional)

How many bits of the KAD-ID of peers to use when creating the routing table.

The routing table is a binary trie with peers stored in the leaf nodes. The larger this number gets, the taller the trie can grow and the more peers can be stored. Storing more peers means fewer lookups (and network operations) are needed to locate a certain peer, but also that more memory is consumed.

Default: `32`
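To make the trade-off concrete: since the trie is binary and at most `prefixLength` bits deep, up to 2^prefixLength leaf kBuckets can exist. A self-contained illustration (not the library's internal code) of how the first `prefixLength` bits of a KAD-ID determine a peer's position in the trie:

```typescript
// Illustration only: read the first `prefixLength` bits of a KAD-ID
// (most-significant bit first) as the peer's path through the trie
function kadIdPrefixBits (kadId: Uint8Array, prefixLength: number): string {
  let bits = ''
  for (let i = 0; i < prefixLength; i++) {
    const byte = kadId[Math.floor(i / 8)]
    bits += (byte >> (7 - (i % 8))) & 1
  }
  return bits
}

// Two KAD-IDs that differ only after the first byte share the same
// 8-bit prefix, so with prefixLength = 8 they land in the same leaf
const a = Uint8Array.from([0b10110000, 0xff])
const b = Uint8Array.from([0b10110000, 0x00])
console.log(kadIdPrefixBits(a, 8) === kadIdPrefixBits(b, 8)) // true
```

A larger `prefixLength` lets such peers be separated into deeper, more specific buckets, at the cost of tracking more peers overall.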
`protocol` (Optional)

The network protocol to use.

Default: `"/ipfs/kad/1.0.0"`
`providers` (Optional)

Initialization options for the Providers component.
`querySelfInterval` (Optional)

How often to query our own PeerId in order to ensure we have a good view on the KAD address space local to our PeerId.
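Both self-query intervals can be tuned together. A sketch assuming the option names `initialQuerySelfInterval` and `querySelfInterval` on `KadDHTInit`, with values in ms:

```typescript
import { kadDHT } from '@libp2p/kad-dht'

// Sketch: self-query every second during startup, then every 5 minutes
const dht = kadDHT({
  initialQuerySelfInterval: 1000,
  querySelfInterval: 300_000
})
```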
`reprovide` (Optional)

Initialization options for the Reprovider component.
`selectors` (Optional)

Record selectors.
`validators` (Optional)

Record validators.
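In the libp2p record interfaces, validators and selectors are keyed by record namespace: a validator rejects an invalid record by throwing, and a selector returns the index of the best record among candidates. A hedged sketch with a hypothetical `myns` namespace:

```typescript
import { kadDHT } from '@libp2p/kad-dht'

// Sketch: naive validator and selector for a hypothetical namespace
const dht = kadDHT({
  validators: {
    myns: async (key: Uint8Array, value: Uint8Array): Promise<void> => {
      if (value.byteLength === 0) {
        throw new Error('Empty records are not valid')
      }
    }
  },
  selectors: {
    myns: (key: Uint8Array, records: Uint8Array[]): number => {
      return 0 // naively prefer the first record seen
    }
  }
})
```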
`allowQueryWithZeroPeers` (Optional)

After startup, by default all queries will be paused until the initial self-query has run and there are some peers in the routing table. Pass true here to disable this behaviour.

Default: `false`
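To start querying before the routing table has been populated, the flag described above can be enabled. A sketch assuming the option name `allowQueryWithZeroPeers`:

```typescript
import { kadDHT } from '@libp2p/kad-dht'

// Sketch: do not wait for the initial self-query before allowing queries
const dht = kadDHT({
  allowQueryWithZeroPeers: true
})
```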