
eth/downloader: separate state sync from queue #14460

Merged
merged 19 commits from the new-state-sync branch into ethereum:master on Jun 22, 2017

Conversation

@fjl (Contributor) commented May 11, 2017

Scheduling of state node downloads hogged the downloader queue lock when
new requests were scheduled. This caused timeouts for other requests.
With this change, state sync is fully independent of all other downloads
and doesn't involve the queue at all.

State sync is started and checked on in processContent. This is slightly
awkward because processContent doesn't have a select loop. Instead, the
queue is closed by an auxiliary goroutine when state sync fails. We
tried several alternatives to this but settled on the current approach
because it's the least amount of change overall.

Handling of the pivot block has changed slightly: the queue previously
prevented import of pivot block receipts before the state of the pivot
block was available. In this commit, the receipt will be imported before
the state. This causes an annoyance where the pivot block is committed
as fast block head even when state downloads fail. Stay tuned for more
updates in this area ;)
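For orientation, here is a minimal sketch of the arrangement described above, with assumed helper names (syncState, Wait, Cancel, Close) rather than the PR's exact code: state sync runs as its own object, and an auxiliary goroutine closes the queue when it fails, so processContent, which has no select loop, still gets unblocked.

// Hypothetical sketch: start the independent state sync and tie its failure
// to queue shutdown so the blocking wait inside processContent returns.
stateSync := d.syncState(latest.Root) // assumed: returns a handle with Wait()/Cancel()
defer stateSync.Cancel()

go func() {
	if err := stateSync.Wait(); err != nil {
		d.queue.Close() // wake up processContent
	}
}()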

-	queued := q.blockTaskQueue.Size() + q.receiptTaskQueue.Size() + q.stateTaskQueue.Size()
-	pending := len(q.blockPendPool) + len(q.receiptPendPool) + len(q.statePendPool)
+	queued := q.blockTaskQueue.Size() + q.receiptTaskQueue.Size()
+	pending := len(q.blockPendPool) + len(q.receiptPendPool)
 	cached := len(q.blockDonePool) + len(q.receiptDonePool)
Member

The invariant of this function changed a bit. Idle will now return true even if state sync is still in progress. Not sure yet how this affects sync overall, but you should be aware. If it's fine, the docs need to be updated to state so.

// Stop before processing the pivot block to ensure that
// resultCache has space for fsHeaderForceVerify items. Not
// doing this could leave us unable to download the required
// amount of headers.
if i > 0 || len(q.stateTaskPool) > 0 || q.pendingNodeDataLocked() > 0 {
	return i
}
Member

Again a tiny invariant change. I don't yet see how this could have an adverse effect, just thought I'd mention it. Previously the code handled the pivot block fully independently from any other blocks. The new code may place the pivot block at the end of a fast sync import batch. It should work, just mentioning the behavioral change.

for ; n >= 0 && len(s.retry) > 0; n-- {
	items = append(items, s.retry[len(s.retry)-1])
	s.retry = s.retry[:len(s.retry)-1]
}
Member

You are not maintaining a set of items that a node does not have. If you have for example a single peer, which does not have any data (e.g. it's also fast syncing), this loop will request the same data over and over again indefinitely from the peer. You need to mark any data unavailable from a peer and not request it ever again from there.
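A sketch of such per-peer bookkeeping (illustrative names; MarkLacking appears later in this diff, the rest is an assumption; common.Hash is go-ethereum's, sync is the standard library):

// Remember which hashes a peer has proven unable to deliver, and consult
// this set before assigning those hashes to the same peer again.
type lackingSet struct {
	mu      sync.Mutex
	lacking map[common.Hash]struct{}
}

func (l *lackingSet) MarkLacking(h common.Hash) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.lacking == nil {
		l.lacking = make(map[common.Hash]struct{})
	}
	l.lacking[h] = struct{}{}
}

func (l *lackingSet) Lacks(h common.Hash) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	_, ok := l.lacking[h]
	return ok
}

The assignment loop above would then skip any hash for which the peer's Lacks check is true instead of re-requesting it indefinitely.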

Member

Note however that maintaining such a list and avoiding reassigning the same hash over and over again will cause the next line to potentially OOM the node, because there would be no memory cap on the total size of the pending state retrievals.

The original code moved trie requests from the scheduler.Missing to an intermediate pool (similar to your retry slice, but one that contained all pending ones), and only requested new missing ones if they fit into the pool. You could do a similar thing by maintaining a counter with the total pending requests, and do:

return append(items, s.sched.Missing(min(n, maxPend - curPending))...)

That would ensure that we don't grow the number of outstanding requests unboundedly.
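A small sketch of that counter-based cap; fillTasks, maxPend and numPending are assumed names, only s.sched.Missing comes from the comment above:

// Only ask the scheduler for as many new hashes as still fit under the cap
// on outstanding state retrievals.
func (s *stateSync) fillTasks(n int, items []common.Hash) []common.Hash {
	if room := maxPend - s.numPending; room > 0 {
		if n > room {
			n = room
		}
		missing := s.sched.Missing(n)
		s.numPending += len(missing)
		items = append(items, missing...)
	}
	return items
}

The counter would of course have to be decremented again whenever a request is delivered, retried or timed out.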

s.fetching[peer.id] = s.spawnFetch(peer, items, rtt)
}
peers = peers[1:]
}
Member

You need to handle the case where literally no peers can service the requests defined by the trie root. This is an attack vector where someone feeds you an invalid head block (you can't verify its validity when starting the sync). If no peer can actually provide data for the head root hash, the root must be considered invalid (an attack) and the peer you're syncing with (the master peer) must be dropped.
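A hedged sketch of that safeguard (the variable and error names are illustrative): if an assignment round hands out nothing, nothing is in flight, and work still remains, then no connected peer can serve the announced root, so the sync should abort and the master peer be dropped.

// After a task-assignment round over all idle peers:
if assigned == 0 && len(s.fetching) == 0 && len(s.retry) > 0 {
	// Nobody can provide data for the head root hash; treat it as an
	// invalid head (or an attack) and let the caller drop the master peer.
	return errNoPeersWithState
}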


func (s *stateSync) spawnFetch(p *peer, nodes []common.Hash, timeout time.Duration) *stateReq {
	log.Trace("Requesting node data", "peer", p.id, "items", len(nodes), "timeout", timeout)
	req := &stateReq{p, nodes, make(chan struct{})}
Member

Please use named fields.
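That is, something like the following, with the field names inferred from how req is used elsewhere in the diff (req.peer, req.items, req.cancelTimeout):

req := &stateReq{
	peer:          p,
	items:         nodes,
	cancelTimeout: make(chan struct{}),
}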

s.requestDone(req)
s.retry = append(s.retry, req.items...)
for _, hash := range req.items {
	req.peer.MarkLacking(hash)
Member

A timeout should not mark them lacking. The node will try to request smaller batches, and the same nodes can potentially be retrieved later too; there's no need to never retry them.


func (s *stateSync) requestDone(req *stateReq) {
	delete(s.fetching, req.peer.id)
	close(req.cancelTimeout)
Member

Convert req.cancelTimeout into req.timeoutTimer and do a req.timeoutTimer.Stop() instead here.
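A sketch of that conversion (s.timeout and s.cancel come from the surrounding diff; timeoutTimer is the field name the comment suggests):

// When spawning the request, arm a timer instead of carrying a cancel channel:
req.timeoutTimer = time.AfterFunc(timeout, func() {
	select {
	case s.timeout <- req:
	case <-s.cancel:
		// Sync torn down; don't block (and leak this goroutine) forever.
	}
})

// ...and in requestDone, instead of close(req.cancelTimeout):
req.timeoutTimer.Stop()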

	batch = s.db.NewBatch()
)
for _, blob := range pack.states {
	hash := common.BytesToHash(crypto.Keccak256(blob))
Member

You probably want to reuse a hasher throughout the for loop. There are almost 400 states per request, so there's no reason to do so many hasher allocs and deallocs.
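For example (a sketch; the exact hasher constructor is an assumption, any reusable Keccak-256 hash.Hash will do):

// Allocate one Keccak-256 state and reuse it for every blob in the batch.
hasher := sha3.NewLegacyKeccak256() // e.g. golang.org/x/crypto/sha3
for _, blob := range pack.states {
	hasher.Reset()
	hasher.Write(blob)
	hash := common.BytesToHash(hasher.Sum(nil))
	// ... process (hash, blob) exactly as before
	_ = hash
}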

}
s.updateStats(n, time.Since(procStart))

case req := <-s.timeout:
Member

This select construct has the same issue as the current code:

select {
  case packI := <-deliver:
    // Process
  case req := <-s.timeout:
}

Process can take a long time. By that time all timeouts will fire (even if all the nodes actually delivered too). The end result is that this select statement picks deliver/timeouts randomly if both are available, so it will consider requests timed out that are actually queued up on the deliver channel.
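One way around that bias, and roughly what the "response or timeout" comment further down hints at, is to funnel both outcomes through a single channel so they are handled in arrival order (the response field below is an assumed convention, not this PR's exact code):

case req := <-s.deliver:
	// Responses and timeouts arrive on the same channel, in order, so a
	// response queued before its timer fired is processed first and a
	// late timeout for an already-answered request can simply be skipped.
	if req.response == nil { // assumed convention: nil response means the timer fired
		// timeout path: put req.items back for retry, adjust the peer, ...
	} else {
		// normal delivery path, as before
	}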

for _, hash := range req.items {
	req.peer.MarkLacking(hash)
}
req.peer.SetNodeDataIdle(0)
Member

Previously, if a node timed out while requesting a single entry, it was dropped. This is required, otherwise a single really bad peer has the capacity to stall the entire sync. It's fine to have slow peers after sync is done, but sync requires heavy downloads, so we can't afford slow peers. That's where the hard timeouts came into play in the original code.

@karalabe karalabe added this to the 1.6.2 milestone May 24, 2017
@remyroy remyroy mentioned this pull request May 24, 2017
@fjl fjl force-pushed the new-state-sync branch 3 times, most recently from 4c8d741 to 203158d on May 29, 2017 15:36
@fjl (Contributor Author) commented May 29, 2017

@Arachnid PTAL

@fjl fjl requested a review from Arachnid May 29, 2017 15:44
@@ -212,7 +216,7 @@ func New(mode SyncMode, stateDb ethdb.Database, mux *event.TypeMux, hasHeader he
// these are zero.
func (d *Downloader) Progress() ethereum.SyncProgress {
 	// Fetch the pending state count outside of the lock to prevent unforeseen deadlocks
-	pendingStates := uint64(d.queue.PendingNodeData())
+	// pendingStates := uint64(d.queue.PendingNodeData())
Contributor

Please delete commented line.

case req := <-d.trackStateReq:
	activeReqs[req.peer.id] = req
	req.timer = time.AfterFunc(req.timeout, func() {
		select {
Contributor

Can you leave a comment about the reason for this?

@fjl fjl removed the in progress label May 29, 2017
case <-s.cancel:
	return errCancelStateFetch
case req := <-s.deliver:
	// response or timeout
@karalabe (Member) May 30, 2017

This path doesn't get called on timeout, so any node which timed out will forever get stuck.

}

pivot := d.queue.FastSyncPivot()
// processBlocks takes fetch results from the queue and tries to import them
Member

name mismatch

@fjl fjl modified the milestones: 1.6.3, 1.6.2 May 31, 2017
@karalabe karalabe modified the milestones: 1.6.4, 1.6.3 Jun 1, 2017
@karalabe karalabe modified the milestones: 1.6.6, 1.6.4 Jun 16, 2017
@karalabe karalabe dismissed their stale review June 16, 2017 12:39

LGTM, though since I've added some code, would be nice for someone else to review

@remyroy remyroy mentioned this pull request Jun 19, 2017
fjl added 5 commits June 19, 2017 13:15
fjl and others added 12 commits June 19, 2017 13:15
This change also ensures that pivot block receipts aren't imported
before the pivot block itself.
Fixes TestDeliverHeadersHang*Fast and (hopefully)
the weird cancellation behaviour at the end of fast sync.
This commit explicitly tracks duplicate and unexpected state
deliveries done against a trie Sync structure, also adding
them to the import info logs.

The commit moves the db batch used to commit trie changes one
level deeper so it's flushed after every node insertion. This
is needed to avoid a lot of duplicate retrievals caused by
inconsistencies between Sync internals and the database. A
better approach is to track not-yet-written states in trie.Sync
and flush on commit, but I'm focusing on correctness first for
now.

The commit fixes a regression around the pivot block fail count.
The counter previously was reset to 1 if and only if a sync
cycle progressed (inserted at least 1 entry into the database).
The current code reset it already if a node was delivered,
which is not strong enough, because unless it ends up written
to disk, an attacker can just loop and attack ad infinitum.

The commit also fixes a regression around state deliveries
and timeouts. The old downloader tracked whether a delivery was
stale (none of the deliveries were requested), in which case
it didn't mark the node idle and did not send further
requests, since it signals a past timeout. The current code
did mark it idle even on stale deliveries, which eventually
caused two requests to be in flight at the same time, making
the deliveries always stale and mass duplicating retrievals
between multiple peers.
This commit fixes the hang seen sometimes while doing the state
sync. The cause of the hang was a rare combination of events:
request state data from a peer, the peer drops and reconnects
almost immediately. This caused a new download task to be
assigned to the peer, overwriting the old one still waiting
for a timeout, which in turn leaked the requests out, never to
be retried. The fix is to ensure that a task assignment moves
any pending one back into the retry queue.

The commit also fixes a regression with peer dropping due to
stalls. The current code considered a peer stalling if it
timed out delivering 1 item. However, the downloader never
requests only one; the minimum is 2 (an attempt to fine-tune
estimated latency/bandwidth). The fix is simply to drop the
peer if a timeout is detected at 2 items.

Apart from the above bugfixes, the commit contains some code
polishes I made while debugging the hang.
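For clarity, a sketch of the stall rule described in the last commit message above (d.dropPeer is the downloader's peer-drop callback; the threshold follows the commit text, the surrounding names are assumptions):

// On a timed-out state request:
if len(req.items) <= 2 {
	// The request was already at (or below) the minimum size and still timed
	// out, so consider the peer stalling and drop it from the sync.
	log.Warn("Stalling state sync, dropping peer", "peer", req.peer.id)
	d.dropPeer(req.peer.id)
}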
@timtait commented Jun 20, 2017

I don't really know enough about the whole process to give an overall 👍 or 👎

But for what it's worth I've had a good read through the last few commits from @karalabe and they look good to me 👍

trie/sync.go Outdated
@@ -48,6 +52,21 @@ type SyncResult struct {
	Data []byte // Data content of the retrieved node
}

// SyncMemCache is an in-memory cache of successfully downloaded but not yet
// persisted data items.
type SyncMemCache struct {
Contributor Author

Please call it syncBatch

@karalabe karalabe merged commit 0042f13 into ethereum:master Jun 22, 2017
rjl493456442 pushed a commit to rjl493456442/go-ethereum that referenced this pull request Jun 28, 2017
* eth/downloader: separate state sync from queue

* eth/downloader: remove cancelTimeout channel

* eth/downloader: retry state requests on timeout

* eth/downloader: improve comment

* eth/downloader: mark peers idle when state sync is done

* eth/downloader: move pivot block splitting to processContent

* eth/downloader: limit state node retries

* eth/downloader: improve state node error handling and retry check

* eth/downloader: remove maxStateNodeRetries

It fails the sync too much.

* eth/downloader: remove last use of cancelCh in statesync.go

* eth/downloader: fix leak in runStateSync

* eth/downloader: don't run processFullSyncContent in LightSync mode

* eth/downloader: improve comments

* eth/downloader: fix vet, megacheck

* eth/downloader: remove unrequested tasks anyway

* eth/downloader, trie: various polishes around duplicate items

* eth/downloader: fix state request leak

* core, eth, trie: support batched trie sync db writes

* trie: rename SyncMemCache to syncMemBatch