
Improve memory alignment #780

Open · wants to merge 1 commit into base: main
Conversation

@tjungblu (Contributor) commented Jun 28, 2024

This runs betteralign to pack structs smarter.
Originally proposed by @mrueg in #673

@fuweid (Member) commented Jul 2, 2024

Is there any improvement after this change? If the structs keep changing, there is no way to track it unless we run validation or something like that in CI.
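One way to track this in CI (an editorial sketch, not from the thread — it assumes betteralign's published module path and that, like other go/analysis-based tools, it exits non-zero when it reports findings):

```shell
# Install betteralign (assumed module path: github.com/dkorunic/betteralign).
go install github.com/dkorunic/betteralign/cmd/betteralign@latest

# Report structs whose fields could be reordered; a non-zero exit
# status on findings would fail the CI job.
betteralign ./...

# Locally, -apply rewrites the files in place instead of just reporting.
# betteralign -apply ./...
```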

@tjungblu (Contributor, Author) commented Jul 8, 2024

> Is there any improvement after the change?

In an (idle) OpenShift cluster I couldn't really quantify any improvement. I actually saw higher memory usage with this PR, which I think comes from the difference between v1.3.10 and v1.4; the Go version was also different. I didn't dig much further into why. Latency-wise, there was also no measurable read/write improvement between the two clusters.

I'm going to take a single-node config for a run next time (maybe this Friday) and will report if I find anything meaningful to share.

This runs betteralign to pack structs smarter.
Originally proposed by @mrueg in etcd-io#673

Co-authored-by: Manuel Rüger <[email protected]>
Signed-off-by: Thomas Jungblut <[email protected]>
@k8s-ci-robot
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: tjungblu
Once this PR has been reviewed and has the lgtm label, please assign ptabor for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ivanvc (Member) commented Jul 12, 2024

According to the PR benchmarks, there's an improvement but it seems to be marginal (about 1%). Refer to: https://github.com/etcd-io/bbolt/actions/runs/9834932525?pr=780.

@tjungblu (Contributor, Author)

The question is what we would expect from better memory alignment. The structs that are most frequently resident in memory would be node/page/inode:

```
/home/tjungblu/git/bbolt/node.go:11:11: 16 bytes saved: struct with 88 pointer bytes could be 72
/home/tjungblu/git/bbolt/node.go:603:12: 16 bytes saved: struct with 48 pointer bytes could be 32
/home/tjungblu/git/bbolt/page.go:145:15: 8 bytes saved: struct with 16 pointer bytes could be 8
/home/tjungblu/git/bbolt/tx.go:25:9: 104 bytes saved: struct with 192 pointer bytes could be 88
```

I would expect slightly lower memory usage by etcd and only marginally better throughput/performance from the improved cache usage. In multi-node etcd I wouldn't even expect this to be measurable, given the network latency between the peers.

I'll run some kube-burner tests today on single-node OpenShift; I just rebased this onto the 1.3.10 release to have a better comparison.

@tjungblu (Contributor, Author)

Some preliminary findings from single-node OpenShift. I've been running kube-burner with the api-intensive example, which basically creates a bunch of namespaces, creates some pods, updates their status, etc. It adds about 30 MB of data to etcd (which ends up with a 140 MB db size), so not very large at all.

With this alignment improvement I see about 360 MB of resident RAM usage for etcd; without it, only 330 MB. Everything else being equal, the gRPC GET requests are surprisingly more than 2x faster (10 ms vs. 24 ms) with this alignment. That improvement seems too good to be true, though, so it must have a different cause.

Take this with a huge grain of salt: there is just too much stuff running on top of bbolt. It could be that some operator made more watch requests than before, or that some other component created more events.

I guess we have to resort to the boring synthetic tests we have in bbolt itself. Maybe one of our bigger consumers has time to test-drive this with their bigger bbolt files?

@mrueg (Contributor) commented Jul 18, 2024

Thanks for following up on that!

A few options to check whether this improves anything:
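One such option (an editorial sketch, not from the thread) is to run bbolt's own Go benchmarks before and after the change and compare them with benchstat from golang.org/x/perf; the `old.txt`/`new.txt` file names are arbitrary:

```shell
# Collect baseline benchmark numbers on main; -count 10 gives benchstat
# enough samples to judge statistical significance.
go test -run '^$' -bench . -count 10 ./... > old.txt

# Check out the alignment branch, then collect the same benchmarks again.
go test -run '^$' -bench . -count 10 ./... > new.txt

# Compare the two runs and report per-benchmark deltas.
go install golang.org/x/perf/cmd/benchstat@latest
benchstat old.txt new.txt
```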

@k8s-ci-robot
@tjungblu: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-bbolt-test-2-cpu-arm64 | 97dddf5 | link | true | `/test pull-bbolt-test-2-cpu-arm64` |
| pull-bbolt-test-4-cpu-arm64 | 97dddf5 | link | true | `/test pull-bbolt-test-4-cpu-arm64` |
| pull-bbolt-test-4-cpu-race-arm64 | 97dddf5 | link | true | `/test pull-bbolt-test-4-cpu-race-arm64` |
| pull-bbolt-robustness-arm64 | 97dddf5 | link | true | `/test pull-bbolt-robustness-arm64` |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
