
Query regarding synchronization of data inside volumes in stateful sets #7997

Open
rohan-97 opened this issue Jul 29, 2024 · 5 comments

Labels

  • lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
  • sig/scalability: Categorizes an issue or PR as relevant to SIG Scalability.
  • sig/storage: Categorizes an issue or PR as relevant to SIG Storage.

Comments

@rohan-97

rohan-97 commented Jul 29, 2024

Describe the issue

Hello,

Being new to Kubernetes, I have a basic question regarding volume synchronization in StatefulSets.

I am working on a stateful application and trying to scale it up to multiple replicas. I came across StatefulSets and was considering whether I can use one to implement my application.

The application requires the pods to be replicated with the storage volumes of all replicas kept in sync, and I was wondering if a StatefulSet provides this.

I went through the StatefulSet documentation but didn't find any section stating that Kubernetes synchronizes persistent volumes among the replicas of a StatefulSet.

I need to confirm: if I use a StatefulSet to implement my stateful application, will Kubernetes synchronize the persistent volumes of all the pods, or do I need to implement some mechanism myself to synchronize data among the pods (e.g. a distributed file system)?
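For concreteness, a minimal StatefulSet with a `volumeClaimTemplates` section (all names below are illustrative, not from the issue) would look like the sketch below. As I understand it, the template provisions a *separate* PVC per replica (named `data-web-0`, `data-web-1`, …), and nothing in this manifest copies data between those volumes:

```yaml
# Illustrative sketch only; names (web, data, nginx image) are hypothetical.
# volumeClaimTemplates creates one independent PVC per replica;
# Kubernetes itself does not replicate data between them.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx:1.25
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```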

Thanks for the help in advance.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jul 29, 2024
@rohan-97
Author

/sig scalability

@k8s-ci-robot k8s-ci-robot added sig/scalability Categorizes an issue or PR as relevant to SIG Scalability. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 29, 2024
@rohan-97
Author

rohan-97 commented Aug 6, 2024

/sig storage

@k8s-ci-robot k8s-ci-robot added the sig/storage Categorizes an issue or PR as relevant to SIG Storage. label Aug 6, 2024
@mrbobbytables
Member

Your best bet would be to reach out on the sig-storage mailing list or Slack channel. This repo is sort of a meta space for self-management of the k8s community and not meant to route questions^^;;

@rohan-97
Author

rohan-97 commented Aug 8, 2024

Hi @mrbobbytables ,

Thanks for the response, I'll add my query over there. :)

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 6, 2024
4 participants