Add support for tailing Kubernetes pod logs #392
base: main
Conversation
Force-pushed from d2062ea to e8e6d24
Thanks for the contribution! Generally speaking, I think it'd be great to offer a more direct way to get the Kubernetes logs. That said, I'm not sure if it's the best approach to implement this via `kubectl`. Do you know whether there is a better way to get the logs for Kubernetes (e.g. directly via the API) that would be suitable in your environment?
Yes, it's completely doable via the API. I chose to go with kubectl for the prototype because it works out of the box within a pod, assuming the right role has been granted to the pod's service account, so it was super easy to get started with. I just had to add kubectl to the Docker image and it just worked. I can do a prototype using the API.
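For illustration, a kubectl-based prototype along these lines would shell out to `kubectl logs --follow` and stream the output, roughly like the sketch below. This is not the PR's actual code; the namespace, pod name, and flags are placeholder assumptions.

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"os/exec"
)

func main() {
	ctx := context.Background()

	// `kubectl logs --follow` streams new log lines as they are written,
	// using the pod's service account credentials when run in-cluster.
	cmd := exec.CommandContext(ctx, "kubectl", "logs",
		"--namespace", "default", "--follow", "--timestamps", "postgres-0")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		// Each streamed line would be handed to the log processing pipeline.
		fmt.Println(scanner.Text())
	}
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
}
```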
Makes sense - thanks for clarifying.
That would be helpful, if it isn't too much effort - I think that would be a better fit, given how the collector does this for e.g. all managed service providers (using their SDKs to access the relevant APIs). Also, out of curiosity, is the Kubernetes cluster you're testing with a self-managed one, or are you using one of the cloud-managed Kubernetes services like Amazon EKS?
We are using GKE on Google Cloud Platform. I have written the prototype and tested it on our clusters. It will likely need some input w.r.t. structure, design, and naming from your side, or you can pick it up and polish it.
Force-pushed from e8e6d24 to 0646b74
This introduces support for reading logs via the Kubernetes API server using the `kubectl logs` utility.
Force-pushed from 0646b74 to 77fbcdc
@lfittl this has now been switched to the k8s client-go SDK. It will work out of the box when running in-cluster, as long as an RBAC rule granting read access to pod logs (the `pods/log` subresource) has been granted to the service account the pod is running as.

I would expect this to be the most likely deployment when using k8s. When running outside the cluster, a kubeconfig file will need to be provided. I also added a parameter to override the API server URL if necessary, which can be very useful in such cases. If you need more information to write documentation, let me know and I can write up something more complete, but if you are generally familiar with k8s this should be enough to get it going.
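For context, a minimal client-go sketch of the flow described above might look like the following. This is illustrative only, not the PR's implementation; the namespace, pod, and container names are placeholders, and the kubeconfig path and API server URL handling are assumptions based on the comment above.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildConfig prefers the in-cluster service account credentials and falls
// back to a kubeconfig file when running outside the cluster. An optional
// API server URL override is applied when provided (an assumed parameter,
// mirroring the override described in the comment above).
func buildConfig(kubeconfigPath, apiServerURL string) (*rest.Config, error) {
	config, err := rest.InClusterConfig()
	if err != nil {
		// Not running inside a pod; load the kubeconfig instead.
		return clientcmd.BuildConfigFromFlags(apiServerURL, kubeconfigPath)
	}
	if apiServerURL != "" {
		config.Host = apiServerURL
	}
	return config, nil
}

func main() {
	config, err := buildConfig(os.Getenv("KUBECONFIG"), "")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Stream ("follow") the logs of a single container, similar to `kubectl logs -f`.
	req := clientset.CoreV1().Pods("default").GetLogs("postgres-0", &corev1.PodLogOptions{
		Container:  "postgres",
		Follow:     true,
		Timestamps: true,
	})
	stream, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	if _, err := io.Copy(os.Stdout, stream); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```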
I think a better and more scalable approach would be having a way for common ingestion solutions to stream logs into pganalyze. E.g., fluent-bit, grafana-agent, vector, or even OTLP would ingest and collect logs for general log storage and then also send them to pganalyze for deeper analysis.