
Add support for flamescope #457

Open
nmiculinic opened this issue Feb 27, 2019 · 12 comments
Labels
Priority: p2 (Buganizer priority - P2) · type: feat (Buganizer type - Feature Request)

Comments

@nmiculinic

https://github.com/Netflix/flamescope has a pretty nice trick up its sleeve, which I'd like to see integrated into pprof. I see this feature as combining three parts:

  • Time/offset-based filtering for the flame graph (or, better yet, for all visualizations). This could be achieved with two optional GET parameters on the web UI, e.g. start_offset and end_offset (or camelCase, or whatever naming is preferred); see the sketch after this list. I can work on a PR for that.
  • Preparing the data server-side by bucketing samples into fixed-width time slots (X-ms buckets). I can probably also work on a PR for that.
  • Frontend GUI integration. This part is harder for me to work on, since I'm primarily in the backend/DevOps world.
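
A rough sketch of what the filtering could look like, assuming each sample carried a per-sample time offset, e.g. as a hypothetical "timestamp_ns" numeric label (nothing like this exists in profile.proto today):

```go
// Rough sketch only: keep samples inside a [start_offset, end_offset) window.
// Assumes a hypothetical "timestamp_ns" numeric label on each sample.
package timefilter

import "github.com/google/pprof/profile"

// FilterByOffset drops every sample whose time offset (in nanoseconds)
// falls outside [startNs, endNs). Samples without the label are dropped too.
func FilterByOffset(p *profile.Profile, startNs, endNs int64) {
	kept := p.Sample[:0]
	for _, s := range p.Sample {
		ts, ok := s.NumLabel["timestamp_ns"]
		if ok && len(ts) > 0 && ts[0] >= startNs && ts[0] < endNs {
			kept = append(kept, s)
		}
	}
	p.Sample = kept
}
```

The web UI would then only need to parse start_offset/end_offset from the query string and apply something like this before rendering.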

I already have experience using flamescope, and it paid off its learning curve within the first hour, when I easily found a bug that had been hidden in the typical flame graph view (a systems bug, so the input data came from perf, not pprof).

@kalyanac
Contributor

kalyanac commented Mar 6, 2019

Thank you for your suggestion and for offering to send a PR.

@nmiculinic
Author

Sure; one question: where (if anywhere) does pprof save the timestamp/offset of each collected event? I read a bit about it and couldn't find an obvious place where it's saved.

@nolanmar511
Contributor

I don't think pprof's standard profile format (profile.proto) records a timestamp.

We could add a numeric label to the samples that pprof interprets as a timestamp, but we would have to update the parsing code.

For perf input, we'd probably need to update https://github.com/google/perf_data_converter to get timestamps.
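
To illustrate the shape of that idea (purely a sketch -- the "timestamp_ns" key below is an assumption, not something profile.proto defines today):

```go
// Sketch of attaching a timestamp to a sample as a pprof numeric label.
// The "timestamp_ns" key and nanosecond unit are assumptions for illustration.
package timestamps

import "github.com/google/pprof/profile"

// tagSample records a wall-clock offset (nanoseconds since profile start)
// on a single sample. A collector would call this at sampling time; pprof's
// readers would then need to learn to interpret the label when filtering.
func tagSample(s *profile.Sample, offsetNs int64) {
	if s.NumLabel == nil {
		s.NumLabel = map[string][]int64{}
	}
	if s.NumUnit == nil {
		s.NumUnit = map[string][]string{}
	}
	s.NumLabel["timestamp_ns"] = []int64{offsetNs}
	s.NumUnit["timestamp_ns"] = []string{"nanoseconds"}
}
```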

@nmiculinic
Author

What about the standard golang https://golang.org/pkg/net/http/pprof/ ? We should add a numeric label there too (something like offset_ns, probably).

@nolanmar511
Contributor

Yes; golang's profile collection (https://golang.org/pkg/net/http/pprof/) would also need to be updated to include time stamps.

@nmiculinic -- Are you mainly interested in looking at golang profiles in flamescope?

@aalexand
Collaborator

aalexand commented Mar 6, 2019

What profile duration and how many points/samples per second are we talking about? The profile.proto format is not designed to be efficient timeseries storage, so I am somewhat skeptical of using it for that purpose. I am also not sure the golang folks would want to provide a timestamped sampled profile; that means no aggregation, larger profiles, and likely a need for streaming.

This seems like a large amount of work, so it would be good to have more evidence that it's wanted before any movement on this, IMO.

@nmiculinic
Author

nmiculinic commented Mar 7, 2019

101 Hz is more than enough for the sample rate, and a duration of 30-120 s is ample to see patterns in this use case. Thus we're talking about at most ~12,000 data points per CPU, hardly a big deal nowadays. At 8 bytes per int64, that's about 96 KB per CPU. Tiny. (Not counting protobuf variable-length encoding and gzip compression.)

EDIT: *CPU cores
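
For reference, a quick back-of-envelope check of those numbers (assumed figures, not measurements):

```go
// Back-of-envelope size estimate for storing one int64 timestamp per sample.
package main

import "fmt"

func main() {
	const (
		sampleRateHz   = 101 // samples per second per CPU core
		durationSec    = 120 // upper end of the 30-120 s window
		bytesPerSample = 8   // one int64 timestamp per sample
	)
	samples := sampleRateHz * durationSec
	fmt.Printf("%d samples/core ≈ %.1f KiB/core of raw timestamps\n",
		samples, float64(samples*bytesPerSample)/1024)
	// Prints: 12120 samples/core ≈ 94.7 KiB/core of raw timestamps
}
```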

@nmiculinic
Author

https://groups.google.com/forum/#!topic/golang-nuts/GZ-v4XqAFkw

I've opened a discussion on golang-nuts. If anything more is needed, please let me know.

@aalexand
Collaborator

I don't anticipate working on this in pprof soon. I feel that if this can be implemented as a separate tool on top of the profile.proto format, that would be good. Otherwise, maybe pprof is just not the best tool for visualizing this kind of data.

@NHDaly
Contributor

NHDaly commented Sep 23, 2020

Chiming in to add my two cents -- other flame graph tools now include timestamps in the data, so that the displayed flame graph is sequential in time.

With some filtering, this lets you discover more meaning in the data, as in this example, where you can see that the same process is being repeated multiple times when it should only have happened once:
[Screenshot: time-ordered flame graph showing the same process repeated multiple times]

This information would have been lost if the frames had been aggregated as they are in pprof, since they each have exactly the same stack trace.

It would be really nice if the samples could contain a timestamp field that is simply used to sort how they are displayed in the final flame graph. :)
And then, as @nmiculinic suggests, this could also be used for filtering by time slice.

Thank you for your consideration! :)

@felixge

felixge commented Oct 27, 2022

My team at Datadog is very interested in this and other timestamp use cases as well. Here is a proposal for how pprof could be turned into a relatively efficient format for this: #728

@felixge

felixge commented Mar 6, 2023

It's now possible to use FlameScope with go1.19+ - I've written a tool and blog post for it here: https://blog.felixge.de/flamescope-for-go/ (it's based on the execution tracer rather than pprof)

@Louis-Ye added the type: feat (Buganizer type - Feature Request), Priority: p3 (Buganizer priority - P3), and Priority: p2 (Buganizer priority - P2) labels and removed the Priority: p3 (Buganizer priority - P3) label on May 16, 2023