host code #4
Comments
Hi, I'm facing the same issue.
I'm hitting the same problem, and I haven't found a way to solve it.
@dchakra336: For host processing versus PIM processing, you need to run Ramulator slightly differently. You can run Ramulator with split traces either enabled or disabled.
@maryam1364: That's great! How did you run it differently?
When there is only one trace file, run Ramulator without the --split-trace option!
Hi, the host code is always generated into a single trace file so that the DRAM memory requests stay ordered according to the application's memory access pattern when simulating the host configuration on Ramulator. Recall that host traces are filtered by the L3 in zsim and are intended to simulate only the memory device on Ramulator (without PIM cores). Therefore, there is no reason to split host traces per core ID. I would advise against using split traces for host code, and also against running host traces in PIM mode in Ramulator.
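To make the two workflows concrete, here is a minimal command sketch assembled from this thread. Only the zsim invocation (`./build/opt/zsim tests/host.cfg`), the `host.cfg`/`pim.cfg` names, and the existence of a `--split-trace` option are stated in the thread itself; the ramulator binary path, config file name, and trace filename below are assumptions and may differ in your checkout.

```shell
# Host workflow: zsim filters accesses through the L3 and writes a single
# ordered .out trace file (command taken from the issue).
./build/opt/zsim tests/host.cfg

# Simulate only the memory device on the host trace, WITHOUT splitting it.
# (Binary path, config file, and trace filename are assumptions.)
./ramulator configs/DDR4-config.cfg host.out

# PIM workflow: pim.cfg makes zsim emit one trace per core, and ramulator
# is then run with --split-trace enabled so each PIM core replays its own trace.
./build/opt/zsim tests/pim.cfg
```

The key asymmetry, per the maintainer's comment above, is that host traces are a single L3-filtered stream meant for memory-only simulation, while PIM traces are per-core and meant for PIM-mode runs.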
@maryam1364 Hi, when I run Ramulator without the --split-trace option, I get a meaningless stats file. 😭 Is there any problem?
@XDUFanYang: For the host run, zsim creates one ".out" file, and when I run Ramulator, I make sure the --split-trace option is disabled. I was working with ramulator-pim six months ago, and this is what I remember.
Thanks, got it!
Hi,
When I try to run zsim with host.cfg (./build/opt/zsim tests/host.cfg), it does not generate multiple traces for multiple cores, but pim.cfg seems to work fine. Can someone please let me know what I might be missing here?