
Agenda

Attending

  • laleh, pcwalton, kmc, ryan, (2x samsung), zwarich, gw, larsberg, jack, brson, mbrubeck

Hi-pri rust issues

  • larsberg: All under control.
  • all: \o/

Interface to JS

  • kmc: Started working on the JS safety-checking lint. Is it OK if every use of unrooted JS-managed values has to be annotated with #[allow(unrooted_js_managed)]? I was planning to allow it in all structs deriving Encodable, but that heuristic seems brittle and harder to implement than an explicit attribute.
  • pcwalton: Sounds good to me, but I think we should check with jdm.
  • kmc: Yes, there's still more work, anyway. Will talk with jdm more about implementation-side things.
  • larsberg: Example code?
  • kmc: I have a branch and can put a link here: https://github.com/kmcallister/servo/commit/c20b50bbbbcdc8ce3551adbc1e039a727cf89995
  • kmc: Mostly involves putting the attribute on the structs. I also ran into various situations where I'm not sure if we're safe right now. Right now, you get a warning on all of those. I'm not sure how soon this would get merged into master.
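
A minimal sketch of what the proposed annotation might look like on a DOM struct, using the #[deriving] spelling of the time. The struct, its fields, and the imports below are illustrative, modeled on Servo's DOM binding conventions rather than taken from kmc's branch; only the lint name comes from the discussion above.

```rust
// Illustrative only: a struct that holds unrooted JS-managed pointers opts out
// of the proposed lint with an explicit attribute. Deriving Encodable is how
// these structs are hooked into tracing at this point in Servo's history,
// which is why kmc considered keying the lint off it.
use dom::bindings::js::JS;        // Servo's JS-managed pointer wrapper
use dom::bindings::utils::Reflector;
use dom::node::Node;

#[allow(unrooted_js_managed)]     // lint name proposed in the discussion above
#[deriving(Encodable)]
pub struct NodeList {
    reflector_: Reflector,        // standard DOM reflector field
    elements: Vec<JS<Node>>,      // unrooted JS<T> pointers live here
}
```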

Flaky tests

  • jack: Lots of retries; this is not sustainable.
  • gw: For my part, I've been seeing two of them on Linux. Since the change I made to rust-layers (use nearest filtering at zoom=1) [see the filtering sketch at the end of this section], I haven't seen any failures on Linux reftests. On OSX, we seem to have one that has failed six times in a row. It seems like it may be genuinely broken on Mac.
  • jack: line_break_simple?
  • gw: Yes, me too.
  • jack: I was surprised the Rust upgrade even passed. [The rust upgrade didn't build on OSX. —Ms2ger]
  • gw: I think it's just broken on OSX. On Linux I'm not seeing any flaky ones at the moment.
  • jack: I'll file a bug for linebreak_simple. The other problem we have right now is that the w-p-t stuff uses network resources to do the test, so it broke everything when the version of mozlog we use got removed [mozlog was updated, and wptrunner only requires a >= version, so the new version was used, and it was broken —Ms2ger]. There's a PR to fix it, but there's a different error where the XMLHttpRequest now fails [red herring —Ms2ger]. I don't know if anyone on this call is familiar with them. Does anyone else have any flaky tests that they're having trouble with? Part of the frustration is that there will be a PR that appears safe to land, I merge it, and then Travis informs me the build failed. OK, I'll start filing bugs and tracking issues for these.
  • zwarich: If you send me the bugs, I can hunt them down on Mac. I have a newer machine, and it triggers some edge cases. bjbell made some fixes that improve the stability of a couple of tests. I'm curious if any more of the tests failing are race conditions like that as opposed to taking a reftest snapshot at the wrong time.
  • jack: I found one that was Mac-specific and not related to Servo. If I have dynamic GPU switching enabled, it cycles between the discrete card and integrated graphics so quickly that it causes failures.
  • zwarich: Yes, I force integrated-only when doing reftests. I was debating making that change on Mac in glfw. Gecko also forces integrated graphics and does not switch to discrete.
  • jack: Yes, sounds like a good idea.
  • zwarich: OK, I'll put that on our branch. If we really like it, I can upstream it, but it's not that important - just one or two calls.
  • jack: Diedra(?) is pretty responsive on #glfw on IRC, but we should definitely do it here.
  • zwarich: Rapid graphics card switching on OSX is also a very good way to kernel panic or cause your OSX box to hang.
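
For context on the rust-layers change gw mentions: at zoom=1 the layer texture is sampled 1:1, so nearest-neighbour filtering produces pixel-exact output and reftest snapshots stop depending on the driver's linear-interpolation rounding. A rough sketch of the idea (not the actual patch; it assumes gl-rs style bindings and an already-current GL context):

```rust
// Choose the GL texture filter from the zoom level: NEAREST at zoom == 1.0 for
// pixel-exact reftest snapshots, LINEAR otherwise for nicer scaling.
use gl::types::GLint;

fn texture_filter_for_zoom(zoom: f32) -> GLint {
    if zoom == 1.0 { gl::NEAREST as GLint } else { gl::LINEAR as GLint }
}

fn apply_texture_filter(zoom: f32) {
    let filter = texture_filter_for_zoom(zoom);
    unsafe {
        gl::TexParameteri(gl::TEXTURE_2D, gl::TEXTURE_MIN_FILTER, filter);
        gl::TexParameteri(gl::TEXTURE_2D, gl::TEXTURE_MAG_FILTER, filter);
    }
}
```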

Performance and power

  • https://www.dropbox.com/s/dcqv7ah8f7dujpc/Power%20And%20Performance%20for%20Servo-Aug4.pdf

  • laleh: After that 20x perf improvement from gw, the numbers look much better. Now, parallelization saves power! Power decreases as we use more threads in layout, though past 6 or 7 threads we don't save any more power or time.

  • jack: Is this android vs. desktop?

  • laleh: No, this is after gw's huge performance improvements to Servo - almost a 20x savings in time and a 24x savings in power. In the new version of Servo, now with more parallelization, we consume less energy.

  • jack: That is awesome! It makes me wonder if there is other low-hanging fruit here.

  • gw: There's at least one. The worker queue threads spin (busy-loop). [See the sketch at the end of this section.]

  • pcwalton: Yup!

  • jack: Do we have a theory for why the power usage was so high?

  • laleh: Before, the total time was around 40ms, versus 2ms now. Running for a shorter time uses less energy.

  • pcwalton: All of the syscalls alone will mess with your number.

  • gw: There were a lot of context switches for all those read calls.

  • laleh: The first two tables were the same experiments from last week on a PC. After that change, layout times also decreased a little bit. The next part has the same general curve for the performance numbers, but they are all lower. The blue and purple lines in the fourth chart show the new runs, so you can see that the layout times are lower.

  • jack: I noticed our new low-frequency results are better than our old high-frequency ones. A good performance target would be for Servo at a low clock speed to beat other browsers running at a high clock speed.

  • laleh: In the fifth chart, I wanted to show the portion of layout to total time. Before, layout was < 1%, but now it's much higher (20-30%).

  • jack: This is great and very encouraging!

  • laleh: Android next. We can now run perf-rainbow on Android, but the time to render the page is between 10 and 30 minutes, and the energy usage is between 500J and 1300J. Even the minimum is still quite large for something Servo can bring up in 2ms.

  • pcwalton: We need to profile Android and see what's going on.

  • jack: How hard is that?

  • pcwalton: Very annoying, as of a few years ago.

  • gw: It's not too bad if you have a Tegra, because you can use the nvidia-provided tools. Would you like me to have a look at it?

  • pcwalton: That would be awesome.

  • jack: Even just putting together some numbers from a profile would really help!

  • gw: I can do that today or tomorrow.

  • laleh: I also did a comparison between Servo & Firefox. The names of the functions are not exactly the same - in Firefox, what's called Paint is Rendering in Servo - so I'm not sure we're measuring exactly the same things or how much time we're spending relative to each other.

  • jack: There's not much else in Servo other than layout/rendering/compositing, but that's only 25% of the time. What's in the other 75%? That's a question I have, too.

  • laleh: The profiling interfaces we have don't seem to report much time, compared to the overall time spent.

  • pcwalton: We might be doing multiple layouts, so you might need to multiply those numbers.

  • gw: 30-35% of the time was also in those worker thread queues, just spinning.

  • jack: Ouch!

  • pcwalton: We can improve that, and it should drop power consumption.

  • jack: Us, or Rust?

  • gw: Us.

  • jack: Can you open a bug on that?

  • gw: Will do. - https://github.com/servo/servo/issues/3007

  • jack: Thanks, laleh! Hopefully we will keep decreasing these numbers and get some useful measurements there. It shouldn't take 10x the power and time...

  • pcwalton: We must be doing something VERY silly there.

  • jack: I'm surprised you can test this, because I'd expect the screen to lock or the phone to overheat...

  • zwarich: I bet the thermal limiter is kicking in on the phone; it decreases the power draw, which is why this takes so long.
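
On the worker-queue spinning mentioned earlier in this section: the sketch below, written in present-day std Rust and not taken from Servo's actual work-stealing queue, shows the difference between a polling worker and a blocking one. A spinning thread keeps a core at full utilization while it waits, which is pure power cost; a thread blocked on a channel is parked by the OS and costs essentially nothing until work arrives.

```rust
use std::sync::mpsc::{channel, Receiver, TryRecvError};
use std::thread;

enum Msg {
    Work(u32),
    Quit,
}

// Spinning worker: polls in a tight loop, burning CPU (and power) even when
// there is nothing to do. This is the shape of the problem gw describes.
#[allow(dead_code)]
fn spinning_worker(rx: Receiver<Msg>) {
    loop {
        match rx.try_recv() {
            Ok(Msg::Work(n)) => { let _ = n * 2; /* do the work */ }
            Ok(Msg::Quit) | Err(TryRecvError::Disconnected) => return,
            Err(TryRecvError::Empty) => { /* nothing to do; spin again */ }
        }
    }
}

// Blocking worker: recv() parks the thread until a message arrives, so an
// idle worker uses essentially no CPU time or energy.
fn blocking_worker(rx: Receiver<Msg>) {
    while let Ok(msg) = rx.recv() {
        match msg {
            Msg::Work(n) => { let _ = n * 2; /* do the work */ }
            Msg::Quit => return,
        }
    }
}

fn main() {
    let (tx, rx) = channel();
    let worker = thread::spawn(move || blocking_worker(rx));
    tx.send(Msg::Work(21)).unwrap();
    tx.send(Msg::Quit).unwrap();
    worker.join().unwrap();
}
```

Real work-stealing queues usually split the difference (spin briefly, then back off and block), since blocking immediately adds wake-up latency on busy workloads.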

Cargo-ifying Servo

  • jack: I'm creating snapshots of today's Rust. Sadly, our old version doesn't have the fixes to Rust that cargo needs to run tests, so in order to use it we have to get another Rust upgrade in. Hopefully, since it's so soon after the last one, we won't have a lot of syntax changes. Unfortunately, the compiler bugs still aren't fixed (though we still have workarounds). We'll probably need another upgrade again after that because kmc's stuff is blocked on a Rust bug. Probably the jst stuff. The cargo stuff looks to be pretty easy; the only thing I haven't tried yet is native libraries. Not sure what I'm going to do with those.
  • pcwalton: You have a lot of Servo cargo-ified?
  • jack: Two: rust-geom and rust-xlib (they're basically identical). With those I tested dependency resolution. I haven't done any of the native stuff. Glfw builds with cargo even on Windows...
  • kmc: My goal is for the parser to be using cargo before we land it in Servo.
  • jack: One issue with the move to Cargo is that it requires a different directory structure. Cargo also doesn't support multiple libraries in the same package; that support was supposed to come, but the Cargo team is now talking about cancelling it.
  • brson: If you want to talk about it, we can have the meeting tomorrow.
  • pcwalton: I like the multiple lib feature.
  • jack: I have a gist up, but no comments?
  • larsberg:
  • jack: We need to pull all of the crates up to their own directories. What will happen is that each Cargo.toml in a submodule will point to the git repo and the appropriate branch. When used in Servo, we can edit the cargo configuration to override the dependency paths to point at our local submodules. So Servo will still be using submodules, but through cargo. And if external users want our libraries, they can just use cargo out of the box, depend on the git repos, etc. But inside Servo, we'll still be able to do local development in submodules. Normally Cargo checks dependencies out into hidden directories in hidden places, which prevents you from editing them.
  • kmc: Can we have diamond dependencies? html5ever and servo both depend on string-cache. Does that work?
  • jack: Overrides will handle that. So, when html5ever builds, it'll get an override.
  • kmc: Goal is to have our specific versions of libraries?
  • jack: And Cargo doesn't let you edit the submodule dependencies at the same time; this setup would allow that. I've already started cargo-ifying things. Some of the community may be able to help us with this, since the changes will be identical except for the library names. The build system should be much nicer as a result.
  • gw: Is there a way, in cargo, to have it run the tests when your tests are dynamic, like for some of our stuff? I couldn't see how to do it.
  • jack: How?
  • gw: 'cargo test' runs the static ones. But we add dynamic test functions for our ref tests, etc....
  • jack: Things in the tests/ directory are integration tests and use an external test crate. I don't know if Cargo can run them or not. If not, we'll have a dummy Makefile that runs the tests, etc. I think that long-term, we'd use mach to run the tests, etc. ('mach config' and 'mach build'). I'd like to get rid of our configure magic that reads and sets the settings. I'd like to make sure that we do something similar to Firefox for clobber-build detection, where it warns you. That way, if we ever detect that submodules are out of date, we can stop and not overwrite them. We have failsafes, but it would be nice to make it more automatic.
  • kmc: The HTML parser also has external tests loaded from files. It's done in Makefiles, similarly to Servo today.
  • jack: The biggest missing piece is that we do codegen a few different ways: Python scripts that generate code, but also rust-http, which uses Rust to generate Rust. In cargo right now, you'd have to have a Makefile that builds and runs the generator before cargo runs. It'd be nice if Cargo could handle it instead of make and shelling out.
  • kmc: I'd like to turn it all into procedural macros, even if they call Python. [See the sketch at the end of this section.]
  • pcwalton: I totally agree.
  • kmc: That would be a lot cleaner! But we would lose the caching of the code generation, which would increase the script crate's build time. That said, it's already really slow. But codegen is a noticeable fraction.
  • jack: Limitation of procedural macros with no caching?
  • kmc: Well, we should talk with the Rust team. You could do something ad hoc, but maybe Rust could grow a mechanism for saving the expanded result of a macro and knowing when it can be reused.
  • jack: Didn't Rust have this? The workcache?
  • pcwalton: Didn't do expansions...
  • jack: But it could tell if cache results were invalid or not...
  • pcwalton: That's not the hard part; the hard part is making sure it all works with hygiene.
  • brson: And no, the cache was deleted. The idea there was that somebody could take it out of tree if they wanted it - it could go in the cargo package. kmc, did you have any numbers on how long it takes your macros to expand?
  • kmc: I think all of it takes 2-3 seconds. Mostly I've not been building with optimization, which means that everything is slower, but I think 2-3 seconds was with optimizations. I'm not sure how that compares with the cost of running the Python IDL generator and then parsing its output; it could potentially be more.
  • jack: We could probably solve this if we could get all the generated code in its own crate. Since that wouldn't change, we wouldn't re-run the procedural macros again...
  • kmc: Good idea. Or the script-bindings crate could have a bit of non-generated code. But it'd be nice to split up the script crate since it's the largest and hairiest crate.
  • jack: Previous discussions were all about decreasing build times and breaking up dependencies; maybe this will make that easier? We should discuss it when jdm is back from the Canadian holiday.
  • kmc: Yes, fine for the dependency graph to be a line in this case.
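
A hypothetical sketch of the "codegen as a procedural macro" idea kmc raises above, written against today's stable proc-macro API rather than the syntax-extension plugin interface that existed at the time. The macro name generate_bindings is made up; the point is only that code generation can run inside the compiler at expansion time (it could even shell out to the existing Python IDL generator) instead of in a separate Makefile step, with the caching caveat discussed above.

```rust
// Hypothetical proc-macro crate (would need `proc-macro = true` in its
// Cargo.toml). `generate_bindings` is an invented name for illustration.
extern crate proc_macro;
use proc_macro::TokenStream;

#[proc_macro]
pub fn generate_bindings(_input: TokenStream) -> TokenStream {
    // A real implementation would read the WebIDL files here (or invoke the
    // existing Python generator) and emit the generated Rust items as tokens.
    "pub fn generated_hello() -> &'static str { \"hello from codegen\" }"
        .parse()
        .expect("generated code should parse")
}
```

The script crate would then invoke `generate_bindings!();` wherever the generated items should appear, and the generator would re-run on every build of that crate unless some expansion cache exists, which is exactly the trade-off discussed above.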