Missing indicators of what to learn from output to improve results. #103
Posted this as a new issue. I have the same question. From my last job there are some reasonable relaxed binders, but the job stopped (see below). How do I continue this job with a lower acceptance rate? And/or, as bifxcore asked, how do I optimize the settings based on the results from the first job? This could be a great tool, but I would like to be able to use it efficiently.

> Trajectory successful, final pLDDT: 0.87
Generally, the thing that affects design success the most is the target and the choice of hotspots. It could be that your target is simply too hard for the pipeline. However, if you got one, there is hope! How many trajectories (based on entries in `trajectory_stats.csv`) did you have to run to get that one?

That's probably very few! For hard targets one might need to sample up to 3000.
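For anyone counting: a minimal sketch for computing the empirical acceptance rate of a run. It assumes `trajectory_stats.csv` has one row per trajectory and that passing designs end up in an `Accepted/` folder; adjust both paths to your output layout.

```python
# Minimal sketch: estimate the empirical acceptance rate of a run.
# Assumes one row per trajectory in trajectory_stats.csv and that
# passing designs are written to Accepted/ -- adjust to your layout.
import csv
import glob

with open("trajectory_stats.csv", newline="") as f:
    n_traj = sum(1 for _ in csv.DictReader(f))

n_accepted = len(glob.glob("Accepted/*.pdb"))
rate = n_accepted / n_traj if n_traj else 0.0
print(f"{n_accepted} accepted / {n_traj} trajectories (rate {rate:.4f})")
```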
The run died with "The ratio of successful designs is lower than defined acceptance rate". The current `acceptance_rate` is 0.01. I suppose I could set it to zero and let it run for a while.
You can set `enable_rejection_check` to `false`.
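A minimal sketch of what that change could look like, flipping the flag in the advanced-settings JSON before resuming; the file path here is a placeholder, and only `enable_rejection_check` and `acceptance_rate` are keys taken from this thread.

```python
# Minimal sketch: disable the acceptance-rate abort before resuming a run.
# The settings path is a placeholder; point it at the advanced-settings
# JSON your run actually uses.
import json

path = "settings_advanced/my_settings.json"
with open(path) as f:
    settings = json.load(f)

settings["enable_rejection_check"] = False  # stop the early abort entirely
# settings["acceptance_rate"] = 0.0         # or just lower the threshold

with open(path, "w") as f:
    json.dump(settings, f, indent=4)
```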
I have now tried for weeks to get a binder to stick, using the hints that I could gather. This led me to exactly one accepted design so far. Better than nothing :-) But how do I build on this lone success to improve the outcome?

Logically, you would think that by looking at the good design, I could figure out which target residues ended up in the interface and use those as hotspots in the next run. I could also look at the good trajectories and infer the residues common to all of their interfaces. I tested both strategies, and I got 0 passing designs.
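For the first strategy, this is roughly how I pulled the target-side interface residues out of the accepted design (a Biopython sketch; the chain IDs, the file name, and the 5 Å heavy-atom cutoff are my assumptions about the output):

```python
# Sketch of strategy 1: list target residues within 5 A of the binder in an
# accepted design. Chain IDs, file name, and cutoff are assumptions.
from Bio.PDB import PDBParser, NeighborSearch

model = PDBParser(QUIET=True).get_structure("design", "accepted_design.pdb")[0]
binder_atoms = [a for a in model["B"].get_atoms() if a.element != "H"]
search = NeighborSearch(binder_atoms)

interface = set()
for res in model["A"]:
    if res.id[0] != " ":            # skip waters / hetero records
        continue
    for atom in res:
        if search.search(atom.coord, 5.0):  # any binder heavy atom in range
            interface.add(res.id[1])
            break

print("Target interface residues:", sorted(interface))
```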
Can you please give me a hint on how to use the results of failed runs to increase the number of passing designs? How do I infer, from the logs and the heaps of output, WHY the second stage keeps failing ("No accepted MPNN designs found for this trajectory")?
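In the meantime, a rough sketch of what I am doing to dig into the MPNN-stage rejections: summarize the per-design statistics and eyeball which metrics sit on the wrong side of the filter thresholds. The file name `mpnn_design_stats.csv` and its columns are assumptions about the output layout; adjust to whatever your run writes.

```python
# Sketch: summarize MPNN-stage metrics to spot which filters reject designs.
# File name and columns are assumptions; adjust to what your run writes.
import pandas as pd

df = pd.read_csv("mpnn_design_stats.csv")
summary = df.select_dtypes("number").describe().T[["mean", "min", "max"]]
print(summary)
# Compare these ranges against the thresholds in the filters JSON to see
# which metric is doing the rejecting.
```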