
Missing indicators of what to learn from output to improve results. #103

Open · bifxcore opened this issue Nov 21, 2024 · 6 comments

@bifxcore

I have now tried for weeks to get a binder to stick. I am using the hints that I could gather:

  • truncate target as much as possible
  • remove disordered parts
  • update local repo to keep up with commits
  • try different weights and filters
  • different hot spots

This led me to exactly one accepted design so far. Better than nothing :-) However, how do I build on this lone success to improve the outcome?

Logically, you would think that looking at the good design, I would be able to figure out what target residues ended up in the interface and use those in the next run. I could also look at the good trajectories and infer the common residues that are present in all interfaces. I tested both of these strategies, and I got 0 designs passing.
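The residue-intersection strategy described above can be sketched as a short script. This is a rough sketch only: the stats file name and the column holding interface residues are hypothetical placeholders, since the exact pipeline output layout isn't shown in this thread.

```python
# Hypothetical sketch of the "common interface residues" strategy:
# intersect the target residues seen at the interface across all
# accepted trajectories. File name and column are placeholders.
import csv
from functools import reduce

def interface_sets(path="design_stats.csv", column="InterfaceResidues"):
    """Yield one set of target residue IDs per design row.

    Assumes one row per design, with interface residues stored as a
    comma-separated string (an assumption about the output format).
    """
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            yield {r.strip() for r in row[column].split(",") if r.strip()}

def common_residues(sets_iter):
    """Intersect all per-design residue sets; empty input gives an empty set."""
    sets = list(sets_iter)
    return reduce(set.intersection, sets) if sets else set()
```

The intersection could then seed the hotspot list for the next run, per the strategy above.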

Please can you give me a hint on how to use the results of failed runs to try to increase the number of passing designs? How do I infer from the logs and heaps of output WHY the second stage keeps failing (No accepted MPNN designs found for this trajectory)?

@mdppdm

mdppdm commented Nov 21, 2024

Posted this as a new issue

I have the same question. From my last job there are some reasonable relaxed binders, but the job stopped (see below).

How do I continue this job with a lower acceptance rate? And/or, as bifxcore asked, how do I optimize the settings based on the results from the first job?

This could be a great tool, but I would like to be able to use it efficiently.

Trajectory successful, final pLDDT: 0.87
Trajectory successful, final pLDDT: 0.74
Trajectory successful, final pLDDT: 0.88
Trajectory successful, final pLDDT: 0.84
Trajectory successful, final pLDDT: 0.8
Trajectory successful, final pLDDT: 0.82
Trajectory successful, final pLDDT: 0.85
Trajectory successful, final pLDDT: 0.89
Trajectory successful, final pLDDT: 0.88
The ratio of successful designs is lower than defined acceptance rate! Consider changing your design settings!

@martinpacesa
Owner

Generally, the thing that affects design success the most is the target and the choice of hotspots. It could be that your target is simply too hard for the pipeline to handle. However, if you got one, there is hope! How many trajectories (based on entries in trajectory_stats.csv) did you have to do to get the 1?
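Counting sampled trajectories as suggested above takes only a couple of lines, assuming trajectory_stats.csv has a single header row followed by one row per trajectory (an assumption about the file layout):

```python
# Count sampled trajectories: one data row per trajectory after the header.
import csv

def count_trajectories(path="trajectory_stats.csv"):
    with open(path, newline="") as fh:
        return sum(1 for _ in csv.DictReader(fh))
```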

@bifxcore
Author

> However, if you got one, there is hope! How many trajectories (based on entries in trajectory_stats.csv) did you have to do to get the 1?

@martinpacesa
Owner

That's probably very few! For hard targets one might need to sample up to 3000.

@bifxcore
Author

> one might need to sample up to 3000

The run died with "The ratio of successful designs is lower than defined acceptance rate". Current acceptance_rate=0.01. I suppose I could set it to zero and let it run for a while.

@martinpacesa
Owner

You can set enable_rejection_check to false.
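A minimal sketch of flipping that flag in the settings JSON before relaunching. The file path and the assumption that the settings file is a flat JSON object are placeholders; only the setting name enable_rejection_check comes from this thread.

```python
# Disable the rejection check in a BindCraft-style settings JSON.
# Assumes the settings file is a flat JSON object (an assumption).
import json

def disable_rejection_check(path):
    with open(path) as fh:
        settings = json.load(fh)
    settings["enable_rejection_check"] = False
    with open(path, "w") as fh:
        json.dump(settings, fh, indent=4)
```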

3 participants