Clean up logging #10

Closed
lindonroberts opened this issue May 5, 2019 · 6 comments
@lindonroberts
Collaborator

Use nicer logging (more like fmincon) with one line printed per iteration

@lindonroberts lindonroberts self-assigned this May 5, 2019
@JesusTorrado

If I may, in this context I would suggest downgrading the per-evaluation logging.info call at

logging.info("Function eval %i at point %i has f = %.15g at x = " % (eval_num, pt_num, f) + str(x))

to logging.debug, so that the INFO logging level prints more manageable periodic progress info.
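A minimal, self-contained sketch of the effect of that change (the values here are made up purely for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)

# Per-evaluation detail: downgraded to DEBUG, so hidden at the INFO level.
logging.debug("Function eval %i at point %i has f = %.15g at x = %s", 1, 1, 0.5, [0.0, 1.0])

# Periodic progress info: kept at INFO, so still shown.
logging.info("Best objective so far: f = %.15g after %i evaluations", 0.5, 1)
```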

Also, thanks for developing this implementation! @cmbant and I are interfacing it with our sampler, and it appears to work well for our problems (though any progress on #3 would be very welcome!)

@lindonroberts
Collaborator Author

Hi Jesus - I agree with this (currently too much output is displayed). I'm glad it works well for you, pending improved handling of correlations (I'm finishing my PhD thesis at the moment, but hope to look into this later in the year).

I'm planning on doing something similar to MATLAB's optimisation routines (e.g. fmincon), where at each iteration they print one line with a couple of key variables (current best value, # evals, optimality measure, ...).
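A hedged sketch of what such fmincon-style per-iteration output could look like (the helper, its fields, and the logger name are hypothetical, not Py-BOBYQA's actual implementation):

```python
import logging

logger = logging.getLogger("pybobyqa")  # hypothetical logger name

def log_iteration(k, nevals, fbest, delta):
    # One line per iteration, fmincon-style: iteration count, number of
    # objective evaluations, best value so far, and the trust-region
    # radius as a rough optimality measure.
    logger.info("iter %4i | nf %5i | f = %.6e | delta = %.2e", k, nevals, fbest, delta)
```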

@JesusTorrado

> Hi Jesus - I agree with this (currently too much output is displayed). I'm glad it works well for you, pending improved handling of correlations (I'm finishing my PhD thesis at the moment, but hope to look into this later in the year).

Good luck with the thesis!

> I'm planning on doing something similar to MATLAB's optimisation routines (e.g. fmincon), where at each iteration they print one line with a couple of key variables (current best value, # evals, optimality measure, ...).

I'm not sure what "iteration" means in this sentence, but in general, differentiating between one logging level (say DEBUG) that prints something per target evaluation and a less verbose one (say INFO) that prints progress info once in a while would be great IMO.

If I may, I would also recommend declaring a logger and calling its logging methods, as described here, as opposed to calling info etc. directly from the logging module. This way the user can grab your logger with e.g. from pybobyqa import my_logger and change its verbosity level, name, etc. In your case, you may want to define a single one in __init__.py or solver.py, and then import it from the rest of the modules. I can write a very simple PR for that if you'd like.
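A minimal sketch of the suggested pattern (the file placement and logger name are illustrative, not how Py-BOBYQA is actually laid out):

```python
# pybobyqa/__init__.py (sketch)
import logging

logger = logging.getLogger("pybobyqa")
# Library convention: attach a NullHandler so the package stays silent
# unless the calling application configures logging itself.
logger.addHandler(logging.NullHandler())
```

Other modules could then do `from pybobyqa import logger` and call `logger.info(...)` etc., and a user could tune verbosity with `logging.getLogger("pybobyqa").setLevel(...)`.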

@JesusTorrado

JesusTorrado commented Mar 12, 2021

Hi @lindonroberts

I can see that you are now allowing more logging control in 1.20. That's great and really useful! I would suggest, as described above, splitting the logging levels a bit more finely: INFO for sparser progress info (e.g. every few rho updates and at soft restarts), and DEBUG for printing at each evaluation. But that's up to you, of course.

Also, you may want to consider creating a module-level logger with logger = logging.getLogger("name") and then using it like logger.info() etc., as explained here: https://docs.python.org/3/library/logging.html#logger-objects This way, people like me who use your code as a library within theirs can identify where the messages come from, e.g. using a logging format like "[logger_name] msg".
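On the consumer side, identifying the message origin might look like the following (a sketch using standard logging configuration, not anything specific to Py-BOBYQA):

```python
import logging

# Prefix each message with the name of the logger that emitted it, so
# library output (e.g. from "pybobyqa") is distinguishable from the
# host application's own messages.
logging.basicConfig(format="[%(name)s] %(levelname)s: %(message)s", level=logging.INFO)
```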

Thanks for the great work!

@lindonroberts
Collaborator Author

@JesusTorrado - I have (finally!) added module-level logging, thanks for this suggestion. At this stage I'm reluctant to change the levels of each message, but you should be able to deactivate the INFO messages produced for each evaluation (which are the only log messages produced by pybobyqa.util).
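Based on the comment above, a user could silence the per-evaluation messages with something like the following sketch (assuming the loggers are named after their modules, as is standard with logging.getLogger(__name__)):

```python
import logging

# Raise the threshold for pybobyqa.util only: this hides the
# per-evaluation INFO messages while keeping the rest of the
# package's output.
logging.getLogger("pybobyqa.util").setLevel(logging.WARNING)
```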

@JesusTorrado

JesusTorrado commented Nov 4, 2021

That's brilliant, thanks a lot! I'll update the pybobyqa version in our next release.

Note that my suggestion was not necessarily about changing the INFO logging level, but about having two separate levels, whichever they are: one that prints once per target evaluation, and another that prints progress information only sparsely (e.g. when specific values of the target metric are reached).
