
AttributeError: 'Net' object has no attribute 'backward_from_layer' #9

Open
zcboluo opened this issue Jul 18, 2016 · 5 comments


zcboluo commented Jul 18, 2016

Hi,

I am a beginner in deep learning and programming. When I run "find_fooling_image.py" in "./caffe/ascent", the following error occurs:

AttributeError: 'Net' object has no attribute 'backward_from_layer'

I don't know how to fix it. Can you help me?

Thanks!


zcboluo commented Jul 18, 2016

I have seen the closed issue about "no attribute 'backward_from_layer'" and solved this problem.

But another problem has occurred:


    grad: 0.0 0.0 0.0
    Grad 0, failed
    Result: no convergence


The grad is always 0. It may be because of the line "diffs = net.blobs[push_layer].diff * 0" in "find_fooling_image.py",

or it may be because I used the model 'bvlc_reference_caffenet.caffemodel' instead, since I couldn't download the trained model from 'http://yosinski.cs.cornell.edu/yos_140311__caffenet_iter_450000'.
Where can I download it?


zcboluo commented Jul 19, 2016

@yosinski I am waiting for your help. Thank you very much!

yosinski (Contributor) commented

Hi @zcboluo, sorry for the inactive URL. Try this one instead:
http://c.yosinski.com/caffenet-yos-weights

Does that work? If not, make sure you're using a network definition prototxt containing the force_backward: true line, as in ours:
https://github.com/Evolving-AI-Lab/fooling/blob/ascent/caffe/ascent/deploy_1_forcebackward.prototxt#L7
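
As a quick check, here is a minimal sketch (assuming pycaffe, the deploy prototxt linked above, and blob names 'data' and 'prob'; adjust the file paths to your setup) that shows whether any gradient reaches the data blob. If force_backward is missing, the printed value stays at 0.0:

    import numpy as np
    import caffe

    # The paths below are placeholders; point them at your own prototxt and caffemodel.
    net = caffe.Net('deploy_1_forcebackward.prototxt', 'caffenet-yos-weights', caffe.TEST)

    net.forward()
    net.blobs['prob'].diff[...] = 0
    net.blobs['prob'].diff[0, 281] = 1.0   # push an arbitrary class; 281 is just an example
    net.backward()

    print 'max |grad| at data blob:', np.abs(net.blobs['data'].diff).max()
    # 0.0 means backward never reached the input; nonzero means force_backward is in effect.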


zcboluo commented Jul 20, 2016

@yosinski Thank you very much. The model works. But I have a new problem:
When I put the generated picture back into the model, the result is different.
Here is the code:


    # 'transformer', 'net', 'mn4d', 'labels', and 'push_idx' come from earlier in the script.
    from numpy import newaxis, minimum, maximum, unravel_index
    import caffe

    im = caffe.io.load_image('291_maj_Xpm.png')
    tmp = transformer.preprocess('data', im)   # converts rgb -> bgr

    X0 = tmp[newaxis, :]
    X = minimum(255.0, maximum(0.0, X0 + mn4d)) - mn4d     # Crop all values to [0,255]
    out = net.forward_all(data=X)
    acts = net.blobs['prob'].data

    iimax = unravel_index(acts.argmax(), acts.shape)[1:]   # chop off batch idx of 0
    push_label = labels[push_idx]
    print 'Push idx: %d, val: %g (%s)\n      Max idx: %d, val: %g (%s)' % (
        push_idx, acts[0][push_idx], push_label, iimax[0], acts.max(), labels[iimax[0]])

However, when I replace 'im' with 'best_X' (returned from the function 'find_image'), the result is the same.

Why is 'im' different from 'best_X'? I wonder if there is any mistake I have not noticed.
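
One way to narrow this down (a sketch, assuming 'X' is the preprocessed array computed above and 'best_X' is the same-shaped array returned by 'find_image') would be to compare the two arrays directly:

    import numpy as np

    diff = np.abs(X - best_X)            # X: preprocessed reloaded image; best_X: array from find_image
    print 'max abs difference: ', diff.max()
    print 'mean abs difference:', diff.mean()
    # Any nonzero difference would come from the save/load round trip
    # (e.g. 8-bit quantization when writing the PNG) or from a preprocessing mismatch.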


zcboluo commented Jul 21, 2016

@yosinski
I found that the line 'X = minimum(255.0, maximum(0.0, X + mn4d)) - mn4d  # Crop all values to [0,255]' doesn't actually keep 'X' within [0,255]. I wonder if this is the reason.
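
To illustrate with concrete numbers (a sketch; the scalar 104.0 just stands in for one entry of the per-channel mean 'mn4d'):

    from numpy import array, minimum, maximum

    mean = 104.0                                    # stand-in for one entry of mn4d
    X = array([-200.0, -50.0, 0.0, 100.0, 300.0])   # mean-subtracted values, some out of range

    X = minimum(255.0, maximum(0.0, X + mean)) - mean
    print X
    # -> [-104., -50., 0., 100., 151.]
    # X + mean is clipped to [0, 255] in raw image space, so after subtracting the
    # mean again X lies in [-mean, 255 - mean], not in [0, 255].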
