horse or human inceptionv3 miscategorizing #8

Open
tameyer1 opened this issue Mar 23, 2021 · 8 comments

Comments

@tameyer1 commented Mar 23, 2021
Hello

I was working on the horse or human transfer learning section. I am using the datasets for training and validation from the URLs you supply. No matter what horse image I try, the model classifies it as human. I figured my local setup might have issues, so I opened and ran the Colab notebook for transfer learning in the chapter 3 folder. I received the same incorrect results with a random horse picture from the web, as well as with some of the validation horse images uploaded to the Colab project.

@Rutraz commented Mar 24, 2021

Hey

I was having a similar issue. I realized that the images of horses and humans used to train the model in the first iteration of chapter 3 were 300 x 300, but later in the chapter, when we arrive at transfer learning, the inputs were 150 x 150. So I altered the given code back to 300 x 300, and that seems to work for me. The reason is probably that the model's input shape needs to match the shape of the data (in this case 300 x 300).
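A minimal sketch of that idea, using the same InceptionV3 setup as the chapter (the directory path here is just whatever the training zip was extracted to): the input_shape given to the base model and the target_size used by the generators need to agree.

from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = 300  # keep the model input and the generator target size in sync

pre_trained_model = InceptionV3(input_shape=(IMG_SIZE, IMG_SIZE, 3),
                                include_top=False,
                                weights=None)

train_datagen = ImageDataGenerator(rescale=1./255.)
train_generator = train_datagen.flow_from_directory(
    'horse-or-human/training/',        # assumed extraction path
    target_size=(IMG_SIZE, IMG_SIZE),  # must match input_shape above
    batch_size=20,
    class_mode='binary')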


@tameyer1 commented Mar 24, 2021

I had been trying it the opposite way, setting the training, validation, and test images to 150 x 150. I went ahead and tested with 300 x 300 in both the model training and the test script and received the same result: running the validation images against the model always returns human, with 1.0 for the class. So still no luck, but thanks for the reply.

My code:
#Training
import urllib.request
import zipfile
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.optimizers import RMSprop

weights_url = "https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5"
weights_file = "inception_v3.h5"
urllib.request.urlretrieve(weights_url, weights_file)

pre_trained_model = InceptionV3(input_shape=(300, 300, 3),
                                include_top=False,
                                weights=None)

pre_trained_model.load_weights(weights_file)

for layer in pre_trained_model.layers:
    layer.trainable = False

last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output

x = layers.Flatten()(last_output)

x = layers.Dense(1024, activation='relu')(x)

x = layers.Dropout(0.2)(x)

x = layers.Dense(1, activation='sigmoid')(x)

model = Model(pre_trained_model.input, x)

model.compile(optimizer=RMSprop(lr=0.001),
              loss='binary_crossentropy',
              metrics=['acc'])

training_url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip"
training_file_name = "horse-or-human.zip"
training_dir = 'horse-or-human/training/'
urllib.request.urlretrieve(training_url, training_file_name)
zip_ref = zipfile.ZipFile(training_file_name, 'r')
zip_ref.extractall(training_dir)
zip_ref.close()

validation_url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip"
validation_file_name = "validation-horse-or-human.zip"
validation_dir = 'horse-or-human/validation/'
urllib.request.urlretrieve(validation_url, validation_file_name)

zip_ref = zipfile.ZipFile(validation_file_name, 'r')
zip_ref.extractall(validation_dir)
zip_ref.close()

train_datagen = ImageDataGenerator(rescale=1./255.,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

#Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1.0/255.)

#Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(training_dir,
                                                    batch_size=20,
                                                    class_mode='binary',
                                                    target_size=(300, 300))

#Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(validation_dir,
                                                        batch_size=20,
                                                        class_mode='binary',
                                                        target_size=(300, 300))

history = model.fit_generator(
    train_generator,
    validation_data=validation_generator,
    epochs=15,
    verbose=1)

model.save('horse_or_human.h5')

#Test script

import tensorflow as tf
from tensorflow.keras.preprocessing import image
import numpy as np
import os

model = tf.keras.models.load_model('horse_or_human.h5')

#predicting images
path = 'horse-or-human/validation/horses/'
directories = os.listdir(path)

for file in directories:
    img = image.load_img(path + file, target_size=(300, 300))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)

    images = np.vstack([x])
    classes = model.predict(images)
    print(classes)
    print(classes[0])
    if classes[0] > 0.5:
        print(file + " image is a human")
    else:
        print(file + " image is a horse")

@emin commented Jul 24, 2021

Yes, I have the same problem as well. Using the 300 x 300 adjustment didn't work either. I don't know what the issue is; since I've started my ML journey with this book, I'd like to know what's wrong here.

@lmoroney

@bartpotrykus commented Feb 27, 2022

@tameyer1 Hey, I know your question is a year old, but I encountered the same issue. The reason is that the test image is not being normalized. Adding
x = x / 255.0
is the line that made the classification correct.
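For reference, here is a minimal sketch of tameyer1's prediction loop with that one line added (paths and the saved model name follow the test script above):

import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image

model = tf.keras.models.load_model('horse_or_human.h5')
path = 'horse-or-human/validation/horses/'

for file in os.listdir(path):
    img = image.load_img(path + file, target_size=(300, 300))
    x = image.img_to_array(img)
    x = x / 255.0  # normalize exactly as the training generator did (rescale=1./255)
    x = np.expand_dims(x, axis=0)
    classes = model.predict(np.vstack([x]))
    if classes[0] > 0.5:
        print(file + " image is a human")
    else:
        print(file + " image is a horse")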

@semajson commented Jan 22, 2023

I hit the exact same issue.

It looks like this might be a bug in: https://github.com/lmoroney/tfbook/blob/master/chapter3/transfer_learning.ipynb

where rather than:

for fn in uploaded.keys():
 
  # predicting images
  path = fn
  img = image.load_img(path, target_size=(150, 150))
  x = image.img_to_array(img)
  x = np.expand_dims(x, axis=0)

  images = np.vstack([x])
  classes = model.predict(images, batch_size=10)
  print(fn)
  print(classes)

it should be:

for fn in uploaded.keys():
 
  # predicting images
  path = fn
  img = image.load_img(path, target_size=(150, 150))
  x = image.img_to_array(img)
  x = np.expand_dims(x, axis=0)
  x = x / 255.0

  images = np.vstack([x])
  classes = model.predict(images, batch_size=10)
  print(fn)
  print(classes)

However, there are differences that might explain why I needed to add x = x / 255.0 on my machine:

  • I'm running in my WSL env, using tensorflow-cpu (version 2.11.0)
  • I had to replace img = image.load_img(path, target_size=(150, 150)) with img = tf.keras.utils.load_img(test_image, target_size=(300, 300)) to get my code to run without errors
  • I had to replace x = image.img_to_array(img) with x = tf.keras.utils.img_to_array(img) to get my code to run without errors

I think the next step to figure out whether this is an error in https://github.com/lmoroney/tfbook/blob/master/chapter3/transfer_learning.ipynb is to run that code on Colab and see if x = x / 255.0 is needed there.

@semajson commented Jan 22, 2023

I ran https://github.com/lmoroney/tfbook/blob/master/chapter3/transfer_learning.ipynb on Colab, and needed to make the following changes:

Added:

  • import tensorflow as tf
  • x = x / 255.0

Changes:

  • img = image.load_img(path, target_size=(150, 150)) -> img = tf.keras.utils.load_img(path, target_size=(150, 150))
  • x = image.img_to_array(img) -> x = tf.keras.utils.img_to_array(img)

After these changes, things ran correctly and the images I uploaded were identified correctly.
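Putting those changes together, the updated prediction cell would look roughly like this (a sketch against a recent TF/Keras, assuming the usual google.colab files.upload() step from the notebook and the model already trained in the earlier cells):

import numpy as np
import tensorflow as tf
from google.colab import files

uploaded = files.upload()

for fn in uploaded.keys():
    # predicting images (model is the trained model from the notebook's earlier cells)
    img = tf.keras.utils.load_img(fn, target_size=(150, 150))
    x = tf.keras.utils.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = x / 255.0  # normalize, matching the rescale=1./255 used at training time

    classes = model.predict(np.vstack([x]), batch_size=10)
    print(fn)
    print(classes)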

@semajson

@lmoroney it looks like people can't just raise PRs against this repo.

What is the correct way to get this issue fixed?

@semajson

Ah, I just realised what I was doing wrong - leave this with me and I will raise a PR.
