
Shreya's Notes Issue 121


This is Shreya's wiki for Issue 121, which evaluates DeepNeuron's performance on the Mouselight data against a baseline logistic classifier and/or multilayer perceptron.

DeepNeuron Functionality Guide

Overview

DeepNeuron can be applied to a neuron image stack in several ways:

  • For a neuron image stack, it can automatically detect neurite signals.
  • For a stack with detected 3D signals, it can automatically connect the signals to generate local segments.
  • For a stack with its associated automated reconstruction, it can be used as a filter to clean up false positive tracing and generate a refined result.
  • For a stack with its associated manual reconstructions, it can evaluate how consistent and reliable the reconstructions are.
  • For a stack with interactive human annotation via the user interface, it can label neurite types in real time.

Image Input info:

File -> open image/stack/surface_file in a new window

Test image 1: test_image1.v3dpbd

Testimage1-2-D.png Testimage1-3-D.png

Test image 2: test_image2.v3dpbd

Testimage2-2-D.png Testimage2-3-D.png

Module 1: Neurite Signal Detection

Automatically identify 3D dendritic and axonal signals from background.

Input: All below files can be found here: deep_learning_models/2_labels/2D_CNN_5_layers

  • Deploy file: deploy.prototxt
  • Trained model file: caffenet_train_iter_270000.caffemodel
  • Database mean file: 2D_CNN_5_layers/imagenet_mean.binaryproto
  • Step size: can be anything; 10 and 5 were used here

Output:

test_image1.v3dpbd_detection.swc (step size of 10)

image1-module1-stepsize10.png

test_image1.v3dpbd_detection.swc (step size of 5)

image1-module1-stepsize5.png

test_image2.v3dpbd_detection.swc (step size of 10)

image2-module1-stepsize10.png

test_image2.v3dpbd_detection.swc (step size of 5)

image2-module1-stepsize5.png
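The detection outputs above are plain SWC files, so they can be inspected outside Vaa3D. Below is a minimal Python sketch for loading one, assuming the standard seven-column SWC layout (id, type, x, y, z, radius, parent) and using pandas; the file name is just the Module 1 output shown above.

```python
# Minimal sketch for inspecting a Module 1 output SWC file.
# Column names follow the standard SWC convention; the path is an example.
import pandas as pd

SWC_COLUMNS = ["id", "type", "x", "y", "z", "radius", "parent"]

def load_swc(path):
    """Read an SWC file into a DataFrame, skipping '#' comment lines."""
    return pd.read_csv(path, sep=r"\s+", comment="#", names=SWC_COLUMNS, index_col=False)

if __name__ == "__main__":
    detections = load_swc("test_image1.v3dpbd_detection.swc")
    print(detections.shape[0], "detected points")
    print(detections[["x", "y", "z"]].head())
```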

Module 2: Neurite Connection

Automatically connect local neurite signals to form neuronal trees.

Input: All below files can be found here: deep_learning_models/2_labels/siamese_networks

  • Deploy file: mnist_siamese.prototxt
  • Trained model file: full_siamese_iter_450000.caffemodel
  • SWC file: Output of Module 1 (Neurite Signal Detection), e.g. test_image1.v3dpbd_import.tif_detection.swc, test_image2.v3dpbd_import.tif_detection.swc

Output:

test_image1.v3dpbd_import.tif_detection.swc_connection.swc

image1-module2-stepsize10.png

test_image2.v3dpbd_import.tif_detection.swc_connection.swc

image2-module2-stepsize10.png

Module 3: Smart Pruning

Filter false positives and refine automated reconstruction results.

Input: All below files can be found here: deep_learning_models/2_labels/2D_CNN_5_layers

  • Deploy file: deploy.prototxt
  • Trained model file: caffenet_train_iter_270000.caffemodel
  • Database mean file: imagenet_mean.binaryproto
  • Step size: can be anything; 10 was used here
  • SWC file: Output of Module 2 (Neurite Connection), e.g. test_image1.v3dpbd_import.tif_detection.swc_connection.swc, test_image2.v3dpbd_import.tif_detection.swc_connection.swc

Output:

test_image1.v3dpbd_import.tif_detection.swc_connection.swc_pruned.swc

image1-module3-stepsize10.png

test_image2.v3dpbd_import.tif_detection.swc_connection.swc_pruned.swc

image2-module3-stepsize10.png

Module 4: Reconstruction Evaluation

Evaluate manual reconstructions and provide a quality score.

Input: All below files can be found here: deep_learning_models/2_labels/2D_CNN_5_layers

  • Deploy file: deploy.prototxt
  • Trained model file: caffenet_train_iter_270000.caffemodel
  • Database mean file: imagenet_mean.binaryproto
  • Step size: can be anything; 10 was used here
  • SWC file: Output of Module 1 (Neurite Signal Detection), e.g. test_image1.v3dpbd_import.tif_detection.swc, test_image2.v3dpbd_import.tif_detection.swc

Output: Evaluation Score.

Evaluation score for test image 1 is 0.995157.

image1-module4-stepsize10.png

Evaluation score for test image 2 is 0.977723.

image2-module4-stepsize10.png

Module 5: Classification of Dendrites and Axons

Automatically classify neurite types during real-time annotation.

Preprocessing: 3D curves need to be drawn on the neuron before running this module.

Input: All below files can be found here: deep_learning_models/3_labels (either 2D_CNN_5_layers or 2D_CNN_6_layers can be used)

  • Deploy file: deploy.prototxt
  • Trained model file: caffenet_train_iter_130000.caffemodel
  • Database mean file: imagenet_mean.binaryproto

DeepNeuron Mouselight Experiment Plan (in progress)

TL;DR: We want to compare DeepNeuron’s performance on the Mouselight data to that of a baseline segmentation model.

Tommy’s poster for reference.

This experiment will be modeled on the above poster, except that we will replace the state-of-the-art algorithm shown there with DeepNeuron. It will produce an ROC curve like the one shown in the poster.

Steps to do this:

0. Load the benchmarking data provided by Tommy

1. Build a logistic classifier in the Algorithms module of Brainlit

This will be a baseline segmentation model whose input is the voxel’s intensity.
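A rough sketch of what this baseline could look like, using scikit-learn directly rather than the eventual Brainlit class, and assuming each benchmarking subvolume comes with a matching binary label volume (1 = neuron trace, 0 = background):

```python
# Sketch of an intensity-only logistic baseline; scikit-learn stands in for the
# eventual Brainlit implementation, and label volumes are assumed to be binary.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_intensity_baseline(train_volumes, train_labels):
    """Fit logistic regression where the only feature is voxel intensity."""
    X = np.concatenate([v.ravel() for v in train_volumes]).reshape(-1, 1)
    y = np.concatenate([l.ravel() for l in train_labels])
    clf = LogisticRegression(class_weight="balanced")  # foreground voxels are rare
    return clf.fit(X, y)

def predict_volume(clf, volume):
    """Per-voxel probability that a voxel belongs to a neuron trace."""
    probs = clf.predict_proba(volume.ravel().reshape(-1, 1))[:, 1]
    return probs.reshape(volume.shape)
```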

1.5 Optional: Build a multilayer perceptron class in the Algorithms module of Brainlit

Work with Fredrick, Alisha, and Chenyang to coordinate who wants to do this
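If the MLP baseline gets built, a minimal version could reuse the same per-voxel features. The sketch below again uses scikit-learn as a placeholder for a Brainlit class; the hidden-layer sizes and iteration count are illustrative, not tuned values.

```python
# Sketch of the optional MLP baseline on the same intensity features;
# architecture and max_iter are illustrative placeholders.
from sklearn.neural_network import MLPClassifier

def fit_mlp_baseline(X_train, y_train):
    mlp = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500)
    return mlp.fit(X_train, y_train)
```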

2. Test the logistic classifier (and multilayer perceptron) on Brainlit benchmarking data

3. Run DeepNeuron on the Brainlit data

Still need to decide which module. Maybe 1 (neurite detection) or 2 (neurite connection)?

4. Generate an ROC curve
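A possible sketch of step 4, assuming each model produces per-voxel scores that can be flattened and compared against the ground-truth segmentation of the test subvolumes (variable names are placeholders):

```python
# Sketch of the ROC comparison; `curves` pairs a label with flattened
# ground-truth labels and predicted scores for the test subvolumes.
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

def plot_roc(curves):
    """curves: iterable of (label, y_true, y_score) tuples."""
    for label, y_true, y_score in curves:
        fpr, tpr, _ = roc_curve(y_true, y_score)
        auc = roc_auc_score(y_true, y_score)
        plt.plot(fpr, tpr, label=f"{label} (AUC = {auc:.3f})")
    plt.plot([0, 1], [0, 1], "k--", label="Chance")
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()

# Example usage (names are placeholders):
# plot_roc([("Logistic baseline", y_test, baseline_scores),
#           ("DeepNeuron", y_test, deepneuron_scores)])
```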

Data Preprocessing Steps

  • 4 subvolumes are removed due to trace misalignment
  • Subvolumes are separated randomly into a training set of size 38, a validation set of size 4, and a testing set of size 4 (see the split sketch below)
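A minimal sketch of that random split, assuming the 46 remaining subvolumes are identified by integer IDs (the IDs and seed are placeholders):

```python
# Random 38 / 4 / 4 split of the remaining subvolumes; IDs and seed are placeholders.
import random

subvolume_ids = list(range(46))  # the 46 subvolumes left after removing 4
random.seed(0)
random.shuffle(subvolume_ids)
train_ids = subvolume_ids[:38]
val_ids = subvolume_ids[38:42]
test_ids = subvolume_ids[42:]
```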

To convert traces into image segmentations:

  • For points, we fill in the nearest voxel
  • For edges, we fill using the Bresenham algorithm
  • We fill in voxels within 1 µm of previously filled voxels (a sketch of this conversion follows the list)
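A rough sketch of that conversion, under stated assumptions: trace points are given in voxel coordinates, edges are index pairs, voxel spacing is in µm, and dense linear interpolation stands in for the Bresenham line step.

```python
# Sketch of converting a trace (points + edges) into a binary voxel segmentation.
# Linear interpolation stands in for Bresenham; the 1 µm dilation uses a
# Euclidean distance transform with anisotropic voxel spacing.
import numpy as np
from scipy.ndimage import distance_transform_edt

def trace_to_labels(points, edges, shape, spacing, radius_um=1.0):
    labels = np.zeros(shape, dtype=bool)
    hi = np.array(shape) - 1

    # Points: fill the nearest voxel.
    idx = np.clip(np.rint(points).astype(int), 0, hi)
    labels[tuple(idx.T)] = True

    # Edges: fill voxels along each segment (stand-in for the Bresenham step).
    for a, b in edges:
        n = int(np.linalg.norm(points[b] - points[a])) * 2 + 2
        line = np.clip(np.rint(np.linspace(points[a], points[b], n)).astype(int), 0, hi)
        labels[tuple(line.T)] = True

    # Include every voxel within radius_um of a previously filled voxel.
    dist = distance_transform_edt(~labels, sampling=spacing)
    return dist <= radius_um
```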

Common Errors Encountered

When attempting to read the benchmarking_data, I received a UnicodeDecodeError saying that charmap couldn't decode a character. This occurred because I downloaded the benchmarking data from OneDrive in pieces, which apparently produces files with a different encoding than downloading the data all at once. To resolve the error, I downloaded all the data at once in a single large zip file.
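As a general workaround (beyond re-downloading), text files can be opened with an explicit encoding so Python does not fall back to the platform's default charmap codec; the path below is a placeholder.

```python
# Force UTF-8 instead of the Windows default charmap codec; path is a placeholder.
with open("benchmarking_data/example_file.txt", encoding="utf-8", errors="replace") as f:
    contents = f.read()
```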