Dataset

Train Data

The data directory contains 325 subfolders, each named with a hash, and each containing 100 consecutive frames of a grayscale video of cilia. The masks directory contains 211 PNG images, also named with hashes (corresponding to the subfolders of data), that identify the regions of the corresponding videos where cilia are present.

Also within the parent folder are two text files, train.txt and test.txt, which list the names of the videos in each split, one per line. Masks are provided in the masks folder only for the videos named in train.txt; masks for the remaining videos must be predicted. The training/testing split is 65/35, which works out to 211 videos for training and 114 for testing.
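
A minimal sketch of walking this layout in Python (the directory names data and masks, and the frame-file naming inside each hash folder, are assumptions here; adjust to match the actual files):

```python
import os
import imageio
import numpy as np

DATA_ROOT = "project4"  # assumption: a local copy of gs://uga-dsp/project4

# Read the list of training video hashes, one per line.
with open(os.path.join(DATA_ROOT, "train.txt")) as f:
    train_hashes = [line.strip() for line in f if line.strip()]

def load_video(video_hash):
    """Stack a video's 100 frames into one (frames, height, width) array."""
    frame_dir = os.path.join(DATA_ROOT, "data", video_hash)
    frames = [imageio.imread(os.path.join(frame_dir, name))
              for name in sorted(os.listdir(frame_dir))]
    return np.stack(frames)

video = load_video(train_hashes[0])
print(video.shape)  # expected: (100, height, width), possibly with a trailing channel axis (see below)
```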

The data are all available on GCP: gs://uga-dsp/project4
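
If a local copy is needed, a standard way to pull the files down is the gsutil CLI, e.g. `gsutil -m cp -r gs://uga-dsp/project4 .` (the -m flag parallelizes the copy).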

Data Format

The data are grayscale 8-bit images taken with DIC optics of cilia biopsies, published in this 2015 study. For each video you are provided 100 consecutive frames, which at the videos' 200 fps framerate amounts to roughly 0.5 seconds of real-time video. Since the videos are grayscale, if you read a single frame in and notice its data structure contains three color channels, you can safely pick one and drop the other two. The same goes for the masks.
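
A minimal sketch of that channel check (assuming imageio as the reader; any library that returns a NumPy array works the same way, and the filename is hypothetical):

```python
import imageio

frame = imageio.imread("some_frame.png")  # hypothetical filename

# Some readers return (height, width, 3) even for grayscale sources;
# the three channels are redundant, so keep just the first one.
if frame.ndim == 3:
    frame = frame[..., 0]

assert frame.ndim == 2  # now a plain (height, width) grayscale array
```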

Speaking of the masks: each mask has the same spatial dimensions (height, width) as the corresponding video. Each pixel, however, is labeled according to what that location contains in the video (see the sketch after this list):

• 2 corresponds to cilia (what you want to predict!)

• 1 corresponds to a cell

• 0 corresponds to background (neither a cell nor cilia)
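
For instance, a binary cilia target can be extracted from a mask like this (a sketch; the filename is hypothetical):

```python
import imageio
import numpy as np

mask = imageio.imread("masks/somehash.png")  # hypothetical filename
if mask.ndim == 3:       # drop redundant color channels, as above
    mask = mask[..., 0]

cilia = (mask == 2).astype(np.uint8)  # 1 where cilia, 0 elsewhere
print(int(cilia.sum()), "cilia pixels")
```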

[Figure] LEFT: a single frame from the grayscale video. RIGHT: the 3-label segmentation of the video (0: background, 1: cell, 2: cilia); the "cilia" label is what we're interested in for this project.

Reference: Project 4 Documentation