Home
Workshop coordinators: Nirav Merchant and Carlos Lizárraga. Data Science Institute, University of Arizona.
Tracking animal behavior noninvasively is crucial for many scientific fields. Extracting animal poses without markers is essential in biomechanics, genetics, ethology, and neuroscience, but it is challenging against dynamic backgrounds. DeepLabCut is an open-source toolbox that adapts a human pose-estimation algorithm, allowing users to train a neural network with minimal data to track features accurately. The current Python package includes features such as GUIs, performance improvements, and active-learning-based refinement, along with a step-by-step guide for creating a reusable analysis pipeline with a GPU in 1–12 hours. Docker environments and Jupyter Notebooks are also available for cloud resources such as Google Colaboratory or the CyVerse Discovery Environment.
DeepLabCut offers tools to create annotated training sets, train feature detectors, and analyze behavioral videos. It is used with mice, zebrafish, flies, and even rare subjects like babies and cheetahs.
Below is a list of DeepLabCut resources for learning how to use it:
- Main GitHub repository
- DeepLabCut Documentation
- Docs 1: Beginners Guide
- Docs 2: Manage Project
- Docs 3: Labeling GUI
- Docs 4: Neural Network training and evaluation in the GUI
- Docs 5: Video Analysis with DeepLabCut
- Docs: User Overview
- Workshop materials
The Problem of Pose Estimation: Understanding how animals and humans move is crucial in various fields, from biology and neuroscience to sports science and human-computer interaction. Extracting pose information (body part locations) from video data is a challenging task known as pose estimation. Traditional methods rely on handcrafted features and are often complex and domain-specific.
Deep Learning Revolutionizes Pose Estimation: Deep learning approaches, particularly convolutional neural networks (CNNs), have revolutionized pose estimation. CNNs can automatically learn relevant features from images, leading to more accurate and robust pose estimation. However, building and training deep learning models from scratch requires significant expertise and computational resources.
DeepLabCut: A User-Friendly Deep Learning Toolbox for Pose Estimation
DeepLabCut is a user-friendly, open-source toolbox specifically designed for animal pose estimation using deep learning. It simplifies the process by providing tools for data collection, annotation, training, and analysis.
The DeepLabCut library offers several advantages:
- Reduced Coding Burden: DeepLabCut eliminates the need to build complex deep learning pipelines from scratch. Users can focus on data preparation and customizing training configurations.
- Streamlined Workflow: DeepLabCut offers a structured workflow for pose estimation projects. It guides users through data collection, annotation, training, and evaluation stages.
- Built-in Functionality: DeepLabCut provides functionalities for data pre-processing, model training, evaluation, and video analysis. This reduces the need for external libraries and simplifies project development.
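The stages above map onto a small set of top-level DeepLabCut functions. The sketch below outlines a typical single-animal pipeline; the project name, experimenter name, and video paths are placeholders, and the labeling step opens an interactive GUI, so this is meant to be run step by step (ideally with a GPU) rather than as a batch script.

```python
import deeplabcut

# Create a project; returns the path to the generated config.yaml.
# Project name, experimenter, and video paths below are placeholders.
config = deeplabcut.create_new_project(
    "reach-task", "your-name", ["videos/session1.mp4"],
    copy_videos=True,
)

# Extract frames for labeling, then annotate them in the GUI.
deeplabcut.extract_frames(config, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config)  # opens the interactive labeling GUI

# Build the training dataset, then train and evaluate the network.
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)

# Run the trained model on new videos and make a labeled copy for inspection.
deeplabcut.analyze_videos(config, ["videos/session2.mp4"], save_as_csv=True)
deeplabcut.create_labeled_video(config, ["videos/session2.mp4"])
```

Each function reads its settings from the project's `config.yaml`, which is why the config path is threaded through every call.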
This learning plan leverages DeepLabCut to explore video pose analysis. Here are some potential study cases:
- Animal Behavior Analysis: Participants can analyze animal behavior in videos, such as tracking limb movements in walking insects or quantifying head orientation in freely behaving rodents.
- Human Motion Capture: DeepLabCut can be used for basic human motion capture tasks, like tracking body part movements during exercise routines or analyzing simple gestures.
- Object Pose Estimation: While DeepLabCut is primarily designed for animal and human pose estimation, it can be adapted for simpler object pose estimation tasks. For instance, participants could track the orientation of a toy car in a video.
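To give a flavor of the downstream analysis in these study cases: once keypoints are tracked, simple quantities such as head orientation follow from basic geometry. A minimal self-contained sketch (the keypoint names and coordinates are made up for illustration; a pose tracker only supplies the x/y positions):

```python
import math

def head_angle(nose, neck):
    """Orientation of the neck->nose vector, in degrees.

    nose, neck: (x, y) keypoint coordinates from any pose tracker.
    Note: in image coordinates the y-axis points down, so interpret
    the sign of the angle accordingly.
    """
    dx = nose[0] - neck[0]
    dy = nose[1] - neck[1]
    return math.degrees(math.atan2(dy, dx))

# Example: nose directly to the right of the neck -> 0 degrees.
print(head_angle((2.0, 1.0), (1.0, 1.0)))  # 0.0
```

The same pattern (differences of keypoint coordinates per frame) extends to limb angles, stride lengths, and velocities.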
By working with DeepLabCut, participants gain practical experience with deep learning applications in pose estimation while exploring animal or human movement in videos. This project provides a foundation for further exploration of deep learning techniques in computer vision tasks.
This plan guides participants to leverage the DeepLabCut toolbox for building a video pose estimation system.
Target Audience: Participants with basic Python programming knowledge and an interest in deep learning and computer vision.
Activities Plan:
- Week 0: Getting Ready
- Week 1: DeepLabCut Introduction and Environment Setup.
- Week 2: Data Collection and Annotation with DeepLabCut.
- Week 3: DeepLabCut Training and Evaluation.
- Week 4: Video Analysis and Refinement.
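For the Week 4 analysis step, DeepLabCut saves tracked points to HDF5/CSV with a three-level column index (scorer / bodypart / x, y, likelihood). A common first refinement step is masking low-confidence detections. The sketch below builds a tiny DataFrame in that layout with made-up numbers (the scorer name, bodyparts, and threshold are illustrative) to show the pattern:

```python
import numpy as np
import pandas as pd

# Mimic DeepLabCut's output layout: columns are (scorer, bodypart, coord).
cols = pd.MultiIndex.from_product(
    [["DLC_model"], ["nose", "tailbase"], ["x", "y", "likelihood"]],
    names=["scorer", "bodyparts", "coords"],
)
data = [
    [10.0, 20.0, 0.98, 50.0, 60.0, 0.30],  # frame 0: tailbase is low-confidence
    [11.0, 21.0, 0.99, 51.0, 61.0, 0.95],  # frame 1: both points confident
]
df = pd.DataFrame(data, columns=cols)

# Mask x/y wherever likelihood falls below a threshold (value is illustrative).
threshold = 0.6
for bp in df.columns.get_level_values("bodyparts").unique():
    bad = df[("DLC_model", bp, "likelihood")] < threshold
    df.loc[bad, [("DLC_model", bp, "x"), ("DLC_model", bp, "y")]] = np.nan

print(df[("DLC_model", "tailbase", "x")].tolist())  # [nan, 51.0]
```

Masked frames can then be interpolated or flagged for relabeling, which feeds DeepLabCut's active-learning refinement loop.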
This plan leverages DeepLabCut's capabilities, reducing the need for the participants to build a model from scratch. It focuses on practical application and encourages exploration of DeepLabCut's features.
Remember, this is a flexible framework. Adapt it based on participant progress, chosen complexity, and DeepLabCut's latest functionalities.
Related tools and papers:
- Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics
- Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling and cloud-native open-source tools
- SLEAP: A deep learning system for multi-animal pose tracking
- SuperAnimal pretrained pose estimation models for behavioral analysis
- Using DeepLabCut for 3D markerless pose estimation across species and behaviors
Created: 07/13/2024 (C. Lizárraga)
Updated: 07/15/2024 (C. Lizárraga)
DataLab, Data Science Institute, University of Arizona.