ActionAI is a Python library for training machine learning models to classify human actions. It is a generalization of our yoga smart personal trainer, which is included in this repo as an example.
These instructions show how to prepare your image data, train a model, and deploy the model to classify human action from image samples. See the deployment section for notes on how to deploy the project on a live stream.
Add the smellslikeml PPA and install with the following:
```bash
sudo add-apt-repository ppa:smellslikeml/ppa
sudo apt update

# Install with:
sudo apt-get install actionai
```
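As a quick sanity check, you can confirm the executable landed on your PATH (this assumes the package installs a binary named actionai, which all the commands below rely on):

```bash
# Verify the actionai executable is installed and on the PATH
which actionai
```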
Make sure to configure the working directory with:
```bash
actionai configure
```
Organize your training data in subdirectories like the example below. The actionai CLI will automatically build a dataset from subdirectories of videos, where each subdirectory name is a category label (a shell sketch for creating this layout follows the tree below).
```
.
└── dataset/
    ├── category_1/
    │   └── *.mp4
    ├── category_2/
    │   └── *.mp4
    ├── category_3/
    │   └── *.mp4
    └── ...
```
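As a concrete sketch, assuming hypothetical category names (squat, lunge, pushup) and an existing folder of labeled clips, you could build that layout from the shell:

```bash
# Create one subdirectory per action category (names here are placeholders)
mkdir -p dataset/{squat,lunge,pushup}

# Sort labeled clips into their category folders
cp ~/clips/squats/*.mp4  dataset/squat/
cp ~/clips/lunges/*.mp4  dataset/lunge/
cp ~/clips/pushups/*.mp4 dataset/pushup/
```

Each leaf directory becomes one class label, and every .mp4 inside it becomes a training sample for that class.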
Then you can train a model with:
```bash
actionai train --data=/path/to/your/data/dir --model=/path/to/your/model/dir
```
Then run inference on a video with:

```bash
actionai predict --model=/path/to/your/model/dir --video=/path/to/your/video.mp4
```
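Putting the steps together, an end-to-end run might look like this; the dataset, model, and video paths below are placeholders:

```bash
# One-time setup of the working directory
actionai configure

# Train on the directory layout described above, then classify a new clip
actionai train --data=./dataset --model=./models/demo
actionai predict --model=./models/demo --video=./test_clips/sample.mp4
```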
View the default `config.ini` file included in this branch for additional configuration options. You can pass your own config file using the `--cfg` flag, as sketched below.
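For example, assuming you have copied the default `config.ini` from this branch and edited it, you could point the trainer at your copy (paths are placeholders):

```bash
# Start from the default config, tweak it, then pass it explicitly
cp config.ini my_config.ini
actionai train --data=./dataset --model=./models/demo --cfg=my_config.ini
```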
Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests to us.
This project is licensed under the GNU General Public License v3.0 - see the LICENSE.md file for details.