Results on COCO test-dev 2015:
Method | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AP medium | AP large |
---|---|---|---|---|---|
OpenPose (CMU-Pose) | 61.8 | 84.9 | 67.5 | 57.1 | 68.2 |
Detectron (Mask R-CNN) | 67.0 | 88.0 | 73.1 | 62.2 | 75.6 |
AlphaPose | 73.3 | 89.2 | 79.1 | 69.0 | 78.6 |
Results on the MPII full test set:
Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Ave |
---|---|---|---|---|---|---|---|---|
OpenPose (CMU-Pose) | 91.2 | 87.6 | 77.7 | 66.8 | 75.4 | 68.9 | 61.7 | 75.6 |
Newell & Deng | 92.1 | 89.3 | 78.9 | 69.8 | 76.2 | 71.6 | 64.7 | 77.5 |
AlphaPose | 91.3 | 90.5 | 84.0 | 76.4 | 80.3 | 79.9 | 72.4 | 82.1 |
More results and models are available in docs/MODEL_ZOO.md.
For pose tracking, please read trackers/README.md for details.
For CrowdPose support, please read docs/CrowdPose.md for details.
Colab: we provide a Colab example for a quick start.
Inference: run the inference demo:
./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME} # ${OUTPUT_DIR}, optional
Inference with SMPL (download the SMPL model basicModel_neutral_lbs_10_207_0_v1.0.0.pkl from here and put it in model_files/):
./scripts/inference_3d.sh ./configs/smpl/256x192_adam_lr1e-3-res34_smpl_24_3d_base_2x_mix.yaml ${CHECKPOINT} ${VIDEO_NAME} # ${OUTPUT_DIR}, optional
For the high-level API, see ./scripts/demo_api.py. To enable tracking, please refer to this page.
Training:
./scripts/train.sh ${CONFIG} ${EXP_ID}
Validation:
./scripts/validate.sh ${CONFIG} ${CHECKPOINT}
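For example, to validate the pretrained FastPose checkpoint used in the demo below against its config (this config/checkpoint pairing is illustrative; substitute your own):
./scripts/validate.sh ./configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml pretrained_models/fast_res50_256x192.pth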
Examples:
Demo using the FastPose model:
./scripts/inference.sh configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml pretrained_models/fast_res50_256x192.pth ${VIDEO_NAME}
# or
python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
# or, if you want to use yolox-x as the detector
python scripts/demo_inference.py --detector yolox-x --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
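If you need to process several image folders in one go, a thin Python wrapper around the CLI above is enough. A minimal sketch, reusing the config and checkpoint from the example; the second input folder is hypothetical:

```python
# Batch-run the AlphaPose demo CLI over several image folders.
# Only the flags shown above are used; run from the repository root.
import subprocess

CFG = "configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml"
CKPT = "pretrained_models/fast_res50_256x192.pth"

for indir in ["examples/demo/", "examples/demo2/"]:  # second folder is hypothetical
    subprocess.run(
        ["python", "scripts/demo_inference.py",
         "--cfg", CFG,
         "--checkpoint", CKPT,
         "--indir", indir],
        check=True,  # raise if any run fails
    )
```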
Train FastPose on the MSCOCO dataset:
./scripts/train.sh ./configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml exp_fastpose
For more detailed inference options and examples, please refer to GETTING_STARTED.md.
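For downstream processing, the demo writes its results to a JSON file. A minimal sketch for reading it, assuming the default COCO-style output file alphapose-results.json with flat [x, y, score] keypoint triples (the file path and layout are assumptions; GETTING_STARTED.md documents the authoritative format):

```python
# Parse AlphaPose's JSON results. The file name and the flat keypoint layout
# ([x, y, score] per joint) are assumptions based on the default COCO-style
# output; adjust them to your output settings.
import json

with open("examples/res/alphapose-results.json") as f:  # hypothetical output path
    detections = json.load(f)

for det in detections:
    flat = det["keypoints"]
    # Regroup the flat list into (x, y, confidence) triples, one per joint.
    joints = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    print(det["image_id"], f"score={det['score']:.2f}", len(joints), "joints")
```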
If AlphaPose helps your research, please cite these papers in your publications:
@article{alphapose,
author = {Fang, Hao-Shu and Li, Jiefeng and Tang, Hongyang and Xu, Chao and Zhu, Haoyi and Xiu, Yuliang and Li, Yong-Lu and Lu, Cewu},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
title = {AlphaPose: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time},
year = {2022}
}
@inproceedings{fang2017rmpe,
title={{RMPE}: Regional Multi-person Pose Estimation},
author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
booktitle={ICCV},
year={2017}
}
@inproceedings{li2019crowdpose,
title={Crowdpose: Efficient crowded scenes pose estimation and a new benchmark},
author={Li, Jiefeng and Wang, Can and Zhu, Hao and Mao, Yihuan and Fang, Hao-Shu and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={10863--10872},
year={2019}
}
If you use the 3D mesh reconstruction module, please also cite:
@inproceedings{li2021hybrik,
title={Hybrik: A hybrid analytical-neural inverse kinematics solution for 3d human pose and shape estimation},
author={Li, Jiefeng and Xu, Chao and Chen, Zhicun and Bian, Siyuan and Yang, Lixin and Lu, Cewu},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={3383--3393},
year={2021}
}
If you use the PoseFlow tracking module, please also cite:
@inproceedings{xiu2018poseflow,
author = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
title = {{Pose Flow}: Efficient Online Pose Tracking},
booktitle={BMVC},
year = {2018}
}
AlphaPose is freely available for non-commercial use and may be redistributed under these conditions. For commercial inquiries, please send an e-mail to mvig.alphapose[at]gmail[dot]com and cc lucewu[at]sjtu[dot]edu[dot]cn; we will send you the detailed agreement.