Human Keypoint Recognition

Human keypoint recognition is an AI algorithm that detects 17 joint points (keypoints) on the human body from images or video. It is utilized across a wide range of applications requiring pose estimation, including sports motion analysis, fall detection in nursing care settings, and work posture assessment in factories.

Algorithm Overview

A deep learning-based pose estimation model detects 17 keypoints on the human body. Each keypoint corresponds to one of the following body parts:

Index  Body Part         Index  Body Part
0      Nose              9      Left Wrist
1      Left Eye          10     Right Hip
2      Right Eye         11     Left Hip
3      Left Ear          12     Right Knee
4      Right Ear         13     Left Knee
5      Right Shoulder    14     Right Ankle
6      Left Shoulder     15     Left Ankle
7      Right Elbow       16     Right Wrist
8      Left Elbow
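As a concrete reference, the index-to-body-part mapping can be written out in code. The `joint_angle` helper below is an illustrative utility for downstream analysis (e.g. form analysis), not part of the product API; index 16 is taken to be Right Wrist, the one part of the standard 17-point set not named elsewhere in the table.

```python
import math

# Keypoint indices as listed in the table above.
KEYPOINTS = {
    0: "Nose", 1: "Left Eye", 2: "Right Eye", 3: "Left Ear", 4: "Right Ear",
    5: "Right Shoulder", 6: "Left Shoulder", 7: "Right Elbow", 8: "Left Elbow",
    9: "Left Wrist", 10: "Right Hip", 11: "Left Hip", 12: "Right Knee",
    13: "Left Knee", 14: "Right Ankle", 15: "Left Ankle", 16: "Right Wrist",
}

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between segments b->a and b->c,
    where each point is an (x, y) pixel coordinate."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# Example: left elbow angle from shoulder (6), elbow (8), wrist (9).
print(joint_angle((100, 100), (100, 150), (150, 150)))  # 90 degrees
```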

Performance Metrics

Model                         Accuracy (mAP pose@0.5)
Person Pose-S (Lightweight)   86.3
Person Pose-M (Standard)      89.3

Edge AI Board (RV1126B) Execution Efficiency

Model                         Processing Time
Person Pose-S (Lightweight)   59 ms
Person Pose-M (Standard)      103 ms
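These per-frame latencies translate directly into upper bounds on throughput, which can be checked with a quick back-of-the-envelope calculation:

```python
# Frame rates implied by the per-frame latencies above (inference only,
# ignoring capture and post-processing overhead).
latency_ms = {"Person Pose-S": 59, "Person Pose-M": 103}
for model, ms in latency_ms.items():
    print(f"{model}: {ms} ms/frame -> {1000 / ms:.1f} FPS")
# Person Pose-S: ~16.9 FPS, Person Pose-M: ~9.7 FPS
```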

Key Features

  • 17-point joint detection: High-precision detection of major body joints
  • Two-tier model selection: Choice between speed-priority (S) and accuracy-priority (M)
  • Real-time processing: Edge inference at approximately 59 ms with the lightweight model
  • Foundation for pose estimation: Pre-processing for fall detection, motion analysis, and sports form analysis

Use Cases

  • Fall detection in nursing care facilities (identification of unnatural postures)
  • Sports training form analysis
  • Factory work posture assessment (back pain prevention)
  • Rehabilitation motion recording
  • Gesture interfaces
  • Retail store customer behavior analysis
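The fall-detection use case can be sketched as a simple heuristic over detected keypoints. `looks_fallen` and its confidence threshold below are illustrative assumptions for the sketch, not the product's actual detection logic; it flags a pose whose torso lies closer to horizontal than vertical in image coordinates.

```python
def looks_fallen(kps, conf, min_conf=0.5):
    """Rough fall heuristic: the torso axis (shoulder midpoint to hip
    midpoint) is closer to horizontal than vertical in image space.
    kps: list of 17 (x, y) points; conf: list of 17 confidence scores.
    Indices follow the keypoint table: shoulders 5/6, hips 10/11."""
    torso = (5, 6, 10, 11)
    if any(conf[i] < min_conf for i in torso):
        return False  # not enough reliable keypoints to decide
    sx = (kps[5][0] + kps[6][0]) / 2
    sy = (kps[5][1] + kps[6][1]) / 2
    hx = (kps[10][0] + kps[11][0]) / 2
    hy = (kps[10][1] + kps[11][1]) / 2
    return abs(hx - sx) > abs(hy - sy)  # wider than tall -> lying down

# Upright pose: shoulders directly above hips -> not fallen.
standing = [(0, 0)] * 17
standing[5], standing[6] = (110, 100), (90, 100)
standing[10], standing[11] = (108, 200), (92, 200)
print(looks_fallen(standing, [1.0] * 17))  # False
```

A production system would combine such a check with temporal smoothing over several frames to avoid false alarms from momentary detection noise.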

Edge AI Board Implementation

Using the RV1126B NPU, keypoint detection completes in 59 ms per frame with the lightweight model and 103 ms with the standard model. Camera video is processed in real time, enabling immediate detection of postural anomalies.
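A frame-by-frame processing loop on the board might be structured as in the sketch below. `infer_keypoints` is a placeholder for the vendor NPU runtime call (simulated here with a sleep matching the lightweight model's ~59 ms latency), and `process_stream` is an illustrative wrapper, not a documented API.

```python
import time

def infer_keypoints(frame):
    """Placeholder for the board's pose-model call via the vendor NPU
    runtime; returns per-person keypoint lists. The sleep stands in for
    the ~59 ms lightweight-model inference latency."""
    time.sleep(0.059)
    return []

def process_stream(frames):
    """Run inference on each frame and record per-frame latency in ms."""
    latencies = []
    for frame in frames:
        t0 = time.monotonic()
        people = infer_keypoints(frame)
        latencies.append((time.monotonic() - t0) * 1000)
        # Downstream checks (fall detection, posture flags) would go here.
    return latencies

# Stand-in frames; on the device these would come from the camera feed.
ms = process_stream([object()] * 3)
```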