
Novel machine learning methodologies for human action correction in videos

The goal of this research is to motivate patients with disabilities to recover in a faster, easier and more enjoyable way. It will apply artificial intelligence methods to software that turns traditional physical therapy exercises into interactive applications, including video cogniexergames. This helps patients perform the right exercises, gives them incentives to progress, and tracks not only when they have done the exercises but how effectively they are doing them. It contributes towards improving patients' quality of life, while setting the standard for rehabilitative and long-term care through personalised technology.

This is part of ongoing research in the group and is directed towards and tailored for patients with neurological disabilities (recovering from stroke, hemiparesis, tetraparesis, Parkinson’s disease), with orthopaedic problems (fractures, surgeries, musculotendinous disorders), as well as patients with age-related problems (arthritis, falls). The rehabilitation software includes a variety of exergames for developing upper limb coordination, improving upper limb range of motion, reaching objects, speeding up constrained movements, and more. For all the conditions mentioned, exergames are a convenient form of physical exercise and are beginning to be employed in many rehabilitation domains. However, the exercises are not individualised and, moreover, are not really designed for a targeted condition, age group, etc.

The focus of this research is to design and evaluate advanced, cross-disciplinary artificial intelligence methods able to dynamically recognise, anticipate and interpret patients’ complex gestures and movements as they play exergames. Specifically, the research will concentrate on motion/action quality assessment from a correctness perspective, using machine learning methods. A human action usually lasts from several seconds to a few minutes, and its data is spatio-temporal (a sequence of frames or images in time, segmented from a video).
The data available comprise skeleton data, joint-angle data and depth images, recorded using Kinect and Orbbec cameras. We are in the process of gathering additional data using wearable sensors. The final goal will be to design, develop and apply machine learning methods for creating a module that can automatically act as an individual intelligent (virtual) recommendation system for each patient. Temporal Convolutional Networks, Rough Path Theory and Dynamic Time Warping are among the methods tested so far, with promising results, but more advanced methods remain to be tested and/or adapted to this problem.
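To illustrate one of the methods mentioned above, the sketch below shows how Dynamic Time Warping could, in principle, score the correctness of a patient's movement by aligning their per-frame joint-angle sequence against a reference recording of the exercise performed correctly. The data format (tuples of joint angles per frame), the Euclidean frame distance and the example values are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal Dynamic Time Warping (DTW) sketch for comparing a patient's
# joint-angle sequence against a reference "correct" exercise recording.
from math import inf, sqrt

def dtw_distance(seq_a, seq_b):
    """Classic O(n*m) DTW between two sequences of equal-length
    feature vectors (e.g. per-frame joint angles in degrees)."""
    n, m = len(seq_a), len(seq_b)
    # cost[i][j] = minimal cumulative cost of aligning seq_a[:i] with seq_b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two frames' angle vectors
            d = sqrt(sum((x - y) ** 2 for x, y in zip(seq_a[i - 1], seq_b[j - 1])))
            cost[i][j] = d + min(cost[i - 1][j],      # skip a reference frame
                                 cost[i][j - 1],      # skip a patient frame
                                 cost[i - 1][j - 1])  # match frames
    return cost[n][m]

# Toy example: (elbow, shoulder) angles per frame; the patient performs the
# movement slightly differently and at a different speed than the reference.
reference = [(90, 10), (70, 20), (50, 30), (30, 40)]
patient = [(88, 12), (72, 18), (71, 19), (52, 31), (33, 38)]
print(dtw_distance(patient, reference))
```

A lower cumulative cost means the patient's movement follows the reference trajectory more closely, regardless of execution speed, which is why DTW is a natural baseline for this kind of correctness assessment.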

How to apply

If you are interested in applying for the above PhD topic please follow the steps below:

  1. Contact the supervisor by email or phone to discuss your interest and find out if you would be suitable. Supervisor details can be found on this topic page. The supervisor will guide you in developing the topic-specific research proposal, which will form part of your application.
  2. Click on the 'Apply here' button on this page and you will be taken to the relevant PhD course page, where you can apply using an online application.
  3. Complete the online application indicating your selected supervisor and include the research proposal for the topic you have selected.

Good luck!

This is a self-funded topic

Brunel offers a number of funding options to research students that help cover the cost of their tuition fees, contribute to living expenses, or both. The UK Government is also offering Doctoral Student Loans for eligible students, and there is some funding available through the Research Councils. Many of our international students benefit from funding provided by their governments or employers. Brunel alumni enjoy tuition fee discounts of 15%.