Novel machine learning methodologies for human action correction in videos
The goal of this research is to motivate patients with disabilities to recover in a faster, easier and more enjoyable way. It will apply artificial intelligence methods to software that turns traditional physical therapy exercises into interactive applications, including video cogniexergames. This helps patients perform the right exercises, gives them incentives to progress and tracks not only whether they have done the exercises, but how effectively they are doing them. It contributes towards improving patients' quality of life, while setting the standard for rehabilitative and long-term care through personalized technology.

This is part of ongoing research in the group and is directed towards and tailored for patients with neurological disabilities (recovering from stroke, hemiparesis, tetraparesis, Parkinson’s disease), with orthopaedic problems (fractures, surgeries, musculotendinous disorders), as well as patients with age-related problems (arthritis, falls). The rehabilitation software includes a variety of exergames for developing upper limb coordination, improving upper limb range of motion, reaching objects, speeding up constrained movements, and many more. For all the conditions mentioned, exergames are a convenient form of physical exercise and are beginning to be employed in many rehabilitation domains. However, the exercises are not individualised and, moreover, are not really designed for a targeted condition, age group, etc.

The focus of this research is to design and evaluate advanced, cross-disciplinary artificial intelligence methods able to dynamically recognise, anticipate and interpret patients’ complex gestures and movements as they play exergames. Specifically, the research will concentrate on motion/action quality assessment from a correctness perspective, using machine learning methods. A human action usually lasts from several seconds to a few minutes, and its representation is spatio-temporal (a sequence of frames or images in time, segmented from a video).
The available data comprise skeleton data, joint-angle data and depth images, recorded using Kinect and Orbbec cameras. We are in the process of gathering additional data using wearable sensors. The final goal is to design, develop and apply machine learning methods to create a module that can automatically act as an individual, intelligent (virtual) recommendation system for each patient. Temporal Convolutional Neural Networks, Rough Path Theory and Dynamic Time Warping are among the methods tested so far, with promising results, but more advanced methods remain to be tested and/or adapted to this problem.
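To illustrate the kind of correctness assessment involved, the sketch below uses classic Dynamic Time Warping (one of the methods mentioned above) to compare a patient's movement trajectory against a reference recording. This is a minimal illustration only, not the project's actual implementation; the elbow-angle values and variable names are hypothetical, and real skeleton data would be multi-dimensional.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two 1-D sequences,
    e.g. a patient's joint-angle trajectory vs. a reference recording.
    DTW aligns the sequences non-linearly in time, so a correctly
    performed but slower repetition still scores close to the reference."""
    n, m = len(seq_a), len(seq_b)
    # cost[i][j] = minimal cumulative cost of aligning seq_a[:i] with seq_b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])   # local frame-to-frame cost
            cost[i][j] = d + min(cost[i - 1][j],       # skip a reference frame
                                 cost[i][j - 1],       # skip a patient frame
                                 cost[i - 1][j - 1])   # match frames
    return cost[n][m]

# Hypothetical elbow-angle trajectories (degrees) for one exercise repetition:
reference = [10, 30, 60, 90, 60, 30, 10]
patient = [10, 25, 55, 85, 88, 55, 25, 10]  # slower, slightly short of full flexion
print(dtw_distance(reference, patient))
```

A low DTW distance suggests the movement was performed correctly even if its timing differs from the reference, which is one reason elastic alignment measures are attractive for exercise quality assessment.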
This is a self-funded project.
Brunel offers a number of funding options to research students that help cover the cost of their tuition fees, contribute to living expenses, or both. See more information here: https://www.brunel.ac.uk/research/Research-degrees/Research-degree-funding. Recently, the UK Government made available doctoral student loans of up to £25,000 for UK and EU students, and there is some funding available through the Research Councils. Many of our international students benefit from funding provided by their governments or employers. Brunel alumni enjoy tuition fee discounts of 15%.