Federated Learning (FL) has emerged as a promising solution that facilitates the training of a high-performing centralised model without compromising the privacy of users. While successful, FL research is currently limited by the difficulty of establishing a realistic large-scale FL system at the early stages of experimentation; simulation can help accelerate this process. To facilitate efficient and scalable FL simulation of heterogeneous clients, we design and implement Protea, a flexible and lightweight client profiling component for federated systems built with the FL framework Flower. It automatically collects system-level statistics and estimates the resources needed by each client, allowing the simulation to run in a resource-aware fashion. The results show that our design successfully increases parallelism, achieving 1.66× faster wall-clock time and 2.6× better GPU utilisation, which enables large-scale experiments on heterogeneous clients.
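The resource-aware simulation idea can be sketched with Flower's built-in simulation entry point, which accepts a per-client resource specification. The sketch below assumes Flower's 1.x simulation API; `DummyClient` and `estimate_client_resources` are illustrative placeholders, not Protea's actual profiling code, which derives these figures from collected system-level statistics.

```python
# Minimal sketch (assumptions: Flower 1.x simulation API; DummyClient and
# estimate_client_resources are illustrative stand-ins, not Protea itself).
import flwr as fl
import numpy as np


class DummyClient(fl.client.NumPyClient):
    """A trivial client so the sketch is self-contained."""

    def get_parameters(self, config):
        return [np.zeros(10, dtype=np.float32)]

    def fit(self, parameters, config):
        return parameters, 1, {}          # (weights, num_examples, metrics)

    def evaluate(self, parameters, config):
        return 0.0, 1, {}                 # (loss, num_examples, metrics)


def client_fn(cid: str):
    # Newer Flower releases may require `DummyClient().to_client()` here.
    return DummyClient()


def estimate_client_resources() -> dict:
    # Placeholder for Protea-style profiling: the paper estimates per-client
    # CPU/GPU needs from collected system-level statistics.
    return {"num_cpus": 2, "num_gpus": 0.25}  # fractional GPUs allow packing


history = fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=100,
    client_resources=estimate_client_resources(),
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=fl.server.strategy.FedAvg(),
)
```

The `client_resources` estimate is the lever behind the reported gains: the smaller the per-client CPU/GPU share, the more simulated clients the backend can schedule in parallel on the same hardware.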
2020 International Conference on Virtual Reality and Visualization (ICVRV), 2020
Multimedia teaching patterns for piano performance have gained great momentum, benefiting from the rapid development of virtual reality (VR) and augmented reality (AR) technologies in recent years. Considerable effort has been made to achieve high-fidelity, fluent 4D visualization of hand actions. However, it remains challenging to effectively generate demonstration animations for large batches of music, especially for complex pieces requiring professional playing skills. We propose an AR-based individual tutorial system for piano training that automatically generates hand-action animations and displays them over real pianos using head-mounted displays. Given a piece of music as input, our system first predicts the correspondence between target keys and fingers using a Hidden Markov Model. Based on the prediction results, we employ musical prior knowledge to establish a generation mechanism for hand actions that coordinates arm and finger motions, taking naturalness and convenience into consideration.
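As an illustration of the fingering-prediction step, the sketch below decodes a key sequence into finger assignments with a Viterbi pass over an HMM whose hidden states are fingers. The transition and emission matrices are toy values, not the paper's trained model, and the 12-class key encoding is an assumption made for the example.

```python
# Illustrative sketch: HMM fingering prediction via Viterbi decoding.
# Hidden states = fingers (0..4); observations = key classes (toy probabilities).
import numpy as np


def viterbi(obs, start_p, trans_p, emit_p):
    """Return the most likely hidden-state (finger) sequence for `obs`."""
    n_states = trans_p.shape[0]
    T = len(obs)
    log_delta = np.full((T, n_states), -np.inf)
    backptr = np.zeros((T, n_states), dtype=int)

    log_delta[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = log_delta[t - 1] + np.log(trans_p[:, j])
            backptr[t, j] = int(np.argmax(scores))
            log_delta[t, j] = scores[backptr[t, j]] + np.log(emit_p[j, obs[t]])

    # Backtrack from the best final state.
    path = [int(np.argmax(log_delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]


# Toy example: 5 fingers (states), 12 key classes (e.g. pitch mod 12).
rng = np.random.default_rng(0)
start = np.full(5, 1 / 5)
trans = rng.dirichlet(np.ones(5), size=5)   # finger-to-finger transition matrix
emit = rng.dirichlet(np.ones(12), size=5)   # finger-to-key emission matrix
keys = [0, 2, 4, 5, 7]                      # a short ascending phrase
print(viterbi(keys, start, trans, emit))    # most likely finger index per note
```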
Proceedings of the AAAI Conference on Artificial Intelligence, 2020
Recent advances in 3D Convolutional Neural Networks (3D CNNs) have shown promising performance for untrimmed video action detection, employing the popular detection framework that relies heavily on temporal action proposals as the input to the action detector and localization regressor. In practice, the proposals usually exhibit strong intra and inter relations, stemming mainly from the temporal and spatial variations of the video actions. However, most existing 3D CNNs ignore these relations and thus suffer from redundant proposals that degrade detection performance and efficiency. To address this problem, we propose graph attention based proposal 3D ConvNets (AGCN-P-3DCNNs) for video action detection. Specifically, the proposed graph attention is composed of an intra attention based GCN and an inter attention based GCN. We use intra attention to learn the intra long-range dependencies inside each action proposal and update the node matrix of the intra attention based GCN.
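To make the proposal-graph idea concrete, the sketch below applies attention-weighted aggregation over a fully connected graph of proposal features. The layer structure and dimensions are assumptions for illustration, not the AGCN-P-3DCNNs architecture itself.

```python
# Illustrative sketch: attention-based feature update over action proposals.
# Layer names and sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProposalGraphAttention(nn.Module):
    """Update each proposal's feature as an attention-weighted sum over all
    proposals (fully connected proposal graph)."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_proposals, dim) feature matrix for one video's proposals.
        q, k, v = self.query(x), self.key(x), self.value(x)
        attn = F.softmax(q @ k.t() / x.size(-1) ** 0.5, dim=-1)  # (N, N) weights
        return x + attn @ v  # residual update of the proposal node matrix


# Toy usage: 16 temporal proposals with 256-d features.
feats = torch.randn(16, 256)
updated = ProposalGraphAttention(256)(feats)
print(updated.shape)  # torch.Size([16, 256])
```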
Multimedia instrument training has gained great momentum, benefiting from augmented and virtual reality (AR/VR) technologies. We present an AR-based individual training system for piano performance that uses only MIDI data as input. Based on fingerings decided by a pre-trained Hidden Markov Model (HMM), the system employs musical prior knowledge to automatically generate natural-looking 3D animation of hand motion. The generated virtual hand demonstrations are rendered in head-mounted displays and registered with a piano roll. Two user studies we conducted show that the system imposes relatively low cognitive load and may increase learning efficiency and quality.
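A note sequence of the kind this system consumes can be extracted from a MIDI file with the `mido` library, as in the hedged sketch below; the file name is a placeholder, and the downstream fingering step is the HMM described above.

```python
# Sketch (assumptions: the `mido` package is installed; "song.mid" is a placeholder path).
import mido

notes = []
for msg in mido.MidiFile("song.mid"):        # iteration yields merged messages in time order
    if msg.type == "note_on" and msg.velocity > 0:
        notes.append(msg.note)               # MIDI pitch number of each key press

key_classes = [n % 12 for n in notes]        # e.g. pitch classes fed to the fingering HMM
print(key_classes[:16])
```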