Bulletin No. 2, 2024
Perfecting eye-brain-motion coordination

Now, with funding from the RAISe+ Scheme, Professor Liu’s team aims to develop and commercialise technologies and products involving 3D vision-driven robots that realise effective real-time eye-brain-motion coordination for more versatile, safer operations. The robots will be useful in modular construction for measuring modular parts, and in the automotive industry for the automated measurement of parts and batteries, as well as for car inspection. They could also be deployed for robotic grasping in warehouses and other settings.

The team comprises multidisciplinary experts from CUHK’s Department of Mechanical and Automation Engineering, of which Professor Liu is a member, and the Department of Computer Science and Engineering, including Professors Fu Chi-wing and Dou Qi. The funding will give them a boost in transferring the years of knowledge they have built up into real-world applications for Hong Kong and beyond, Professor Liu says.

Existing 3D vision robots have slow visual feedback, making it challenging to perform operations safely, quickly and with high adaptability, he explains. For industrial or service robots to accomplish complex tasks, they need to achieve eye-brain-motion coordination similar to humans’. Current products have a coordination frequency of only 0.5-2Hz; the team’s goal is to develop 3D vision-driven robots with coordination frequencies closer to humans’, reaching 200Hz-1kHz.

“Primarily, we want to improve the artificial intelligence here. This way, robots can perform tasks like sorting and shelf stocking in logistics settings and retail stores,” Professor Liu notes. “For instance, while we already see robots serving meals in restaurants, most of them can only roll back and forth, and do no more, still lacking the versatility needed for more dynamic and complex operations.
I hope they can adapt to more intricate tasks in the future, like helping to clean up tables or taking on simple tasks in the kitchen, with high versatility.”

AI that understands the physical world

Professor Dou, who has been developing computer algorithms and has accumulated substantial experience in robot vision research, explains that traditional robotic vision is mainly semantic, meaning a robot is capable of interpreting visual data in a way that assigns meaningful labels to identify objects and scenes – “an apple” or “a water bottle”, for instance.

[Photo: Professor Liu Yunhui]
[Photo: Professor Liu and Professor Dou Qi (left) with the 3D Vision-Driven Picking Station]