M4E Research #5: February 2024
Conversational Crowdsensing: A Parallel Intelligence Powered Novel Sensing Approach
Abstract: The transition from CPS-based Industry 4.0 to CPSS-based Industry 5.0 brings new requirements and opportunities to current sensing approaches, especially in light of recent progress in chatbots and Large Language Models (LLMs). Accordingly, parallel intelligence-powered Crowdsensing Intelligence (CSI) has been advancing and is now moving towards linguistic intelligence. In this paper, we propose a novel sensing paradigm for Industry 5.0, namely conversational crowdsensing. It can alleviate the workload and professional skill requirements of individuals and promote the organization and operation of a diverse workforce, thereby enabling faster response and wider adoption of crowdsensing systems. Specifically, we design the architecture of conversational crowdsensing to effectively organize three types of participants (biological, robotic, and digital) from diverse communities. Through three levels of effective conversation (i.e., inter-human, human-AI, and inter-AI), complex interactions and service functionalities of different workers can be achieved to accomplish various tasks across three sensing phases (i.e., requesting, scheduling, and executing). Moreover, we explore the foundational technologies for realizing conversational crowdsensing, encompassing LLM-based multi-agent systems, scenarios engineering, and conversational human-AI cooperation. Finally, we present potential industrial applications of conversational crowdsensing and discuss its implications. We envision that conversations in natural language will become the primary communication channel during the crowdsensing process, enabling richer information exchange and cooperative problem-solving among humans, robots, and AI.
Authors: Zhengqiu Zhu, Yong Zhao, Bin Chen, Sihang Qiu, Kai Xu, Quanjun Yin, Jincai Huang, Zhong Liu, Fei-Yue Wang
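The three sensing phases and the inter-AI conversation level described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; all class names, message formats, and the pipeline structure are illustrative assumptions showing how three digital agents might pass natural-language messages through the requesting, scheduling, and executing phases.

```python
from dataclasses import dataclass

# Hypothetical sketch of inter-AI conversation across the paper's three
# sensing phases (requesting, scheduling, executing). Names and message
# formats are assumptions, not the paper's actual design.

@dataclass
class Message:
    sender: str
    content: str

class RequestAgent:
    def handle(self, task: str) -> Message:
        # Phase 1 (requesting): turn a plain-language request into a task message.
        return Message("requester", f"Task request: {task}")

class SchedulerAgent:
    def handle(self, msg: Message, workers: list[str]) -> Message:
        # Phase 2 (scheduling): assign the task to available workers,
        # which may be biological, robotic, or digital participants.
        assignment = ", ".join(workers)
        return Message("scheduler", f"{msg.content} -> assigned to {assignment}")

class ExecutorAgent:
    def handle(self, msg: Message) -> Message:
        # Phase 3 (executing): report completion back along the conversation.
        return Message("executor", f"Completed: {msg.content}")

def run_pipeline(task: str, workers: list[str]) -> list[Message]:
    """Run one task through the three phases, returning the conversation log."""
    log = [RequestAgent().handle(task)]
    log.append(SchedulerAgent().handle(log[-1], workers))
    log.append(ExecutorAgent().handle(log[-1]))
    return log
```

In a full system each `handle` call would be backed by an LLM producing free-form natural language rather than templated strings; the point here is only the conversational hand-off between phases.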
Improving Welding Robotization via Operator Skill Identification, Modeling, and Human-Machine Collaboration: Experimental Protocol Implementation
Abstract: The industry of the future, also known as Industry 5.0, aims to modernize production tools, digitize workshops, and cultivate the invaluable human capital within the company. Industry 5.0 cannot be realized without fostering a workforce that is not only technologically adept but also equipped with enhanced skills and knowledge. Specifically, collaborative robotics plays a key role in automating strenuous or repetitive tasks, freeing human cognitive functions to contribute to quality and innovation. In manual manufacturing, however, some of these tasks remain challenging to automate without sacrificing quality. In certain situations, these tasks require operators to dynamically organize their mental, perceptual, and gestural activities. In other words, they demand skills that are not yet adequately explained and digitally modeled to allow a machine in an industrial context to reproduce them, even approximately. Some tasks in welding serve as a perfect example. Drawing from the knowledge of cognitive and developmental psychology, professional didactics, and collaborative robotics research, our work aims to find a way to digitally model manual manufacturing skills to enhance the automation of tasks that are still challenging to robotize. Using welding as an example, we seek to develop, test, and deploy a methodology transferable to other domains. The purpose of this article is to present the experimental setup used to achieve these objectives.
Authors: Antoine Lénat (CETIM, LS2N, LS2N - équipe RoMas, LS2N - équipe PACCE), Olivier Cheminat (CETIM), Damien Chablat (LS2N, LS2N - équipe RoMas), Camilo Charron (LS2N, UR2)
Learning-enabled Flexible Job-shop Scheduling for Scalable Smart Manufacturing
Abstract: In smart manufacturing systems (SMSs), flexible job-shop scheduling with transportation constraints (FJSPT) is essential for maximizing productivity, accounting for the production flexibility provided by automated guided vehicles (AGVs). Recent deep reinforcement learning (DRL)-based methods for FJSPT face a scale-generalization challenge: they underperform when applied to environments at scales different from their training set, resulting in low-quality solutions. To address this, we introduce a novel graph-based DRL method, named the Heterogeneous Graph Scheduler (HGS). Our method leverages locally extracted relational knowledge among operation, machine, and vehicle nodes for scheduling, with a graph-structured decision-making framework that reduces encoding complexity and enhances scale generalization. Our performance evaluation, conducted on benchmark datasets, shows that the proposed method outperforms traditional dispatching rules, meta-heuristics, and existing DRL-based approaches in terms of makespan, even on large-scale instances that were not encountered during training.
Authors: Sihoon Moon, Sanghoon Lee, Kyung-Joon Park
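To make the FJSPT setting concrete, here is a minimal sketch of its ingredients: operations with precedence within a job, alternative machines with operation-dependent processing times, and a transport delay when a job moves between machines (standing in for AGV travel). A greedy earliest-completion dispatching rule, one of the classical baselines the paper's learned HGS policy is compared against in spirit, computes a makespan. All data and function names are illustrative assumptions, not the paper's benchmark or method.

```python
# Hypothetical FJSPT sketch: a greedy earliest-completion dispatcher,
# a stand-in baseline, not the paper's learned HGS policy.

def greedy_schedule(jobs, proc, transport):
    """
    jobs:      {job_id: [op_id, ...]} operations in precedence order
    proc:      {(op_id, machine_id): duration} alternative machine options
    transport: fixed AGV travel time when a job changes machines
    Returns the makespan under a greedy earliest-completion rule.
    """
    machine_free = {}  # when each machine next becomes idle
    for ops in jobs.values():
        t, last_m = 0, None  # job-ready time and the machine it last visited
        for op in ops:
            best = None
            for (o, m), d in proc.items():
                if o != op:
                    continue
                # AGV transport is needed only when the job changes machines.
                move = transport if (last_m is not None and m != last_m) else 0
                start = max(machine_free.get(m, 0), t + move)
                finish = start + d
                if best is None or finish < best[0]:
                    best = (finish, m)
            t, last_m = best
            machine_free[last_m] = t
    return max(machine_free.values())

# Tiny illustrative instance: two jobs, two machines, unit transport time.
jobs = {"J1": ["O11", "O12"], "J2": ["O21"]}
proc = {("O11", "M1"): 2, ("O11", "M2"): 3,
        ("O12", "M2"): 2,
        ("O21", "M1"): 4, ("O21", "M2"): 5}
```

The scale-generalization problem the paper targets arises because a learned policy replaces this fixed rule: a graph encoding of operation, machine, and vehicle nodes lets the same policy score dispatch decisions on instances far larger than those seen in training.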