Despite rapid technological advancements, robots remain hard to program, especially for dynamic environments. Firstly, the separation of robot and human workspaces imposes an additional financial burden. Secondly, re-programming costs are significant when products change, particularly in Small and Medium-sized Enterprises (SMEs). There is therefore an essential need to reduce the commissioning, installation and programming effort required for robots to perform various tasks while sharing the same space with a human operator. One way to reduce programming effort is Learning from Demonstration (LfD), in which robots learn from humans performing different tasks. The robot must therefore be equipped with cognitive capabilities that facilitate human-robot interaction, so that it can not only learn new skills from its human partner but also improve its performance while reproducing the learned skills.
Humans use their various senses, such as sight, touch and hearing, to learn and perform tasks. One sense that plays a significant role in human activity is ‘touch’, or ‘force’. For example, holding a cup of tea, closing a drawer, opening a water tap, or making precise adjustments while inserting a key all require haptic information to be completed successfully. In all these examples, force- and torque-based sensory data are crucial to the successful completion of the activity. This information also inherently conveys data about contact force, object stiffness, object shape and even surface texture. From a robotics point of view, many researchers have investigated visual systems as a robot’s primary source of information about the surrounding environment. However, in a manufacturing context robots must make physical contact with their surroundings, and visual input falls short at and near contact configurations. Hence, a deep understanding of haptic/force information during physical interaction is required, especially when robots work collaboratively with humans.
By analogy with humans, robots in the future must be able to physically interact with the surrounding environment based on a combination of different sensory data, such as haptic and visual data. On the one hand, visual data give sufficient information for gross motion (no physical contact with the environment). On the other hand, haptic data provide adequate information for fine movement, which involves physical interaction with the surroundings. For effective Human-Robot Interaction, these different sensory data must be treated as communication media through which the human can communicate directly with the robot, using facial expressions, verbal commands, gestures and touch, to perform the intended task successfully. Such sensory data enable the robot to interact independently with its surroundings while taking human actions into account. In the Digitop project, we are examining different sources of sensory data to identify data useful for designing automation solutions, monitoring performance and supporting real-time applications.