Dr Ali Al Yacoub: I'm worried that I will lose the connection.

Siobhan Urquhart: Okay, no worries.

Dr Ali Al Yacoub: Yeah, so I have very bad internet.

Siobhan Urquhart: Okay, well it seems fine so far, so so far so good.

Dr Ali Al Yacoub: Yeah, good. Thank you so much for attending. Today I would like to talk about wearable sensors in the human-robot collaboration context. This work has been done through the DigiTOP project, which is a three-year EPSRC project. The main goal of DigiTOP is to provide support mechanisms and decision support tools for industry to adopt digital manufacturing tools, and two weeks ago, if I'm not mistaken, we released the first DigiTOP tool to do so. So after the webinar, if you would like to, visit our website and you can check the tool. Now I will start with my presentation for today. As you can see from the title, I will talk about human-robot collaboration as one of the new kinds of manufacturing equipment.

Dr Ali Al Yacoub: So, the outline of my presentation: I will speak a bit about the motivation, why we need human-robot collaboration and what's missing, and why we can use wearable sensors to overcome those limitations. I will talk about the proposed framework we have and our GitHub, then go through an illustrative example, a simple example showing how we can integrate wearable sensors with robots. Then I will talk about human-human co-manipulation and human-robot co-manipulation as an example. Finally, I will go through the conclusions and future work.

Dr Ali Al Yacoub: So nowadays there are more and more collaborative robots, and surprisingly, even given the current situation with COVID-19, the uptake of collaborative robots in industry is still very limited, which indicates that there is a need to improve this kind of technology.

So the main problem is to improve the communication between human and robot, and the intuitive ways to communicate are usually speech, gestures, the human body posture, and facial expressions.

But there are also important physical interactions when we are talking about human-robot collaboration, especially if you are at a very close distance from the robot and you would like to do activities like handing over or co-manipulation. Then the intuitive ways of communication show some limitations, so we need more, especially if the human and the robot are in physical contact with each other: you don't want the robot to suddenly move and possibly cause injury to the human.

Finally, human-human collaboration gives us ideas about how to do human-robot collaboration, such as understanding how two humans move a big piece of furniture around in a structured environment.

So, as I mentioned before, if we take human-robot collaboration as an example, we see in this slide that the existing support frameworks focus only on the robot. Here you can find all the sensory data about what the robot is doing: force and torque sensors, the motion, speed, acceleration, even the temperature of the motors, and so on and so forth.

On the other hand, especially when we are talking about cobots, you don't know much about the human. As an example, ROS, the Robot Operating System, has very good supporting mechanisms for the robot; for the human, we don't have the same level of support.

So, to improve that, we need to include the human data.

So what we are proposing here, as I showed in the previous slide, is a framework that takes information about the human's activities, whether physical or, in some cases, psychological, and tries to determine the human state: is he reaching muscle fatigue, for example, is he stressed about the work, has he already reached the maximum mental workload threshold? If we can identify those, we can program the robot to react accordingly, and we believe that with such a framework we can provide indications on how to answer some of the open problems in human-robot collaboration, for example trying to understand how the human behaves in certain situations in a human-robot collaboration scenario.
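A purely illustrative sketch of the decision layer such a framework implies: hypothetical human-state scores (fatigue, stress, mental workload) are thresholded and mapped to a robot reaction. All names, thresholds, and reactions below are assumptions for illustration, not part of the DigiTOP framework.

```python
# Illustrative only: map estimated human-state scores (each assumed to be
# normalized to 0..1) onto a robot reaction. The state names, threshold
# values, and reaction labels are hypothetical, not from the framework.

def choose_reaction(state):
    """Pick a robot reaction from estimated human-state scores in 0..1."""
    if state.get("muscle_fatigue", 0.0) > 0.8:
        return "slow_down"          # let the robot take more of the load
    if state.get("stress", 0.0) > 0.7:
        return "increase_distance"  # back off to reassure the operator
    if state.get("mental_workload", 0.0) > 0.9:
        return "pause_task"         # operator is saturated: hold position
    return "continue"               # nothing abnormal detected
```

In a real system the thresholds would come from calibration per operator, and the reactions would be commands to the robot controller rather than strings.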

We can also provide evaluation of a human-robot collaboration setup, because up till now you don't know how to evaluate such a setup. Classically, when you have a fixed automation solution, you measure your effectiveness by working out how much energy you need, what the cost of the sensors is, what the cost of running the machine at this speed is, and how many products you will get, and from that you can work out how useful it is and evaluate the setup.

But in human-robot collaboration, how can I decide that? If the robot is working next to a human, according to the standards, you now need to limit the speed.

So how can I evaluate the collaboration, how can I say whether this is a good collaboration or not? The third question is how safe it is, and using wearable sensors can help us a lot in indicating any abnormal activity in the setup, and it might be much faster than detecting a dangerous activity the conventional way.

A human might pick up on something going wrong much faster than a machine in some scenarios. So, in order to provide this framework, we need a set of sensors, and this is an example. The idea here is to provide tools equivalent to what ROS is providing for robots: we would like to have the same support for the human. Those wearable sensors can vary from brain activity (EEG signals) and muscle activity (EMG) to facial temperature; specifically, people sometimes talk about nose temperature and try to work out some psychological effects from it.

The idea is to map those wearable sensor readings onto some psychological state of the human, and if we do that, we can then program the robot to react accordingly. As examples: the cardio signal can indicate stress, mental effort, and many other things; the brainwaves can indicate drowsy, relaxed, active, and focused states, and you can also pick up abnormal situations using the brainwaves; and there is the nose temperature, which, it is sometimes funny how you can measure that, but you can use a thermal camera to do such a thing.

The nose temperature can indicate mental workload. But it's not only about measuring those physiological states; you can also measure the physical activity of the human while he is doing a co-manipulation with the robot. We need to know how much force is going through this activity, and if the human is grasping an object firmly together with the robot, this might indicate that the robot is not reacting correctly and we need to modify or improve the robot's reaction.

Again, as an example, these are the sensors we have integrated in our setup. We use the MyoWare muscle activity sensor; we use the Muse headband for brain activity, which is a commercial sensor with good accuracy, at least in the literature they say it has 95 percent accuracy in comparison with more sophisticated devices; we have a cardio sensor which clips to the earlobe; and here on the helmet we have an IMU to measure head movement. The MyoWare units here, the little black and blue boxes, measure muscle activity for the left arm and forearm and the right arm and forearm, and we have a thermocouple for the nose temperature. All of these communicate with a mobile data acquisition box, which is basically a Raspberry Pi, and through Wi-Fi we connect this to a stationary workstation.
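The data-acquisition box described above can be sketched as follows: each wearable-sensor reading is wrapped in a small timestamped JSON packet on the Raspberry Pi and streamed over the network to the workstation. This is an assumption-laden illustration (the packet layout, port number, and sensor names are invented, and the actual system publishes over ROS rather than raw UDP).

```python
import json
import socket
import time

# Hypothetical sketch of the mobile data-acquisition box: the Pi wraps
# each sensor reading in a timestamped JSON packet and streams it over
# UDP. Sensor names, packet fields, and the port are illustrative only.

def make_packet(sensor, value):
    """Timestamped reading from one wearable sensor, as UTF-8 JSON."""
    return json.dumps({
        "sensor": sensor,      # e.g. "emg_left_forearm", "nose_temp"
        "value": value,
        "stamp": time.time(),  # acquisition time on the Pi
    }).encode("utf-8")

if __name__ == "__main__":
    # Loopback demo: the "workstation" listens on localhost.
    addr = ("127.0.0.1", 9870)
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.settimeout(2.0)
    rx.bind(addr)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(make_packet("emg_left_forearm", 0.42), addr)
    data, _ = rx.recvfrom(4096)
    print(json.loads(data)["sensor"])
    tx.close()
    rx.close()
```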

To make those sensors work together we need software, and the software we have developed is ROS-based. It's a ROS package which is basically a set of customized messages and topics, like a carrier that carries the data from the different sensors and publishes them on the ROS network. If you are not familiar with ROS, you can basically think of it as a network of publishers and subscribers, and each sensor here is a publisher which publishes a certain message at a different frequency. And because we are using ROS, we have a lot of support.

It's open source and there are a lot of mechanisms you can use. One of the interesting things is that you can synchronize different messages coming from different sensors, and then use these synchronized messages to control the robot. There are a lot of restrictions and limitations to that, but it's possible. The software we have developed is available on GitHub: you can find it if you search for Intelligent Automation Centre Blue Box, or if you go to the DigiTOP website you can find a link to our GitHub.
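What that synchronization mechanism does can be sketched without ROS. The helper below greedily pairs messages from two timestamped streams whose stamps differ by at most a tolerance ("slop"), similar in spirit to the `ApproximateTimeSynchronizer` in ROS `message_filters`; it is a simplified stand-in, not the ROS implementation.

```python
# Simplified approximate-time synchronization of two sensor streams.
# Each stream is a list of (timestamp, value) pairs in increasing time
# order; we pair messages whose stamps differ by at most `slop` seconds.

def synchronize(stream_a, stream_b, slop=0.01):
    """Greedily pair messages from two timestamped streams."""
    pairs = []
    j = 0
    for t_a, v_a in stream_a:
        # Advance in stream_b to the timestamp closest to t_a.
        while (j + 1 < len(stream_b)
               and abs(stream_b[j + 1][0] - t_a) <= abs(stream_b[j][0] - t_a)):
            j += 1
        t_b, v_b = stream_b[j]
        if abs(t_b - t_a) <= slop:
            pairs.append((t_a, v_a, v_b))
    return pairs
```

This also shows why the synchronized output rate is lower than the fastest input: messages without a close-enough partner in the other stream are simply dropped.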

It's an initial effort; it still needs a lot of refinement and improvement, which I'm trying to do in the near future. But if you think it's useful, feel free to send me comments and I will try to accommodate any feedback.

As an illustrative example to show how the things I mentioned before work together with an industrial robot, we have this setup: a human operator wearing the wearable sensors, the muscle activity sensors on the left and right arm, and we integrate that with a six-axis, six-degrees-of-freedom collaborative industrial robot. We have this simple logic, and there is no scientific contribution in it, but it can be replaced with more sophisticated control approaches like reinforcement learning: if the human activates the right arm, the robot will move five centimeters in the positive y direction; if he activates the left arm, it will go in the opposite direction.
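The "simple logic" described above can be written in a few lines. This is a minimal sketch under stated assumptions: the EMG signals are taken to be normalized to 0..1, and the threshold value is invented for illustration.

```python
# Minimal sketch of the demo logic: right-arm activation jogs the robot
# +5 cm in Y, left-arm activation jogs it -5 cm. The threshold and the
# normalized EMG inputs are assumptions for illustration.

STEP_M = 0.05     # 5 cm jog per activation
THRESHOLD = 0.6   # activation level on a normalized 0..1 EMG signal

def emg_to_y_command(emg_left, emg_right):
    """Map normalized left/right arm EMG activity to a Y displacement (m)."""
    if emg_right > THRESHOLD and emg_right >= emg_left:
        return +STEP_M   # jog in positive Y
    if emg_left > THRESHOLD:
        return -STEP_M   # jog in negative Y
    return 0.0           # no confident activation: do not move
```

In the actual demo this decision would be sent to the robot controller as a relative motion command; here it just returns the displacement.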

This is the video of the setup and experiment. Again, there is no massive scientific contribution; it's only to show how things work together, and I hope it gives you an idea. This video is also available on our website. The messages which come through the provided packages can be seen here as a list of sensory data, and what I would like to show you is that we have the force-torque sensor data coming from the robot. Originally it's at 100 Hz, but when we synchronize it with the muscle activity it drops to almost 90 Hz, which is not great, but sometimes, especially with noisy data like the force-torque sensor, the synchronization works like a low-pass filter.

Now, in the second part, I would like to talk about human-human co-manipulation, and this is a continuation of my PhD. During my PhD we did a co-manipulation between two humans, and we collected positional data from a Vicon system and force-torque data using the setup on the left side, and then we tried to use data-driven approaches to understand, or map, the displacement of the object from the force cues coming from the leader. The human on the right side is leading the co-manipulation; the human on the left side is a follower. As a result, to validate how accurate the algorithm was, we did a co-assembly task, but with a massive clearance, because this is purely force-based: there is no vision, and it did not have any other algorithms to react to external forces. And this was the result.

The interesting bit about this, and you will see it at the end of the video, is that once the insertion happens and the operator removes his hand, the robot moves due to the additional contact with the object, and this is something not desired. If you are working with the robot and you guide it to do a task and then release the robot arm, you don't want the robot to react to any other external forces. What happened here is that after the operator released his hand, the robot moved back and almost broke the parts, and this is where using wearable sensors can be useful.

If we have muscle activity sensors that indicate the human is not doing anything right now, then the robot should not react to this. Another example, which I hope will be obvious, is also part of the same experiment: when the human is trying to guide the robot to do a certain task, if you have any external load, like now, the robot will still move with this load. You can think about it as a dynamic load which has been added to the robot, but the robot is still reacting to it, which is not desirable, because it means the robot might damage the part or might cause injury to the human.

So, as part of DigiTOP, we have collected data in a different way this time. The concept of human-human co-manipulation can be mathematically represented as a mass-spring-damper system from both sides, and then it's a simple mathematical problem in 3D. But the problem here is that we don't have much information about how much force comes from the follower and how much from the leader, so we designed a setup where we have a force-torque sensor, we have a load, and we have a follower wearing the muscle activity sensors, and the leader holds the force-torque sensor and guides the load through an unstructured environment.
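The mass-spring-damper abstraction mentioned above can be sketched in one dimension: the carried object is a mass coupled through a spring and damper, and the applied force drives the displacement. The parameter values below are illustrative, not from the study.

```python
# Toy 1-D mass-spring-damper: integrate m*x'' + c*x' + k*x = f(t) with
# semi-implicit Euler. Parameters m, k, c are illustrative placeholders.

def simulate_msd(force, m=2.0, k=50.0, c=10.0, dt=0.001):
    """Return the displacement trace for a sampled force input (N)."""
    x, v = 0.0, 0.0
    xs = []
    for f in force:
        a = (f - c * v - k * x) / m   # Newton's second law
        v += a * dt                    # update velocity first (stable)
        x += v * dt                    # then position
        xs.append(x)
    return xs
```

A quick sanity check: under a constant force f, the displacement should settle at the static deflection f/k, e.g. 5 N against k = 50 N/m settles near 0.1 m.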

We have only one obstacle in the middle, and the two humans are trying to avoid it through guidance coming from the leader. The data have been collected and analyzed; we removed the noisy data, tried to filter it, and it's available on this website, in the university repository, and anyone can use the data if it's useful for your research. Then we tried to come up with a concrete mathematical definition of the problem: as input we have force and EMG signals, and as output we have a displacement in x, y, z (we didn't consider the rotation of the object). Our problem in this case is a mapping of these inputs onto those outputs.

For computer science, that's a regression problem, so we tried different data-driven approaches to extract this mapping, and we compared them with a mathematical model. We used sets of features: feature set 1 is basically the normalized force-torque data, the normalized EMG, and the historical data, that is, the previous displacement of the object, because this is a temporal problem; it depends on how much displacement you have done previously, so we thought it would be a good idea to include that. Feature set 2 is the same without the EMG; the third set uses the unnormalized force-torque and EMG plus the previous displacement; and the fourth set is normalized force-torque, EMG, and the previous displacement. As data-driven approaches we used linear regression, one of the simplest ways to try to find the mapping.
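The feature sets described above can be sketched as a small feature-builder: force/torque channels, EMG channels, and the previous displacement are concatenated into one input vector, with optional per-channel normalization. The normalization scheme (dividing by an assumed per-channel maximum) and channel counts are assumptions for illustration.

```python
# Sketch of assembling one regression input vector from the three signal
# groups in the talk. Channel counts and the max-value normalization
# scheme are assumptions, not the study's exact preprocessing.

def normalize(channels, max_vals):
    """Scale each channel by its assumed maximum absolute value."""
    return [c / m for c, m in zip(channels, max_vals)]

def make_features(ft, emg, prev_disp, ft_max, emg_max,
                  use_emg=True, norm=True):
    """Build one input vector: (normalized) FT [+ EMG] + previous xyz step."""
    feats = normalize(ft, ft_max) if norm else list(ft)
    if use_emg:                       # feature set 2 omits the EMG block
        feats += normalize(emg, emg_max) if norm else list(emg)
    feats += list(prev_disp)          # temporal context: last displacement
    return feats
```

With six force-torque channels, four EMG channels, and a 3-D previous displacement, this gives a 13-element vector (9 without EMG).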

We know it's not suitable, but it's still an easy way to do it. We also used random forest, where the concept is also simple but it's a very powerful mechanism, gradient boosted trees, and then a recurrent neural network. And, again, I think we came across this slide already; this is my mistake, I'm sorry. For the collected data, we had almost 5,000 data points and five trials; the obstacle was in the middle here, and the two humans were trying to avoid it using this trajectory. This data was collected just before the lockdown, so sadly we couldn't carry on with the data collection, but we thought we could carry on with the other work, and hopefully soon we will be able to do the rest of the data collection.

So, briefly, the results of what we have done: as you can see, some of the data-driven approaches do not make a big difference. This is the RMSE in centimeters for the linear regression model, random forest, boosted trees, and the recurrent neural network. As you can see, the recurrent neural network has the best performance with feature set 1, and then you have the random forest, boosted trees, and, as expected, the linear regression model. But we thought that, in comparison with the mathematical model, and I know this is kind of a different research direction, we could combine the recurrent neural network with the mathematical model in something called a hybrid approach, where we combine the mathematical model and the data-driven model to see if this would be of any use, especially to improve the mathematical model's performance.

The mathematical model's error is almost 1.7 meters, which is massive, and you can see that for all the data-driven models the error is less than 0.25 centimeters. The hybrid model, where we use the data-driven model to model the error of the mathematical model, was also not bad in comparison with the mathematical model. I have tried to run quickly through those approaches and the hybrid one, so if you have any questions about this, please let me know at the end of the presentation.
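The hybrid idea above can be sketched in miniature: keep the mathematical model, and let a data-driven model learn only its residual error, so the combined prediction is `math_model(x) + residual_model(x)`. The toy one-dimensional least-squares fit below stands in for the real spring-damper model and recurrent network.

```python
# Toy hybrid model: a data-driven component learns the residual of a
# (deliberately wrong) mathematical model. The 1-D linear fit here is a
# placeholder for the study's recurrent neural network.

def fit_residual(xs, ys, math_model):
    """Fit a least-squares line to the mathematical model's errors."""
    res = [y - math_model(x) for x, y in zip(xs, ys)]
    n = len(xs)
    mx = sum(xs) / n
    mr = sum(res) / n
    denom = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (r - mr) for x, r in zip(xs, res)) / denom
    bias = mr - slope * mx
    return lambda x: slope * x + bias

def hybrid(math_model, residual_model):
    """Combined predictor: physics prediction plus learned correction."""
    return lambda x: math_model(x) + residual_model(x)
```

If the true relation is y = 3x + 1 and the physics model predicts 2x, the learned residual is x + 1, and the hybrid recovers the truth exactly; with noisy real data it merely reduces, rather than eliminates, the model error.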

Finally, I would like to talk about the conclusions and future work. In general, what we have seen is that there are still open questions about human-robot collaboration setups, and we believe that wearable sensors, and trying to extract the human's physical and psychological state, will help to improve human-robot collaboration. The problem is that in the literature people have done research on one kind of sensory data and integrated it, and that's good, but we still lack a more holistic representation of what we need in human-robot collaboration setups: what data will be useful and what data will not be. Because of this, we have presented our initial framework to support this effort, and in the future we would like to improve it by including more sophisticated control methods like reinforcement learning and interactive reinforcement learning. Also, our tool lacks visualization features, and we would like to extend that in the near future.

Thank you so much; I hope I didn't go through things too quickly. If you have any questions, please feel free to ask.

Presentation attendee: Yes, Ali, thank you for that presentation. I have a question. It's very interesting; particularly, I was thinking about the manipulation task. It sounded to me like a very good option for including a glove with force sensors embedded in it, or something like that.

Dr Ali Al Yacoub: Yeah, there are...

Presentation attendee: That sounds like... well, certainly for that problem, another option is just to use some form of motion tracking, and once you know the outline of the robot you can kind of estimate whether there's contact with the human or not.

Dr Ali Al Yacoub: Yeah, there is a balance. Sorry, the gloves: there are already people doing research on that and integrating it with robots. We tried to get one of those gloves, but up to now we haven't been successful in our center, though I honestly hope I will be able to include that. The vision: yes, vision can provide some information about the current situation. However, and we have experienced this ourselves, sometimes if the human is too close to the robot, with a vision system it's confusing to know whether the human is still in touch with the robot or not, and you still need a kind of contact-sensing equipment to indicate whether the human is still in contact with the robot.

Presentation attendee: Yes, I would assume that the video would be more of a rough kind of proximity sensor for body parts, rather than, of course, it's not sensing the force or anything, so yeah.

Dr Ali Al Yacoub: Yeah, it depends on how many cameras you would like to integrate into your system and how much computational power you have; those factors also contribute a lot to answering what the suitable setup for human-robot collaboration is. And this is one of the open questions in human-robot collaboration: how much would I like to invest in it? A force-torque sensor and muscle activity sensors are a very cheap solution in comparison with a twelve-camera system with a set of GPUs and learning running in the background trying to extract the body posture of the human, like OpenPose software. So I agree it's useful, but in close-contact situations it still doesn't give you the information.

Presentation attendee: Yeah, I guess it's about whether the application needs to be generalizable or not, right? If it's a very constrained task like what you were describing, then of course you want to search for the sensors that are as specific as possible to the task being asked. But if, in the future, the manufacturing floor has robots performing multiple tasks, then it's likely that a lot of different options need to be available, and there also needs to be adaptability based on the task. And sorry, just to follow up on that, the question as well is what approach we take, right? Do we just keep adding sensors? Because your example used, I think, four EMG sensors, but we could add many more. Or do we try to improve the computational side of things, or both? That's still a question. I kind of reiterate the point you were making at the end: at some point there's going to be a bit of a bottleneck, I think, in terms of data bandwidth; more data is not always better.

Dr Ali Al Yacoub: Yeah, I totally agree, but in order to decide what data we need, we first need to combine all of them, study how they interact with each other, find the relative mapping, and then remove the sensory data which does not improve the setup. But first we need to do that.

Presentation attendee: I think I would argue you could make some educated guesses. A comment: I'm not an engineer myself; I come at this from the angle of the human, as a human movement scientist. So I would argue that data about humans can probably already guide you a little bit on what types of sensors those are, whether it's kinematics or the forces generated, and how useful those bits of information are to convey what the human is actually doing.

Dr Ali Al Yacoub: There is, and I think you might know about this more than me, something called error-related potential, where basically your brain can indicate if something is going wrong in your environment, and if you can pick that up, then you can use this signal to teach the robot to do things and take corrective action based on it. So yeah, I think using the human in such a setup is very useful, but there are a lot of limitations, so hopefully in the future we will be able to...

Presentation attendee: Well, we don't know everything about the human yet, so that's something that keeps me in work.

Siobhan Urquhart: Okay, do we have any more questions for Ali?

Presentation attendee: Oh, hi, I have a question. It kind of follows from the previous discussion, and although it's not my area, I'm interested whether you have considered at any point, or have plans to consider, human acceptance of wearing all of these sensors, and whether you have any ideas about what to do about this in this research topic.

Dr Ali Al Yacoub: So, yes. Sorry, I forgot to mention that we are working with Nottingham, Bristol, and Cranfield, and I think at Cranfield, if I'm not mistaken, they are doing the technology acceptance studies. It's not my background, so I don't want to talk about it, but yes, it is definitely relevant, and DigiTOP as a whole is looking at those things. I'm focusing on this kind of application, but it's within our interest.

Presentation attendee: Oh, thank you very much.

Dr Ali Al Yacoub: You're welcome.

Siobhan Urquhart: Okay, so thank you, Yulia and Joost, for your questions. If anybody has anything else, please do shout; otherwise I'd like to say thank you to everyone for attending today. It's been really nice, and it's been great to hear about everything from you, Ali, as well. Thank you. We are going to put this recording on our DigiTOP website, so if anyone would like to use it and access it, that's absolutely fine. We also encourage you to have a look at the DigiTOP toolkit, which is in its very first stage of development. There is some information on there, and there will be a further phase next year, where much more information will be released from all the studies everyone has been doing in the team. If you have anything you want to speak to the team about, then please do get in touch with myself, and I can always pass any messages on and make those connections with you as well. So thank you very much, everyone, and I hope you enjoy the rest of your day.

Multiple persons: Thank you very much, thank you, take care, bye, thank you, bye, thank you.