Digital Manufacturing Webinar Series
Nov 2020 - Jan 2021
The series is aimed at anyone who is looking to learn more about digital manufacturing technologies, whether for business or academic research.
Video webinar: Wearable Sensors in a Human-Robot Collaboration Context. Dr Ali Al-Yacoub (Loughborough University)
Transcript: Wearable Sensors in a Human-Robot Collaboration Context
Dr Ali Al Yacoub: I'm worried that I will lose the connection.
Siobhan Urquhart: Okay, no worries.
Dr Ali Al Yacoub: Yeah, I have very bad internet.
Siobhan Urquhart: Okay, well it seems okay so far, so so far so good.
Dr Ali Al Yacoub: Thank you so much for attending. Today I would like to talk about wearable sensors in a human-robot collaboration context. This work has been done through the DigiTOP project, which is a three-year EPSRC project. The main goal of DigiTOP is to provide support mechanisms and decision support tools for industries adopting digital manufacturing technologies, and two weeks ago, if I'm not mistaken, we released the first DigiTOP tool to do so. So after the webinar, if you would like to, visit our website and you can check the tool. Now I will start with my presentation for today. As the title says, I will talk about the human-robot collaboration context as one of the new types of manufacturing equipment.
Dr Ali Al Yacoub: The outline of my presentation: I will speak a bit about the motivation, why we need human-robot collaboration and what is missing, and why we can use wearable sensors to overcome those limitations. I will talk about the proposed framework and our GitHub repository, then go through an illustrative example showing how we can integrate wearable sensors with robots. After that I will talk about human-human manipulation and human-robot co-manipulation as an example. Finally, I will go through the conclusions and future work.
So nowadays there are more and more collaborative robots, and surprisingly, even given the current situation with COVID-19, cobots are one of the few areas that has not really been set back by COVID-19, which indicates that there is a need to improve this kind of technology.
The main problem is to improve the communication between human and robot, and the intuitive ways to communicate are usually speech, gestures, body posture and facial expressions.
But there are also important physical aspects when we are talking about human-robot collaboration, especially if you are at a very close distance from the robot and you would like to do activities like handing over an object or co-manipulation. In those cases the intuitive ways of communication show some limitations, so we need more, especially if the human and robot are in physical contact with each other: you don't want the robot suddenly to move and possibly cause injury to the human.
Finally, human-human collaboration gives us ideas about how to do human-robot collaboration, such as understanding how two humans move a big piece of furniture around in a structured environment.
As I mentioned before, if we take human-robot collaboration as an example, we see in this slide that the existing support frameworks focus only on the robot. You can find all kinds of sensory data about what the robot is doing: force and torque sensors, motion, speed, acceleration, even the temperature of the motors, and so on and so forth.
On the other hand, especially when we are talking about cobots, you don't know much about the human. As an example, ROS, the Robot Operating System, has very good supporting mechanisms for the robot; we don't have the same level of support for the human.
So, to improve on that, we need to include the human data.
So what we are proposing here, as I showed in the previous slide, is a framework that takes the information from the human's activities, whether physical or psychological, and tries to determine the human's state: are they reaching muscle fatigue, for example; are they stressed about the work; have they already reached the maximum mental workload threshold? If we can identify those states, we can program the robot to react accordingly. We believe that such a framework can provide indications of how to answer one of the open problems in human-robot collaboration: trying to understand how the human behaves under certain situations in a human-robot collaboration scenario.
We can also provide evaluation for the human-robot collaboration setup, because up until now you don't know how to evaluate such a setup. Classically, when you have a fixed automation solution, you will measure its effectiveness by working out how much energy you need, the cost of the sensors, the cost of running the machine at a given speed, and how many products you will get, and from that you can evaluate the setup.
But in human-robot collaboration, how can I decide whether the robot is working well next to a human? According to the standards, you now need to limit the speed.
So how can I evaluate the collaboration; how can I say whether this is a good collaboration or not? The third question is how safe it is. Using wearable sensors can help us a lot in indicating any abnormal activity in the setup, and it might be much faster than detecting a dangerous activity the conventional way.
A human might pick up that something is going wrong much faster than a machine in some scenarios. In order to provide this framework we need a set of sensors, and this is an example. The idea here is to provide tools equivalent to what ROS is providing for robots: we would like to have the same level of support for the human. Those wearable sensors can vary from EEG signals for brain activity, to EMG for muscle activity, to facial temperature; specifically, people sometimes talk about nose temperature and try to work out psychological effects from it.
The idea is to map the readings from those wearable sensors to some psychological state of the human, and if we can do that, we can then program the robot to react accordingly. For example, the cardio signal can indicate stress, mental effort and many other states; the brainwaves can indicate drowsy, relaxed and focused states, and you can also pick up abnormal situations using the brainwaves. The nose temperature, which it is sometimes funny to think you can measure, can be captured with a thermal camera.
The nose temperature can indicate mental workload. But it is not only about measuring those physiological states; you can also measure the physical activity of the human while they are doing a co-manipulation with the robot. We need to know how much force is going through this activity, and if the human is grasping an object firmly together with the robot, this might indicate that the robot is not reacting correctly and we need to modify or improve the robot's reaction.
Again, as an example, these are the sensors we have integrated in our setup. We use MyoWare muscle activity sensors; we use the Muse headband for brain activity, which is a commercial sensor with good accuracy (at least in the literature they say it has 95 percent accuracy in comparison with more sophisticated devices); and we have a cardio sensor which goes on the earlobe. Here on the helmet we have an IMU to measure head movement, and the little black and blue boxes are the MyoWare muscle activity sensors for the left arm and forearm and the right arm and forearm. We also have a thermocouple for the nose temperature. All of these communicate with a mobile data acquisition box, which is basically a Raspberry Pi, and through Wi-Fi we connect this to a stationary workstation.
To make those sensors work together we need software, and the software we have developed is ROS-based. It is a ROS package with customized messages and topics, like a carrier that carries the topics from the different sensors and publishes them on the ROS network. If you are not familiar with ROS, you can basically think of it as a network of publishers and subscribers, and each sensor here is a publisher which publishes a certain message at a different frequency. And because we are using ROS, there is a lot of support available.
It is open source and there are a lot of mechanisms that you can use. One of the interesting things is that you can synchronize different messages coming from different sensors, and then use these synchronized messages to control the robot; there are restrictions and limitations to that, but it is possible. The software we have developed is available on GitHub: you can find it if you search for Intelligent Automation Centre Blue Box, or if you go to the DigiTOP website you can find a link to our GitHub.
It is an initial effort and it still needs a lot of refinement and improvement, which I am trying to do in the near future, but if you think it is useful, feel free to send me comments and I will try to accommodate any feedback.
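[Editor's note: to make the publisher/subscriber and synchronization idea concrete, here is a minimal pure-Python sketch. It is not the actual ROS API (in real ROS you would use `rospy` publishers and `message_filters.ApproximateTimeSynchronizer`); the class and topic names are illustrative.]

```python
from collections import deque

class Topic:
    """Minimal stand-in for a ROS topic: publishers push timestamped
    messages, subscribers receive them through callbacks."""
    def __init__(self, name):
        self.name = name
        self.callbacks = []
    def subscribe(self, cb):
        self.callbacks.append(cb)
    def publish(self, stamp, data):
        for cb in self.callbacks:
            cb(stamp, data)

class ApproxSynchronizer:
    """Pairs messages from two topics whose timestamps differ by less
    than `slop` seconds, mimicking the idea behind ROS message
    synchronization of streams running at different rates."""
    def __init__(self, topic_a, topic_b, slop=0.01):
        self.slop = slop
        self.buf_a, self.buf_b = deque(), deque()
        self.pairs = []  # synchronized (sensor_a, sensor_b) tuples
        topic_a.subscribe(lambda stamp, data: self._recv(self.buf_a, stamp, data))
        topic_b.subscribe(lambda stamp, data: self._recv(self.buf_b, stamp, data))
    def _recv(self, buf, stamp, data):
        buf.append((stamp, data))
        self._match()
    def _match(self):
        while self.buf_a and self.buf_b:
            ta, _ = self.buf_a[0]
            tb, _ = self.buf_b[0]
            if abs(ta - tb) <= self.slop:
                self.pairs.append((self.buf_a.popleft()[1],
                                   self.buf_b.popleft()[1]))
            elif ta < tb:   # drop the older unmatched message
                self.buf_a.popleft()
            else:
                self.buf_b.popleft()

# e.g. force-torque at ~100 Hz paired with EMG samples arriving slightly later
ft, emg = Topic("ft"), Topic("emg")
sync = ApproxSynchronizer(ft, emg, slop=0.005)
ft.publish(0.000, "ft0"); emg.publish(0.000, "emg0")
ft.publish(0.010, "ft1"); emg.publish(0.011, "emg1")
```

Because unmatched samples are dropped, the synchronized stream runs a little slower than the fastest sensor, which is consistent with the 100 Hz force-torque data dropping to roughly 90 Hz after synchronization as described below.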
As an illustrative example to show how the things I mentioned before work together with an industrial robot, we have this setup: a human operator wearing the wearable sensors, with the muscle activity sensors on the left and right arm, integrated with a six-axis, six-degrees-of-freedom industrial robot. We use simple logic here, and there is no scientific contribution in that; it could be replaced with more sophisticated control approaches like reinforcement learning. If the human activates the right arm, the robot moves five centimetres in the positive y direction; if he activates the left arm, it goes in the opposite direction.
This is the video of the setup and experiment. Again, there is no massive scientific contribution; it is only to show how things work together, and I hope it gives you an idea. This video is also available on our website. The messages which come through the provided packages can be seen here as a list of sensory data. What I would like to show you is that we have the force-torque sensor data coming from the robot, originally at 100 Hz, but when we synchronize it with the muscle activity it drops to almost 90 Hz, which is not great; but sometimes, especially with noisy data like the force-torque sensor, this works like a low-pass filter.
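[Editor's note: the simple logic described above can be sketched in a few lines. The threshold value below is illustrative, not from the talk.]

```python
def arm_command(left_emg, right_emg, threshold=0.5, step_m=0.05):
    """Map muscle-activity readings to a Cartesian y-offset command.

    Mirrors the demo's logic: right-arm activation moves the robot
    +5 cm in y, left-arm activation moves it -5 cm, otherwise hold.
    The activation threshold is a hypothetical value for illustration.
    """
    if right_emg >= threshold and left_emg < threshold:
        return +step_m
    if left_emg >= threshold and right_emg < threshold:
        return -step_m
    return 0.0  # both or neither arm active: do not move
```

A more sophisticated controller (e.g. one learned by reinforcement learning, as suggested in the talk) would replace this function while keeping the same sensor-to-command interface.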
Now, in the second part, I would like to talk about human-human co-manipulation, and this is a continuation of my PhD. During my PhD we did a co-manipulation between two humans and tried to collect data: positional data from a Vicon system, and force-torque data using the setup on the left side. We then tried to use data-driven approaches to understand or map the displacement of the object to the force cues coming from the leader; the human on the right side is leading the co-manipulation, and the human on the left side is a follower. As a result, we wanted to validate how accurate the algorithm was, so we did a co-assembly task, but with a massive clearance, because this is purely force-based: there is no vision, and it did not have any other algorithms to react to external forces. And this was the result.
The interesting bit about this, and you will see it at the end of the video, is that once the insertion happens and the operator removes his hand, the robot moves due to the additional contact with the object. This is not desired: if you are working with the robot, guiding it to do a task, and you release the robot arm, you don't want the robot to react to any other external forces. What happened here is that after the operator released his hand, the robot moved back and almost broke the parts, and this is where using wearable sensors can be useful.
If we have muscle activity sensors that indicate the human is not doing anything right now, then the robot should not react. Another example, which I hope will be obvious, is also part of the same experiment: when the human is trying to guide the robot to do a certain task and there is an external load, like now, the robot will still move with this load. You can think about it as a dynamic load which has been added to the robot, but the robot is still reacting to it, which is not desirable, because it means the robot might damage the part or cause injury to the human.
So, as part of DigiTOP, we have collected data in a different way this time. The concept of human-human co-manipulation can be mathematically represented as a mass-spring-damper system on both sides, and then it becomes a simple mathematical problem in 3D. The problem is that we don't have much information on how much of the force comes from the follower and how much from the leader, so we designed a setup where we have a force-torque sensor, a load, and a follower wearing the muscle activity sensors; the leader holds the force-torque sensor and guides the load through an unstructured environment.
We have only one obstacle in the middle, and the two humans try to avoid it through guidance coming from the leader. The data has been collected and analyzed; we removed the noisy data, filtered the rest, and it is available on the university repository, so anyone can use the data if it is useful for your research. We then tried to come up with a concrete mathematical definition of the problem: as input we have force and EMG signals, and as output we have a displacement in x, y, z (we didn't consider the rotation of the object). Our problem in this case is a mapping from those inputs to those outputs, which, in computer science terms, is a regression problem.
So we tried different data-driven approaches to extract this mapping, and we compared them with a mathematical model. We used four feature sets. Feature set one is basically the normalized force-torque data, the normalized EMG, and historical data, i.e. the previous displacement of the object; because this is a temporal problem that depends on how much displacement has happened previously, we thought it would be a good idea to include that. Feature set two is the same but without the EMG; the third set uses the unnormalized EMG together with the displacement; and the fourth set is normalized force-torque, EMG and the previous displacement. Among the data-driven approaches we used linear regression, which is one of the simplest ways to try to find the mapping.
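[Editor's note: the mass-spring-damper abstraction mentioned above can be illustrated with a 1-D simulation. All parameter values here are illustrative, not from the experiment.]

```python
def simulate_msd(force, mass=1.0, stiffness=50.0, damping=10.0,
                 dt=0.001, steps=1000):
    """Integrate m*x'' + c*x' + k*x = F(t) with semi-implicit Euler.

    A one-dimensional sketch of the mass-spring-damper model of
    co-manipulation: the applied force F(t) stands in for the leader's
    force cue, and x is the resulting object displacement.
    """
    x, v = 0.0, 0.0
    trajectory = []
    for i in range(steps):
        accel = (force(i * dt) - damping * v - stiffness * x) / mass
        v += accel * dt
        x += v * dt
        trajectory.append(x)
    return trajectory

# A constant 5 N push: the displacement settles near F/k = 5/50 = 0.1 m
traj = simulate_msd(lambda t: 5.0)
```

In the real problem this model is posed in 3D on both the leader and follower sides; the difficulty the talk points out is that the individual force contributions of leader and follower are not directly observable, which motivated the instrumented data collection.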
We know it is not really suitable, but it is still an easy way to do it. We also used random forests, which are conceptually simple but a very powerful mechanism, gradient boosted trees, and a recurrent neural network. (And again, I think we came across this slide already; this is my mistake, I'm sorry.) For the collected data, we had almost 5,000 data points across five trials; the obstacle was in the middle here, and the two humans were trying to avoid it using this trajectory. This data was collected just before the lockdown, so sadly we couldn't carry on with the data collection, but we thought we could carry on with the other work; hopefully soon we will be able to do the rest of the data collection.
Briefly, the results of what we have done: as you can see, some of the data-driven approaches do not make a big difference. This is the RMSE in centimetres for the linear regression model, random forest, boosted trees and recurrent neural network. The recurrent neural network has the best performance with feature set one, followed by the random forest and boosted trees, and, as expected, the linear regression model. We also thought a comparison with the mathematical model would be a good idea; I know this is a slightly different line of research, but we combined the recurrent neural network with the mathematical model in something called a hybrid approach, where the mathematical model and the data-driven model are combined to see if this is of any use, especially for improving the mathematical model's performance.
Regarding the mathematical model's performance, the error is almost 1.7 metres, which is massive, while for all the data-driven models the error is less than 0.25 centimetres. The hybrid model, where we combine the data-driven model to model the error of the mathematical model, was also not bad in comparison with the mathematical model alone. I have tried to run through those approaches and the hybrid quickly, so if you have any questions, please let me know at the end of the presentation.
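[Editor's note: a minimal sketch of fitting the simplest of the mappings described above, linear regression, on two of the feature sets. The data here is synthetic and purely illustrative; it is not the experimental dataset.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the recorded signals (illustrative only):
n = 500
ft = rng.normal(size=(n, 3))         # normalized force-torque (Fx, Fy, Fz)
emg = rng.normal(size=(n, 2))        # normalized left/right muscle activity
prev_disp = rng.normal(size=(n, 3))  # previous object displacement

# Feature set 1: normalized FT + normalized EMG + previous displacement
X1 = np.hstack([ft, emg, prev_disp])
# Feature set 2: the same without the EMG channels
X2 = np.hstack([ft, prev_disp])

# Fake ground-truth mapping so the example is self-contained
true_w = rng.normal(size=(X1.shape[1], 3))
y = X1 @ true_w + 0.01 * rng.normal(size=(n, 3))  # x, y, z displacement

def fit_linear(X, y):
    """Ordinary least squares: the simplest data-driven mapping."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def rmse(X, w, y):
    return float(np.sqrt(np.mean((X @ w - y) ** 2)))

w1 = fit_linear(X1, y)
```

The same `X -> y` framing carries over unchanged to the random forest, boosted trees and recurrent network; only the estimator is swapped. On this synthetic data, dropping the EMG columns (feature set two) degrades the fit, loosely mirroring why the feature-set comparison was of interest.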
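[Editor's note: the hybrid idea, using a data-driven model to learn the error of the mathematical model, can be sketched as residual learning. The physics model, the true relation and the linear corrector below are all hypothetical placeholders.]

```python
import numpy as np

def physics_model(f):
    """Stand-in for an imperfect mathematical model: it predicts
    displacement proportional to force, but with the wrong gain."""
    return 0.5 * f

def fit_residual(f, y_true):
    """Fit a linear correction to the physics model's error, so that
    hybrid prediction = physics prediction + learned residual."""
    residual = y_true - physics_model(f)
    A = np.vstack([f, np.ones_like(f)]).T  # slope + intercept
    coef, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return coef

def hybrid_predict(f, coef):
    return physics_model(f) + coef[0] * f + coef[1]

# Illustrative "true" relation the data follows: y = 2*f + 0.1
f = np.linspace(0.0, 10.0, 50)
y = 2.0 * f + 0.1
coef = fit_residual(f, y)
```

The appeal of this structure, as the talk suggests, is that the data-driven part only has to capture what the mathematical model gets wrong, rather than the whole mapping.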
Finally, I would like to talk about the conclusions and future work. In general, what we have seen is that there are still open questions about human-robot collaboration setups, and we believe that wearable sensors, and trying to extract the human's physical and psychological state, will help to improve human-robot collaboration. In the literature, people have done research on individual sensory data streams and integrated them, which is good, but we still lack a more holistic representation of what we need in human-robot collaboration setups, and of which data will be useful and which will not. Because of this, we have presented our initial framework to support this effort. In the future we would like to improve it by including more sophisticated control methods, like reinforcement learning and interactive reinforcement learning; our tool also lacks visualization tools, and we would like to extend that in the near future.
Thank you so much; I hope I didn't go through things too quickly. If you have any questions, please feel free to ask.
Presentation attendee: Yes, Ali, thank you for that presentation. I have a question. It's very interesting; particularly, I was thinking about the manipulation task. It sounded to me like a very good option for including something like a glove with force sensors embedded in it, or something like that.
Dr Ali Al Yacoub: Yeah, there are...
Presentation attendee: That's true. Another option is to just use motion tracking, also for the arm; once you know the outline of the robot, you can estimate whether there is contact with the human or not.
Dr Ali Al Yacoub: Yeah, there is a balance. Sorry; on the gloves: there are already people doing research on that and integrating it with robots. We tried to get one of those gloves, but up to now we haven't been successful in our centre, though I honestly hope I will be able to include that. On vision: yes, vision can provide some information about the current situation; however, and we have experienced this ourselves, sometimes if the human is too close to the robot, it is confusing for a vision system to know whether the human is still in touch with the robot or not, and you still need some kind of contact sensing equipment to indicate whether the human is still in contact with the robot.
Presentation attendee: Yes, I would assume that the video would be more of a rough kind of proximity sensor for body parts; of course it's not sensing the force or anything.
Dr Ali Al Yacoub: Yeah, it depends on how many cameras you would like to integrate into your system and how much computational power you have; those factors also contribute a lot to answering what the suitable setup for human-robot collaboration is. This is one of the open questions in human-robot collaboration: how much would I like to invest in that? The force-torque sensor and the muscle activity sensors are a very cheap solution in comparison with a twelve-camera system with a set of GPUs and learning running in the background trying to extract the body posture of the human, like OpenPose-style software. So I agree it is useful, but in close-to-contact situations it still doesn't give you the full information.
Presentation attendee: Yeah, I guess it's about whether the application needs to be generalizable or not, right? If it's a very constrained task, like what you were describing, then of course you want to search for sensors that are as specific as possible to the task being asked. But if, in the future, the manufacturing floor has robots performing multiple tasks, then it's likely that a lot of different options need to be available, and there also needs to be adaptability based on the task. And, sorry, just to follow up on that, the question as well is what approach we take: do we just keep adding sensors? Your example used, I think, four EMG sensors, but we could add many more. Or do we try to improve the computational side of things, or both? That's still a question. I would reiterate the point you were making at the end: at some point there's going to be a bit of a bottleneck in terms of data bandwidth; more data is not always better.
Dr Ali Al Yacoub: I totally agree, but in order to decide what data we need, we first need to combine all of it, study how the signals interact with each other, find the relative mapping, and then remove the sensory data which does not improve the setup. But first we need to do that.
Presentation attendee: I think I would argue you could make some educated guesses. I should say I'm not an engineer myself; I come at this from the angle of the human, as a human movement scientist. So I would argue that data about humans can probably already guide you a little bit on what types of sensors to use, whether it's kinematics or the forces generated, and how useful those bits of information are to convey what the human is actually doing.
Dr Ali Al Yacoub: You might know about this more than me: there is something called error-related potential, where basically your brain can indicate if something is going wrong in your environment. If you can pick that up, then you can use this signal to teach the robot to do things and take corrective action based on it. So yes, I think using the human in such a setup is very useful, but there are a lot of limitations; hopefully in the future we will be able to address them.
Presentation attendee: Well, we don't know everything about the human yet, so that's something that keeps me in work.
Siobhan Urquhart: Okay, do we have any more questions for Ali?
Presentation attendee: Oh hi, I have a question. It kind of follows from the previous discussion, and although it's not my area, I'm interested in whether you have considered, at any point, or have plans to consider, the human acceptance of wearing all of these sensors, and whether you have any ideas about what to do about research in this topic.
Dr Ali Al Yacoub: Yes. Sorry, I forgot to mention that we are working with Nottingham, Bristol and Cranfield, and I think at Cranfield, if I'm not mistaken, they are doing the technology acceptance studies. It's not my background, so I don't want to speak to it, but yes, it is definitely relevant, and DigiTOP as a whole is looking at those things. I'm focusing on these kinds of applications, but it is within our interest. (Attendee: Oh, thank you very much.) You're welcome.
Siobhan Urquhart: Okay, so thank you, Yulia and Joost, for your questions. If anybody has anything else, please do shout; otherwise I'd like to say thank you, everyone, for attending today. It's been really nice, and it's been great to hear about everything from you, Ali, as well. We are going to put this recording on our DigiTOP website, so if anyone would like to use it and access it, that's absolutely fine. We also encourage you to have a look at the DigiTOP toolkit, which is in its very first stage of development; there is some information on there, and there will be a further phase next year where much more information will be released from all the studies everyone in the team has been doing. If there is anything you want to speak to the team about, please do get in touch with myself and I can always pass any messages on and make those connections for you. So thank you very much, everyone, and I hope you enjoy the rest of your day.
Multiple persons: Thank you very much, thank you, take care, bye, thank you, bye, thank you.
Video webinar: Structured Authoring for AR based communication (Dr Dedy Ariansyiah, Cranfield University)
Transcript: Structured AR communication for remote diagnosis – Cranfield
Dr Dedy Ariansyiah: Hi everyone, welcome to this webinar. In this webinar I'm going to talk about the potential of AR technology to improve remote communication in maintenance. My name is Dedy Ariansyiah and I'm a research fellow in AR for through-life engineering at Cranfield University.
Some points that I will cover in this talk are the research background and the problem of remote communication in maintenance; then I will give an overview of how AR technology has been implemented to support remote maintenance.
After that I will talk about our work in developing an AR-based remote communication framework to improve remote diagnosis. This will be followed by how we validated our approach by comparing the AR versus the no-AR solution. Finally, some conclusions and further research directions in this area will be given at the end of the presentation.
The ultimate goal of maintenance managers in any industrial sector, such as aviation, manufacturing and railways, is to maximize the uptime of production assets and keep downtime to a minimum.
Every physical asset is subject to degradation and will eventually fail at a certain point in time. Although the remaining useful life of an asset can be predicted thanks to IoT sensors and predictive models, the maintenance quality and its associated costs still depend heavily on the skill set of the technician who carries out the task.
Given that a skilled technician is not always available on site when a failure occurs, the question that we are trying to address is: how can we effectively and efficiently provide access to expert knowledge when it is needed?
The conventional method is to use telephone and email to reach a skilled technician; however, this approach leads to long waiting times in communication and often to misunderstandings, which result in lost production time and inefficient use of resources. At Cranfield University we developed an AR-based solution that offers a novel way of visualizing and exchanging information between a remote expert and a local technician.
The intention of the developed approach is to improve communication and therefore reduce errors, misunderstandings and time-consuming tasks. In the academic literature we found 20 papers from 2010 to 2018 that address the application of AR to remote maintenance. These papers can be categorized into four main areas; the majority of the studies focused on process guidance, followed by training, remote assistance and data collection.
From a further analysis of this literature we identified at least three main areas that require further research. The first is structured communication, which refers to the way remote communication is regulated to reduce ambiguity.
The second is automatic data collection, which refers to an automated mechanism for gathering maintenance data to improve maintenance communication during the collaborative task. The third is an expert system, which refers to a recommender system that can advise a technician troubleshooting a problem in an unknown situation. This talk will focus on how an AR-based maintenance framework, especially for diagnosis tasks, can be enhanced using structured communication.
To address this question, the approach we took was to develop an AR-based remote communication framework built on two main components: first, an innovative message structure, and second, a rule-based authoring approach for automatic AR content creation. Looking at the left figure, we can see that conventional remote collaboration makes use of telephone and email to deliver unstructured messages and other related information.
With this approach it is difficult to ensure that a message is transparent and easily understood, because of the knowledge gap that might exist between a remote expert and a local technician. In contrast, our approach uses structured messages and rule-based authoring that decompose a message into information elements and transform them into AR instructions that can enhance remote communication for diagnosis tasks.
This is an example of the message structure and the AR visualization for both the expert and the technician view. At the bottom you can see the message elements of the message structure, which was developed using the 5Ws method.
Using these message elements allows us to address three main challenges: first, the ability to construct an effective message that can describe any procedure; second, the ability to record and replay a call based on the message blocks; and third, the ability to automatically create AR instructions.
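[Editor's note: a hypothetical sketch of what a 5Ws-style structured message might look like in code. The field names and example values are illustrative; they are not the paper's actual schema.]

```python
from dataclasses import dataclass, asdict

@dataclass
class DiagnosisMessage:
    """One structured message: each field is an information element
    that a rule-based authoring step could turn into an AR instruction."""
    who: str    # addressee, e.g. the local technician
    what: str   # the action to perform
    which: str  # the object being referred to
    where: str  # location anchor for the AR overlay
    when: str   # ordering / step number

    def to_record(self):
        """Serialize for logging, so a call can be recorded and replayed."""
        return asdict(self)

# Example loosely based on "message A" in the validation tasks below
msg = DiagnosisMessage(who="technician", what="unscrew",
                       which="front panel screws",
                       where="front of the hatch", when="step 1")
```

Decomposing a free-form instruction into fixed elements like this is what makes both the replay capability and the automatic AR content creation tractable: each element maps to one part of the overlay (object highlight, action icon, step ordering).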
The second element of the framework is the development of a rule-based altering approach. In the left figure we can see an ar instructions altered by a remote expert can be overlaid in the technician field in the real environment.
Unlike the traditional approach, an AR-based solution can provide two kinds of awareness. First is object awareness, which refers to the identification of the object being referred to.
Second is procedure awareness, which refers to the procedure to be performed on the object being referred to. And here comes the validation part of our AR-based system.
The implementation consists of three elements: first, a PC used by an expert to create AR instructions; second, a cloud server that stores the information and provides data access to a database; third, a technician HoloLens, which is a device used by the technician to visualize AR information in the real environment.
The HoloLens is equipped with a web camera that can be used to live-stream what the technician is looking at to the remote expert. We carried out experimental validation to evaluate whether the developed approach can actually enhance remote diagnosis tasks in terms of errors, completion time and the complexity of the message. There were three different independent variables and two evaluations, which involved a total of 30 MSc students for the performance evaluation and eight industrial users for the usability and feasibility evaluation.
And here is more detailed information on the experimental tasks that we asked our participants to carry out. There were four messages, from simple to complex. Message A refers to a remote expert asking a participant to unscrew the screw of the front panel of the field hatch and open it.
Message B refers to an expert asking a participant to visually inspect the left and the right side of the field hatch and take a photograph of every defect found. Message C refers to an expert asking a participant to repair any defect by placing a patch.
Message D refers to an expert asking a participant to search for and take a photograph of the previous repair result and send it by email. Here are the results that we obtained from our validation test. In terms of errors, we found that the number of testers who made a mistake during the remote diagnosis tests was similar across the different experimental groups. This was also true for the total number of errors across the experimental groups, which implies that the accuracy of remote diagnosis using augmented reality remained unaffected. In terms of time, we found that a reduction in remote diagnosis time was achieved using the augmented reality solution: the average time reduction using AR was 56 percent in comparison to the no-AR solution, so it leads to more than halving the time, which entails better use of the expert's time.
Furthermore, the time reduction increased as the complexity of the message increased. This implies that remote diagnosis using our AR solution can achieve more time saving as the complexity of the message increases. Finally, we also obtained the results of the usability and feasibility evaluation from the testers' and the industrial users' opinions, based on the data collected.
We found similar results between the two user groups, which shows some evidence that our AR solution for remote diagnosis is indeed useful and applicable in a real-life context. Therefore, based on our validation results, we can conclude that our AR-based approach using a message structure and rule-based authoring can be used to improve the efficiency of remote diagnosis in terms of time reduction; however, the total number of errors was similar between our AR-based approach and the no-AR approach in remote diagnosis.
Further, we also found that what the experts felt was similar to what the testers felt in terms of the usability, utility and feasibility of our AR solution. As part of future work, we should address the efficiency of the AR solution on the side of the remote expert. Finally, we should also address the potential use of structured communication for developing a recommender system.
Thank you for attending this webinar. I'm happy to answer any questions related to this talk.
Video webinar: A Design Framework for Adaptive Digital Twins. Dr John Erkoyuncu (Cranfield University)
Transcript: A Design Framework for Adaptive Digital Twins
Dr John Erkoyuncu: explain the different kinds of results that we got from this. So just to introduce the talk, firstly I guess the context is digital twins, and the angle that we're coming from in this paper is really about how we enable digital twins to evolve over time. With that I'm really focusing on data, in terms of the different types of data that we're trying to collect and how that data actually gets processed in the digital twin over time. So we're really looking at that design architecture to be able to adapt the digital twin based on the data that's feeding it.
So as an introduction, we consider the digital twin as a living entity, and that's an important aspect: we'd like the digital twin to be a digital representation of the assets and processes, and over time we want it to be able to continuously represent the asset and the processes.
So that's the key thing: the living-entity element is key, in that we need to capture the different complex engineered assets, and we need to understand that the different interconnections across the data, models and visualization systems need to be captured over time so that the digital twin continues to be representative. By looking at the literature and talking to industry, we realized that there are some gaps around how to enable data interconnectedness, and that was really the context and the focus for this paper.
So for us the focus was on creating a design framework because we consider that if you don't have a suitable design framework you won't be able to facilitate that data flow and how you're going to manage the data across the life cycle.
So in this paper we particularly looked at ontologies as a way to structure data and and i'll explain this in more detail uh shortly.
And then we also looked at defining data because when we said that data will evolve over time we wanted to characterize what data means so we defined it based on kind of big data literature and we looked at things like variety velocity and volume as the key ways to characterize data and i'll explain these different aspects in a moment as well.
So in terms of the state of the art when we looked at the literature it really is about providing a digital representation of a physical asset and it needs to really describe the properties the condition and the behavior through its models through the different analysis that takes place.
So it's really important that the representation the digital representation is accurate it's robust and it's evolving as needed uh with whatever you're trying to represent.
So we've looked at the different applications in the literature, and you'll see from the references that this field has pretty much been evolving over the last three to five years at most, so it's quite a relatively new area to look at.
So some papers as as shown in this in this slide you can see that digital twins have been applied for human robot assembly systems they've been applied for health management of wind turbines and also looking at factories how to kind of model the factory and improve the productivity of the factory.
So i think a lot of the literature started off by trying to describe what is a digital twin and i think more and more now we're moving into illustrating the potential benefits that a digital twin can provide so there are a lot more working examples that are available and we're starting to really understand how we can implement them rather than just conceptually describe what a digital twin is.
So in terms of the research gaps, the areas that we were focusing on were really the connection between digital twins, brownfield systems and their data; that connection was missing from the gaps that we identified. We also realized that the feedback mechanism between the as-is model of the product and the product model itself is not necessarily that clear, in terms of how to connect them. And we also looked at how to integrate the different modules of data with minimum intervention; that's another thing that was missing.
So in terms of our focus what we try to really create here is to build a mechanism really to design the data architecture so that the data and the models can be evolving the connection between these can be evolving over time so we want data and models to kind of stay connected and evolve over the life cycle.
So really as an example if if i have an asset like let's say i have a car and i have a digital twin of that car what we wanted to understand was how can we collect the data continuously over the life cycle of that car so that we can really represent that car and its future states in an accurate manner.
So in terms of how we started to look at this we first looked at different ways that data is today represented.
So there were three different approaches that we identified. Firstly, the decentralized approach: you can see the figure here on the left-hand side. The decentralized approach considers the relationships between different software packages as one-to-one relationships, so you can see these arrows highlighting the one-to-one relationships that are needed to be able to communicate between different software packages.
Now what you'll realize from this is that it's relatively manual; it's very laborious to build those relationships, and it's quite time-consuming.
The second scenario is to build a centralized approach whereby you can see the software and the data storage you're building a kind of centralized data storage mechanism and each software can share data to that centralized data repository.
Now the approach that we're considering to develop is that we're building a new language in the middle, between the centralized data facility and the different software packages.
So this language in the middle in a way is facilitating the integration between the software and the data and that's the key thing here that we're trying to achieve.
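One way to see the benefit of a language in the middle is to count integration links: with n software packages, point-to-point integration needs one link per pair of packages, whereas a shared hub language needs only one link per package. This is my own back-of-the-envelope illustration of the idea, not a calculation from the paper:

```python
def point_to_point_links(n: int) -> int:
    """Decentralized: every pair of packages needs its own translator."""
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    """Shared language in the middle: one interface per package."""
    return n

# With 10 packages: 45 pairwise links vs 10 hub links.
for n in (3, 5, 10):
    print(n, point_to_point_links(n), hub_links(n))
```

The gap grows quadratically, which is why the manual, laborious nature of the decentralized approach gets worse as more software is added.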
So along these lines we're really trying to understand how data can be represented across the asset's life cycle, and the key thing here is modifications in the asset: whenever there's a modification in the asset, the whole relationship in terms of the data and the software becomes complex, and we want to address that complexity by creating this new approach, where the link between data and models can be seamless, or much easier.
So just to introduce ontologies and give you an overview: compared to other shared languages like SQL, ontologies do have some advantages. Firstly, they semantically structure data, and with that they typically use knowledge domains; each knowledge domain really structures the different key parts of the data that you'd like to utilize.
So ontologies are also based on the open world assumption which means the interfaces for receiving new data can be defined based on schemas.
So that really is how you can define it, and you can see the picture on the right-hand side, whereby you're defining a hierarchy and you're defining your knowledge domains. If you consider the blue boxes here as, in a way, your knowledge domains, you can then start to break down the different types of data that are linked to them, and we can start to build relationships between different relevant data types. That's really the key thing here: we can start to understand how different types of data connect to each other and how they can start to influence your decision-making capabilities.
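A rough sketch of this kind of ontology structure, with knowledge domains broken down into classes and explicit relationships between them, might look as follows. The class and relation names here are illustrative assumptions, not the actual ontology from the paper:

```python
# Minimal ontology-style structure: a class hierarchy plus a set of
# (subject, relation, object) triples. Names are hypothetical examples.
ontology = {
    "classes": {
        "Asset": {"subclasses": ["Gearbox", "Sensor"]},
        "Gearbox": {"subclasses": []},
        "Sensor": {"subclasses": []},
    },
    "relations": [
        ("Gearbox", "hasComponent", "Sensor"),
        ("Sensor", "producesData", "VibrationReading"),
    ],
}

def related_to(cls: str) -> list:
    """Traverse declared relationships outgoing from one class."""
    return [(p, o) for s, p, o in ontology["relations"] if s == cls]

print(related_to("Gearbox"))  # [('hasComponent', 'Sensor')]
```

Because new classes and triples can be appended without restructuring what is already there, this kind of representation matches the open-world assumption mentioned in the talk: new data interfaces can be added as new relationships rather than as schema changes.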
So this was really the backbone in terms of how we can connect the data and the software, and this is just showing an example here in a repair and remote diagnosis type of context, highlighting the key things that you might need to be able to make decisions.
So in terms of the framework that we developed, there were two main stages that we considered. Firstly, in stage one, we looked at describing the asset and process of interest; it's really the backbone of the asset, and we want to make that description of the product and process clear, so that we can start to understand what kind of changes can take place on that asset or process.
So in stage one we define the asset, and in stage two we start to capture what kind of changes can happen over time. It's really about understanding the interfaces between the software and the data, and starting to see whether there are any new types of data that can be represented in the digital twin.
So we're trying to understand that change element and capture change so that the digital twin is evolving over time so we're very much looking at the knowledge domains from the ontology language and we're trying to see how those knowledge domains are going to evolve over time based on that data hierarchy that i showed you in an example in the previous slide.
So in terms of the stages and the framework, this slide just goes into a bit more detail. On the right-hand side you can see the figure showing the different stages and steps that are part of the framework. In stage one it's really about the asset: we're trying to characterize the asset in terms of its system, its modules and its components. So in terms of the steps, we're trying to identify the asset hierarchy, declare the relationships between the hierarchy elements, and set the naming of the different knowledge domains and their sub-elements.
In stage two the focus is more on the software side: we're trying to capture changes in the software. If you're using, for example, CAD, CAM, various monitoring-related software or any new software, we want to understand how those new software packages are related to the asset and how the relationships and interfaces are evolving over time.
So you can see we've got eight steps to really understand the different types of information that are generated and the new interfaces that come into place as we introduce new software.
So in that process we're looking at different ways to quantify data: if I have new data, or the data is changing, we want to be able to characterize that, and we've looked at that from three different angles. One is variety: we define data variety based on the number of attributes and/or relationships to modify and update.
The second area is volume: volume looks at the number of individuals to modify or update when assigning new attributes or relationships. And the third area is velocity: this is about the number of interfaces to update due to the asset's evolution.
So this is really how we're defining the ontology's role so the ontology is really targeting to capture these different changes in data and we're trying to structure that so that the digital twin becomes seamlessly updated over time.
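These three counts can be sketched directly from the definitions above. The shape of the change description below is my own assumption for illustration; the definitions of the three axes follow the talk:

```python
# Quantify a data change along the three axes described in the talk.
# The change-description format is a hypothetical example.
def characterize_change(change: dict) -> dict:
    return {
        # variety: attributes and/or relationships to modify or update
        "variety": len(change.get("attributes", []))
                   + len(change.get("relationships", [])),
        # volume: individuals (instances) to modify or update
        "volume": len(change.get("individuals", [])),
        # velocity: interfaces to update due to the asset's evolution
        "velocity": len(change.get("interfaces", [])),
    }

# Hypothetical change: a new sensor is installed on the asset.
new_sensor = {
    "attributes": ["samplingRate", "position"],
    "relationships": ["attachedTo"],
    "individuals": ["sensor_07"],
    "interfaces": ["cad_plugin"],
}
print(characterize_change(new_sensor))
# {'variety': 3, 'volume': 1, 'velocity': 1}
```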
So in terms of stage one, this is just going into a bit more detail: in stage one we're identifying the possible hierarchy levels, declaring the different classes, defining the relationships between the hierarchical elements, correlating the classes, declaring the attributes that define the asset, and determining a standard naming convention, so that we'll stick to that and use it across the asset's life cycle.
So it's really in a way defining the fundamental basis of the asset that we're trying to represent.
In stage two it's about understanding the dynamic behavior of the digital twin.
So we have defined eight steps, and the target is really to offer design flexibility and choices to introduce adaptiveness. At the heart of this we're trying to say that if you've designed a digital twin, you don't necessarily want that digital twin to stay fixed; it will change and evolve over time, and stage two is where you can actually introduce adaptiveness and maintain how the asset is represented.
So here we're defining a temporal attribute and and that means temporal information is going to be captured.
So we'll have a date stamp we'll have different types of information represented and we can start to capture that over time.
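A minimal sketch of such a temporal attribute, where every recorded value carries a date stamp so the twin's state can be inspected over time, could look like this. The attribute names are hypothetical examples, not from the paper:

```python
from datetime import datetime, timezone

# Temporal attributes: every value is stored with a timestamp, so the
# twin's history can be replayed. Attribute names are illustrative.
history = []

def record(attribute, value, when=None):
    """Append a date-stamped (timestamp, attribute, value) entry."""
    ts = (when or datetime.now(timezone.utc)).isoformat()
    history.append((ts, attribute, value))

def latest(attribute):
    """Most recent recorded value for an attribute, or None."""
    for ts, attr, value in reversed(history):
        if attr == attribute:
            return value
    return None

record("gearbox.sensor_count", 3)
record("gearbox.sensor_count", 4)  # a new sensor was installed
print(latest("gearbox.sensor_count"))  # 4
```

Keeping the full history rather than overwriting values is what lets the digital twin show how the asset evolved, not just its current state.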
We're also looking at the different attributes and how they're connected to each other and we want to understand how those relationships also evolve over time.
So if you look at the eight steps that we've proposed here from step one to step eight we're really trying to navigate from understanding the changes that take place in the asset.
So step one is really understanding the new software that's exchangeable and creating data. We're trying to understand how that new data and new software are actually going to affect our ontology architecture, and then we start to go into more detail: if we don't know the relationship between the new software and the new data, then we create that relationship in our ontological architecture.
So it's really trying to add or adapt the ontology that we have and grow it and adapt it over time by following these different steps.
So we applied this framework in two different case studies: one representing a helicopter gearbox system and the other a robotic system. I'll show you videos and pictures in a moment, but in a nutshell, for the helicopter gearbox we firstly defined the gearbox, and then there was a change in it, in that we made a geometrical hole in the gearbox. So we introduced a new hole, and we tried to understand how that change was handled in terms of the adaptiveness of the digital twin.
So for us the question was if we made the physical modification on the asset how does that affect the digital twin.
In terms of the robotic system we were actually interested in introducing new cameras to the system so that we could evaluate and realize the movements of the robot.
So that was really changing the system and understanding whether the digital twin can realize the change in the system.
so if we go into more detail in the helicopter gearbox example you can see on the right hand side there's a picture to illustrate the gearbox.
So what we did was to introduce a new screw, and we added a new sensor; that was our first step. You can see here, this was the component that we added to the gearbox.
In the second step we recorded the manufacturing activities through various web-based software like CAM and CAD. We used the CAD plug-in to represent the component; then we reviewed that CAD model and updated the total system-level CAD model; then we integrated that into our augmented reality visualization of this model; and lastly we updated the digital twin to reflect the modifications of the physical asset with the new sensor.
So for us this is kind of a continuous process so if you make a modification in the asset we then go through these different stages.
So it's a continuous process and for us the measure of success is based on whether the digital twin is accurate in representing this asset.
So just to take you through a video of the outcome, which actually shows that the digital twin is updated: we're using augmented reality, we're using HoloLens here, and you can see the digital twin representation of the gearbox. After applying each step that I just explained, our digital twin can be visualized, and you can also start to see various figures that show the updated sensor that was introduced, as well as the various sensors that were already embedded in the gearbox.
So for us really it's a question of if we were to make a new change to the gearbox how will the digital twin actually adapt and capture those changes.
So this was one example area that we looked at. Going into a bit more detail on the helicopter system, here we looked at how our approach compared to the SQL approach.
So here we saw that the variety aspect was very similar, but there was no schema change needed, so that actually made it a lot quicker to make the modification.
In terms of volume again it was similar to the sql approach but we also were able to ensure that data consistency was easier to manage because we only had one individual change in the in the properties.
In terms of velocity it was faster only one interface was needed instead of two so we were able to integrate the different software packages easier because of the exchange language that we created through the ontology.
So we realized there were some benefits in terms of the data processing in this process.
In terms of the robotic system: this was a lab-based mobile robot system, and we were introducing new sensors to capture the movements of the robots.
So you can see the kind of system layout here we've got lab-based mobile robots here and we wanted to introduce a new camera-based tracking system.
So the asset is represented as part of stage one in our framework, all that information is populating the digital twin, and we're then starting to understand the movements of the robots in much more detail by introducing these new cameras.
So just to go into a bit more detail: this was how we were looking at it. The framework was applied in a software platform, and we were checking whether the new camera system that we had embedded was seeing the movements of the robots.
So for us the question was whether the camera system was detected by the digital twin, and whether that camera system was actually improving our ability to capture the robots' movements.
So you can see in this figure here where we're representing and understanding where these different robots are and our ability to predict their location had improved.
And in terms of the way we were managing the data, we could see that a relational database, like an SQL-based approach, was actually harder to use to manage the overall data coming into the system, whereas the ontology-based approach was much more efficient in the way we managed the data.
So that was really trying to give another example as to how the framework could be applied and whether it's generated any benefits.
So with that i think just to conclude a few points um so here we we created a design framework particularly focusing on how data can evolve over time when you're trying to update a digital twin.
So we were looking at data in terms of three aspects, variety, velocity and volume, across the life cycle, and we wanted to illustrate that the way you can manage data can improve by implementing ontologies. We could also show that there are closed loops between the actual physical assets and the digital assets that we're managing.
So moving forward, we're really interested in understanding further the relationship between data and models, and we want to go into much more detail on the feedback side.
So for example if we make some analysis in the digital twin we want to see how that analysis can feed back into the physical asset and how we can improve the efficiency and productivity of the physical asset.
So that's really a critical area of interest and we need to really understand how we can use the models and the data better to give that feedback back to the asset and we also need to think of the mechanisms of how that information can be channeled back into the asset to influence the behavior of the physical asset.
So we'll be continuing that kind of work as part of DigiTOP, and it's an area that we're quite keen on.
So in terms of acknowledgements: we acknowledge the EPSRC DigiTOP project, and we also acknowledge some colleagues from Slovenia that we collaborated with on this project, as well as colleagues at City University London and Babcock International as an industrial partner to this project.
So with that thank you very much and um i'll be happy to answer any questions that you may have.
Siobhan Urquhart: Thank you John, that was really interesting. If anybody has any questions, please do put them in the chat function or just ask John straight away.[pause]
Dr John Erkoyuncu: I'm trying to open the chat function but it's not opening for me um okay i'm not sure if there are any questions there.
Siobhan Urquhart: There aren't any coming up so far, so it doesn't look like we've got any questions for you, sorry. Oh, here we go. So Maria... are you able to see that, or do you want me to read it out, John?
Dr John Erkoyuncu: If you could read it it's not coming up for me okay.
Siobhan Urquhart: So Maria asks: you talked about structuring data in a way that a communication language sits between the data and the model; does this affect response time?
Dr John Erkoyuncu: Okay, yeah, that's a good question. It does affect the response time, and the key thing there is that the communication between the data and the models can take time, and that's why we're comparing the ontology-based approach with other approaches like the SQL approach.
Any kind of modelling approach has this processing time and i think here we're trying to improve that time it takes to to do that exchange.
But the structure of the ontology is key for that so if you if you have an inefficient structure it will delay the analysis.
Siobhan Urquhart: I hope that answers your question, Maria. And I think Mojang is just typing something, so we'll wait for that.
Dr John Erkoyuncu: Okay great Maria i'm not sure if that answered your question?
Siobhan Urquhart: She says it did thank you.
Dr John Erkoyuncu: thanks.[pause]
Siobhan Urquhart: Okay, so Mojang is asking: thanks for your talk, it was interesting; what is your next plan and future for this idea?
Dr John Erkoyuncu: Okay, thank you. So we're continuing to work on the DigiTOP project, and as part of that we're looking at using augmented reality as the means to collect data from the people, the operators, for example the people doing maintenance.
So we now want to see how we can collect data from multiple sources.
So with augmented reality it could be a person maybe speaking into the HoloLens, and then we're verbally collecting information; it could also be someone doing assembly or disassembly, and we want to visually recognize the changes that they're making so that we can feed those changes into the digital twins. So the question there is: instead of someone filling in a piece of paper to illustrate what they've done, we now want to automate that with augmented reality and capture the changes and feed them into the digital twin automatically.
So that's the kind of seamless data collection, and we want to feed that into the digital twin in real time, or close to it, and we want the digital twin to become richer in terms of the data sources. So we're thinking of moving beyond sensors to incorporating data from lots of different sources, including people.
Siobhan Urquhart: That's great thank you um i think Mojang is just asking something else.
Dr John Erkoyuncu: okay
Siobhan Urquhart: He's saying thank you, that's great. If there are any more questions, please do put your hand up or type anything in here. I just would like to say thank you to John and make you aware of some more webinars that we've got coming up as part of the DigiTOP project. We have Understanding and Exploring Integration and Interaction with Rich Pictures, which should be a really nice one; that's next Tuesday, the 19th of January, with Ella from Loughborough University.
We've also got Facial Thermography with Adrian, who's doing that on Tuesday the 26th of Jan. Then we have a final webinar, Multi-Sensory Virtual Environments, on the second of Feb, with Dr Glenn Kawson, also from the University of Nottingham. If you'd like to have a look on our website, you'll be able to see the links there. And then we have one final question from Maria, who asks: how would a digital twin respond to drift in the data or the model?
Dr John Erkoyuncu: Okay so that's a really good question and that's uh actually another piece of work that we've just done we've just submitted a paper on this so we're looking at using artificial intelligence to really look at learning from past events and uh different kind of disruptions.
So the thing that we were really interested in here was how you characterize that there is a disruption in the digital twin; so for us the first question was how we realize that there is a deviation.
Then we looked at how do you actually implement a resilient strategy.
So we we need to provide a response if there is a disruption and and here the interesting thing is disruption is in terms of the data and the digital twin not being updated as needed.
So for us the the resilience of the digital twin was the the critical area in terms of the research.
Um and and we've just done the first piece of work around that and we're going to continue to look at that more i think it's a critical question thank you for that Maria.
Siobhan Urquhart: Okay, thank you, that's really great. I've just posted the link to the website there, and Maria is saying thank you for your answer. I think that's it.
Dr John Erkoyuncu: Great thank you very much.
Siobhan Urquhart: Thank you so much John, that was brilliant. Thank you to everyone else for attending. I think Adrian's actually just put thank you, and lots of people are showing their appreciation, so that's wonderful. Thank you so much John, and I hope everyone enjoys the rest of their day.
Multiple persons: thank you very much, thank you, thank you, thanks a lot, bye take care everybody