Transcript
Dr John Erkoyuncu: …explain the different kinds of results that we got from this. So just to introduce the talk, firstly I guess the context is digital twins, and the angle that we're coming in with in this paper is really about how we enable digital twins to evolve over time. With that I'm really focusing on data: the different types of data that we're trying to collect, and how that data actually gets processed in the digital twin over time. So we're really looking at the design architecture needed to adapt the digital twin based on the data that's feeding it.
So as an introduction, we consider the digital twin as a living entity, and that's an important aspect: we'd like the digital twin to be a digital representation of the asset and its processes, and over time we want it to continuously represent the asset and the processes.
So that living entity element is key. We need to capture the different complex engineered assets, and we need to understand that the interconnections across the data, models, and visualization systems need to be captured over time so that the digital twin continues to be representative. By looking at the literature and talking to industry, we realized that there are some gaps around how to enable data interconnectedness, and that was really the context and the focus for this paper.
So for us the focus was on creating a design framework, because we consider that without a suitable design framework you won't be able to facilitate that data flow or manage the data across the life cycle.
So in this paper we particularly looked at ontologies as a way to structure data, and I'll explain this in more detail shortly.
And then we also looked at defining data, because when we say that data will evolve over time we need to characterize what data means. We defined it based on the big data literature, looking at variety, velocity, and volume as the key ways to characterize data, and I'll explain these different aspects in a moment as well.
So in terms of the state of the art, when we looked at the literature, a digital twin is really about providing a digital representation of a physical asset, and it needs to describe the properties, the condition, and the behavior through its models and the different analyses that take place.
So it's really important that the digital representation is accurate, robust, and evolving as needed with whatever you're trying to represent.
We've looked at the different applications in the literature, and you'll see from the references that this field has pretty much been evolving over the last three to five years at most, so it's quite a new area to look at.
Some papers, as shown in this slide, show that digital twins have been applied to human-robot assembly systems, to health management of wind turbines, and to factories, modelling the factory to improve its productivity.
I think a lot of the literature started off by trying to describe what a digital twin is, and more and more now we're moving into illustrating the potential benefits that a digital twin can provide. There are a lot more working examples available, and we're starting to really understand how we can implement digital twins rather than just conceptually describe them.
In terms of the research gaps, the areas we focused on were, first, the connection between digital twins, brownfield systems, and their data, which was missing from the literature we reviewed. We also realized that the feedback mechanism between the as-is model of the product and the product model itself is not clear, in terms of how to connect them. And a third gap was how to integrate the different modules of data with minimum intervention.
So in terms of our focus, what we try to create here is a mechanism to design the data architecture so that the data, the models, and the connections between them can evolve over time; we want data and models to stay connected and evolve over the life cycle.
As an example, if I have an asset, let's say a car, and I have a digital twin of that car, what we wanted to understand was how we can collect data continuously over the life cycle of that car so that we can represent the car and its future states accurately.
In terms of how we started to look at this, we first looked at the different ways that data is represented today.
There were three different approaches that we identified. The first is the decentralized approach; you can see the figure here on the left-hand side. The decentralized approach considers the relationships between different software packages as one-to-one relationships, and the arrows highlight the one-to-one links that are needed to communicate between the packages.
Now what you'll realize from this is that it's relatively manual, it's laborious to build those relationships, and it's quite time-consuming.
The second scenario is a centralized approach, whereby you build a centralized data storage mechanism and each software package shares data with that centralized repository.
Now the approach that we are developing builds a new language in the middle, between the centralized data facility and the different software packages.
This language in the middle facilitates the integration between the software and the data, and that's the key thing we're trying to achieve.
Along these lines, we're trying to understand how data can be represented across the asset's life cycle, and the key thing here is modifications to the asset. Whenever there's a modification, the whole relationship between the data and the software becomes complex, and we want to address that complexity by creating this new approach, where the link between data and models becomes seamless, or at least much easier.
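As an editorial aside, a minimal sketch of the integration-cost argument behind this design choice (the counts are a simplification, not taken from the paper): with point-to-point integration each software package needs a translator to every other package, while a shared language in the middle needs only one importer and one exporter per package.

```python
# A simplified sketch of the integration-cost argument, not the paper's code.
def point_to_point_mappings(n: int) -> int:
    """Directed one-to-one translators needed between n software packages."""
    return n * (n - 1)

def shared_language_mappings(n: int) -> int:
    """One importer plus one exporter per package against the shared language."""
    return 2 * n

for n in (3, 5, 10):
    print(f"{n} packages: {point_to_point_mappings(n)} point-to-point "
          f"vs {shared_language_mappings(n)} via a shared language")
# 10 packages: 90 point-to-point translators vs 20 via a shared language
```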
So just to introduce ontologies and give you an overview: compared to other shared languages like SQL, ontologies have some advantages. Firstly, they structure data semantically, typically using knowledge domains, and each knowledge domain structures the key parts of the data that you'd like to utilize.
Ontologies are also based on the open world assumption, which means the interfaces for receiving new data can be defined based on schemas.
You can see in the picture on the right-hand side how this works: you define a hierarchy and your knowledge domains. If you consider the blue boxes here as your knowledge domains, you can then start to break down the different types of data linked to each one, and you can build relationships between the relevant data types. That's really the key thing: we can start to understand how different types of data connect to each other and how they can influence your decision-making capabilities.
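A minimal sketch of what such a hierarchy could look like using the rdflib library; the namespace and the class and property names (Asset, Gearbox, monitoredBy, and so on) are illustrative assumptions, not the ontology used in the paper.

```python
# A sketch of a knowledge-domain hierarchy with rdflib; names are illustrative.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/dt#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Knowledge domains as top-level classes (the "blue boxes" on the slide)
g.add((EX.Asset, RDF.type, RDFS.Class))
g.add((EX.Process, RDF.type, RDFS.Class))

# Break a domain down into the data types linked to it
g.add((EX.Gearbox, RDFS.subClassOf, EX.Asset))
g.add((EX.Sensor, RDFS.subClassOf, EX.Asset))

# Relationships between relevant data types
g.add((EX.monitoredBy, RDF.type, RDF.Property))
g.add((EX.monitoredBy, RDFS.domain, EX.Gearbox))
g.add((EX.monitoredBy, RDFS.range, EX.Sensor))
```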
This was really the backbone of how we connect the data and the software. The slide shows an example in a repair and remote diagnosis context, highlighting the key things that you might need in order to make decisions.
In terms of the framework that we developed, there were two main stages. In stage one we looked at describing the asset and process of interest. It's really the backbone of the asset, and we want to make that description of the product and process clear so that we can start to understand what kinds of changes can take place on that asset or process.
So in stage one we define the asset, and in stage two we start to capture what kinds of changes can happen over time. It's really about understanding the interfaces between the software and the data, and seeing whether there are any new types of data that can be represented in the digital twin.
We're trying to understand that change element and capture change so that the digital twin evolves over time. We're very much looking at the knowledge domains from the ontology language, and we're trying to see how those knowledge domains will evolve over time based on the data hierarchy that I showed you in the previous slide.
This slide goes into the stages of the framework in a bit more detail; on the right-hand side you can see the figure showing the different stages and steps. Stage one is about the asset: we characterize it in terms of its system, its modules, and its components. In terms of the steps, we identify the asset hierarchy, declare the relationships between the hierarchy elements, and define the naming of the different knowledge domains and their sub-elements.
Stage two focuses more on the software side: we're trying to capture changes in the software. If you're using, for example, CAD, CAM, various monitoring-related software, or any new software, we want to understand how those new packages relate to the asset and how the relationships and interfaces evolve over time.
You can see we've got eight steps to understand the different types of information that are generated and the new interfaces that come into place as we introduce new software.
In that process we're looking at different ways to quantify data. If I have new data, or the data is changing, we want to be able to characterize that, and we've looked at it from three different angles. The first is variety: we define data variety based on the number of attributes and/or relationships to modify and update.
The second area is volume, which is the number of individuals to modify or update when assigning new attributes or relationships. The third area is velocity, which is the number of interfaces to update due to the asset's evolution.
This is really how we're defining the ontology's role: the ontology targets capturing these different changes in data, and we're structuring that so the digital twin is seamlessly updated over time.
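Read literally, these three measures can be counted from a proposed change set. A minimal sketch under that reading follows; the ChangeSet fields are hypothetical names inferred from the definitions above, not the paper's implementation.

```python
# Counting the three measures for a proposed change set; field names are
# hypothetical, inferred from the definitions given in the talk.
from dataclasses import dataclass, field

@dataclass
class ChangeSet:
    attributes_to_update: list = field(default_factory=list)
    relationships_to_update: list = field(default_factory=list)
    individuals_to_update: list = field(default_factory=list)
    interfaces_to_update: list = field(default_factory=list)

def variety(c: ChangeSet) -> int:
    """Number of attributes and/or relationships to modify and update."""
    return len(c.attributes_to_update) + len(c.relationships_to_update)

def volume(c: ChangeSet) -> int:
    """Number of individuals to modify or update when assigning new
    attributes or relationships."""
    return len(c.individuals_to_update)

def velocity(c: ChangeSet) -> int:
    """Number of interfaces to update due to the asset's evolution."""
    return len(c.interfaces_to_update)
```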
In terms of stage one, in a bit more detail: we identify the possible hierarchy levels, declare the different classes, define the relationships between the hierarchical elements, correlate the classes, declare the attributes that define the asset, and determine a standard naming convention that we stick to across the asset's life cycle.
So it's really defining the fundamental basis of the asset that we're trying to represent.
In stage two it's about understanding the dynamic behavior of the digital twin.
We have defined eight steps, and the target is really to offer design flexibility and choices to introduce adaptiveness. At the heart of this we're saying that if you've designed a digital twin, you don't want that digital twin to stay fixed; it will change and evolve over time, and stage two is where you can introduce adaptiveness and maintain how the asset is represented.
Here we define a temporal attribute, which means temporal information is going to be captured.
So we'll have a date stamp, we'll have different types of information represented, and we can start to capture that over time.
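A minimal sketch of what a date-stamped observation could look like in an rdflib graph; the namespace and the names (SensorReading, recordedAt, and so on) are illustrative assumptions.

```python
# A sketch of a temporal attribute on an observation; names are illustrative.
from datetime import datetime, timezone
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/dt#")   # hypothetical namespace
g = Graph()

obs = EX.vibrationReading_001              # hypothetical individual
g.add((obs, RDF.type, EX.SensorReading))
g.add((obs, EX.value, Literal(0.42, datatype=XSD.double)))
g.add((obs, EX.recordedAt,                 # the date stamp
       Literal(datetime.now(timezone.utc), datatype=XSD.dateTime)))
```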
We're also looking at the different attributes and how they're connected to each other, and we want to understand how those relationships evolve over time.
If you look at the eight steps that we've proposed here, from step one to step eight, we're navigating from the changes that take place in the asset through to updating the ontology.
Step one is about the new software that's exchanging and creating data: we try to understand how that new software and its data will affect our ontology architecture, and then we go into more detail. If we don't know the relationship between the new software and the new data, we create that relationship in our ontological architecture.
So it's really about adding to and adapting the ontology that we have, growing it over time by following these different steps.
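A minimal sketch of that adaptation idea, assuming rdflib and illustrative names; this condenses the spirit of the steps rather than reproducing the paper's eight steps.

```python
# Extend the ontology only when a new software package or its data type is
# not yet known. All names here are illustrative, not the paper's.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/dt#")

def register_software(g: Graph, name: str, data_type: str) -> bool:
    """Return True if the ontology had to be extended for this software."""
    sw = EX[name]
    if (sw, RDF.type, EX.Software) in g:
        return False                                 # already known
    g.add((sw, RDF.type, EX.Software))               # declare the software
    g.add((EX[data_type], RDF.type, RDFS.Class))     # declare its data type
    g.add((sw, EX.produces, EX[data_type]))          # declare the interface
    return True

# e.g. register_software(g, "thermalCamera", "ThermalImage") -> True first time
```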
We applied this framework in two different case studies: one representing a helicopter gearbox system, and the other a robotic system. I'll show you videos and pictures in a moment, but in a nutshell, for the helicopter gearbox we firstly defined the gearbox, and then there was a change to it: we made a geometrical hole in the gearbox, so we introduced a new hole, and we tried to understand how that played out in terms of the adaptiveness of the digital twin.
So for us the question was: if we make a physical modification to the asset, how does that affect the digital twin?
In terms of the robotic system, we were interested in introducing new cameras so that we could evaluate and recognize the movements of the robot.
So that was really changing the system and understanding whether the digital twin could recognize the change in the system.
If we go into more detail on the helicopter gearbox example, you can see on the right-hand side there's a picture illustrating the gearbox.
What we did was introduce a new screw and add a new sensor; that was our first step, and you can see here the component that we added to the gearbox.
In the second step we recorded the manufacturing activities through various web-based software like CAM and CAD. We used the CAD plug-in to represent the component, then we reviewed that CAD model and updated the total system-level CAD model, then we integrated that into our augmented reality visualization of the model, and lastly we updated the digital twin to reflect the modifications of the physical asset with the new sensor.
For us this is a continuous process: if you make a modification to the asset, we go through these different stages again, and the measure of success is whether the digital twin accurately represents the asset.
Just to take you through a video of the outcome, which shows that the digital twin is updated: we're using augmented reality, we're using the HoloLens here, and you can see the digital twin representation of the gearbox. After applying each step that I just explained, our digital twin can be visualized, and you can also see various figures showing the newly introduced sensor as well as the various sensors that were already embedded in the gearbox.
For us it's really a question of: if we make a new change to the gearbox, how will the digital twin adapt and capture those changes?
So this was one example area that we looked at. Going into a bit more detail on the helicopter system, here we looked at how our approach compared to the SQL approach.
Here we saw that the variety aspect was very similar, but no schema change was needed, which made it quicker to make the modification.
In terms of volume, again it was similar to the SQL approach, but data consistency was also easier to manage because we only had one individual change in the properties.
In terms of velocity, it was faster: only one interface was needed instead of two, so we were able to integrate the different software packages more easily because of the exchange language that we created through the ontology.
So we saw some benefits in terms of data processing with this approach.
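To illustrate the "no schema change" point in code (our reading of it, with illustrative names rather than the paper's implementation): under the open world assumption, a new kind of component simply appears as new triples on the existing graph, whereas a relational store would typically need an ALTER TABLE and a data migration first.

```python
# Illustrative sketch of the "no schema change" point; names are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/dt#")
g = Graph()

# The new sensor fitted to the gearbox arrives as plain new triples;
# no table or schema had to be altered beforehand.
g.add((EX.gearbox_01, EX.hasComponent, EX.sensor_07))
g.add((EX.sensor_07, RDF.type, EX.VibrationSensor))
g.add((EX.sensor_07, EX.mountedVia, Literal("drilled mounting hole")))
```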
In terms of the robotic system: this was a lab-based mobile robot system, and the thing we were interested in was introducing new sensors to capture the movements of the robots.
You can see the system layout here: we've got lab-based mobile robots, and we wanted to introduce a new camera-based tracking system.
So the asset is represented as part of stage one in our framework, all that information populates the digital twin, and we then start to understand the movements of the robots in much more detail by introducing these new cameras.
To go into a bit more detail, this is how we were looking at it: the framework was applied in a software platform, and we were checking whether the new camera system that we had embedded was seeing the movements of the robots.
For us the question was whether the camera system was detected by the digital twin and whether that camera system actually improved our ability to capture the robots' movements.
You can see in this figure how we're representing and understanding where these different robots are; our ability to predict their location had improved.
In terms of the way we were managing the data, we could see that with a relational database, an SQL-based approach, it was harder to manage the overall data coming into the system, and the ontology-based approach was much more efficient in the way we managed the data.
So that was another example of how the framework could be applied and whether it generated any benefits.
With that, just to conclude with a few points: here we created a design framework particularly focused on how data can evolve over time when you're trying to update a digital twin.
We were looking at data in terms of three aspects, variety, velocity, and volume, across the life cycle, and we wanted to illustrate that the way you manage data can improve by implementing ontologies. We could also show that there are closed loops between the physical assets and the digital assets that we're managing.
Moving forward, we're really interested in understanding further the relationship between data and models, and we want to go into much more detail on the feedback side.
For example, if we do some analysis in the digital twin, we want to see how that analysis can feed back into the physical asset and how we can improve the efficiency and productivity of the physical asset.
That's a critical area of interest: we need to understand how we can use the models and the data better to give that feedback to the asset, and we also need to think about the mechanisms by which that information can be channelled back to influence the behavior of the physical asset.
We'll be continuing that kind of work as part of Digitop, and it's an area that we're quite keen on.
In terms of acknowledgements, we acknowledge the EPSRC Digitop project; we also acknowledge colleagues from Slovenia that we collaborated with on this project, as well as colleagues at City, University of London, and Babcock International as an industrial partner to this project.
So with that, thank you very much, and I'll be happy to answer any questions that you may have.
Siobhan Urquhart: Thank you, John, that was really interesting. If anybody has any questions, please do put them in the chat function or just ask John straight away.
[pause] Dr John Erkoyuncu: I'm trying to open the chat function but it's not opening for me. Okay, I'm not sure if there are any questions there.
Siobhan Urquhart: There aren't any coming up so far, so it doesn't look like we've got any questions for you, sorry. Oh, here we go, Maria has asked something. Are you able to see that, John, or do you want me to read it out?
Dr John Erkoyuncu: If you could read it; it's not coming up for me.
Siobhan Urquhart: So Maria asks, you talked about structuring data in a way that the communication language sits… I think she's just writing some more… okay: you talked about structuring data in a way that a communication language sits between the data and the model. Does this affect response time?
Dr John Erkoyuncu: Okay, yeah, that's a good question. It does affect the response time, and the key thing there is that the communication between the data and the models can take time; that's why we're comparing the ontology-based approach with other approaches like the SQL approach.
Any kind of modelling approach has this processing time, and I think here we're trying to improve the time it takes to do that exchange.
But the structure of the ontology is key for that: if you have an inefficient structure, it will delay the analysis.
Siobhan Urquhart: I hope that answers your question, Maria. And I think Mojang is just typing something, so we'll wait for that.
Dr John Erkoyuncu: Okay, great. Maria, I'm not sure if that answered your question?
Siobhan Urquhart: She says it did, thank you.
Dr John Erkoyuncu: Thanks.
[pause] Siobhan Urquhart: Okay, so Mojang is asking: thanks for your talk, it's interesting; what is your next plan and future for this idea?
Dr John Erkoyuncu: Okay, thank you. We're continuing to work on the Digitop project, and as part of that we're looking at using augmented reality as the means to collect data from people, the operators, for example the people doing maintenance.
So we now want to see how we can collect data from multiple sources.
With augmented reality it could be a person speaking into the HoloLens, and we're orally collecting information; it could also be someone doing assembly or disassembly, and we want to visually recognize the changes they're making so that we can feed those changes into the digital twin. The question is: instead of someone filling in a piece of paper to describe what they've done, we now want to automate that with augmented reality, capture the changes, and feed them into the digital twin automatically.
That's the kind of seamless data collection we're after: we want to feed it into the digital twin in real time, or close to it, and we want the digital twin to become richer in terms of data sources. So we're thinking of moving beyond sensors to incorporating data from lots of different sources, including people.
Siobhan Urquhart: That's great, thank you. I think Mojang is just asking something else.
Dr John Erkoyuncu: Okay.
Siobhan Urquhart: He's saying thank you. That's great, thank you. If there are any more questions, please do put your hand up or type them in here. I'd just like to say thank you to John, and to make you aware of some more webinars that we've got coming up as part of the Digitop project. We have 'Understanding and Exploring Integration and Interaction with Rich Pictures', which should be a really nice one; that's next Tuesday, the 19th of January, with Ella from Loughborough University.
We've also got facial thermography with Adrian, who's doing that on Tuesday the 26th of January, and then our final webinar is on multi-sensory virtual environments on the 2nd of February with Dr Glyn Lawson, also from the University of Nottingham. If you'd like to have a look on our website, you'll be able to see the links there. And then we have one final question from Maria, who asks: how would a digital twin respond to drift in the data or the model?
Dr John Erkoyuncu: Okay, so that's a really good question, and that's actually another piece of work that we've just done; we've just submitted a paper on this. We're looking at using artificial intelligence to learn from past events and different kinds of disruptions.
The thing we're really interested in here is how you characterize that there is a disruption in the digital twin; so for us the first question was how we realize that there is a deviation.
Then we looked at how to actually implement a resilient strategy.
We need to provide a response if there is a disruption, and the interesting thing here is that disruption means the data and the digital twin not being updated as needed.
So for us, the resilience of the digital twin was the critical area of the research.
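As a rough illustration of the deviation question (an assumption on our part; the AI-based approach in the submitted paper is not shown here), the simplest possible check compares what the twin predicts against what is measured and flags a drift once the gap passes a tolerance:

```python
# A deliberately simple drift check, not the paper's AI-based method.
def drift_detected(predicted: float, measured: float,
                   tolerance: float = 0.05) -> bool:
    """Flag a deviation when the relative error between the twin's
    prediction and the measurement exceeds the tolerance."""
    if measured == 0.0:
        return abs(predicted) > tolerance
    return abs(predicted - measured) / abs(measured) > tolerance

# e.g. drift_detected(predicted=0.52, measured=0.40) -> True (30% error)
```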
We've just done the first piece of work around that, and we're going to continue to look at it more. I think it's a critical question; thank you for that, Maria.
Siobhan Urquhart: Okay, thank you, that's really great. I've just posted the link to the website there, and Maria is saying thank you for your answer. I think that's it.
Dr John Erkoyuncu: Great thank you very much.
Siobhan Urquhart: Thank you so much, John, that was brilliant. Thank you to everyone else for attending; I think Adrian and lots of other people are posting thank-yous and showing their appreciation, so that's wonderful. Thank you so much, John, and I hope everyone enjoys the rest of their day.
Multiple persons: Thank you very much, thank you, thank you, thanks a lot, bye, take care everybody.