On 14 December 2023, FAIRCORE4EOSC joined forces with EUDAT for an insightful webinar to shed light on how the EUDAT Collaborative Data Infrastructure and the FAIRCORE4EOSC project are innovating the European research data management landscape. This webinar provides an in-depth look at how the components developed in the FAIRCORE4EOSC project are enhancing the research data services provided by EUDAT to support FAIR research life cycles.
Whether you are a researcher, a data scientist, or a policymaker, this webinar offers valuable insights into the future of research data management in Europe.
Agenda:
00:00 – Introduction and moderation – Zachary Smith, Trust-IT Services
04:38 – EUDAT Overview – Mark van de Sanden, SURF
17:24 – FAIRCORE4EOSC Overview – Tommi Suominen, CSC
31:39 – B2FIND enhancement with FAIRCORE4EOSC components – Heinrich Widmann, DKRZ
49:06 – B2SHARE enhancement with FAIRCORE4EOSC components – Harri Hirvonsalo, CSC
54:52 – Panel and Audience Q&A
Access the website and slides here: http://tinyurl.com/2ay2jyfb
Good morning, everyone, and welcome. We're glad to have you with us this morning. If you'd like to write in the chat and say hi and tell us where you're connecting from, that would be lovely while we give people time to join the webinar. So welcome, everyone, to Enhancing Research Data Management with EUDAT and FAIRCORE4EOSC. My name is Zack Smith, I'm from Trust-IT, and I'm part of the FAIRCORE4EOSC project, but I'm also part of the communications team for EUDAT, so I found myself in a perfect position to present both the EUDAT CDI and FAIRCORE4EOSC to you in this webinar. I'm very excited that we've got some great speakers with us to talk about how the FAIRCORE4EOSC project and EUDAT are interacting and working to make research data management services even better.

Some housekeeping announcements: the slides and the webinar recording will be available sometime next week, so don't worry if you miss something on a slide or feel the need to screenshot things; we will make them available. Please ask your questions in the Zoom Q&A tool. If you're not familiar with Zoom webinars, the Q&A tool is next to the chat. I'm just looking in the chat now and I see the Netherlands, Finland, France, Hungary, Germany, Spain, Austria, so it's nice to see people joining from a lot of different countries; I'm joining from Italy. We will have time at the end to speak to the whole panel of speakers together, and any relevant questions will be asked then; some of the panelists may also respond to your questions directly in the Q&A. The last thing I'd like to say is to make sure to follow both FAIRCORE4EOSC and EUDAT on LinkedIn and X, or X/Twitter, whatever we call it these days, for regular updates.

As you can see, here's the agenda, and we're in the first part. Next I'll hand over to Mark to talk about EUDAT, then Tommi will talk about FAIRCORE4EOSC, then Heinrich will talk about a specific service that is being enhanced by FAIRCORE4EOSC, and Harri will talk about another one, and then we'll get to the Q&A and the panel session. So I'll hand over the floor to Mark now; please turn on your camera and share your screen, Mark. Okay, I will do. I'm getting a message that I can't start
my video because the host has stopped it, so I think you need to allow that so I can open my camera. Okay, then I will start sharing my screen, and hopefully you can see my slides. Yep, we can see them. Okay, good morning everyone. I'm Mark van de Sanden, I'm from SURF, and I'm also the technical coordinator within EUDAT. In these slides I will provide a short overview of EUDAT as an organisation and of the services we are providing, and I will also address some use cases where you can see how these services are being used by communities.

The EUDAT CDI, the EUDAT Collaborative Data Infrastructure, is one of the major e-infrastructures in Europe. Our vision is that we want to have data shared and preserved across borders and disciplines, and this vision is very similar to how EOSC also aims to share data across borders and disciplines. To achieve this, EUDAT provides a set of services, but also a set of components that can be used to build up your own services, and we aim for researchers to be able to use services that support them throughout the research data life cycle, so that they can manage their data in a better way. EUDAT is a member organisation: at the moment 24 organisations are members of EUDAT, spread across 14 different countries, and as you can see the spread is really
across many countries within Europe. In many of our discussions we look at data at different stages and different levels, and we frequently use this diagram to explain the different levels of data. As you can see, we distinguish the private data domain, where a lot of data is still produced and kept: users are very active in producing, analysing, and working on the data, so the data objects change frequently. Then there is the shared data domain, where the data has become more stable and is already better described, and where it is shared within different teams, which can be within your own domain but also across the different collaborations you are working with, for example to produce publications. Moving towards the top we have the published data domain, where the data is stable, well described, and preserved over the long term, so that you really have links to the different kinds of publications. EUDAT focuses mostly on the shared data domain, providing components so that users can easily share and describe data and make it discoverable within their own domain and among their collaborators. For this we have been developing a set
of services and components which can be used by researchers, but also by communities, to build up their infrastructure. This is our suite of components. At the top we have B2FIND, which is our data discovery service: we harvest metadata from many different repositories and make it possible for users to find interesting datasets to work with. One step below that we have the data access and sharing layer, with services such as B2SHARE, which is a data repository service but also a technology with which you can easily set up a data repository, describe your data, and make it shareable within your domain. To the right we have B2DROP, a service for sharing active data; it is a sync-and-share type of service, so you can collaborate with your fellow researchers to share data, and also integrate it with different kinds of platforms so you can share the data across different services. On the data management layer we have B2SAFE, a service in which you can implement different kinds of data management policies; you can integrate it with different data services but also with different storage platforms, so you can create a virtual environment for making data available while implementing data management policies that improve your data practices. We also run B2HANDLE as the handle service to assign persistent identifiers, so at an early stage you can already use PIDs to link to data objects but also to datasets. B2HANDLE is therefore integrated with services such as B2SAFE: if you want PIDs for data objects, we can automate that process within B2SAFE, and if you have a publication in B2SHARE, it can also assign PIDs to the datasets and to the data objects. At the lowest layer we have the B2ACCESS service, our infrastructure proxy that provides access to the different services within EUDAT; it is integrated with the different services to do user and access management for them.
On this slide we give some numbers on how the services are being used and how many communities make use of them. For example, within B2SHARE we already support 24 different communities that make their data available, and we also support them with different community-specific metadata schemas. Within B2HANDLE we provide a large number of prefixes across different repositories, and we have already registered more than 30 million PIDs assigned to data objects stored by the communities. Within B2FIND we make more than 1 million data objects discoverable from more than 100 repositories, hosted not only by EUDAT members but also by the different communities we collaborate with to make their data discoverable. I will not go into all the numbers for the other services; within B2DROP, for example, as you can see, we also offer the service at a premium level, where EUDAT offers the service to users in a pay-per-use format, and we already do this for a number of communities.
To support researchers and research communities across the research data life cycle, we have been looking at how our services fit within the life cycle, and for this we developed this diagram. You can see there are actually two different life cycles: one from a community perspective, where the community manages the data, the generation of the data, and the production of datasets and data products for researchers; and a data life cycle for the individual researchers who want to work with the data, to produce and analyse it, and then, for example, produce publications around the data they have been analysing. You can see that our services fit into both life cycles, from the community perspective but also in how we support individual users: when data is generated it can be managed via policies in B2SAFE and published with B2SHARE; PIDs are assigned to identify the different data objects; and when the data is published it is made discoverable by B2FIND, where the individual researcher can find the data, select it, get it, and store it wherever they need it, at the processing level or within data platforms such as B2DROP, where they collaborate with communities and with other researchers to produce publications and outputs. And when they have an output, they can
preserve it for the long term according to B2SAFE policies and publish the data again in B2SHARE. This is one of the use cases we have been working on with one of our communities, to see how this works within their infrastructure and how they can set up an infrastructure to support their users. In this use case we are really looking at how to link the system, the machine where the data is being generated, and how to offer it to the users so that they can make use of the data, analyse it, and publish datasets based on the analysis they have done. It is a distributed use case: the synchrotron is hosted in France by ESRF, and that is where the data is generated; the data is transferred to supercomputing facilities, where the data is first processed and data products are generated and stored in B2SAFE; PIDs are assigned to the data objects, which are then distributed across different sites according to different policies, since different organisations host the supercomputers involved. This is all done through data management policies implemented in B2SAFE. The users get access to the data and analyse it, and the new output they produce and want to publish goes into B2SHARE; when the data is published in B2SHARE, in an instance hosted at UCL, it is also made discoverable in B2FIND. As you can see, what we try to do with EUDAT is to provide different kinds of components so that you can set up a full infrastructure to manage your data, and to provide those components to users so that they can manage their data through the full research data life cycle, including linking to processing and making the data discoverable. That was a very short introduction to EUDAT; if you have any questions, I'm not sure if they are already in the chat or the Q&A, but we can always address them at the end of the webinar. Thank you so much, Mark. I'd like to ask Tommi
now to present; he is the coordinator of FAIRCORE4EOSC. Right, does it show as it should? I hope so. So, FAIRCORE4EOSC; we'll try to summarize this in about 10 minutes. It's called FAIRcore for EOSC, so I have to say a little bit about EOSC first. The concept of EOSC as such is in flux a bit, with procurement and a little bit of a redefinition of the concept, but we have now been going for a year and a half. Our starting point was the Strategic Research and Innovation Agenda, called the SRIA, created in 2021, which identified gaps in the state of the art and highlighted the web of FAIR data and a minimum viable EOSC by 2027 as the main targets. A lot of these gaps focused on essentially identifying the things that need to be done to reach the minimum viable EOSC by 2027, that is, new core components or services that are still missing from the portfolio in order for EOSC to operate. That was our starting point. We are now in month 19 of a three-year project, so just past the halfway marker. We have 22 partners and about 10 million euros, and we are coordinated at CSC, but Mark, who was just talking, is our technical coordinator. We focus on developing nine new EOSC core components that are specifically tailored to address the gaps that were identified.
Thematically, this particular call addresses four topics: persistent identifiers, metadata and ontologies, interoperability, and research software. The first of the nine core components we are developing is the RDGraph, the Research Discovery Graph. This is essentially an extension of what OpenAIRE is currently doing: they have a graph of interconnected research objects, not necessarily identified by persistent identifiers, and they started out with very messy datasets, so they have had to do a lot of data cleaning and deduplication, and a lot of work to work with what they have. But it is a very large graph of related research objects: software, publications, datasets, everything possible. They are working on creating subgraphs, for example for specific communities or countries, whatever the limiting criteria are. They are also looking at translating natural language into graph queries, so that you could express a query like "I would like to see publications of the ten most cited researchers in the field of, I don't know, boreal forests", and the system should be able to translate that into a graph query that then returns results about the research objects matching the query. They are working on quite a few different sub-technologies to enrich this graph, and the idea is that it would be integrated into EOSC to allow discovery of interesting research objects within EOSC. Then we have the PID Graph from DataCite, which
has a bit of a different starting point: here all the objects are inherently PID'd, and it is always PID-to-PID connections that we are discussing in the graph. It is likewise a knowledge graph in a similar manner, but it provides an API for accessing the graph and performing graph queries against it, and it provides a data dump of its data, as does the RDGraph. They are also looking at usage statistics, for example how many times a dataset has been accessed, opened, downloaded, or cited, so different kinds of metrics related to the utilization of the objects in the graph. Here the quality of the data is slightly higher than with the RDGraph, but I think the comprehensiveness is a bit less, because they don't try to deduplicate all the non-PID'd data.
Then we are building something called the Metadata Schema and Crosswalk Registry. This is a service where you can deposit the metadata schemas of your services, such as those of the EUDAT services. The idea is that you can also deposit the schemas of other services, and once you have the schemas of two different services you can create a crosswalk, where you map the fields of one metadata schema to the fields of the other. That mapping, which we call the crosswalk, can be stored in the service: you can publish it, get a PID for it, and version it, and it becomes a linked-data object that everyone can see, with provenance information. It can then be operationalized, which means you can use the operationalized crosswalk to actually compute the conversion from the metadata of one service to that of another, and that allows services to exchange metadata with each other. For each domain we try to identify a so-called core standard or core schema that seems expressive enough, and we essentially try to map all the services to that core schema; all the services then only need to map to and from the core schema, which allows every service mapped to it to exchange information with the others within that domain.
A service that strongly supports the Metadata Schema and Crosswalk Registry is the Data Type Registry. This is a place where, for datasets or metadata sets, you can define the data types that the attributes of a dataset or metadata set may contain. This can also be used in defining PID kernel information profiles. It gives you a PID and precise definitions of the types, with labels in many languages, so that you can refer to them in a linked-data manner. We can also use this information in the Metadata Schema and Crosswalk Registry to say that a particular string-looking field is in fact a coordinate, and that the coordinate has a specific format, which then allows you to understand and process the information much better.

Then we are developing the PID Meta Resolver. First of all, for the PID types that are incorporated, you simply don't have to care where to resolve the PID: you just give the PID to the system and it will resolve it, whether it is a handle, a DOI, or a URN, and it has the capacity to resolve you directly to the source. But at least as valuable is the possibility not to resolve but to get the metadata of the object: this is a system through which you can get metadata on all PID'd objects of the registered PID systems, which allows you to populate your own system with information about what is behind a PID. That is where the kernel information profiles become relevant: we are looking at the possibility of making the PID kernel information profiles of the different PID systems a bit more compatible with each other, so that you can get more metadata on a PID'd object, which then allows you to populate your system with information about the object it points to without actually resolving it, because in the case of journal publications a PID will point to a web page with a PDF, which is not very good for machine actionability.
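The routing half of what a meta resolver does can be sketched as follows. The type detection here is deliberately naive, used only for illustration (a real meta resolver consults prefix registries rather than regexes), and only the well-known public resolver endpoints appear:

```python
import re

def classify_pid(pid: str) -> str:
    """Very rough PID-type detection; real systems look up prefix registries."""
    if re.match(r"^10\.\d{4,9}/", pid):
        return "doi"
    if pid.startswith("urn:"):
        return "urn"
    if re.match(r"^[0-9A-Za-z.]+/", pid):
        return "handle"
    return "unknown"

def resolver_url(pid: str) -> str:
    """Route a PID to a resolver that can dereference it."""
    kind = classify_pid(pid)
    if kind == "doi":
        return "https://doi.org/" + pid
    if kind == "handle":
        return "https://hdl.handle.net/" + pid
    if kind == "urn":
        return "https://nbn-resolving.org/" + pid  # covers urn:nbn at least
    raise ValueError("cannot route PID: " + pid)
```

The metadata half Tommi emphasizes works via content negotiation on the same endpoints: for instance, requesting a DOI from doi.org with `Accept: application/vnd.datacite.datacite+json` returns DataCite metadata instead of a redirect to a landing page.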
Then we are working on the EOSC PID Policy Compliance Assessment Toolkit. EOSC has developed a so-called PID policy on the recommended use and good practice of PIDs, and the idea of this tool is to be able to evaluate services with regard to their compliance with that policy. Then we are introducing to Europe a new identifier system developed by the ARDC, the Australian Research Data Commons: RAiD, the Research Activity Identifier service. It is essentially an identifier for research projects. Besides being an actual PID, which has now been chosen to be a DOI, it additionally has a very rich metadata envelope that points to project-related participants, services, datasets, publications, people working on the project, and the grant agreements that have funded it. Since these are longer-term activities, it works as a capsule
for information related to that activity, and, by the way, also a prime source of information for the graphs. Then we have two Software Heritage related initiatives. Software Heritage is an entity affiliated with UNESCO that sees source code as part of humanity's cultural heritage, and it tries to archive essentially all the source code in the world: it harvests many different source code repositories, and we now focus on improving its coverage of research software. We have developed connectors and APIs to connect to a lot of research source code repositories, so that it can harvest them and their metadata and assign Software Heritage intrinsic identifiers, SWHIDs, to them, which also gives the possibility to cite source code using the SWHIDs. Additionally, for this whole archive of the world's source code, we are creating a mirror in Greece, to mitigate the risk of information loss or possible service breaks; that is a parallel effort. And these are the nine core components we are developing in the project.
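Unlike DOIs or handles, SWHIDs are intrinsic identifiers: the identifier of a file's content can be recomputed from the bytes alone, because content SWHIDs reuse Git's blob hashing. A minimal sketch for the content case (directory, revision, and snapshot SWHIDs apply further hashing on top of this):

```python
import hashlib

def swhid_content(data: bytes) -> str:
    """Compute the SWHID of a file's content.

    Content SWHIDs use Git's blob hash: SHA-1 over the header
    b"blob <length>\\0" followed by the raw bytes.
    """
    header = b"blob %d\0" % len(data)
    return "swh:1:cnt:" + hashlib.sha1(header + data).hexdigest()
```

An empty file, for instance, always yields `swh:1:cnt:e69de29bb2d1d6434b8b29ae775ad8c2e48c5391`, Git's well-known empty-blob hash, which is why the archive can deduplicate and cite content regardless of where it was harvested.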
What am I doing with time? I think I should finish now. So I will go through quickly and just mention that we have case studies: CLARIN for linguistic resources, DKRZ for climate research, FIZ Karlsruhe for mathematics, then CSC doing the integration of national research information systems, and finally an EUDAT-focused case study, which will probably be addressed in more detail by the following presenters. You can see here which of the components we are developing these communities are planning to adopt, and the idea is that the adoption of these services would result in long-lasting benefits for these communities, that they are worthwhile adopting not just to show that the services are good and valuable but also for the communities' own sake. Finally, we are about halfway through the project. We have just released the beta versions of our components; in four months' time they will have been initially tested with the case studies, and we will launch an updated version of the components with further development; and then we still have one year, until March 2025, to release the final versions of these components, which should then have their full functionality. And with that I have overstepped my time; I apologize. I'm done.
Thank you very much, Tommi. Don't worry, it was a very nice presentation, and you explained the different components very clearly even though they're very technical. All right, so now we get to hear about some of the interaction between EUDAT and FAIRCORE4EOSC, so I'll pass the floor over to Heinrich. We can now see your screen. While Heinrich is sharing his screen, I'll just mention that we will answer the questions at the end of all the presentations, so we're not ignoring them, but we'll get to them. You may need to put it in presenter mode, sorry, Heinrich. Okay, stop sharing and go ahead to presentation mode; that looks
better now. Yes, perfect, thanks. Okay, hello everybody. My name is Heinrich Widmann, from the German Climate Computing Centre, DKRZ, in Hamburg, and I want to talk about the B2FIND enhancements that come with integrating and utilizing some of the FAIRCORE4EOSC components Tommi just presented. My thanks here go to the B2FIND team, especially to my colleagues who also contributed to these slides. Okay, so what is B2FIND? I'll explain it via the B2FIND ingestion workflow, which has three major steps. First, B2FIND harvests from a lot of repositories and data sources, not only from EUDAT data centres but also from a wide field of interdisciplinary and cross-domain communities. We use several protocols for harvesting: preferably OAI-PMH, the protocol for metadata harvesting, but B2FIND also supports other protocols like JSON APIs, or CSW, the Catalogue Service for the Web, to harvest from GeoNetwork instances. We have long had on our agenda that we also want to collect metadata via SPARQL, to collect triples for linked open data, but we had no resources to implement this; hopefully we will now do something here in the context of the PID Graph.
The next big step is to map all the harvested metadata from the community-specific metadata schemas onto the common EUDAT core metadata schema we use in B2FIND, which, by the way, is also used by B2SHARE. This is a lot of work: it is technical conversion work, but you also need intensive communication and negotiation with newly onboarded communities, because they all have different demands and requirements. As listed at the bottom, we support a lot of standardized schemas, and most of them also have community-specific fields or flavours. As I will show later, we want to use the MSCR to improve and simplify our operational mapping workflow here. Finally, the mapped and homogenized metadata records are uploaded and indexed in the B2FIND discovery portal, and this then allows faceted search over the wide field of data sources coming from different research domains. We have a lot of facets and filter options: for example, you can select by spatial and temporal coverage via a world map, and so on, and we also have the research discipline facet, because B2FIND truly follows a cross-domain, interdisciplinary approach, and there you can filter by domain.
It is important as well, in EUDAT as a whole, that we use persistent identifiers to clearly identify and reference the data objects and the metadata entries in the records, and here too the PID Graph will come into play. So let me go forward to the next slide. Here is a summary of how B2FIND demonstrates some of the components Tommi presented. On the left-hand side you can see that we use the graph components, the Research Discovery Graph and the PID Graph: essentially we integrated the B2FIND metadata into the Research Discovery Graph, and I will show later how this worked out. In a next step, which we have not really started yet, we want to go the other way and use the graph information from the PID Graph to enhance B2FIND records with external references and links, to improve the interlinking. On the other hand, on the right-hand side, as Tommi already explained a little, we want to use the Metadata Schema and Crosswalk Registry to support and simplify our operational ingestion and mapping workflow in B2FIND. I will show this later in a bit more detail: by uploading not only the community metadata schemas as the source but also the target metadata schema, which here is the EUDAT core metadata schema, together with the first initial crosswalks, we can then use the Metadata Schema and Crosswalk Registry for updating, changing, and versioning them in our operational workflow. Okay, good; I will illustrate all this with the slides that follow. In B2FIND, as said, we now harvest from about 100 data sources; these are repositories, or themselves metadata
aggregators, so the harvesting is hierarchical. One example of this hierarchy is CESSDA, which in turn harvests from other repositories, shown here with an example from the social science data network. In particular, as Mark already mentioned, we harvest all the metadata from B2SHARE, the data repository service of EUDAT, so there is a strong collaboration and a strong connection with the B2SHARE service. The harvesting and mapping are implemented in a sustainable way, so that we can work incrementally and stay up to date and synchronized with the data sources we harvest. In the end this gives users a really elaborate search capability over a wide spread of data resources coming from cross-domain fields. Then, on the next level, and this is the involvement in FAIRCORE4EOSC, OpenAIRE harvests from B2FIND, while OpenAIRE itself harvests from hundreds of data sources. In the FAIRCORE4EOSC project we demonstrated how this works with B2FIND: after OpenAIRE cleans up and deduplicates all the harvested metadata records from the data sources, they end up in the EOSC data catalogue and can then be searched via the Research Discovery Graph.
So now I will address some things about provenance. It is very important that we provide provenance metadata on the datasets. In B2FIND it looks like this: the datasets are, of course, grouped under the umbrella of the repositories and communities; here again is the example of a CESSDA dataset. B2FIND not only offers the link to the source itself but also a link to the originally harvested metadata record, which in this case is the OAI-PMH GetRecord request. We try to provide as much provenance metadata as possible: the creator of the dataset, the main contact, the publication information, publisher and publication year, and of course the licences and rights, because how open the dataset is and whether you may use the data is important. Where available we also provide, for example, information about the instrument, which I will show later. Here, very briefly, I show how we expose the metadata in XML format to OpenAIRE, so that they have all the information about the hierarchical provenance of the repositories. In this example EUDAT first harvests from CESSDA, but CESSDA in turn harvests from, for example, the social science data network community or repository. What is also always important is that you have good identifiers, and here we prefer that the repositories are identified by, or registered in, re3data, the central registry for repositories, as shown here. Moving ahead: in the next step OpenAIRE harvests this, and from their perspective they also display a two-level provenance chain. It shows that the dataset was originally hosted at CESSDA, but if you click on CESSDA you see that B2FIND is also listed as a data source, because OpenAIRE harvests directly from
Okay, I think I have to go ahead so as not to run out of time. As I said, it is very important that you have good references and good persistent identifiers for all the objects and entities. For whole data sets the best practice is that a DOI, a digital object identifier, is assigned, but a stable persistent URL or another persistent identifier is also fine. And as these three examples show, the entities in the facets and fields should as far as possible also be assigned identifiers: persons such as creators or funders ideally via ORCID, and, as already shown for the repository chain, we prefer that there is an identifier, that is, that all repositories are registered in re3data.
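To illustrate the identifier best practices just described, here is a small, hypothetical Python sketch of sanity checks a catalogue might run on DOIs and ORCID iDs before trusting them. The DOI pattern is a rough heuristic rather than the full specification; the ORCID check digit calculation follows the published ISO 7064 mod 11-2 scheme. This is not part of B2FIND itself.

```python
# Sketch of simple persistent-identifier sanity checks.
import re

# Rough heuristic for a DOI, not the complete grammar.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(s):
    """Heuristic check that a string resembles a DOI."""
    return bool(DOI_RE.match(s))

def valid_orcid(s):
    """Check ORCID iD format and its ISO 7064 mod 11-2 check digit."""
    if not re.fullmatch(r"\d{4}-\d{4}-\d{4}-\d{3}[\dX]", s):
        return False
    digits = s.replace("-", "")
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    check = (12 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    return digits[-1] == expected

print(looks_like_doi("10.5281/zenodo.1234567"))  # True
print(valid_orcid("0000-0002-1825-0097"))        # True
```

A repository identifier would additionally be looked up against a registry such as re3data rather than validated purely syntactically.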
And I show another example: we also offer instruments in B2FIND, as far as they are provided, and these can now be registered via a new service, B2INST. Here is a data set whose provenance metadata section also contains this instrument field; in this case a lot of instruments are listed, and if you click one of these link icons, the landing page of the B2INST service, a quite new EUDAT service, opens, and there you get more information and metadata about the instrument, in this case a telescope. Please wrap up, Heinrich, because we need to move on. Okay, good, I will go ahead and skip this
more or less. But as I said, in a next step we want to use graph information from the PID Graph and integrate it into B2FIND; we have not really started with this yet. And as I said, we will then use the Metadata Schema and Crosswalk Registry in the operational ingestion and mapping workflow of B2FIND, to better handle the crosswalks from the community metadata schemas and the mapping to the EUDAT Core metadata schema. Okay, sorry to be over time; you can find some links with further information here.
Okay, thank you, Heinrich. I'll pass the floor straight away to Harri to talk about B2SHARE. Let's see if I manage to share. Yep, perfect, we can see it. Sorry, I have to stop and switch my screens. Well, it is what it is; I'll try to present it like this. Hi, thanks to the previous presenters. I'm Harri from CSC, IT Center for Science, and I'm here to discuss the B2SHARE enhancements with FAIRCORE4EOSC components.
A quick reminder of the whole EUDAT service suite: we are dealing here with the B2SHARE service, on the data access and sharing level. In short, B2SHARE is a data set repository; it allows storing, publishing and sharing research data in a FAIR way. For the context of this presentation it is important to note that B2SHARE supports describing data sets using the EUDAT extended metadata schema, which is common for all B2SHARE records, and that in addition metadata records can be described using community-specific metadata elements on top of this EUDAT extended metadata schema. In FAIRCORE4EOSC our goal is to make it easier to work on metadata schemas, and on the management of metadata schemas in collaboration with the communities, and in our view this goal is best achieved by utilizing the Metadata Schema and Crosswalk Registry (MSCR) and the Data Type Registry (DTR) in the management of these B2SHARE communities.
This slide summarizes, at a very high level, the interaction between the services. We basically have the EUDAT extended metadata schema described in the DTR, and a community metadata schema in the DTR as well; by utilizing a crosswalk in the MSCR we map the community metadata schema elements to the EUDAT extended schema elements, which become the common metadata schema elements on the right side of the picture, while the elements that do not map to the EUDAT extended schema become candidates for community-specific metadata elements.
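The split just described, common elements versus community-specific candidates, can be sketched roughly like this in Python. The element names and the crosswalk are invented for illustration and do not reflect the actual B2SHARE schemas.

```python
# Sketch of partitioning a community schema's elements into those that
# map onto a common base schema and leftover community-specific ones.
def partition_elements(community_elements, crosswalk_to_common):
    """Return (mapped-to-common, community-specific) element lists."""
    mapped = [e for e in community_elements if e in crosswalk_to_common]
    specific = [e for e in community_elements if e not in crosswalk_to_common]
    return mapped, specific

# Hypothetical community schema and crosswalk:
community = ["title", "creator", "sampling_method", "station_id"]
crosswalk = {"title": "title", "creator": "creator"}

print(partition_elements(community, crosswalk))
```

In the scenario in the talk, the second list would be reviewed with the community and become its community-specific metadata elements.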
Together, this forms a first draft of a community definition for B2SHARE. There is not much more to say; I could discuss it further, but let's save time and move to the next slide. In short, the use cases we see in this project for B2SHARE are: first, to enhance community creation in B2SHARE by utilizing the MSCR and DTR; second, to enhance community updates in B2SHARE via the MSCR and DTR, where we basically get notifications and data about community schema updates through these services; and third, to test enabling metadata export from a B2SHARE record according to a community-specific metadata schema.
The benefit of the first two use cases is clearly easier management of communities, both for B2SHARE service administrators and for the community personnel who deal with metadata management in the B2SHARE context. We also feel that this streamlining will enable more communities on a B2SHARE instance, which in turn enables more communities to utilize more than just the generic set of metadata elements and to actually use community-specific metadata elements, which will hopefully bring value to the communities. In order for the panel discussion to start, I will just end here.
Discussion I I will just end here uh okay uh thank you Hardy thank you for keeping it brief as well um so if all the panelists could turn their their cameras on um first of all let’s just try and answer some of the questions that have been putting the Q&A in in the
Chat um so I think the ones in the Q&A have been answered directly uh but the ones in the chat we were unable to so someone asked have you made any assessment on the fairness of your data once you make them discoverable using the current tools like fairever or if-
I can say something about the F-UJI tool. We are familiar with F-UJI from previous projects we have taken part in, and yes, we see it as a useful tool for assessing FAIRness, but so far within this project it has been left aside. There are definitely plans to evaluate it and see how good a job F-UJI does for us.
Was another question in the chat which was why the sparq prot I just I can just also comment further on this um for example if you look at the compli assessment tool kit uh which is also a fair cool foros component there you can Implement different kind of compliance
Criteria uh at the moment we are developing this for the eosp bid uh policy but you can also Implement Fair policies to develop criteria on assessing Fair policies that’s great uh and so quickly the the other question that was in the chat was why The Spar ql protocols for
Yes, I can comment on this. The short answer is that in B2FIND we did not have enough resources. We have run some tests to collect via SPARQL as well, but we focus, as OpenAIRE also does, on OAI-PMH harvesting. As I said, though, we now have another possibility that we are starting with: we get the graph information from the PID Graph via SPARQL and collect all this information that way. So we actually had this on the agenda for B2FIND, but we did not have enough resources; for sure we should go in this direction and also collect information via SPARQL.
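To show what SPARQL-based collection could look like, here is a small Python sketch that defines a query and parses a response in the standard SPARQL JSON results format. The query, the property IRI, and the sample response are all invented; this is not the actual PID Graph interface, only the results format itself is standard.

```python
# Sketch of the client side of a SPARQL-based harvester.
import json

# Hypothetical query for dataset titles.
QUERY = """
SELECT ?dataset ?title WHERE {
  ?dataset <http://purl.org/dc/terms/title> ?title .
} LIMIT 10
"""

def titles_from_results(payload):
    """Extract (dataset, title) pairs from a SPARQL JSON results document."""
    data = json.loads(payload)
    return [(b["dataset"]["value"], b["title"]["value"])
            for b in data["results"]["bindings"]]

# A minimal response in the SPARQL 1.1 JSON results format:
sample = json.dumps({
    "head": {"vars": ["dataset", "title"]},
    "results": {"bindings": [
        {"dataset": {"type": "uri", "value": "https://example.org/ds/1"},
         "title": {"type": "literal", "value": "Ocean temperatures"}},
    ]},
})

print(titles_from_results(sample))
```

In practice the query would be POSTed to a SPARQL endpoint with `Accept: application/sparql-results+json`, and the bindings fed into the normal ingestion pipeline.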
Question um if people want to remain on for five minutes uh I had a few here but um I’d like to ask the question how do eudat both eudat and vehicle fre incorporate use a feedback from the research Community to continuously adapt and improve their services um I can answer this from from
You that point of view uh we have regular interactions also with users in different for uh so that we collect this kind of information we do also assessment of our services within our community and also within our uh uh members uh to collect as much use of
Feedback uh how we want to evolve our services we have also a user board uh with number of members which also provideed feedback to us thank you in the case of fa call fre ask momy are you able to answer this sorry I missed the question so the
Question was how do you that in Fair cork incorporate user feedback from the research Community to continuously adapt and improve their services uh well we have in the proposal stage engaged specific communities uh who are in the project and have the resources to really kind of uh adopt the
Components we have the um case studies which I mentioned which adopt several of the components and then we have also so-called demonstrators where they really just kind of try out a single component in a specific context so those are the um user engagements that we can actually directly resource using the project
Resources and then we have um open Consulting ations of different kinds um the a little bit depends on the component that you’re discussing because there are all different we have nine unique individual different components and their engagement approaches vary but as I go around and talk about these things we constantly get approach
"Hey, I would really like to try this one out, and I would like to try that one out, and I would actually like to adopt this; can I do it yesterday?" So we have, for example, written letters of intent or MoUs with specific organizations who want to adopt a component right away and see immediate added value. With such organizations we engage more intensively and support their adoption of the component as far as our resources allow, though they of course do the adoption with their own resources. We will also have, in May, a physical session in Vienna, where we invite potentially interested users to come and try the components and get concrete support on how to adopt them. And in addition, I sound like a telephone marketer now, but not only that:
the sister project FAIR-IMPACT is running an open call for supporting the adoption of FAIR practices; the one we are looking at will close in September next year, and it could actually even pay for a small amount of working time for the adoption of these components, should the applicant person or organization be lucky. So that supports the engagement even further: there is the interest side, and then the capacity to actually spend time working on the components.
So we have tried to keep the palette of engagement large, while our focus is really on developing the core components. Okay, I'm just going to ask another question; if people need to go, they can catch it when we publish the recording. For these enhanced EUDAT services, which I know are being enhanced at the moment but will at some point be published: will there be educational or training programmes to help researchers and institutions use these services better, and to explain the benefits to them? Is this foreseen? I can comment, if I'm not mistaken. Yes, and that is also done through the projects, because there we have the actual engagement with the communities. As part of our projects we are also developing documentation and training material, so that we are able to train and provide this kind of information. There are also various workshops and conferences where we interact more with the communities; for example, we have the EUDAT conference, where we provide more detailed information on what the services are, but also on how they can be used and how communities can take up those services. So there are different ways, and we try to be very active within our communities to promote the services, and also the capabilities which are being developed within FAIRCORE4EOSC or being extended on our services. Maybe I can add to that: we had the EUDAT summer school this summer, and I think it was very successful in bringing the EUDAT services to the researcher communities.
So there is the summer school one year and the conference the next year; they happen in alternating years, so each takes place every two years. That is another way to spread knowledge of EUDAT and the EUDAT services. Next year, in 2024, we have the EUDAT conference, and in 2025 the summer school is planned again. Great. All right, I think I'm going to close the webinar here. I thank everyone who is still here for attending and taking part.
I really want to thank my speakers for their contributions and the insights they gave us. Make sure to follow FAIRCORE4EOSC on social media for updates on how the project is coming along, and make sure to follow EUDAT as well; you can already start adopting EUDAT services in your research. Thank you very much and have a good day. Bye, everyone. Thank you, bye-bye.