2nd Health Economics Conference – The Use of Health Data, Platforms and Digital Technologies for Innovation | Round Table
The 2nd Health Economics Conference was held on June 19–20, 2024.
A look back at this two-day conference on cutting-edge topics!
Many thanks to all speakers and chairs.
💡 Find all the details on this event: https://www.tse-fr.eu/conferences/202…
☝️https://www.tse-fr.eu/health
🔔 Sign up for news from TSE: https://www.tse-fr.eu/tse-news-sign-form
🙌Thank you to all our partners for their support of research.
Photo credits: @Unsplash @AdobeStock
Moderator: Okay, we are going to start on time, which is wonderful. We have a very exciting round table ahead; I look forward to it and expect to learn a lot from it personally. The round table is about AI and digital health, covering both innovation and medical practice. We have three speakers, each with ten minutes. In order of appearance, we will start with Claire Biot, Vice President for the healthcare industry at Dassault Systèmes. She studied engineering in France, completed a master's in the US and then a PhD in immunology, and is a Young Leader of the French-American Foundation; she will talk about her experience, in particular with digital twins, but not only. Then we have Edwin Morley-Fletcher, a former professor of public policy at La Sapienza and president of Lynkeus, which was founded some 23 years ago and, among many other activities, has worked on a lot of EU-funded projects, public-private partnerships and the like; he will tell us about his experience as well. I am not going to introduce Ariel, because I already did that this morning. Without further ado, we will start with Claire.

Claire Biot: Thank you so much; I am very happy and honored to be here with you today. One quick additional comment: I used to work at the French Ministry of Health on pricing and reimbursement, which I know you heard about yesterday, so feel free to call on that experience as well. Today it is my great pleasure to talk to you about virtual twins, and you are going to see how this connects with health data. But let me start by telling you what Dassault Systèmes is, because here in France, and especially in Toulouse, when you hear "Dassault" you think about the Rafale fighter jets. We were in fact born as a spin-off from Dassault Aviation 45 years ago, when some engineers built a 3D model of the wing of an aircraft, and Marcel Dassault understood the value of that 3D model well beyond Dassault Aviation, and even beyond aerospace. Using virtual twins, we have dramatically changed the way aircraft and cars are designed: today, 95% of crash tests for cars, for example, are done by crashing a virtual twin of the car against the virtual twin of a wall. You still use real cars at the end, but it is much faster, consumes fewer raw materials, is more sustainable, and lets you explore a much broader design space.

We want to do the same for healthcare. Why do we care? Because we think it is time to act now. Worldwide, only two thirds of the population has access to essential healthcare, so there is a major need. And even in developed countries like France or the US, in fact most of the countries we have heard about at this symposium, healthy life expectancy is not progressing much, despite healthcare expenses growing twice as fast as GDP. A lot of money is poured into the system, and yet the system does not get better, either from a patient-health perspective or from the perspective of healthcare professionals, if you think about their working conditions. At Dassault Systèmes we want to change this using virtual-twin technologies. Why do we think they can help? I will show you that they can help along two dimensions: accelerating innovation and improving medical decisions. Let me first focus on innovation, which is the field we serve today: healthcare represents more than 20% of Dassault Systèmes' revenue, 1.2 billion euros, so we are already a very large player in the field, serving biopharma and medtech companies. We leverage data by projecting it into models, what we call virtual twins: a shared representation of the product, a drug or a device, that helps the different
disciplines working on these drugs and devices collaborate better, and lets them ask "what if" questions so the drugs or devices can be improved. I am going to share three examples of what we do today, and then I will move to medical practice, which is more about what we want to do tomorrow.

The first example was briefly introduced by Ariel this morning: external control arms, or synthetic control arms. Within Dassault Systèmes we have a brand called Medidata, which is the leader in the digitalization of clinical trials; every COVID vaccine but one was developed using our software to collect data from the hospital sites involved in the trial, and also data collected from patients on their phones or sensors. We can use this data, high-quality, regulatory-grade data coming from clinical trials, for secondary usage. One such usage is to build an external control arm. What is this? If you are going to test a drug, you have to compare it against the standard of care. The standard of care itself had to go through a clinical trial in the past, and as the leader in the field it is very likely that we hold clinical data from those previous trials. The FDA, and to some extent the EMA, encourage the use of synthetic patients, meaning data from patients who went through those past trials for the standard of care, as a way to accelerate innovation where there is a real need. It is not for all indications, but it is very likely to work, and to be approved by the FDA, in rare and severe diseases for which the standard of care is not satisfactory: pancreatic cancer, leukemia and other blood cancers, and many others. Because we have the ability to use high-quality, regulatory-grade health data, we can help sponsors reduce the duration of clinical trials, since fewer patients will be required in the trial.

That was the first example. The second: fine, you can build a synthetic control arm, but what about the treatment arm? We are not there yet with drugs, but we are moving forward on the medical-device side, with an example from a partnership we have had with the FDA. It is focused on the heart, because we have a very sophisticated virtual twin of the heart, very accurate in its anatomy and in its electromechanical and fluidic behavior. The question we asked together with the FDA was, literally: can we do a crash test of a device in a virtual twin instead of going to real patients? Using generative AI, we built a cohort of virtual patients who all have a defect in their mitral-valve function. With aging, many of us will develop defects in the mitral valve, one of the four valves of the heart: it does not close well, so blood regurgitates and stagnates, and it can cause strokes because it coagulates, so it has to be fixed. It used to require open-heart surgery; now it is minimally invasive surgery with repair devices. Two of them are on the market and another twenty are coming; it is a booming field. And the question is: is the repair or replacement device going to work in your heart? We tested this with the FDA, and in our virtual twins, in our in-silico setting, we were able to reproduce results obtained by the first medtech provider on the market with its mitral clip. We are about to release a paper on this, which I would be happy to share once it is published.

The third example I will cover more quickly, but since AI, and especially generative AI, is top of mind for everyone, I also want to stress that what we do with our customers is to generate libraries of molecules, again using generative AI, to help identify the most promising drug candidates.
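As an editorial aside, the screen-and-refine idea behind this kind of virtual screening can be sketched in a few lines: score a large virtual library with a cheap surrogate model, send only the top-ranked candidates to a (here simulated) bench, and re-fit the surrogate on the bench results. Everything below, the feature count, the linear surrogate, the LMS update, is invented for illustration; real pipelines use actual generative chemistry models, not random vectors.

```python
import random

# Toy sketch: screen a virtual library with a cheap surrogate, test only the
# top candidates at a simulated "bench", and re-inject the results.
rng = random.Random(7)
N = 5  # invented number of features describing each candidate molecule

def bench_assay(x):
    # Stand-in for the real wet-lab measurement of a candidate's affinity.
    hidden = [0.9, -0.4, 0.1, 0.7, -0.2]
    return sum(h * xi for h, xi in zip(hidden, x))

library = [[rng.uniform(-1, 1) for _ in range(N)] for _ in range(50_000)]
weights = [0.0] * N  # the surrogate model starts out uninformed

def surrogate(x):
    return sum(w * xi for w, xi in zip(weights, x))

# Round 1: no model yet, so send a random batch of 200 to the bench ...
batch = rng.sample(library, 200)
for _ in range(5):  # ... and fit the surrogate to the results (plain LMS)
    for x in batch:
        err = bench_assay(x) - surrogate(x)
        weights = [w + 0.05 * err * xi for w, xi in zip(weights, x)]

# Round 2: screen the whole library in silico, keep only the top 200.
shortlist = sorted(library, key=surrogate, reverse=True)[:200]
mean_all = sum(map(bench_assay, library)) / len(library)
mean_short = sum(map(bench_assay, shortlist)) / len(shortlist)
print(round(mean_all, 2), round(mean_short, 2))
```

The point of the loop is the enrichment: the surrogate-selected shortlist has far higher true affinity than the library average, which is the "fail fast and fail cheap" logic in miniature.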
For unmet medical needs, what is especially interesting is that you can screen millions of potential compounds and narrow them down to the one or two hundred that will actually be tested at the bench. Collecting results from the bench, you can re-inject them into your AI model so that it becomes even more efficient at proposing the next wave of compounds: the idea is that you can fail fast and fail cheap.

If I may call out one point to launch the panel: the next frontier, I think, is to get more liquidity in data, because that is obviously a challenge today. For this we think we need a business model, that is, the conditions under which stakeholders will be willing to share their data; standardization of data, so that you can compare across different hospitals, for example; and trust, which for us means that data have to be hosted on a sovereign cloud. I am happy to talk more about that.

Let me now quickly move to the second domain, the one we are developing more for tomorrow: using virtual twins of the patient to improve medical practice. What if your doctor could see a virtual twin of you, or of me, before deciding on a treatment? To take my heart example: what type of repair device, what size, and where should it be placed in the heart so that the outcome is best for the patient? We are working on this today, and I will briefly mention two projects. The first is called MEDITWIN, a partnership with Inria, the French computer science research institute, and seven French IHUs, university hospital institutes. The idea is to build seven use cases with virtual twins in oncology, neurology and cardiology, positioning them as ways to improve either diagnosis, for example in Alzheimer's disease, or medical decisions, and to really industrialize and scale these virtual twins all the way up to reimbursement. For this we will have to go through regulatory approval, which means software as a medical device, which Ariel brilliantly described this morning, and demonstration of medical or medico-economic impact; we can come back to that in the discussion. The second project leverages data from Medidata: we run a lot of clinical trials for CAR-T cell therapies, for example, and using AI on those data we are now able to predict a severe adverse event called cytokine release syndrome. Today we use this in a clinical-trial setting; if we want to put it in the hands of practicing physicians, it would be something used for medical decisions, so here again it has to be software as a medical device and has to find its business model. With that, I am ready to leave the floor to the next speaker.

Moderator: Thank you so much, Claire; that was very interesting, and thanks for keeping to the time. Edwin, over to you.

Edwin Morley-Fletcher: Thank you very much for having invited me here; it is very exciting, and I thank Claire in particular for having mentioned virtual twins, which is one of the points I want to touch on with you. Since Jean was kind enough to mention that my company has always worked on European projects, I will refer to something happening right now: a procurement tender from the European Commission for a platform to foster an ecosystem on digital twins, a platform for advanced virtual human twin models, whose submission period closed just ten days ago. There is a riddle about this, because it is often quite complicated to understand what the European Commission wants, and the Commission is a major investor in this area, so it is important to clarify. In this specific case there has been a lot of talk everywhere about virtual human twins, and the European Commission has been
talking for some time now about a European Health Union, and about the European Health Data Space, which should be the basis of that union and the way of organizing what Claire was advocating: the possibility of exchanging data, and so on. In particular, there have been discussions and recommendations for some years on supporting the member states, because health is still a national prerogative, but the Commission has been intervening on the technological side, supporting digital advances and creating common infrastructures at the European level. Regarding virtual human twins, there have been a variety of initiatives. First, almost two years ago, came what has become the EDITH coordination and support action; EDITH stands for an Ecosystem for Digital Twins in Healthcare. A coordination and support action means the Commission is asking a community for support in understanding the direction it can take and how to invest in future calls for developing the sector. I am one of the partners, and we will deliver the roadmap by the end of the year. Besides this, at the end of last year, in December 2023, the European Commission launched a virtual human twin initiative with a Manifesto which has been adhered to by a number of partners, both minor and major: Lynkeus, which is a minnow, and Dassault Systèmes, which is a huge one, are both signatories, and the Manifesto has by now been supported by more than a hundred entities. And then came this procurement.

The riddle is how to interpret the procurement. If you look at it, some elements are quite obvious: they were already indicated in the work done by EDITH, which was tasked with preparing a roadmap but also with preparing a repository of resources and designing a possible simulation platform, and most of what was proposed as a design in the first deliverable has been incorporated into the very extensive specifications, 107 pages, of this procurement. But something was different. While we had spoken of a way of having all possible pseudonymized data getting into the game, the interesting thing is that now, apart from the users' identification data, which is obvious, only anonymous, fully de-identified or synthetic health and disease data are admitted. This is a big change, and it is indicated explicitly that pseudonymized data are not accepted; they might be accepted in a second phase, but not at the beginning. So there is a choice there, a choice which needs to be interpreted and which has a background. The background is that there is, in fact, a fight over how to use data, and in Europe we have much stricter conditions than elsewhere, specifically than in the United States. We have the distinction between anonymized and pseudonymized data: pseudonymized data are de-identified somehow, but there is a key for re-identifying them, and somebody holds that key. Now, there was a ruling by the EU General Court in April 2023 saying that if the entity receiving pseudonymized data does not have the key, then for that entity those data can be considered anonymous. This has been appealed by the European Data Protection Supervisor, so the question is still open, because we do not yet know what the final judgment will be. But this is a revolution, because the way it is indicated in the Commission's procurement, the de-identification of the data should be irreversible and definitive; it sticks to the definition of anonymization given already ten years ago by the Article 29 Working Party, the predecessor of today's European data protection bodies, which had a very strict definition, in contrast with the definition given in the General Data Protection Regulation, which is more flexible and speaks of reasonable non-identifiability. Since anonymization is defined as definitive, irreversible and so on, the version the Commission has adopted for the moment in fact makes anonymous data unavailable: nobody dares to say that they have got anonymous data, so anonymous data are not offered, and the platform in fact seems to rely only on synthetic data, which was one of the points already mentioned by Claire. This is quite interesting, so the interpretation riddle is this: is the Commission just waiting for other things to develop, like the European Health Data Space, which should come with newer rules and make things clearer, so that this is just a first phase, a provisional attitude? But then why launch a platform with such a restriction while still waiting for further developments? Or is there a strategic choice in trying, in Europe, to turn what is for the moment a hindrance, our much stricter rules on personal data protection, into something that allows Europe to develop much stronger synthetic-data generation mechanisms? This is a very interesting policy consideration, and I wanted to bring it to your attention. Thank you.

Moderator: Thank you so much, Edwin; that was very interesting. And now, Ariel.

Ariel: Thank you so much. You have heard a bit about me and the things I think about already, so I can jump right in. I want to tie together my talk from earlier and the topics we were just discussing, in particular some of the points Claire made. This is an audience of economists, a group I think of as particularly susceptible to good logic around causal inference and
statistical inference: most of you are empiricists trained to be creative in finding new methods and approaches for making causal claims from data, and I think the three applications Claire was talking about are precisely the kind of thing that is very amenable to an economist's way of thinking about the world. Things like modeling and simulation in general, or synthetic control arms for clinical trials, just make a lot of sense to a group of people who understand how to create control groups from real-world data, from observational data sets that were collected in a way that is not fit-for-purpose. For those of you who do not run experiments, 100% of the data sets you use were not collected with your particular paper in mind, unless you do surveys or experimental economics. What we call real-world data in healthcare is just data collected outside the RCT setting: it is exactly the same thing. So I want to orient these wonderful technologies in the language of this room.

I think the best examples here are what the FDA has started calling "learn and confirm" approaches, where we learn from things like synthetic data sets for clinical trials, or simulations of cardiovascular devices, and then confirm those findings in much smaller groups of patients in an actual clinical-trial setting. If we do these very clever things, which by the way we are entirely computationally and statistically capable of doing already, they could drive massive efficiencies in new drug and medical-device development. And, Claire, you said this, but I want to say it explicitly: we are not talking about no longer doing trials in humans, but rather about getting to a much more efficient, smaller confirmatory trial in humans, because we have already done the virtual crash test of the cardiovascular device. (I like that expression, Claire; I am going to steal it from now on.) We know this from statistics: if you are more certain about the effect size you are expecting, and more certain that there will be an effect, then you need a smaller n. It is just a power-calculation issue from frequentist statistics.

I promised Jean I would briefly orient all of this in a broader conversation about AI. We published a paper in 2019 in the New England Journal of Medicine's Catalyst publication about the applications of AI in healthcare delivery, not in R&D, which I will get to in a minute. In healthcare delivery, there are really only three things you can do with artificial intelligence and algorithms: you can use them for administrative purposes, for diagnosis, or for treatment. There really is not anything else; if you think about what happens in hospitals and clinics, everything falls into one of those three categories. The rub, as you heard earlier and as I will repeat because it is vitally important, is that any time you do diagnosis or treatment, you are likely to have something that meets the statutory definition of a medical device and requires regulation, which is why all of this comes back to regulatory economics. I think there are massive opportunities on the administrative side, though: it is wild that I have to make three phone calls to schedule a medical appointment. Clinical notes, scheduling, even triage, which is actually a higher-risk activity, can all be done better when supported by AI.
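The power-calculation point above can be made concrete with the standard back-of-the-envelope formula. This is a minimal plain-Python sketch, assuming a two-sample z-approximation with 5% two-sided significance and 80% power (the normal quantiles are hard-coded to keep it dependency-free); real trial designs are of course more elaborate.

```python
import math

# Standard normal quantiles, hard-coded for the conventional design choices:
Z_ALPHA = 1.96    # two-sided alpha = 0.05
Z_BETA = 0.8416   # power = 0.80

def n_per_arm(effect_size: float, sigma: float) -> int:
    """Approximate patients per arm for a two-sample z-test:
    n = 2 * (z_{1-a/2} + z_{1-b})^2 * sigma^2 / delta^2."""
    return math.ceil(2 * (Z_ALPHA + Z_BETA) ** 2 * sigma ** 2 / effect_size ** 2)

# The larger the effect you can credibly expect, the smaller the trial:
print(n_per_arm(0.3, 1.0))  # modest expected effect -> larger n per arm
print(n_per_arm(0.5, 1.0))  # larger expected effect -> smaller n per arm
```

This is exactly the economics of the "learn and confirm" idea: prior evidence from simulation or synthetic controls sharpens the expected effect size, which shrinks the confirmatory trial.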
Then there are the exciting applications in discovery, in the R&D phase: using things like digital twins or virtual twins to support drug and medical-device development is, I think, the main one, and we can talk more about that, but they run all along the R&D trajectory. I want to mention a wonderful article that I did not write, though three of its authors are my co-authors, so I feel an affinity for it; it was published recently in JAMA Network Open. It is a very short piece surveying drugs that have come to market and somehow used artificial intelligence in R&D. It is not that many: if you look at the data, the total is 164 examples they could find of R&D involving artificial intelligence for new biopharmaceutical products, and only one of those examples is a drug that is actually on the market. So it is not as if all of our new drugs were already developed with AI; we are still very early in the pipeline. That does not mean it is not happening, and I actually believe it is happening faster than most people in this room probably think, but we are still in the very early days. One more thing: the final table of our 2019 paper on AI and healthcare delivery is about the technology-development trajectory, a framework we borrowed from MIT Technology Review for thinking about the stages of technology development. The vast and overwhelming majority of AI activity sits at the very left of that trajectory: the number of products actually on the market, whether the products are themselves algorithms or were developed with algorithms, is quite small relative to the early-stage work. You cannot open a newspaper without reading something about AI and healthcare, but much of that is cool stuff happening in university research laboratories or in the R&D units of companies. We are not yet at the stage where all of our drugs suddenly come to market because of AI, but it is in the pipeline.

Moderator: Thank you so much, Ariel. In terms of the topics we might want to discuss, we will proceed in two stages: first I will ask the participants in the round table to comment on each other if they wish, with no obligation, and then we will open the floor, where I am sure there will be lots of questions. To structure those questions, let me come back to the themes already discussed. The first big issue, for economists of course, is the business model: who is going to pay, how is this going to be reimbursed, what is the value proposition? That also raises questions between Europe and the rest of the world: who will have access? Will there be a discriminatory access policy, given that Europe will pay for this program, compared to other countries? So that is one thing, the entire business proposition. The second theme, which has been discussed a lot, is data, and there are various reasons why things may go wrong. I understand that privacy may be an obstacle, perhaps a reasonable one, but an obstacle nonetheless; Claire talked about synthetic data, and Edwin discussed the fact that rules are much stricter in Europe than in the US. Does that mean we are doomed, or are there ways around it? But that is not the only issue. There is also the question of data ownership: the big, dominant players might be reluctant to share data with others because it is a competitive advantage; can we force them to do so? Then, maybe this is not an issue, but do you need someone to standardize the data? To be useful, data have to be standardized; usually that is done through standard-setting organizations, or through antitrust, but someone has to actually do it. And Ariel mentioned this morning the updating question: if you need approval, what happens afterwards? The big thing about AI is that it improves over time, there is a feedback loop, and if you cannot update, that is pretty bad. The third issue I would like to discuss is a much broader one in economics, with many papers on it, often outside the healthcare sector: what is the complementarity between AI and humans? Ariel touched on that a little, but not exactly from this angle. Of course there is the question of virtual twins versus clinical testing on humans, but closer to the economics literature there is the relationship between AI and innovation: are we still going to use people like us, or are we going to be replaced by AI? Are they complements or substitutes? Pierre showed me a paper in which they are complements, in a specific field: AI narrows the search to the promising routes, but you still have lots of false positives, and then you need domain knowledge, so there is some complementarity there. In general there could be both complementarity and substitutability, and that is a topic I would like to emphasize. I have spoken too long; I suggest we first ask Claire. Would you like to
react?

Claire Biot: Thank you! Well, you raised so many points that I will try to address the ones that echo what the other speakers shared; I am not sure I will hit all of them. On what you described about AI, the one thing I might add is that there is obviously a hype around generative AI, and what we see right now is mostly focused on life-sciences companies. The most excitement comes from R&D, because the idea is that you can generate a universe of potential molecules, drug candidates, and that this will accelerate things. Then comes regulatory work, because of text summarization and data gathering, so we see a lot of interest in regulatory quality: can I generate my annual product quality review with generative AI? On the healthcare-delivery side it is more administrative, I would say, using the synthesis functionality of generative AI: can it be a companion for the doctor, a physician assistant? The pieces of a patient's medical record are spread everywhere today, and if a generative AI tool can gather all that information, it is valuable time saved for the physician.

On synthetic data, one comment and one question for the panelists. The comment: we have had to go through synthetic data ourselves. I have been talking about synthetic control arms, and the limit is that the data we use belong to the sponsor of the original clinical trial, so we cannot simply share them with the client who wants to build a synthetic control arm. We had to find a way to anonymize the data, and using generative AI we are blurring the data without breaking the distribution of the patients. It is a bit like blurring Ariel's nose: since what we care about is breast cancer, the shape of the nose does not matter much. With this technology we can share such a data set with our customers, who can then pool it with other data sets they hold internally, if they are interested.
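The "blurring without breaking the distribution" idea can be illustrated with a deliberately naive sketch: add noise to a sensitive column, then rescale so the mean and spread match the original. This is an editorial illustration of the principle only, not Dassault Systèmes' actual method; real anonymization must also preserve joint distributions and withstand re-identification attacks.

```python
import random
import statistics

# Toy "blurring": perturb a sensitive column, then re-standardize so its
# mean and standard deviation match the original data. Values are invented.
rng = random.Random(1)
original = [rng.gauss(62.0, 8.0) for _ in range(10_000)]  # e.g. patient ages

def blur(values, noise_sd, rng):
    noisy = [v + rng.gauss(0.0, noise_sd) for v in values]
    m0, s0 = statistics.mean(values), statistics.stdev(values)
    m1, s1 = statistics.mean(noisy), statistics.stdev(noisy)
    # Rescale so mean and spread match the original distribution.
    return [(v - m1) / s1 * s0 + m0 for v in noisy]

blurred = blur(original, noise_sd=4.0, rng=rng)
print(round(statistics.mean(blurred), 1), round(statistics.stdev(blurred), 1))
```

Individual records end up far from their originals (the "nose" is blurred), while the aggregate statistics an analyst cares about are preserved by construction.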
Now, my question for the panelists, and maybe especially for you, Ariel, since you have one leg on each side of the Atlantic: when we talk about pseudonymized versus anonymous data, what is it that the US does better in defining what de-identified data is? Is the HIPAA Safe Harbor provision, for example, helpful in setting boundaries between what is considered anonymized and what is pseudonymized? As for business models, I guess we can save that for later, since I have already been long, but I am happy to come back to it.

Moderator: Yes, I am expecting questions on that. Thank you.

Edwin Morley-Fletcher: I think that besides the administrative, treatment and diagnosis uses of artificial intelligence, one should also consider the predictive function, and this is what fits virtual twins particularly well: the real strength, the very strong transformative power of virtual twins is that they allow you to predict various alternative paths for the unfolding of people's lives. This can be really powerful and has great potential. But that is what makes the current situation so paradoxical: the platform is restricted, for the moment, to synthetic data, with the specification that the synthetic data must also pass a privacy-assurance assessment, so they must be proved anonymous. Being synthetic, they are generated, artificial data; they do not refer to personal data, and it is clearly established that they fall outside the scope of the GDPR. Yet they have the same patterns as real-world data, they maintain the same statistical distributions and so on, so they can be very effectively representative and can be used in synthetic arms, et cetera, as we have already heard. But it is paradoxical to try to use synthetic data as the basis for virtual twins, because virtual twins, after all, must eventually have to do with real-world persons, with people who are specific individuals with specific
characteristics. So this is a tricky issue, and it leads us to consider this apparent paradox: using this very complex technology as the one leading to the generation of data synthesis. This is an interesting concept and, I think, something which deserves more attention. Well, that was the point I wanted to make, thank you.

Great, I will try to quickly touch on a selected sample of those topics — hopefully quickly. I want to double down on something Claire said, which is the application of things like generative AI to regulation, the sort of administrative bits. There's a company that I'm working with — and if anyone's interested, they'll give you a really cheap academic license — called Nyquist AI, which has built this huge database of regulatory documents, mostly from the US but also from Europe and China. They basically have a searchable database, but they also use GenAI so you can say "find me all of the medical device applications that look like mine," and you can use natural language to say, "I'm in the process of coming up with a mitral valve clip" — we were just talking about this — "show me all of the mitral valve clips that have been approved by European manufacturers." So you can actually start to do these exercises already. They're selling their tool to regulatory consultants and medical device companies, but this is just as useful for the people doing regulatory review, so I just want to highlight this as an excellent example of the types of things that we should be doing already, and are doing already to some extent. In terms of data ownership, we could spend a one-week workshop having thoughtful conversations about this. My personal hope is that we can move a bit away from conversations about data ownership and more towards conversations about data governance and data stewardship, and how do we think about
mechanisms — and this is again where economists are probably quite helpful — how do we think about mechanisms, and mechanism designs, that make sense for ensuring access to data, rather than focusing too much on, okay, it's Jean's data, he owns it, I don't own it; rather, what are the mechanisms through which all of you could do R&D with that data? That's my personal hope for all of that. There are some glimmers of interesting things going on. Germany is in the process: a piece of legislation passed in February to set up a national research data center allowing access to claims data, national-level claims data, and later other types of data, cancer registry data — and not just for research. I think this is quite interesting because, as with the European Health Data Space, it's a lot of research applications; in Germany, any project that you can say will somehow generate social impact will be able to access this data set. What that means is that if you're a for-profit pharmaceutical company and you are using these data sets to bring a new medicine to market, you can claim the social benefit of that, and so it's not just academic researchers at medical schools and elsewhere, it's companies as well. This still needs to be built, and we'll see if it works, but I'm optimistic about these types of endeavors. Similarly, with the European Health Data Space, there are federated learning models, which we talked about very briefly last night. Who knows about federated data models? Okay, not so much. So, basically, the idea is that the data stay where they are. Let's imagine hospitals all have versions of a data set about cardiovascular patients; these data sets would stay within the hospitals, on the secure hospital servers, in the country and in fact in the city where the data set otherwise would live on a server, and we have
federated learning algorithms. These are algorithms specifically designed to learn from local data sets: you train an algorithm on a local data set and collect only the parameters from that algorithm — none of the patient data that underlie the estimation. There are conditions: the data set has to have a certain size and the relevant variables. You then go back and collect the parameters of that algorithm, and you do that in multiple locations, where all of these data sets are just sitting where they've been sitting all along, and then you do a kind of meta-analysis where you can actually create a hybrid model based on the multiple sets of parameters. I think this is a very exciting approach that — in my very personal opinion, but I share it with many others — doesn't get enough attention, especially in Europe, and especially as a workaround for data privacy regulations in settings where synthetic data is not appropriate or available. And then, I think, the final thing, this question — sorry, and you can combine them together — about anonymous versus pseudonymized (I struggle with this word) data: the US is doing it, but not really at the scale everyone thinks. There are actually only something like ten data sets that have been made publicly available, and most of our medical AI algorithms are actually trained on about a dozen data sets, and this is a huge problem, by the way. Take the MIMIC data set, which comes out of Boston and which a bunch of my colleagues in Boston — but now also my colleagues in Germany — work with. It's a terrific data set, and if you're doing computational statistics and machine learning on healthcare data, it's exactly the place where you start, and where you have all your PhD students write their dissertations. The problem is, I don't think I want to be treated in a German hospital that has algorithms built on data sets from Boston, and I think, you know, for anyone who's basically not
at the Brigham and Women's Hospital in Boston, that algorithm will potentially be somewhere between not quite appropriately tuned and actually completely inappropriate for what you're trying to do. So, in the US we're better, but it's actually not as widespread as we think; and then we're not as strict — there are none of these lawsuits around data use that we see in Europe, sort of stopping the game until it's sorted out.

Okay, well, thank you so much. Now I'm sure you'll have lots of questions for our panelists. Okay, let's start with Pierre.

Just talking about the federated data: I think I had heard about it, but I had forgotten the term. I mean, I am a bit skeptical about this, for the following reason. If you don't put a limit on the number of parameters — basically, if you allow me as many parameters as I want — it's as if I had your data. So where are they going to put the limit between sharing the parameters of a model versus sharing the data? I don't have the solution, I know.

So, I mean, there are entire fields of computer science where people are basically developing best practices here. I think, just like anything else, if we're doing any sort of applied statistics, we have to use best practices: there are massive risks of overfitting in general, and I think the world is doing much more of this than we admit, but there are best practices and there are decision rules — how big does the training data set need to be in order for you to recover parameters that are even useful to put back into the sort of master algorithm? And most of the original applications of federated learning were for massive genetic and genomic data sets, where the underlying data sets themselves are much larger than the data sets we're typically talking about with patient data, and so we need to rethink
this. There's some work on this — there's good research being done — and I'll just say that from a statistical perspective you're completely right, and these are the things to be thoughtful about. But that doesn't mean we don't do this; it means we do it thoughtfully. Sorry, Pierre, you wanted to add something to this?

There is a project just ongoing called DataTools4Heart, where we use federated learning across seven different hospitals with massive amounts of cardiac data, and this is combined: there is federated learning, for having the possibility of leaving the data where they are, and then also developing synthetic data in parallel. So you can have federated learning, synthetic data generation, and differential privacy added to this. And in fact, what Ariel was mentioning — MIMIC, a very powerful, very well-known American data set — is now being substituted in this project, DataTools4Heart, by something which will be called CardioSyn: a synthetic data set for cardiac diseases, with guarantees in this sense.

So, I love the brave new world you guys just forecast, and I want to know: I know what will happen when things go wrong in the US — it will be a lawyer's dream — but what's going to happen in the European context when patients die unexpectedly or when data security is breached? Lots of things can go wrong, and how this plays out, how successfully this plays out, will depend on how European institutions deal with those failures, so I'd like to hear your forecasts of that.

The way I would see it is that if you take, for example, the use of a virtual twin to make a medical decision, it's considered a medical device, right? It has to go through clinical testing, through market authorization — the CE mark here, or market authorization in the US. So for me, the answer is that they're going to deal with it as they would deal with a drug: there will be
an inquiry, they will look for accountability. But I think the key to preventing that from happening is to demonstrate clinical evidence of safety and efficacy, and I would advocate for collecting real-world data. That's maybe a point to echo — you know, this idea that I get validation and then the algorithm doesn't change. For me, especially when you talk about software used as a medical device, there should be easy updates, life-cycling, and I think it can be managed if you have a much more seamless relationship with the regulatory bodies. That can happen if you can move from a document-based approval, where you submit a 400-page document with a lot of content, to really pointing to what data are important, so that you can have a discussion at the data level with the regulatory bodies. That should allow for a better life cycle of your product. It's actually part of this, even for drugs: for those that are familiar with the ICH rules — the International Council for Harmonisation of quality and regulatory rules — it's ICH Q12, which is really about this life cycle of products.

Hi. If you allow me, I'd like to start with a couple of remarks to explain my question. I'm not an economist — maybe one of the few here; I'm actually the managing director of a clinic here in Toulouse specialized in cardiology, so this, I guess, brings me another perspective on what you talked about, which is, by the way, super interesting. The second remark is that it really feels like history is repeating itself, because when I hear you talking about AI — and I think you touch on a very important point — it feels like the 2000s, when people were talking about information technology and how it was going to change the entire world and the way we do things. We always tend to overestimate the grand stuff and underestimate the small stuff; the best example is that you are all still writing on paper today. So the question I have is: how do
we actually create the right incentives for people to work on the important stuff in AI? I would venture to say that saving one click per nurse per patient per day could actually have a greater impact on the entire healthcare budget of a country than trying to create an AI that can help replace a doctor. So how do we create those incentives? Thanks.

Thank you — I agree. I think this was part of the motivation for writing this survey article about AI in the healthcare delivery context as well. Once we get away from the sexiness and excitement of understanding protein folding and everything else, there's this very real fact, which is: how can we scale clinician time? I said in my talk earlier, we do not have enough clinicians, we don't have enough doctors and nurses — so what's the thing we can do to unlock, say, the equivalent of one free nurse per day per hospital, per department? That would be amazing. So yes, 100%. I think it's vitally important, then, to think about the healthcare delivery setting itself and what clinicians actually want, and I know that you're probably happy with that response. So, I just finished my most recent Harvard Business School case about a company called Aidoc — it's spelled A-I-D-O-C — an Israeli-American company; they currently have the greatest number of FDA-cleared radiology diagnostic algorithms on the market. And if anyone's interested in these cases: just so you know, the HBS cases are copyrighted and the internet tries to get you to pay for them, but you can always just email the author, or another HBS professor, and they can give you a free copy — authors are always allowed to distribute free copies — so if anyone's interested, I'm happy to share a copy of the case. But what they did, and this is, I think, vitally important, is they asked: what do radiologists actually want? What would actually be useful for the workflow — not the cool, sexy
stuff we can do with algorithms, but what's actually necessary in the workflow of a radiology department, in a radiologist's day — and then built not the products they thought would be coolest, but the ones the radiologists actually wanted. And the final point I want to make is that a lot of this is a management thing, right? This came up yesterday as well; I thought it was articulated really nicely — I think it was Juliette who was talking about this. If nobody changes anything about how the hospital is managed, then it doesn't matter what tools are available. If nobody's job description changes, if nobody gets training or incentives to use these new tools, then, you know, we're in trouble. Just as an anecdote: I was asked to speak about healthcare AI at an event that's coming up, and I thought I was going to talk about the stuff that we're all talking about here, and it turns out it's a group for hospital HR people, because the HR departments in hospitals and clinics have no idea what any of this means — and, by the way, they don't have the empirical backgrounds that people in this room have. So they're just asking: what does this mean for us, what do I do with this, how does this help me do my job? And so I think part of the work we need to do now is the translation — the actual healthcare management, not the innovation stuff, but the management. So yes, 100%, and we have plenty of work to do.

Just a few additional comments, I would say. Well, first of all, the use of EHRs is the first cause of physician burnout in the US — correct me if I'm wrong — so, to go in your direction: they are supposed to alleviate, or to improve, working conditions, but not always. And I would distinguish between two cases. First, if you implement software — in any industry that we serve, we never sell software as only software: it's always a matter of implementing software while doing the change management with people and understanding the
processes that it's going to impact. And there I would distinguish two cases. The easy case is: I'm going to insert into the current workflow. It's probably the easiest one to think about — probably more incremental innovation — but it's good because it requires less change management. Here I would say the user interface has to be as friendly as possible, so that it makes it easy for people to change, and they have to understand the value. Then what about disruptive innovation, when you're going to change the workflow? Well, that one is more challenging, because this is where you really need to embark on some good change management in the operation; otherwise you're just going to fail.

I think you touch on a very important point we all agree upon; however, it must be considered that artificial intelligence is going to impact things in radiology too. It is now proven that the way artificial intelligence can check the correctness of a diagnosis in radiology is becoming better than a human one, so there will be a hybrid combination of both things put together. But artificial intelligence will not be able to work as well as it could, because we have a problem with data, and I think the question posed by Jean — what is the business model there — is very important. Now, with the European Health Data Space, with the Data Governance Act, with the new legislation coming up, the fact that data altruism is allowed — with data altruism organizations which can collect, in this way, the data of those who are donors of data — there should be data brokers coming into the game; they even speak of the possibility of some financial remuneration for this. So this opens the world to new possibilities, but there are very strict regulatory conditions for the moment, with very high sanctions for those who fall into data breaches and things like that — just to touch upon a point which was raised earlier. So this really has to
be clarified; until we do, we won't have artificial intelligence really working as much as we would like, because the data are not there, and if you don't have enough data, you can't have artificial intelligence playing fully. That's why synthetic data are so important: they are a way of generating a lot of data, and of having artificial intelligence generate synthetic data, with a loop going on in this direction. But there are these things which are still unclear and which are conditioning the whole picture.

I just had a curiosity. My impression is that AI very often works, but we don't always know why it works — and especially in a health application, that is going to be really impactful. So I was wondering whether this is also going to affect, for example, regulatory approvals; we were talking this morning about something being, you know, not a device — but then what if you approve something where we don't know exactly why it works?

Yeah, thank you. So the question is about explainable AI versus things that work where we do not know why, from a biochemistry perspective — we've got a doctor who's nodding. Take paracetamol: we all take it, I'm sure you've all taken it, right? We don't know how it works; there are a bunch of molecules that we use — a bunch of our anesthesia as well — where we actually don't know how they work. So on explainability, if we're going to hold medical products to that standard, then we have to pull a bunch of very useful, very safe drugs off the market, so I don't think that's the standard to which it should be held. The question, though, I think is relevant in the liability setting that we were talking about earlier: what is the combination of clinical decision-making and an algorithm that's being used at any point in time, and how does it matter? In that case, I think it's interesting: for radiology algorithms, the radiologist is still making the diagnosis, and
the liability all rolls up to the radiologist, and so at that point it becomes, you know: has the algorithm been through a clearance or certification process, do we know that it's safe and effective? If the algorithm is not doing diagnosis on its own but is supervised by a radiologist, then the radiologist may have some real or imagined desire to understand why the algorithm came to the decision it did. It seems like, until now, the attempts to make the best algorithms more explainable are not very good, and so — in my conversations, at least — there's been a real pushback, a real swing of the pendulum in the other direction. Two years ago everyone said, yeah, we need AI to be explainable, because if you tell the doctors why the decision was made, then they'll be on board with it; the sentiment seems to be going completely in the other direction right now, and those attempts backfire. So — but, paracetamol — that's my answer.

Yes, I have just a small question, out of curiosity. My understanding is that in clinical trials the patient populations tend to be quite homogeneous — more white, more male. Would synthetic data and digital twins help us here, would they just kind of perpetuate those patterns, or are there avenues to make these problems better?

Synthetic data can very helpfully contribute to debiasing data, so that you can have either something replacing missing data, or really correcting biased data — for instance, an under-representation of a specific part of the population — because you can design it in such a way that you create an ideal cohort. So in this sense it is a powerful tool.

And for me, with the digital twin, the model is only as good as the knowledge that you put into the model, so if you created all the knowledge you have based on a non-diverse population, for me the virtual twin will not
be diverse. So I would advocate for collecting more data from a diverse population, and actually we help our customers do it. You know, the FDA has released guidance to really promote diversity in clinical trials, and there are various ways we can do it. One way is to use the data we have from ongoing trials to identify the hospitals — we call them sites in clinical trials — that are good at fostering diversity in clinical trials. The other way to improve diversity in clinical trials is to make trials accessible to people who don't live within, you know, a 15-minute drive of a large hospital; it's called decentralization of clinical trials. So what if you could do your trial with a telehealth conference with the doctor, collecting data on your phone or through sensors? More and more of my companies go there, because they have understood it's a way to increase the diversity in clinical trials.

A last word — just one last comment: if anybody is interested in learning more about virtual human twins, there will be a large conference on the 15th and 16th of July in Amsterdam, and some people could still ask to come and be invited; just look at the EDITH-CSA website and you'll find the way to register there, if any of you is interested.

On those words I would like to conclude — not call it a day, because we have another session — but thanks so much to the three panelists, a wonderful panel, thank you. [Music] [Music]
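The federated learning loop Ariel described in the panel — train locally, share only model parameters, then aggregate centrally — can be sketched as a small FedAvg-style toy. Everything below (the three "hospitals", their data, the logistic model) is simulated purely for illustration; real deployments would add the safeguards mentioned in the discussion, such as differential privacy and minimum data-set-size rules.

```python
# FedAvg-style sketch of the workflow described in the panel: each "hospital"
# trains on its own data, only model parameters leave the site, and a central
# step averages them. All sites and data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y, w, lr=0.1, epochs=100):
    """Full-batch gradient descent for logistic regression on one site's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient of the log-loss
    return w

# Three hospitals, each holding a private cardiovascular-style data set.
true_w = np.array([1.5, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(300, 2))
    y = (rng.random(300) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)
    sites.append((X, y))  # these rows never leave the "hospital"

# Federated rounds: broadcast global weights, train locally, average parameters.
w_global = np.zeros(2)
for _ in range(20):
    local_ws = [local_train(X, y, w_global.copy()) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)  # only parameters are aggregated

print(np.round(w_global, 1))  # recovered coefficients, close to true_w
```

This also makes Pierre's objection concrete: the central server sees only the two averaged coefficients, but nothing in the protocol itself caps how many parameters are exchanged — with enough of them, the parameters can start to encode the data, which is why the decision rules and privacy add-ons mentioned above matter.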