CRM Workshop: Inferring Neural Networks from Electrophysiological and Functional Imaging

    (November 28, 2023) Digital Twins in Brain Medicine

    Abstract: Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean-field techniques from statistical physics, expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and to clinical translation, including aging, stroke and epilepsy. Here we illustrate the workflow along the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using diffusion tensor imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other, healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of the performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.

So it's my great pleasure to introduce Dr. Viktor Jirsa. Dr. Jirsa is the director of the Inserm Institut de Neurosciences des Systèmes at Aix-Marseille University in Marseille, France. He received his PhD in 1996 in theoretical physics and applied mathematics, and has since then contributed to the field of theoretical neuroscience, in particular through the development of large-scale brain network models based on realistic connectivity. I met Viktor for the first time in 2001, at the Information Processing in Medical Imaging (IPMI) workshop in Davis in the US, and I remember that he got the prize of the conference at that time, so he became a little bit famous just afterwards. His contribution was on neural fields in EEG and MEG, and he has since made a huge contribution to our domain in neuroscience. It started even before that, I think in Florida, but at the international level I saw him for the first time in 2001, so we go way back, as you say. His work has been foundational for network neuroscience in brain medicine and for the use of personalized brain models in epilepsy. He is the scientific director of the clinical trial EPINOV, evaluating the use of virtual brain technology — which you heard about yesterday from Dr. Randy McIntosh — in epilepsy surgery. Dr. Jirsa serves as Chief Science Officer of the European digital neuroscience infrastructure EBRAINS and was a lead investigator in the Human Brain Project, so you know how big this infrastructure is at the European level. He has been awarded several international prizes for his research — I think the first prize was at IPMI, but I'm not sure — including the first HBP Innovation Prize in 2021 and the Grand Prix de Provence in 2018, and he has published more than 160 papers, and probably more. So it's my great pleasure to have you with us today; thank you for accepting our invitation. The floor is yours.

Thank you very much for this very kind introduction — you touched me, actually. So 2001, as you say. There was some very early work, from before I even met you, that was the beginning of neural field theory, linking it to forward solutions and actually bridging the gap between neural signals and neural modeling. But remember when the first connectome paper came out — that was about 2005; the word "connectome" did not exist back then. What we showed at that time was that only naive approximations of connectivity were being used, typically with a high symmetry, and we demonstrated that in order to ever bridge the gap to realistic signals you need to take realistic connectivity into account. At that time diffusion tensor imaging and all of this was upcoming and became a reality, so we said: we need to integrate it. That was the very modest beginning of what we call the virtual brain; then I ran into Randy and it became reality. Okay — we have switched the order. What I will do today is give you a high-level overview of some of the work that we have done, particularly in epilepsy, so applications in brain medicine,
but also an insight into the functioning of EBRAINS — not too much, but enough to make you go to the website and have a look. And by the way, I also brought you, not a present, but a handbook that came out of this: all the software that I will talk about is available on EBRAINS. I invite you to have a look at it — maybe you can pass it around; it's just hot off the press. In the second lecture we will do a little deep dive into some of the intricacies, what works and what doesn't work so well, technically.

So, as was mentioned, much of the work that I will speak about comes out of the Human Brain Project, and actually precedes it: the Virtual Brain, for instance, was released as a platform in 2012, whereas the Human Brain Project started around 2013/14 and finished last week. So this is ten years, and a big sum of money — about 600 million euros — later. This is where we stand, and the transition is occurring into EBRAINS, which is today a European infrastructure — not a project but an infrastructure, which means it doesn't have a definite end. It is supported by the European Commission, and it provides us with an ecosystem for digital neuroscience, helping us in principle to study fundamental mechanisms of brain function and translation into clinics and technology.

One of the big elements that came out of the Human Brain Project is the digital twin approach. What does that actually mean? When you look it up on Wikipedia or somewhere like that, you will find that the terminology is fairly recent — around 2006, going back to NASA, when they set up the first digital twins in industrial applications and manufacturing: building a cycle that fuses experimental data with modeling data in the same framework, operating it, trying to individualize it to the particular platform or flying object they were looking at, and then using it in autonomous simulations in order to feed information back. This is what we are trying to do in EBRAINS: to provide you with the type of technologies that are necessary for that. So it's a cycle — don't think of a digital twin naively as an as-good-as-possible copy of the biological system; it goes way beyond that, and I will actually exploit this here with you.

Essentially, the approach today is to show you some principles of how to build a digital twin brain — Randy showed you some of those yesterday; I will do the same from a different perspective, but it's the same stuff — and then go into more depth in epilepsy. First of all, in terms of building it, what we want EBRAINS to support is a full modeling platform on the brain level, but also for other structures such as the microcircuit level, as you see represented here, and all the way down to the cellular level. So what you will find on EBRAINS are different representations of the macro, meso and micro scales. What you will not find yet — but this is something we are working on — are standard models, essentially off the shelf, that you can then adapt to your needs. Don't think that this adaptation process is easy; it is actually one of the biggest difficulties that you will have, that we will have, in adapting it to the scientific questions you want to ask, either mechanistically or for your patient.
It will require some support — computational support in terms of storage and computation: HPC, high-performance computing, but also high-throughput computing. Initially, in the Human Brain Project, we thought the majority of the demand would be HPC. We were wrong: the majority is actually high-throughput computation, which sometimes also goes under the label of cloud computing. So this we had to make real, and it is essentially support that needs to be adapted to the workflow that you want to have.

Parameterization — I stay on the cartoon level for the moment, but I want to get you familiarized with some of the elements that go in there. This is the parameterization of the models, but what we want to do is integrate it with different sets of data: the connectomic data we discussed yesterday, which did not exist in 2001; multiscale data, very important, so multiscale representations of data going from the centimeter to the micrometer scale, which need to be integrated; or cohort-level data, if you want to touch cohort data. Then you have to put this together with in vivo data: if you want to perform clinical translation, you need to specify the model to the individual patient, so you actually have the challenge of making ex vivo and in vivo data work together. Our task was then to make workflows out of this — make these things work together, put them into an integrated framework and, if possible (this is what this is supposed to show), a single-entry, multi-user framework allowing clinical applications, surgical applications, applications in robotics, or medication. These things exist today, and some of them I wish to show you.

So this is the framework, and that was the task, all supported by the underlying infrastructure — this is what EBRAINS does. What we want to do in EBRAINS is make sure that you are not bothered by knowing where the data are stored, how the tools interoperate, and so on. The science workflows have guided us to develop appropriate technical workflows, which we then built around five pillars, and this is what you find in EBRAINS today: atlases; data and knowledge, which need to be organized following FAIR principles; simulation; brain-inspired technologies; and medical data analytics. These essentially form five network nodes that we needed to make interoperable, and that was our task — supported by hardware that allows you to seamlessly access computing resources, store data, and be legally compliant with European data protection laws, something that has become very difficult over the past five years due to the GDPR. So: handling data following FAIR principles; organizing data in multiscale atlases; simulation on multiple scales that, if possible, speak with each other; deriving brain-inspired technologies; and building environments in which you, as a clinician, can actually operate on data — your own data or, with appropriate data protection, working with other colleagues. It is supported by European member states — eight member states are full members today — together with quite a number of large institutions that also provide in-kind services, in particular the five supercomputing centers in five member states providing HPC services, cloud services and the appropriate data storage
necessary for that. All of this, for everyone on this planet with an academic account, is free. As a clinician you know how expensive it is to store data, so we get more and more clinical researchers turning towards us, because the costs are skyrocketing; within reasonable limits, this is essentially what EBRAINS provides.

So this is the framework. What I want to do now is jump into more depth on its organization and show you how we actually connect the tools, the models and the data in order to enable the flow that I was just describing in cartoon fashion. The key anchor of all of this is the atlas. In the atlas you represent data on different levels — the region level, and not only one instantiation: you have to have a representation of the variability of the data as well, because that is one piece of information we use quite a lot — and across multiple scales. On the region level we are on the centimeter-to-millimeter scale, and we can go down all the way to microstructural detail, with representations of cytoarchitectonic maps on the micrometer level, where data come for instance from polarized light imaging; I will show you an example on the next slide. This needs to be organized across modalities as well — functional modalities, structural modalities, ranging from the cellular level all the way to the connectomics level, and, with a temporal domain, the functional data. That then allows us, in this reference space, to take, for instance, your PET data coming from your individual scanner, map them onto posterior probability distributions — here, in this case, over cortical depth — and map this, via an ontology, onto the parameters of the models. This is not automatized, but it is enabled and interoperable. The tool is called siibra, developed by Timo Dickscheid: you can navigate in this space, zoom into a particular region, open a pull-down menu, and there you have all the different visualization possibilities — this is actually coming from one of those visualizations. Like Randy, like us with the Virtual Brain, it provides a graphical user interface, and this is nice; but after you have worked with it for two weeks you want to go faster, more automated, and everyone jumps onto Jupyter notebooks — that is the preferred means of use. Still, it is a beautiful entry point to go through the GUIs.

The models that we use in the Virtual Brain are essentially regional representations, and there we use mean-field models. There is not much I need to say about this today — we spoke about it yesterday — but essentially mean-field models are stochastic reductions of detailed single-cell population models, using statistical tools to obtain reductions over the inhibitory and excitatory neurons; typically you obtain a reduced dynamical system. Jansen-Rit was mentioned yesterday; I think we have about 20 models available in the Virtual Brain. The structure — and this is also very powerful — is agnostic to the model that you want to use. The mathematics that you obtain has a particular structure, which we will discuss more in the second lecture: the change of activity at a location x is composed of the internal dynamics at the network node, plus nearest-neighbor intracortical connectivity, decaying with distance and translationally invariant — which has certain mathematical properties, namely diffusive properties — plus the connectome.
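As a rough reconstruction of the structure he is pointing at on the slide (my notation, inferred from the spoken description, not copied from the slide), the evolution of activity $\psi$ at location $x$ would read:

$$
\dot{\psi}(x,t) \;=\; N\big(\psi(x,t)\big)
\;+\; \int_{\Gamma} w\big(|x-x'|\big)\, S\big(\psi(x',t)\big)\, dx'
\;+\; \sum_{j} C(x, x_j)\, S\big(\psi(x_j,\, t-\tau_{j})\big),
$$

where $N$ is the local node dynamics (the mean-field model), $w$ the translationally invariant, distance-decaying intracortical kernel (hence the diffusive properties), $S$ a sigmoidal firing-rate function, and $C$ the connectome term with conduction delays $\tau_j$ — the symmetry-breaking heterogeneous coupling discussed next.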
About this I spoke at IPMI in 2001 already, pointing out that the connectome is not translationally invariant. We had approximated it as such, and it was never possible to fit experimental data and represent them correctly: we needed the symmetry breaking in there in order to move forward, and this is what became the connectome-based approach. So let me summarize: from the imaging data of an individual we reconstruct the cortical and subcortical surfaces; we parcellate them; each region we represent by a neural mass; we connect them by the connectome — here I am re-quoting the mathematical equation I just showed you — and then, in addition, we have it spanned in three-dimensional physical space. This is on the source level. We can stick in sensors, such as stereotactic EEG (SEEG), to measure sensor activity the way the clinician or researcher measures it. So we are not limited to the source level in terms of modeling; we can actually mimic the patient's own implantation — and EEG, MEG and functional MRI likewise. You saw this video yesterday very briefly: this is a simulation; once you fix the parameters, there are no changing parameters. You run the simulation on the source level but project onto the sensor level, and this is the time series you see. Key to being capable of doing this is having a representation of the fast discharges, but also of the slower processes that can be captured — the slow variables spoken about yesterday, or extracellular effects linked to this, such as extracellular potassium.

So this is the environment. This is what the data look like when you measure them from the patient — this is work from Catherine Schevon's lab in the US. This is an electrocorticographic grid: the grids are placed on top of the surface of the brain, which is the representation you see here, and there, where the red cross is, she placed a microelectrode array, where on the reduced, submillimeter scale you can actually see the organization of the waves that you see evolving here on a very slow scale. This is a particular patient, and there is a very rich dynamics during this seizure: propagating waves, spiral waves, very classic spatiotemporal structures. So this is the challenge we need to capture, and it gets us to the next step: we need to jump up a scale in space. With the connectome-based approach Randy showed you yesterday, you typically have 150 to 250 brain regions; that gives you a brain region of something like 16 square centimeters — well, think of half of that — and 16 square centimeters is huge. So what we have done over the past three to four years is make it possible to jump up by a factor of 1,000, and that gets us into the millimeter range. This is not a triviality, and it is not just computational resources: many things change on the neuroscience level. You get to a level where you have to take intracortical connectivity into account — you cannot ignore it anymore; U-fibers become relevant; you need high-resolution data to inform this. And the bottleneck is not the simulation: the bottleneck is the model inversion, the estimation and inference that I have not yet spoken about. There are many bottlenecks that go beyond just adding a thousand more vertices.
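A minimal sketch of this source-to-sensor workflow using the open-source TVB Python API (the demo connectome and the random gain matrix are placeholder assumptions; a real study would use the patient's own connectome and implantation geometry):

```python
# Sketch: simulate seizure-like activity on the source level with the
# Epileptor neural mass, then project to hypothetical SEEG contacts.
import numpy as np
from tvb.simulator.lab import (connectivity, models, coupling,
                               integrators, monitors, simulator)

conn = connectivity.Connectivity.from_file()       # TVB demo 76-region connectome
sim = simulator.Simulator(
    model=models.Epileptor(),                      # neural mass with seizure dynamics
    connectivity=conn,
    coupling=coupling.Difference(a=np.array([1.0])),
    integrator=integrators.HeunDeterministic(dt=0.05),
    monitors=(monitors.TemporalAverage(period=1.0),),
).configure()

(time, source), = sim.run(simulation_length=4000.0)  # source-level activity
lfp = source[:, 0]                                   # first variable of interest

# Project sources to SEEG contacts via a gain (lead-field) matrix; a random
# placeholder stands in for the patient-specific gain computed from the
# implantation and the head model.
gain = np.random.rand(20, conn.number_of_regions)    # 20 hypothetical contacts
seeg = np.einsum('cr,trm->tcm', gain, lfp)           # time x contacts x modes
```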
It also changes the mathematical formulation: you are now in a field theory, not in a discretely coupled differential equation situation (even if it is a functional differential equation). So a lot has changed, and it has only become possible over the past few years to work with this. I am going to give you a few examples. It has helped us enormously, in particular with the source-to-sensor mapping, which becomes important when you take seriously that you are performing personalized brain modeling: you put a lot of effort into placing sensors, and then your brain region is that large — there is no proper implementation of the physics of electromagnetic theory mapping source activity to sensor activity at that level. Here it becomes possible. Here you see a representation of the connectome at the level at which we now operate: left hemisphere, right hemisphere, interhemispheric connections — please note we are at the level of over 100,000 vertices, plus subcortical areas and the cerebellum. How we build this: we first build a grid, with a resolution of multiple square millimeters, from the vertices of the surface; then we extract the streamlines and compute their intersection with each vertex, and this becomes the connection point, allowing us to build the high-resolution connectome. In terms of connectivity we are at the 200-micrometer level. This is ex vivo; we need to connect it in vivo with data from the individual patient — I will show later how we are going to do this.

This is the type of data you will find available for your use when you connect, and we also use it for building high-resolution templates. This is coming from Jülich: polarized light imaging. They take a brain, scan it in a particular preparation multiple times, then slice it and use polarized light — there are different techniques here: you can either transmit and refract the light, or just measure the polarization, etc., giving you different images; this one is PLI — and from the polarization you obtain the angle of the orientation of the fibers. Here we are at a resolution of 200 micrometers; look at these beautiful pictures of the fiber architecture of the human brain. This is what we have available and represented in this brain reference space, where you can use it as a template.

From this structural representation we can now build more detailed representations of areas that were not accessible before. This is a representation of the hippocampus — the part where, coming from the cortex, the sheet folds very nicely into this structure. We can inflate and unfold it and map it onto a sphere, and that allows us to do things like this, going directly from the cortex into this structure. Here you see the individual regions represented on the sphere, with a neural mass at each vertex point, with intracortical and cortico-cortical connectivity; we increase the excitability such that at the CA1 region you get the emergence of this type of propagating wave — the way we have seen it in Schevon's data and as we find it in animal experiments.
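A minimal sketch of the streamline-to-vertex assignment step just described, assuming vertex coordinates and streamline endpoints are already given as arrays in the same space (the real pipeline computes intersections with the surface mesh itself):

```python
# Sketch: build a vertex-level connectome by assigning each streamline's
# endpoints to the nearest surface vertices and counting connections.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.spatial import cKDTree

def vertex_connectome(vertices, start_pts, end_pts):
    """Count streamlines connecting each pair of surface vertices."""
    tree = cKDTree(vertices)
    i = tree.query(start_pts)[1]    # nearest vertex to each streamline start
    j = tree.query(end_pts)[1]      # nearest vertex to each streamline end
    n = len(vertices)
    w = coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n))
    return (w + w.T).tocsr()        # symmetrize: undirected streamline counts
```

A sparse matrix is essential here: at over 100,000 vertices, a dense connectome would not fit in memory, while streamline counts are naturally sparse.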
This is a classic scenario: you see the propagation going to the frontier, then traveling forward; it starts slowing down — this is a spike-wave complex towards the end of the seizure — and then it stops. At this level of resolution, this is becoming possible today; so far this CA1 region was just a single point.

Q: Your atlas is based on how many subjects? A: One subject, only one. It depends on the atlas: the BigBrain is based on one subject, and this one is also a single, but different, subject. If we want to make it patient-specific, we do not go to these very high resolutions; we stay at a resolution of 1 millimeter, and this is what we do all the time. It makes sense to go to very high resolution if you have mechanistic questions, for instance about epileptogenesis — sprouting and the reorganization of the mechanisms supporting epilepsy. We are not asking questions about epileptogenesis; we are asking questions about ictogenesis, as you have seen here, and for ictal propagation a millimeter resolution is typically sufficient for us. This we can do for subjects that do not have too much atrophy in their data, and then we can make this type of reconstruction.

Here is another example, now for one subject — not the very high resolution; this is the subject's own data: a reconstruction of the rhinal cortex, the fibers coming from the rhinal cortex within the hemisphere and projecting to the other one, and a representation of the connectivity strength. We increase the epileptogenic-zone value, and when we do this — please have a look — the seizure starts here; but look at this point: you see how activity came up there without propagating through the cortical sheet. It came through the long-range interactions. This is not possible in a system without heterogeneous connectivity through the white matter fiber system — these are the symmetries we were talking about; it is simply impossible with translationally invariant connectivity. So these are some of the signatures of the dynamics within the epileptogenic zone.

Just for your interest, if you ask mechanistic questions — can we go to much more detail? The answer is yes. Within the platform, certain activities had to be developed to have representations on the microscopic level. There you use a simulator called NEST, a cellular simulator, together with the Virtual Brain; if you want them to cooperate, to speak to each other — and they speak different languages: mean fields versus point neurons with action potential firing — this has now become possible. What you can do, for instance with the CA1 region: you can take the hippocampus, represent it in NEST, and embed it in a simulation using the Virtual Brain. The data for this are available: this comes from DeFelipe's lab in Madrid, where they essentially scanned the human hippocampus — human, not rodent — extracted after surgery, reconstructed in three-dimensional physical space the positions of all the cells it was possible to obtain, reconstructed the arborizations of the dendrites and the axonal trees, represented this numerically, and then computed the overlap with a technique called probability clouds,
where you are able to capture the same statistics with which the individual neurons make synaptic contacts, and reconstruct the same region in 3D for the human hippocampus. Now you can simulate it either without TVB or with TVB, and this is important because context matters: on the left-hand side without TVB, isolated, and here with TVB — and you see the dynamics change significantly. Here you can now study questions about how to stop the propagation patterns; stimulation approaches are being developed for this, pretty much as in the heart: we need to know how to time a stimulus in order to stop a seizure. Today we do not use this in the brain, but it is becoming possible.

So this was an overview of what can be done in principle on this platform, going vertically across the different scales and the data that are available. In the remaining time I want to go more horizontally and show you how we can put this type of technology to use in clinical applications. The framework is, again, the digital twin framework: on the left-hand side the real-world representation, on the right-hand side your virtual brain twin — essentially the mathematical representation of the totality of the knowledge you have; you formalize it mathematically, and let's call it a digital twin. Then you fuse in the data coming from the human brain and start putting constraints on the parameters of the model. How do you do this? Typically through inference — not optimization, not fitting, but inference, because we have an enormous variability. So Monte Carlo techniques — most of what we use is some variation of Monte Carlo — in order to obtain posterior probability distributions estimating the parameters of the model. And the model needs to be adapted to the type of question you ask, which means you must have a very good idea about the pathophysiology. In epilepsy it is the epileptogenic zone — we will speak more about this in the application. In Alzheimer's disease, which we discussed yesterday, in the limbic system, it is the plaques, the deposits, that change some of the parameters. In multiple sclerosis it is the time delays: you have a demyelination of the fibers. However, there is what is called the radiological paradox: a very bad correlation between the lesion load in multiple sclerosis and the clinical symptoms. Why is that? It can be explained very nicely — Pierpaolo, in my lab, is working on this — by the time delays that enter there and play a very important role. So you have different representations in the mathematics that can be estimated, and this is then the inference process.

In the remaining time I want to go a little deeper into the epilepsy application with you. With epilepsy, what you need to recognize is: one, 1% of the human population has epilepsy; two thirds of this 1% we can control via medication, and the remaining one third we cannot. For them there are two approaches: stimulation, which is upcoming — not working too well at the moment, but in development — and resective surgery. For this you need to know what to resect, and this is where the presurgical workup comes in.
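To make this pathology-to-parameter mapping concrete, here is a toy illustration (all numbers are assumptions for illustration, not values from the talk): demyelination enters the model as slowed conduction, i.e. longer time delays, while epilepsy enters as a shift of a node's excitability parameter.

```python
# Toy sketch: pathophysiology enters as a parameter map.
# Assumption: a 2-region example with made-up tract lengths and speeds.
import numpy as np

tract_lengths = np.array([[0.0, 80.0],
                          [80.0, 0.0]])    # mm
v_healthy = 6.0                            # m/s conduction speed
demyelination = 0.4                        # fraction of speed lost (MS)

delays_healthy = tract_lengths / v_healthy                     # ms (mm / (m/s))
delays_ms = tract_lengths / (v_healthy * (1 - demyelination))  # ms, slower

# Epilepsy instead shifts a node parameter: the excitability of the EZ.
x0 = np.full(2, -2.2)    # stable baseline (toy value)
x0[0] = -1.6             # hypothesized epileptogenic zone: more excitable
```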
Typically you start with EEG; you look for lesions in the brain with MRI — that is one of the best predictors we have at the moment; and based on this information you have to decide where to implant electrodes. These are stereotactic depth electrodes — needles about this long, with a spacing of a few millimeters between contacts and about 10, 15 or 20 contacts each. You implant them, you obtain this type of measurement, and on this basis you essentially have to decide where to make the resection. This is work from Maxime Baud, in Bern I believe: over the past 30 years — he has data for the past 30 years, but it is actually valid for the past 50 — independent of the type of epilepsy, temporal lobe or extratemporal, with lesion or without, there has been hardly any improvement in surgery success rate. For temporal lobe epilepsy we are at around a 70% success rate, and it goes all the way down to 25% in the frontal lobe. This is disastrous, and it is thought to be due to bad identification of the epileptogenic zone — that is the key claim. For us, working in the modeling domain, this translates into a crystal-clear statement: how can we better identify the epileptogenic zone? Essentially we take this digital twin loop — the virtual brain here on the right-hand side; in vivo input data, which nowadays we put into an ex vivo high-resolution framework; and we use the individual data as priors in an inference framework (it has to be inference so that we can actually place priors) in order to obtain an individualized prediction for this particular patient. That is what I want to show you.

What we need to decide is which data features we are looking at. We talked about functional connectivity yesterday, but it has never been very predictive for seizures; we are interested in the seizure propagation pattern. Here you have a seizure: preictal spikes, seizure onset — a very classic pattern: very low-amplitude fast discharges becoming louder — and then it stops. In the time-frequency representation it is the same seizure: preictal spikes, high frequency, amplitude going down, developing harmonics because of the spike-wave complexes towards the end, and then it stops. What we use in our framework is the envelope function, for every single SEEG electrode; we compute these envelopes at the sensor level. This object, mathematically, if I span it in state space, corresponds to this geometry: resting state, seizure-onset bifurcation, a jump, the spiral evolving, seizure offset, and back to resting state. This is the object we want to sample using MCMC, the Markov chain Monte Carlo techniques we use. With the envelope function as the data feature, technically we project into this plane here, and this is what it looks like: the sampling happens there. We could also go into the individual cycles, but that has not provided any additional information — apparently the phasing of the cycles does not matter, which is actually known quite well in the literature. So this is the data feature we have, and the sampling process is essentially this: you generate many different instantiations of your generative model,
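A minimal sketch of such an envelope data feature (assuming `seeg` is a channels-by-samples array; the VEP pipeline's exact filtering and smoothing choices may differ from these placeholder values):

```python
# Sketch: smoothed log-envelope of SEEG, one trace per contact.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def seizure_envelope(seeg, fs, band=(10.0, 100.0), smooth_s=1.0):
    b, a = butter(3, band, btype="bandpass", fs=fs)
    x = filtfilt(b, a, seeg, axis=-1)       # keep the fast ictal discharges
    env = np.abs(hilbert(x, axis=-1))       # analytic-signal amplitude
    win = int(smooth_s * fs)
    kernel = np.ones(win) / win             # moving-average smoothing
    smoothed = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), -1, env)
    return np.log(smoothed + 1e-9)          # slow log-envelope per contact
```

Discarding the fast cycles and keeping only this slow envelope is exactly the design choice discussed above: the phase of individual discharges carries no extra information for localizing the epileptogenic zone, while the envelope makes the inference problem tractable.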
confront them with the data, and, using certain criteria, estimate the probability distribution of the parameters you are after — so it is not an optimization. This is what you get; in blue is real data, by the way. Now, the excitability for one brain region: if you take a second-order approach — very naive variational inference approaches, which you often find especially in neuroscience; DCM, for instance, does a Gaussian approximation — this is what you get here. It is helpful because it gives you a first shot, but you do not capture the details of the multimodality. This peak here means epileptogenic: cut. This one means not epileptogenic: do not cut. You want to know that. And when you do it not only for a single region but unfolded over multiple regions, you have a multidimensional parameter space and you want to sample it — these are your posterior probabilities, the different possibilities with the probability mass underneath each peak. This can be formalized and communicated to the clinician, and this is what we do: we provide a heat map together with the probability distributions. The heat map for the maximum of a particular realization is represented here, overlaid with a post-surgical anatomical MRI — post, so after the surgery. In this case there is a very good correspondence.

What I want to show you now is a use case coming out of our lab. A patient — female, drug-resistant epilepsy since a very young age; it typically starts early. The clinical symptoms were known; usually you go through a cycle of three or four medications until you are declared drug resistant. EEG: no spikes observed. MRI: nothing found. Going to 7 Tesla: something in the left orbitofrontal cortex indicating a lesion, but it was not entirely certain. She was implanted and explored with SEEG. This is the area of the lesion that may be visible at 7T; you see some of the discharges, and also the pattern here — the delayed onset: the fast discharges, some tingling, then the fast emergence, until it slows down at the end. So this is the corresponding SEEG. Now, there are certain procedures — we talked about inverse solutions yesterday, didn't we? There are different techniques: high-resolution EEG, LORETA (low-resolution electromagnetic tomography, as some of you may know), which can map onto the cortical surface. It provides you with this; the FCD, the dysplasia, is located here: no correspondence. Same situation here: simultaneous EEG and MEG did not necessarily improve the localization. It depends — this is a clinical use case, and in this case MEG did not help; and we do this regularly, we have an MEG in-house on the clinical site, so many of our patients go through MEG, but here it did not help when the data were analyzed. I fully support MEG, in particular with the OPM technologies that obtain these signals optically — wonderful, very promising — but here it did not help. Just to give you an idea, different connectivity-based indices can be computed on the SEEG data; they pointed to two regions, the left orbitofrontal cortex and the left insular gyrus. When we performed the estimation of the heat map with the workflow I showed you, we got a pretty good correspondence, plus one additional area.
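A toy sketch of the point being made about Gaussian approximations versus sampling — a bimodal posterior over one region's excitability, a stand-in assumption for illustration, explored with a random-walk Metropolis sampler:

```python
# Sketch: why a Gaussian (variational) fit misses the clinically relevant
# multimodality of an excitability posterior.
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):   # two modes: "not epileptogenic" vs "epileptogenic"
    return np.logaddexp(-0.5 * ((x + 2.2) / 0.3) ** 2,
                        -0.5 * ((x - 1.8) / 0.4) ** 2)

samples, x = [], 0.0
for _ in range(20000):                       # random-walk Metropolis
    prop = x + 0.5 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    samples.append(x)

samples = np.array(samples[2000:])           # drop burn-in
print(samples.mean(), samples.std())         # Gaussian summary: misleading
print((samples > 0).mean())                  # P(epileptogenic): what you want
```

The Gaussian summary lands between the two modes, where there is almost no probability mass; the sampled posterior instead reports how much mass sits under the "cut" mode versus the "do not cut" mode, which is what gets communicated to the clinician.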
That additional area was certainly not as large, but three epileptogenic zones were identified in there. The patient went through surgery; the clinicians decided to remove the first two areas, which is a correspondence with VEP of 67%. After one month she had seizures again, and she was classified Engel III — there is a classification scheme: Engel class I means seizure-free, Engel II means improvement but some seizures, and Engel III is essentially not good. The clinicians then came back to us and asked: what about this additional area that you identified? We went back into the MEG and zoomed into the area identified by VEP — the Virtual Epileptic Patient, as we call this workflow — and actually identified spikes in this area. That was another indicator, and together the evidence suggested another surgery. She went through surgery last year in summer and has been seizure-free since. So this is one of those success stories where the process was actually part of the clinical routine.

You can now go retrospectively — this is a little bit what we discussed yesterday — and compare the performance for 54 patients, asking how the heat map overlaps with the clinical hypothesis. There are different statistical metrics you can use; here it is precision. The precision is pretty good — 77% for the patients that are seizure-free — and jumps down for the not-seizure-free patients. And if we compare against the post-surgical MRI, as I showed you earlier: for the clinical hypothesis the precision shows a very good correspondence with VEP, but here it drops completely. This, and other indications, told us that there is probably some added value in performing the VEP, because of this strong correlation, in the retrospective data, between surgery failure and non-correspondence with the prediction. And this improves further when we go from low to high resolution — I want to show you this one in particular: look at the precision values for the seizure-free group at high resolution, how they gather nicely at very high values. That is due to the improved source-to-sensor mapping, which makes perfect sense, but it is nice to see in the patients. We had only 14 patients here, because out of the 25, 11 had too much atrophy in their data and we were not able to reconstruct the high-resolution representation — that is a disadvantage here.

So, in summary, what is happening there? Is it simply better at identifying what the clinicians do not see, or something more? This is what I want to illustrate. You do EEG for a particular patient, MRI, PET, whatever you can preclinically, and then you decide about an SEEG implantation — the first big invasive step. Based on this you make a decision about the epileptogenic zone, and typically from there the decision to do surgery or not. What we are doing is adding a full brain network — based on structural data, not functional data — and then performing partial sampling: we essentially inform the whole brain network only here, through the maps from the sources to the sensors.
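A quick sketch of the two region-level metrics quoted here and in the next passage — precision, and the false discovery rate mentioned below — assuming binary region labels (a simplifying assumption; the published analyses define the comparison against resection masks):

```python
# Sketch: 'predicted' = regions in the VEP heat map above threshold,
# 'resected' = regions removed in surgery (toy binary vectors).
import numpy as np

def precision_and_fdr(predicted, resected):
    predicted = np.asarray(predicted, bool)
    resected = np.asarray(resected, bool)
    tp = np.sum(predicted & resected)        # predicted and resected
    fp = np.sum(predicted & ~resected)       # predicted but not resected
    precision = tp / (tp + fp)
    return precision, 1.0 - precision        # FDR = FP / (TP + FP)

print(precision_and_fdr([1, 1, 0, 1], [1, 1, 0, 0]))  # toy example
```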
Here is essentially the activity that enters, in the particular organization we have, but we cover the entire network with the model — which means we can also make predictions about the so-called missing-electrode problem in epileptology: there may be areas outside the region accessible to the electrodes, but we capture them with the model. In principle, in theory — does it work? The answer is actually yes. This is the false discovery rate for the 25 patients I showed you: for the clinical hypothesis, seizure-free versus not seizure-free, and the same for VEP — a very small false discovery rate is good — and it shoots up for those that are not seizure-free. And 29%, about a third, of all the regions corresponding to the surgical failures fall into the category outside the areas accessible to the SEEG. So, answering yesterday's question about the added value of a model in medicine: it opens up the range that is invisible to you. That is another possible answer one could give.

So where are we going with this? Those were 54 patients retrospectively; now we are doing this prospectively. Since 2019 we have been running a clinical trial in France with 13 reference centers all over France — independent centers that collect data, structural, functional, SEEG, following this routine — centralizing it in Lyon. We access it, we virtualize the patients — almost 400 of them — pass them through the pipeline, produce a report with a heat map and send it back to the centers. There are essentially two branches: one branch, half of them, receives the information, and the other half does not; we call this VEP-positive and VEP-negative. The clinical trial just finished in June this year — at least, patient inclusion finished: we had 356 patients, around 300 randomized, some excluded, and 178 surgeries performed so far. We have to wait one more year, because not all surgeries have been performed and we want to wait one year after surgery to be able to judge whether the patient is indeed seizure-free, because that is our criterion. The primary objective is: can we improve the surgery success rate — the one, I remind you, that has not changed for the past 50 years. The secondary objective is to learn more about the intricacies of the technologies, the modeling, the inference, and so on.

That is what I wanted to share with you: this is the workflow we hope for, translating it into clinical applications — in vivo data, merged into this framework, and then performing the inference. It has to be inference, not just optimization: you need to know how much you know, and how much you don't know. That is why it is important to have an inference framework with posterior probability distributions — if you don't know, you want to know that you don't know. This is very important, and that is why we chose this admittedly complicated approach. This is a vision for the future, and we are working on it. I wish to thank my colleagues and friends who have worked on this, and also the funders. Thank you.

Host: Thank you very much, Viktor, for this beautiful presentation. The floor is open for questions.

Q: Thanks a lot, Viktor, this is an amazing talk — really impressive work, impressive results.
I have a lot of questions; I will only ask the main ones. First, a clarification and a follow-up from what I asked yesterday. At the input, of course, you provide data from the anatomy, from diffusion data and so on; and from the SEEG point of view, you said you mainly introduce the envelope of the signal — we could do something else, but you chose the envelope because of the technique you used. So my first question is: what is the influence — are you putting into the model one seizure, several seizures, as many seizures as possible? And my second question, following a little on the earlier comment: all implantations are not equal. Sometimes we miss the main focus and we know, more or less, that we are mainly recording the propagation. Do you have an idea how the model behaves in those cases? These are two different things: either you catch the focus but there is a second one — the red zone that you missed — or you missed the focus entirely, in the sense that your implantation was not well targeted. How does the model behave when the data you put in is not the ideal input?

A: A little background for the last question: each SEEG contact samples approximately one centimeter around it, not more, so it is very easy not to sample some other epileptogenic zone — that is why the missing-electrode problem is so important. Let's start with the first question. The best answer is to tell you how the clinical trial protocol is organized: we get typically one to three seizures; spontaneous seizures are preferred, and if there are none, we work with stimulated seizures. That is the number of seizures we typically have, and it is sufficient to perform a reconstruction. At the moment we use the envelope function as the data feature. That was a choice, because we use a Monte Carlo technique. Today, four years later, we would do it differently: the techniques in machine learning and AI have evolved enormously in the last four or five years. There are techniques such as simulation-based inference — SBI, for instance — which is superb and is what we use today, but in the research domain, not in the clinical trial.

For the second question: as you saw from the data I showed you, 30% of the estimated epileptogenic zones lie outside the accessible zone, so that already tells you there is some predictive power in this regard. In the low-resolution virtual brain framework we discovered, shockingly, only in the last half year that there is a high sensitivity to the location of the SEEG contact: moving it by a few millimeters will not change the results completely, but it changes the ranking of the elements. One of the reasons is a very bad mapping from source to sensor. Imagine the following: if you have the 16 square centimeters and you fold it like this, at high resolution you also have the dipolar momentum; if you reduce everything to a single point, you have no momentum, so you take only the distance information into account. That is not good enough, and it is the key reason why we had to go to high resolution; it improves things enormously.
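Since simulation-based inference is named here as the current research tool, a minimal, hedged sketch with the open-source `sbi` package follows; the one-parameter simulator, prior bounds and data feature are placeholder assumptions, not the VEP setup:

```python
# Toy SBI sketch: amortized posterior over one excitability parameter.
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def simulator(theta):
    # Placeholder forward model: excitability -> noisy 'envelope peak'.
    return torch.tanh(theta) + 0.05 * torch.randn_like(theta)

prior = BoxUniform(low=torch.tensor([-3.0]), high=torch.tensor([1.0]))
theta = prior.sample((2000,))          # draw parameters from the prior
x = simulator(theta)                   # simulate matching data features

inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_obs = torch.tensor([[0.6]])          # observed data feature
samples = posterior.sample((5000,), x=x_obs)
print(samples.mean(), samples.std())   # posterior over excitability
```

The appeal over MCMC, as hinted in the answer, is that the expensive simulations are done up front; conditioning on a new observation is then cheap.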
But the clinical trial started in 2019 — again, this is what I would do differently today, but this is the way to go. So: at low resolution we find a large sensitivity to contact location, which becomes much better at high resolution; we find many regions outside the accessible region; and we find evidence that this is predictive. Beyond that I cannot tell you anything yet, but I will show some patient data in the other lecture.

Q: I have a naive question: is it possible to build a model without using SEEG data from the patient as a prior?

A: Yes, we don't need SEEG data. The model building happens entirely on the basis of structural data, ex vivo or in vivo, and then we put in the appropriate mathematics with which we work. Where the personalization comes in is with the functional data. The first level of personalization is to use the individual's own structural data; the second is the functional data — it can be SEEG, fMRI, EEG or MEG data. And something that is actually fantastic — are you an epileptologist? Yes? Fantastic — in this workflow we can demonstrate the following. It is unpublished, but we now have a proof of concept: we can do the same thing with EEG. Why can we generally not use scalp EEG data? Because we don't capture any seizures; interictal spikes you see sometimes, sometimes you don't, and so on. But there are new technologies coming, in particular for stimulation — I'm not talking about tDCS or the like, which stays close to the surface — there are new techniques such as temporal interference that allow you to go deep. In our institute, Adam Williamson uses it with multiple nodes, allowing the focus to be reduced to a size of two to three millimeters — so suddenly you have a deep focus of two to three millimeters where you can stimulate. We have demonstrated proof of concept that we can do this non-invasively: trigger a seizure or not, so we can do the sampling the way you do it in the clinic with your patients when you go through a grid, stimulate one electrode, change the frequency, etc. If you're interested, I can show it to you. We can do this in a patient-specific manner, and we can actually replace this workflow, with its invasive SEEG, by EEG: non-invasively trigger the seizure and, based on those data, perform a reconstruction. Is it better than SEEG? I don't know — in SEEG you sometimes have a missing electrode; in EEG you have no missing electrodes, but you have all the issues associated with scalp EEG, so we need to run studies. But we have proof of concept that we can render this type of presurgical exploration non-invasive, in principle. Very early days, but very exciting.

Q: Thank you. So you never use fMRI?

A: Yes, we do, but not in this example. We use fMRI — Randy spoke about this — in aging studies, for instance; we have built an entire healthy-aging cohort, and in Alzheimer's we use fMRI data, etc. fMRI is a different domain, but the same principles apply. That is why I showed you this translation to the clinic: you can paraphrase every single question depending on the pathology, the corresponding pathophysiology and the imaging technique, and every time you have to ask the same questions: what is the data feature, is it informative, how do I represent it in the model, and — going through the inference cycle — is the corresponding pathophysiology identifiable or not? What we have shown,
for instance in dementia, is that with resting-state fMRI, if we want to detect the plaques, we don't manage in the resting-state paradigm, but we do manage in a stimulation paradigm. If we ever manage to get this temporal interference setup into an MRI scanner, then start sampling the individual brain areas, perturbing them and letting them relax back, this relaxation samples much more information and renders the pathophysiology of Alzheimer's, in the context of the model, identifiable — which is pretty cool.

Q: Very beautiful presentation, very nice, thanks. My question is the following; let me give an example. You have a car — I will not ask which model you have — and you get a flat tire. [Laughter] So, you know how to model a car, and let's say that one of the parameters for modeling the car is the radius of your tires. Now you get a flat tire, so you know there is something wrong with your car. One thing you can do is say: okay, I have my model of the car — the healthy car — and I try to fit the pathological situation. So I just reduce the radius of one tire and see whether, by reducing the radius, I can fit the behavior of a car with a flat tire. But we know that is not enough: a car with a flat tire is not the same as a car with one tire of smaller radius. Now, what we usually do, and what I see you doing, is exactly taking the model of a healthy car and changing one or another parameter according to the pathology — say, problems with myelination, then you change the tau parameter, the delay in transmission, whatever. But it is always the same: starting from a healthy model and adapting its parameters to describe a pathological situation. My point is: is there any effort, in the virtual brain and your approach, to model a pathological condition directly, without starting from a healthy situation and just changing parameters in the healthy model? Wouldn't you agree that you always start from the healthy model?

A: I think you are talking more about macro versus micro representation — because what would be an example of modeling the pathophysiology? In multiple sclerosis, the example you just brought up, it is the demyelination of the fibers; you see the callosal volume going down. That is modeling the pathophysiology.

Q: What I am saying is that sometimes a pathological behavior complicates the situation, and just changing one parameter in a model that was working for the healthy situation may not be enough. If you want to model a flat tire, you cannot just take the model of a healthy car, change the radius of one tire, and pretend you are modeling the pathology.

A: What if I also change the elasticity properties of the tire — would you then be happy?

Q: You need to do more than that; that's what I'm saying.

A: So not just one parameter but multiple parameters, or potentially going more microscopic in order to have more biophysical detail in there. Yes — this is multiscale modeling, and we do it. In the beginning I showed you the co-simulation of NEST and the Virtual Brain.
We are capable of doing some of these things, although I am not driving this myself — this is more our Italian friends; I typically stay on the macroscopic level. However, I want to know how microscopic features inform my macroscopic parameters, so in this sense we are doing it: the multiscale cascade of cause and effect, which then provides multiple parameters, whether they matter or not. This is typically done separately; it is one of the approaches we are working on. We model this in a high-dimensional, more cellular approach and try to sample the different types of behaviors; then — bless you — we take the outcome, typically parameter manifolds, and use it to constrain our macroscopic virtual patient model. It's a little complicated. One naive answer would be: I could build a cellular model of the brain and represent every single cell, etc. This doesn't work; you run into the parametric explosion — other people have tried it and you get completely lost. So you need smarter approaches. What we do, in our hands, is estimate the cause-effect from microscopic to macroscopic — this is all ongoing work at the moment — and try to parameterize this cause-effect, much as we discussed yesterday when you gave your presentation: multiple qualitative behaviors as a function of parameters. Then we take this behavior and implant it into the virtual brain model. There is an assumption behind this — we call it weak coupling in nonlinear dynamics: assuming that the basic characteristics of the dynamical system at the network node do not change when you put it into a coupled network. This is a strong assumption; all of us make it all the time, whether we say so explicitly or not.

Q: If there are no other questions — I think my question yesterday may have offended you, and I wanted — I just sent you a message — I want to apologize to all mathematicians in this room. I have absolutely no doubt that mathematics is the basis of every single piece of knowledge and advance we have made in medicine.

A: Not at all. Afterwards Randy and I actually discussed what the best way of responding to you would be; we were reflecting on it, and I said: Randy, I will mention it tomorrow.

Q: I am a biomedical engineer, so I have played with mathematics. All I wanted to say — I think I was referring to the issue of using mathematics to address things such as psychosocial determinants of health; that is where I think mathematics might have a bit of difficulty. But in medicine we can't do without it.

A: No, you placed it very nicely into this social context, and Randy responded to it very well — but it's a question that can go wider, so it's a very important question.

Q: Hello, thank you for a great talk. I do research on embodied cognition, working with virtual reality and related things, so in mathematics and neuroscience I just have a taste of everything — this might be a naive question. When I looked at the folder, what I was very interested to see was embodied cognition actually coming back into this pipeline, the simulation environments. So my question — maybe naive, broad and high-level, but I am curious about your answer anyway: when I see this, I think of modeling a brain in a vat.
Okay. If there are no other questions...

I think my question yesterday may have offended you; I just sent you a message, but I want to apologize to all the mathematicians in this room. I have absolutely no doubt that mathematics is the basis of every single piece of knowledge and advance that we have made in medicine; not at all.

I apologize to you. Afterwards Randy and I discussed what the best way of responding to you would be; we were reflecting on it, and I said, Randy, I will mention it tomorrow.

Absolutely. No, I am a biomedical engineer, so I have played with mathematics. All I wanted to say, I think, referred to the issue of using mathematics to address things such as psychosocial determinants of health; that is where I think one might have a bit of difficulty with mathematics. But in medicine we cannot do without it.

No, you placed it very nicely into this social context, and Randy responded to it very well. But it is a question that can go wider, so it is a very important question.

Hello, thank you for a great talk. I do research on embodied cognition, working with virtual reality and all these things, so with mathematics and neuroscience I just have a taste of everything; this might be a naive question. When I looked at the folder, what I was very interested to see was embodied cognition actually coming back into this pipeline, into the simulation environments. So my question, maybe a naive, broad, high-level question, but I am curious about your answer anyway: when I see this modeling of the brain, of a brain in a vat, as I am sure many of you will agree, we are not just brains in vats; we interact with our environments, and so on, hence the robotics. So I am curious, also for the future, with these more simulated environments, virtual reality, robotics, prostheses as you say, but also for your clinical work: not just brains in vats, but interaction with environments. What is your stance or strategy for placing them back into the real world, or creating a richer context, to maybe inform the modeling itself?

Very good question, very important. I myself am not working in this domain, but here I am responding as chief science officer of EBRAINS. We are very aware of this, and world modeling, that is, modeling of the external world, plays a very important role; people such as Egidio D'Angelo, for instance, are working on this. What you find in EBRAINS is by far not as advanced as we would like it to be. There is a lot of activity in software development for external world modeling, and we do not want to recreate it. What we created is the so-called NRP, the Neurorobotics Platform; the name is a misnomer and gives you the wrong impression. What it does is allow you to plug different pieces of software together in the same environment: you put each piece of software in its own virtual box, you unpack it, and then it can interoperate with the other software. This is what we are using at the moment, and it lets us combine different types of software. There you will find, for instance, a representation of a rodent skeleton; you can use a cognitive architecture coming from the rodent to control muscle fibers and have the rodent operate on a treadmill. If you do not want it that fancy, you can reduce the agent to a point and have it navigate in a spatial environment, interact with visual signals, locomote, and work its way around obstacles, all using the cognitive architecture; not connectome-based, but much-reduced modeling of individual areas. You have multi-region models represented by spiking neurons using the NEST or NEURON simulators, but classic cognitive architectures, so predictive coding: you have an internal model and you want to navigate in this space. Spatial navigation is what has been used mostly. So this is possible, by far not as much as we want, but there are one or two use cases that may be of interest to you, and the technologies are in place; we need good users who build better use cases. It is really the beginning.

I saw this Angora P also, so I made a note of that and I will definitely look it up.

Exactly; you made a very good note.
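The closed-loop logic just described (environment to sensory input to neural controller to action, and back) can be sketched in a few lines. This is my own illustration, not NRP code; the leaky-integrator "brain" and every constant below are assumptions chosen only to show the loop structure:

```python
import numpy as np

# Toy closed loop: environment -> sensory input -> neural controller -> action -> environment.
rng = np.random.default_rng(2)
pos = np.array([0.0, 0.0])      # point agent in a 2-D world
goal = np.array([5.0, 3.0])
state = np.zeros(2)             # "neural" state: one leaky integrator per movement axis
tau, dt, gain = 0.5, 0.1, 1.5

for step in range(200):
    sensory = goal - pos + 0.1 * rng.normal(size=2)   # noisy egocentric goal direction
    state += dt / tau * (-state + gain * sensory)     # leaky integration of the input
    action = np.clip(state, -1.0, 1.0)                # bounded motor command
    pos += dt * action                                # the action changes the world,
                                                      # which changes the next sensory input

print("final distance to goal:", np.linalg.norm(goal - pos))
```

The point is that the sensory stream is generated by the agent's own actions, which is exactly what a brain-in-a-vat simulation lacks.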
A small follow-up, if I may, one last question about social cognition: is there anything more social in the whole-brain simulation project?

In terms of modeling, nothing; in terms of data, yes. For instance, the 1000BRAINS cohort collected in Jülich, a healthy-aging cohort from 55 to 90 years, comes with beautiful structural and functional data, all MRI and fMRI, but it has also been calibrated very beautifully with cognitive tests, filled-out questionnaires (whether the participants do physical exercise, for example), reaction tests, and so on. All this information is available. So in terms of data there are rich data sets; in terms of modeling, very poor, nothing. I cannot think of another dimension at the moment.

Okay, thank you.

A comment and a question. The comment is about EBRAINS: you mentioned that the storage is free?

For a standard single user, yes, and within reason; if you then want to go beyond that, reach out and interact with us and we will find solutions.

So it is free for standard research use. That makes sense. And have you solved the problem of regulation of clinical data?

Solved? It is going to accompany us for many years, but what we have today is functional. There are three approaches; I will give you the quick answer. One is called the Health Data Cloud: a cloud system with multiple layers of controlled access; this is Petra Ritter's work. That is one possibility. The second is the HIP, the Human Intracerebral EEG Platform: it is specific to SEEG data, a protected environment in Geneva, also with controlled access; a different organization, very specific, but it is functional as of today. If you apply for an account, it will be approved within 24 hours (that is a promise); once you cross all the authorization barriers, you can access some of the SEEG data today. Number three is the MIP, the Medical Informatics Platform, in which I think about 40 hospitals participate. You go behind the firewall of the hospitals; services are provided on the hospital servers, and you deploy what we sometimes call EBRAINS in a box: you unpack it on the server, and your functionality is quite limited, but it allows you to access patient data. No patient data will ever leave the hospital, but some of the results can leave, and you can then process the appropriate results statistically. This is good for rare diseases: imagine each hospital has two or three individual cases, times 40 hospitals, and suddenly you have a cohort where you can reach statistical significance in a way you would never be able to otherwise, simply because the patients do not exist elsewhere. So, different use cases, different scenarios. Have we solved it? No, we are working on it, but it is getting there. This is a general problem that we have all had.
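The federated principle behind the MIP, that raw data stays behind the firewall and only aggregates leave, can be sketched as follows. This is a generic illustration, not the MIP's actual API; the data and function name are hypothetical:

```python
import numpy as np

def local_statistics(values):
    """Computed behind the hospital firewall; only these three numbers leave."""
    v = np.asarray(values, dtype=float)
    return len(v), v.sum(), (v ** 2).sum()

# Three hospitals with two or three rare-disease cases each (toy measurements).
site_data = [[42.0, 55.0], [48.0, 61.0, 39.0], [52.0, 45.0]]

# The coordinator pools sufficient statistics without ever seeing patient records.
n, s, ss = map(sum, zip(*(local_statistics(d) for d in site_data)))
pooled_mean = s / n
pooled_var = (ss - n * pooled_mean ** 2) / (n - 1)   # pooled sample variance
print(f"pooled cohort: n={n}, mean={pooled_mean:.1f}, var={pooled_var:.1f}")
```

With 40 sites instead of three, the same pattern turns scattered individual cases into a cohort large enough for statistics.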
One question about the multiscale approach: you presented the NEST co-simulation, but for epileptic seizures you do not use this multiscale approach; you are at the macroscale. All the features you use for prediction are at the macroscale, right?

Right. In application to patients, in clinical use, we are completely at the macroscale; for research applications we use the co-simulation, and we are working on the spiking network.

So my question is: when you connect this microscale level, does it add value to the prediction you are looking at?

So far, not in our hands. In our hands, so far, no.

Okay. The last question is about the insight we might expect from this multiscale modeling: could something come out of it?

So far not; it is starting right now, going to high-resolution modeling. For instance, stimulation: usually you do not stimulate as a function of the traveling wave. At high resolution we can actually show these waves. In cardiology you do exactly that: you measure the propagating waves on the heart muscle and then stimulate as a function of the phase. In epilepsy, not at all. Now, at high resolution, for the first time we can envision something like this; until now it was not even possible to formulate the question. So it is possible, but we have not done it yet.

The last part is about prediction: using the Bayesian framework, simulating back to the data, and doing prediction and evaluation. The predictions and the distributions you show are the features you give to the clinical side, and they decide based on this kind of prediction. Correct or incorrect?

There is a staff meeting where the responsible clinician presents a patient: the patient is of this age, this is the clinical history, MRI-negative or MRI-positive, the EEG has shown the following, and so on. You go through all these elements, and the virtual patient becomes one of them.

And the role of The Virtual Brain, the simulation side, is mostly to simulate the activity of the different areas of the brain, and the feature you extract from the data is the envelope of the SEEG, or the envelope of the LFP that you simulate. Is that right?

No. The data feature of the empirical SEEG goes into the inference framework. Careful: it is a causal inference framework, which means we have a generative model; this is the virtual brain, and that is what we are sampling. The simulation itself (I have to answer carefully) can either generate the full seizure at the highest resolution, or we also have a reduced version which simulates only the envelope function. So yes and no: both variants exist.

So my understanding is that the simulation produces this envelope, or the whole dynamics of the system, and the inference estimates some parameters to do the prediction. My question is about the personalized parameters that you tune in the simulation and estimate with the Bayesian approach. What is your experience: do you mostly end up estimating, I would say, the right parameters? Bayesian inference needs priors, and those priors may change depending on the context. How does this affect the estimation of the parameters and the prediction you are looking at? Or is the approach robust enough to use something like non-informative priors for this kind of prediction? What is your experience?

We have done a lot of work on this; over the last five years we have put loads of emphasis on it (well, not done nothing else, but close). The most detailed simulation you see here is not just envelope functions; it is detailed discharges and so on. The priors, typically: if we have a lesion, for instance, if the patient is MRI-positive, then we narrow the prior in certain areas and bias it a little toward the lesion; otherwise the prior is quite wide, because we want to allow the sampler to explore different alternatives and solutions.
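Two of the ingredients just described, the envelope data feature and the lesion-informed prior, can be compressed into a short sketch using generic tools. The signal is synthetic, and the region labels, prior locations, and widths are my assumptions, not the clinical pipeline's settings:

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import norm

# Synthetic "SEEG" trace: a seizure-like 12 Hz burst on background noise.
fs = 512
t = np.arange(0, 10, 1 / fs)
burst = (t > 4) & (t < 7)
x = 0.2 * np.random.default_rng(3).normal(size=t.size) + burst * np.sin(2 * np.pi * 12 * t)

# Data feature: smoothed amplitude envelope via the Hilbert transform.
envelope = np.abs(hilbert(x))
envelope = np.convolve(envelope, np.ones(fs) / fs, mode="same")  # 1-second moving average

# Priors on regional excitability: wide by default so the sampler can explore,
# narrowed and shifted toward higher excitability where MRI shows a lesion.
regions = ["hippocampus_L", "amygdala_L", "temporal_pole_L"]     # hypothetical labels
lesion = {"hippocampus_L"}
priors = {r: norm(loc=-1.0, scale=0.3) if r in lesion            # biased, narrow
          else norm(loc=-2.5, scale=1.0)                         # wide, weakly informative
          for r in regions}
print({r: (p.mean(), p.std()) for r, p in priors.items()})
```

The trade-off is exactly the one raised in the question: a narrow prior speeds convergence when the lesion hypothesis is right, and misleads when it is not.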
How well does it infer the epileptogenic zone? It depends on the patient. Sometimes patients have different types of seizures; sometimes an epileptogenic zone is not identifiable. We can always get results if we bias the inference strongly with the prior, but that does not mean much: it typically means that the sampler got stuck at the prior and was not able to find other solutions. Sorry, this is becoming a little technical, but essentially it is a sampling paradigm, and if it is stuck at the prior we need to allow it to sample other possibilities. It can absolutely happen that the posterior probability distribution is quite wide, not uniform but wide, and we cannot make a very knowledgeable statement; this we pass on to the clinicians, essentially saying we are not sure. The multimodality you saw is reality, which is actually very interesting, because you see this with your patients as well. Sometimes you cannot say, but you can still say whether a region is a good candidate for the epileptogenic zone or not when there are several possibilities. So we see everything you can imagine: multimodality, non-identifiability, very strong convergence. It depends on the seizures of the patient, on the implantation scheme, and so on. So we are very careful with the sampling, and if it does not converge we have multiple criteria and quality checks, which we apply in the trial to make sure it converges. But we also know when we do not know, and that happens more often than we would want.
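One standard way to catch the "sampler stuck at the prior" failure mode mentioned above is the split-R-hat diagnostic (Gelman-Rubin): values well above 1 flag chains that have not mixed. A hand-rolled sketch, generic rather than the trial's actual quality-check code:

```python
import numpy as np

def split_rhat(chains):
    """Split-R-hat for an array of shape (n_chains, n_samples)."""
    half = chains.shape[1] // 2
    splits = np.concatenate([chains[:, :half], chains[:, half:2 * half]], axis=0)
    m, n = splits.shape
    chain_means = splits.mean(axis=1)
    W = splits.var(axis=1, ddof=1).mean()   # within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled posterior-variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(4)
mixed = rng.normal(size=(4, 1000))                        # well-mixed chains
stuck = mixed + np.array([[0.0], [0.0], [0.0], [3.0]])    # one chain stuck elsewhere
print("R-hat, mixed chains:", split_rhat(mixed))   # close to 1.0
print("R-hat, stuck chain :", split_rhat(stuck))   # well above 1.0
```

A check like this cannot prove convergence, but it reliably flags the case where one chain never left its starting basin, which is precisely when a posterior that merely echoes the prior should not be trusted.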
That's great; thank you for this response. We are going to stop the discussion of this first part here.