International Seminar in Semiotics, Paris, 2023-2024: “Énonciation(s) et passions dans les territoires sémiotiques ouverts par l’Intelligence Artificielle” (Enunciation(s) and passions in the semiotic territories opened up by Artificial Intelligence)

    Eleventh session, Wednesday 29 May 2024:
    Andrea Valle (University of Turin)
    From grammar to text. A semiotic perspective on a paradigm shift in computation and its usages

    Find the seminar videos and all the resources of the Association Française de Sémiotique at:
    https://afsemio.fr/

[Introduction by the moderator, largely lost in the automatic transcript: Andrea Valle is presented, with reference to his work on contemporary music notation, on semiotics and aesthetics, and to the Turin center for research on multimedia and the audiovisual.]

Okay, so, as I was saying, I am very happy to speak for the first time at this seminar, a very famous seminar; at the same time I am a bit afraid to present my considerations at this point of the seminar, because, you know, all the other interventions have sort of systematically taken and burned the few considerations I had in mind (you know, like the hunter-gatherer slash-and-burn technique: everything is slashed down and burned). So I hope I will say something at least a bit interesting.

First of all, it seems to me relevant to observe that we have been talking about AI, but we typically meant deep learning. This aspect was clearly underlined in the seminar, notably by Stockinger's contribution. We are certainly experiencing a moment of hype, which I think concerns many people even in the technical-scientific field, but this sort of enthusiasm, maybe amazement, wonder, seems to me to be higher among humanists than among scientists. The debate over neural networks (as we are mostly speaking about deep learning) undoubtedly has a long history in the field of AI. The issue is eminently epistemological and concerns the nature of the explanation. Turner's observations are still, to me, perfectly relevant: the interpretation of inputs and outputs is usually pre-specified by the researcher, but this is not the case for units in hidden layers, and for these no clear interpretation will be possible in most cases, even after learning, due to the distributed nature of the representation. The advantage of this is that the researcher does not have to make unnecessary assumptions about the representation; the disadvantage is that it can make it more difficult to define what the real model does and why it does it, and thus more difficult to extrapolate its behavior to the real world. That is the point.

On the history of, and the current debate on, AI: a very interesting book, much appreciated by the AI community itself, is the one by Antonio Lieto, unluckily for us now an ex-colleague. This book, among other things, poses a specific problem certainly close to semiotic interests: the verification of the cognitive plausibility of computational architectures. The topic is controversial, and Lieto, for example, clearly distinguishes between functional (the way it works) and structural (the way it is made) models of the mind. The question that has arisen several times in the seminar is in fact that of subjectivity: who do we think these models are? Put like that, it sounds a bit ridiculous, but the subjectivity effects that emerge from deep learning applications are obviously interesting for semiotics; see the many discussions on ChatGPT (Alonso, Compagno, Basso Fossali, Paolucci). Here is the final, somewhat demystifying consideration by Poole and Mackworth on large language models: "Large language models are controversial because of the claims that are made. In particular, there is a lot to be impressed with if you set out to be impressed; however, there is a lot to be critical of if you set out to be critical." Now, this book by Poole and Mackworth is a comprehensive introductory book on AI, now in its third edition. The book declares its reference paradigm in the second part of its title; as you can see, it was somehow the previous hype in AI, but, unlike deep learning,
it did not reach a general audience. That paradigm is agency. The theoretical framework refers to the idea of the computational agent; in this context, deep learning is traced back to one of the aspects of computational agency, namely the learning component, and possibly to the generation of specific behaviors as a function of that learning component. I believe that this idea of agency, on which for example Capet has focused, albeit from an anthropological-phenomenological perspective, and to which Stockinger and Colas-Blaise have also referred, is in fact an interesting terrain of dialogue, a dialogue that has not yet occurred, between AI and semiotics. A computational agent has beliefs, knowledge, plans; in semiotics, the notion of actant concerns precisely an idea of subjectivity oriented towards value, the issue of plans is discussed in terms of narrative programs, the aspect of belief takes various declinations in terms of value or passion, and finally knowledge can be thought of as competence.

So, one side effect of this seminar that I am grateful for is that I finally studied machine learning. As often happens when you look for a serious technical manual, you end up with a book by O'Reilly: this text you can see, by Müller and Guido, is particularly clear and rigorous. It was written, by the way, in the pre-hype period, 2016, which seems like a lifetime ago; it dedicates very little space to neural networks, and not with particular enthusiasm, as you can see. Then I came across François Chollet's book, and I finally found a clear and understandable description of the neural network machinery. This is a remarkable text, and in fact, by the way, it is cited by Poole and Mackworth. On the one hand, this book seems to have an applicative vocation, declared in the title, Deep Learning with Python; in reality, together with this aspect, it provides an epistemological interpretation of deep learning, while not omitting some important philosophical considerations.

So, rephrasing some of these considerations starting from Chollet: what is a neural network? In essence, in a neural network the data are organized into an n-dimensional structure. In 1D it is a vector, just like a sequence or a list, and a position requires just an index: 0, 1, 2 and so on. In 2D the data are organized as a matrix, and each data point can be identified by two coordinates in a Cartesian space. If n is greater than two, then it is a tensor: a 3D tensor can still be visualized, otherwise you have to think of a space in which each point is defined by n coordinates, though typically no more than four or five. This spatial dimension is not incidental. Each layer in a neural network is a tensor to which a transformation function is associated: it simply takes the previous, input layer and calculates each value as a function, as you can see on top, of W and b, and then applies an activation function, here in the formula a ReLU, which means rectified linear unit. The final layer outputs the prediction given the initial input. An actual architecture can complexify the scheme, for example by including many layers or by providing different activation and optimization functions (more on this later), but in the end we always have transformations between adjacent tensors. For example, the colored figure shows the VGG16 network for extracting features from images: it is a chain of transformations, one layer to the other.

So, before thinking about learning: how can we interpret such an information architecture? Chollet seems to me very original here, since he provides a geometric interpretation.
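As a minimal illustration of the layer transformation just described (each value computed from the previous layer via W and b, then passed through a ReLU), here is a hedged sketch in plain NumPy; it is not the slide's code, and the array shapes and names are illustrative assumptions.

```python
import numpy as np

def relu(x):
    # Rectified linear unit: all negative values become zero.
    return np.maximum(x, 0.0)

def dense_layer(x, W, b):
    # One layer: an affine transformation of the input tensor,
    # followed by the nonlinear activation.
    return relu(W @ x + b)

# Toy example: a 4-dimensional input mapped to a 3-dimensional output layer.
rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input vector (1D tensor)
W = rng.normal(size=(3, 4))   # weights
b = np.zeros(3)               # biases
print(dense_layer(x, W, b))   # the layer's output tensor
```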
On the one hand, every tensor, as we know, is an n-dimensional space, as can be seen from the figures, which are examples taken from Chollet. Each function in a layer geometrically transforms the input data in some way. The last transformation, on the bottom, is particularly interesting: it is a visualization of a ReLU. Trivially, the ReLU acts as a threshold function: all values less than zero become zero, the others remain unchanged. The presence of nonlinear functions like this is fundamental. They are nonlinear because they are irreversible: after the ReLU we have lost information, that is, we are not able to apply an inverse function that takes us back to the previous layer. In fact, how could we map all the zeros back to the previous values? There is no more information. In essence, Chollet notes that deep learning involves doing an uncrumpling operation on a very crumpled surface: what a neural network is meant to do is to figure out a transformation of the paper ball that would uncrumple it, and with deep learning this is implemented as a series of simple transformations of the 3D space, such as those you could apply to the paper ball with your fingers, one movement at a time. So, uncrumpling paper balls is what machine learning is about: finding neat representations for complex, highly folded manifolds in high-dimensional spaces; a manifold is a continuous surface, like our crumpled sheet of paper.

Now, I find this geometric interpretation very fascinating and convincing, and it also has the merit of completely eliminating the cerebral mistake about neural networks. One point that has remained since the first proposals of computers is that of the electronic brain; in other words, to put it with Lieto, there is a sort of rhetoric of the structural. The argument is some eighty years old (von Neumann in 1945 already spoke about synapses with regard to computers): the computer is an electronic brain, and therefore, by analogy, just as the brain is somehow implicated in human or animal subjectivity, an electronic brain implies subjectivity. Now, this does not mean, as we have seen, thinking again about Alonso, Compagno, Basso Fossali or Paolucci, that there are no somewhat surprising, and certainly interesting, effects in the output of the system; but in the end a neural network can be complicated, yet not so complex, which is perhaps the true charm of these architectures.

So, how does a neural network learn? The final output is measured by a loss function, compared with a real reference output, which passes the result to an optimization function, which in turn changes the values, called weights, associated with the layer functions. Now, we know this much; but how does it do it? Well, by simply introducing small variations with opposite sign with respect to the output of the loss function: you get a score, and if the score is too big the weights are diminished, if it is too little they are increased. As Chollet notes, this idea reinforces the geometric interpretation: the assumption is that these spaces, these n-dimensional surfaces, are continuous, that is, mathematically differentiable. If they are surfaces (just think about the crumpled paper ball), one can move continuously on them. This is the idea of the gradient, a term largely used in neural network jargon and meant as a generalization of the derivative in mathematics. So the idea is that you have these continuous surfaces, and you can make adjustments on them. This learning mechanism seems particularly interesting to me: evidently, it is based on an idea of feedback, and in a feedback system the output feeds back into the input.
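Before moving to feedback proper, here is a hedged sketch of the weight-update loop just described, with the weights nudged in the direction opposite to the gradient of the loss; the data, the learning rate and the squared-error loss are illustrative assumptions, not the talk's example.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y_true = 3.0 * x + 1.0                        # the real reference output

w, b = 0.0, 0.0                               # weights to be learned
lr = 0.1                                      # size of the small corrective variations

for step in range(200):
    y_pred = w * x + b                            # forward pass of a one-weight "layer"
    loss = np.mean((y_pred - y_true) ** 2)        # the score given by the loss function
    grad_w = np.mean(2 * (y_pred - y_true) * x)   # gradient of the loss w.r.t. w
    grad_b = np.mean(2 * (y_pred - y_true))       # gradient of the loss w.r.t. b
    w -= lr * grad_w                              # update with opposite sign (descent)
    b -= lr * grad_b

print(round(w, 2), round(b, 2))               # close to 3.0 and 1.0
```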
As you can see in the basic schema on top, this idea of re-entry can be described in various ways, for example formally, in recursive terms. So, you see here the definition of a Python function, called fibo, which is a recursive implementation of a Fibonacci sequence generator: 0, 1, 1, 2, 3, 5, 8, 13, meaning that you get the next value by summing the previous two. The point here is that, inside the definition of the function itself, as long as the overall sequence has fewer than twenty elements, the function continues to call itself. And we know that this device can be exploited at a classic textual level: just think about mise en abyme, or the so-called Droste effect.
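The slide's actual code is not preserved in the transcript; what follows is a hedged reconstruction of a recursive fibo along the lines described in the talk (the twenty-element stopping condition comes from the talk, while the exact signature and the default argument are assumptions).

```python
def fibo(seq=None):
    # Recursive Fibonacci generator: each new value is the sum of the previous two.
    if seq is None:
        seq = [0, 1]
    if len(seq) < 20:
        seq.append(seq[-2] + seq[-1])
        return fibo(seq)      # the function calls itself inside its own definition
    return seq

print(fibo())                 # [0, 1, 1, 2, 3, 5, 8, 13, 21, ...]
```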
Now, the diagram below, the blue one, indicates however a feature of feedback systems that is not described by this recursive mechanism. The output is not calculated, as in fibo: it is measured. The output is information external to the system that produced it, and it has to be measured; the output is fully part of the environment surrounding the system. The idea of feedback was first introduced by Rosenblueth, Wiener and Bigelow in their famous 1943 paper, which is the epistemological basis of cybernetics, and I am happy that Pierluigi Basso Fossali is still thinking about this kind of approach. In that paper, Rosenblueth, Wiener and Bigelow introduce the idea of feedback as a measure of effectiveness towards a goal, and indicate that teleology is a fundamental aspect of behavior; teleology here is synonymous, and I am quoting, with purpose controlled by feedback, in particular negative feedback, as an adjusting, tuning operation, like the one performed by an optimizer in a neural network. So Rosenblueth, Wiener and Bigelow propose a clear definition of subjectivity that seems interesting to me for semiotics: it is based on an idea of purpose-directed agency. It is an operational, formal, very general definition; by the way, it seems to me consistent with Sebeok's thesis according to which semiosis is typical of every living being, as the latter is oriented at least towards self-preservation. On the other hand, the idea of feedback is implicit in von Uexküll's idea of the Umwelt, based on a closed cycle between action and perception passing through the environment. But, being formal, the cybernetic definition is even more general: in fact, for Wiener, certain forms of purposeful behavior certainly apply to technological devices. An interesting point, however, is that the feedback mechanism called backpropagation in neural networks is not biologically plausible if we take real neurobiological networks into account, as already noted by the Nobel laureate Crick. So, to end with the neural metaphor: a computational neural network, first, is a set of geometric transformations between adjacent, chained spaces; second, it requires a mechanism, feedback, that has been proposed for the description of agency as purposive behavior and that, at the low, neuronal level, is not biologically plausible. In short, in order to inject a neural network with intelligent behavior, we have to rely on a theoretical, formal construct, feedback, that was proposed to describe high-level agency in humans, animals and machines.

So, deep learning, and more generally machine learning, is the basis of a paradigm shift in computation, as has been noted earlier in the seminar. In classical programming, results are produced starting from the formulation of algorithms, that is, rules, and from the available data; in machine learning, the system receives results and data, and generates the rules by learning. In terms of Peircean logical operations, it could be said that the emphasis in classical programming is on deduction, since a certain result strictly follows from a rule, while in machine learning the focus is on abduction, since a certain fact is proposed as the result of a newly established rule. Thus machine learning can be thought of as a set of technical methodologies for the automation of abduction.

Now, I am very flattered, and at the same time intimidated, by the fact that this simple diagram has been cited here multiple times; it is just a basic diagram with some semiotic annotations. The idea was that programming, meant as the organization of computational programs and also as code writing, as a practice, is indeed a relevant semiotic activity, spawning literally thousands of languages in fifty years and resulting in millions of lines of written code. The whole digital revolution, in this sense, is still rooted in writing as a semiotic activity, as we saw discussed in the last sessions of the seminar. Programming languages are semiotic systems that show an interesting twofold orientation: on one side, towards the machine, which has to perform calculations based on strictly formal instructions; on the other, towards the programmer and the community, so that code can also be thought of as a form of literary writing. An interesting semiotic feature of programming languages is that this double interpretation, on the machine side and on the human side, is not only linked but has to be consistent between the two sides, and yet it allows a relevant degree of semiotic freedom.

Observed from this dual perspective, AI applications based on machine and deep learning do not present particular features: they are standard computer programs relying on the Turing (on top, 1936) and von Neumann (on the bottom, 1945) architecture. As a matter of fact, this can easily be observed by taking Keras into account. Keras is the state-of-the-art library in deep learning, provided by François Chollet himself (you recognize the logo, we saw it before). In the graph here, as an example, solid lines indicate "is written in" and dashed lines indicate "can control". So Keras is written in Python and can control PyTorch, TensorFlow and Google JAX, three standard environments for machine learning. As an example, PyTorch is written in Python, in C++ and in CUDA, a proprietary language directly targeting NVIDIA GPUs; for deep learning, we have to thank video gamers. The graph here, though incomplete, is complicated, because, as an example, Python can be used directly to control the intermediate layers of PyTorch, TensorFlow and JAX, and there is also a C++ interface, meaning that you can write C++ code for PyTorch and CUDA. But considering only the solid lines in the graph, those meaning "is written in", the graph is acyclic: if you follow the lines, you always end up at the GPU. The programmer or user can insert herself, theoretically, at all nodes, from the highest to the lowest. We are always surprised by the randomness, and then the creativity, of deep learning systems; down there, they all run on standard, deterministic machines. So, on one side, nothing changes. But this is just half of the story. The previous scheme we saw from Chollet did not include feedback; the point is that the rules in classical programming are defined, while those in machine learning are progressively inferred. To sum up, in classical programming the programmer starts from the rules, while in machine learning she gets them from the data.
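To make the contrast concrete, here is a hedged toy sketch, not taken from the talk, of the two regimes: a rule written by the programmer (classical programming: rules plus data produce results) versus a rule inferred from data and expected results with scikit-learn, the library used in Müller and Guido's book (machine learning). The data and the threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data: one feature, and a label that is 1 whenever the feature exceeds 0.5.
X = np.linspace(0, 1, 20).reshape(-1, 1)
y = (X.ravel() > 0.5).astype(int)

# Classical programming: the rule is prescribed by the programmer (deduction).
def classify_by_rule(value):
    return int(value > 0.5)

# Machine learning: the rule is inferred from data and results (abduction).
model = DecisionTreeClassifier(max_depth=1).fit(X, y)

print(classify_by_rule(0.7))          # result follows from the prescribed rule
print(model.predict([[0.7]])[0])      # result follows from the learned rule
```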
In machine learning, classical rules are indeed present, but they act as meta-rules, that is, as methodologies to be used to get rules from the data. Now, a recent buzz on the web is the "feedback economy"; I think there is a Zeitgeist, so to say, that has its pivot in feedback. This feedback-based communication is indeed a common feature of social networks and electronic markets, but with deep learning it is now also built into applications. The result is a situation like the one in the figure, in which the dotted line represents classical programming, which is still required in order to program the machine learning system, while the other lines indicate a circuit that always passes through the data. The data are properly the environment, external to the subjects; this communication is open, to speak with Marion Colas-Blaise: not fully controllable by the agents, not deterministic, and it can be disturbed. In the programming and training phase, the programmer must inspect the output of the machine learning algorithm to verify the behavior of the machine in the wild, comparing it with the empirical nature of the data; the goal is the tuning of the algorithm. The user, instead, can only treat communication with the machine in terms of an interaction through the data she receives, that is, the machine learning output, and those she provides, the user output, like a prompt; this interaction continuously modifies the results produced by the machine (see, as an example, the title of this article from OpenAI concerning ChatGPT, or DALL-E). Therefore this new paradigm, which also insists on the classical one, due to the stratification of the symbolic dimension in computers, could be defined as feedback computation. It is based, to speak with Basso Fossali, on a sort of relocation of the programmer's position; this produces a sort of blurring between user and programmer, though not a complete one.

I am throwing just a couple of suggestions on the table. The first is metaprogramming: if a program is a string of symbols, and large language models produce strings of symbols, is it possible to generate programs with LLMs? Well, maybe not. According to Chollet, there are two basic cognitive dimensions: one geometric, based on proximity, value-centric analogy; the other structural, based on exactness, program-centric analogy. This indeed reminds us of autography and allography in Goodman (more on this later). Neural networks excel at the geometric component, while metaprogramming requires exactness; in fact, for this purpose Chollet hypothesizes the use of other computational architectures rather than neural networks. The point is that an algorithm must be correct (truth), not approximately correct (verisimilitude, likelihood, so to say); hence a classic meme, as you can see, on debugging: you have ChatGPT do the programming, and then you spend a lot of hours debugging the result, because you need exactly that result. The second idea, the first being metaprogramming, is prompting as programming. It has been suggested that a prompt can be thought of as a kind of pseudo-code; I find this very interesting, even if I do not agree. Sure, prompting and pseudo-code share a proximity to human language and the idea of describing the control of the machine; but pseudo-code, even if related to language, is still related to instructing the machine, while prompting is a way to train, tune and inspect the machine.

So, how can we think, in terms of a semiotics of culture, about this Zeitgeist that drives toward feedback computation? These two approaches to programming can be characterized, from a semiotic perspective, by referring to the couple grammar versus text, in Lotman
and Uspensky, but also, immediately, in Eco. A grammar defines a set of rules to be applied so that an output is generated that is formally consistent with the prescribed rules; thus a grammar approach focuses on the prescription of rules. A text, rather, acts as an example from which to infer regularities in order to generate new texts. Cultures based on grammar rely on rules to be respected, the latter being modifiable only at the cost of a radical transformation of the culture itself; text-based cultures operate by a sort of continuous drift, by chaining texts that are recursively taken into account again, as examples to be inspected in order to produce new texts. Lotman and Uspensky observe that textual cultures are based on expression (I think this is interesting), while grammar cultures are based on content, because there content is thought to be defined somewhere else. One can easily observe that grammar relies on deduction, text on abduction; and one could indeed think, with Goodman, that textual cultures are mostly autographic, while grammar-based ones are allographic. It is indeed curious that such a shift towards textuality, examples, autography at the higher level happens thanks to symbolic machines based on formal, strictly allographic, grammatical, rule-based systems: standard machines, as I said. This epistemological shift on the computation side is coupled with an analogous one on the user side. As data are the driving force, users have to focus on sets of examples in order to cope with algorithms; hence the relevance of corpora, archives, sets (I am thinking about the work by Maria Giulia, and also about the presentations of the different projects). On the computation side, generative AI applications based on machine and deep learning offer users a variety of results given the same input, thus triggering, on the user side, practices based on examples. Oral cultures are mostly text-oriented, as there is no support for objectifying rules; written cultures are more grammar-oriented. This textual drifting is of course not entirely new: secondary orality, as introduced by Ong following McLuhan, is a common feature of social networks, but it is now built into applications.

So I would like to conclude with an example. This is a Facebook group, maybe you know it, named Cursed AI. As you can see, it is a group oriented towards the bizarre, and it includes images, but also comics and sometimes texts, created agnostically with all the available software; you can see a screenshot of a tiny part of the media archive on the side of the slide. It is immediately evident that the images follow one another, organized in systems of variation on thematic, figurative and plastic bases, with a clear idea of a progressive drift; there are few hapax legomena, mostly it is a chain of pseudo-regularities. But the point is not only that the machine displays a human-proposed prompt subject to various driftings: there is properly an interaction. This specific relation between a certain machine behavioral state and the users may be thought of as a form of semethic interaction, in Maran's sense: semethic interaction is the detection of a regularity that is abducted as a rule, coupling the spotted recurring item, as an expression, to a content, a possible new content. The term is used in the context of biosemiotics, yet it seems interesting to investigate it cybernetically, so to say, in the context of generative AI: regularities in the machine's behavior, not strictly formalized, are detected by users and used to interact with the machine itself;
but, on the other side, user behavior is captured by the same machine, which capitalizes on it in order to adjust its behavior, leading to a multi-layered semethic interaction.

I am sorry for the following examples: it is for the sake of science. The first example is linked to the so-called yellow coolant. A user, Phil Barber, has persistently posted rather bizarre images that refer to urine through a politically correct term, in order to fool prompt censorship: "yellow coolant", with various twists, often in relation to cars. By obsessively repeating the post, an imitative reaction, a semethic interaction, has somehow been triggered in the group: this thematic-figurative dimension has become significant, the theme "yellow coolant" in its entirety has become an expression of possible other contents, it has become an object of recognition, in Eco's terms. The unit is multimodal: it includes a visual part, of course, but also the linguistic prompt itself, as you can see on the container, and not least the author, who is honored, as you can see, on the "Phil Barber" soda can. The second thread I would like to show you is this one, which started with an image labelled "Kandahar, circa 1923"; the result is the construction of a truly surrealist collective imagination of an alternative, post-First World War Kandahar. Semethic interaction can be recursive: there is a thread inspired by "the giant something of Kandahar, circa 1923", and you can see the giant Cookie Monster, the giant Lobster of Kandahar. This thread has been widely, and wildly, developed, so much so that an Afghan user has complained that it has colonized search engines: people looking for "Kandahar circa 1923" were getting this kind of results on Google.

So, what is the role of the machine? And here I am finishing. One possibility is to provide additional features on which humans can further elaborate; yet in the previous examples, in some sense, deep learning was a sort of displaying agent, it acted mainly as a support. The case of Clonus is interesting. Most image AIs treat text as images: in short, there is no allographic status of writing versus autographic status of the image, everything is an image, and so unreadable writings can emerge in these images, but also readable ones, like "Clonus". This is terrible, I know. The AI proposed the name Clonus; the name entered the community, mainly associated with meat, with an organ-based spectrum: initially it could be both a sort of anus- or fetus-like thing, but also something like an alien or a microorganism, like a tardigrade, or a Cthulhu-like figure. Later usage stabilized the notion of Clonus as a sort of meat hole, so that it even became an entry in Urban Dictionary. Of course, merging operations, sort of crossover episodes so to say, are possible: as you can see on the side, Clonus plus yellow coolant plus Kandahar 1923.

So, to conclude. First, agency is probably the most interesting concept through which to enter into dialogue with AI. Second, deep learning is interesting for a discussion of subjectivity, not in relation to a connectionist brain metaphor, but rather because it shows a cybernetic, basic agency, as it is feedback-driven. Third, this feedback creates a specific, data-driven computational environment, let us call it feedback computation, where "environment" is assumed in a biosemiotic sense, even if biosemioticians would surely not agree. Fourth, as noted by Basso Fossali, the situation creates some specific communication circuitry, and indeed some specific enunciative positions; agency of different kinds is distributed, as noted by Colas-Blaise or by Paolucci. Fifth, these enunciative positions are multiplied, as they lie at various
levels: first, in the complex hierarchy of programming as a semiotic activity; second, in the human-machine interaction, in terms of usage; third, inside the texts produced by generative AI. Since the opening of the seminar, Maria Giulia has actually observed this multiple perspective on enunciation; I agree that, in relation to enunciation, this may call for a typology of observers, as proposed, by the way, by Jacques Fontanille since the seminal Les espaces subjectifs. My sixth point is that such a situation is indeed another move in a shift that McLuhan associated, in general, with the electronic revolution: towards a textual regime, versus a grammar-based one, founded on analogy, examples, drifting, autography. And finally, curiously enough, all this is powered, at the end, by the development on steroids (big data, GPUs) of the formal, grammar-based regime of classical computation, which backs it. So, thanks a lot.

[Moderator] Thank you so much, Andrea. I think that we can switch from English to French as we like; I know that he is able to understand your questions in French. But I would like to begin with a question in English, about allography and autography. You presented grammar as allography and text as autography. I was wondering about the works of deep learning machines, let us think about diffusion models, Midjourney and so on (you did not focus only on this, but for me it is very important). You said that deep learning has to do with text more than with grammar, and I was thinking that it is perhaps rather a diagrammatic working: because, to name and to describe the work of Midjourney, for instance, we can say that it is an accumulation of texts, but from this accumulation of texts we have to be able, sometimes, if we want, to extract styles, for instance. So, the problem of styles: we have to extract, from a multitude of texts, something that can be transferred. This is why I speak about the diagram in the sense of Goodman, obviously, to stay within Goodman's thinking. Do you agree with this idea of the diagram, as described in Goodman, as in a sense a middle way between allography and autography? And how could you explain otherwise, in other terms, this extraction of something transferable from the analysis of texts that the machine is doing?

[Andrea Valle] Well, yes, I agree. I think that if you look at this issue from the point of view of computation, so by using that kind of lexicon, and we spoke about rule-based systems versus machine learning systems, maybe we could think about this in terms of heuristics. A heuristic is a rule, but actually it is not guaranteed; and so I think that this idea of weak generalization is something that, in some sense, can mediate between the idea of deterministic rules on one side and, on the other, the idea of the multiplicity, the idiosyncrasy, of the empirical data. So in fact you get some regularities, heuristic regularities: the way Van Gogh has probably used his hand, say, and how this is transferred in terms of visual markers, so that you say, okay, if I see this visual mark there are good chances that it is Van Gogh; it mostly works, sometimes it does not work. So I think that this idea of rules as heuristics can be a sort of mediation.
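As a hedged illustration of a heuristic in this sense (a rule without guarantees, abducted from regularities, that mostly works and sometimes fails), here is a toy sketch not taken from the talk; the "spam" rule and the example messages are purely illustrative assumptions.

```python
def looks_like_spam(message):
    # A heuristic: a weak generalization abducted from observed regularities,
    # not a deterministic rule. It mostly works, but carries no guarantee.
    text = message.lower()
    return "free money" in text or message.isupper()

print(looks_like_spam("FREE MONEY NOW!!!"))         # True: the heuristic works
print(looks_like_spam("Minutes of the meeting"))    # False: the heuristic works
print(looks_like_spam("Free money-saving tips"))    # True, yet not spam: heuristics can fail
```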
[Moderator] Very good, very good; I was asking for a... [Laughter]

[A question from the audience follows, largely lost in the automatic transcript; from the answer, it concerns the geometric interpretation and the treatment of temporal sequences.]

[Andrea Valle] So, concerning the geometric aspects: well, actually those are not my considerations, I am reporting the ideas of François Chollet. One of the ideas is also that one can ask oneself why it works, and one of the main points in Chollet, which is, if you want, a philosophical and even a metaphysical consideration, is that most of the organization of the information (not of the data) lies on simpler spaces, but we find it, so to speak, crumpled. So the idea is that this works because actual information is in itself structured in simple ways, and we are able to extract this kind of information. Those features are low-level information that we are able to extract, like a mediation between the richness and idiosyncrasy of the examples, of the texts, and the idea of something which is a generalization. The mediation between the two is that, in some way, the space of empirical data is too much folded, and machine learning works because it is able to uncrumple it: not to reduce it, but to bring it back, if you want (and that is an aspect of metaphysics, if you want), to simpler dimensions. I think this is interesting because it also provides empirical examples: the idea is that it is not like interpolating between data; it is different, it is extracting information, assuming that the information, in some sense, is already inside the data you are analyzing. So I think it is an interesting interpretation of the generalization process, in this sense, as geometric transformation.

I am not sure about your last question, because, in some sense, large language models, which are a kind of variation on the same architectures, like the Transformer, in the end deal with time: not exactly with chronological time, but with the logical time of the text, because a text is a sequence. In fact, from a technical point of view, I do not want to say that it is the same, but you have the same issues with text and with moving images, because they are sequences, and for some months now you have been getting good results in generative transformations. Sure, I agree with you that, typically, texts and videos generated by AI mostly focus on continuous transformations, probably because of the way the model underneath works: by modifying weights and progressively transforming all the inner spaces, which in the end results in continuous surface transformations. So the point is that the real challenge, and I think it is well discussed by Chollet when he speaks about metaprogramming, is: can you generate a program? That is the real challenge with neural networks; there are a lot of people trying to do that, and you get good results, but the point is that there is no approximation to a correct algorithm: it is correct, or it is not correct, there is no way in the middle. In fact, by the way, Chollet, who is an authority on deep learning, proposed genetic algorithms for automatic code generation, just as an idea he throws out, because in that case, I think, there is really a philosophical element: they are different things, in some sense. And I have to say that I agree
with that, actually. But I do not see, in the idea of a temporal sequence, necessarily something particularly difficult.

[Moderator] Very interesting, merci, very, very interesting.

[Another question from the audience follows, largely lost in the automatic transcript; it concerns the intermediate layers of neural networks.]

[Andrea Valle] Well, I am not an expert in machine learning, by the way; in this field, at least, I think I understand something when I program it, but I have never programmed it directly. But I think that an interesting example is the idea of Deep Dream by Google. In Deep Dream, what happens is that the neural network operates by extracting features, and these features, the intermediate layers, can in fact be visualized, if you want to inspect what is going on. In Deep Dream, the idea was to use the visualization of the data, of the way the neural network worked, and to compose it with the original image; and you get, we all know them, those images with strange little dog faces and eyes and whatever. So that is interesting, because in this case you really see that these intermediate layers create, if you want, isotopies, meaning that they are able to define a pertinence, a sort of relevance, and that they extract information in relation to that feature exclusively. So, in fact, all those visual feature extractors are a sort of automatic way of creating isotopies, and in Deep Dream you really see them rising up to the surface, because they are used to compose the image. So yes, I agree with you: they can be thought of as a way of defining isotopies, specific information based on a basic relevance.
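As a hedged illustration of inspecting intermediate layers, here is a small Keras sketch that exposes the activations of one convolutional layer of the VGG16 network mentioned in the talk; the chosen layer name and the random input are illustrative assumptions, and this is plain feature-map extraction, not the full Deep Dream procedure.

```python
import numpy as np
import keras

# The VGG16 feature extractor discussed in the talk, pretrained on ImageNet.
base = keras.applications.VGG16(weights="imagenet", include_top=False)

# A model that outputs the activations of one intermediate layer.
layer_name = "block3_conv3"                     # illustrative choice of layer
probe = keras.Model(inputs=base.input,
                    outputs=base.get_layer(layer_name).output)

# A random "image" stands in for a real input, just to show the shapes involved.
image = np.random.rand(1, 224, 224, 3).astype("float32")
features = probe(image)

# Each channel is one extracted feature map: a candidate "isotopy", in the talk's terms.
print(features.shape)                           # (1, 56, 56, 256)
```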
