Join Us For ADC24 – Bristol – 11-13 November 2024
    More Info: https://audio.dev/
    @audiodevcon

    Procedural Sound Design Applications – From Embedded to Games – Aaron Myles Pereira – ADCx India 2024

    Procedural Sound Design, defined by its adaptability and real-time generation capabilities, offers newfound possibilities in crafting immersive and dynamic auditory experiences. In this presentation, we will explore its key principles, benefits, and applications across two diverse domains: vehicles and video games.
    _

    Edited by Digital Medium Ltd – online.digital-medium.co.uk
    _

    Organized and produced by JUCE: https://juce.com/
    _

    Special thanks to the ADC24 Team:

    Sophie Carus
    Derek Heimlich
    Andrew Kirk
    Bobby Lombardi
    Tom Poole
    Ralph Richbourg
    Prashant Mishra

    #adc #audio #sounddesign #embedded

    [Applause] Hi everyone, my name is Myles Pereira, and I'm going to give a talk on procedural sound design, from embedded to games. So, who am I? I'm a creative technologist, a musician, and a visual artist, sort of. I studied at Berklee College of Music in the Electronic Production and Design department, and after that I quickly deviated from being a musician and moved on to installations and interactive media. The main tool that I use is Max/MSP; that's what I was trained on, and it's the medium I use to express myself.

    So, what is procedural sound design? Procedural sound design is a technique, or a method, that uses algorithms to create dynamic audio experiences, with certain parameters, through real-time generation of sound. What this means is that when you're designing an experience of some sort, be it for an electric vehicle, an installation, a game, or an app, you are not recording samples that sit in memory and get played back at certain target events. Instead, you are writing a synth engine that is continuously generating sound, and based on certain parameters that you assign to that synth engine, you can adapt the sound: you can change its pitch, tone, timbre, filtering, effects, and all sorts of things.

    Now, the question I get asked is: why should we do procedural sound design? The main thing I talk about is that it saves a lot of space. Let's say you have a triple-A game that you're shipping. If any of you have been on Steam, a triple-A game is nearly 60 to 80 GB, and I remember the times when I used to wait two days to download a game just to play it. Procedural sound design does decrease that size, because you're not shipping with a lot of
    assets that are just sound-based.

    So, what are some tools and technologies you can use? There's Max/MSP, created by Cycling '74. It's a graphical programming language that lets you generate sound and visuals in real time; since we're in the audio dev community, I'm pretty sure a lot of people here use Max. Max is paid, and while you can export code from it, as soon as you cross a certain income threshold there's licensing involved. There's Pure Data, which is kind of like Max but a boiled-down version; Pure Data is completely free and open source. If you're looking at commercial tools, there's Audio Weaver Designer, based out of Boston; they have a set of tools you can use to compile and run code, and it does real-time audio processing. And if you're working in games, there's Wwise, which lets you use procedural sound design.

    So I'm going to get into a few things. Max/MSP: how do you make a patch in Max and then take it out? For the longest time, the answer was Gen. Gen is a subset of Max: every time you pull up a building block in Max it's called an object, while in Gen you have operators, which are a boiled-down version of what you get in the whole world of Max. You don't have all the filters that you get in Max; you have a much smaller subsection of operators to use. This does limit you in creating a very immersive experience, but what you can do with it is build your own filters, build your own synthesizers, define how you pass in MIDI data, and all of this is written in the Gen language. So it's a subset inside Max, and you're still using a patching kind of environment, but you can export that code into C++ and have certain parameters that
    you expose; you can then recompile that into a VST or an app. You couldn't do it for Web Audio or WebAssembly, and that is why RNBO was released last year. Where Gen would let you use maybe 60% of the code you wrote in Max outside of Max, RNBO takes you to something like 95%. It lets you do a C++ export with full documentation on how to incorporate it into, let's say, another VST, so you can build a JUCE app, sorry, a JUCE VST, straight from RNBO. You can export to Web Audio, which is great: if you have a patch whose synth sound you like, or a generative ambience that you want to host on your website, you can just do a JavaScript export. And if you're doing an interactive installation where, let's say, you have 16 sensors and a certain sound needs to play when someone triggers sensor number one, you can export to a Raspberry Pi as well, and there's a full set of documentation on how you attach sensors to these parameters and trigger sound events.

    Next is Pure Data. Pure Data, as I said, is like Max, just with a lot fewer objects to use. Weirdly, when I started working with Pure Data for a project, I found it really, really limiting, because simple stuff isn't there, like the scale object, which you use a lot in Max when you want to take an incoming number going from, say, 0 to 1 and scale it to 300 to 556. You just have to build it from scratch. Which is great, because if you're used to, and spoiled by, Max, you learn to remake the objects you rely on inside Pure Data, so you teach yourself how to make certain things and how certain logic works. Now, Pure Data doesn't natively support exporting the code you make in it, but there are people who developed the Heavy compiler, where you can download this tool
    chain onto your terminal, put your Pure Data patch into it, and it generates C++ code output with certain parameters you can expose. The only downside is that you need to build everything in PD vanilla. There are a few flavors of Pure Data: there's Pure Data vanilla; there's Pure Data Extended, which brings some of the features of Max into Pure Data; and then I think there's Purr Data, which just looks a little better, and it's a little cute; I like using it at times when I want to feel happy. But it's vanilla Pure Data that you can use to export.

    So any synthesis technique you want to build, you can do. Say you want to build a subtractive synth: you can do it in Pure Data, you can do it in Max, and then you can export it and host it in another language. You can do granular, you can do FM, and a lot of the time you can even do convolution-based stuff, though that is a little tricky, since you need to involve other C++ code to get it in. You can do a lot of time-based processing.

    So, my main experience in using procedural sound design is in automotive applications. Last year I was subcontracted by a contractor for an EV startup in Bangalore that wanted a fake exhaust-note sound for their bikes. If you've been around, you might have seen it; it's pretty popular. I can't name names because I have an NDA. But how I got in touch with them, oh, I didn't mention this: I also work for Animal Factory Amplification. I'm one of the three; I help with everything from circuit design to product design, graphics, marketing, PR, a very small company. So the guy who got in touch with me was one of our customers, and he said: hey, I got this project, we need someone who's really good at Max. And since I had tons of
    experience in Max and a little bit of procedural sound design experience, I got on board with them. They were more traditional sound designers who worked in DAWs like Logic, and how they were implementing their fake exhaust note was to design it in Massive X, export samples, have the samples pitched up by certain parameters, and then interpolate between the samples using external parameters like the throttle or the RPM. When I got on board, like every Bangalore startup, they were in fire mode: "it needs to ship next month." And I said: y'all, that's not going to happen; you cannot ship this next month. They said: no, we have the software, it's Audio Weaver Designer, we've got all these samples. And then I played it. First of all, Audio Weaver is not interpreted, it's compiled, so every time you make a new design you have to hit compile, wait five minutes, and as soon as it plays you go: it's wrong. Oh sorry, I can't swear. Damn, it's wrong. So you hit stop, go change your patch, and then check: is my pitch tuning right? What's happening at 20 km/h? Okay, it's not sounding right, I'm getting a lot of aliasing. So I came in and said: hey guys, there's this thing called procedural sound design, where you make a synth engine, assign it parameters, and it's fully real time; it takes barely any cycles on your CPU, so your UI doesn't hang. Because every time they had to load in samples, their UI would hang while moving them from storage to RAM, and there would be a little glitch. It was a labor of love to get them convinced; we had to write white papers, and I'm not a developer, so I didn't even know what a white paper was, so I had to write a white paper on why we should use procedural
    sound design.

    Then there was the whole licensing talk. We got in touch with Cycling '74 and said: hey, we want to use this. They said: oh yeah, there's this thing called RNBO, and we got a beta-testing version of it. It turns out that if you're above $200,000 in revenue, you need to strike a licensing deal where Cycling '74 gets a cut each time the product ships. So the higher-ups went with Pure Data. That was also a whole thing of setting up the toolchain, where we first had to make demos of certain synth engines, because they had different synth engines they wanted to run depending on how the UI changes. So that was how it went through: we used Pure Data to recreate the sounds in real time.

    Then, how we would test and validate these sounds on our laptops: we had to design an app, running on their software, that would record CAN data. If you're not familiar with CAN, it's Controller Area Network, the protocol over which all automotive signals are sent: what's your RPM, what's your throttle, what's your brake temperature, is your oil low, all of that is sent through the CAN network. So we built an app that records CAN data into a CSV file. We would go on a ride and record CAN data, then convert all the CSV data into a wave file. That wave file would then be played through Ableton Live, piped into Pure Data, which would then change parameters. So depending on how fast the person is going, or how quickly they turn their throttle, all the data would be fed into Pure Data, and Pure Data would make the sounds. Once Pure Data makes the sounds and you're happy with them, whichever parameters you want to expose to the CAN side, and tune afterwards, you need to expose and run
    it through the Heavy compiler. You get a C++ output; we would send that to the DSP team, the DSP team would recompile it into Java, and it would run as a service.

    Now, after all of this is done, what you make on your laptop versus how you interact with it on a vehicle is still extremely different. You might think: okay, this sounds good when it goes from 10 km/h to 20 km/h, but what happens when you brake? The vehicle is slowing down gradually, but to the user it feels like they've come to a sudden stop. How do you bring that in? Okay, so you also need to take in brake data, and as soon as we see that a brake has been applied, what do you do to the pitch? What do you do to the volume? What if someone presses the front brake versus the back brake? The back brake is a slower, gentler stop, while the front brake is an immediate stop. How do we bring that in? If you had to do all of this via samples, I mean, you can't; there's no way. So that is what we did for that team. It went well, and they shipped it. I don't know how many people use the feature we worked on, but it's there.

    So that is what you can do for vehicles. There's also a thing called AVAS, the Acoustic Vehicle Alerting System, for EVs, which are so quiet between 0 and 20 km/h that it's kind of a hazard for people with limited vision. Other companies, like Porsche and BMW, are putting this in their cars because they have this pedigree of V8 and V12 engines and they want their users to feel that; they want to recreate that experience in
    their cars, so their users feel: oh yeah, this sounds great. There's Mercedes, who make a really, really nice sound for their EQS; it sounds somewhere between Tron and, yeah, it's great. They also tie it to braking and regen, so each time the car goes deeper into regen, a different sound plays. It affirms to the user that all of these things are actually happening in the car, and it's a more rounded experience.

    This is a project I worked on while at an architectural studio in Boston, doing a lot of installations. My boss was kind of a crazy guy, from the Media Lab, in his 60s. He found a Porsche Targa that was on the verge of breaking down, so he converted it into an electric vehicle. This was back in 2019, the first time I was exposed to procedural sound design, and here the whole UI of the car was built in Max, which, I don't know why he did that, but everything we did was done in Max. We had around 30 installations running 24/7 throughout Europe and the US, and for some reason he always stuck with Max. It's not the best thing: it's great to prototype in Max, and you don't really ship a product in Max, but working there I learned how you can actually ship a product in Max. This is one of the engines that didn't get picked, the higher-ups didn't like it, but I loved it, so here's a quick [Music] demo. This is made with just FM, a two-operator, two-channel FM, but instead of using sine waves you use a triangle or a saw and a phasor, you put in some offsets, and weirdly it sounds like a very good engine.

    Video games. I'll speak on this quickly. You can use it for footsteps, for waterfall sounds, for all your environmental sounds. Procedural sound design is great for that, because you
    don't need to ship a 20-second waterfall sound that has to loop continuously; you just have a few cycles and you can have it running in the background. You have adaptive scores: this is something Mick Gordon used for Doom. If you haven't checked out that GDC talk, it's a great one on how the music changes when an enemy comes at you. He did not use procedural sound design, but it's also a great way to bring those techniques into procedural sound design. This is a quick thing I made in RNBO to create an environmental sound for Unity: I built a plugin that gives me a nighttime ambience. Here I've only exposed the parameters for the volumes, but you can expose any parameter you build into the synth as well.

    Challenges: balancing computing power versus audio quality. Procedural sound design doesn't always sound great; without context, some of the footsteps sound terrible, but it's only when you put that footstep into the video, and you see the person's step, that your brain makes the connection: oh wait, yeah, that is a footstep. So you always need to balance how much reliance you put on audio quality versus computing power. You can make engaging soundscapes. Compatibility: you can have this hosted on the web, on an STM board, on an FPGA if you want, or as an Android app, anywhere; as long as it's in C or C++, it can be compiled and run wherever you want. And there are regulatory aspects, mainly for electric vehicles: how loud they are, how quiet they are, all of this, because it's such a tunable system based on wherever you ship it. Europe has a very different legal system for how loud the car needs to be versus the US, and there are other differences too; for example, you can have a fart sound play when backing up,
    which Tesla does; that doesn't fly in Europe. So you can always balance all these things out.

    Future trends: AI-driven audio integration. There's a company in Germany doing this, with procedural sound design models tuned to how the person is driving: it takes in all the data on how hard or how gently they accelerate, and changes the models in real time with that. There's real-time environmental simulation; it's great for VR and all that. I'm not really into that right now, but I'm sure it's going to be our future. And the best practice: always test in context. Never just listen on your headphones, test on your laptop, and go: this sounds great. You need to put it on the vehicle, in the game, wherever it needs to be, and always test in context, because your models really change depending on where they're shipped. So always test in context; that's my best-practice tip. Yeah, thank [Applause] you.

    Q: Hi, this is more of a hardware question. Where are the speakers placed in the vehicle? Is it where the engine typically was, since that's where the sound comes from? It's more of a curiosity.

    A: In the vehicle that I worked on, well, let's assume you're working on a motorcycle. In that case, it depends. Ideally you would want them at your level, but you always want to design from how a traditional motorcycle works, if that's what you want to recreate.

    Q: Correct, so you'd want it in the body, where a traditional engine would be?

    A: Then again, the bulk of the sound comes from the silencer, not the engine, and that's behind you. There's Revolt, who have done a great job; they're more of a performance scooter, and their speakers are
    placed right underneath the battery: if you have a traditional fuel tank, it's placed underneath that, and it's a huge subwoofer that also shakes the bike a little. That's where they placed it, and honestly it's the best implementation I've seen in India for an AVAS system, because it also gives you a sense of vibration and reaffirms that you are riding. A lot of people don't use it, but the ones who are ICE nerds really like it, so they keep it on. Also, the size of your speaker changes how good it will sound. You can create the best sound, something that sounds great on its own, but if you're developing a sound that's going to play on a one-inch speaker, you need to take those challenges and considerations into account when you're delivering a product like this.

    Q: And are the speakers cased well enough to be road-proof? You know, water damage?

    A: I mean, I don't really deal in that.

    Q: Okay, but I hope they are.

    A: They should be, they should be, yeah.

    Q: From your testing perspective?

    A: Oh, from my testing, I can't comment on that.

    Q: It's an industrial-designer thing?

    A: Yeah, I mean, I would like to, but I might get into legal trouble. I can't do that.

    Q: I see, okay. I just wanted to ask about the safety considerations. I'm guessing in this case you were talking about the safety of the rider, being able to know when they're braking and intuitively feel that.

    A: No, that is the safety of everyone around them as well.

    Q: Which is what I wanted to ask, especially in the Indian context, where in a lot of places we don't have footpaths, we have pedestrians on the street. A lot of times I've felt the wind of an electric bus pass by me before I knew there was a bus. So what kind of factors did you take
    into consideration, and what did you do about them?

    A: So, there is no regulatory body set up in India that is checking this; that's why most electric vehicles and companies in India don't really have it. Ather, their motor is loud enough that you know it's there, but there are a lot of buses that you don't notice. So we usually tried to stick with the European standard: between 0 and 20 km/h, which is the range where the vehicle is quiet enough, and there isn't enough wind noise, to know something is near you, it needs to hit, I think, not 85, more like 75 dB or something, within a certain radius. We were following that, because India doesn't have anything.

    Q: Did you also take into consideration a possibly higher noise floor in India?

    A: Yes, we did take that into consideration, but by the end of the project it turned more into shipping the product than keeping these things in mind. I'm guessing that someone taking a slower approach to building this feature would keep all these things in consideration. The Porsche Targa that I worked on: my boss wanted it to be road-legal in Lexington County, and for that we had certain requirements to meet, so there we paid attention to it, but only because he wanted to drive it in his city. So we had a beep sound, and a huge subwoofer where I tried to emulate a Porsche sound. It wasn't great, since that was my first time doing it, but it worked, and he just needed to drive it in his city, so that's why we had it. Yeah, thanks.
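
    The scale object the talk describes rebuilding in Pure Data, taking an incoming number from 0 to 1 and mapping it to 300 to 556, is just a linear map. A minimal Python sketch of the idea (the function name and the no-clamping behavior are my assumptions, mirroring Max's default):

    ```python
    def scale(x, in_lo, in_hi, out_lo, out_hi):
        """Linearly map x from [in_lo, in_hi] to [out_lo, out_hi].
        No clamping: values outside the input range extrapolate."""
        return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

    # The example from the talk: an incoming 0..1 control mapped to 300..556
    print(scale(0.0, 0.0, 1.0, 300.0, 556.0))  # 300.0
    print(scale(0.5, 0.0, 1.0, 300.0, 556.0))  # 428.0
    print(scale(1.0, 0.0, 1.0, 300.0, 556.0))  # 556.0
    ```

    In a patch this sits between the control input (sensor, CAN field, UI slider) and the exposed synth parameter.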
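
    The engine demo mentioned in the talk, two-operator FM using a triangle and a saw instead of sine waves, can be sketched offline in plain Python. This is a rough illustration of the technique, not the patch from the talk; all parameter names and values are hypothetical:

    ```python
    import math

    def triangle(phase):
        """Bipolar triangle wave for phase in [0, 1)."""
        return 4.0 * abs(phase - math.floor(phase + 0.5)) - 1.0

    def saw(phase):
        """Bipolar sawtooth wave for phase in [0, 1)."""
        return 2.0 * (phase - math.floor(phase + 0.5))

    def fm_engine(base_hz, ratio, index, offset, n, sr=48000):
        """Two-operator FM: a triangle modulator drives the frequency of a
        saw carrier, with a constant frequency offset. Parameter names are
        illustrative, not those of the engine in the talk."""
        out = []
        mod_ph = car_ph = 0.0
        for _ in range(n):
            m = triangle(mod_ph)                          # modulator output
            freq = base_hz * (1.0 + index * m) + offset   # modulated carrier freq
            car_ph = (car_ph + freq / sr) % 1.0           # carrier phasor with FM
            mod_ph = (mod_ph + base_hz * ratio / sr) % 1.0
            out.append(saw(car_ph))
        return out

    # 0.1 s of a low "engine" tone; base_hz would track RPM in a real engine sound
    samples = fm_engine(base_hz=55.0, ratio=0.5, index=0.4, offset=3.0, n=4800)
    print(len(samples), max(abs(s) for s in samples) <= 1.0)  # 4800 True
    ```

    Swapping the sine operators of textbook FM for triangle/saw shapes plus small offsets is what, per the talk, makes the result sound surprisingly engine-like.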
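
    The CAN-log workflow described in the talk, recording CAN data to CSV and then driving synth parameters from fields like RPM and throttle, might look like this in miniature. The column names, ranges, and pitch mapping here are illustrative assumptions, not the project's actual values:

    ```python
    import csv
    import io

    def can_rows_to_params(csv_text, idle_hz=40.0, max_hz=220.0, max_rpm=9000.0):
        """Map logged CAN fields to synth parameters: RPM -> engine pitch,
        throttle (0..1) -> gain. A real setup would stream this into the
        running patch instead of batch-converting."""
        params = []
        for row in csv.DictReader(io.StringIO(csv_text)):
            rpm = float(row["rpm"])
            throttle = float(row["throttle"])
            pitch = idle_hz + (rpm / max_rpm) * (max_hz - idle_hz)
            params.append({"pitch_hz": round(pitch, 2), "gain": throttle})
        return params

    # A tiny fake ride log: idle, mid-throttle, full throttle
    log = "time,rpm,throttle\n0.0,0,0.0\n0.1,4500,0.5\n0.2,9000,1.0\n"
    for p in can_rows_to_params(log):
        print(p)
    ```

    In the pipeline from the talk, the recorded log was converted to a wave file and replayed through Ableton Live into Pure Data; the mapping step above is the part that turns vehicle data into exposed synth parameters.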
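
    The brake-handling question the talk raises, what to do to pitch and volume when the front versus the back brake is applied, is commonly handled with parameter smoothing at different rates. A sketch of that idea; the coefficients and the specific mapping are invented for illustration, not taken from the shipped product:

    ```python
    def smooth_target(current, target, coeff):
        """One-pole smoother: move `current` toward `target` by fraction `coeff`."""
        return current + coeff * (target - current)

    def brake_coeff(front_brake, rear_brake):
        """Choose a per-block smoothing rate from the brake inputs:
        front brake -> fast pitch collapse, rear brake -> gentle glide.
        Values are illustrative."""
        if front_brake:
            return 0.5
        if rear_brake:
            return 0.05
        return 0.2

    # Simulate ten control blocks of a hard front-brake stop:
    # engine pitch glides from 180 Hz toward a 40 Hz idle.
    pitch, idle = 180.0, 40.0
    for _ in range(10):
        pitch = smooth_target(pitch, idle, brake_coeff(front_brake=True, rear_brake=False))
    print(round(pitch, 1))  # 40.1
    ```

    With a sample-based approach there is no equivalent knob to turn, which is the talk's point: a synth engine can react to brake data continuously, while crossfading samples cannot.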
