{"rowid": 20, "title": "Make Your Browser Dance", "contents": "It was a crisp winter\u2019s evening when I pulled up alongside the pier. I stepped out of my car and the bitterly cold sea air hit my face. I walked around to the boot, opened it and heaved out a heavy flight case. I slammed the boot shut, locked the car and started walking towards the venue.\n\nThis was it. My first gig. I thought about all those weeks of preparation: editing video clips, creating 3-D objects, making coloured patterns, then importing them all into software and configuring effects to change as the music did; targeting frequency, beat, velocity, modifying size, colour, starting point; creating playlists of these\u2026 and working out ways to mix them as the music played.\n\nThis was it. This was me VJing.\n\nThis was all a lifetime (well a decade!) ago.\n\nWhen I started web designing, VJing took a back seat. I was more interested in interactive layouts, semantic accessible HTML, learning all the IE bugs and mastering the quirks that CSS has to offer. More recently, I have been excited by background gradients, 3-D transforms, the @keyframe directive, as well as new APIs such as getUserMedia, indexedDB, the Web Audio API\n\nBut wait, have I just come full circle? Could it be possible, with these wonderful new things in technologies I am already familiar with, that I could VJ again, right here, in a browser?\n\nWell, there\u2019s only one thing to do: let\u2019s try it!\n\nLet\u2019s take to the dance floor \n\nOver the past couple of years working in The Lab I have learned to take a much more iterative approach to projects than before. One of my new favourite methods of working is to create a proof of concept to make sure my theory is feasible, before going on to create a full-blown product. So let\u2019s take the same approach here.\n\nThe main VJing functionality I want to recreate is manipulating visuals in relation to sound. So for my POC I need to create a visual, with parameters that can be changed, then get some sound and see if I can analyse that sound to detect some data, which I can then use to manipulate the visual parameters. Easy, right?\n\nSo, let\u2019s start at the beginning: creating a simple visual. For this I\u2019m going to create a CSS animation. It\u2019s just a funky i element with the opacity being changed to make it flash.\n\n See the Pen Creating a light by Rumyra (@Rumyra) on CodePen\n\nA note about prefixes: I\u2019ve left them out of the code examples in this post to make them easier to read. Please be aware that you may need them. I find a great resource to find out if you do is caniuse.com. You can also check out all the code for the examples in this article\n\nStart the music\n\nWell, that\u2019s pretty easy so far. Next up: loading in some sound. For this we\u2019ll use the Web Audio API. The Web Audio API is based around the concept of nodes. You have a source node: the sound you are loading in; a destination node: usually the device\u2019s speakers; and any number of processing nodes in between. 
All this processing that goes on with the audio is sandboxed within the AudioContext.\n\nSo, let\u2019s start by initialising our audio context.\n\nvar contextClass = window.AudioContext;\nif (contextClass) {\n //web audio api available.\n var audioContext = new contextClass();\n} else {\n //web audio api unavailable\n //warn user to upgrade/change browser\n}\n\nNow let\u2019s load our sound file into the new context we created with an XMLHttpRequest.\n\nfunction loadSound() {\n\t//set audio file url\n\tvar audioFileUrl = '/octave.ogg';\n\t//create new request\n\tvar request = new XMLHttpRequest();\n\trequest.open(\"GET\", audioFileUrl, true);\n\trequest.responseType = \"arraybuffer\";\n\n\trequest.onload = function() {\n\t\t//take from http request and decode into buffer\n\t\taudioContext.decodeAudioData(request.response, function(buffer) {\n\t\t\taudioBuffer = buffer;\n\t\t});\n\t};\n\trequest.send();\n}\n\nPhew! Now we\u2019ve loaded in some sound! There are plenty of things we can do with the Web Audio API: increase the volume; add filters; spatialise the sound. If you want to dig deeper, the O\u2019Reilly Web Audio API book by Boris Smus is available to read online for free.\n\nAll we really want to do for this proof of concept, however, is analyse the sound data. To do this we really need to know what data we have.\n\n Learning the steps\n\nLet\u2019s take a minute to step back and remember our school days and science class. I\u2019m sure if I drew a picture of a sound wave, we would all start nodding our heads.\n\nThe sound you hear is caused by pressure differences in the particles in the air. Sound pushes these particles together, causing vibrations. Amplitude is basically the strength of that pressure. A simple example of a change in amplitude is when you increase the volume on your stereo and the output wave increases in size.\n\nThis is great when everything is analogue, but the waveform varies continuously and it\u2019s not suitable for digital processing: there\u2019s an infinite set of values. For digital processing, we need discrete numbers.\n\nWe have to sample the waveform at set time intervals, and record data such as amplitude and frequency. Luckily for us, just the fact we have a digital sound file means all this hard work is done for us. What we\u2019re doing in the code above is piping that data into the audio context. All we need to do now is access it.\n\nWe can do this with the Web Audio API\u2019s analysing functionality. Just pop in an analysing node before we connect the source to its destination node.\n\nfunction createAnalyser(source) {\n\t//create analyser node\n\tanalyser = audioContext.createAnalyser();\n\t//connect to source\n\tsource.connect(analyser);\n\t//pipe to speakers\n\tanalyser.connect(audioContext.destination);\n}\n\nThe data I\u2019m really interested in here is frequency. Later we could look into amplitude or time, but for now I\u2019m going to stick with frequency.\n\nThe analyser node gives us frequency data via the getByteFrequencyData method.\n\n Don\u2019t forget to count!\n\nTo collect the data from the getByteFrequencyData method, we need to pass in an empty array (a JavaScript typed array is ideal). But how do we know how many items the array will need when we create it?\n\nThis is really up to us, and depends on how fine-grained we want our frequency resolution to be. Remember we talked about sampling the waveform; this happens at a certain rate (sample rate) which you can find out via the audio context\u2019s sampleRate attribute. 
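\n\nAs a quick aside: the snippets above decode the file into audioBuffer and pass a source into createAnalyser(), but they don\u2019t show where that source comes from or how playback starts. A minimal sketch of that glue might look something like this (createBufferSource and start are standard Web Audio API methods; the playSound name is just for illustration):\n\nfunction playSound(buffer) {\n\t//turn the decoded buffer into a source node\n\tvar source = audioContext.createBufferSource();\n\tsource.buffer = buffer;\n\t//wire it through the analyser and on to the speakers\n\tcreateAnalyser(source);\n\t//start playback from the beginning of the buffer\n\tsource.start(0);\n}\n\nCall playSound with the buffer we get back from decodeAudioData and the analyser will start receiving data as the file plays.\n\n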
The sample rate is good to bear in mind when you\u2019re thinking about your resolution of frequencies.\n\nvar sampleRate = audioContext.sampleRate;\n\nLet\u2019s say your file sample rate is 48,000Hz, making the maximum frequency in the file 24,000Hz (thanks to a wonderful theorem from Dr Harry Nyquist, the maximum frequency in the file is always half the sample rate). The analyser array we\u2019re creating will contain frequencies up to this point. This is ideal as the human ear hears the range 0\u201320,000Hz.\n\nSo, if we create an array which has 2,400 items, each frequency recorded will be 10Hz apart. However, we are going to create an array which is half the size of the FFT (fast Fourier transform). The default fftSize is 2,048, so our array will have 1,024 items. You can set the FFT size via the fftSize property.\n\n//set our FFT size\nanalyser.fftSize = 2048;\n//create an empty array with 1024 items\nvar frequencyData = new Uint8Array(1024);\n\nSo, with an array of 1,024 items, and a frequency range of 24,000Hz, we know each item is 24,000 \u00f7 1,024 = 23.44Hz apart.\n\nThe thing is, we also want that array to be updated constantly. We could use the setInterval or setTimeout methods for this; however, I prefer the new and shiny requestAnimationFrame.\n\nfunction update() {\n\t//constantly getting feedback from data\n\trequestAnimationFrame(update);\n\tanalyser.getByteFrequencyData(frequencyData);\n}\n\n Putting it all together\n\nSweet sticks! Now we have an array of frequencies from the sound we loaded, updating as the sound plays. Now we want that data to trigger our animation from earlier.\n\nWe can easily pause and run our CSS animation from JavaScript:\n\nelement.style.animationPlayState = \"paused\";\nelement.style.animationPlayState = \"running\";\n\nUnfortunately, this may not be ideal as our animation might be a whole heap longer than just a flashing light. We may want to target specific points within that animation to have it stop and start in a visually pleasing way and perhaps not smack bang in the middle.\n\nThere is no really easy way to do this at the moment as Zach Saucier explains in this wonderful article. It takes some jiggery pokery with setInterval to try to ascertain how far through the CSS animation you are in percentage terms.\n\nThis seems a bit much for our proof of concept, so let\u2019s backtrack a little. We know by the animation we\u2019ve created which CSS properties we want to change. This is pretty easy to do directly with JavaScript.\n\nelement.style.opacity = \"1\";\nelement.style.opacity = \"0.2\";\n\nSo let\u2019s start putting it all together. For this example I want to trigger each light as a different frequency plays. For this, I\u2019ll loop through the HTML elements and change the opacity style if the frequency gain goes over a certain threshold.\n\n//get light elements\nvar lights = document.getElementsByTagName('i');\nvar totalLights = lights.length;\n\nfor (var i = 0; i < totalLights; i++) {\n //if the gain for this frequency is over the threshold, light it up\n if (frequencyData[i] > 160) {\n //start animation on element\n lights[i].style.opacity = \"1\";\n } else {\n lights[i].style.opacity = \"0.2\";\n }\n}\n\nSee all the code in action here. I suggest viewing in a modern browser :)\n\nAwesome! It is true \u2014 we can VJ in our browser!\n\nLet\u2019s dance!\n\nSo, let\u2019s start to expand this simple example. First, I feel the need to make lots of lights, rather than just a few. 
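\n\nRather than hand-coding dozens of i elements, we can generate them. Here\u2019s a quick sketch; the lightRig container id and the count of 50 are arbitrary choices for illustration, not part of the original demo:\n\n//fill a container with i elements, one per light\nvar lightRig = document.getElementById('lightRig');\nfor (var j = 0; j < 50; j++) {\n lightRig.appendChild(document.createElement('i'));\n}\n\nEach generated i element picks up the same CSS as before, and the loop we wrote earlier will happily run over however many lights end up on the page.\n\n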
Also, maybe we should try a sound file more suited to gigs or clubs.\n\nCheck it out!\n\nI don\u2019t know about you, but I\u2019m pretty excited \u2014 that\u2019s just a bit of HTML, CSS and JavaScript!\n\nThe other thing to think about, of course, is the sound that you would get at a venue. We don\u2019t want to load sound from a file, but rather pick up on what is playing in real time. The easiest way to do this, I\u2019ve found, is to capture what my laptop\u2019s mic is picking up and piping that back into the audio context. We can do this by using getUserMedia.\n\nLet\u2019s include this in this demo. If you make some noise while viewing the demo, the lights will start to flash.\n\n And relax :)\n\nThere you have it. Sit back, play some music and enjoy the Winamp like experience in front of you.\n\nSo, where do we go from here? I already have a wealth of ideas. We haven\u2019t started with canvas, SVG or the 3-D features of CSS. There are other things we can detect from the audio as well. And yes, OK, it\u2019s questionable whether the browser is the best environment for this. For one, I\u2019m using a whole bunch of nonsensical HTML elements (maybe each animation could be held within a web component in the future). But hey, it\u2019s fun, and it looks cool and sometimes I think it\u2019s OK to just dance.", "year": "2013", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2013-12-02T00:00:00+00:00", "url": "https://24ways.org/2013/make-your-browser-dance/", "topic": "code"} {"rowid": 70, "title": "Bringing Your Code to the Streets", "contents": "\u2014 or How to Be a Street VJ\nOur amazing world of web code is escaping out of the browser at an alarming rate and appearing in every aspect of the environment around us. Over the past few years we\u2019ve already seen JavaScript used server-side, hardware coded with JavaScript, a rise of native style and desktop apps created with HTML, CSS and JavaScript, and even virtual reality (VR) is getting its fair share of front-end goodness.\nYou can go ahead and play with JavaScript-powered hardware such as the Tessel or the Espruino to name a couple. Just check out the Tessel project page to see JavaScript in the world of coffee roasting or sleep tracking your pet. With the rise of the internet of things, JavaScript can be seen collecting information on flooding among other things. And if that\u2019s not enough \u2018outside the browser\u2019 implementations, Node.js servers can even be found in aircraft!\nI previously mentioned VR and with three.js\u2019s extra StereoEffect.js module it\u2019s relatively simple to get browser 3D goodness to be Google Cardboard-ready, and thus set the stage for all things JavaScript and VR. It\u2019s been pretty popular in the art world too, with interactive works such as Seb Lee-Delisle\u2019s Lunar Trails installation, featuring the old arcade game Lunar Lander, which you can now play in your browser while others watch (it is the web after all). The Science Museum in London held Chrome Web Lab, an interactive exhibition featuring five experiments, showcasing the magic of the web. And it\u2019s not even the connectivity of the web that\u2019s being showcased; we can even take things offline and use web code for amazing things, such as fighting Ebola.\nOne thing is for sure, JavaScript is awesome. Hell, if you believe those telly programs (as we all do), JavaScript can even take down the stock market, purely through the witchcraft of canvas! 
Go JavaScript!\nNow it\u2019s our turn\nSo I wanted to create a little project influenced by this theme, and as it\u2019s Christmas, take it to the streets for a little bit of party fun! Something that could take code anywhere. Here\u2019s how I made a portable visual projection pack and a piece of video mixing software, and created some web-coded street art.\nStep one: The equipment\nYou will need:\n\nOne laptop: with HDMI output and a modern browser installed, such as Google Chrome.\nOne battery-powered mini projector: I\u2019ve used a Texas Instruments DLP; for its 120 lumens it was the best cost-to-lumens ratio I could find.\nOne MIDI controller (optional): mine is an ICON iDJ as it suits mixing visuals. However, there is more affordable hardware on the market such as an Akai LPD8 or a Korg nanoPAD2. As you\u2019ll see in the article, this is optional as it can be emulated within the software.\nA case to carry it all around in.\n\n\nStep two: The software\nThe projected visuals, I imagined, could be anything you can create within a browser, whether that be simple HTML and CSS, images, videos, SVG or canvas. The only requirement I have is that they move or change with sound and that I can mix any one visual into another.\nYou may remember a couple of years ago I created a demo on this very site, allowing audio-triggered visuals from the ambient sounds your device mic was picking up. That was a great starting point \u2013 I used that exact method to pick up the audio and thus the first requirement was complete. If you want to see some more examples of visuals I\u2019ve put together for this, there\u2019s a showcase on CodePen.\nThe second requirement took a little more thought. I needed two screens, which could at any point show any of the visuals I had coded, but could be mixed from one into the other and back again. So let\u2019s start with two divs, both absolutely positioned so they\u2019re on top of each other, but at the start the second screen\u2019s opacity is set to zero.\nNow all we need is a slider, which when moved from one side to the other slowly sets the second screen\u2019s opacity to 1, thereby fading it in.\nSee the Pen Mixing Screens (Software Version) by Rumyra (@Rumyra) on CodePen.\nMixing Screens (CodePen)\n\nAs you saw above, I have a MIDI controller and although the software method works great, I\u2019d quite like to make use of this nifty piece of kit. That\u2019s easily done with the Web MIDI API. All I need to do is call it, and when I move one of the sliders on the controller (I\u2019ve allocated the big cross fader in the middle for this), pick up on the change of value and use that to control the opacity instead.\nvar midi, data;\n// start talking to MIDI controller\nif (navigator.requestMIDIAccess) {\n navigator.requestMIDIAccess({\n sysex: false\n }).then(onMIDISuccess, onMIDIFailure);\n} else {\n alert(\"No MIDI support in your browser.\");\n}\n\n// on success\nfunction onMIDISuccess(midiData) {\n // this is all our MIDI data\n midi = midiData;\n\n var allInputs = midi.inputs.values();\n // loop over all available inputs and listen for any MIDI input\n for (var input = allInputs.next(); input && !input.done; input = allInputs.next()) {\n // when a MIDI value is received call the onMIDIMessage function\n input.value.onmidimessage = onMIDIMessage;\n }\n}\n\nfunction onMIDIMessage(message) {\n // data comes in the form [command/channel, note, velocity]\n data = message.data;\n\n // Opacity change for screen. 
The cross fader values are [176, 8, {0-127}]\n if ( (data[0] === 176) && (data[1] === 8) ) {\n // this value will change as the fader is moved\n var opacity = data[2]/127;\n screenTwo.style.opacity = opacity;\n }\n}\n\nThe final code was slightly more complicated than this, as I decided to switch the two screens based on the frequencies of the sound that was playing, and use the cross fader to depict the frequency threshold value. This meant they flickered in and out of each other, rather than just faded. There\u2019s a very rough-and-ready first version of the software on GitHub.\nPhew, Great! Now we need to get all this to the streets!\nStep three: Portable kit\nDid you notice how I mentioned a case to carry it all around in? I wanted the case to be morphable, so I could use the equipment from it too, a sort of bag-to-usherette-tray-type affair. Well, I had an unused laptop bag\u2026\n\nI strengthened it with some MDF, so when I opened the bag it would hold like a tray where the laptop and MIDI controller would sit. The projector was Velcroed to the external pocket of the bag, so when it was a tray it would project from underneath. I added two durable straps, one for my shoulders and one round my waist, both attached to the bag itself. There was a lot of cutting and trimming. As it was a laptop bag it was pretty thick to start and sewing was tricky. However, I only broke one sewing machine needle; I\u2019ve been known to break more working with leather, so I figured I was doing well. By the way, you can actually buy usherette trays, but I just couldn\u2019t resist hacking my own :)\nStep four: Take to the streets\nFirst, make sure everything is charged \u2013 everything \u2013 a lot! The laptop has to power both the MIDI controller and the projector, and although I have a mobile phone battery booster pack, that\u2019ll only charge the projector should it run out. I estimated I could get a good hour of visual artistry before I needed to worry, though.\nI had a couple of ideas about time of day and location. Here in the UK at this time of year, it gets dark around half past four, so I could easily head out in a city around 5pm and it would be dark enough for the projections to be seen pretty well. I chose Bristol, around the waterfront, as there were some interesting locations to try it out in. The best was Millennium Square: busy but not crowded and plenty of surfaces to try projecting on to.\nMy first time out with the portable audio/visual pack (PAVP as it will now be named) was brilliant. I played music and projected visuals, like a one-woman band of A/V!\n\n\nYou might be thinking what the point of this was, besides, of course, it being a bit of fun. Well, this project got me to look at canvas and SVG more closely. The Web MIDI API was really interesting; MIDI as a data format has some great practical uses. I think without our side projects we may not have all these wonderful uses for our everyday code. Not only do they remind us coding can, and should, be fun, they also help us learn and grow as makers.\nMy favourite part? When I was projecting into a water feature in Millennium Square. For those who are familiar, you\u2019ll know it\u2019s like a wall of water so it produced a superb effect. I drew quite a crowd and a kid came to stand next to me and all I could hear him say with enthusiasm was, \u2018Oh wow! That\u2019s so cool!\u2019\nYes\u2026 yes, kid, it was cool. 
Making things with code is cool.\nMassive thanks to the lovely Drew McLellan for his incredibly well-directed photography, and also Simon Johnson who took a great hand in perfecting the kit while it was attached.", "year": "2015", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2015-12-06T00:00:00+00:00", "url": "https://24ways.org/2015/bringing-your-code-to-the-streets/", "topic": "code"} {"rowid": 248, "title": "How to Use Audio on the Web", "contents": "I know what you\u2019re thinking. I never never want to hear sound anywhere near a browser, ever ever, wow! \ud83d\ude49\nYou\u2019re having flashbacks, flashbacks to the days of yore, when we had a element and yup did everyone think that was the most rad thing since . I mean put those two together with a , only use CSS colour names, make sure your borders were all set to ridge and you\u2019ve got yourself the neatest website since 1998.\nThe sound played when the website loaded and you could play a MIDI file as well! Everyone could hear that wicked digital track you chose. Oh, surfing was gnarly back then.\nYes it is 2018, the end of it in fact, soon to be 2019. We are certainly living in the future. Hoverboards self driving cars, holodecks VR headsets, rocket boots drone racing, sound on websites get real, Ruth.\nWe can\u2019t help but be jaded, even though the element is deprecated, and the autoplay policy appeared this year. Although still in its infancy, the policy \u201ccontrols when video and audio is allowed to autoplay\u201d, which should reduce the somewhat obtrusive playing of sound when a website or app loads in the future.\nBut then of course comes the question, having lived in a muted present for so long, where and why would you use audio?\n\u2728 Showcase Time \u2728\nThere are some incredible uses of audio on websites today. This is my personal favourite: futurelibrary.no, a site from Norway chronicling books that have been published from a forest of trees planted precisely for the books themselves. The sound effects are lovely, adding to the overall experience.\nfuturelibrary.no\nAnother site that executes this well is pottermore.com. The Hogwarts WebGL simulation uses both sound effects and ambient background music and gives a great experience. The button hovers are particularly good.\npottermore.com\nEighty-six and a half years is a beautiful narrative site, documenting the musings of an eighty-six-and-a-half-year-old man. The background music playing on this site is not offensive; it adds to the experience.\nEighty-six and a half years\nSound can be powerful and in some cases useful. Last year I wrote about using sound to help validate forms. Audiochart is a library which \u201callows the user to explore charts on web pages using sound and the keyboard\u201d. Ben Byford recorded voice descriptions of the pages on his website for playback should you need or want it. There is a whole area of accessibility to be explored here.\nThen there\u2019s education. Fancy beginning with some piano in the new year? flowkey.com is a website which allows you to play along and learn at the same time. Need to brush up on your music theory? lightnote.co takes you through lessons to do just that, all audio enhanced. Electronic music more your thing? Ableton has your back with learningmusic.ableton.com, a site which takes you through the process of composing electronic music. 
A website, all made possible through the powers we have with the Web Audio API today.\nlightnote.co\nlearningmusic.ableton.com\nConsiderations\nYes, \u2019tis the season, let\u2019s be more thoughtful about our audio. There are some user experience patterns to begin with. 86andahalfyears.com tells the user they are about to \u2018enter\u2019 the site and headphones are recommended. This is a good approach because it a) deals with the autoplay policy (audio needs to be instigated by a user gesture) and b) by stating headphones are recommended you are setting the user\u2019s expectations: they will expect sound, and if in a public setting they can enlist the use of a common electronic device to cause less embarrassment.\nEighty-six and a half years\nAllowing mute and/or volume control clearly within the user interface is a good idea. It won\u2019t draw the user out of the experience; it\u2019ll give the user more control over what audio they want to hear (they may not want to turn down the volume of their entire device), and it takes less thought to reach for a very visible volume control than to fumble with device settings.\nIndicating that sound is playing is also something to consider. Browsers do this by adding icons to tabs, but this isn\u2019t always the first place everyone looks.\nTo The Future\nSo let\u2019s go!\nWe see amazing demos built with Web Audio, and I\u2019m sure, like me, you think: oh wow, I wish I could do that / had thought of that / knew the first thing about audio to begin to even conceive that.\nBut audio doesn\u2019t actually need to be all bells and whistles (hey, it\u2019s Christmas). Starting, stopping and adjusting simple panning and volume might be all you need to get started introducing some good sound design into your web design.\nIsn\u2019t it great then that there\u2019s a tutorial just for that! Head on over to the MDN Web Audio API docs where the Using the Web Audio API article takes you through playing and pausing sounds, volume control and simple panning (moving the sound from left to right on stereo speakers).\nThis year I believe we have all experienced the web as a shopping mall more than ever. It\u2019s shining store fronts, flashing adverts, fast food, loud noises.\nLet\u2019s use 2019 to create more forests to explore, oceans to dive and mountains to climb.", "year": "2018", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2018-12-22T00:00:00+00:00", "url": "https://24ways.org/2018/how-to-use-audio-on-the-web/", "topic": "design"}