{"rowid": 290, "title": "Creating a Weekly Research Cadence", "contents": "Working on a product team, it\u2019s easy to get hyper-focused on building features and lose sight of your users and their daily challenges. User research can be time-consuming to set up, so it often becomes ad-hoc and irregular, only performed in response to a particular question or concern. But without frequent touch points and opportunities for discovery, your product will stagnate and become less and less relevant. Setting up an efficient cadence of weekly research conversations will re-focus your team on user problems and provide a steady stream of insights for product development.\nAs my team transitioned into a Lean process earlier this year, we needed a way to get more feedback from users in a short amount of time. Our users are internet marketers\u2014always busy and often difficult to reach. Scheduling research took days of emailing back and forth to find mutually agreeable times, and juggling one-off conversations made it difficult to connect with more than one or two people per week. The slow pace of research was allowing additional risk to creep into our product development.\nI wanted to find a way for our team to test ideas and validate assumptions sooner and more often\u2014but without increasing the administrative burden of scheduling. The solution: creating a regular cadence of research and testing that required a minimum of effort to coordinate. \nSetting up a weekly user research cadence accelerated our learning and built momentum behind strategic experiments. By dedicating time every week to talk to a few users, we made ongoing research a painless part of every weekly sprint. But increasing the frequency of our research had other benefits as well. With only five working days between sessions, a weekly cadence forced us to keep our work small and iterative. Committing to testing something every week meant showing work earlier and more often than we might have preferred\u2014pushing us out of your comfort zone into a process of more rapid experimentation.\nBest of all, frequent conversations with users helped us become more customer-focused. After just a few weeks in a consistent research cadence, I noticed user feedback weaving itself through our planning and strategy sessions. Comments like \u201cRemember what Jenna said last week, about not being able to customize her lists?\u201d would pop up as frequent reference points to guide our decisions. As discussions become less about subjective opinions and more about responding to user needs, we saw immediate improvement in the quality of our solutions.\nEstablishing an efficient recruitment process\nThe key to creating a regular cadence of ongoing user research is an efficient recruitment and scheduling process\u2014along with a commitment to prioritize the time needed for research conversations. This is an invaluable tool for product teams (whether or not they follow a Lean process), but could easily be adapted for content strategy teams, agency teams, a UX team of one, or any other project that would benefit from short, frequent conversations with users. \nThe process I use requires a few hours of setup time at the beginning, but pays off in better learning and better releases over the long run. 
Almost any team could use this as a starting point and adapt it to their own needs.\nPick a dedicated time each week for research\nIn order to make research a priority, we started by choosing a time each week when everyone on the product team was available. Between stand-ups, grooming sessions, and roadmap reviews, it wasn\u2019t easy to do! Nevertheless, it\u2019s important to include as many people as possible in conversations with your users. Getting a second-hand summary of research results doesn\u2019t have the same impact as hearing someone describe their frustrations and concerns first-hand. The more people in the room to hear those concerns, the more likely they are to become priorities for your team.\nI blocked off 2 hours for research conversations every Thursday afternoon. We make this time sacred, and never schedule other meetings or work across those hours. \nDivide your time into several research slots\nAfter my weekly cadence was set, I divided the time into four 20-minute time slots. Twenty minutes is long enough for us to ask several open-ended questions or get feedback on a prototype, without being a burden on our users\u2019 busy schedules. Depending on your work, you may need schedule longer sessions\u2014but beware the urge to create blocks that last an hour or more. A weekly research cadence is designed to facilitate rapid, ongoing feedback and testing; it should force you to talk to users often and to keep your work small and iterative. Projects that require longer, more in-depth testing will probably need a dedicated research project of their own.\nI used the scheduling software Calendly to create interview appointments on a calendar that I can share with users, and customized the confirmation and reminder emails with information about how to access our video conferencing software. (Most of our research is done remotely, but this could be set up with details for in-person meetings as well.) Automating these emails and reminders took a little bit of time to set up, but was worth it for how much faster it made the process overall.\n\n\nInvite users to sign up for a time that\u2019s convenient for them\nWith a calendar set up and follow-up emails automated, it becomes incredibly easy to schedule research conversations. Each week, I send a short email out to a small group of users inviting them to participate, explaining that this is a chance to provide feedback that will improve our product or occasionally promoting the opportunity to get a sneak peek at new features we\u2019re working on. The email includes a link to the Calendly appointments, allowing users who are interested to opt in to a time that fits their schedule.\nSetting up appointments the first go around involved a bit of educated guessing. How many invitations would it take to fill all four of my weekly slots? How far in advance did I need to recruit users? But after a few weeks of trial and error, I found that sending 12-16 invitations usually allows me to fill all four interview slots. Our users often have meetings pop up at short notice, so we get the best results when I send the recruiting email on Tuesday, two days before my research block.\nIt may take a bit of experimentation to fine tune your process, but it\u2019s worth the effort to get it right. (The worst thing that\u2019s happened since I began recruiting this way was receiving emails from users complaining that there were no open slots available!) 
I can now fill most of an afternoon with back-to-back user research sessions just by sending one or two emails each week, increasing our research pace while leaving plenty of time to focus on discovery and design.\nGetting the most out of your research sessions\nAs you get comfortable with the rhythm of talking to users each week, you\u2019ll find more and more ways to get value out of your conversations. At first, you may prefer to just show work in progress\u2014such as mockups or a simple prototype\u2014and ask open-ended questions to measure user reaction. When you begin new projects, you may want to use this time to research behavior on existing features\u2014either watching participants as they use part of your product or asking them to give an account of a recent experience in your app. You may even want to run more abstracted Lean experiments, if that\u2019s the best way to validate the assumptions your team is working from.\nWhatever you do, plan some time a day or two later to come back together and review what you\u2019ve learned each week. Synthesizing research outcomes as a group will help keep your team in alignment and allow each person to highlight what they took away from each conversation. \nOver time, you may find that the pace of weekly user research becomes more exhausting than energizing, especially if the responsibility for scheduling and planning falls on just one person. Don\u2019t allow yourself to get burned out; a healthy research cadence should also include time to rest and reflect if the pace becomes too rapid to sustain. Take breaks as needed, then pick up the pace again as soon as you\u2019re ready.", "year": "2016", "author": "Wren Lanier", "author_slug": "wrenlanier", "published": "2016-12-02T00:00:00+00:00", "url": "https://24ways.org/2016/creating-a-weekly-research-cadence/", "topic": "ux"} {"rowid": 300, "title": "Taking Device Orientation for a Spin", "contents": "When The Police sang \u201cDon\u2019t Stand So Close To Me\u201d they weren\u2019t talking about using a smartphone to view a panoramic image on Facebook, but they could have been. For years, technology has driven relentlessly towards devices we can carry around in our pockets, and now that we\u2019re there, we\u2019re expected to take the thing out of our pocket and wave it around in front of our faces like a psychotic donkey in search of its own dangly carrot.\nBut if you can\u2019t beat them, join them.\nA brave new world\nA couple of years back all sorts of specs for new HTML5 APIs sprang up much to our collective glee. Emboldened, we ran a few tests and found they basically didn\u2019t work in anything and went off disheartened into the corner for a bit of a sob.\nTurns out, while we were all busy boohooing, those browser boffins have actually been doing some work, and lo and behold, some of these APIs are even half usable. Mostly literally half usable\u2014we\u2019re still talking about browsers, after all.\nNow, of course they\u2019re all a bit JavaScripty and are going to involve complex methods and maths and science and probably about a thousand dependencies from Github that will fall out of fashion while we\u2019re still trying to locate the documentation, right? Well, no! \nSo what if we actually wanted to use one of these APIs, say to impress our friends with our ability to make them wave their phones in front of their faces (because no one enjoys looking hapless more than the easily-technologically-impressed), how could we do something like that?
Let\u2019s find out.\nThe Device Orientation API\nThe phone-wavy API is more formally known as the DeviceOrientation Event Specification. It does a bunch of stuff that basically doesn\u2019t work, but also gives us three values that represent orientation of a device (a phone, a tablet, probably not a desktop computer) around its x, y and z axes. You might think of it as pitch, roll and yaw if you like to spend your weekends wearing goggles and a leather hat.\nThe main way we access these values is through an event listener, which can inform our code every time the value changes. Which is constantly, because you try and hold a phone still and then try and hold the Earth still too.\nThe API calls those pitch, roll and yaw values alpha, beta and gamma. Chocks away:\nwindow.addEventListener('deviceorientation', function(e) {\n console.log(e.alpha);\n console.log(e.beta);\n console.log(e.gamma);\n});\nIf you look at this test page on your phone, you should be able to see the numbers change as you twirl the thing around your body like the dance partner you never had. Wrist strap recommended.\nOne important note\nLike many of these newfangled APIs, Device Orientation is only available over HTTPS. We\u2019re not allowed to have too much fun without protection, so make sure that you\u2019re working on a secure line. I\u2019ve found a quick and easy way to share my local dev environment over TLS with my devices is to use an ngrok tunnel.\nngrok http -host-header=rewrite mylocaldevsite.dev:80\nngrok will then set up a tunnel to your dev site with both HTTP and HTTPS URL options. You, of course, want the HTTPS option.\nRight, where were we?\nMake something to look at\nIt\u2019s all well and good having a bunch of numbers, but they\u2019re no use unless we do something with them. Something creative. Something to inspire the generations. Or we could just build that Facebook panoramic image viewer thing (because most of us are familiar with it and we\u2019re not trying to be too clever here). Yeah, let\u2019s just build one of those.\nOur basic framework is going to be similar to that used for an image carousel. We have a container, constrained in size, and CSS overflow property set to hidden. Into this we place our wide content and use positioning to move the content back and forth behind the \u2018window\u2019 so that the part we want to show is visible.\nHere it is mocked up with a slider to set the position. When you release the slider, the position updates. (This actually tests best on desktop with your window slightly narrowed.)\nThe details of the slider aren\u2019t important (we\u2019re about to replace it with phone-wavy goodness) but the crucial part is that moving the slider results in a function call to position the image. This takes a percentage value (0-100) with 0 being far left and 100 being far right (or \u2018alt-nazi\u2019 or whatever).\nvar position_image = function(percent) {\n var pos = (img_W / 100)*percent;\n img.style.transform = 'translate(-'+pos+'px)'; \n};\nAll this does is figure out what that percentage means in terms of the image width, and set the transform: translate(\u2026); CSS property to move the image. (We use translate because it might be a bit faster to animate than left/right positioning.)\nOk. We can now read the orientation values from our device, and we can programmatically position the image. What we need to do is figure out how to convert those raw orientation values into a nice tidy percentage to pass to our function and we\u2019re done.
(We\u2019re so not done.)\nThe maths bit\nIf we go back to our raw values test page and make-believe that we have a fascinating panoramic image of some far-off beach or historic monument to look at, you\u2019ll note that the main value that is changing as we swing back and forth is the \u2018alpha\u2019 value. That\u2019s the one we want to track.\nAs our goal here is hey, these APIs are interesting and fun and not let\u2019s build the world\u2019s best panoramic image viewer, we\u2019ll start by making a few assumptions and simplifications:\n\nWhen the image loads, we\u2019ll centre the image and take the current nose-forward orientation reading as the middle.\nMoving left, we\u2019ll track to the left of the image (lower percentage).\nMoving right, we\u2019ll track to the right (higher percentage).\nIf the user spins round, does cartwheels or loads the page then hops on a plane and switches earthly hemispheres, they\u2019re on their own.\n\nNose-forward\nWhen the page loads, the initial value of alpha gives us our nose-forward position. In Safari on iOS, this is normalised to always be 0, whereas most everywhere else it tends to be bound to pointy-uppy north. That doesn\u2019t really matter to us, as we don\u2019t know which direction the user might be facing in anyway \u2014 we just need to record that initial state and then use it to compare any new readings.\nvar initial_position = null;\n\nwindow.addEventListener('deviceorientation', function(e) {\n if (initial_position === null) {\n initial_position = Math.floor(e.alpha);\n };\n\n var current_position = initial_position - Math.floor(e.alpha);\n});\n(I\u2019m rounding down the values with Math.floor() to make debugging easier - we\u2019ll take out the rounding later.)\nWe get our initial position if it\u2019s not yet been set, and then calculate the current position as a difference between the new value and the stored one.\nThese values are weird\nOne thing you need to know about these values, is that they range from 0 to 360 but then you also get weird left-of-zero values like -2 and whatever. And they wrap past 360 back to zero as you\u2019d expect if you do a forward roll.\nWhat I\u2019m interested in is working out my rotation. If 0 is my nose-forward position, I want a positive value as I turn right, and a negative value as I turn left. That puts the awkward 360-tipping point right behind the user where they can\u2019t see it.\nvar rotation = current_position;\nif (current_position > 180) rotation = current_position-360;\nWhich way up?\nSince we\u2019re talking about orientation, we need to remember that the values are going to be different if the device is held in portrait on landscape mode. See for yourself - wiggle it like a steering wheel and you get different values. That\u2019s easy to account for when you know which way up the device is, but in true browser style, the API for that bit isn\u2019t well supported. The best I can come up with is:\nvar screen_portrait = false;\nif (window.innerWidth < window.innerHeight) {\n screen_portrait = true;\n}\nIt works. Then we can use screen_portrait to branch our code:\nif (screen_portrait) {\n if (current_position > 180) rotation = current_position-360;\n} else {\n if (current_position < -180) rotation = 360+current_position;\n}\nHere\u2019s the code in action so you can see the values for yourself. 
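(If it helps to see it all in one place, here\u2019s a rough consolidation of the listener so far. Treat it as a sketch rather than the demo\u2019s exact source: it simply stitches together the snippets above, with the portrait check read once when the script runs.)\nvar initial_position = null;\n\n// Is the device being held in portrait? (read once, at load)\nvar screen_portrait = false;\nif (window.innerWidth < window.innerHeight) {\n screen_portrait = true;\n}\n\nwindow.addEventListener('deviceorientation', function(e) {\n // The first reading becomes our nose-forward reference point\n if (initial_position === null) {\n initial_position = Math.floor(e.alpha);\n };\n\n var current_position = initial_position - Math.floor(e.alpha);\n\n // Positive values mean turning right, negative mean turning left\n var rotation = current_position;\n if (screen_portrait) {\n if (current_position > 180) rotation = current_position-360;\n } else {\n if (current_position < -180) rotation = 360+current_position;\n }\n\n console.log(rotation);\n});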
If you change screen orientation you\u2019ll need to refresh the page (it\u2019s a demo!).\nLimiting rotation\nNow, while the youth of today are rarely seen without a phone in their hands, it would still be unreasonable to ask them to spin through 360\u00b0 to view a photo. Instead, we need to limit the range of movement to something like 60\u00b0-from-nose in either direction and normalise our values to pan the entire image across that 120\u00b0 range. -60 would be full-left (0%) and 60 would be full-right (100%).\nIf we set max_rotation = 60, that code ends up looking like this:\nif (rotation > max_rotation) rotation = max_rotation;\nif (rotation < (0-max_rotation)) rotation = 0-max_rotation;\n\nvar percent = Math.floor(((rotation + max_rotation)/(max_rotation*2))*100);\nWe should now be able to get a rotation from -60\u00b0 to +60\u00b0 expressed as a percentage. Try it for yourself.\nThe big reveal\nAll that\u2019s left to do is pass that percentage to our image positioning function and would you believe it, it might actually work.\nposition_image(percent);\nYou can see the final result and take it for a spin. Literally.\nSo what have we made here? Have we built some highly technical panoramic image viewer to aid surgeons during life-saving operations using only JavaScript and some slightly questionable mathematics? No, my friends, we have not. Far from it. \nWhat we have made is progress. We\u2019ve taken a relatively newly available hardware API and a bit of simple JavaScript and paired it with existing CSS knowledge and made something that we didn\u2019t have this morning. Something we probably didn\u2019t even want this morning. Something that if you take a couple of steps back and squint a bit might be a prototype for something vaguely interesting. But more importantly, we\u2019ve learned that our browsers are just a little bit more capable than we thought.\nThe web platform is maturing rapidly. There are new, relatively unexplored APIs for doing all sorts of crazy thing that are often dismissed as the preserve of native apps. Like some sort of app marmalade. Poppycock. \nThe web is an amazing, exciting place to create things. All it takes is some base knowledge of the fundamentals, a creative mind and a willingness to learn. We have those! So let\u2019s create things.", "year": "2016", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2016-12-24T00:00:00+00:00", "url": "https://24ways.org/2016/taking-device-orientation-for-a-spin/", "topic": "code"} {"rowid": 293, "title": "A Favor for Your Future Self", "contents": "We tend to think about the future when we build things. What might we want to be able to add later? How can we refactor this down the road? Will this be easy to maintain in six months, a year, two years? As best we can, we try to think about the what-ifs, and build our websites, systems, and applications with this lens. \nWe comment our code to explain what we knew at the time and how that impacted how we built something. We add to-dos to the things we want to change. These are all great things! Whether or not we come back to those to-dos, refactor that one thing, or add new features, we put in a bit of effort up front just in case to give us a bit of safety later.\nI want to talk about a situation that Past Alicia and Team couldn\u2019t even foresee or plan for. Recently, the startup I was a part of had to remove large sections of our website. Not just content, but entire pages and functionality. 
It wasn\u2019t a very pleasant experience, not only for the reason why we had to remove so much of what we had built, but also because it\u2019s the ultimate \u201cI really hope this doesn\u2019t break something else\u201d situation. It was a stressful and tedious effort of triple checking that the things we were removing weren\u2019t dependencies elsewhere. To be honest, we wouldn\u2019t have been able to do this with any amount of success or confidence without our test suite.\nWriting tests for code is one of those things that developers really, really don\u2019t want to do. It\u2019s one of the easiest things to cut in the development process, and there\u2019s often a struggle to have developers start writing tests in the first place. One of the best lessons the web has taught us is that we can\u2019t, in good faith, trust the happy path. We must make sure ourselves, and our users, aren\u2019t in a tough spot later on because we only thought of the best case scenarios.\nJavaScript\nRegardless of your opinion on whether or not everything needs to be built primarily with JavaScript, if you\u2019re choosing to build a JavaScript heavy app, you absolutely should be writing some combination of unit and integration tests.\nUnit tests are for testing extremely isolated and small pieces of code, which we refer to as the units themselves. Great for reused functions and small, scoped areas, this is the closest you examine your code with the testing microscope. For example, if we were to build a calculator, the most minute piece we could test could be the basic operations.\n/*\n * This example uses a test framework called Jasmine\n */\n\ndescribe(\"Calculator Operations\", function () {\n\n it(\"Should add two numbers\", function () {\n\n // Say we have a calculator\n Calculator.init();\n\n // We can run the function that does our addition calculation...\n var result = Calculator.addNumbers(7,3);\n\n // ...and ensure we're getting the right output\n expect(result).toBe(10);\n\n });\n});\nEven though these teeny bits work in isolation, we should ensure that connecting the large pieces works, as well. This is where integration tests excel. These tests ensure that two or more different areas of code, that may not directly know about each other, still behave in expected ways. Let\u2019s build upon our calculator - we may want the operations to be saved in memory after a calculation runs. This isn\u2019t as suited for a unit test because there are a few other moving pieces involved in the process (the calculations, checking if the result was an error, etc.).\n it(\"Should remember the last calculation\", function () {\n\n // Run an operation\n Calculator.addNumbers(7,10);\n\n // Expect something else to have happened as a result\n expect(Calculator.updateCurrentValue).toHaveBeenCalled();\n expect(Calculator.currentValue).toBe(17);\n });\nUnit and integration tests provide assurance that your hand-rolled JavaScript should, for the most part, never fail in a grand fashion. Although it still might happen, you\u2019ll be able to catch problems way sooner than without a test suite, and hopefully never push those failures to your production environment.\nInterfaces\nRegardless of how you\u2019re building something, it most definitely has some kind of interface.
Whether you\u2019re using a very barebones structure, or you\u2019re leveraging a whole design system, these things can be tested as well.\nAcceptance testing helps us ensure that users can get from point A to point B within our web things, which can provide assurance that major features are always functioning properly. By simulating user input and data entry, we can go through whole user workflows to test for both success and failure scenarios. These are not necessarily for simulating edge-case scenarios, but rather ensuring that our core offerings are stable.\nFor example, if your site requires signup, you want to make sure the workflow is behaving as expected - allowing valid information to go through signup, while invalid information does not let you progress.\n/*\n * This example uses Jasmine along with an add-on called jasmine-integration\n */\n\ndescribe(\"Acceptance tests\", function () {\n\n // Go to our signup page\n var page = visit(\"/signup\");\n\n // Fill our signup form with invalid information\n page.fill_in(\"input[name='email']\", \"Not An Email\");\n page.fill_in(\"input[name='name']\", \"Alicia\");\n page.click(\"button[type=submit]\");\n\n // Check that we get an expected error message\n it(\"Shouldn't allow signup with invalid information\", function () {\n expect(page.find(\"#signupError\").hasClass(\"hidden\")).toBeFalsy();\n });\n\n // Now, fill our signup form with valid information\n page.fill_in(\"input[name='email']\", \"thisismyemail@gmail.com\");\n page.fill_in(\"input[name='name']\", \"Gerry\");\n page.click(\"button[type=submit]\");\n\n // Check that we get an expected success message and the error message is hidden\n it(\"Should allow signup with valid information\", function () {\n expect(page.find(\"#signupError\").hasClass(\"hidden\")).toBeTruthy();\n expect(page.find(\"#thankYouMessage\").hasClass(\"hidden\")).toBeFalsy();\n });\n});\nIn terms of visual design, we\u2019re now able to take snapshots of what our interfaces look like before and after any code changes to see what has changed. We call this visual regression testing. Rather than being a pass or fail test like our other examples thus far, this is more of an awareness test, intended to inform developers of all the visual differences that have occurred, intentional or not. Developers may accidentally introduce a styling change or fix that has unintended side effects on other areas of a website - visual regression testing helps us catch these sooner rather than later. These do require a bit more consistent grooming than other tests, but can be valuable in major CSS refactors or if your CSS is generally a bit like Jenga.\nTools like PhantomCSS will take screenshots of your pages, and do a visual comparison to check what has changed between two sets of images. 
The code would look something like this:\n/*\n * This example uses PhantomCSS\n */\n\ncasper.start(\"/home\").then(function(){\n\n // Initial state of form\n phantomcss.screenshot(\"#signUpForm\", \"sign up form\");\n\n // Hit the sign up button (should trigger error)\n casper.click(\"button#signUp\");\n\n // Take a screenshot of the UI component\n phantomcss.screenshot(\"#signUpForm\", \"sign up form error\");\n\n // Fill in form by name attributes & submit\n casper.fill(\"#signUpForm\", {\n name: \"Alicia Sedlock\",\n email: \"alicia@example.com\"\n }, true);\n\n // Take a second screenshot of success state\n phantomcss.screenshot(\"#signUpForm\", \"sign up form success\");\n});\nYou run this code before starting any development, to create your baseline set of screen captures. After you\u2019ve completed a batch of work, you run PhantomCSS again. This will create a second batch of screenshots, which are then put through an image comparison tool to display any differences that occurred. Say you changed your margins on our form elements \u2013 your image diff would look something like this:\n\nThis is a great tool for ensuring not just your site retains its expected styling, but it\u2019s also great for ensuring nothing accidentally changes in the living style guide or modular components you may have developed. It\u2019s hard to keep eagle eyes on every visual aspect of your site or app, so visual regression testing helps to keep these things monitored.\nConclusion\nThe shape and size of what you\u2019re testing for your site or app will vary. You may not need lots of unit or integration tests if you don\u2019t write a lot of JavaScript. You may not need visual regression testing for a one page site. It\u2019s important to assess your codebase to see which tests would provide the most benefit for you and your team.\nWriting tests isn\u2019t a joy for most developers, myself included. But I end up thanking Past Alicia a lot when there are tests, because otherwise I would have introduced a lot of issues into codebases. Shipping code that\u2019s broken breaks trust with our users, and it\u2019s our responsibility as developers to make sure that trust isn\u2019t broken. Testing shouldn\u2019t be considered a \u201cnice to have\u201d - it should be an integral piece of our workflow and our day-to-day job.", "year": "2016", "author": "Alicia Sedlock", "author_slug": "aliciasedlock", "published": "2016-12-03T00:00:00+00:00", "url": "https://24ways.org/2016/a-favor-for-your-future-self/", "topic": "code"} {"rowid": 301, "title": "Stretching Time", "contents": "Time is valuable. It\u2019s a precious commodity that, if we\u2019re not too careful, can slip effortlessly through our fingers. When we think about the resources at our disposal we\u2019re often guilty of forgetting the most valuable resource we have to hand: time.\nWe are all given an allocation of time from the time bank. 86,400 seconds a day to be precise, not a second more, not a second less.\nIt doesn\u2019t matter if we\u2019re rich or we\u2019re poor, no one can buy more time (and no one can save it). We are all, in this regard, equals. We all have the same opportunity to spend our time and use it to maximum effect. 
As such, we need to use our time wisely.\nI believe we can \u2018stretch\u2019 time, ensuring we make the most of every second and maximising the opportunities that time affords us.\nThrough a combination of \u2018Structured Procrastination\u2019 and \u2018Focused Finishing\u2019 we can open our eyes to all of the opportunities in the world around us, whilst ensuring that we deliver our best work precisely when it\u2019s required. A win win, I\u2019m sure you\u2019ll agree.\nStructured Procrastination\nI\u2019m a terrible procrastinator. I used to think that was a curse \u2013 \u201cWhy didn\u2019t I just get started earlier?\u201d \u2013 over time, however, I\u2019ve started to see procrastination as a valuable tool if it is used in a structured manner.\nDon Norman refers to procrastination as \u2018late binding\u2019 (a term I\u2019ve happily hijacked). As he argues, in Why Procrastination Is Good, late binding (delay, or procrastination) offers many benefits:\n\nDelaying decisions until the time for action is beneficial\u2026 it provides the maximum amount of time to think, plan, and determine alternatives.\n\nWe live in a world that is constantly changing and evolving, as such the best time to execute is often \u2018just in time\u2019. By delaying decisions until the last possible moment we can arrive at solutions that address the current reality more effectively, resulting in better outcomes.\nProcrastination isn\u2019t just useful from a project management perspective, however. It can also be useful for allowing your mind the space to wander, make new discoveries and find creative connections. By embracing structured procrastination we can \u2018prime the brain\u2019.\nAs James Webb Young argues, in A Technique for Producing Ideas, all ideas are made of other ideas and the more we fill our minds with other stimuli, the greater the number of creative opportunities we can uncover and bring to life.\nBy late binding, and availing of a lack of time pressure, you allow the mind space to breathe, enabling you to uncover elements that are important to the problem you\u2019re working on and, perhaps, discover other elements that will serve you well in future tasks.\nWhen setting forth upon the process of writing this article I consciously set aside time to explore. I allowed myself the opportunity to read, taking in new material, safe in the knowledge that what I discovered \u2013 if not useful for this article \u2013 would serve me well in the future. \nRon Burgundy summarises this neatly:\n\nProcrastinator? No. I just wait until the last second to do my work because I will be older, therefore wiser.\n\nAn \u2018older, therefore wiser\u2019 mind is a good thing. We\u2019re incredibly fortunate to live in a world where we have a wealth of information at our fingertips. Don\u2019t waste the opportunity to learn, rather embrace that opportunity. Make the most of every second to fill your mind with new material, the rewards will be ample.\nDeadlines are deadlines, however, and deadlines offer us the opportunity to focus our minds, bringing together the pieces of the puzzle we found during our structured procrastination.\nLike everyone I\u2019ll hear a tiny, but insistent voice in my head that starts to rise when the deadline is approaching. The older you get, the closer to the deadline that voice starts to chirp up.\nAt this point we need to focus.\nFocused Finishing\nWe live in an age of constant distraction. 
Smartphones are both a blessing and a curse, they keep us connected, but if we\u2019re not careful the constant connection they provide can interrupt our flow.\nWhen a deadline is accelerating towards us it\u2019s important to set aside the distractions and carve out a space where we can work in a clear and focused manner.\nWhen it\u2019s time to finish, it\u2019s important to avoid context switching and focus. All those micro-interactions throughout the day \u2013 triaging your emails, checking social media and browsing the web \u2013 can get in the way of you hitting your deadline. At this point, they\u2019re distractions.\nChunking tasks and managing when they\u2019re scheduled can improve your productivity by a surprising order of magnitude. At this point it\u2019s important to remove distractions which result in \u2018attention residue\u2019, where your mind is unable to focus on the current task, due to the mental residue of other, unrelated tasks.\nBy focusing on a single task in a focused manner, it\u2019s possible to minimise the negative impact of attention residue, allowing you to maximise your performance on the task at hand.\nCal Newport explores this in his excellent book, Deep Work, which I would highly recommend reading. As he puts it:\n\nEfforts to deepen your focus will struggle if you don\u2019t simultaneously wean your mind from a dependence on distraction.\n\nTo help you focus on finishing it\u2019s helpful to set up a work-focused environment that is purposefully free from distractions. There\u2019s a time and a place for structured procrastination, but \u2013 equally \u2013 there\u2019s a time and a place for focused finishing.\nThe French term \u2018mise en place\u2019 is drawn from the world of fine cuisine \u2013 I discovered it when I was procrastinating \u2013 and it\u2019s applicable in this context. The term translates as \u2018putting in place\u2019 or \u2018everything in its place\u2019 and it refers to the process of getting the workplace ready before cooking.\nJust like a professional chef organises their utensils and arranges their ingredients, so too can you.\nThanks to the magic of multiple users on computers, it\u2019s possible to create a separate user on your computer \u2013 without access to email and other social tools \u2013 so that you can switch to that account when you need to focus and hit the deadline.\nAnother, less technical way of achieving the same result \u2013 depending, of course, upon your line of work \u2013 is to close your computer and find some non-digital, unconnected space to work in.\nThe goal is to carve out time to focus so you can finish. As Newport states:\n\nIf you don\u2019t produce, you won\u2019t thrive \u2013 no matter how skilled or talented you are.\n\nProcrastination is fine, but only if it\u2019s accompanied by finishing. Create the space to finish and you\u2019ll enjoy the best of both worlds.\nIn closing\u2026\nThere is a time and a place for everything: there is a time to procrastinate, and a time to focus. To truly reap the rewards of time, the mind needs both.\nBy combining the processes of \u2018Structured Procrastination\u2019 and \u2018Focused Finishing\u2019 we can make the most of our 86,400 seconds a day, ensuring we are constantly primed to make new discoveries, but just as importantly, ensuring we hit the all-important deadlines.\nMake the most of your time, you only get so much. Use every second productively and you\u2019ll be thankful that you did. 
Don\u2019t waste your time, once it\u2019s gone, it\u2019s gone\u2026 and you can never get it back.", "year": "2016", "author": "Christopher Murphy", "author_slug": "christophermurphy", "published": "2016-12-21T00:00:00+00:00", "url": "https://24ways.org/2016/stretching-time/", "topic": "process"} {"rowid": 295, "title": "Internet of Stranger Things", "contents": "This year I\u2019ve been running a workshop about using JavaScript and Node.js to work with all different kinds of electronics on the Raspberry Pi. So especially for 24 ways I\u2019m going to show you how I made a very special Raspberry Pi based internet connected project! And nothing says Christmas quite like a set of fairy lights connected to another dimension1.\nWhat you\u2019ll see\nYou can rig up the fairy lights in your home, with the scrawly letters written under each one. The people from the other side (i.e. the internet) will be able to write messages to you from their browser in real time. In fact why not try it now; check this web page. When you click the lights in your browser, my lights (and yours) will turn on and off in real life! (There may be a queue if there are lots of people accessing it, hit the \u201cSend a message\u201d button and wait your turn.)\n\n\n\n\nIt\u2019s all done with JavaScript, using Node.js running on both the Raspberry Pi and on the server. I\u2019m using WebSockets to communicate in real time between the browser, server and Raspberry Pi.\nWhat you\u2019ll need\n\nRaspberry Pi any of the following models: Zero (will need straight male header pins soldered2 and Micro USB OTG adaptor), A+, B+, 2, or 3\nMicro SD card at least 4Gb Class 10 speed3\nMicro USB power supply at least 2A\nUSB Wifi dongle (unless you have a Pi 3 - that has wifi built in). \nAddressable fairy lights\nLogic level shifter (with pins soldered unless you want to do it!)\nBreadboard\nJumper wires (3x male to male and 4x female to male)\n\nOptional but recommended\n\nBase board to hold the Pi and Breadboard (often comes with a breadboard!)\n\nFind links for where to buy all of these items that goes along with this tutorial. The total price should be around $1004.\nSetting up the Raspberry Pi\nYou\u2019ll need to install the SD card for the Raspberry Pi. You\u2019ll find a link to download a disk image on the support document, ready-made with the Raspbian version of Linux, along with Node.js and all the files you need. Download it and write it to the SD card using the fantastic free software Etcher5. \nNext up you have to configure the wifi details on the SD card. If you plug the card into your computer you should see a drive called BOOT. There\u2019s a text file on there called wpa_supplicant.conf. Open it up in your favourite text editor and replace mywifi and mypassword with your wifi details6.\nnetwork={\n ssid=\"mywifi\"\n psk=\"mypassword\"\n}\nSave the file, eject the card from your computer and plug it into the Raspberry Pi. \nIf you have a base board or holder for the Raspberry Pi, attach it now. Then connect the wifi USB dongle7 and power supply, but don\u2019t plug it in yet!\nWiring!\nTime to wire everything up! \nFirst of all, push the Logic Level Converter into the middle of the breadboard:\n\nLogic Level Converter\nThe logic level converter may be labelled differently from the one in the diagram but the pins are usually exactly the same internally. I would just make sure the pins marked HV (High Voltage) are on the bottom and LV (Low Voltage) are on the top. 
\n\nRaspberry Pi pins only output 3.3v but the lights need 5v. That\u2019s why we need the logic level converter in there to boost up the signal.\nConnect the first two wires between the Raspberry Pi pins and the breadboard:\n\nNote that the pins on the Raspberry Pi are male, so you need a female to male jumper wire to connect between them and the breadboard. The colours don\u2019t have to match but it\u2019s easier to follow (and check) if you use the same ones as in the diagram. \n\nThen the next two:\n\nThis is what you should have so far:\n\nLights\nNow to connect the lights! My ones have a connector with three holes in it that I can push jumper wires into, and hopefully yours will too! So I used the male-to-male jumper wires to connect them to the breadboard.\n\n\n\nMake sure that you connect the right end of the lights, mine has a male connector at the wrong end so it\u2019s impossible to do this, but double check. \nAlso make sure that the holes in the light connector are the same as mine. To do this, follow the wires from the connector to the first light and look at the circuit board inside. You should just about be able to make out the connections labelled + (sometimes 5V, V+ or VCC), GND (or \u2018-\u2019 or G) and DI (sometimes DIN for data in).\n\nYou can just about make out the +, DI and GND on this picture. Note that on the other side of the board there is a DO for data out - that\u2019s what takes the data along to the chip in the next light. Make sure that you\u2019re plugging into the data-in and not the data-out! \nThat\u2019s it! Everything\u2019s plugged in and ready to go! But before you plug power into your Pi, double check all your wires and make sure they\u2019re exactly right! You could damage your Raspberry Pi if it is not wired correctly. So triple check!\n\nThe Moment of Truth!\nPlug in the Raspberry Pi and wait around a minute or two for it to boot up. If all is well, the lights should strobe rainbow colours for one second - that\u2019s your confirmation that it\u2019s connected to my WebSocket server and ready to receive messages from the upside-down! \n\nHowever, if the first light in the string is pulsing red, it means that you\u2019re not connected to the internet. So check the Troubleshooting section of the support document. If it\u2019s pulsing green then you\u2019re connected to the internet but can\u2019t connect to my server. It must have gone down. Sorry! The code will keep trying so leave it running and maybe it\u2019ll come back up. \nRig up the lights!\nFix the lights up on the wall however you want, pins, nails, tape. I\u2019ve used cable clips. Just be careful! I\u2019m using a 50 light string so I\u2019ve programmed it to use the lights at the end for the letters. That way I have just under half the string to extend down to the floor where I can keep the Raspberry Pi. \nCheck the photo here to see how the lights line up, note that there are spare unused lights in-between each row:\n\nNow visit lights.seb.ly and you\u2019ll see this : \n\nIf you\u2019re the only one online you\u2019ll have direct connection to the lights and any letter you click on will light up both in the browser and in real life. If there are other people there, you\u2019ll need to click the button to join the queue and wait your turn. \nHow it works - the geeky details!\nElectronics:\nThe pins on the Raspberry Pi are known as GPIO pins, general-purpose input/output. You can connect a wide variety of electronic components to them, LED lights, buttons, switches, and sensors. 
You can turn the power to the pins on and off using Node.js (or Python, if you prefer). \nAddressable LEDs or \u201cNeopixels\u201d\nWe\u2019re only using one GPIO pin on the Raspberry Pi (the other connections are 5V, 3.3V and ground) and that single pin is controlling all of the lights in the string. The code turns the pin on and off really fast in strictly timed morse-code-like dots and dashes to transmit binary data. The chips attached to each LED decode the binary and adjust the output to the LED accordingly. That chip then sends the data on to the next light in the string. \nThe chips on each light are the WS2811, part of the WS281x family that come in a multitude of different form factors and are often packaged with tiny LEDs in a single component. They are commonly referred to as Neopixels8 and I used them on my Laser Light Synths project.\nNeopixels with the chip and the LED all in one - it\u2019s the white square shaped component and the darker square inside is the chip. These are only 5mm wide!\nA Laser Light Synth! Covered with around 800 super bright neopixels!\nLogic Level Converter\nThe logic level converter is a really cheap and easy way to change the level from 3.3v to 5v and back again. You must be careful that you do not connect 5v into a GPIO pin or you will most likely damage the Raspberry Pi processor chip. \nPower\nNeopixels can often draw a lot of current so you need to be careful how you power them. I\u2019ve measured the current draw from the string to be less than 800mA so you should be fine wired directly to the 5V output. But if you use more lights or have them all on really bright at once, you\u2019ll need to use a separate 5V power supply. If you want to learn more, check out Adafruit\u2019s Neopixel Uberguide. \nNode.js\nThere are two Node.js apps running here, one on the Raspberry Pi and one on my server. You can see the code on my GitHub at github.com/sebleedelisle/stranger-lights for the Raspberry Pi and github.com/sebleedelisle/stranger-lights-server for the server. And they\u2019re hosted on npm as stranger-lights and stranger-lights-server. \nThe server side code sets up a standard web server to deliver the HTML for the web interface. It also sets up a WebSocket server that allows for real-time communication between the browser and the server. This server code also manages the queue and who is in control of the lights at any given time.\nWebSockets\nI\u2019m using the excellent Socket.io library to manage the WebSocket connection. Both the browser and the Raspberry Pi Node.js app connects to my WebSocket server. \nWhen you click on a letter in the browser, a message is sent to the server, which forwards it to the connected Raspberry Pi clients and also all the web browsers9. \nThe Raspberry Pi code\nThe Node.js app runs automatically on startup, and I made this happen by adding this to the /etc/rc.local file: \nnode /home/pi/strangerthings/client.js > /dev/null &\nAnything in the rc.local file gets executed when the Pi boots up and this line of code runs the Node.js app and routes its output to nowhere (ie /dev/null). The & means that it runs it in the background and doesn\u2019t hold up the boot process. \nWorking with the Raspberry Pi headless\nYou might know that when a computer has no screen or keyboard, you would refer to it as \u201crunning headless\u201d. So just like most web servers, you need to configure it over the network with ssh10. 
If you\u2019re on a mac you can find your Pi on the network through the name raspberrypi.local11, otherwise you\u2019ll need to find its IP address. There\u2019s more on the guide to Remote Access instructions on the Raspberry Pi website. And if you\u2019re very new to the terminal, I highly recommend this great online Linux command line tutorial.\nImprovements\nThis is quite an early experiment and I\u2019m sure I\u2019ll discover lots of optimisations over the next few weeks, especially if the server gets a proper hammering today! But there are a few things you can do. Obviously I\u2019ve just rigged up my lights with Post-it notes. It\u2019d be a lot nicer to get a paint brush and try to recreate the Winona-in-a-manic-state text style. \nWhere next?\nFinding quality resources about Node.js for electronics on the Pi can be somewhat hit and miss, but this is getting better all the time. Alternatively I am thinking about running some online courses, please let me know if that\u2019s something you\u2019d be interested in, or sign up to my mailing list at st4i.com. \nThere are many many more resources for the Raspberry Pi with Python (gpiozero is a good place to start), so if that language works for you, you\u2019ll be spoilt for choice! \nAlso take a look at Arduino - it\u2019s an incredibly popular platform for electronics and the internet is literally bursting with resources. \nI hope you enjoyed this little foray into the world of JavaScript electronics on the Raspberry Pi! If you get this working at home please let me know! Tweet me at @seb_ly. \n\n\n\n\nNot a particularly original idea, but I don\u2019t think I\u2019ve seen anyone do it quite like this before, ie using WebSockets, and Node.js on a Raspberry Pi. Other examples: Internet of Stranger Things, Strangerlights.com, and loads of examples on Instructables\u00a0\u21a9\ufe0e\n\n\nVideo guide to soldering pins on to a Pi Zero and further soldering advice from Adafruit\u00a0\u21a9\ufe0e\n\n\nSlower cards will work but performance may suffer\u00a0\u21a9\ufe0e\n\n\nOr \u00a35,000 in UK money. Sorry, Brexit joke :)\u00a0\u21a9\ufe0e\n\n\nYou will need a card reader on your computer - most micro SD cards come with an adaptor that fits standard SD slots. \u00a0\u21a9\ufe0e\n\n\nSSID and password should be all that you need but you can see all the config options on this wpa supplicant guide\u00a0\u21a9\ufe0e\n\n\nRaspberry Pi Zero will require the OTG to USB adaptor to attach the wifi dongle\u00a0\u21a9\ufe0e\n\n\nThanks to Adafruit who invented the term neopixels so we don\u2019t have to refer to them as WS281x any more!\u00a0\u21a9\ufe0e\n\n\nSo you can see other people sending messages in the browser\u00a0\u21a9\ufe0e\n\n\nssh is short for Secure Shell and is a way to connect to a remote computer and type in it just like you would in the terminal.\u00a0\u21a9\ufe0e\n\n\nYou can change this default hostname using raspi-config\u00a0\u21a9\ufe0e", "year": "2016", "author": "Seb Lee-Delisle", "author_slug": "sebleedelisle", "published": "2016-12-01T00:00:00+00:00", "url": "https://24ways.org/2016/internet-of-stranger-things/", "topic": "code"} {"rowid": 310, "title": "Fairytale of new Promise", "contents": "There are only four good Christmas songs.\nI know, yeah, JavaScript or whatever. We\u2019ll get to that in a minute, I promise.\nFirst\u2014and I cannot stress this enough\u2014 there are four good Christmas songs. 
You\u2019re free to disagree with me here, of course, but please try to understand that you will be wrong.\nThey don\u2019t all have the most safe-for-work titles; I can\u2019t list all of them here, but if you choose to let your fingers do the walkin\u2019 to your nearest search engine, I will say that one was released by the band FEAR way back in 1982 and one was on Run the Jewels\u2019 self-titled debut album. The lyrics are a hell of a lot worse than the titles, so maybe wait until you get home from work before you queue them up. Wear headphones, if you\u2019ve got thin walls.\nFor my money, though, the two I can reference by name are the top of that small heap: Tom Waits\u2019 Christmas Card from a Hooker in Minneapolis, and The Pogues\u2019 Fairytale of New York. The former once held the honor of being the only good Christmas song\u2014about which which I was also unequivocally correct, right up until I changed my mind. It\u2019s not the song up for discussion today, but feel free to familiarize yourself just the same\u2014I\u2019ll wait.\nFairytale of New York\u2014the top of the list\u2014starts out by hinting at some pretty standard holiday fare; dreams and cheer and whatnot. Typical seasonal stuff, so long as you ignore that the story seems to be recounted as a drunken flashback in a jail cell. You can probably make a few guesses at the underlying spirit of the song based on that framing: following a lucky break, our bright-eyed protagonists move to New York in search of fame and fortune, only to quickly descend into bad decisions, name-calling, and vaguely festive chaos.\nThis song speaks to me on a couple of levels, not the least of which is as a retelling of my day-to-day interactions with JavaScript. Each day\u2019s melody might vary a little bit, granted, but the lyrics almost always follow a pretty clear arc toward \u201cPARENTAL ADVISORY: EXPLICIT CONTENT.\u201d You might have heard a similar tune yourself; it goes a little somethin\u2019 like setTimeout(function() { console.log( \"this should be happening last\" ); }, 1000); . Callbacks are calling callbacks calling callbacks and something is happening somewhere, as the JavaScript interpreter plods through our code start-to-finish, line-by-line, step-by-step. If we need to take actions based on the results of something that could take its sweet time resolving, well, we\u2019d better fiddle with the order of things to make sure those actions don\u2019t happen too soon.\n\u201cBut I can see a better time,\u201d as the song says, \u201cwhen all our dreams come true.\u201d So, with that Pogues brand of holiday spirit squarely in mind\u2014by which I mean that your humble narrator is almost certainly drunk, and may be incarcerated at the time of publication\u2014gather \u2019round for a story of hope, of hardships, of semi-asynchronous JavaScript programming, and ultimately: of Promise unfulfilled.\nThe Main Thread\nJavaScript is single-minded, in a manner of speaking. Anything we tell the JavaScript runtime to do goes into a single-file queue; you\u2019ll see it referred to as the \u201cmain thread,\u201d or \u201cUI thread.\u201d That thread can be shared by a number of critical browser processes, like rendering and re-rendering parts of the page, and user interactions ranging from the simple\u2014say, highlighting text\u2014to the more complex\u2014interacting with form elements.\nIf that sounds a little scary to you, well, that\u2019s because it is. 
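To make that a little more concrete, here\u2019s a deliberately awful sketch (not something from a real codebase): while a long synchronous loop like this is running, clicks, scrolls and rendering updates are all stuck waiting in the same queue behind it.\nconsole.time( \"blocking the main thread\" );\n\n// Nothing else on the main thread can happen until this loop finishes\nfor ( var i = 0; i < 1000000000; i++ ) {\n // spin\n}\n\nconsole.timeEnd( \"blocking the main thread\" );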
The more complex our scripts, the more we\u2019re cramming into that single-file main thread, to be processed along with\u2014say\u2014some of our CSS animations. Too much JavaScript clogging up the main thread means a lot of user-facing performance jankiness. Getting away from that single thread is a big part of all the excitement around Web Workers, which allow us to offload entire scripts into their own dedicated background threads\u2014though not without limitations of their own. Outside of Web Workers, that everything-thread is the only game in town: scripts executed one thing at a time, functions calling functions calling functions, taking numbers and crowding up the same deli counter as a user\u2019s interactions\u2014which, in this already strained metaphor, would be ham, I guess?\nAsynchronous JavaScript\nNow, those queued actions may include asynchronous things. For example: AJAX callbacks, setTimeout/setInterval, and addEventListener won\u2019t block the main thread while we\u2019re waiting for a request to come back, a timer to tick away, or an event to trigger. Once those things do kick in, though, the actions they\u2019re meant to perform will get shuffled right back into that single-thread queue.\nThere are a couple of places you might have written asynchronously-fired JavaScript, even if you\u2019re not super familiar with the overarching concept: XMLHttpRequest\u2014\u201cAJAX,\u201d if ya nasty\u2014or just kicking off a function once a user triggers a click or mouseenter event. Event-driven development is writ a little larger, with the overall flow of the script dictated by events, both internal and external. Writing event-driven JavaScript applications is a step in the right direction for sure\u2014it won\u2019t cure what ails the main thread, but it does work with the medium in a reasonable way. Event-driven development allows us to manage our use of the main thread in a way that makes sense. If any of this rings a bell for you, the motivation for Promises should feel familiar.\nFor example, a custom init event might kick things off, and fire a create event that applies our classes and restructures our markup which, on completion, fires a bindEvents event to handle all the event listeners for user interaction. There might not sound like much difference between that and one big function that kicks off, manipulates the DOM, and binds our events line-by-line\u2014but in a script of sufficient size and complexity we\u2019re not only provided with a decoupled flow through the script, but obvious touchpoints for future updates and a predictable structure for ongoing maintenance. 
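As a rough sketch of that flow (using the init, create and bindEvents events described above, with made-up handler bodies and the newer CustomEvent constructor), the chain might look something like this:\nfunction create() {\n // ...apply our classes and restructure the markup...\n window.dispatchEvent( new CustomEvent( \"create\" ) );\n}\n\nfunction bindEvents() {\n // ...handle all the event listeners for user interaction...\n}\n\n// Each stage listens for the event fired by the stage before it\nwindow.addEventListener( \"init\", create );\nwindow.addEventListener( \"create\", bindEvents );\n\n// A single dispatch kicks everything off\nwindow.dispatchEvent( new CustomEvent( \"init\" ) );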
\nThis pattern falls apart a little where we were still creating, binding, and listening for events in the same top-to-bottom, one-item-at-a-time way\u2014we had to set a listener on a given object before the event fires, or nothing would happen:\n// Create the event:\nvar event = document.createEvent( \"Event\" );\n\n// Name the event:\nevent.initEvent( \"doTheStuff\", true, true );\n\n// Listen for the custom `doTheStuff` event on `window`:\nwindow.addEventListener( \"doTheStuff\", initializeEverything );\n\n// Fire the custom event\nwindow.dispatchEvent( event );\nThis example is a little contrived, and this stuff is a lot more manageable for sure with the addition of a framework, but that\u2019s the basic gist: create and name the event, add a listener for the event, and\u2014after setting our listener\u2014dispatch the event.\nEvents and callbacks aren\u2019t the only game in town for weaving our way in and out of the main thread, though\u2014at least, not anymore. \nPromises\nA Promise is, at the risk of sounding sentimental, pure potential\u2014an empty container into which a value eventually results. A Promise can exist in several states: \u201cpending,\u201d while the computation they contain is being performed or \u201cresolved\u201d once that computation is complete. Once resolved, a Promise is \u201cfulfilled\u201d if it gave us back something we expect, or \u201crejected\u201d if it didn\u2019t.\nThe Promise constructor accepts a callback with two arguments: resolve and reject. We perform an action\u2014asynchronous or otherwise\u2014within that callback. If everything in there has gone according to plan, we call resolve. If something has gone awry, we call reject\u2014with an error, conventionally. To illustrate, let\u2019s tack something together with a pretty decent chance of doing what we don\u2019t want: a promise meant only to give us the number 1, but has a chance of giving us back a 2. No reasonable person would ever do this, of course, but I wouldn\u2019t necessarily put it past me.\nvar promisedOne = new Promise( function( resolve, reject ) {\n var coinToss = Math.floor( Math.random() * 2 ) + 1;\n\n if( coinToss === 1 ) {\n resolve( coinToss );\n } else {\n reject( new Error( \"That ain\u2019t a one.\" ) );\n }\n});\nThere\u2019s nothing too surprising in there, after you boil it all down. It\u2019s a little return-y, with the exception that we\u2019re flagging results as \u201cas expected\u201d or \u201csomething went wrong.\u201d\nTapping into that Promise uses another new keyword: then\u2014and as someone who attempts to make sense of JavaScript by breaking it down to plain ol\u2019 human-language, I\u2019m a big fan of this syntax. then is tacked onto our Promise identifier, and does just what it says on the tin: once the Promise is resolved, then do one of two things, both supplied as callbacks: the first in the case of a fulfilled promise, and the second in the case of a rejected one. Those two callbacks will have, as arguments, the results we specified with resolve orreject, respectively. It sounds like a lot in prose, but in code it\u2019s a pretty simple pattern:\npromisedOne.then( function( result ) {\n console.log( result );\n}, function( error ) {\n console.error( error );\n});\nIf you\u2019ve spent any time working with AJAX\u2014jQuery-wise, in particular\u2014you\u2019ve seen something like this pattern before: a success callback and an error callback. 
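For comparison's sake, a hypothetical jQuery version of that same success-or-error shape (assuming jQuery is on the page, and with a made-up endpoint) looks like this:
$.ajax({
  url: "/api/presents",
  success: function( result ) {
    console.log( result );
  },
  error: function( request, status, error ) {
    console.error( error );
  }
});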
The state of a promise, once fulfilled or rejected, cannot be changed\u2014any reference we make to promisedOne will have a single, fixed result.\nIt may not look like too much the way I\u2019m using it here, but it\u2019s powerful stuff\u2014a pattern for asynchronously resolving anything. I\u2019ve recently used Promises alongside a script that emulates Font Load Events, to apply webfonts asynchronously and avoid a potential performance hit. Font Face Observer allows us to, as the name implies, determine when the files referenced by our @font-face rules have finished loading. \nvar fontObserver = new FontFaceObserver( \"Fancy Font\" );\n\nfontObserver.check().then(function() {\n document.documentElement.className += \" fonts-loaded\";\n}, function( error ) {\n console.error( error );\n});\nfontObserver.check() gives us back a Promise, allowing us to chain on a then containing our callbacks for success and failure. We use the fulfilled callback to bolt a class onto the page once the font file has been fully transferred. We don\u2019t bother including an argument in the first function, since we don\u2019t care about the result itself so much as we care that the promise resolved without error\u2014we\u2019re not doing anything with the resolved value, just adding a class to the page. We do include the error argument, since we\u2019ll want to know what happened should something go wrong.\nNow, this isn\u2019t the tidiest syntax around\u2014at least to my eyes\u2014with those two functions just kinda floating in a then. Luckily there\u2019s an similar alternative syntax; one that I find a bit easier to parse at-a-glance:\nfontObserver.check()\n .then(function() {\n document.documentElement.className += \" fonts-loaded\";\n })\n .catch(function( error ) {\n console.log( error );\n });\nThe first callback inside then provides us with our success state, while the catch provides us with a single, explicit \u201csomething went wrong\u201d callback. The two syntaxes aren\u2019t completely identical in all situations, but for a simple case like this, I find it a little neater.\nThe Common Thread\nI guess I still owe you an explanation, huh. Not about the JavaScript-whatever; I think I\u2019ve explained that plenty. No, I mean Fairytale of New York, and why it\u2019s perched up there at the top of the four (4) song heap.\nFairytale is a sad song, ostensibly. If you follow the main thread\u2014start to finish, line-by-line, step by step\u2014 Fairytale is a sad song. And I can see you out there, visions of Die Hard dancing in your heads: \u201cbut is it a Christmas song?\u201d\nWell, for my money, nothing says \u201cholidays\u201d quite like unreliable narration.\nShane MacGowan, the song\u2019s author, has placed the first verse about \u201cChristmas Eve in the drunk tank\u201d as happening right after the \u201clucky one, came in eighteen-to-one\u201d\u2014not at the chronological end of the story. That means the song might not be mostly drunken flashback, but all of it a single, overarching flashback including a Christmas Eve in protective custody. It could be that the man and woman are, together, recounting times long past\u2014good times and bad times\u2014maybe not even in chronological order. Hell, the \u201cNYPD Choir\u201d mentioned in the chorus? There\u2019s no such thing.\nWe\u2019re not big Christmas folks, my family and I. 
But just the same, every year, the handful of us get together, and every year\u2014like clockwork\u2014there\u2019s a lull in conversation, there\u2019s a sharp exhale, and Ma says \u201cwe all made it.\u201d Not to a house, not to a dinner, but through another year, to another Christmas. At this point, without fail, someone starts telling a story\u2014and one begets another, and so on. Sometimes the stories are happy, sometimes they\u2019re sad, more often than not they\u2019re both. Some are about things we were lucky to walk away from, some are about a time when another one of us didn\u2019t.\nStart-to-finish, line-by-line, step-by-step, the main thread through the year doesn\u2019t change, and maybe there isn\u2019t a whole lot we can do to change it. But by carefully weaving our way in and out of that thread\u2014stories all out of sync and resolving one way or the other, with the results determined by questionably reliable narrators\u2014we can change the way we interact with it and, little by little, we can start making sense of it.", "year": "2016", "author": "Mat Marquis", "author_slug": "matmarquis", "published": "2016-12-19T00:00:00+00:00", "url": "https://24ways.org/2016/fairytale-of-new-promise/", "topic": "code"} {"rowid": 298, "title": "First Steps in VR", "contents": "The web is all around us. As web folk, it is our responsibility to consider the impact our work can have. Part of this includes thinking about the future; the web changes lives and if we are building the web then we are the ones making decisions that affect people in every corner of the world. I find myself often torn between wanting to make the right decisions, and just wanting to have fun. To fiddle and play. We all know how important it is to sometimes just try ideas, whether they will amount to much or not. \nI think of these two mindsets as production and prototyping, though of course there are lots of overlap and phases in between. I mention this because virtual reality is currently seen as a toy for rich people, and in some ways at the moment it is. But with WebVR we are able to create interesting experiences with a relatively low entry point. I want us to have open minds, play around with things, and then see how we can use the tools we have at our disposal to make things that will help people.\nEvery year we see articles saying it will be the \u201cyear of virtual reality\u201d, that was especially prevalent this year. 2016 has been a year of progress, VR isn\u2019t quite mainstream but with efforts like Playstation VR and Google Cardboard, we are definitely seeing much more of it. This year also saw the consumer editions of the Oculus Rift and HTC Vive. So it does seem to be a good time for an overview of how to get involved with creating virtual reality on the web.\nWebVR is an API for connecting to devices and retrieving continuous data such as the position and orientation. Unlike the Web Audio API and some other APIs, WebVR does not feel like a framework. You use it however you want, taking the data and using it as you wish. To make it easier, there are plenty of resources such as Three.js, A-Frame and ReactVR that help to make the heavy lifting a bit easier.\nGetting Started with A-Frame\nI like taking the opportunity to learn new things whenever I can. So while planning this article I thought that instead of trying to teach WebGL or even Three.js in a way that is approachable for all, I would create my first project using A-Frame and write about that. 
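For a sense of what A-Frame is saving us from, here is a rough sketch of the raw WebVR API (written against the 1.1 draft; browser support details vary): it boils down to asking for connected displays and reading their pose data.
if ( navigator.getVRDisplays ) {
  navigator.getVRDisplays().then( function( displays ) {
    if ( !displays.length ) { return; }
    var frameData = new VRFrameData();
    displays[ 0 ].getFrameData( frameData );
    // Continuous position and orientation data, straight from the headset:
    console.log( frameData.pose.position, frameData.pose.orientation );
  });
}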
This is not a tutorial as such, I just want to show how to go about getting involved with VR. The beauty of A-Frame is that it is very similar to web components, you can just write HTML to build worlds that will automatically work on all the different types of devices. It uses WebGL and WebVR but in such a way that it quite drastically reduces the learning curve. That\u2019s not to say you can\u2019t build complex things, you have complete access to write JavaScript and shaders.\nI\u2019m lazy. Whenever I learn a new language or framework I have found that the best way, personally, for me to learn is to have a project and to copy the starting code from someone else. A project lets you have a good idea of what you want to produce and it means you can ignore a lot of the irrelevant documentation, focussing purely on what you need. That reduces the stress of figuring things out. Copying code also makes it easier, because you know your boilerplate code is working. There\u2019s nothing worse than getting stuck before anything actually works the first time. So I tinker. I take code and I modify it, I play around. It\u2019s fun.\nFor this project I wanted to keep things as simple as possible, so I can easily explain it without the classic \u201cdraw a circle then draw an owl\u201d. I wrote a list of requirements, with some stretch goals that you can give a try yourself if you fancy:\n\nMust work on Google Cardboard at a minimum, because of price\nTherefore, it must not rely on having a controller\nAuto-moving around a maze would be a good example\nMove in direction you look\nStretch goal: Scoring, time until you hit a wall or get stuck in maze\nStretch goal: Levels, so the map doesn\u2019t need to be random\nStretch goal: Snow!\n\nI decided to base this project on an example, Platforms, by Don McCurdy who wrote the really useful aframe-extras. Platforms has random 3D blocks that you can jump onto, going up into the sky. So I took his code and reduced it so that the blocks are randomly spread on the ground. \n\n\n\n \n \n 24 ways\n \n \n\n\n\n \n \n\n \n\n \n\n \n\n \n \n\n\n\n\nAs you can see, this is very readable. Especially if you ignore the JavaScript that is used to create the maze. A-Frame (with A-Frame Extras) gives you a lot of power with relatively little to learn. We start with an which is the container for everything that is going to show up on the screen. There are a few which can be compared to
as they are essentially non-semantic containers, able to be used for any purpose. The attributes are used to define functionality, for example the camera attribute sets the entity to function as a camera and kinematic-body makes it collide instead of go through objects. Attributes are also used to set position and sizes, often using JavaScript to dynamically define them.\nStyling\nNow we\u2019ve got the HTML written, we need to style it. To do this we add A-Frame compatible attributes such as color and material. I recommend playing around, you can get some quite impressive effects fairly easily. Originally I wanted a light snowy maze but it ended up being dark and foggy, as I really liked the feeling it gave.\nNote, you will probably need a server running for images to work. You can do this by running python -m \"SimpleHTTPServer\" in the folder where the code is, then go to localhost:8000 in browser.\nTextures\nUnless you are going for a cartoony style, you probably want to find some textures. I found some on textures.com, one image worked well for the walls and the other for the floor.\n\n \n \n\nThe is used to define (as well as preload and cache) all assets, including images, audio and video. As you can see, images in the Asset Management System just use normal img tags. The ids are important here as we can use them later for using the textures. \nTo apply a texture to an object, you create a material. For a simple material where it just shows the image, you set the src to the id selector of the image.\nReplace: \n\nWith:\n\nThis will automatically make the image repeat over the entire floor, in my case filling it with bricks. The walls are pretty much identical, with the slight exception that it is set in JavaScript as they are dynamically defined.\nbox.setAttribute('material', 'src: #texture-wall');\nThat\u2019s it for the textures, for now at least. These will not look completely realistic, as the light will bump off the rectangular wall rather than texture itself. This can be improved by using maps, textures that are used to modify the shape and physical properties of the object. \nLighting\nThe next part of styling is lighting. By using fog and different types of lighting, we are able to add atmospheric details to the game to make it feel that bit more realistic and polished.\nThere are lots of types of light in A-Frame (most coming from Three.js). You can add a light either by using the entity or by attaching a light attribute to any other entity. If there are no lights defined then A-Frame adds some by default so that the scene is always lit.\nTo start with I wanted to light up the scene with a general light, type=\"ambient\", so that the whole game felt slightly dark. I chose to set the light to a reddish colour #92455E. After playing around with intensity I chose 0.4, it added enough light to get the feeling I wanted without it being overly red. I also added a blue skybox (), as it looked a bit odd with a white sky.\n\n\nI felt that the maze looked good with a red tinge but it was a bit flat, everything was the same colour and it was a bit dark. So I added a light within the #player entity, this could have been as an attribute but I set it as a child a-light instead. By using type=\"point\" with a high intensity and low distance, it showed close walls as being lighter. 
It also added a sort-of object to the player, it isn\u2019t a walking human or anything but by moving light where the player is it feels a bit more physical.\n\n\nBy this point it was starting to look decent, so I wanted to add the fog to really give some personality and depth to the maze. To do this I added the fog attribute to the with type=exponential so it looks thicker the further away it is and a mid intensity, so you feel a bit lost but can still see.\n\nI was very happy with this result. It took a lot of playing around with colours and values, which is fun in itself. I highly recommend you take the code (or write your own) and play around with the numbers.\nMovement\nOne of the reasons I decided to use aframe-extras is that it has a few different camera controls built in. As you saw earlier, I am using the universal-controls which gives WASD (keyboard) controls by default. I wanted to make it automatically move in the direction that you\u2019re looking, but I wasn\u2019t quite sure how without rewriting the controls. So I asked Don McCurdy for advice and he very nicely gave me a small snippet of code to get it working.\nAFRAME.registerComponent('automove-controls', {\n init: function () {\n this.speed = 0.1;\n this.isMoving = true;\n this.velocityDelta = new THREE.Vector3();\n },\n isVelocityActive: function () {\n return this.isMoving;\n },\n getVelocityDelta: function () {\n this.velocityDelta.z = this.isMoving ? -speed : 0;\n return this.velocityDelta.clone();\n }\n});\nReplace:\nuniversal-controls\nWith:\nuniversal-controls=\"movementControls: automove, gamepad, keyboard\"\nThis works by creating a component automove-controls that adds auto-move to the player without overriding movement completely. It doesn\u2019t even touch direction, it just checks if isMoving is true then moves the player by the set speed. Components can be creating for adding all kinds of functionality with relative ease. It makes it very powerful for people of all difficulty levels.\nBuilding a map\nCurrently the maze is created randomly, which is great but means there will often be walls that overlap or the player gets trapped with nowhere to go. So to solve this, I decided to use a map editor (Tiled) so that we can create the mazes ourselves. This is a great start towards one of the stretch goals, levels.\nI made the maze in Tiled by finding a random tileset online (we don\u2019t need to actually show the images), I used one tile for the wall and another for the player. Then I exported as a JavaScript file and modified it in my text editor to get rid of everything I didn\u2019t need. I made it so 0 is the path, 1 is the wall and 2 is the player. I then added the script to the HTML, as a separate file so it\u2019s easy to update in the future. \nvar map =\n{\n \"data\":[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\n \"height\":10,\n \"width\":10\n}\n\nAs you can see, this gives a simple 10x10 maze with some dead ends. The player starts in the bottom right corner (my choice, could be anywhere). I rewrote the random platforms code (from Don\u2019s example) to instead loop over the map data and place walls where it is 1 and position the player where data is 2. I set the position so that the origin of the map would be 0,1.5,0. 
The y axis is in this case the height (ground being 0), but if a wall is positioned at 0 by its centre then some of it is underground. So the y needed to be the height divided by 2.\ndocument.querySelector('a-scene').addEventListener('render-target-loaded', function () {\n var WALL_SIZE = 5,\n WALL_HEIGHT = 3;\n var el = document.querySelector('#walls');\n var wall;\n\n for (var x = 0; x < map.height; x++) {\n for (var y = 0; y < map.width; y++) {\n\n var i = y*map.width + x;\n var position = (x-map.width/2)*WALL_SIZE + ' ' + 1.5 + ' ' + (y-map.height/2)*WALL_SIZE;\n if (map.data[i] === 1) {\n // Create wall\n wall = document.createElement('a-box');\n el.appendChild(wall);\n wall.setAttribute('color', '#fff');\n wall.setAttribute('material', 'src: #texture-wall;');\n wall.setAttribute('width', WALL_SIZE);\n wall.setAttribute('height', WALL_HEIGHT);\n wall.setAttribute('depth', WALL_SIZE);\n wall.setAttribute('position', position);\n wall.setAttribute('static-body', ');\n }\n\n if (map.data[i] === 2) {\n // Set player position\n document.querySelector('#player').setAttribute('position', position);\n }\n\n }\n }\n console.info('Walls added.');\n});\n\nWith this added, it makes it nice and easy to change around the map as well as to add new features. Perhaps you want monsters or objects. Just set the number in the map data and add an if statement to the loop. In the future you could add layers, so multiple things can be in the same position. Or perhaps even make the maze go up the y axis too, with ramps or staircases. There\u2019s a lot you can do with relative ease. As you can see, A-Frame really does reduce the learning curve of 3D and VR on the web.\nIt\u2019s Not All Fun And Games\nA lot of examples of virtual reality are games, including this one. So it is understandable to think that VR is for gaming, but actually that\u2019s just a tiny subset. There are all sorts of applications for VR, including story telling, data visualisation and even meditation.\nThere have been a number of cases where it has been shown virtual reality can help as a tool for therapies:\n\nOxford study finds virtual reality can help treat severe paranoia\nVirtual Reality Therapy for Phobias at the Duke Faculty Practice\nBravemind: Virtual Reality Exposure Therapy at the University of Southern California\n\nThese are just a few examples of where virtual reality is being used around the world to help people feel better and get through some very tough times. There have also been examples of it being used for simulating war zones or medical situations, both as a teaching and journalism tool.\nWrapping Up\nTen years ago, on this very site, Cameron Moll wrote an article explaining the mobile web. He explained how mobile phones with data plans were becoming increasingly common, that WAP 2.0 included the XHTML Mobile Profile meaning it would be familiar with web folk. \u201cThe mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing \u201cdesktop web\u201d skills to understand how to develop content for it.\u201d\nWe can look at that and laugh a little, we have come a very long way in the last decade. Even people in developing countries with very little money have mobile phones with access to a web that is far more capable than the \u201cdesktop web\u201d Cameron was referring to.\nSo while I am not saying virtual reality is going to change the world or replace our phones, who knows! We can use our skills as web folk to dabble, we don\u2019t need to learn any new languages. 
If on the 2026 edition of 24 ways, somebody references this article and looks at how far we have come\u2026 well, let\u2019s hope we have used our skills well and made the world just that little bit better. And if VR is a fad? Well it\u2019s fun\u2026 have a go anyway.", "year": "2016", "author": "Shane Hudson", "author_slug": "shanehudson", "published": "2016-12-11T00:00:00+00:00", "url": "https://24ways.org/2016/first-steps-in-vr/", "topic": "code"} {"rowid": 289, "title": "Front-End Developers Are Information Architects Too", "contents": "The theme of this year\u2019s World IA Day was \u201cInformation Everywhere, Architects Everywhere\u201d. This article isn\u2019t about what you may consider an information architect to be: someone in the user-experience field, who maybe studied library science, and who talks about taxonomies. This is about a realisation I had a couple of years ago when I started to run an increasing amount of usability-testing sessions with people who have disabilities: that the structure, labelling, and connections that can be made in front-end code is information architecture. People\u2019s ability to be successful online is unequivocally connected to the quality of the code that is written.\nPlaces made of information\nIn information architecture we talk about creating places made of information. These places are made of ones and zeros, but we talk about them as physical structures. We talk about going onto a social media platform, posting in blogs, getting locked out of an environment, and building applications. In 2002, Andrew Hinton stated:\n\nPeople live and work in these structures, just as they live and work in their homes, offices, factories and malls. These places are not virtual: they are as real as our own minds.\n25 Theses\n\nWe\u2019re creating structures which people rely on for significant parts of their lives, so it\u2019s critical that we carry out our work responsibly. This means we must use our construction materials correctly. Luckily, our most important material, HTML, has a well-documented specification which tells us how to build robust and accessible places. What is most important, I believe, is to understand the semantics of HTML.\nSemantics\nThe word \u201csemantic\u201d has its origin in Greek words meaning \u201csignificant\u201d, \u201csignify\u201d, and \u201csign\u201d. In the physical world, a structure can have semantic qualities that tell us something about it. For example, the stunning Westminster Abbey inspires awe and signifies much about the intent and purpose of the structure. The building\u2019s size; the quality of the stone work; the massive, detailed stained glass: these are all signs that this is a building meant for something the creators deemed important. Alternatively consider a set of large, clean, well-positioned, well-lit doors on the ground floor of an office block: they don\u2019t need an \u201centrance\u201d sign to communicate their use and to stop people trying to use a nearby fire exit to get into the building. The design of the doors signify their usage. Sometimes a more literal and less awe-inspiring approach to communicating a building\u2019s purpose happens, but the affect is similar: the building is signifying something about its purpose.\nHTML has over 115 elements, many of which have semantics to signify structure and affordance to people, browsers, and assistive technology. 
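As a small, hand-rolled illustration (the markup below is mine, and the styling is beside the point): both snippets could be made to look identical on screen, but only the first tells browsers and assistive technology that these items form an ordered list.
<ol>
  <li>Check the schedule</li>
  <li>Make the tea</li>
  <li>Settle in for the afternoon film</li>
</ol>

<div class="list">
  <div class="list-item">Check the schedule</div>
  <div class="list-item">Make the tea</div>
  <div class="list-item">Settle in for the afternoon film</div>
</div>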
The HTML 5.1 specification mentions semantics, stating:\n\nElements, attributes, and attribute values in HTML are defined \u2026 to have certain meanings (semantics). For example, the
ol element represents an ordered list, and the lang attribute represents the language of the content.\nHTML 5.1 Semantics, structure, and APIs of HTML documents\n\nHTML\u2019s baked-in semantics means that developers can architect their code to signify structure, create relationships between elements, and label content so people can understand what they\u2019re interacting with. Structuring and labelling information to make it available, usable, and understandable to people is what an information architect does. It\u2019s also what a front-end developer does, whether they realise it or not.\nA brief introduction to information architecture\nWe\u2019re going to start by looking at what an information architect is. There are many definitions, and I\u2019m going to quote Richard Saul Wurman, who is widely regarded as the father of information architecture. In 1976 he said an information architect is:\n\nthe individual who organizes the patterns inherent in data, making the complex clear; a person who creates the structure or map of information which allows others to find their personal paths to knowledge; the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding, and the science of the organization of information.\nOf Patterns And Structures\n\nTo me, this clearly defines any developer who creates code that a browser, or other user agent (for example, a screen reader), uses to create a structured, navigable place for people.\nJust as there are many definitions of what an information architect is, there are many for information architecture itself. I\u2019m going to use the definition from the fourth edition of Information Architecture For The World Wide Web, in which the authors define it as:\nThe structural design of shared information environments.\nThe synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems.\nThe art and science of shaping information products and experiences to support usability, findability, and understanding.\nInformation Architecture For The World Wide Web, 4th Edition\nTo me, this describes front-end development. Done properly, there is an art to creating robust, accessible, usable, and findable spaces that delight all our users. For example, at 2015\u2019s State Of The Browser conference, Edd Sowden talked about the accessibility of <table>s. He discovered that by simply not using the semantically-correct
<th> element to mark up headings, in some situations browsers will decide that a <table>
    is being used for layout and essentially make it invisible to assistive technology. Another example of how coding practices can affect the usability and findability of content is shown by L\u00e9onie Watson in her How ARIA landmark roles help screen reader users video. By using ARIA landmark roles, people who use screen readers are quickly able to identify and jump to common parts of a web page.\nOur definitions of information architects and information architecture mention patterns, rules, organisation, labelling, structure, and relationships. There are numerous different models for how these elements get boiled down to their fundamentals. In his Understanding Context book, Andrew Hinton calls them Labels, Relationships, and Rules; Jorge Arango calls them Links, Nodes, And Order; and Dan Klyn uses Ontology, Taxonomy, and Choreography, which is the one we\u2019re going to use. Dan defines these terms as:\nOntology\nThe definition and articulation of the rules and patterns that govern the meaning of what we intend to communicate.\nWhat we mean when we say what we say.\nTaxonomy\nThe arrangements of the parts. Developing systems and structures for what everything\u2019s called, where everything\u2019s sorted, and the relationships between labels and categories\nChoreography\nRules for interaction among the parts. The structures it creates foster specific types of movement and interaction; anticipating the way users and information want to flow and making affordance for change over time.\n\nWe now have definitions of an information architect, information architecture, and a model of the elements of information architecture. But is writing HTML really creating information or is it just wrangling data and metadata? When does data turn into information? In his book Managing For The Future Peter Drucker states:\n\n\u2026 data is not information. Information is data endowed with relevance and purpose.\nManaging For The Future\n\nIf we use the correct semantic element to mark up content then we\u2019re developing with purpose and creating relevance. For example, if we follow the advice of the HTML 5.1 specification and mark up headings using heading rank instead of the outline algorithm, we\u2019re creating a structure where the depth of one heading is relevant to the previous one. Architected correctly, an

<h2> element should be relevant to its parent, which should be the <h1>
    . By following the HTML specification we can create a structured, searchable, labeled document that will hopefully be relevant to what our users need to be successful. If you\u2019ve never used a screen reader, you might be wondering how the headings on a page are searchable. Screen readers give users the ability to interact with headings in a couple of ways:\n\nby creating a list of headings so users can quickly scan the page for information\nby using a keyboard command to cycle through one heading at a time\n\nIf we had a document for Christmas Day TV we might structure it something like this:\n

<h1>Christmas Day TV schedule</h1>
  <h2>BBC1</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>
  <h2>BBC2</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>
  <h2>ITV</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>
  <h2>Channel 4</h2>
    <h3>Morning</h3>
    <h3>Evening</h3>
    \nIf I use VoiceOver to generate a list of headings, I get this:\n\nOnce I have that list I can use keyboard commands to filter the list based on the heading level. For example, I can press 2 to hear just the

<h2>s:\n\nIf we hadn\u2019t used headings, or if we\u2019d nested them incorrectly, our users would be frustrated.\nPutting this together\nLet\u2019s put this together with an example of a button that, when pressed, toggles the appearance of a panel of links. There are numerous ways we could create a button on a web page, but the best way is to just use a <button>:
<!-- Illustrative markup: a button wired, via aria-controls, to the panel of links it toggles -->
<button aria-controls="links-panel" aria-expanded="false">Show links</button>

<div id="links-panel" hidden>
  <ul>
    <li><a href="/example-one">Example link one</a></li>
    <li><a href="/example-two">Example link two</a></li>
  </ul>
</div>
    \nThere\u2019s quite a bit going on here. We\u2019re using the:\n\naria-controls attribute to architect a connection between the