rowid,title,contents,year,author,author_slug,published,url,topic 300,Taking Device Orientation for a Spin,"When The Police sang “Don’t Stand So Close To Me” they weren’t talking about using a smartphone to view a panoramic image on Facebook, but they could have been. For years, technology has driven relentlessly towards devices we can carry around in our pockets, and now that we’re there, we’re expected to take the thing out of our pocket and wave it around in front of our faces like a psychotic donkey in search of its own dangly carrot. But if you can’t beat them, join them. A brave new world A couple of years back all sorts of specs for new HTML5 APIs sprang up much to our collective glee. Emboldened, we ran a few tests and found they basically didn’t work in anything and went off disheartened into the corner for a bit of a sob. Turns out, while we were all busy boohooing, those browser boffins have actually been doing some work, and lo and behold, some of these APIs are even half usable. Mostly literally half usable—we’re still talking about browsers, after all. Now, of course they’re all a bit JavaScripty and are going to involve complex methods and maths and science and probably about a thousand dependencies from GitHub that will fall out of fashion while we’re still trying to locate the documentation, right? Well, no! So what if we actually wanted to use one of these APIs, say to impress our friends with our ability to make them wave their phones in front of their faces (because no one enjoys looking hapless more than the easily-technologically-impressed), how could we do something like that? Let’s find out. The Device Orientation API The phone-wavy API is more formally known as the DeviceOrientation Event Specification. It does a bunch of stuff that basically doesn’t work, but also gives us three values that represent the orientation of a device (a phone, a tablet, probably not a desktop computer) around its x, y and z axes. You might think of it as pitch, roll and yaw if you like to spend your weekends wearing goggles and a leather hat. The main way we access these values is through an event listener, which can inform our code every time the value changes. Which is constantly, because you try and hold a phone still and then try and hold the Earth still too. The API calls those pitch, roll and yaw values alpha, beta and gamma. Chocks away: window.addEventListener('deviceorientation', function(e) { console.log(e.alpha); console.log(e.beta); console.log(e.gamma); }); If you look at this test page on your phone, you should be able to see the numbers change as you twirl the thing around your body like the dance partner you never had. Wrist strap recommended. One important note Like many of these newfangled APIs, Device Orientation is only available over HTTPS. We’re not allowed to have too much fun without protection, so make sure that you’re working on a secure line. I’ve found a quick and easy way to share my local dev environment over TLS with my devices is to use an ngrok tunnel. ngrok http -host-header=rewrite mylocaldevsite.dev:80 ngrok will then set up a tunnel to your dev site with both HTTP and HTTPS URL options. You, of course, want the HTTPS option. Right, where were we? Make something to look at It’s all well and good having a bunch of numbers, but they’re no use unless we do something with them. Something creative. Something to inspire the generations.
Or we could just build that Facebook panoramic image viewer thing (because most of us are familiar with it and we’re not trying to be too clever here). Yeah, let’s just build one of those. Our basic framework is going to be similar to that used for an image carousel. We have a container, constrained in size, and its CSS overflow property set to hidden. Into this we place our wide content and use positioning to move the content back and forth behind the ‘window’ so that the part we want to show is visible. Here it is mocked up with a slider to set the position. When you release the slider, the position updates. (This actually tests best on desktop with your window slightly narrowed.) The details of the slider aren’t important (we’re about to replace it with phone-wavy goodness) but the crucial part is that moving the slider results in a function call to position the image. This takes a percentage value (0-100) with 0 being far left and 100 being far right (or ‘alt-nazi’ or whatever). var position_image = function(percent) { var pos = (img_W / 100)*percent; img.style.transform = 'translate(-'+pos+'px)'; }; All this does is figure out what that percentage means in terms of the image width, and set the transform: translate(…); CSS property to move the image. (We use translate because it might be a bit faster to animate than left/right positioning.) Ok. We can now read the orientation values from our device, and we can programmatically position the image. What we need to do is figure out how to convert those raw orientation values into a nice tidy percentage to pass to our function and we’re done. (We’re so not done.) The maths bit If we go back to our raw values test page and make-believe that we have a fascinating panoramic image of some far-off beach or historic monument to look at, you’ll note that the main value that is changing as we swing back and forth is the ‘alpha’ value. That’s the one we want to track. As our goal here is hey, these APIs are interesting and fun and not let’s build the world’s best panoramic image viewer, we’ll start by making a few assumptions and simplifications: When the image loads, we’ll centre the image and take the current nose-forward orientation reading as the middle. Moving left, we’ll track to the left of the image (lower percentage). Moving right, we’ll track to the right (higher percentage). If the user spins round, does cartwheels or loads the page then hops on a plane and switches earthly hemispheres, they’re on their own. Nose-forward When the page loads, the initial value of alpha gives us our nose-forward position. In Safari on iOS, this is normalised to always be 0, whereas most everywhere else it tends to be bound to pointy-uppy north. That doesn’t really matter to us, as we don’t know which direction the user might be facing in anyway — we just need to record that initial state and then use it to compare any new readings. var initial_position = null; window.addEventListener('deviceorientation', function(e) { if (initial_position === null) { initial_position = Math.floor(e.alpha); }; var current_position = initial_position - Math.floor(e.alpha); }); (I’m rounding down the values with Math.floor() to make debugging easier - we’ll take out the rounding later.) We get our initial position if it’s not yet been set, and then calculate the current position as a difference between the new value and the stored one.
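(As an aside: position_image() above leans on an img element and an img_W value that the demo sets up elsewhere. A minimal sketch of that setup might look something like the following; the .panorama selector and the overflow-based width are my assumptions rather than anything from the original demo, so adjust them to match your own markup.)
// Hypothetical setup for the variables the demo assumes: the clipped
// container, the wide image inside it, and how far that image can
// travel (the amount by which it overflows its container).
var container = document.querySelector('.panorama');
var img = container.querySelector('img');
var img_W = img.offsetWidth - container.offsetWidth;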
These values are weird One thing you need to know about these values is that they range from 0 to 360 but then you also get weird left-of-zero values like -2 and whatever. And they wrap past 360 back to zero as you’d expect if you do a forward roll. What I’m interested in is working out my rotation. If 0 is my nose-forward position, I want a positive value as I turn right, and a negative value as I turn left. That puts the awkward 360-tipping point right behind the user where they can’t see it. var rotation = current_position; if (current_position > 180) rotation = current_position-360; Which way up? Since we’re talking about orientation, we need to remember that the values are going to be different if the device is held in portrait or landscape mode. See for yourself - wiggle it like a steering wheel and you get different values. That’s easy to account for when you know which way up the device is, but in true browser style, the API for that bit isn’t well supported. The best I can come up with is: var screen_portrait = false; if (window.innerWidth < window.innerHeight) { screen_portrait = true; } It works. Then we can use screen_portrait to branch our code: if (screen_portrait) { if (current_position > 180) rotation = current_position-360; } else { if (current_position < -180) rotation = 360+current_position; } Here’s the code in action so you can see the values for yourself. If you change screen orientation you’ll need to refresh the page (it’s a demo!). Limiting rotation Now, while the youth of today are rarely seen without a phone in their hands, it would still be unreasonable to ask them to spin through 360° to view a photo. Instead, we need to limit the range of movement to something like 60°-from-nose in either direction and normalise our values to pan the entire image across that 120° range. -60 would be full-left (0%) and 60 would be full-right (100%). If we set max_rotation = 60, that code ends up looking like this: if (rotation > max_rotation) rotation = max_rotation; if (rotation < (0-max_rotation)) rotation = 0-max_rotation; var percent = Math.floor(((rotation + max_rotation)/(max_rotation*2))*100); We should now be able to get a rotation from -60° to +60° expressed as a percentage. Try it for yourself. The big reveal All that’s left to do is pass that percentage to our image positioning function and would you believe it, it might actually work. position_image(percent); You can see the final result and take it for a spin. Literally. So what have we made here? Have we built some highly technical panoramic image viewer to aid surgeons during life-saving operations using only JavaScript and some slightly questionable mathematics? No, my friends, we have not. Far from it. What we have made is progress. We’ve taken a relatively newly available hardware API and a bit of simple JavaScript and paired it with existing CSS knowledge and made something that we didn’t have this morning. Something we probably didn’t even want this morning. Something that if you take a couple of steps back and squint a bit might be a prototype for something vaguely interesting. But more importantly, we’ve learned that our browsers are just a little bit more capable than we thought. The web platform is maturing rapidly. There are new, relatively unexplored APIs for doing all sorts of crazy things that are often dismissed as the preserve of native apps. Like some sort of app marmalade. Poppycock. The web is an amazing, exciting place to create things.
All it takes is some base knowledge of the fundamentals, a creative mind and a willingness to learn. We have those! So let’s create things.",2016,Drew McLellan,drewmclellan,2016-12-24T00:00:00+00:00,https://24ways.org/2016/taking-device-orientation-for-a-spin/,code 305,CSS Writing Modes,"Since you may not have a lot of time, I’m going to start at the end, with the dessert. You can use a little-known, yet important and powerful CSS property to make text run vertically. Like this. Or instead of running text vertically, you can lay out a set of icons or interface buttons in this way. Or, of course, with anything on your page. The CSS I’ve applied makes the browser rethink the orientation of the world, and flow the layout of this element at a 90° angle to “normal”. Check out the live demo, highlight the headline, and see how the cursor is now sideways. See the Pen Writing Mode Demo — Headline by Jen Simmons (@jensimmons) on CodePen. The code for accomplishing this is pretty simple. h1 { writing-mode: vertical-rl; } That’s all it takes to switch the writing mode from the web’s default horizontal top-to-bottom mode to a vertical right-to-left mode. If you apply such code to the html element, the entire page is switched, affecting the scroll direction, too. In my example above, I’m telling the browser that only the h1 will be in this vertical-rl mode, while the rest of my page stays in the default of horizontal-tb. So now the dessert course is over. Let me serve up this whole meal, and explain the CSS Writing Mode Specification. Why learn about writing modes? There are three reasons I’m teaching writing modes to everyone—including western audiences—and explaining the whole system, instead of quickly showing you a simple trick. We live in a big, diverse world, and learning about other languages is fascinating. Many of you lay out pages in languages like Chinese, Japanese and Korean. Or you might be inspired to in the future. Using writing-mode to turn bits sideways is cool. This CSS can be used in all kinds of creative ways, even if you are working only in English. Most importantly, I’ve found understanding Writing Modes incredibly helpful when understanding Flexbox and CSS Grid. Before I learned Writing Mode, I felt like there was still a big hole in my knowledge, something I just didn’t get about why Grid and Flexbox work the way they do. Once I wrapped my head around Writing Modes, Grid and Flexbox got a lot easier. Suddenly the Alignment properties, align-* and justify-*, made sense. Whether you know about it or not, the writing mode is the first building block of every layout we create. You can do what we’ve been doing for 25 years – and leave your page set to the default left-to-right direction, horizontal top-to-bottom writing mode. Or you can enter a world of new possibilities where content flows in other directions. CSS properties I’m going to focus on the CSS writing-mode property in this article. It has five possible options: writing-mode: horizontal-tb; writing-mode: vertical-rl; writing-mode: vertical-lr; writing-mode: sideways-rl; writing-mode: sideways-lr; The CSS Writing Modes Specification is designed to support a wide range of written languages in all our human and linguistic complexity. Which—spoiler alert—is pretty insanely complex. The global evolution of written languages has been anything but simple. So I’ve got to start with explaining some basic concepts of web page layout and writing systems. Then I can show you what these CSS properties do.
Inline Direction, Block Direction, and Character Direction In the world of the web, there’s a concept of ‘block’ and ‘inline’ layout. If you’ve ever written display: block or display: inline, you’ve leaned on these concepts. In the default writing mode, blocks stack vertically starting at the top of the page and working their way down. Think of how a bunch of block-level elements stack—like a bunch of paragraphs—that’s the block direction. Inline is how each line of text flows. The default on the web is from left to right, in horizontal lines. Imagine this text that you are reading right now, being typed out one character at a time on a typewriter. That’s the inline direction. The character direction is which way the characters point. If you type a capital “A” for instance, on which side is the top of the letter? Different languages can point in different directions. Most languages have their characters pointing towards the top of the page, but not all. Put all three together, and you start to see how they work as a system. The default settings for the web work like this. Now that we know what block, inline, and character directions mean, let’s see how they are used in different writing systems from around the world. The four writing systems of CSS Writing Modes The CSS Writing Modes Specification handles all the use cases for four major writing systems: Latin, Arabic, Han and Mongolian. Latin-based systems One writing system dominates the world more than any other, reportedly covering about 70% of the world’s population. The text is horizontal, running from left to right, or LTR. The block direction runs from top to bottom. It’s called the Latin-based system because it includes all languages that use the Latin alphabet, including English, Spanish, German, French, and many others. But there are many non-Latin-alphabet languages that also use this system, including Greek, Cyrillic (Russian, Ukrainian, Bulgarian, Serbian, etc.), and Brahmic scripts (Devanagari, Thai, Tibetan), and many more. You don’t need to do anything in your CSS to trigger this mode. This is the default. Best practices, however, dictate that you declare in your opening element which language and which direction (LTR or RTL) you are using. This website, for instance, uses <html lang=""en-gb"" dir=""ltr""> to let the browser know this content is published in Great Britain’s version of English, in a left to right direction. Arabic-based systems Arabic, Hebrew and a few other languages run the inline direction from right to left. This is commonly known as RTL. Note that the inline direction still runs horizontally. The block direction runs from top to bottom. And the characters are upright. It’s not just the flow of text that runs from right to left, but everything about the layout of the website. The upper right-hand corner is the starting position. Important things are on the right. The eyes travel from right to left. So, typically RTL websites use layouts that are just like LTR websites, only flipped. On websites that support both LTR and RTL, like the United Nations’ site at un.org, the two layouts are mirror images of each other. For many web developers, our experiences with internationalization have focused solely on supporting Arabic and Hebrew script. CSS layout hacks for internationalization & RTL To prepare an LTR project to support RTL, developers have had to create all sorts of hacks.
For example, the Drupal community started a convention of marking every margin-left and -right, every padding-left and -right, every float: left and float: right with the comment /* LTR */. Then later developers could search for each instance of that exact comment, and create stylesheets to override each left with right, and vice versa. It’s a tedious and error-prone way to work. CSS itself needed a better way to let web developers write their layout code once, and easily switch language directions with a single command. Our new CSS layout system does exactly that. Flexbox, Grid and Alignment use start and end instead of left and right. This lets us define everything in relationship to the writing system, and switch directions easily. By writing justify-content: flex-start, justify-items: end, and eventually margin-inline-start: 1rem we have code that doesn’t need to be changed. This is a much better way to work. I know it can be confusing to think through start and end as replacements for left and right. But it’s better for any multilingual project, and it’s better for the web as a whole. Sadly, I’ve seen CSS preprocessor tools that claim to “fix” the new CSS layout system by getting rid of start and end and bringing back left and right. They want you to use their tool, write justify-content: left, and feel self-righteous. It seems some folks think the new way of working is broken and should be discarded. It was created, however, to fulfill real needs. And to reflect a global internet. As Bruce Lawson says, WWW stands for the World Wide Web, not the Wealthy Western Web. Please don’t try to convince the industry that there’s something wrong with no longer being biased towards western culture. Instead, spread the word about why this new system is here. Spend a bit of time drilling the concept of inline and block into your head, and getting used to start and end. It will be second nature soon enough. I’ve also seen CSS preprocessors that let us use this new way of thinking today, even as all the parts aren’t fully supported by browsers yet. Some tools let you write text-align: start instead of text-align: left, and let the preprocessor handle things for you. That is terrific, in my opinion. A great use of the power of a preprocessor to help us switch over now. But let’s get back to RTL. How to declare your direction You don’t want to use CSS to tell the browser to switch from an LTR language to RTL. You want to do this in your HTML. That way the browser has the information it needs to display the document even if the CSS doesn’t load. This is accomplished mainly on the html element. You should also declare your main language. As I mentioned above, the 24 ways website is using <html lang=""en-gb"" dir=""ltr""> to declare the LTR direction and the use of British English. The UN Arabic website uses <html lang=""ar"" dir=""rtl""> to declare the site as an Arabic site, using an RTL layout. Things get more complicated when you’ve got a page with a mix of languages. But I’m not going to get into all of that, since this article is focused on CSS and layouts, not explaining everything about internationalization. Let me just leave direction here by noting that much of the heavy work of laying out the characters which make up each word is handled by Unicode. If you are interested in learning more about LTR, RTL and bidirectional text, watch this video: Introduction to Bidirectional Text, a presentation by Elika Etemad. Meanwhile, let’s get back to CSS.
The writing mode CSS for Latin-based and Arabic-based systems For both of these systems—Latin-based and Arabic-based, whether LTR or RTL—the same CSS property applies for specifying the writing mode: writing-mode: horizontal-tb. That’s because in both systems, the inline text flow is horizontal, while the block direction is top-to-bottom. This is expressed as horizontal-tb. horizontal-tb is the default writing mode for the web, so you don’t need to specify it unless you are overriding something else higher up in the cascade. You can just imagine that every site you’ve ever built came with: html { writing-mode: horizontal-tb; } Now let’s turn our attention to the vertical writing systems. Han-based systems This is where things start to get interesting. Han-based writing systems include the CJK languages (Chinese, Japanese, Korean) and others. There are two options for laying out a page, and sometimes both are used at the same time. Much of CJK text is laid out like Latin-based languages, with a horizontal top-to-bottom block direction, and a left-to-right inline direction. This is the more modern way of doing things, started in the 20th century in many places, and further pushed into domination by the computer and later the web. The CSS to do this bit of the layout is the same as above: section { writing-mode: horizontal-tb; } Or, you know, do nothing, and get that result as a default. Alternatively, Han-based languages can be laid out in a vertical writing mode, where the inline direction runs vertically, and the block direction goes from right to left. See both options in this diagram: Note that the horizontal text flows from left to right, while the vertical text flows from right to left. Wild, eh? This Japanese issue of Vogue magazine is using a mix of writing modes. The cover opens on the left spine, opposite of what an English magazine does. This page mixes English and Japanese, and typesets the Japanese text in both horizontal and vertical modes. Under the title “Richard Stark” in red, you can see a passage that’s horizontal-tb and LTR, while the longer passage of text at the bottom of the page is typeset vertical-rl. The red enlarged cap marks the beginning of that passage. The long headline above the vertical text is typeset LTR, horizontal-tb. The details of how to set the default of the whole page will depend on your use case. But each element, each headline, each section, each article can be marked to flow the opposite of the default however you’d like. For example, perhaps you leave the default as horizontal-tb, and specify your vertical elements like this: div.articletext { writing-mode: vertical-rl; } Or alternatively you could change the default for the page to a vertical orientation, and then set specific elements to horizontal-tb, like this: html { writing-mode: vertical-rl; } h2, .photocaptions, section { writing-mode: horizontal-tb; } If your page has a sideways scroll, then the writing mode will determine whether the page loads with the upper left corner as the starting point, and scroll to the right (horizontal-tb as we are used to), or if the page loads with the upper right corner as the starting point, scrolling to the left to display overflow. Here’s an example of that change in scrolling direction, in a CSS Writing Mode demo by Chen Hui Jing. Check out her demo — you can switch from horizontal to vertical writing modes with a checkbox and see the difference. Mongolian-based systems Now, hopefully so far all of this kind of makes sense.
It might be a bit more complicated than expected, but it’s not so hard. Well, enter the Mongolian-based systems. Mongolian is also a vertical script language. Text runs vertically down the page. Just like Han-based systems. There are two major differences. First, the block direction runs the other way. In Mongolian, block-level elements stack from left to right. Here’s a drawing of how Wikipedia would look in Mongolian if it were laid out correctly. Perhaps the Mongolian version of Wikipedia will be redone with this layout. Now you might think, that doesn’t look so weird. Tilt your head to the left, and it’s very familiar. The block direction starts on the left side of the screen and goes to the right. The inline direction starts on the top of the page and moves to the bottom (similar to RTL text, just turned 90° counter-clockwise). But here comes the other huge difference. The character direction is “upside down”. The tops of the Mongolian characters are not pointing to the left, towards the start edge of the block direction. They point to the right. Like this: Now you might be tempted to ignore all this. Perhaps you don’t expect to be typesetting Mongolian content anytime soon. But here’s why this is important for everyone — the way Mongolian works defines the results of writing-mode: vertical-lr. And it means we cannot use vertical-lr for typesetting content in other languages in the way we might otherwise expect. If we took what we know about vertical-rl and guessed how vertical-lr works, we might imagine this: But that’s wrong. Here’s how they actually compare: See the unexpected situation? In both writing-mode: vertical-rl and writing-mode: vertical-lr, Latin text is rotated clockwise. Neither writing mode lets us rotate text counter-clockwise. If you are typesetting Mongolian content, apply this CSS in the same way you would apply writing-mode to Han-based writing systems. To the whole page on the html element, or to specific parts of the page like this: section { writing-mode: vertical-lr; } Now, if you are using writing-mode for a graphic design effect on a language that is otherwise typeset horizontally, I don’t think writing-mode: vertical-lr is useful. If the text wraps onto two lines, it stacks in a very unexpected way. So I’ve sort of obliterated it from my toolkit. I find myself using writing-mode: vertical-rl a lot. And never using -lr. Hm. Writing modes for graphic design So how do we use writing-mode to turn English headlines sideways? We could rely on transform: rotate(). Here are two examples, one for each direction. (By the way, each of these demos uses CSS Grid for their overall layout, so be sure to test them in a browser that supports CSS Grid, like Firefox Nightly.) In this demo 4A, the text is rotated clockwise using this code: h1 { writing-mode: vertical-rl; } In this demo 4B, the text is rotated counter-clockwise using this code: h1 { writing-mode: vertical-rl; transform: rotate(180deg); text-align: right; } I use vertical-rl to rotate the text so that it takes up the proper amount of space in the overall flow of the layout. Then I rotate it 180° to spin it around to the other direction. And then I use text-align: right to get it to rise up to the top of its container. This feels like a hack, but it’s a hack that works. Now what I would like to do instead is use another CSS value that was designed for this use case — one of the two other options for writing mode.
If I could, I would lay out example 4A with: h1 { writing-mode: sideways-rl; } And lay out example 4B with: h1 { writing-mode: sideways-lr; } The problem is that these two values are only supported in Firefox. None of the other browsers recognize sideways-*. Which means we can’t really use it yet. In general, the writing-mode property is very well supported across browsers. So I’ll use writing-mode: vertical-rl for now, with the transform: rotate(180deg); hack to fake the other direction. There’s much more to what we can do with the CSS designed to support multiple languages, but I’m going to stop with this intermediate introduction. If you do want a bit more of a taste, look at this example that adds text-orientation: upright; to the mix — turning the individual letters of the Latin font to be upright instead of sideways. It’s this demo 4C, with this CSS applied: h1 { writing-mode: vertical-rl; text-orientation: upright; text-transform: uppercase; letter-spacing: -25px; } You can check out all my Writing Modes demos at labs.jensimmons.com/#writing-modes. I’ll leave you with this last demo. One that applies a vertical writing mode to the subheadlines of a long article. I like how small details like this can really bring a fresh feeling to the content. See the Pen Writing Mode Demo — Article Subheadlines by Jen Simmons (@jensimmons) on CodePen.",2016,Jen Simmons,jensimmons,2016-12-23T00:00:00+00:00,https://24ways.org/2016/css-writing-modes/,code 312,Preparing to Be Badass Next Year,"Once we’ve eaten our way through the holiday season, people will start to think about new year’s resolutions. We tend to focus on things that we want to change… and often things that we don’t like about ourselves to “fix”. We set rules for ourselves, or try to start new habits or stop bad ones. We focus in on things we will or won’t do. For many of us the list of things we “ought” to be spending time on is just plain overwhelming – family, charity/community, career, money, health, relationships, personal development. It’s kinda scary even just listing it out, isn’t it? I want to encourage you to think differently about next year. The ever-brilliant Kathy Sierra articulates a better approach really well when talking about the attitude we should have to building great products. She tells us to think not about what the user will do with our product, but about what they are trying to achieve in the real world and how our product helps them to be badass [1]. When we help the user be badass, then we are really making a difference. I suppose this is one way of saying: focus not on what you will do, focus on what it will help you achieve. How will it help you be awesome? In what ways do you want to be more badass next year? A professional lens Though of course you might want to focus in on health or family or charity or community or another area next year, many people will want to become more badass in their chosen career. So let’s talk about a scaffold to help you figure out your professional / career development next year. First up, an assumption: everyone wants to be awesome. Nobody gets up in the morning aiming to be crap at their job. Nobody thinks to themselves “Today I am aiming for just south of mediocre, and if I can mess up everybody else’s ability to do good work then that will be just perfect [2]”. Ergo, you want to be awesome. So what does awesome look like? Danger! The big trap that people fall into when they think about their professional development is to immediately focus on the things that they aren’t good at.
When you ask people “what do you want to work on getting better at next year?” they frequently gravitate to the things that they believe they are bad at. Why is this a trap? Because if you focus all your time and energy on improving the areas that you suck at, you are going to end up middling at everything. Going from bad → mediocre at a given skill / behaviour takes a bunch of time and energy. So if you spend all your time going from bad → mediocre at things, what do you think you end up? That’s right, mediocre. Mediocrity is not a great career goal, kids. What do you already rock at? The much better investment of time and energy is to go from good → awesome. It often takes the same amount of relative time and energy, but wow the end result is better! So first, ask yourself and those who know you well what you are already pretty damn good at. Combat imposter syndrome by asking others. Then figure out how to double down on those things. What does brilliant look like for a given skill? What’s the knowledge or practice that you need to level yourself up even further in that thing? But what if I really really suck? Admittedly, sometimes something you suck at really is holding you back. But it’s important to separate out weaknesses (just something you suck at) from controlling weaknesses (something you suck at that actually matters for your chosen career). If skill x is just not an important thing for you to be good at, you may never need to care that you aren’t good at it. If your current role or the one you aspire to next really really requires you to be great at x, then it’s worth investing your time and energy (and possibly money too) getting better at it. So when you look at the things that you aren’t good at, which of those are actually essential for success? The right ratio A good rule of thumb is to pick three things you are already good at to work on becoming awesome at and limit yourself to one weakness that you are trying to improve on. That way you are making sure that you get to awesome in areas where you already have an advantage, and limit the amount of time you are spending on going from bad → mediocre. Levelling up learning So once you’ve figured out your areas you want to focus on next year, what do you actually decide to do? Most of all, you should try to design your day-to-day work in a way that it is also an effective learning experience. This means making sure you have a good feedback loop – you get to try something, see if it works, learn from it, rinse and repeat. It’s also about balance: you want to be challenged enough for work to be interesting, without it being so hard it’s frustrating. You want to do similar / the same things often enough that you get to learn and improve, without it being so repetitive that it’s boring. Continuously getting better at things you are already good at is actually both easier and harder than it sounds. The advantage is that it’s pretty easy to add the feedback loop to make sure that you are improving; the disadvantage is that you’re already good at these skills so you could easily just “do” without ever stopping to reflect and improve. Build in time for personal retrospectives (“What went well? What didn’t? What one thing will I choose to change next time?”) and find a way of getting feedback from outside sources as well. 
As for the new skills, it’s worth knowing that skill development follows a particular pattern: We all start out unconsciously incompetent (we don’t know what to do and if we tried we’d unwittingly get it wrong), progress on to conscious incompetence (we now know we’re doing it wrong) then conscious competence (we’re doing it right but wow it takes effort and attention) and eventually get to unconscious competence (automatically getting it right). Your past experiences and knowledge might let you move faster through these stages, but no one gets to skip them. Invest the time and remember you need the feedback loop to really improve. What about keeping up? Everything changes very fast in our industry. We need to invest in not falling behind, in keeping on top of what great looks like. There are a bunch of ways to do this, from reading blog posts, following links on Twitter, reading books to attending conferences or workshops, or just finding time to build things in new ways or with new technologies. Which will work best for you depends on how you best learn. Do you prefer to swallow a book? Do you learn most by building or experimenting? Whatever your learning style though, remember that there are three real needs: Scan the landscape (what’s changing, does it matter) Gain the knowledge or skills (get the detail) Apply the knowledge or skills (use it in reality) When you remember that you need all three of these things it can help you get more of what you do. For me personally, I use a combination of conferences and blogs / Twitter to scan the landscape. Half of what I want out of a conference is just a list of things to have on my radar that might become important. I then pick a couple of things to go read up on more (I personally learn most effectively by swallowing a book or spec or similar). And then I pick one thing at a time to actually apply in real life, to embed the skill / knowledge. In summary Aim to be awesome (mediocrity is not a career goal). Figure out what you already rock at. Only care about stuff you suck at that matters for your career. Pick three things to go from good → awesome and one thing to go from bad → mediocre (or mediocre → good) this year. Design learning into your daily work. Scan the landscape, learn new stuff, apply it for real. Be badass! She wrote a whole book about it. You should read it: Badass: Making Users Awesome ↩ Before you argue too vehemently: I suppose some antisocial sociopathic bastards do exist. Identify them, and then RUN AWAY FAST AS YOU CAN #realtalk ↩",2016,Meri Williams,meriwilliams,2016-12-22T00:00:00+00:00,https://24ways.org/2016/preparing-to-be-badass-next-year/,business 301,Stretching Time,"Time is valuable. It’s a precious commodity that, if we’re not too careful, can slip effortlessly through our fingers. When we think about the resources at our disposal we’re often guilty of forgetting the most valuable resource we have to hand: time. We are all given an allocation of time from the time bank. 86,400 seconds a day to be precise, not a second more, not a second less. It doesn’t matter if we’re rich or we’re poor, no one can buy more time (and no one can save it). We are all, in this regard, equals. We all have the same opportunity to spend our time and use it to maximum effect. As such, we need to use our time wisely. I believe we can ‘stretch’ time, ensuring we make the most of every second and maximising the opportunities that time affords us. 
Through a combination of ‘Structured Procrastination’ and ‘Focused Finishing’ we can open our eyes to all of the opportunities in the world around us, whilst ensuring that we deliver our best work precisely when it’s required. A win win, I’m sure you’ll agree. Structured Procrastination I’m a terrible procrastinator. I used to think that was a curse – “Why didn’t I just get started earlier?” – over time, however, I’ve started to see procrastination as a valuable tool if it is used in a structured manner. Don Norman refers to procrastination as ‘late binding’ (a term I’ve happily hijacked). As he argues, in Why Procrastination Is Good, late binding (delay, or procrastination) offers many benefits: Delaying decisions until the time for action is beneficial… it provides the maximum amount of time to think, plan, and determine alternatives. We live in a world that is constantly changing and evolving, as such the best time to execute is often ‘just in time’. By delaying decisions until the last possible moment we can arrive at solutions that address the current reality more effectively, resulting in better outcomes. Procrastination isn’t just useful from a project management perspective, however. It can also be useful for allowing your mind the space to wander, make new discoveries and find creative connections. By embracing structured procrastination we can ‘prime the brain’. As James Webb Young argues, in A Technique for Producing Ideas, all ideas are made of other ideas and the more we fill our minds with other stimuli, the greater the number of creative opportunities we can uncover and bring to life. By late binding, and availing of a lack of time pressure, you allow the mind space to breathe, enabling you to uncover elements that are important to the problem you’re working on and, perhaps, discover other elements that will serve you well in future tasks. When setting forth upon the process of writing this article I consciously set aside time to explore. I allowed myself the opportunity to read, taking in new material, safe in the knowledge that what I discovered – if not useful for this article – would serve me well in the future. Ron Burgundy summarises this neatly: Procrastinator? No. I just wait until the last second to do my work because I will be older, therefore wiser. An ‘older, therefore wiser’ mind is a good thing. We’re incredibly fortunate to live in a world where we have a wealth of information at our fingertips. Don’t waste the opportunity to learn, rather embrace that opportunity. Make the most of every second to fill your mind with new material, the rewards will be ample. Deadlines are deadlines, however, and deadlines offer us the opportunity to focus our minds, bringing together the pieces of the puzzle we found during our structured procrastination. Like everyone I’ll hear a tiny, but insistent voice in my head that starts to rise when the deadline is approaching. The older you get, the closer to the deadline that voice starts to chirp up. At this point we need to focus. Focused Finishing We live in an age of constant distraction. Smartphones are both a blessing and a curse, they keep us connected, but if we’re not careful the constant connection they provide can interrupt our flow. When a deadline is accelerating towards us it’s important to set aside the distractions and carve out a space where we can work in a clear and focused manner. When it’s time to finish, it’s important to avoid context switching and focus. 
All those micro-interactions throughout the day – triaging your emails, checking social media and browsing the web – can get in the way of you hitting your deadline. At this point, they’re distractions. Chunking tasks and managing when they’re scheduled can improve your productivity by a surprising order of magnitude. At this point it’s important to remove distractions which result in ‘attention residue’, where your mind is unable to focus on the current task, due to the mental residue of other, unrelated tasks. By focusing on a single task in a focused manner, it’s possible to minimise the negative impact of attention residue, allowing you to maximise your performance on the task at hand. Cal Newport explores this in his excellent book, Deep Work, which I would highly recommend reading. As he puts it: Efforts to deepen your focus will struggle if you don’t simultaneously wean your mind from a dependence on distraction. To help you focus on finishing it’s helpful to set up a work-focused environment that is purposefully free from distractions. There’s a time and a place for structured procrastination, but – equally – there’s a time and a place for focused finishing. The French term ‘mise en place’ is drawn from the world of fine cuisine – I discovered it when I was procrastinating – and it’s applicable in this context. The term translates as ‘putting in place’ or ‘everything in its place’ and it refers to the process of getting the workplace ready before cooking. Just like a professional chef organises their utensils and arranges their ingredients, so too can you. Thanks to the magic of multiple users on computers, it’s possible to create a separate user on your computer – without access to email and other social tools – so that you can switch to that account when you need to focus and hit the deadline. Another, less technical way of achieving the same result – depending, of course, upon your line of work – is to close your computer and find some non-digital, unconnected space to work in. The goal is to carve out time to focus so you can finish. As Newport states: If you don’t produce, you won’t thrive – no matter how skilled or talented you are. Procrastination is fine, but only if it’s accompanied by finishing. Create the space to finish and you’ll enjoy the best of both worlds. In closing… There is a time and a place for everything: there is a time to procrastinate, and a time to focus. To truly reap the rewards of time, the mind needs both. By combining the processes of ‘Structured Procrastination’ and ‘Focused Finishing’ we can make the most of our 86,400 seconds a day, ensuring we are constantly primed to make new discoveries, but just as importantly, ensuring we hit the all-important deadlines. Make the most of your time, you only get so much. Use every second productively and you’ll be thankful that you did. Don’t waste your time, once it’s gone, it’s gone… and you can never get it back.",2016,Christopher Murphy,christophermurphy,2016-12-21T00:00:00+00:00,https://24ways.org/2016/stretching-time/,process 304,Five Lessons From My First 18 Months as a Dev,"I recently moved from Sydney to London to start a dream job with Twitter as a software engineer. A software engineer! Who would have thought. Having started my career as a journalist, the title ‘engineer’ is very strange to me. The notion of writing in first person is also very strange. Journalists are taught to be objective, invisible, to keep yourself out of the story. And here I am writing about myself on a public platform. 
Cringe. Since I started learning to code I’ve often felt compelled to write about my experience. I want to share my excitement and struggles with the world! But as a junior I’ve been held back by thoughts like ‘whatever you have to say won’t be technical enough’, ‘any time spent writing a blog would be better spent writing code’, ‘blogging is narcissistic’, etc.  Well, I’ve been told that your thirties are the years where you stop caring so much about what other people think. And I’m almost 30. So here goes! These are five key lessons from my first year and a half in tech: Deployments should delight, not dread Lesson #1: Making your deployment process as simple as possible is worth the investment. In my first dev job, I dreaded deployments. We would deploy every Sunday night at 8pm. Preparation would begin the Friday before. A nominated deployment manager would spend half a day tagging master, generating scripts, writing documentation and raising JIRAs. The only fun part was choosing a train gif to post in HipChat: ‘All aboard! The deployment train leaves in 3, 2, 1…” When Sunday night came around, at least one person from every squad would need to be online to conduct smoke tests. Most times, the deployments would succeed. Other times they would fail. Regardless, deployments ate into people’s weekend time — and they were intense. Devs would rush to have their code approved before the Friday cutoff. Deployment managers who were new to the process would fear making a mistake.  The team knew deployments were a problem. They were constantly striving to improve them. And what I’ve learnt from Twitter is that when they do, their lives will be bliss. TweetDeck’s deployment process fills me with joy and delight. It’s quick, easy and stress free. In fact, it’s so easy I deployed code on my first day in the job! Anyone can deploy, at any time of day, with a single command. Rollbacks are just as simple. There’s no rush to make the deployment train. No manual preparation. No fuss. Value — whether in the form of big new features, simple UI improvements or even production bug fixes — can be shipped in an instant. The team assures me the process wasn’t always like this. They invested lots of time in making their deployments better. And it’s clearly paid off. Code reviews need love, time and acceptance Lesson #2: Code reviews are a three-way gift. Every time I review someone else’s code, I help them, the team and myself. Code reviews were another pain point in my previous job. And to be honest, I was part of the problem. I would raise code reviews that were far too big. They would take days, sometimes weeks, to get merged. One of my reviews had 96 comments! I would rarely review other people’s code because I felt too junior, like my review didn’t carry any weight.  The review process itself was also tiring, and was often raised in retrospectives as being slow. In order for code to be merged it needed to have ticks of approval from two developers and a third tick from a peer tester. It was the responsibility of the author to assign the reviewers and tester. It was felt that if it was left to team members to assign themselves to reviews, the “someone else will do it” mentality would kick in, and nothing would get done. At TweetDeck, no-one is specifically assigned to reviews. Instead, when a review is raised, the entire team is notified. Without fail, someone will jump on it. Reviews are seen as blocking. They’re seen to be equally, if not more important, than your own work. 
I haven’t seen a review sit for longer than a few hours without comments.  We also don’t work on branches. We push single commits for review, which are then merged to master. This forces the team to work in small, incremental changes. If a review is too big, or if it’s going to take up more than an hour of someone’s time, it will be sent back. What I’ve learnt so far at Twitter is that code reviews must be small. They must take priority. And they must be a team effort. Being a new starter is no “get out of jail free card”. In fact, it’s even more of a reason to be reviewing code. Reviews are a great way to learn, get across the product and see different programming styles. If you’re like me, and find code reviews daunting, ask to pair with a senior until you feel more confident. I recently paired with my mentor at Twitter and found it really helpful. Get friendly with feature flagging Lesson #3: Feature flagging gives you complete control over how you build and release a project. Say you’re implementing a new feature. It’s going to take a few weeks to complete. You’ll complete the feature in small, incremental changes. At what point do these changes get merged to master? At what point do they get deployed? Do you start at the back end and finish with the UI, so the user won’t see the changes until they’re ready? With feature flagging — it doesn’t matter. In fact, with feature flagging, by the time you are ready to release your feature, it’s already deployed, sitting happily in master with the rest of your codebase.  A feature flag is a boolean value that gets wrapped around the code relating to the thing you’re working on. The code will only be executed if the value is true. if (TD.decider.get('new_feature')) { //code for new feature goes here } In my first dev job, I deployed a navigation link to the feature I’d been working on, making it visible in the product, even though the feature wasn’t ready. “Why didn’t you use a feature flag?” a senior dev asked me. An honest response would have been: “Because they’re confusing to implement and I don’t understand the benefits of using them.” The fix had to wait until the next deployment. The best thing about feature flagging at TweetDeck is that there is no need to deploy to turn on or off a feature. We set the status of the feature via an interface called Deckcider, and the code makes regular API requests to get the status.  At TweetDeck we are also able to roll our features out progressively. The first rollout might be to a staging environment. Then to employees only. Then to 10 per cent of users, 20 per cent, 30 per cent, and so on. A gradual rollout allows you to monitor for bugs and unexpected behaviour, before releasing the feature to the entire user base. Sometimes a piece of work requires changes to existing business logic. So the code might look more like this: if (TD.decider.get('change_to_existing_feature')) { //new logic goes here } else { //old logic goes here } This seems messy, right? Riddling your code with if else statements to determine which path of logic should be executed, or which version of the UI should be displayed. But at Twitter, this is embraced. You can always clean up the code once a feature is turned on. This isn’t essential, though. At least not in the early days. When a cheeky bug is discovered, having the flag in place allows the feature to be very quickly turned off again. Let data and experimentation drive development Lesson #4: Use data to determine the direction of your product and measure its success.
The first company I worked for placed a huge amount of emphasis on data-driven decision making. If we had an idea, or if we wanted to make a change, we were encouraged to “bring data” to show why it was necessary. “Without data, you’re just another person with an opinion,” the chief data scientist would say. This attitude helped to ensure we were building the right things for our customers. Instead of just plucking a new feature out of thin air, it was chosen based on data that reflected its need. But how do you design that feature? How do you know that the design you choose will have the desired impact? That’s where experiments come into play.  At TweetDeck we make UI changes that we hope will delight our users. But the assumptions we make about our users are often wrong. Our front-end team recently sat in a room and tried to guess which UIs from A/B tests had produced better results. Half the room guessed incorrectly every time. We can’t assume a change we want to make will have the impact we expect. So we run an experiment. Here’s how it works. Users are placed into buckets. One bucket of users will have access to the new feature, the other won’t. We hypothesise that the bucket exposed to the new feature will have better results. The beauty of running an experiment is that we’ll know for sure. Instead of blindly releasing the feature to all users without knowing its impact, once the experiment has run its course, we’ll have the data to make decisions accordingly. Hire the developer, not the degree Lesson #5: Testing candidates on real world problems will allow applicants from all backgrounds to shine. Surely, a company like Twitter would give their applicants insanely difficult code tests, and the toughest technical questions, that only the cleverest CS graduates could pass, I told myself when applying for the job. Lucky for me, this wasn’t the case. The process was insanely difficult—don’t get me wrong—but the team at TweetDeck gave me real world problems to solve. The first code test involved bug fixes, performance and testing. The second involved DOM traversal and manipulation. Instead of being put on the spot in a room with a whiteboard and pen I was given a task, access to the internet, and time to work on it. Similarly, in my technical interviews, I was asked to pair program on real world problems that I was likely to face on the job. In one of my phone screenings I was told Twitter wanted to increase diversity in its teams. Not just gender diversity, but also diversity of experience and background. Six months later, with a bunch of new hires, team lead Tom Ashworth says TweetDeck has the most diverse team it’s ever had. “We designed an interview process that gave us a way to simulate the actual job,” he said. “It’s not about testing whether you learnt an algorithm in school.” Is this lowering the bar? No. The bar is whether a candidate has the ability to solve problems they are likely to face on the job. I recently spoke to a longstanding Atlassian engineer who said they hadn’t seen an algorithm in their seven years at the company. These days, only about 50 per cent of developers have computer science degrees. The majority of developers are self taught, learn on the job or via online courses. 
If you want to increase diversity in your engineering team, ensure your interview process isn’t excluding these people.",2016,Amy Simmons,amysimmons,2016-12-20T00:00:00+00:00,https://24ways.org/2016/my-first-18-months-as-a-dev/,process 310,Fairytale of new Promise,"There are only four good Christmas songs. I know, yeah, JavaScript or whatever. We’ll get to that in a minute, I promise. First—and I cannot stress this enough—there are four good Christmas songs. You’re free to disagree with me here, of course, but please try to understand that you will be wrong. They don’t all have the most safe-for-work titles; I can’t list all of them here, but if you choose to let your fingers do the walkin’ to your nearest search engine, I will say that one was released by the band FEAR way back in 1982 and one was on Run the Jewels’ self-titled debut album. The lyrics are a hell of a lot worse than the titles, so maybe wait until you get home from work before you queue them up. Wear headphones, if you’ve got thin walls. For my money, though, the two I can reference by name are the top of that small heap: Tom Waits’ Christmas Card from a Hooker in Minneapolis, and The Pogues’ Fairytale of New York. The former once held the honor of being the only good Christmas song—about which I was also unequivocally correct, right up until I changed my mind. It’s not the song up for discussion today, but feel free to familiarize yourself just the same—I’ll wait. Fairytale of New York—the top of the list—starts out by hinting at some pretty standard holiday fare; dreams and cheer and whatnot. Typical seasonal stuff, so long as you ignore that the story seems to be recounted as a drunken flashback in a jail cell. You can probably make a few guesses at the underlying spirit of the song based on that framing: following a lucky break, our bright-eyed protagonists move to New York in search of fame and fortune, only to quickly descend into bad decisions, name-calling, and vaguely festive chaos. This song speaks to me on a couple of levels, not the least of which is as a retelling of my day-to-day interactions with JavaScript. Each day’s melody might vary a little bit, granted, but the lyrics almost always follow a pretty clear arc toward “PARENTAL ADVISORY: EXPLICIT CONTENT.” You might have heard a similar tune yourself; it goes a little somethin’ like setTimeout(function() { console.log( ""this should be happening last"" ); }, 1000); Callbacks are calling callbacks calling callbacks and something is happening somewhere, as the JavaScript interpreter plods through our code start-to-finish, line-by-line, step-by-step. If we need to take actions based on the results of something that could take its sweet time resolving, well, we’d better fiddle with the order of things to make sure those actions don’t happen too soon. “But I can see a better time,” as the song says, “when all our dreams come true.” So, with that Pogues brand of holiday spirit squarely in mind—by which I mean that your humble narrator is almost certainly drunk, and may be incarcerated at the time of publication—gather ’round for a story of hope, of hardships, of semi-asynchronous JavaScript programming, and ultimately: of Promise unfulfilled. The Main Thread JavaScript is single-minded, in a manner of speaking.
Anything we tell the JavaScript runtime to do goes into a single-file queue; you’ll see it referred to as the “main thread,” or “UI thread.” That thread can be shared by a number of critical browser processes, like rendering and re-rendering parts of the page, and user interactions ranging from the simple—say, highlighting text—to the more complex—interacting with form elements. If that sounds a little scary to you, well, that’s because it is. The more complex our scripts, the more we’re cramming into that single-file main thread, to be processed along with—say—some of our CSS animations. Too much JavaScript clogging up the main thread means a lot of user-facing performance jankiness. Getting away from that single thread is a big part of all the excitement around Web Workers, which allow us to offload entire scripts into their own dedicated background threads—though not without limitations of their own. Outside of Web Workers, that everything-thread is the only game in town: scripts executed one thing at a time, functions calling functions calling functions, taking numbers and crowding up the same deli counter as a user’s interactions—which, in this already strained metaphor, would be ham, I guess? Asynchronous JavaScript Now, those queued actions may include asynchronous things. For example: AJAX callbacks, setTimeout/setInterval, and addEventListener won’t block the main thread while we’re waiting for a request to come back, a timer to tick away, or an event to trigger. Once those things do kick in, though, the actions they’re meant to perform will get shuffled right back into that single-thread queue. There are a couple of places you might have written asynchronously-fired JavaScript, even if you’re not super familiar with the overarching concept: XMLHttpRequest—“AJAX,” if ya nasty—or just kicking off a function once a user triggers a click or mouseenter event. Event-driven development is writ a little larger, with the overall flow of the script dictated by events, both internal and external. Writing event-driven JavaScript applications is a step in the right direction for sure—it won’t cure what ails the main thread, but it does work with the medium in a reasonable way. Event-driven development allows us to manage our use of the main thread in a way that makes sense. If any of this rings a bell for you, the motivation for Promises should feel familiar. For example, a custom init event might kick things off, and fire a create event that applies our classes and restructures our markup which, on completion, fires a bindEvents event to handle all the event listeners for user interaction. There might not sound like much difference between that and one big function that kicks off, manipulates the DOM, and binds our events line-by-line—but in a script of sufficient size and complexity we’re not only provided with a decoupled flow through the script, but obvious touchpoints for future updates and a predictable structure for ongoing maintenance. 
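To make that flow a little more concrete, here’s a rough sketch of the kind of event-driven chain being described. The event names, the class name, and the handler bodies are mine, purely for illustration; it uses plain DOM CustomEvents rather than any particular framework.

// A sketch of an event-driven flow: 'init' kicks off 'create', which kicks off 'bindEvents'.
// Every listener is registered before the first dispatch, or the chain stalls.
document.addEventListener( 'init', function() {
  // Apply our classes and restructure the markup.
  document.body.classList.add( 'app-ready' );
  document.dispatchEvent( new CustomEvent( 'create' ) );
});

document.addEventListener( 'create', function() {
  // Markup is in place; now it's safe to wire up user interaction.
  document.dispatchEvent( new CustomEvent( 'bindEvents' ) );
});

document.addEventListener( 'bindEvents', function() {
  document.body.addEventListener( 'click', function() {
    console.log( 'clicked' );
  });
});

// Kick the whole thing off.
document.dispatchEvent( new CustomEvent( 'init' ) );

Note that everything here still runs top-to-bottom on the main thread; the events only give that flow a shape.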
This pattern falls apart a little where we were still creating, binding, and listening for events in the same top-to-bottom, one-item-at-a-time way—we had to set a listener on a given object before the event fired, or nothing would happen: // Create the event: var event = document.createEvent( ""Event"" ); // Name the event: event.initEvent( ""doTheStuff"", true, true ); // Listen for the custom `doTheStuff` event on `window`: window.addEventListener( ""doTheStuff"", initializeEverything ); // Fire the custom event window.dispatchEvent( event ); This example is a little contrived, and this stuff is a lot more manageable for sure with the addition of a framework, but that’s the basic gist: create and name the event, add a listener for the event, and—after setting our listener—dispatch the event. Events and callbacks aren’t the only game in town for weaving our way in and out of the main thread, though—at least, not anymore. Promises A Promise is, at the risk of sounding sentimental, pure potential—an empty container into which a value eventually results. A Promise can exist in several states: “pending,” while the computation it contains is being performed, or “resolved” once that computation is complete. Once resolved, a Promise is “fulfilled” if it gave us back something we expect, or “rejected” if it didn’t. The Promise constructor accepts a callback with two arguments: resolve and reject. We perform an action—asynchronous or otherwise—within that callback. If everything in there has gone according to plan, we call resolve. If something has gone awry, we call reject—with an error, conventionally. To illustrate, let’s tack something together with a pretty decent chance of doing what we don’t want: a promise meant only to give us the number 1, but one that has a chance of giving us back a 2. No reasonable person would ever do this, of course, but I wouldn’t necessarily put it past me. var promisedOne = new Promise( function( resolve, reject ) { var coinToss = Math.floor( Math.random() * 2 ) + 1; if( coinToss === 1 ) { resolve( coinToss ); } else { reject( new Error( ""That ain’t a one."" ) ); } }); There’s nothing too surprising in there, after you boil it all down. It’s a little return-y, with the exception that we’re flagging results as “as expected” or “something went wrong.” Tapping into that Promise uses another new keyword: then—and as someone who attempts to make sense of JavaScript by breaking it down to plain ol’ human-language, I’m a big fan of this syntax. then is tacked onto our Promise identifier, and does just what it says on the tin: once the Promise is resolved, then do one of two things, both supplied as callbacks: the first in the case of a fulfilled promise, and the second in the case of a rejected one. Those two callbacks will have, as arguments, the results we specified with resolve or reject, respectively. It sounds like a lot in prose, but in code it’s a pretty simple pattern: promisedOne.then( function( result ) { console.log( result ); }, function( error ) { console.error( error ); }); If you’ve spent any time working with AJAX—jQuery-wise, in particular—you’ve seen something like this pattern before: a success callback and an error callback. The state of a promise, once fulfilled or rejected, cannot be changed—any reference we make to promisedOne will have a single, fixed result. It may not look like too much the way I’m using it here, but it’s powerful stuff—a pattern for asynchronously resolving anything. 
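To see that fixed result in action, we can ask the same promise for its value twice: once right away, and once long after it has settled. This is a minimal sketch of my own; the value and the timing are purely for illustration.

var promisedAnswer = Promise.resolve( 42 );

// First look, as soon as we like.
promisedAnswer.then( function( result ) {
  console.log( 'first look:', result ); // 42
});

// Second look, a full second later: the same fulfilled value comes back.
setTimeout( function() {
  promisedAnswer.then( function( result ) {
    console.log( 'second look:', result ); // still 42
  });
}, 1000 );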
I’ve recently used Promises alongside a script that emulates Font Load Events, to apply webfonts asynchronously and avoid a potential performance hit. Font Face Observer allows us to, as the name implies, determine when the files referenced by our @font-face rules have finished loading. var fontObserver = new FontFaceObserver( ""Fancy Font"" ); fontObserver.check().then(function() { document.documentElement.className += "" fonts-loaded""; }, function( error ) { console.error( error ); }); fontObserver.check() gives us back a Promise, allowing us to chain on a then containing our callbacks for success and failure. We use the fulfilled callback to bolt a class onto the page once the font file has been fully transferred. We don’t bother including an argument in the first function, since we don’t care about the result itself so much as we care that the promise resolved without error—we’re not doing anything with the resolved value, just adding a class to the page. We do include the error argument, since we’ll want to know what happened should something go wrong. Now, this isn’t the tidiest syntax around—at least to my eyes—with those two functions just kinda floating in a then. Luckily there’s a similar alternative syntax; one that I find a bit easier to parse at-a-glance: fontObserver.check() .then(function() { document.documentElement.className += "" fonts-loaded""; }) .catch(function( error ) { console.log( error ); }); The first callback inside then provides us with our success state, while the catch provides us with a single, explicit “something went wrong” callback. The two syntaxes aren’t completely identical in all situations, but for a simple case like this, I find it a little neater. The Common Thread I guess I still owe you an explanation, huh. Not about the JavaScript-whatever; I think I’ve explained that plenty. No, I mean Fairytale of New York, and why it’s perched up there at the top of the four (4) song heap. Fairytale is a sad song, ostensibly. If you follow the main thread—start to finish, line-by-line, step by step—Fairytale is a sad song. And I can see you out there, visions of Die Hard dancing in your heads: “but is it a Christmas song?” Well, for my money, nothing says “holidays” quite like unreliable narration. Shane MacGowan, the song’s author, has placed the first verse about “Christmas Eve in the drunk tank” as happening right after the “lucky one, came in eighteen-to-one”—not at the chronological end of the story. That means the song might not be mostly drunken flashback, but all of it a single, overarching flashback including a Christmas Eve in protective custody. It could be that the man and woman are, together, recounting times long past—good times and bad times—maybe not even in chronological order. Hell, the “NYPD Choir” mentioned in the chorus? There’s no such thing. We’re not big Christmas folks, my family and I. But just the same, every year, the handful of us get together, and every year—like clockwork—there’s a lull in conversation, there’s a sharp exhale, and Ma says “we all made it.” Not to a house, not to a dinner, but through another year, to another Christmas. At this point, without fail, someone starts telling a story—and one begets another, and so on. Sometimes the stories are happy, sometimes they’re sad, more often than not they’re both. Some are about things we were lucky to walk away from, some are about a time when another one of us didn’t. 
Start-to-finish, line-by-line, step-by-step, the main thread through the year doesn’t change, and maybe there isn’t a whole lot we can do to change it. But by carefully weaving our way in and out of that thread—stories all out of sync and resolving one way or the other, with the results determined by questionably reliable narrators—we can change the way we interact with it and, little by little, we can start making sense of it.",2016,Mat Marquis,matmarquis,2016-12-19T00:00:00+00:00,https://24ways.org/2016/fairytale-of-new-promise/,code 294,New Tricks for an Old Dog,"Much of my year has been spent helping new team members find their way around the expansive and complex codebase that is the TweetDeck front-end, trying to build a happy and productive group of people around a substantial codebase with many layers of legacy. I’ve loved doing this. Everything from writing new documentation, drawing diagrams, and holding technical architecture sessions teaches you something you didn’t know or exposes an area of uncertainty that you can go work on. In this article, I hope to share some experiences and techniques that will prove useful in your own situation, and that will let you impress your friends in some new and exciting ways! How do you do, fellow kids? To start with I’d like to introduce you to our JavaScript framework, Flight. Right now it’s used by twitter.com and TweetDeck although, as a company, Twitter is largely moving to React. Over time, as we used Flight for more complex interfaces, we found it wasn’t scaling with us. Composing components into trees was fiddly and often only applied for a specific parent-child pairing. It seems like an obvious feature with hindsight, but it didn’t come built in to Flight, and it made reusing components a real challenge. There was no standard way to manage the state of a component; they all did it slightly differently, and the technique often varied by who was writing the code. This cost us in maintainability as you just couldn’t predict how a component would be built until you opened it. Making matters worse, Flight relied on events to move data around the application. Unfortunately, events aren’t good for giving structure to complex logic. They jump around in a way that’s hard to understand and debug, and force you to search your code for a specific string — the event name — to figure out what’s going on. To find fixes for these problems, we looked around at other frameworks. We like React for its simple, predictable state management and reactive re-render flow, and Elm for bringing strict functional programming to everyone. But when you have lots of existing code, rewriting or switching framework is a painful and expensive option. You have to understand how it will interact with your existing code, how you’ll test it alongside existing code, and how it will affect the size and performance of the application. This all takes time and effort! Instead of planning a rewrite, we looked for the ideas hidden within other frameworks that we could reapply in our own situation or bring to the tools we were already using. Boiled down, what we liked seemed quite simple: Component nesting & composition Easy, predictable state management Normal functions for data manipulation Making these ideas applicable to Flight took some time, but we’re in a much better place now. 
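As a rough illustration of what we mean by “normal functions for data manipulation” and predictable state, here’s the boiled-down idea with made-up names; this isn’t Flight’s API or real TweetDeck code, just the shape of the thing.

// One plain object holds the state; plain functions compute the next state.
var state = { columns: [] };

function addColumn( currentState, column ) {
  // Return a new state object instead of mutating the old one.
  return { columns: currentState.columns.concat([ column ]) };
}

function render( currentState ) {
  console.log( 'rendering', currentState.columns.length, 'column(s)' );
}

state = addColumn( state, { title: 'Home' } );
render( state );

Because the update is just a function of the old state and some input, it’s easy to test and easy to predict, which is exactly what we wanted to carry back into Flight.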
Through persistent trial-and-error, we have well documented, testable and standard techniques for creating complex component hierarchies, updating and reacting to state changes, and passing data around the app. While the specifics of our situation and Flight aren’t really important, this experience taught me something: Distill good tech into great ideas. You can apply great ideas anywhere. You don’t have to use the cool kids’ latest framework, hottest build tool or fashionable language to benefit from them. If you can identify a nugget of gold at the heart of it all, why not use it to improve what you have already? Times, they are a changin’ Apart from stealing ideas from the new and shiny, how can we make the most of improved tooling and techniques? Times change and so should the way we write code. Going back in time a bit, TweetDeck used some slightly outmoded tools for building and bundling. Without a transpiler like Babel we were missing out on new language features, and without a more advanced build tool like Webpack, every module’s source was encased in AMD boilerplate. In fact, we found ourselves with a mix of both AMD syntaxes: define([""lodash""], function (_) { // . . . }); define(function (require) { var _ = require(""lodash""); // . . . }); This just wouldn’t do. And besides, what we really wanted was CommonJS, or even ES2015 module syntax: import _ from ""lodash""; These days we’re using Babel, Webpack, ES2015 modules and many new language features that make development just… better. But how did we get there? To explain, I want to introduce you to codemods and jscodeshift. A codemod is a large-scale refactor of a whole codebase, often mechanical or repetitive. Think of renaming a module or changing an API like URL(""..."") to new URL(""...""). jscodeshift is a toolkit for running automated codemods, where you express a code transformation using code. The automated codemod operates on each file’s syntax tree — a data-structure representation of the code — finding and modifying in place as it goes. Here’s an example that renames all instances of the variable foo to bar: module.exports = function (fileInfo, api) { return api .jscodeshift(fileInfo.source) .findVariableDeclarators('foo') .renameTo('bar') .toSource(); }; It’s a seriously powerful tool, and we’ve used it to write a series of codemods that: rename modules, unify our use of AMD to a single syntax, transition from one testing framework to another, and switch from AMD to CommonJS. These changes can be pretty huge and far-reaching. Here’s an example commit from when we switched to CommonJS: commit 8f75de8fd4c702115c7bf58febba1afa96ae52fc Date: Tue Jul 12 2016 Run AMD -> CommonJS codemod 418 files changed, 47550 insertions(+), 48468 deletions(-) Yep, that’s just under 50k lines changed, tested, merged and deployed without any trouble. AMD be gone! From this step-by-step approach, using codemods to incrementally tweak and improve, we extracted a little codemod recipe for making significant, multi-stage changes: Find all the existing patterns Choose the two most similar Unify with a codemod Repeat. For example: For module loading, we had 2 competing AMD patterns plus some use of CommonJS The two AMD syntaxes were the most similar We used a codemod to unify the AMD patterns Later we returned to AMD to convert it to CommonJS It’s worked for us, and if you’d like to know more about codemods then check out Evolving Complex Systems Incrementally by Facebook engineer, Christoph Pojer. Welcome aboard! 
As TweetDeck has gotten older and larger, the amount of things a new engineer has to learn about has exploded. The myriad microservices that manage our data, and the layers of authentication, security and business logic around them, make for an overwhelming amount of information to hand to a newbie. Inspired by Amy’s amazing Guide to the Care and Feeding of Junior Devs, we realised it was important to take time to design the onboarding that each of our new hires goes through, to help them make the most of their first few weeks. Joining a new company, team, or both, is stressful and uncomfortable. Everything you can do to help a new hire will be valuable to them. So please, take time to design your onboarding! And as you build up an onboarding process, you’ll create things that are useful for more than just new hires; it’ll force you to write documentation, for example, in a way that’s understandable for people who are unfamiliar with your team, product and codebase. This can lead to more outside contributions: potential contributors feel more comfortable getting set up on your product without asking for help. This is something that’s taken for granted in open source, but somehow I think we forget about it in big companies. After all, better documentation is just a good thing. You will forget things from time to time, and you’d be surprised how often the “beginner” docs help! For TweetDeck, we put together system and architecture diagrams, and one-pager explanations of important concepts: What are our dependencies? Where are the potential points of failure? Where does authentication live? Storage? Caching? Who owns “X”? Of course, learning continues long after onboarding. The landscape is constantly shifting; old services are deprecated, new APIs appear and what was once true can suddenly be very wrong. Keeping up with this is a serious challenge, and more than any one person can track. To address this, we’ve thought hard about our knowledge sharing practices across the whole team. For example, we completely changed the way we do code review. In my opinion, code review is the single most effective practice you can introduce to share knowledge around, and build the quality and consistency of your team’s work. But, if you’re not doing it, here’s my suggestion for getting started: Every pull request gets a +1 from someone else. That’s all — it’s very light-weight and easy. Just ask someone to have a quick look over your code before it goes into master. At Twitter, every commit gets a code review. We do a lot of reviewing, so small efficiency and effectiveness improvements make a big difference. Over time we learned some things: Don’t review for more than an hour 1 Keep reviews smaller than ~400 lines 2 Code review your own code first 2 After an hour, and above roughly 400 lines, your ability to detect issues in a code review starts to decrease. So review little and often. The gaps around lunch, standup and before you head home are ideal. And remember, if someone’s put code up for a review, that review is blocking them from doing other work. It’s your job to unblock them. On TweetDeck, we actually try to keep reviews under 250 lines. It doesn’t sound like much, but this constraint applies pressure to make smaller, incremental changes. This makes breakages easier to detect and roll back, and leads to a very natural feature development process that encourages learning and iteration. But the most important thing I’ve learned personally is that reviewing my own code is the best way to spot issues. 
I try to approach my own reviews the way I approach my team’s: with fresh, critical eyes, after a break, using a dedicated code review tool. It’s amazing what you can spot when you put a new interface around code you’ve been staring at for hours! And yes, this list features science. The data backs up these conclusions, and if you’d like to learn more about scientific approaches to software engineering then I recommend you buy Making Software: What Really Works, and Why We Believe It. It’s ace. For more dedicated information sharing, we’ve introduced regular seminars for everyone who works on a specific area or technology. It works like this: a team-member shares or teaches something to everyone else, and next time it’s someone else’s turn. Giving everyone a chance to speak, and encouraging a wide range of topics, is starting to produce great results. If you’d like to run a seminar, one thing you could try to get started: run a “point at the thing you least understand in our architecture” session — thanks to James for this idea. And guess what… your onboarding architecture diagrams will help (and benefit from) this! More, please! There are a few ideas here to get you started, but there are even more in a talk I gave this year called Frontend Archaeology, including a look at optimising for confidence with front-end operations. And finally, thanks to Amy for proofreading this and to Passy for feedback on the original talk. Dunsmore et al. 2000. Object-Oriented Inspection in the Face of Delocalisation. Proceedings of the 22nd ICSE 2000: 467-476. ↩ Cohen, Jason. 2006. Best Kept Secrets of Peer Code Review. Beverly, MA: SmartBear Software. ↩ ↩",2016,Tom Ashworth,tomashworth,2016-12-18T00:00:00+00:00,https://24ways.org/2016/new-tricks-for-an-old-dog/,code 289,Front-End Developers Are Information Architects Too,"The theme of this year’s World IA Day was “Information Everywhere, Architects Everywhere”. This article isn’t about what you may consider an information architect to be: someone in the user-experience field, who maybe studied library science, and who talks about taxonomies. This is about a realisation I had a couple of years ago when I started to run an increasing number of usability-testing sessions with people who have disabilities: that the structure, labelling, and connections that can be made in front-end code are information architecture. People’s ability to be successful online is unequivocally connected to the quality of the code that is written. Places made of information In information architecture we talk about creating places made of information. These places are made of ones and zeros, but we talk about them as physical structures. We talk about going onto a social media platform, posting in blogs, getting locked out of an environment, and building applications. In 2002, Andrew Hinton stated: People live and work in these structures, just as they live and work in their homes, offices, factories and malls. These places are not virtual: they are as real as our own minds. 25 Theses We’re creating structures which people rely on for significant parts of their lives, so it’s critical that we carry out our work responsibly. This means we must use our construction materials correctly. Luckily, our most important material, HTML, has a well-documented specification which tells us how to build robust and accessible places. What is most important, I believe, is to understand the semantics of HTML. 
Semantics The word “semantic” has its origin in Greek words meaning “significant”, “signify”, and “sign”. In the physical world, a structure can have semantic qualities that tell us something about it. For example, the stunning Westminster Abbey inspires awe and signifies much about the intent and purpose of the structure. The building’s size; the quality of the stone work; the massive, detailed stained glass: these are all signs that this is a building meant for something the creators deemed important. Alternatively, consider a set of large, clean, well-positioned, well-lit doors on the ground floor of an office block: they don’t need an “entrance” sign to communicate their use and to stop people trying to use a nearby fire exit to get into the building. The design of the doors signifies their usage. Sometimes a more literal and less awe-inspiring approach to communicating a building’s purpose happens, but the effect is similar: the building is signifying something about its purpose. HTML has over 115 elements, many of which have semantics to signify structure and affordance to people, browsers, and assistive technology. The HTML 5.1 specification mentions semantics, stating: Elements, attributes, and attribute values in HTML are defined … to have certain meanings (semantics). For example, the
ol element represents an ordered list, and the lang attribute represents the language of the content. HTML 5.1 Semantics, structure, and APIs of HTML documents HTML’s baked-in semantics means that developers can architect their code to signify structure, create relationships between elements, and label content so people can understand what they’re interacting with. Structuring and labelling information to make it available, usable, and understandable to people is what an information architect does. It’s also what a front-end developer does, whether they realise it or not. A brief introduction to information architecture We’re going to start by looking at what an information architect is. There are many definitions, and I’m going to quote Richard Saul Wurman, who is widely regarded as the father of information architecture. In 1976 he said an information architect is: the individual who organizes the patterns inherent in data, making the complex clear; a person who creates the structure or map of information which allows others to find their personal paths to knowledge; the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding, and the science of the organization of information. Of Patterns And Structures To me, this clearly defines any developer who creates code that a browser, or other user agent (for example, a screen reader), uses to create a structured, navigable place for people. Just as there are many definitions of what an information architect is, there are many for information architecture itself. I’m going to use the definition from the fourth edition of Information Architecture For The World Wide Web, in which the authors define it as: The structural design of shared information environments. The synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems. The art and science of shaping information products and experiences to support usability, findability, and understanding. Information Architecture For The World Wide Web, 4th Edition To me, this describes front-end development. Done properly, there is an art to creating robust, accessible, usable, and findable spaces that delight all our users. For example, at 2015’s State Of The Browser conference, Edd Sowden talked about the accessibility of tables. He discovered that by simply not using the semantically-correct
th element to mark up headings, in some situations browsers will decide that a table
    is being used for layout and essentially make it invisible to assistive technology. Another example of how coding practices can affect the usability and findability of content is shown by Léonie Watson in her How ARIA landmark roles help screen reader users video. By using ARIA landmark roles, people who use screen readers are quickly able to identify and jump to common parts of a web page. Our definitions of information architects and information architecture mention patterns, rules, organisation, labelling, structure, and relationships. There are numerous different models for how these elements get boiled down to their fundamentals. In his Understanding Context book, Andrew Hinton calls them Labels, Relationships, and Rules; Jorge Arango calls them Links, Nodes, And Order; and Dan Klyn uses Ontology, Taxonomy, and Choreography, which is the one we’re going to use. Dan defines these terms as: Ontology The definition and articulation of the rules and patterns that govern the meaning of what we intend to communicate. What we mean when we say what we say. Taxonomy The arrangements of the parts. Developing systems and structures for what everything’s called, where everything’s sorted, and the relationships between labels and categories Choreography Rules for interaction among the parts. The structures it creates foster specific types of movement and interaction; anticipating the way users and information want to flow and making affordance for change over time. We now have definitions of an information architect, information architecture, and a model of the elements of information architecture. But is writing HTML really creating information or is it just wrangling data and metadata? When does data turn into information? In his book Managing For The Future Peter Drucker states: … data is not information. Information is data endowed with relevance and purpose. Managing For The Future If we use the correct semantic element to mark up content then we’re developing with purpose and creating relevance. For example, if we follow the advice of the HTML 5.1 specification and mark up headings using heading rank instead of the outline algorithm, we’re creating a structure where the depth of one heading is relevant to the previous one. Architected correctly, an
h2 element should be relevant to its parent, which should be the h1. By following the HTML specification we can create a structured, searchable, labeled document that will hopefully be relevant to what our users need to be successful. If you’ve never used a screen reader, you might be wondering how the headings on a page are searchable. Screen readers give users the ability to interact with headings in a couple of ways:
by creating a list of headings so users can quickly scan the page for information
by using a keyboard command to cycle through one heading at a time
If we had a document for Christmas Day TV we might structure it something like this:

<h1>Christmas Day TV schedule</h1>
<h2>BBC1</h2>
<h3>Morning</h3>
<h3>Evening</h3>
<h2>BBC2</h2>
<h3>Morning</h3>
<h3>Evening</h3>
<h2>ITV</h2>
<h3>Morning</h3>
<h3>Evening</h3>
<h2>Channel 4</h2>
<h3>Morning</h3>
<h3>Evening</h3>

If I use VoiceOver to generate a list of headings, I get this: Once I have that list I can use keyboard commands to filter the list based on the heading level. For example, I can press 2 to hear just the h2s: If we hadn’t used headings, or if we’d nested them incorrectly, our users would be frustrated. Putting this together Let’s put this together with an example of a button that, when pressed, toggles the appearance of a panel of links. There are numerous ways we could create a button on a web page, but the best way is to just use a button element. There’s quite a bit going on here. We’re using the: aria-controls attribute to architect a connection between the