{"rowid": 264, "title": "Dynamic Social Sharing Images", "contents": "Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you\u2019d written a blog post and you wanted to share it with those who follow you, you could post a link and your followers would see it in their streams. Oh heady days! \nWith so many social channels and a proliferation of content and promotions flying past in everyone\u2019s streams, it\u2019s no longer enough to share content on social media, you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader\u2019s attention if you\u2019re trying to get as many eyes as possible on that sweet, sweet social content.\nOne of the best ways to grab attention with your posts or tweets is to include an image. There\u2019s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures from anything from 35% to 150% improvement from just having image in a post. Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much.\nSo without hard stats to quote, we\u2019ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them.\nAdding sharing images\nThe process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified.\n\nThere\u2019s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing.\nThis is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don\u2019t necessarily have an image? One approach is to use stock photography, but that\u2019s not going to be right for every situation.\nThis was something we faced with 24 ways in 2017. We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don\u2019t usually lend themselves directly to being the main \u2018hero\u2019 image for an article.\nPutting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article.\nOne of the hand-made sharing images from 2017\nEach day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this.\nHatching a new plan\nOne initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. 
Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility.\nThe more I thought about this and how much I wish graphic design tools worked just a little bit more like CSS, the obvious solution hit me. We should just build it with CSS!\nIn fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn\u2019t this just be another page on the site generated by the CMS?\nBreaking it down, I figured the steps needed would be something like:\n\nCreate the CSS to lay out a component that could be turned into an image\nAdd a new field to articles in the CMS to hold a handpicked quote\nBuild a new article template in the CMS to output the author name and quote dynamically for any article\n\u2026 um \u2026 screenshot?\n\nI thought I\u2019d get cracking and see if I could figure out the final steps later.\nBuilding the page\nThe first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. Everyone should have a Paul.\nPaul\u2019s code uses a fixed dimension container in the HTML, set to 600 \u00d7 315px. This is to make it the correct aspect ratio for Facebook\u2019s recommended image size. It\u2019s useful to remember here that it doesn\u2019t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot at a fixed size in a known browser.\nWith the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. It only included the author details and the quote, along with Paul\u2019s markup.\nI also added the quote as a new field on the article in the CMS, so each \u2018image\u2019 could be quickly and easily customised in the editing process.\nI added a new field to the article template to capture the quote.\nWith very little effort, we quickly had a page to dynamically generate our \u2018image\u2019 right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article.\nOur automatically generated layout direct from the CMS\nIt soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one? An obvious route is to screenshot the page by hand, but that\u2019s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought\u2026 how could I automatically take a screenshot?\nEnter Puppeteer\nPuppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It\u2019s just a version of the Chrome browser that runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface.\nHeadless Chrome can be used for all sorts of things such as running automated tests on front-end code after making changes, or\u2026 get this\u2026 rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. 
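As an aside, recent versions of Chrome can take one-off screenshots directly from the command line, assuming you have a chrome binary available on your path:\nchrome --headless --disable-gpu --screenshot --window-size=600,315 https://example.com\nThat writes a screenshot.png to the current directory, which is handy for quick experiments, but scripting the browser gives us far more control. 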
Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node.\nUsing Puppeteer, I can write a small script that will repeatably turn a URL into an image. So simple is it to do this that it\u2019s actually Puppeteer\u2019s \u2018hello world\u2019 example.\nFirst you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependency, so you don\u2019t need to worry about installing that. At the command line:\nnpm i puppeteer\nThen save a new file as example.js with this code:\nconst puppeteer = require('puppeteer');\n\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto('https://example.com');\n await page.screenshot({path: 'example.png'});\n await browser.close();\n})();\nand then run it using Node:\nnode example.js\nThis will output an image file example.png to disk, which contains a screenshot of, in this case, https://example.com. The logic of the code is reasonably easy to follow:\n\nLaunch a browser\nOpen up a new page\nGo to a URL\nTake a screenshot\nClose the browser\n\nThe async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That\u2019s useful with actions like loading a web page that might take some time to complete. They\u2019re used with Promises, and the effect is to make asynchronous code behave as if it\u2019s synchronous. You can read more about async and await at MDN if you\u2019re interested.\nThat\u2019s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! But what happens if I put the URL of my new special page in there?\nOur content is up in the corner of the image with lots of empty space.\nThat\u2019s not great. It\u2019s okay, but not great. It looks like, by default, Puppeteer takes a screenshot with a resolution of 800 \u00d7 600, so we need to find out how to adjust that. Fortunately, the docs aren\u2019t the worst and I was able to find the page.setViewport() method pretty easily.\nconst puppeteer = require('puppeteer');\n\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing');\n await page.setViewport({\n width: 600,\n height: 315\n });\n await page.screenshot({path: 'example.png'});\n await browser.close();\n})();\nThis worked! The screenshot is now 600 \u00d7 315 as expected. That\u2019s exactly what we asked for. Trouble is, that\u2019s a bit low res and it is nearly 2019 after all. While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels.\n await page.setViewport({\n width: 600,\n height: 315,\n deviceScaleFactor: 2\n });\nPerfect! We now have a programmatic way of turning a URL into an image.\nImproving the script\nRather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. 
That way we can run it manually, or hook it into part of our site\u2019s build process to automate the generation of the image.\nOur goal is to call the script like this:\nnode shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/\nAnd I want the image to come out with the name clip-paths-know-no-bounds.png. To do that, I need to have my script look for command arguments, and then to split the URL up to grab the slug from it.\n// Get the URL and the slug segment from it\nconst url = process.argv[2];\nconst segments = url.split('/');\n// Get the second-to-last segment (the slug)\nconst slug = segments[segments.length-2];\nWe can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page.\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto(url + 'sharing');\n await page.setViewport({\n width: 600,\n height: 315,\n deviceScaleFactor: 2\n });\n await page.screenshot({path: slug + '.png'});\n await browser.close();\n})();\nOnce you\u2019re generating the image with Node, there\u2019s all sorts of things you can do with it. An obvious step is to move it to the correct location within your site or project.\nYou can also run optimisations on the file. I\u2019m using imagemin with pngquant to reduce the file size a little.\nconst imagemin = require('imagemin');\nconst imageminPngquant = require('imagemin-pngquant');\n\nawait imagemin([slug + '.png'], 'build', {\n plugins: [\n imageminPngquant({quality: '75-90'})\n ]\n});\n\nYou can see the completed example as a gist.\nIntegrating it with your CMS\nSo we now have a command we can run to take a URL and generate a custom image for that URL. It\u2019s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. Exactly how you do that is going to depend on the way your site is built and the technology stack you\u2019re using, but it\u2019s likely not too hard as long as you can run a command as part of the process.\nFor 24 ways this year, I\u2019ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I\u2019ll look to automate this one step further.\nWe may also look at having a few slightly different layouts to choose from, so that each day isn\u2019t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there\u2019s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities!\n\nBy using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we\u2019ve been able to reliably produce dynamic social media sharing images for all of our articles. In doing so, we reduced the dependency on any individual for producing those images, and opened up a world of possibilities in how we use those images.\nI hope you\u2019ll give it a try!", "year": "2018", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2018-12-24T00:00:00+00:00", "url": "https://24ways.org/2018/dynamic-social-sharing-images/", "topic": "code"}
{"rowid": 263, "title": "Securing Your Site like It\u2019s 1999", "contents": "Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities.\nAs the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned.\nWhat follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate.\nBad input validation: Trusting anything the user sends you\nOur story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web.\nOne such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong.\nOne day, a player discovered a hidden field in the forum\u2019s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum\u2019s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player\u2019s profile.\nNeedless to say, this idyllic online community descended into chaos. Users changed each other\u2019s passwords, deleted each other\u2019s messages, and attacked each-other under the cover of complete anonymity. What happened?\nThere aren\u2019t any official rules for developing software on the web. But if there were, my golden rule would be:\nNever trust user input. Ever.\nAlways ask yourself how users will send you data that isn\u2019t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe.\nMake sure you validate user input to make sure it\u2019s of the correct type (e.g. string, number, JSON string) and that it\u2019s the length that you were expecting. Don\u2019t forget that user input doesn\u2019t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML.\nMake sure to check a user\u2019s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and to whose data they are allowed access to. For example, a newly-registered user should not be allowed to change the user profile of a web forum\u2019s owner.\nFinally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. 
Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world.\nSQL injection: Allowing the user to run their own database queries\nA long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I\u2019d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day.\nOne day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn\u2019t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned.\nIt turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it?\n' OR '1'='1\nSQL is a computer language that is used to query databases. When you fill out a login form, just like the one above, your username and your password are usually inserted into an SQL query like this:\n\nSELECT COUNT(*)\nFROM USERS\nWHERE USERNAME='Alice'\nAND PASSWORD='hunter2'\nThis query selects users from the database that match the username Alice and the password hunter2. If there is at least one matching user record, the user will be granted access. Let\u2019s see what happens when we use our magic password instead!\n\nSELECT COUNT(*)\nFROM USERS\nWHERE USERNAME='Admin'\nAND PASSWORD='' OR '1'='1'\nDoes the password look like part of the query to you? That\u2019s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum!\nSo how can you protect yourself from SQL injection?\nNever build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.js has the knex package. Alternatively, you can use an ORM tool, such as Propel or sequelize.\nExpert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can!\nCross site request forgery: Getting other users to do your dirty work for you\nDo you remember Netflix? Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge.\nHave you ever clicked on a hyperlink, only to find something that you weren\u2019t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky\u2026\nLet\u2019s just say there are older and fouler things than Rick Astley in the dark places of the web.\nWhat if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it:\n<img src=\"http://www.netflix.com/AddToQueue?movieid=...\" />\nDid you notice the source URL of the image? 
It\u2019s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user\u2019s name and shipping address, and you could ship yourself DVDs completely free of charge!\nThis attack is possible when websites unconditionally trust a user\u2019s session cookies without checking where HTTP requests come from.\nThe first check you can make is to verify that a request\u2019s origin and referer headers match the location of the website. These headers can\u2019t be programmatically set.\nAnother check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren\u2019t stored in cookies.\nYou can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers.\nCross site scripting: Someone else\u2019s code running on your website\nIn 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends.\nSamy enjoyed using MySpace which, at the time, was the world\u2019s largest social network. Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn\u2019t have to wade through photos of avocado toast back then\u2026\nSamy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject <div> and <img>
tags into his headline, but <script> was filtered. MySpace wasn\u2019t about to let someone else run their own code on MySpace.\nIntrigued, Samy set about finding out exactly what he could do with <div> and <img> tags. He found that you could add style properties to <div> tags to style them with CSS.\n<div style=\"background:url('javascript:alert(1)')\">
\nThis code only worked in Internet Explorer and in some versions of Safari, but that was plenty of people to befriend. However, MySpace was prepared for this: they also filtered the word javascript from <div> tags.\n<div style=\"background:url('java\nscript:alert(1)')\">
\nSamy discovered that by inserting a line break into his code, MySpace would not filter out the word javascript. The browser would continue to run the code just fine! Samy had now broken past MySpace\u2019s first line of defence and was able to start running code on his profile page. Now he started looking at what he could do with that code.\nalert(document.body.innerHTML)\nSamy wondered if he could inspect the page\u2019s source to find the details of other MySpace users to befriend. To do this, you would normally use document.body.innerHTML, but MySpace had filtered this too.\nalert(eval('document.body.inne' + 'rHTML'))\nThis isn\u2019t a problem if you build up JavaScript code inside a string and execute it using the eval() function. This trick also worked with XMLHttpRequest.onreadystatechange, which allowed Samy to send friend requests to the MySpace API and install the JavaScript code on his new friends\u2019 pages.\nOne final obstacle stood in his way. The same origin policy is a security mechanism that prevents scripts hosted on one domain interacting with sites hosted on another domain.\nif (location.hostname == 'profile.myspace.com') {\n document.location = 'http://www.myspace.com'\n + location.pathname + location.search\n}\nSamy discovered that only the http://www.myspace.com domain would accept his API requests, and requests from http://profile.myspace.com were being blocked by the browser\u2019s same-origin policy. By redirecting the browser to http://www.myspace.com, he discovered that he could load profile pages and successfully make requests to MySpace\u2019s API. Samy installed this code on his profile page, and he waited.\n\nOver the course of the next day, over a million people unwittingly installed Samy\u2019s code into their MySpace profile pages and invited their friends. The load of friend requests on MySpace was so large that the site buckled and shut down. It took them two hours to remove Samy\u2019s code and patch the security holes he exploited. Samy was raided by the United States Secret Service and sentenced to do 90 days of community service.\nThis is the power of installing a little bit of JavaScript on someone else\u2019s website. It is called cross site scripting, and its effects can be devastating. It is suspected that cross-site scripting was to blame for the 2018 British Airways breach that leaked the credit card details of 380,000 people.\nSo how can you help protect yourself from cross-site scripting?\nAlways sanitise user input when it comes in, using a library such as sanitize-html. Open source tools like this benefit from hundreds of hours of work from dozens of experienced contributors. Don\u2019t be tempted to roll your own protection. MySpace was prepared, but they were not prepared enough. It makes no sense to turn this kind of help down.\nYou can also use an auto-escaping templating language to make sure nobody else\u2019s HTML can get into your pages. Both Angular and React will do this for you, and they are extremely convenient to use.\nYou should also implement a content security policy to restrict the domains that content like scripts and stylesheets can be loaded from. Loading content from sites not under your control is a significant security risk, and you should use a CSP to lock this down to only the sources you trust. CSP can also block the use of the eval() function.\nFor content not under your control, consider setting up sub-resource integrity protection. 
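In markup, that looks something like the following, with the real base64-encoded hash of the file\u2019s contents in place of the ellipsis:\n<script src=\"https://example.com/framework.js\" integrity=\"sha384-...\" crossorigin=\"anonymous\"></script>\n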
This allows you to add hashes to stylesheets and scripts you include on your website. Hashes are like fingerprints for digital files; if the content changes, so does the fingerprint. Adding hashes will allow your browser to keep your site safe if the content changes without you knowing.\nnpm audit: Protecting yourself from code you don\u2019t own\nJavaScript and npm run the modern web. Together, they make it easy to take advantage of the world\u2019s largest public registry of open source software. How do you protect yourself from code written by someone you\u2019ve never met? Enter npm audit.\nnpm audit reviews the security of your website\u2019s dependency tree. You can start using it by upgrading to the latest version of npm:\nnpm install npm -g\nnpm audit\nWhen you run npm audit, npm submits a description of your dependencies to the Registry, which returns a report of known vulnerabilities for the packages you have installed.\n\nIf your website has a known cross-site scripting vulnerability, npm audit will tell you about it. What\u2019s more, if the vulnerability has been patched, running npm audit fix will automatically install the patched package for you!\nSecuring your site like it\u2019s 2019\nThe truth is that since the early days of the web, the stakes of a security breach have become much, much higher. The web is so much more than fandom and mailing DVDs - online banking is now mainstream, social media and dating websites store intimate information about our personal lives, and we are even inviting the internet into our homes.\nHowever, we have powerful new allies helping us stay safe. There are more resources than ever before to teach us how to write secure code. Tools like Angular and React are designed with security features baked-in from the start. We have a new generation of security tools like npm audit to watch over our dependencies.\nAs we roll over into 2019, let\u2019s take the opportunity to reflect on the security of the code we write and be grateful for everything we\u2019ve learned in the last twenty years.", "year": "2018", "author": "Katie Fenn", "author_slug": "katiefenn", "published": "2018-12-01T00:00:00+00:00", "url": "https://24ways.org/2018/securing-your-site-like-its-1999/", "topic": "code"}
{"rowid": 262, "title": "Be the Villain", "contents": "Inclusive Design is the practice of making products and services accessible to, and usable by as many people as reasonably possible without the need for specialized accommodations. The practice was popularized by author and User Experience Design Director Kat Holmes. If getting you to discover her work is the only thing this article succeeds in doing then I\u2019ll consider it a success.\nAs a framework for creating resilient solutions to problems, Inclusive Design is incredible. However, the aimless idealistic aspirations many of its newer practitioners default to can oftentimes run into trouble. Without outlining concrete, actionable outcomes that are then vetted by the people you intend to serve, there is the potential to do more harm than good. \nWhen designing, you take a user flow and make sure it can\u2019t be broken. Ensuring that if something is removed, it can be restored. Or that something editable can also be updated at a later date\u2014you know, that kind of thing. What we want to do is avoid surprises. Much like a water slide with a section of pipe missing, a broken flow forcibly ejects a user, to great surprise and frustration. Interactions within a user flow also have to be small enough to be self-contained, so as to avoid creating a none pizza with left beef scenario.\nLately, I\u2019ve been thinking about how to expand on this practice. Watertight user flows make for a great immediate experience, but it\u2019s all too easy to miss the forest for the trees when you\u2019re a product designer focused on cranking out features. \nWhat I\u2019m concerned about is while to trying to envision how a user flow could be broken, you also think about how it could be subverted. In addition to preventing the removal of a section of water slide, you also keep someone from mugging the user when they shoot out the end.\nIf you pay attention, you\u2019ll start to notice this subversion with increasing frequency:\n\nDomestic abusers using internet-controlled devices to spy on and control their partner.\nZealots tanking a business\u2019 rating on Google because its owners spoke out against unchecked gun violence.\nForcing people to choose between TV or stalking because the messaging center portion of a cable provider\u2019s entertainment package lacks muting or blocking features.\nWhite supremacists tricking celebrities into endorsing anti-Semitic conspiracy theories.\nFacebook repeatedly allowing housing, credit, and employment advertisers to discriminate against users by their race, ability, and religion.\nWhite supremacists also using a video game chat service as a recruiting tool.\nThe unchecked harassment of minors on Instagram.\nSwatting.\n\nIf I were to guess why we haven\u2019t heard more about this problem, I\u2019d say that optimistically, people have settled out of court. Pessimistically, it\u2019s most likely because we ignore, dismiss, downplay, and suppress those who try to bring it to our attention. \nSubverted design isn\u2019t the practice of employing Dark Patterns to achieve your business goals. If you are not familiar with the term, Dark Patterns are the use of cheap user interface tricks and psychological manipulation to get users to act against their own best interests. 
User Experience consultant Chris Nodder wrote Evil By Design, a fantastic book that unpacks how to detect and think about them, if you\u2019re interested in this kind of thing.\nSubverted design also isn\u2019t beholden design, or simple lack of attention. This phenomenon isn\u2019t even necessarily premeditated. I think it arises from na\u00efve (or willfully ignorant) design decisions being executed at a historically unprecedented pace and scale. These decisions are then preyed on by the shrewd and opportunistic, used to control and inflict harm on the undeserving. Have system, will game.\nThis is worth discussing. As the field of design continues to industrialize empathy, it also continues to ignore the very established practice of threat modeling. Most times, framing user experience in terms of how to best funnel people into a service comes with an implicit agreement that the larger system that necessitates the service is worth supporting. \nTo achieve success in the eyes of their superiors, designers may turn to emotional empathy exercises. By projecting themselves into the perceived surface-level experiences of others, they play-act at understanding how to nudge their targeted demographics into a conversion funnel. This roleplaying exercise has the effect of scoping concerns to the immediate, while simultaneously reinforcing the idea of engagement at all costs within the identified demographic.\nThe thing is, pure engagement leaves the door wide open for bad actors. Even within the scope of a limited population, the assumption that everyone entering into the funnel is acting with good intentions is a poor one. Security researchers, network administrators, and other professionals who practice threat modeling understand that the opposite is true. By preventing everyone save for well-intentioned users from operating a system within the parameters you set for them, you intentionally limit the scope of abuse that can be enacted.\nDon\u2019t get me wrong: being able to escort as many users as you can to the happy path is a foundational skill. But we should also be having uncomfortable conversations about why something unthinkable may in fact not be. \nThey\u2019re not going to be fun conversations. It\u2019s not going to be easy convincing others that these aren\u2019t paranoid delusions best tucked out of sight in the darkest, dustiest corner of the backlog. Realistically, talking about it may even harm your career.\nBut consider the alternative. The controlled environment of the hypothetical allows us to explore these issues without propagating harm. Better to be viewed as the office\u2019s resident villain than to have to live with something like this:\n\nIf the past few years have taught us anything, it\u2019s that the choices we make\u2014or avoid making\u2014have consequences. Design has been doing a lot of growing up as of late, including waking up to the idea that technology isn\u2019t neutral. \nYou\u2019re going to have to start thinking the way a monster does\u2014if you can imagine it, chances are someone else can as well. To get into this kind of mindset, inverting the Inclusive Design Principles is a good place to start:\n\nProviding a comparable experience becomes forcing a single path.\nConsidering situation becomes ignoring circumstance.\nBeing consistent becomes acting capriciously.\nGiving control becomes removing autonomy. \nOffering choice becomes limiting options. \nPrioritizing content becomes obfuscating purpose.\nAdding value becomes filling with gibberish. 
\n\nCombined, these inverted principles start to paint a picture of something we\u2019re all familiar with: a half-baked, unscrupulous service that will jump at the chance to take advantage of you. This environment is also a perfect breeding ground for spawning bad actors.\nThese kinds of services limit you in the ways you can interact with them. They kick you out or lock you in if you don\u2019t meet their unnamed criteria. They force you to parse layout, prices, and policies that change without notification or justification. Their controls operate in ways that are unexpected and may shift throughout the experience. Their terms are dictated to you, gaslighting you to extract profit. Heaps of jargon and flashy, unnecessary features are showered on you to distract from larger structural and conceptual flaws.\nSo, how else can we go about preventing subverted design? Marli Mesibov, Content Strategist and Managing Editor of UX Booth, wrote a brilliant article about how to use Dark Patterns for good\u2014perhaps the most important takeaway being admitting you have a problem in the first place. \nAnother exercise is asking the question, \u201cWhat is the evil version of this feature?\u201d Ask it during the ideation phase. Ask it as part of acceptance criteria. Heck, ask it over lunch. I honestly don\u2019t care when, so long as the question is actually raised. \nIn keeping with the spirit of this article, we can also expand on this line of thinking. Author, scientist, feminist, and pacifist Ursula Franklin urges us to ask, \u201cWhose benefits? Whose risks?\u201d instead of \u201cWhat benefits? What risks?\u201d in her talk, When the Seven Deadly Sins Became the Seven Cardinal Virtues. Inspired by the talk, Ethan Marcotte discusses how this relates to the web platform in his powerful post, Seven into seven.\nFew things in this world are intrinsically altruistic or good\u2014it\u2019s just the nature of the beast. However, that doesn\u2019t mean we have to stand idly by when harm is created. If we can add terms like \u201canti-pattern\u201d to our professional vocabulary, we can certainly also incorporate phrases like \u201cabuser flow.\u201d \nDesign finally got a seat at the table. We should use this newfound privilege wisely. Listen to women. Listen to minorities, listen to immigrants, the unhoused, the less economically advantaged, and the less technologically-literate. Listen to the underrepresented and the underprivileged.\nSubverted design is a huge problem, likely one that will never completely go away. However, the more of us who put the hard work into being the villain, the more we can lessen the scope of its impact.", "year": "2018", "author": "Eric Bailey", "author_slug": "ericbailey", "published": "2018-12-06T00:00:00+00:00", "url": "https://24ways.org/2018/be-the-villain/", "topic": "ux"}
{"rowid": 261, "title": "Surviving\u2014and Thriving\u2014as a Remote Worker", "contents": "Remote work is hot right now. Many people even say that remote work is the future. Why should a company limit itself to hiring from a specific geographic location when there\u2019s an entire world of talent out there?\nI\u2019ve been working remotely, full-time, for five and a half years. I\u2019ve reached the point where I can\u2019t even fathom working in an office. The idea of having to wake up at a specific time and commute into an office, work for eight hours, and then commute home, feels weirdly anachronistic. I\u2019ve grown attached to my current level of freedom and flexibility.\nHowever, it took me a lot of trial and error to reach success as a remote worker \u2014 and sometimes even now, I slip up. Working remotely requires a great amount of discipline, independence, and communication. It can feel isolating, especially if you lean towards the more extroverted side of the social spectrum. Remote working isn\u2019t for everyone, but most people, with enough effort, can make it work \u2014 or even thrive. Here\u2019s what I\u2019ve learned in over five years of working remotely.\nExperiment with your environment\nAs a remote worker, you have almost unprecedented control of your environment. You can often control the specific desk and chair you use, how you accessorize your home office space \u2014 whether that\u2019s a dedicated office, a corner of your bedroom, or your kitchen table. (Ideally, not your couch\u2026 but I\u2019ve been there.) Hate fluorescent lights? Change your lightbulbs. Cover your work area in potted plants. Put up blackout curtains and work in the dark like a vampire. Whatever makes you feel most comfortable and productive, and doesn\u2019t completely destroy your eyesight.\nWorking remotely doesn\u2019t always mean working from home. If you don\u2019t have a specific reason you need to work from home (like specialized equipment), try working from other environments (which is especially helpful it you have roommates, or children). Cafes are the quintessential remote worker hotspot, but don\u2019t just limit yourself to your favorite local haunt. More cities worldwide are embracing co-working spaces, where you can rent either a roaming spot or a dedicated desk. If you\u2019re a social person, this is a great way to build community in your work environment. Most have phone rooms, so you can still take calls.\nCo-working spaces can be expensive, and not everyone has either the extra income, or work-provided stipend, to work from one. Local libraries are also a great work location. They\u2019re quiet, usually have free wi-fi, and you have the added bonus of being able to check out books after work instead of, ahem, spending too much money on Kindle books. (I know most libraries let you check out ebooks, but reader, I am impulsive and impatient person. When I want a book now, I mean now.) \nJust be polite \u2014 make sure your headphones don\u2019t leak, and don\u2019t work from a library if you have a day full of calls.\nRemember, too, that you don\u2019t have to stay in the same spot all day. It\u2019s okay to go out for lunch and then resume work from a different location. If you find yourself getting restless, take a walk. Wash some dishes while you mull through a problem. Don\u2019t force yourself to sit at your desk for eight hours if that doesn\u2019t work for you.\nSet boundaries\nIf you\u2019re a workaholic, working remotely can be a challenge. 
It\u2019s incredibly easy to just\u2026 work. All the time. My work computer is almost always with me. If I remember at 11pm that I wanted to do something, there\u2019s nothing but my own willpower keeping me from opening up my laptop and working until 2am. Some people are naturally disciplined. Some have discipline instilled in them as children. And then some, like me, are undisciplined disasters that realize as adults that wow, I guess it\u2019s time to figure this out, eh?\nLearning how to set boundaries is one of the most important lessons I\u2019ve learned working remotely. (And honestly, it\u2019s something I still struggle with.) \nFor a long time, I had a bad habit of waking up, checking my phone for new Slack messages, seeing something I need to react to, and then rolling over to my couch with my computer. Suddenly, it\u2019s noon, I\u2019m unwashed, unfed, starting to get a headache, and wondering why I suddenly hate all of my coworkers. Even when I finally tear myself from my computer to shower, get dressed, and eat, the damage is done. The rest of my day is pretty much shot.\nI recently had a conversation with a coworker, in which she remarked that she used to fill her empty time with work. Wake up? Scroll through Slack and email before getting out of bed. Waiting in line for lunch? Check work. Hanging out on her couch in the evening? You get the drift. She was only able to break the habit after taking a three month sabbatical, where she had no contact with work the entire time.\nI too had just returned from my own sabbatical. I took her advice, and no longer have work Slack on my phone, unless I need it for an event. After the event, I delete it. I also find it too easy to fill empty time with work. Now, I might wake up and procrastinate by scrolling through other apps, but I can\u2019t get sucked into work before I\u2019m even dressed. I\u2019ve gotten pretty good at forbidding myself from working until I\u2019m ready, but building any new habit requires intentionality. \nSomething else I experimented with for a while was creating a separate account on my computer for social tasks, so if I wanted to hang out on my computer in the evening, I wouldn\u2019t get distracted by work. It worked exceptionally well. The only problems I encountered were technical, like app licensing and some of my work proxy configurations. I\u2019ve heard other coworkers have figured out ways to work through these technical issues, so I\u2019m hoping to give it another try soon.\nYou might have noticed that a lot of these ideas are just hacks for making myself not work outside of my designated work times. It\u2019s true! If you\u2019re a more disciplined person, you might not need any of these coping mechanisms. If you\u2019re struggling, finding ways to subvert your own bad habits can be the difference between thriving or burning out.\nCreate intentional transition time\nI know it\u2019s a stereotype that people who work from home stay in their pajamas all day, but\u2026 sometimes, it\u2019s very easy to do. I\u2019ve found that in order to reach peak focus, I need to create intentional transition time. \nThe most obvious step is changing into different clothing than I woke up in. Ideally, this means getting dressed in real human clothing. I might decide that it\u2019s cold and gross out and I want to work in joggers and a hoody all day, but first, I need to change out of my pajamas, put on a bra, and then succumb to the lure of comfort. 
\nI\u2019ve found it helpful to take similar steps at the end of my day. If I\u2019ve spent the day working from home, I try to end my day with something that occupies my body, while letting my mind unwind. Often, this is doing some light cleaning or dinner prep. If I try to go straight into another mentally heavy task without allowing myself this transition time, I find it hard to context switch. \nThis is another reason working from outside your home is advantageous. Commutes, even if it\u2019s a ten minute walk down the road, are great transition time. Lunch is a great transition time. You can decompress between tasks by going out for lunch, or cooking and eating lunch in your kitchen \u2014 not next to your computer. \nEmbrace async\nIf you\u2019re used to working in an office, you\u2019ve probably gotten pretty used to being able to pop over to a colleague\u2019s desk if you need to ask a question. They\u2019re pretty much forced to engage with you at that point. When you\u2019re working remotely, your coworkers might not be in the same timezone as you. They might take an hour to finish up a task before responding to you, or you might not get an answer for your entire day because dangit Gary\u2019s in Australia and it\u2019s 3am there right now. \nFor many remote workers, that\u2019s part of the package. When you\u2019re not co-located, you have to build up some patience and tolerance around waiting. You need to intentionally plan extra time into your schedule for waiting on answers.\nAsynchronous communication is great. Not everyone can be present for every meeting or office conversation \u2014 and the same goes for working remotely. However, when you\u2019re remote, you can read through your intranet messages later or scroll back a couple hours in Slack. My company has a bunch of internal blogs (\u201cp2s\u201d) where we record major decisions and hold asynchronous conversations. I feel like even if I missed a meeting, or something big happened while I was asleep, I can catch up later. We have a phrase \u2014 \u201cp2 or it didn\u2019t happen.\u201d\nWorking remotely has made me a better communicator largely because I\u2019ve gotten into the habit of making written updates. I\u2019ve also trained myself to wait before responding, which allows me to distance myself from what could potentially be an emotional reaction. (On the internet, no one can see you making that face.) Having the added space that comes from not being in the same physical location with somebody else creates an opportunity to rein myself in and take the time to craft an appropriate response, without having the pressure of needing to reply right meow. Lean into it!\n(That said, if you\u2019re stuck, sometimes the best course of action is to hop on a video call with someone and hash out the details. Use the tools most appropriate for the problem. They invented Zoom for a reason.)\nSeek out social opportunities\nEven introverts can feel lonely or isolated. When you work remotely, there isn\u2019t a built-in community you\u2019re surrounded by every day. You have to intentionally seek out social opportunities that an office would normally provide.\nI have a couple private Slack channels where I can joke around with work friends. Having that kind of safe space to socialize helps me feel less alone. (And, if the channels get too noisy, I can mute them for a couple hours.)\nEvery now and then, I\u2019ll also hop on a video call with some work friends and just hang out for a little while. 
It feels great to actually see someone laugh.\nIf you work from a co-working space, that space likely has events. My co-working space hosts social hours, holiday parties, and sometimes even lunch-and-learns. These events are great opportunities for making new friends and forging professional connections outside of work. \nIf you don\u2019t have access to a co-working space, your town or city likely has meetups. Create a Meetup.com account and search for something that piques your interest. If you\u2019ve been stuck inside your house for days, heads-down on a hard deadline, celebrate by getting out of the house. Get coffee or drinks with friends. See a show. Go to a religious service. Take a cooking class. Try yoga. Find excuses to be around someone other than your cats. When you can\u2019t fall back on your work to provide community, you need to build your own.\n\nThese are tips that I\u2019ve found help me, but not everyone works the same way. Remember that it\u2019s okay to experiment \u2014 just because you\u2019ve worked one way, doesn\u2019t mean that\u2019s the best way for you. Check in with yourself every now and then. Are you happy with your work environment? Are you feeling lonely, down, or exhausted? Try switching up your routine for a couple weeks and jot down how you feel at the end of each day. Look for patterns. You deserve to have a comfortable and productive work environment!\nHope to see you all online soon \ud83d\ude4c", "year": "2018", "author": "Mel Choyce", "author_slug": "melchoyce", "published": "2018-12-09T00:00:00+00:00", "url": "https://24ways.org/2018/thriving-as-a-remote-worker/", "topic": "process"}
{"rowid": 260, "title": "The Art of Mathematics: A Mandala Maker Tutorial", "contents": "In front-end development, there\u2019s often a great deal of focus on tools that aim to make our work more efficient. But what if you\u2019re new to web development? When you\u2019re just starting out, the amount of new material can be overwhelming, particularly if you don\u2019t have a solid background in Computer Science. But the truth is, once you\u2019ve learned a little bit of JavaScript, you can already make some pretty impressive things.\nA couple of years back, when I was learning to code, I started working on a side project. I wanted to make something colorful and fun to share with my friends. This is what my app looks like these days:\nMandala Maker user interface\nThe coolest part about it is the fact that it\u2019s a tool: anyone can use it to create something original and brand new. \nIn this tutorial, we\u2019ll build a smaller version of this app \u2013 a symmetrical drawing tool in ES5, JavaScript and HTML5. The tutorial app will have eight reflections, a color picker and a Clear button. Once we\u2019re done, you\u2019re on your own and can tweak it as you please. Be creative!\nPreparations: a blank canvas\nThe first thing you\u2019ll need for this project is a designated drawing space. We\u2019ll use the HTML5 canvas element and give it a width and a height of 600px (you can set the dimensions to anything else if you like).\nFiles\nCreate 3 files: index.html, styles.css, main.js. Don\u2019t forget to include your JS and CSS files in your HTML. \n\n\n\n \n \n \n\n\n \n\n\nI\u2019ll ask you to update your HTML file at a later point, but the CSS file we\u2019ll start with will stay the same throughout the project. This is the full CSS we are going to use:\nbody {\n background-color: #ccc;\n text-align: center;\n}\n\ncanvas {\n touch-action: none;\n background-color: #fff;\n}\n\nbutton {\n font-size: 110%;\n}\nNext steps\nWe are done with our preparations and ready to move on to the actual tutorial, which is made up of 4 parts:\n\nBuilding a simple drawing app with one line and one color \nAdding a Clear button and a color picker\nAdding more functionality: 2 line drawing (add the first reflection)\nAdding more functionality: 8 line drawing (add 6 more reflections!)\n\nInteractive demos\nThis tutorial will be accompanied by four CodePens, one at the end of each section. In my own app I originally used mouse events, and only added touch events when I realized mobile device support was (A) possible, and (B) going to make my app way more accessible. For the sake of code simplicity, I decided that in this tutorial app I will only use one event type, so I picked a third option: pointer events. These are supported by some desktop browsers and some mobile browsers. An up-to-date version of Chrome is probably your best bet.\nPart 1: A simple drawing app\nLet\u2019s get started with our main.js file. Our basic drawing app will be made up of 6 functions: init, drawLine, stopDrawing, recordPointerLocation, handlePointerMove, handlePointerDown. It also has nine variables:\nvar canvas, context, w, h,\n prevX = 0, currX = 0, prevY = 0, currY = 0,\n draw = false;\nThe variables canvas and context let us manipulate the canvas. w is the canvas width and h is the canvas height. The four coordinates are used for tracking the current and previous location of the pointer. A short line is drawn between (prevX, prevY) and (currX, currY) repeatedly many times while we move the pointer upon the canvas. 
For your drawing to appear, three conditions must be met: the pointer (be it a finger, a trackpad or a mouse) must be down, it must be moving and the movement has to be on the canvas. If these three conditions are met, the boolean draw is set to true. \n1. init\nResponsible for canvas set up, this listens to pointer events and the location of their coordinates and sets everything in motion by calling other functions, which in turn handle touch and movement events. \nfunction init() {\n canvas = document.querySelector(\"canvas\");\n context = canvas.getContext(\"2d\");\n w = canvas.width;\n h = canvas.height;\n\n canvas.onpointermove = handlePointerMove;\n canvas.onpointerdown = handlePointerDown;\n canvas.onpointerup = stopDrawing;\n canvas.onpointerout = stopDrawing;\n}\n2. drawLine\nThis is called to action by handlePointerMove() and draws the pointer path. It only runs if draw = true. It uses canvas methods you can read about in the canvas API documentation. You can also learn to use the canvas element in this tutorial.\nlineWidth and linecap set the properties of our paint brush, or digital pen, but pay attention to beginPath and closePath. Between those two is where the magic happens: moveTo and lineTo take canvas coordinates as arguments and draw from (a,b) to (c,d), which is to say from (prevX,prevY) to (currX,currY).\nfunction drawLine() {\n var a = prevX,\n b = prevY,\n c = currX,\n d = currY;\n\n context.lineWidth = 4;\n context.lineCap = \"round\";\n\n context.beginPath();\n context.moveTo(a, b);\n context.lineTo(c, d);\n context.stroke();\n context.closePath();\n}\n3. stopDrawing\nThis is used by init when the pointer is not down (onpointerup) or is out of bounds (onpointerout).\nfunction stopDrawing() {\n draw = false;\n}\n4. recordPointerLocation\nThis tracks the pointer\u2019s location and stores its coordinates. Also, you need to know that in computer graphics the origin of the coordinate space (0,0) is at the top left corner, and all elements are positioned relative to it. When we use canvas we are dealing with two coordinate spaces: the browser window and the canvas itself. This function converts between the two: it subtracts the canvas offsetLeft and offsetTop so we can later treat the canvas as the only coordinate space. If you are confused, read more about it.\nfunction recordPointerLocation(e) {\n prevX = currX;\n prevY = currY;\n currX = e.clientX - canvas.offsetLeft;\n currY = e.clientY - canvas.offsetTop;\n}\n5. handlePointerMove\nThis is set by init to run when the pointer moves. It checks if draw = true. If so, it calls recordPointerLocation to get the path and drawLine to draw it.\nfunction handlePointerMove(e) {\n if (draw) {\n recordPointerLocation(e);\n drawLine();\n }\n}\n6. handlePointerDown\nThis is set by init to run when the pointer is down (finger is on touchscreen or mouse it clicked). If it is, calls recordPointerLocation to get the path and sets draw to true. That\u2019s because we only want movement events from handlePointerMove to cause drawing if the pointer is down.\nfunction handlePointerDown(e) {\n recordPointerLocation(e);\n draw = true;\n}\nFinally, we have a working drawing app. But that\u2019s just the beginning!\nSee the Pen Mandala Maker Tutorial: Part 1 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 2: Add a Clear button and a color picker\nNow we\u2019ll update our HTML file, adding a menu div with an input of the type and class color and a button of the class clear.\n\n \n
<div class=\"menu\">\n <input type=\"color\" class=\"color\">\n <button class=\"clear\">Clear</button>\n</div>\n
\nColor picker\nThis is our new color picker function. It targets the input element by its class and gets its value. \nfunction getColor() {\n return document.querySelector(\".color\").value;\n}\nUp until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We\u2019ll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor.\nfunction drawLine() {\n //...code... \n context.strokeStyle = getColor();\n context.lineWidth = 4;\n context.lineCap = \"round\";\n\n //...code... \n}\nClear button\nThis is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing.\nfunction clearCanvas() {\n if (confirm(\"Want to clear?\")) {\n context.clearRect(0, 0, w, h);\n }\n}\nThe method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner. \nIf we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left half of the canvas would be cleared.\nLet\u2019s add this event handler to init:\nfunction init() {\n //...code...\n canvas.onpointermove = handlePointerMove;\n canvas.onpointerdown = handlePointerDown;\n canvas.onpointerup = stopDrawing;\n canvas.onpointerout = stopDrawing;\n document.querySelector(\".clear\").onclick = clearCanvas;\n}\nSee the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 3: Draw with 2 lines\nIt\u2019s time to make a line appear where no pointer has gone before. A ghost line! \nFor that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order for us to be able to add the first reflection, first we must decide if it\u2019s going to go over the y-axis or the x-axis. Since this is an arbitrary decision, it doesn\u2019t matter which one we choose. Let\u2019s go with the x-axis. \nHere is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!). \nNow, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal.\nA sketch illustrating the mathematics of reflecting a point.\nWhat happens to the x coordinates?\nThe variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them \u201cthe x coordinates\u201d. We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c. \nWhat happens to the y coordinates?\nWhat about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope. This is computer graphics, remember? 
The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b' = h - b, d' = h - d, where h is the canvas height.\nThis is the new code for the app\u2019s variables and the two lines: the one that follows the pointer\u2019s path and the one mirroring it across the x-axis.\nfunction drawLine() {\n var a = prevX, a_ = a,\n b = prevY, b_ = h-b,\n c = currX, c_ = c,\n d = currY, d_ = h-d;\n\n //... code ...\n\n // Draw line #1, at the pointer's location\n context.moveTo(a, b);\n context.lineTo(c, d);\n\n // Draw line #2, mirroring line #1\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n //... code ...\n}\nIn case this was too abstract for you, let\u2019s look at some actual numbers to see how this works.\nLet\u2019s say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3.\nSo b' = 10 - 2 = 8 and d' = 10 - 3 = 7.\nWe use the top and the left as references. For the y coordinates this means we count from the top, and 8 from the top is also 2 from the bottom. Similarly, 7 from the top is 3 from the bottom of the canvas. That\u2019s it, really: this is how a single point is reflected, and a line (not necessarily a straight one, by the way) is made up of many, many small segments that behave just like points.\nIf you are still confused, I don\u2019t blame you. \nHere is the result. Draw something and see what happens.\nSee the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 4: Draw with 8 lines\nI have made yet another confusing sketch, with points C and D, so you understand what we\u2019re trying to do. Later on we\u2019ll look at points E, F, G and H as well. The circled point is the one we\u2019re adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas. \nA sketch illustrating points C and D.\nThis is the part where the math gets a bit mathier, as our drawLine function evolves further. We\u2019ll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let\u2019s add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different).\nfunction drawLine() {\n\n //... code ... \n\n // Reassign values\n a_ = w-a; b_ = b;\n c_ = w-c; d_ = d;\n\n // Draw the 3rd line\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n // Reassign values\n a_ = w-a; b_ = h-b;\n c_ = w-c; d_ = h-d;\n\n // Draw the 4th line\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n //... code ... \nWhat is happening?\nYou might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That\u2019s because we want the symmetry to hold for a rectangular canvas as well, and this way it will. \nAlso, you may have noticed that the values of a' and c' do not change when the fourth line is created. So why write their value assignments out twice? It\u2019s for readability, documentation and communication. Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous). 
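\nAs an aside: if all this reassignment starts to feel repetitive, the same logic can be expressed as data. The following sketch (my own refactor, not part of the tutorial code or the CodePens) stores each reflection as a small transform function and loops over them inside drawLine:\nvar reflections = [\n function (x, y) { return [x, y]; }, // line #1: the pointer's own path\n function (x, y) { return [x, h - y]; }, // line #2: reflected across the x-axis\n function (x, y) { return [w - x, y]; }, // line #3: reflected across the y-axis\n function (x, y) { return [w - x, h - y]; } // line #4: reflected across both axes\n];\n\nfunction drawLine() {\n context.strokeStyle = getColor();\n context.lineWidth = 4;\n context.lineCap = \"round\";\n\n context.beginPath();\n for (var i = 0; i < reflections.length; i++) {\n // Transform both ends of the micro-line, then draw it\n var from = reflections[i](prevX, prevY);\n var to = reflections[i](currX, currY);\n context.moveTo(from[0], from[1]);\n context.lineTo(to[0], to[1]);\n }\n context.stroke();\n context.closePath();\n}\nThe four diagonal reflections we\u2019re about to meet would then simply be four more entries in the array. 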
\nWhat happens to the x coordinates?\nAs you recall, our x coordinates are a (prevX) and c (currX).\nFor the third line we are adding, a' = w - a and c' = w - c, which means the x coordinates are flipped to the opposite side of the canvas, making this a reflection across the y-axis.\nFor the fourth line, the same thing happens to our x coordinates a and c.\nWhat happens to the y coordinates?\nAs you recall, our y coordinates are b (prevY) and d (currY).\nFor the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time.\nFor the fourth line, b' = h - b and d' = h - d, which we\u2019ve seen before: that\u2019s a reflection across the x-axis.\nWe have four more lines, or locations, to define. Note: the part of the code that\u2019s responsible for drawing a micro-line between the newly calculated coordinates is always the same:\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\nWe can leave it out of the next code snippets and just focus on the calculations, i.e., the reassignments. \nOnce again, we need some concrete examples to see where we\u2019re going, so here\u2019s another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to our code.\nA sketch illustrating points E and F.\nThis is the code for E and F:\n // Reassign for 5\n a_ = w/2+h/2-b; b_ = w/2+h/2-a;\n c_ = w/2+h/2-d; d_ = w/2+h/2-c;\n\n // Reassign for 6\n a_ = w/2+h/2-b; b_ = h/2-w/2+a;\n c_ = w/2+h/2-d; d_ = h/2-w/2+c;\nTheir x coordinates are identical and their y coordinates are reversed to one another.\nThis will be our final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3).\nA sketch illustrating points G and H.\nThis is the code:\n // Reassign for 7\n a_ = w/2-h/2+b; b_ = w/2+h/2-a;\n c_ = w/2-h/2+d; d_ = w/2+h/2-c;\n\n // Reassign for 8\n a_ = w/2-h/2+b; b_ = h/2-w/2+a;\n c_ = w/2-h/2+d; d_ = h/2-w/2+c;\n //...code... \n}\nOnce again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won\u2019t go into the full details, since this has been a long enough journey as it is, and I think we\u2019ve covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them.\nI hope you had fun learning! This is our final app:\nSee the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen.", "year": "2018", "author": "Hagar Shilo", "author_slug": "hagarshilo", "published": "2018-12-02T00:00:00+00:00", "url": "https://24ways.org/2018/the-art-of-mathematics/", "topic": "code"}
{"rowid": 259, "title": "Designing Your Future", "contents": "I\u2019ve had the pleasure of working for a variety of clients \u2013 both large and small \u2013 over the last 25 years. In addition to my work as a design consultant, I\u2019ve worked as an educator, leading the Interaction Design team at Belfast School of Art, for the last 15 years.\nIn July, 2018 \u2013 frustrated with formal education, not least the ever-present hand of \u2018austerity\u2019 that has ravaged universities in the UK for almost a decade \u2013 I formally reduced my teaching commitment, moving from a full-time role to a half-time role.\nMaking the move from a (healthy!) monthly salary towards a position as a freelance consultant is not without its challenges: one month your salary\u2019s arriving in your bank account (and promptly disappearing to pay all of your bills); the next month, that salary\u2019s been drastically reduced. That can be a shock to the system.\nIn this article, I\u2019ll explore the challenges encountered when taking a life-changing leap of faith. To help you confront \u2018the fear\u2019 \u2013 the nervousness, the sleepless nights and the ever-present worry about paying the bills \u2013 I\u2019ll provide a set of tools that will enable you to take a leap of faith and pursue what deep down drives you.\nIn short: I\u2019ll bare my soul and share everything I\u2019m currently working on to \u2013 once and for all \u2013 make a final bid for freedom.\nThis isn\u2019t easy. I\u2019m sharing my innermost hopes and aspirations, and I might open myself up to ridicule, but I believe that by doing so, I might help others, by providing them with tools to help them make their own leap of faith.\nThe power of visualisation\nAs designers we have skills that we use day in, day out to imagine future possibilities, which we then give form. In our day-to-day work, we use those abilities to design products and services, but I also believe we can use those skills to design something every bit as important: ourselves.\nIn this article I\u2019ll explore three tools that you can use to design your future:\n\nProduct DNA\nArtefacts From the Future\nTomorrow Clients\n\nEach of these tools is designed to help you visualise your future. By giving that future form, and providing a concrete goal to aim for, you put the pieces in place to make that future a reality.\nBrian Eno \u2013 the noted musician, producer and thinker \u2013 states, \u201cHumans are capable of a unique trick: creating realities by first imagining them, by experiencing them in their minds.\u201d Eno helpfully provides a powerful example:\n\nWhen Martin Luther King said, \u201cI have a dream,\u201d he was inviting others to dream that dream with him. Once a dream becomes shared in that way, current reality gets measured against it and then modified towards it.\nThe dream becomes an invisible force which pulls us forward. By this process it starts to come true. The act of imagining something makes it real.\n\nWhen you imagine your future \u2013 designing an alternate, imagined reality in your mind \u2013 you begin the process of making that future real.\nProduct DNA\nThe first tool, which I use regularly \u2013 for myself and for client work \u2013 is a tool called Product DNA. The intention of this tool is to identify beacons from which you can learn, helping you to visualise your future.\nWe all have heroes \u2013 individuals or organisations \u2013 that we look up to. 
Ask yourself, \u201cWho are your heroes?\u201d If you had to pick three, who would they be and what could you learn from them? (You probably have more than three, but distilling down to three is an exercise in itself.)\nEarlier this year, when I was putting the pieces in place for a change in career direction, I started with my heroes. I chose three individuals that inspired me:\n\nAlan Moore: the author of \u2018Do Design: Why Beauty is Key to Everything\u2019;\nMark Shayler: the founder of Ape, a strategic consultancy; and\nSeth Godin: a writer and educator I\u2019ve admired and followed for many years.\n\nLooking at each of these individuals, I \u2018borrowed\u2019 a little DNA from each of them. That DNA helped me to paint a picture of the kind of work I wanted to do and the direction I wanted to travel.\n\nMoore\u2019s book - \u2018Do Design\u2019 \u2013 had a powerful influence on me, but the primary inspiration I drew from him was the sense of gravitas he conveyed in his work. Moore\u2019s mission is an important one and he conveys that with an appropriate weight of expression.\nShayler\u2019s work appealed to me for its focus on equipping big businesses with a startup mindset. As he puts it: \u201cI believe that you can do the things that you do better.\u201d That sense \u2013 of helping others to be their best selves \u2013 appealed to me.\nFinally, the words Godin uses to describe himself \u2013 \u201cAn Author, Entrepreneur and Most of All, a Teacher\u201d \u2013 resonated with me. The way he positions himself, as, \u201cmost of all, a teacher,\u201d gave me the belief I needed that I could work as an educator, but beyond the ivory tower of academia.\nI\u2019ve been exploring each of these individuals in depth, learning from them and applying what I learn to my practice. They don\u2019t all know it, but they are all \u2018mentors from afar\u2019.\nIn a moment of serendipity \u2013 and largely, I believe, because I\u2019d used this tool to explore his work \u2013 I was recently invited by Alan Moore to help him develop a leadership programme built around his book.\nThe key lesson here is that not only has this exercise helped me to design my future and give it tangible form, it\u2019s also led to a fantastic opportunity to work with Alan Moore, a thinker who I respect greatly.\nArtefacts From the Future\nThe second tool, which I also use regularly, is a tool called \u2018Artefacts From the Future\u2019. These artefacts \u2013 especially when designed as \u2018finished\u2019 pieces \u2013 are useful for creating provocations to help you see the future more clearly.\n\u2018Artefacts From the Future\u2019 can take many forms: they might be imagined magazine articles, news items, or other manifestations of success. By imagining these end points and giving them form, you clarify your goals, establishing something concrete to aim for.\nEarlier this year I revisited this tool to create a provocation for myself. I\u2019d just finished Alla Kholmatova\u2019s excellent book on \u2018Design Systems\u2019, which I would recommend highly. 
The book wasn\u2019t just filled with valuable insights, it was also beautifully designed.\nOnce I\u2019d finished reading Kholmatova\u2019s book, I started thinking: \u201cPerhaps it\u2019s time for me to write a new book?\u201d Using the magic of \u2018Inspect Element\u2019, I created a fictitious page for a new book I wanted to write: \u2018Designing Delightful Experiences\u2019.\nI wrote a description for the book, considering how I\u2019d pitch it.\n\nThis imagined page was just what I needed to paint a picture in my mind of a possible new book. I contacted the team at Smashing Magazine and pitched the idea to them. I\u2019m happy to say that I\u2019m now working on that book, which is due to be published in 2019.\nWithout this fictional promotional page from the future, the book would have remained as an idea \u2013 loosely defined \u2013 rolling around my mind. By spending some time, turning that idea into something \u2018real\u2019, I had everything I needed to tell the story of the book, sharing it with the publishing team at Smashing Magazine.\nOf course, they could have politely informed me that they weren\u2019t interested, but I\u2019d have lost nothing \u2013 truly \u2013 in the process.\nAs designers, creating these imaginary \u2018Artefacts From the Future\u2019 is firmly within our grasp. All we need to do is let go a little and allow our imaginations to wander.\nIn my experience, working with clients and \u2013 to a lesser extent, students \u2013 it\u2019s the \u2018letting go\u2019 part that\u2019s the hard part. It can be difficult to let down your guard and share a weighty goal, but I\u2019d encourage you to do so. At the end of the day, you have nothing to lose.\nThe key lesson here is that your \u2018Artefacts From the Future\u2019 will focus your mind. They\u2019ll transform your unformed ideas into \u2018tangible evidence\u2019 of future possibilities, which you can use as discussion points and provocations, helping you to shape your future reality.\nTomorrow Clients\nThe third tool, which I developed more recently, is a tool called \u2018Tomorrow Clients\u2019. This tool is designed to help you identify a list of clients that you aspire to work with.\nThe goal is to pinpoint who you would like to work with \u2013 in an ideal world \u2013 and define how you\u2019d position yourself to win them over. Again, this involves \u2018letting go\u2019 and allowing your mind to imagine the possibilities, asking, \u201cWhat if\u2026?\u201d\nBefore I embarked upon the design of my new website, I put together a \u2018soul searching\u2019 document that acted as a focal point for my thinking. I contacted a number of designers for a second opinion to see if my thinking was sound.\nOne of my graduates \u2013 Chris Armstrong, the founder of Niice \u2013 replied with the following: \u201cMight it be useful to consider five to ten companies you\u2019d love to work for, and consider how you\u2019d pitch yourself to them?\u201d\nThis was just the provocation I needed. To add a little focus, I reduced the list to three, asking: \u201cWho would my top three clients be?\u201d\n\nBy distilling the list down I focused on who I\u2019d like to work for and how I\u2019d position myself to entice them to work with me. My list included: IDEO, Adobe and IBM. 
All are companies I admire and I believed each would be interesting to work for.\nThis exercise might \u2013 on the surface \u2013 appear a little like indulging in fantasy, but I believe it helps you to clarify exactly what it is you are good at and, just as importantly, put that into words.\nFor each company, I wrote a short pitch outlining why I admired them and what I thought I could add to their existing skillset.\nFocusing first on Adobe, I suggested establishing an emphasis on educational resources, designed to help those using Adobe\u2019s creative tools to get the most out of them.\nA few weeks ago, I signed a contract with the team working on Adobe XD to create a series of \u2018capsule courses\u2019, focused on UX design. The first of these courses \u2013 exploring UI design \u2013 will be out in 2019.\nI believe that Armstrong\u2019s provocation \u2013 asking me to shift my focus from clients I have worked for in the past to clients I aspire to work for in the future \u2013 made all the difference.\nThe key lesson here is that this exercise encouraged me to raise the bar and look to the future, not the past. In short, it enabled me to proactively design my future.\nIn closing\u2026\nI hope these three tools will prove a welcome addition to your toolset. I use them when working with clients, and I also use them when working with myself.\nI passionately believe that you can design your future. I also firmly believe that you\u2019re more likely to make that future a reality if you put some thought into defining what it looks like.\nAs I say to my students and the clients I work with: It\u2019s not enough to want to be a success, the word \u2018success\u2019 is too vague to be a destination. A far better approach is to define exactly what success looks like.\nThe secret is to visualise your future in as much detail as possible. With that future vision in hand as a map, you give yourself something tangible to translate into a reality.", "year": "2018", "author": "Christopher Murphy", "author_slug": "christophermurphy", "published": "2018-12-15T00:00:00+00:00", "url": "https://24ways.org/2018/designing-your-future/", "topic": "process"}
{"rowid": 258, "title": "Mistletoe Offline", "contents": "It\u2019s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season.\nI am, of course, talking about Murphy, and the golden rule he gave unto us:\n\nAnything that can go wrong will go wrong.\n\nSo true! I mean, that\u2019s why we make sure we\u2019ve got nice 404 pages. It\u2019s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it\u2019s bound to happen sometime. Murphy\u2019s Law, innit?\nBut there are some Murphyesque situations where even your lovingly crafted 404 page won\u2019t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things than can\u2014and will\u2014go wrong.\nI guess there\u2019s nothing we can do about those particular situations, right?\nWrong!\nA service worker is a Murphy-battling technology that you can inject into a visitor\u2019s device from your website. Once it\u2019s installed, it can intercept any requests made to your domain. If anything goes wrong with a request\u2014as is inevitable\u2014you can provide instructions for the browser. That\u2019s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade.\nIf you\u2019ve got a custom 404 page, why not make a custom offline page too?\nGet your server in order\nStep one is to make \u2026actually, wait. There\u2019s a step before that. Step zero. Get your site running on HTTPS, if it isn\u2019t already. You won\u2019t be able to use a service worker unless everything\u2019s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields.\nIf you\u2019re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must.\nMake an offline page\nAlright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). Here\u2019s an example of the custom offline page for this year\u2019s Ampersand conference.\nWhen you\u2019re done, publish the offline page at suitably imaginative URL, like, say /offline.html.\nPre-cache your offline page\nNow create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user\u2019s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener:\naddEventListener('install', installEvent => {\n// put your instructions here.\n}); // end addEventListener\nIn this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. 
Here, I\u2019m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code:\naddEventListener('install', installEvent => {\n installEvent.waitUntil(\n caches.open('Johnny')\n .then( JohnnyCache => {\n return JohnnyCache.addAll([\n '/offline.html'\n ]); // end addAll\n }) // end open.then\n ); // end waitUntil\n}); // end addEventListener\nI\u2019m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point:\naddEventListener('install', installEvent => {\n installEvent.waitUntil(\n caches.open('Johnny')\n .then( JohnnyCache => {\n return JohnnyCache.addAll([\n '/offline.html',\n '/path/to/stylesheet.css',\n '/path/to/javascript.js',\n '/path/to/image.jpg'\n ]); // end addAll\n }) // end open.then\n ); // end waitUntil\n}); // end addEventListener\nMake sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached.\nIntercept requests\nThe next event you want to listen for is the fetch event. This is probably the most powerful\u2014and, let\u2019s be honest, the creepiest\u2014feature of a service worker. Once it has been installed, the service worker lurks on the user\u2019s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. You can provide instructions for what should happen each time:\naddEventListener('fetch', fetchEvent => {\n// What happens next is up to you!\n}); // end addEventListener\nLet\u2019s write a fairly conservative script with the following logic:\n\nWhenever a file is requested,\nFirst, try to fetch it from the network,\nBut if that doesn\u2019t work, try to find it in the cache,\nBut if that doesn\u2019t work, and it\u2019s a request for a web page, show the custom offline page instead.\n\nHere\u2019s how that translates into JavaScript:\n// Whenever a file is requested\naddEventListener('fetch', fetchEvent => {\n const request = fetchEvent.request;\n fetchEvent.respondWith(\n // First, try to fetch it from the network\n fetch(request)\n .then( responseFromFetch => {\n return responseFromFetch;\n }) // end fetch.then\n // But if that doesn't work\n .catch( fetchError => {\n // try to find it in the cache\n return caches.match(request)\n .then( responseFromCache => {\n if (responseFromCache) {\n return responseFromCache;\n // But if that doesn't work\n } else {\n // and it's a request for a web page\n if (request.headers.get('Accept').includes('text/html')) {\n // show the custom offline page instead\n return caches.match('/offline.html');\n } // end if\n } // end if/else\n }) // end match.then\n }) // end fetch.catch\n ); // end respondWith\n}); // end addEventListener\nI am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what\u2019s happening at each point in the code, I\u2019ve written a whole book for you. It\u2019s the perfect present for Murphymas.\nHook up your service worker script\nYou can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. 
Put this in an existing JavaScript file that you\u2019re calling in to every page on your site, or add this in a script element at the end of every page\u2019s HTML:\nif (navigator.serviceWorker) {\n navigator.serviceWorker.register('/serviceworker.js');\n}\nThat tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend.\nYou might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don\u2019t do that. If you do, the service worker will only be able to handle requests made for files within /assets/js/. By putting the service worker script in the root directory, you\u2019re making sure that every request can be intercepted.\nGo further!\nNicely done! You\u2019ve made sure that if\u2014no, when\u2014a visitor can\u2019t reach your website, they\u2019ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe!\nThis is just the beginning. You can do more with service workers.\nWhat if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they\u2019re offline, you could show them the cached version.\nOr, what if instead of reaching out to the network first, you checked to see if a file is in the cache first? You could serve up that cached version\u2014which would be blazingly fast\u2014and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images.\nSo many options! The hard part isn\u2019t writing the code, it\u2019s figuring out the steps you want to take. Once you\u2019ve got those steps written out, then it\u2019s a matter of translating them into JavaScript.\nInevitably there will be some obstacles along the way\u2014usually it\u2019s a misplaced curly brace or a missing parenthesis. Don\u2019t be too hard on yourself if your code doesn\u2019t work at first. That\u2019s just Murphy\u2019s Law in action.", "year": "2018", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2018-12-04T00:00:00+00:00", "url": "https://24ways.org/2018/mistletoe-offline/", "topic": "code"}
{"rowid": 257, "title": "The (Switch)-Case for State Machines in User Interfaces", "contents": "You\u2019re tasked with creating a login form. Email, password, submit button, done.\n\u201cThis will be easy,\u201d you think to yourself.\nLogin form by Selecto\nYou\u2019ve made similar forms many times in the past; it\u2019s essentially muscle memory at this point. You\u2019re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. Sure, you\u2019ll have to translate the pixels to meaningful, responsive CSS values, but that\u2019s the least of your problems.\nAs you\u2019re writing up the HTML structure and CSS layout and styles for this form, you realize that you don\u2019t know what the successful \u201clogged in\u201d page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work.\n\nWhat if login fails? Where do those errors show up?\nShould we show errors differently if the user forgot to enter their email, or password, or both?\nOr should the submit button be disabled?\nShould we validate the email field?\nWhen should we show validation errors \u2013 as they\u2019re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.)\nWhen should the errors disappear?\nWhat do we show during the login process? Some loading spinner?\nWhat if loading takes too long, or a server error occurs?\n\nMany more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases.\nModeling Behavior\nDescribing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story \u2013 they often leave out potential edge-cases or small yet important bits of information.\nHowever, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine.\nThe general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states:\n\nstart - not submitted yet\nloading - submitted and logging in\nsuccess - successfully logged in\nerror - login failed\n\nAdditionally, we can describe an application as accepting a finite number of events \u2013 that is, all the possible events that can be \u201csent\u201d to the application, either from the user or some other external entity:\n\nSUBMIT - pressing the submit button\nRESOLVE - the server responds, indicating that login is successful\nREJECT - the server responds, indicating that login failed\n\nThen, we can combine these states and events to describe the transitions between them. 
That is, when the application is in one state, and an event occurs, we can specify what the next state should be:\n\nFrom the start state, when the SUBMIT event occurs, the app should be in the loading state.\nFrom the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state.\nIf login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state.\nFrom the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.\nOtherwise, if any other event occurs, don\u2019t do anything and stay in the same state.\n\nThat\u2019s a pretty thorough description, similar to a user story! It\u2019s also a bit more symbolic than a user story (e.g., \u201cwhen the SUBMIT event occurs\u201d instead of \u201cwhen the user presses the submit button\u201d), and that\u2019s for a reason. By representing states, events, and transitions symbolically, we can visualize what this state machine looks like:\n\nEvery state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event.\nFrom Visuals to Code\nDrawing a state machine doesn\u2019t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn\u2019t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application.\nWith the state machine model described above, the same visual description can be mapped directly to code. Traditionally, and as the title suggests, this is done using switch/case statements:\nfunction loginMachine(state, event) {\n switch (state) {\n case 'start':\n if (event === 'SUBMIT') {\n return 'loading';\n }\n break;\n case 'loading':\n if (event === 'RESOLVE') {\n return 'success';\n } else if (event === 'REJECT') {\n return 'error';\n }\n break;\n case 'success':\n // Accept no further events\n break;\n case 'error':\n if (event === 'SUBMIT') {\n return 'loading';\n }\n break;\n default:\n // This should never occur\n return undefined;\n }\n // For any unhandled event, stay in the same state\n return state;\n}\n\nconsole.log(loginMachine('start', 'SUBMIT'));\n// => 'loading'\nThis is fine (I suppose) but personally, I find it much easier to use objects:\nconst loginMachine = {\n initial: \"start\",\n states: {\n start: {\n on: { SUBMIT: 'loading' }\n },\n loading: {\n on: {\n REJECT: 'error',\n RESOLVE: 'success'\n }\n },\n error: {\n on: {\n SUBMIT: 'loading'\n }\n },\n success: {}\n }\n};\n\nfunction transition(state, event) {\n return (loginMachine\n .states[state] // Look up the state\n .on || {})[event] // Look up the next state based on the event\n || state; // If not found, return the current state\n}\n\nconsole.log(transition('start', 'SUBMIT'));\n// => 'loading'\nAs you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool, as demonstrated here:\n\nA Common Language Between Designers and Developers\nAlthough finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. 
By designing a state machine visually and with code, designers and developers alike can:\n\nidentify all possible states, and potentially missing states\ndescribe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?)\neliminate impossible states and identify states that are \u201cunreachable\u201d (have no entry transition) or \u201csunken\u201d (have no exit transition)\nadd features with full confidence of knowing what other states it might affect\nsimplify redundant states or complex user flows\ncreate test paths for almost every possible user flow, and easily identify edge cases\ncollaborate better by understanding the entire application model equally.\n\nNot a New Idea\nI\u2019m not the first to suggest that state machines can help bridge the gap between design and development.\n\nVince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines\nUser flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them.\nIn 1999, Ian Horrocks wrote a book titled \u201cConstructing the User Interface with Statecharts\u201d, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. The ideas in the book are still relevant today.\nMore than a decade earlier, David Harel published \u201cStatecharts: A Visual Formalism for Complex Systems\u201d, in which the statechart - an extended hierarchical state machine model - is born.\n\nState machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits:\n\nVisualized modeling\nPrecise diagrams\nAutomatic code generation\nComprehensive test coverage\nAccommodation of late-breaking requirements changes\n\nMoving Forward\nIt\u2019s time that we improve how we communicate between designers and developers, much less improve the way we develop UIs to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources:\n\nThe World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications\nThe Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling\nI gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases.\nI have also been working on XState, which is a library for \u201cstate machines and statecharts for the modern web\u201d. 
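\nTo give you a flavour, here\u2019s a sketch of the login machine from earlier written with XState (based on the library\u2019s API at the time of writing; check the XState documentation for the current syntax):\nimport { Machine } from 'xstate';\n\nconst loginMachine = Machine({\n id: 'login',\n initial: 'start',\n states: {\n start: {\n on: { SUBMIT: 'loading' }\n },\n loading: {\n on: {\n RESOLVE: 'success',\n REJECT: 'error'\n }\n },\n error: {\n on: { SUBMIT: 'loading' }\n },\n success: {\n type: 'final' // accept no further events\n }\n }\n});\n\nconsole.log(loginMachine.transition('start', 'SUBMIT').value);\n// => 'loading'\n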
You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages).\n\nI\u2019m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience.", "year": "2018", "author": "David Khourshid", "author_slug": "davidkhourshid", "published": "2018-12-12T00:00:00+00:00", "url": "https://24ways.org/2018/state-machines-in-user-interfaces/", "topic": "code"}
{"rowid": 256, "title": "Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist", "contents": "We\u2019re going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology! Using iNaturalist and Observable Notebooks we\u2019re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month.\n*(a Naturalist is someone who likes learning about nature, not someone who\u2019s a fan of being naked, that\u2019s a \u2018Naturist\u2019\u2026 different thing!)\nLooking for critters in rocky intertidal habitats\nOne of my favourite things to do is going rockpooling, or as we call it over here in California, \u2018tidepooling\u2019. Amounting to the same thing, it\u2019s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this \u2018rocky intertidal habitat\u2019\nA particularly exciting creature that lives here is the Nudibranch, a type of super colourful \u2018sea slug\u2019. There are over 3000 species of Nudibranch worldwide. (The word \u201cnudibranch\u201d comes from the Latin nudus, naked, and the Greek \u03b2\u03c1\u03b1\u03bd\u03c7\u03b9\u03b1 / brankhia, gills.)\n\u200b\n\nThey are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, despite the not-as-low tides as in our Winter and Spring seasons.\nMy favourite place to go tidepooling here is Pillar Point in Half Moon bay (at other times of the year more famously known for the surf competition \u2018Mavericks\u2019). The rockpools there are rich in species diversity, of varied types and water-coverage habitat zones as well as being relatively accessible.\n\u200b\n\nI was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn\u2019t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year is one of the many superpower goals of every budding Naturalist. \nUsing technology and the croudsourced species observations of the iNaturalist community we can shortcut our way to this superpower!\nFinding nearby animals with iNaturalist\nWe\u2019re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society.\nI\u2019ve been using iNaturalist for over two years to record and identify plants and animals that I\u2019ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I\u2019ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting. 
\nThis process is great because once an observation has been identified by at least two people it becomes \u2018verified\u2019 and is considered research grade. Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF.\n\niNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the \u2018Try it out\u2019 button.\n\nYou\u2019ll then get a URL that looks a bit like\nhttps://api.inaturalist.org/v1/observations?captive=false&geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584&radius=5&order=desc&order_by=created_at \nwhich you can call and interrogate using a programming language of your choice.\nIf you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season).\nRapid development using Observable Notebooks\nWe\u2019re going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable, they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python which is similar but takes a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript and since it is a hosted product it doesn\u2019t require any setup at all.\nYou can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example. \nEach \u2018notebook\u2019 consists of a page with a column of \u2018cells\u2019, similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks, I like this code introduction one from Observable (and D3) creator Mike Bostock.\nDeveloping your Naturalist superpowers\nIf you have an idea of what plants and critters you might see in a place at the time you visit, you can hone in on what you want to study and train your Naturalist eye to better identify the life around you.\nFor our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in.\nWe could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef. 
We can use this tool to draw the area we want to search within: boundingbox.klokantech.com\n\nThe tool lets you export the bounding box in several forms using the dropdown at the bottom left under the map. We are going to use the \u2018DublinCore\u2019 format as it\u2019s closest to the format needed by the iNaturalist API.\n westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811\nA quick map primer:\nThe higher the latitude the more north it is\nThe lower the latitude the more south it is\nLatitude 0 = the equator\n\nThe higher the longitude the more east it is of Greenwich\nThe lower the longitude the more west it is of Greenwich\nLongitude 0 = Greenwich\nIn the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California:\nnelat = highest latitude = north limit = 37.499811\nnelng = highest longitude = east limit = -122.492738\nswlat = smallest latitude = south limit = 37.492805\nswlng = smallest longitude = west limit = -122.50542\nAs API parameters these look like this:\n?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542\nThese parameters in this format can be used for most of the iNaturalist API methods.\nNudibranch seasonality in Pillar Point\nWe can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box.\nIn addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalist\u2019s internal number associated with the Nudibranch taxon. By using this we can get all species which are under the parent \u2018Order Nudibranchia\u2019. \nAnother useful piece of naturalist knowledge is understanding the biological classification scheme of Taxonomic Rank - roughly, when a species has a Latin name of two words eg \u2018Glaucus Atlanticus\u2019 the first Latin word is the \u2018Genus\u2019 like a family name \u2018Glaucus\u2019, and the second word identifies that particular species, like a given name \u2018Atlanticus\u2019. \nThe two Latin words together indicate a specific species; the term we use colloquially to refer to a type of animal often differs wildly region to region, and sometimes the same common name in two countries can refer to two different species. The common names for the Glaucus Atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, Scientists like using this Latin name format instead.\nThe following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box:\npillar_point_counts_per_week = fetch(\n \"https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true\"\n ).then(response => {\n return response.json();\n})\nOur next step is to take this data and draw a graph! We\u2019ll be using Vega-Lite for this, which is a fab JavaScript graphing library that is also easy and fun to use with Observable Notebooks. 
\n(Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite)\nThe iNaturalist API returns data that looks like this:\n{\n \"total_results\": 53,\n \"page\": 1,\n \"per_page\": 53,\n \"results\": {\n \"week_of_year\": {\n \"1\": 136,\n \"2\": 20,\n \"3\": 150,\n \"4\": 65,\n \"5\": 186,\n \"6\": 74,\n \"7\": 47,\n \"8\": 87,\n \"9\": 64,\n \"10\": 56,\nBut for our Vega-Lite graph we need data that looks like this:\n[{\n \"week\": \"01\",\n \"value\": 136\n}, {\n \"week\": \"02\",\n \"value\": 20\n}, ...]\nWe can convert what we get back from the API to the second format using a loop that iterates over the object keys:\nobjects_to_plot = {\n let objects = [];\n Object.keys(pillar_point_counts_per_week.results.week_of_year).map(function(week_index) {\n objects.push({\n week: `Wk ${week_index.toString()}`,\n observations: pillar_point_counts_per_week.results.week_of_year[week_index]\n });\n })\n return objects;\n}\nWe can then plug this into Vega-Lite to draw us a graph:\nvegalite({\n data: {values: objects_to_plot},\n mark: \"bar\",\n encoding: {\n x: {field: \"week\", type: \"nominal\", sort: null},\n y: {field: \"observations\", type: \"quantitative\"}\n },\n width: width * 0.9\n})\n\nIt\u2019s worth noting that we have a lot of observations of Nudibranchs particularly at Pillar Point due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Academy of Sciences. \nSo, what if we want to look for the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create us an object with the iNaturalist species ID and common & Latin names.\npillar_point_nudibranches = {\n let api_results = await fetch(\n \"https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true\"\n ).then(r => r.json())\n\n let species_list = api_results.results.map(i => ({\n value: i.taxon.id,\n label: `${i.taxon.preferred_common_name} (${i.taxon.name})`\n }));\n\n return species_list\n}\nWe can create an interactive select box by importing code from Jeremy Ashkenas\u2019 Observable Notebook: add import {select} from \"@jashkenas/inputs\" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn\u2019t matter - if one cell is referenced by another cell then, when that cell updates, all the cells that depend on it refresh themselves. You can also import and reference one notebook from another!\nviewof select_species = select({\n title: \"Which Nudibranch do you want to see seasonality for?\",\n options: [{value: \"\", label: \"All the Nudibranchs!\"}, ...pillar_point_nudibranches],\n value: \"\"\n})\nThen we go back to our old favourite, the histogram API just like before, only this time we are calling it with the value created by our select box ${select_species} as taxon_id instead of the number 47113.\npillar_point_counts_per_month_per_species = fetch(\n `https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true`\n).then(r => r.json())\nNow for the fun graph bit! 
As we did before, we re-format the result of the API into a format compatible with Vega-Lite:\nobjects_to_plot_species_month = {\n let objects = [];\n Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).map(function(month_index) {\n objects.push({\n month: (new Date(2018, (month_index - 1), 1)).toLocaleString(\"en\", {month: \"long\"}),\n observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index]\n });\n })\n return objects;\n}\n(Note that in the above code we are creating a date object with our specific month in, and using toLocaleString() to get the longer English name for the month. Because the JavaScript Date object counts January as 0, we use month_index - 1 to get the correct month)\nAnd we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update!\nvegalite({\n data: {values: objects_to_plot_species_month},\n mark: \"bar\",\n encoding: {\n x: {field: \"month\", type: \"nominal\", sort:null},\n y: {field: \"observations\", type: \"quantitative\"}\n },\n width: width * 0.9\n})\nNow we can see when is the best time of year to plan to go tidepooling in Pillar Point if we want to find a specific species of Nudibranch.\n\nThis tool is great for planning when to go rockpooling at Pillar Point, but what about if you are going this month and want to pre-train your eye with what to look for in order to impress your friends with your knowledge of Nudibranchs?\nWell\u2026 we can create ourselves a dynamic guide, with a list of the species, their photo, name and how many times they have been observed in that month of the year!\nOur select box this time looks as follows, simpler than before but assigning the month value to the variable selected_month.\nviewof selected_month = select({\n title: \"When do you want to see Nudibranchs?\",\n options: [\n { label: \"Whenever\", value: \"\" },\n { label: \"January\", value: \"1\" },\n { label: \"February\", value: \"2\" },\n { label: \"March\", value: \"3\" },\n { label: \"April\", value: \"4\" },\n { label: \"May\", value: \"5\" },\n { label: \"June\", value: \"6\" },\n { label: \"July\", value: \"7\" },\n { label: \"August\", value: \"8\" },\n { label: \"September\", value: \"9\" },\n { label: \"October\", value: \"10\" },\n { label: \"November\", value: \"11\" },\n { label: \"December\", value: \"12\" },\n ],\n value: \"\"\n })\nWe then can use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We\u2019ll be able to reference this response object and its values later with the variable we just created, eg: all_species_data.results[0].taxon.name.\nall_species_data = fetch(\n `https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true`\n).then(r => r.json())\nYou can render HTML directly in a notebook cell using Observable\u2019s html tagged template literal:\n
html`If you go to Pillar Point ${\n {\"\": \"\",\n \"1\":\"in January\",\n \"2\":\"in February\",\n \"3\":\"in March\",\n \"4\":\"in April\",\n \"5\":\"in May\",\n \"6\":\"in June\",\n \"7\":\"in July\",\n \"8\":\"in August\",\n \"9\":\"in September\",\n \"10\":\"in October\",\n \"11\":\"in November\",\n \"12\":\"in December\",\n }[selected_month]\n} you might see\u2026\n${all_species_data.results.map(s => `\n <h3>${s.taxon.name}</h3>\n <img src=\"${s.taxon.default_photo.medium_url}\">\n <p>Seen ${s.count} times</p>\n`)}`
\nThese few lines of HTML are all you need to get this exciting dynamic guide to which Nudibranchs you might see in each month!\n\nPlay with it yourself in this Observable Notebook.\nConclusion\nI hope by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks, and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new \u2018knowledge of nature\u2019 superpower.\nLastly, I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there.\nHere is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences.", "year": "2018", "author": "Natalie Downe", "author_slug": "nataliedowne", "published": "2018-12-18T00:00:00+00:00", "url": "https://24ways.org/2018/observable-notebooks-and-inaturalist/", "topic": "code"}
{"rowid": 255, "title": "Inclusive Considerations When Restyling Form Controls", "contents": "I would like to begin by saying 2018 was the year that we, as developers, visual designers, browser implementers, and inclusive design and experience specialists rallied together and achieved a long-sought goal: We now have the ability to fully style form controls, across all modern browsers, while retaining their ease of declaration, native functionality and accessibility.\nI would like to begin by saying all these things. However, they\u2019re not true. I think we spent the year debating about what file extension CSS should be written in, or something. Or was that last year? Maybe I\u2019m thinking of next year.\nReturning to reality, styling form controls these days is more tricky and time consuming than flat out \u201chard\u201d. In fact, depending on the length of the styling-leash a particular browser provides, there are controls you can style quite a bit. As for browsers with shorter leashes, there are other options to force their controls closer to the visual design you\u2019re tasked to match.\nHowever, when striving for custom styled controls, one must be careful not to forget about the inherent functionality and accessibility that many provide. People expect and deserve the products and services they use and pay for to work for them. If these services are visually pleasing, but only function for those who fit the handful of personas they\u2019ve been designed for, then we\u2019ve potentially deprived many people of the experiences they deserve.\nQuick level setting\nGetting down to brass tacks, when creating custom styled form controls that should retain their expected semantics and functionality, we have to consider the following:\n\nMany form elements can be styled directly through standard and browser-specific selectors, as well as through some clever styling of markup patterns. We should leverage these native options before reinventing any wheels.\nIt is important to preserve the underlying semantics of interactive controls. We must not unintentionally exclude people who use assistive technologies (ATs) that rely on these semantics. \nMake sure you test what you create. There is a lot of underlying complexity to form controls which may not be immediately apparent if they\u2019re judged solely by their visual presentation in a single browser, or with limited AT testing.\n\nVisually resetting and restyling form controls\nOver the course of 2018, I worked on a project where I tested and reported on the accessibility impact of styling various form controls. In conducting my research, I reviewed many of the form controls available in HTML, testing to see how malleable they were to direct styling from standardized CSS selectors. \nAs I expected, controls such as the various text fields could be restyled rather easily. However, other controls like radio buttons and checkboxes, or sub-elements of special text fields like date, search, and number spinners were resistant to standards-based styling. These particular controls and their sub-elements required specific pseudo-elements to reset and allow for restyling of some of their default presentation.\nSee the Pen form control styling comparisons by Scott (@scottohara) on CodePen.\nhttps://codepen.io/scottohara/pen/gZOrZm/\nOver the years, the ability to directly style form controls has been something many people have clamored for. 
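\nTo give a flavour of what that restyling can look like, here is a minimal sketch of resetting a checkbox so we can draw it ourselves (assuming a browser that supports the appearance property; the sizes and colours here are placeholder values of my own, not taken from the CodePen above):\ninput[type=\"checkbox\"] {\n -webkit-appearance: none; /* strip the native rendering (prefixed for older engines) */\n appearance: none;\n width: 1em;\n height: 1em;\n border: 2px solid currentColor; /* draw our own box */\n}\ninput[type=\"checkbox\"]:checked {\n background-color: currentColor; /* a simple filled \u2018checked\u2019 state */\n}\n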
However, one should realize the benefits of being able to restyle some of these controls may involve more effort than originally anticipated. \nIf you want to restyle a control from the ground up, then you must also recreate any :active, :focus, and :hover states for the control\u2014all those things that were previously taken care of by browsers. Not only that, but anything you restyle should also work with Windows High Contrast mode, styling for dark mode, and other OS-level settings that browsers respect without you even realizing. \n\n Have you ever tried playing with the accessibility settings of your display on macOS, or a similar Windows setting?\n \nIt is also worth mentioning that any browser prefixed pseudo-elements are not standardized CSS selectors. As MDN mentions at the top of their pages documenting these pseudo-elements:\n\nNon-standard\nThis feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. There may also be large incompatibilities between implementations and the behavior may change in the future.\n\nWhile this may be a deterrent for some, it\u2019s my opinion the risks are often only skin-deep. By which I mean if a non-standard selector does change, the control may look a bit quirky, but likely won\u2019t cease to function. A bug report which requires a CSS selector change can be an easy JIRA ticket to close, after all.\nCan\u2019t make it? Fake it.\nInternet Explorer 11 (IE11) is still neck-and-neck with other browsers in vying for the number 2 spot in desktop browser share. Due to IE not recognizing vendor-prefixed appearance properties, some essential controls like checkboxes won\u2019t render as intended. \nAdditionally, some controls like select boxes, file uploads, and sub-elements of date fields (calendar popups) cannot be modified by just relying on styling their HTML selectors alone. This means that unless your company designs and develops with a progressive enhancement or graceful degradation mindset, you\u2019ll need to take a different approach in styling.\nGetting clever with markup and CSS\nThe following CodePen demonstrates how we can create a custom checkbox markup pattern. By mindfully utilizing CSS sibling selectors and positioning of the native control, we can create custom visual styling while also retaining the functionality and accessibility expectations of a native checkbox.\nSee the Pen Accessible Styled Native Checkbox by Scott (@scottohara) on CodePen.\nhttps://codepen.io/scottohara/pen/RqEayN/\nCustomizing checkboxes by visually hiding the input and styling well-placed markup with sibling selectors may seem old hat to some. However, many variations of these patterns do not take into account how their method of visually hiding the checkboxes can create discovery issues for certain screen reader navigation methods. For instance, if someone is using a mobile device and exploring by touch, how will they be able to drag their finger over an input that has been reduced to a single pixel, or positioned off screen?\nAs we move away from the simplicity of declaring a single HTML element and using clever CSS and markup patterns to create restyled form controls, we increase the need for additional testing to ensure no expected behaviors are lost. In other words, what should work in theory may not work in practice when you introduce the various ways people may engage with a form control. 
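\nOne way to reduce those discovery issues is to hide the native checkbox visually while keeping it at full size, layered over the custom-styled version, rather than shrinking it or positioning it off screen. A minimal sketch of the idea (the wrapper class here is a placeholder of my own, not taken from the CodePen above):\n.checkbox-wrapper {\n position: relative; /* containing block for the real input */\n}\n.checkbox-wrapper input[type=\"checkbox\"] {\n position: absolute;\n top: 0;\n left: 0;\n width: 100%;\n height: 100%;\n opacity: 0; /* invisible, but still full size for touch exploration and click targeting */\n}\n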
It\u2019s worth remembering: what might be typical interactions for ourselves may be problematic, if not impossible, for others.\nLimitations to cleverness\nCreative coding will allow us to apply more consistent custom styles to some of the more problematic form controls. However, a varied amount of custom markup, CSS, and sometimes JavaScript will be needed to preserve the inherent usability and accessibility of each control we take this approach with.\nHowever, this method of restyling still doesn\u2019t solve for the lack of feature parity across different browsers. Nor is it a means to account for controls which don\u2019t have a native HTML element equivalent, such as a switch or multi-thumb range slider. Maybe there\u2019s a control that calls for a visual design or proposed user experience that would require too much fighting with a native control\u2019s behavior to be worth the level of effort to implement. Here\u2019s where we need to take another approach.\nUsing ARIA when appropriate\nSometimes we have no other option than to roll up our sleeves and start building custom form controls from scratch. Fair warning though: just because we\u2019re not leveraging a native HTML control as our foundation, it doesn\u2019t mean we have carte blanche to throw semantics out the window. Enter Accessible Rich Internet Applications (ARIA).\nARIA is a set of attributes that can modify existing elements, or extend HTML to include roles, properties and states that aren\u2019t native to the language. While divs and spans have no meaningful semantic information for us to leverage, with help from the ARIA specification and ARIA Authoring Practices we can incorporate these elements to help create the UI that we need while still following the first rule of Using ARIA:\n\nIf you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so.\n\nBy using these documents as guidelines, and testing our custom controls with people of various abilities, we can do our best to make sure a custom control performs as expected for as many people as possible.\nExceptions to the rule\nOne example of a control that allows for an exception to the first rule of Using ARIA would be a switch control.\nSwitches and checkboxes are similar components, in that they have both on/checked and off/unchecked states. However, checkboxes are often expected within the context of forms, or used to filter search queries on e-commerce sites. Switches are typically used to instantly enable or deactivate a particular setting at a component or app-based level, as this is their behavior in the native mobile apps in which they were popularized.\nWhile a switch control could be created by visually restyling a checkbox, this does not automatically mean that the underlying semantics and functionality will match the visual representation of the control. 
For example, the following CodePen restyles checkboxes to look like a switch control, but the semantics of the checkboxes remain, communicating a different way of interacting with the control than you might expect from a native switch.\nSee the Pen Switch Boxes - custom styled checkboxes posing as switches by Scott (@scottohara) on CodePen.\nhttps://codepen.io/scottohara/pen/XyvoeE/\nBy adding role=\"switch\" to these checkboxes, we can repurpose the inherent checked/unchecked states of the native control, its ability to be focused with the Tab key, and the Space key to toggle its state.\nBut while this is a valid approach to take in building a switch, how does this actually match up to reality?\nDoes it pass the test(s)?\nWhether deconstructing form controls to fully restyle them, or leveraging them and other HTML elements as a base to expand on, or create, a non-native form control, building it is just the start. We must test that what we\u2019ve restyled or rebuilt works the way people expect it to, if not better.\nWhat we must do here is run a gamut of comparative tests to document the functionality and usability of native form controls. For example:\n\n\nIs the control implemented in all supported browsers?\nIf not: where are the gaps? Will it be necessary to implement a custom solution for the situations that degrade to a standard text field? \nIf so: is each browser\u2019s implementation a good user experience? Is there room for improvement that can be tested against the native baseline? \n\n\nTest with multiple input devices.\nWhere the control is implemented, what is the quality of the user experience when using different input devices, such as a mouse, touchscreen, keyboard, speech recognition or switch device, to name a few? \nYou\u2019ll find some HTML5 controls (like date pickers and number spinners) have additional UI elements that may not be announced to AT, or may not even be keyboard accessible. Often these controls can be adjusted by other means, such as text entry, or using arrow keys to increase or decrease values. If restyling or recreating a custom version of one of these controls, it may make sense to maintain these native experiences as well.\n\n\nHow well does the control take to custom styles?\nIf a control can be styled enough to not need to be rebuilt from scratch, that\u2019s great! But make sure that there are no adverse effects on its accessibility. For instance, range sliders can be restyled and maintain their functionality and accessibility. However, elements like progress bars can be negatively affected by direct styling. \nAlways test with different browser and AT pairings to ensure nothing is lost when controls are restyled. \n\n\nDo specifications match reality?\nIf recreating controls to get around native limitations, such as the inability to style the options of a select element, or requiring a Switch control which is not native to HTML, do your solutions match user expectations? \nFor instance, selects have unique picker interfaces on touch devices. And switches have varied levels of support for different browser and screen reader pairings. Test with real people, and check your analytics. If these experiences don\u2019t match people\u2019s expectations, then maybe another solution is in order? \n\n\nWrapping up\nWhile styling form controls is definitely easier than it\u2019s ever been, that doesn\u2019t mean that it\u2019s at all simple, nor will it likely ever be. 
The level of difficulty you\u2019re going to face depends entirely on what it is you\u2019re hoping to style, add on to, or recreate. And even if you build your custom control exactly to specification, you\u2019ll still be reliant on browsers and assistive technologies being able to fully understand the component they\u2019ve been presented with.\nForms and their controls are an incredibly important part of what we need the Internet for. Paying bills, scheduling appointments, ordering groceries, renewing your license or even ordering gifts for the holidays. These are all important tasks that people should be able to complete with as little effort as possible. Especially since for some, completing these tasks online might be their only option.\n2018 didn\u2019t end up being the year we got full customization of form controls sorted out. But that\u2019s OK. If we can continue to mindfully work with what we have, and instead challenge ourselves to follow inclusive design principles, well-thought-out Form Design Patterns, and solve problems with an accessibility-first approach, we may come to realize that we can get along just fine without fully branded drop downs. \nAnd hey. There\u2019s always next year, right?", "year": "2018", "author": "Scott O'Hara", "author_slug": "scottohara", "published": "2018-12-13T00:00:00+00:00", "url": "https://24ways.org/2018/inclusive-considerations-when-restyling-form-controls/", "topic": "code"}
{"rowid": 254, "title": "What I Learned in Six Years at GDS", "contents": "When I joined the Government Digital Service in April 2012, GOV.UK was just going into public beta. GDS was a completely new organisation, part of the Cabinet Office, with a mission to stop wasting government money on over-complicated and underperforming big IT projects and instead deliver simple, useful services for the public.\nLots of people who were experts in their fields were drawn in by this inspiring mission, and I learned loads from working with some true leaders. Here are three of the main things I learned.\n1. What is the user need?\nThe main discipline I learned from my time at GDS was to always ask \u2018what is the user need?\u2019 It\u2019s very easy to build something that seems like a good idea, but until you\u2019ve identified what problem you are solving for the user, you can\u2019t be sure that you are building something that is going to help solve an actual problem.\nA really good example of this is GOV.UK Notify. This service was originally conceived of as a status tracker; a \u201cwhere\u2019s my stuff\u201d for government services. For example, if you apply for a passport online, it can take up to six weeks to arrive. After a few weeks, you might feel anxious and phone the Home Office to ask what\u2019s happening. The idea of the status tracker was to allow you to get this information online, saving your time and saving government money on call centres.\nThe project started, as all GDS projects do, with a discovery. The main purpose of a discovery is to identify the users\u2019 needs. At the end of this discovery, the team realised that a status tracker wasn\u2019t the way to address the problem. As they wrote in this blog post: \n\nStatus tracking tools are often just \u2018channel shift\u2019 for anxiety. They solve the symptom and not the problem. They do make it more convenient for people to reduce their anxiety, but they still require them to get anxious enough to request an update in the first place.\n\nWhat would actually address the user need would be to give you the information before you get anxious about where your passport is. For example, when your application is received, email you to let you know when to expect it, and perhaps text you at various points in the process to let you know how it\u2019s going. So instead of a status tracker, the team built GOV.UK Notify, to make it easy for government services to incorporate text, email and even letter notifications into their processes.\nMaking sure you know your user\nAt GDS, user needs were taken very seriously. We had a user research lab on site and everyone was required to spend two hours observing user research every six weeks. Ideally you\u2019d observe users working with things you\u2019d built, but even if they weren\u2019t, it was an incredibly valuable experience, and something you should seek out if you are able to.\nEven if we think we understand our users very well, it is very enlightening to see how they actually use our stuff. This is partly because in technology we tend to be power users, and the average user doesn\u2019t use technology the same way we do. But even if you are building things for other developers, someone who is unfamiliar with it will interact with it in a way that may be very different to what you have envisaged.\nUser needs is not just about building things\nAsking the question \u201cwhat is the user need?\u201d really helps focus on why you are doing what you are doing. 
It keeps things on track, and helps the team think about what the actual desired end goal is (and should be). \nThinking about user needs has helped me with lots of things, not just building services. For example, you are raising a pull request. What\u2019s the user need? The reviewer needs to be able to easily understand what the change you are proposing is, why you are proposing that change and any areas where you need particular help with the review. \nOr you are writing an email to a colleague. What\u2019s the user need? What are you hoping the reader will learn, understand or do as a result of your email?\n2. Make things open: it makes things better\nThe second important thing I learned at GDS was \u2018make things open: it makes things better\u2019. This works on many levels: being open about your strategy, blogging about what you are doing and what you\u2019ve learned (including mistakes), and \u2013 the part that I got most involved in \u2013 coding in the open.\nTalking about your work helps clarify it\nOne thing we did really well at GDS was blogging \u2013 a lot \u2013 about what we were working on. Blogging about what you are working on is really valuable for the writer because it forces you to think logically about what you are doing in order to tell a good story. If you are blogging about upcoming work, it makes you think clearly about why you\u2019re doing it; and it also means that people can comment on the blog post. Often people had really useful suggestions or clarifying questions.\nIt\u2019s also really valuable to blog about what you\u2019ve learned, especially if you\u2019ve made a mistake. It makes sure you\u2019ve learned the lesson and helps others avoid making the same mistakes. As well as blogging about lessons learned, GOV.UK also publishes incident reports when there is an outage or service degradation. Being open about things like this really engenders an atmosphere of trust and safe learning; which helps make things better.\nCoding in the open has a lot of benefits\nIn my last year at GDS I was the Open Source Lead, and one of the things I focused on was the requirement that all new government source code should be open. From the start, GDS coded in the open (the GitHub organisation still has the non-intuitive name alphagov, because it was created by the team doing the original Alpha of GOV.UK, before GDS was even formed).\nWhen I first joined GDS I was a little nervous about the fact that anyone could see my code. I worried about people seeing my mistakes, or receiving critical code reviews. (Setting people\u2019s minds at rest about these things is why it\u2019s crucial to have good standards around communication and positive behaviour - even a critical code review should be considerately given). \nBut I quickly realised there were huge advantages to coding in the open. In the same way as blogging your decisions makes you think carefully about whether they are good ones and what evidence you have, the fact that anyone in the world could see your code (even if, in practice, they probably won\u2019t be looking) makes everyone raise their game slightly. The very fact that you know it\u2019s open, makes you make it a bit better.\nIt helps with lots of other things as well, for example it makes it easier to collaborate with people and share your work. 
And now that I\u2019ve left GDS, it\u2019s so useful to be able to look back at code I worked on to remember how things worked.\nShare what you learn\nIt\u2019s sometimes hard to know where to start with being open about things, but it gets easier and becomes more natural as you practice. It helps you clarify your thoughts and follow through on what you\u2019ve decided to do. Working at GDS when this was a very important principle really helped me learn how to do this well.\n3. Do the hard work to make it simple (tech edition)\n\u2018Start with user needs\u2019 and \u2018Make things open: it makes things better\u2019 are two of the excellent government design principles. They are all good, but the third thing that I want to talk about is number 4: \u2018Do the hard work to make it simple\u2019, and specifically, how this manifests itself in the way we build technology.\nAt GDS, we worked very hard to do the hard work to make the code, systems and technology we built simple for those who came after us. For example, writing good commit messages is taken very seriously. There is commit message guidance, and it was not unusual for a pull request review to ask for a commit message to be rewritten to make it clearer.\nWe worked very hard on making pull requests good, keeping the reviewer in mind and making it clear how best to review them.\nReviewing others\u2019 pull requests is the highest priority so that no-one is blocked; teams have screens showing the status of open pull requests (using fourth wall), and we even had a \u2018pull request seal\u2019, a bot that publishes pull requests to Slack and gets angry if they go uncommented on for more than two days.\nMaking it easier for developers to support the site\nAnother example of doing the hard work to make it simple was the opsmanual. I spent two years on the web operations team on GOV.UK, and one of the things I loved about that team was the huge efforts everyone went to in order to be open and inclusive to developers.\nThe team had some people who were really expert in web ops, but they were all incredibly helpful when bringing me on board as a developer with no previous experience of web ops, and they also patiently explained things whenever other devs in similar positions came with questions. \nThe main artefact of this was the opsmanual, which contained write-ups of how to do lots of things. One of the best things was that every alert that might lead to someone being woken up in the middle of the night had a link to documentation on the opsmanual, which detailed what the alert meant and some suggested actions that could be taken to address it.\nThis was important because most of the devs on GOV.UK were on the on-call rota, so if they were woken at 3am by an alert they\u2019d never seen before, the opsmanual information might give them everything they needed to solve it, without the years of web ops training and the deep familiarity with the GOV.UK infrastructure that came with working on it every day.\nDevelopers are users too\nDoing the hard work to make it simple means that users can do what they need to do, and this applies even when the users are your developer peers. At GDS I really learned how to focus on simplicity for the user, and how much better this makes things work.\nThese three principles help us make great things\nI learned so much more in my six years at GDS. For example, the civil service has a very fair way of interviewing. 
I learned about the importance of good comms, about working late responsibly, and about the value of content design.\nAnd the real heart of what I learned, the guiding principles that help us deliver great products, is encapsulated by the three things I\u2019ve talked about here: think about the user need, make things open, and do the hard work to make it simple.", "year": "2018", "author": "Anna Shipman", "author_slug": "annashipman", "published": "2018-12-08T00:00:00+00:00", "url": "https://24ways.org/2018/what-i-learned-in-six-years-at-gds/", "topic": "business"}
{"rowid": 253, "title": "Clip Paths Know No Bounds", "contents": "CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance.\nThe basics of a clip path\nBefore we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everything in the element that is not within the bounds of our shape will be visually removed.\nUsing the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely\u2019s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element\u2019s dimensions.\nSee the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen.\n\nSo for an octagon, we can set eight x, y pairs of percentages to define those points. In this case, we start 30% into the width of the box for the first x, at the top of the box for the y, and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines.\nclip-path: polygon(\n 30% 0%,\n 70% 0%,\n 100% 30%,\n 100% 70%,\n 70% 100%,\n 30% 100%,\n 0% 70%,\n 0% 30%\n);\nA shape with fewer vertices than the eye can see\nIt\u2019s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. However, we gain some flexibility by thinking outside the box \u2014 or more specifically, when we think outside the range of 0% - 100%.\nOur element\u2019s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element.\nSee the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen.\n\nBy going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons.\nOur earlier octagon can similarly be made with only four points.\nSee the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen.\n\nMultiple shapes, one clip path\nWe can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon().\nSee the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen.\n\nDepending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element\u2019s box, we can draw extra lines to help us get where we need to go next.\nIt can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips. This example is two elements, each divided into a few rectangles.\nSee the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen.\n\nDifferent shapes with fill rules\nA polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification \u2014 the Fill Rule. 
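\nThe fill rule is declared as an optional keyword at the start of the points list. As a sketch of the syntax (note that browser support for the keyword within clip-path may vary), a five-pointed star drawn with five self-intersecting lines might look like this:\nclip-path: polygon(\n evenodd,\n 50% 0%,\n 21% 90%,\n 98% 35%,\n 2% 35%,\n 79% 90%\n);\n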
The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape.\nSee the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen.\n\nAs lines intersect, we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path\u2019s lines, the point is considered outside, and if it crosses an odd number, the point is inside.\nOrder of operations\nIt is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more.\nThese compositing effects are applied in the order:\n\nCSS Filters (e.g. filter: blur(2px))\nClipping (e.g. what this article is about)\nMasking (Clipping\u2019s cousin)\nBlend Modes (e.g. mix-blend-mode: multiply)\nOpacity\n\nThis means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost, because we have clipped away the element\u2019s box edges.\nSee the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen.\n\nIf we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally.\nRevealing content with animation\nCSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points needs to be the same for each keyframe, as does the fill rule. Otherwise, the browser will not have enough information to interpolate the intermediate values. \nSee the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen.\n\nDon\u2019t keep CSS Shapes in a box\nClip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible, which makes this property fairly powerful.", "year": "2018", "author": "Dan Wilson", "author_slug": "danwilson", "published": "2018-12-20T00:00:00+00:00", "url": "https://24ways.org/2018/clip-paths-know-no-bounds/", "topic": "code"}
{"rowid": 252, "title": "Turn Jekyll up to Eleventy", "contents": "Sometimes it pays not to overcomplicate things. While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper \u2014 and more secure \u2014 option. \nThe JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it\u2019s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal HomePage. Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013.\nBy combining three approachable languages \u2014 Markdown for content, YAML for data and Liquid for templating \u2014 the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it\u2019s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself \u201cthis, but in Node\u201d. Thankfully, one of Santa\u2019s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree.\nIntroducing Eleventy\nEleventy is a more flexible alternative to Jekyll. Besides being written in Node, it\u2019s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains).\nAs content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I\u2019ve discovered, there are a few gotchas. If you\u2019ve been considering making the switch, here are a few tips and tricks to help you on your way1.\nNote: Throughout this article, I\u2019ll be converting Matt Cone\u2019s Markdown Guide site as an example. If you want to follow along, start by cloning the git repository, and then change into the project directory:\ngit clone https://github.com/mattcone/markdown-guide.git\ncd markdown-guide\nBefore you start\nIf you\u2019ve used tools like Grunt, Gulp or Webpack, you\u2019ll be familiar with Node.js. But if you\u2019ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now\u2019s the time to install Node.js and set up your project to work with its package manager, NPM:\n\nInstall Node.js:\n\nMac: If you haven\u2019t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node.\nWindows: Download the Windows installer from the Node.js website and follow the instructions.\n\nInitialise NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like RubyGems\u2019s Gemfile, this file contains a list of your project\u2019s third-party dependencies.\n\nIf you\u2019re managing your site with Git, make sure to add node_modules to your .gitignore file too. 
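\nA minimal .gitignore for a project like this might contain just the following two entries (plus any other build artefacts your own setup produces):\n_site/\nnode_modules/\n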
Unlike RubyGems, NPM stores its dependencies alongside your project files. This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn\u2019t be version controlled. Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too.\nInstalling Eleventy\nWith Node.js installed and your project set up to work with NPM, we can now install Eleventy as a dependency:\nnpm install --save-dev @11ty/eleventy\nIf you open package.json you should see the following:\n\u2026\n\"devDependencies\": {\n \"@11ty/eleventy\": \"^0.6.0\"\n}\n\u2026\nWe can now run Eleventy from the command line using NPM\u2019s npx command. For example, to convert the README.md file to HTML, we can run the following:\nnpx eleventy --input=README.md --formats=md\nThis command will generate a rendered HTML file at _site/README/index.html. Like Jekyll, Eleventy shares the same default name for its output directory (_site), a pattern we will see repeatedly during the transition.\nConfiguration\nWhereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we\u2019ll see later on.\nWe\u2019ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options:\nmodule.exports = function(eleventyConfig) {\n return {\n dir: {\n input: \"./\", // Equivalent to Jekyll's source property\n output: \"./_site\" // Equivalent to Jekyll's destination property\n }\n };\n};\nA few other things to bear in mind:\n\n\nWhereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore).\n\nBy default, Eleventy uses markdown-it to parse Markdown. If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you\u2019ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins.\n\nLayouts\nOne area where Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub).\nWanting to keep our layouts together, we\u2019ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder. We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy\u2019s config:\nmodule.exports = function(eleventyConfig) {\n // Aliases are in relation to the _includes folder\n eleventyConfig.addLayoutAlias('about', 'layouts/about.html');\n eleventyConfig.addLayoutAlias('book', 'layouts/book.html');\n eleventyConfig.addLayoutAlias('default', 'layouts/default.html');\n\n return {\n dir: {\n input: \"./\",\n output: \"./_site\"\n }\n };\n}\nDetermining which template language to use\nEleventy will transform Markdown (.md) files using Liquid by default, but we\u2019ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. 
basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do.\nVariables\nOn the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating.\nSite variables\nAlongside build settings, Jekyll lets you store common values in its configuration file, which can be accessed in our templates via the site.* namespace. For example, in our Markdown Guide, we have the following values:\ntitle: \"Markdown Guide\"\nurl: https://www.markdownguide.org\nbaseurl: \"\"\nrepo: http://github.com/mattcone/markdown-guide\ncomments: false\nauthor:\n name: \"Matt Cone\"\nog_locale: \"en_US\"\nEleventy\u2019s configuration uses JavaScript, which is not suited to storing values like this. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged.\n{\n \"title\": \"Markdown Guide\",\n \"url\": \"https://www.markdownguide.org\",\n \"baseurl\": \"\",\n \"repo\": \"http://github.com/mattcone/markdown-guide\",\n \"comments\": false,\n \"author\": {\n \"name\": \"Matt Cone\"\n },\n \"og_locale\": \"en_US\"\n}\nPage variables\nThe table below shows a mapping of common page variables. As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates, etc.) get prefixed with the page.* namespace:\n\nJekyll           Eleventy\npage.url         page.url\npage.date        page.date\npage.path        page.inputPath\npage.id          page.outputPath\npage.name        page.fileSlug\npage.content     content\npage.title       title\npage.foobar      foobar\n\nWhen iterating through pages, frontmatter values are available via the data object while content is available via templateContent:\n\nJekyll           Eleventy\nitem.url         item.url\nitem.date        item.date\nitem.path        item.inputPath\nitem.name        item.fileSlug\nitem.id          item.outputPath\nitem.content     item.templateContent\nitem.title       item.data.title\nitem.foobar      item.data.foobar\n\nIdeally the discrepancy between page and item variables will change in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data.\nPagination variables\nWhereas Jekyll\u2019s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data. Given this disparity, the changes to pagination are more significant, but this table shows a mapping of equivalent variables:\n\nJekyll                          Eleventy\npaginator.page                  pagination.pageNumber\npaginator.per_page              pagination.size\npaginator.posts                 pagination.items\npaginator.previous_page_path    pagination.previousPageHref\npaginator.next_page_path        pagination.nextPageHref\n\nFilters\nAlthough Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few \u2014 more than can be covered by this article \u2014 but you can replicate them by using Eleventy\u2019s addFilter configuration option. Let\u2019s convert two used by our Markdown Guide: jsonify and where.\nThe jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON object, we can use its stringify method in our replacement filter. 
addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform:\n// {{ variable | jsonify }}\neleventyConfig.addFilter('jsonify', function (variable) {\n return JSON.stringify(variable);\n});\nJekyll\u2019s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match:\n{{ site.members | where: \"graduation_year\",\"2014\" }}\nTo account for this, instead of passing one value to the second argument of addFilter, we can pass three: the array we want to examine, the key we want to look for, and the value it should match:\n// {{ array | where: key,value }}\neleventyConfig.addFilter('where', function (array, key, value) {\n return array.filter(item => {\n const keys = key.split('.');\n const reducedKey = keys.reduce((object, key) => {\n return object[key];\n }, item);\n\n return (reducedKey === value ? item : false);\n });\n});\nThere\u2019s quite a bit going on within this filter, but I\u2019ll try to explain. Essentially we\u2019re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it\u2019s removed. Phew!\nIncludes\nAs with filters, Jekyll provides a set of tags that aren\u2019t strictly part of Liquid either. This includes one of the most useful, the include tag. LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you\u2019re not passing variables to your includes, everything should work without modification. Otherwise, note that whereas with Jekyll you would do this:\n\n{% include include.html value=\"key\" %}\n\n\n{{ include.value }}\nin Eleventy, you would do this:\n\n{% include \"include.html\", value: \"key\" %}\n\n\n{{ value }}\nA downside of Shopify\u2019s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments.\nTweaking Liquid\nYou may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS\u2019s dynamicPartials option to false. Additionally, Eleventy doesn\u2019t support the include_relative tag, meaning you can\u2019t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option. \nThankfully, Eleventy allows us to pass options to LiquidJS:\neleventyConfig.setLiquidOptions({\n dynamicPartials: false,\n root: [\n '_includes',\n '.'\n ]\n});\nCollections\nJekyll\u2019s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way.\nCollections in Jekyll\nIn Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. 
Our Markdown Guide has two collections:\ncollections:\n - basic-syntax\n - extended-syntax\nThese correspond to the folders _basic-syntax and _extended-syntax, whose content we can iterate over like so:\n{% for syntax in site.extended-syntax %}\n {{ syntax.title }}\n{% endfor %}\nCollections in Eleventy\nThere are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tag property in content files:\n---\ntitle: Strikethrough\nsyntax-id: strikethrough\nsyntax-summary: \"~~The world is flat.~~\"\ntag: extended-syntax\n---\nWe can then iterate over tagged content like this:\n{% for syntax in collections.extended-syntax %}\n {{ syntax.data.title }}\n{% endfor %}\nEleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters):\neleventyConfig.addCollection('basic-syntax', collection => {\n return collection.getFilteredByGlob('_basic-syntax/*.md');\n});\n\neleventyConfig.addCollection('extended-syntax', collection => {\n return collection.getFilteredByGlob('_extended-syntax/*.md');\n});\nWe can extend this further. For example, say we wanted to sort a collection by the display_order property in our document\u2019s frontmatter. We could take the results of collection.getFilteredByGlob and then use JavaScript\u2019s sort method to sort the result:\neleventyConfig.addCollection('example', collection => {\n return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => {\n return a.data.display_order - b.data.display_order;\n });\n});\nHopefully, this gives you just a hint of what\u2019s possible using this approach.\nUsing directory data to manage defaults\nBy default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. As in Jekyll, we can change where files are saved using the permalink property. For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following:\n---\ntitle: Lists\nsyntax-id: lists\napi: \"no\"\npermalink: /basic-syntax/lists.html\n---\nThis is probably not something we want to manage on a file-by-file basis but, again, Eleventy has features that can help: directory data and permalink variables.\nFor example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json, and set our default values. For permalinks, we can use Liquid templating to construct our desired path:\n{\n \"layout\": \"syntax\",\n \"tag\": \"basic-syntax\",\n \"permalink\": \"basic-syntax/{{ title | slug }}.html\"\n}\nHowever, Markdown Guide doesn\u2019t publish syntax examples at individual permanent URLs; it merely uses content files to store data. So let\u2019s change things around a little. No longer tied to Jekyll\u2019s rules about where collection folders should be saved and how they should be labelled, we\u2019ll move them into a folder called _content:\nmarkdown-guide\n\u2514\u2500\u2500 _content\n \u251c\u2500\u2500 basic-syntax\n \u251c\u2500\u2500 extended-syntax\n \u251c\u2500\u2500 getting-started\n \u2514\u2500\u2500 _content.json\nWe will also add a directory data file (_content.json) inside this folder. 
As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published:\n{\n \"permalink\": false\n}\nStatic files\nEleventy only transforms files whose template language it\u2019s familiar with. But often we have static assets that don\u2019t need converting, yet do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true:\nmodule.exports = function(eleventyConfig) {\n \u2026\n\n // Copy the `assets` directory to the compiled site folder\n eleventyConfig.addPassthroughCopy('assets');\n\n return {\n dir: {\n input: \"./\",\n output: \"./_site\"\n },\n passthroughFileCopy: true\n };\n}\nFinal considerations\nAssets\nUnlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts \u2014 we have plenty of choices in that department already. If you\u2019ve been using Jekyll to compile Sass files into CSS, or CoffeeScript into JavaScript, you will need to research alternative options, which are sadly beyond the scope of this article.\nPublishing to GitHub Pages\nOne of the benefits of Jekyll is its deep integration with GitHub Pages. Publishing an Eleventy-generated site \u2014 or any site not built with Jekyll \u2014 to GitHub Pages can be quite involved, typically requiring you to copy the generated site to the gh-pages branch or include that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It\u2019s enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged, such as Netlify and Google Firebase. But remember: you can publish a static site almost anywhere!\n\nGoing one louder\nIf you\u2019ve been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder of why it can be prudent to avoid jumping aboard bandwagons. \nWhile it\u2019s fun to try new software and emerging technologies, doing so can require a lot of work and compromise. For all of Eleventy\u2019s appeal, it\u2019s only a year old, so it has little in the way of an ecosystem of plugins or themes. It also only has one maintainer. Jekyll, on the other hand, is a mature project with a large community of maintainers and contributors supporting it.\nI moved my site to Eleventy because the slowness and inflexibility of Jekyll were preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that\u2019s perfectly fine! \nBut these go to 11.\n\n\n\n\nInformation provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5\u00a0\u21a9", "year": "2018", "author": "Paul Lloyd", "author_slug": "paulrobertlloyd", "published": "2018-12-11T00:00:00+00:00", "url": "https://24ways.org/2018/turn-jekyll-up-to-eleventy/", "topic": "content"}
{"rowid": 251, "title": "The System, the Search, and the Food Bank", "contents": "Imagine a warehouse, half the length of a football field, with a looped conveyer belt down the center. \nOn the belt are plastic bins filled with assortments of shelf-stable food\u2014one may have two bags of potato chips, seventeen pudding cups, and a box of tissues; the next, a dozen cans of beets. The conveyer belt is ringed with large, empty cardboard boxes, each labeled with categories like \u201cBottled Water\u201d or \u201cCereal\u201d or \u201cCandy.\u201d \nSuch was the scene at my local food bank a few Saturdays ago, when some friends and I volunteered for a shift sorting donated food items. Our job was to fill the labeled cardboard boxes with the correct items nabbed from the swiftly moving, randomly stocked plastic bins.\nI could scarcely believe my good fortune of assignments. You want me to sort things? Into categories? For several hours? And you say there\u2019s an element of time pressure? Listen, is there some sort of permanent position I could be conscripted into.\nLook, I can\u2019t quite explain it: I just know that I love sorting, organizing, and classifying things\u2014groceries at a food bank, but also my bookshelves, my kitchen cabinets, my craft supplies, my dishwasher arrangement, yes I am a delight to live with, why do you ask?\nThe opportunity to create meaning from nothing is at the core of my excitement, which is why I\u2019ve tried to build a career out of organizing digital content, and why I brought a frankly frightening level of enthusiasm to the food bank. \u201cI can\u2019t believe they\u2019re letting me do this,\u201d I whispered in awe to my conveyer belt neighbor as I snapped up a bag of popcorn for the Snacks box with the kind of ferocity usually associated with birds of prey.\nThe jumble of donated items coming into the center need to be sorted in order for the food bank to be able to quantify, package, and distribute the food to those who need it (I sense a metaphor coming on). It\u2019s not just a nice-to-have that we spent our morning separating cookies from carrots\u2014it\u2019s a crucial step in the process. Organization makes the difference between chaos and sense, between randomness and usefulness, whether we\u2019re talking about donated groceries or\u2014there it is\u2014web content.\nThis happens through the magic of criteria matching. In order for us to sort the food bank donations correctly, we needed to know not only the categories we were sorting into, but also the criteria for each category. Does canned ravioli count as Canned Soup? Does enchilada sauce count as Tomatoes? Do protein bars count as Snacks? (Answers: yes, yes, and only if they are under 10 grams of protein or will expire within three months.) \nIs X a Y? was the question at the heart of our food sorting\u2014but it\u2019s also at the heart of any information-seeking behavior. When we are organizing, or looking for, any kind of information, we are asking ourselves:\n\nWhat is the criteria that defines Y?\nDoes X meet that criteria?\n\nWe don\u2019t usually articulate it so concretely because it\u2019s a background process, only leaping to consciousness when we encounter a stumbling block. If cans of broth flew by on the conveyer belt, it didn\u2019t require much thought to place them in the Canned Soup box. 
Boxed broth, on the other hand, wasn\u2019t allowed, causing a small cognitive hiccup\u2014this X is NOT a Y\u2014that sometimes meant having to re-sort our boxes.\nOn the web, we\u2019re interested\u2014I would hope\u2014in reducing cognitive hiccups for our users. We are interested in making our apps easy to use, our websites easy to navigate, our information easy to access. After all, most of the time, the process of using the internet is one of uniting a question with an answer\u2014Is this article from a trustworthy source? Is this clothing the style I want? Is this company paying their workers a living wage? Is this website one that can answer my question? Is X a Y?\nWe have a responsibility, therefore, to make information easy for our users to find, understand, and act on. This means\u2014well, this means a lot of things, and I\u2019ve got limited space here, so let\u2019s focus on these three lessons from the food bank:\n\n\nUse plain, familiar language. This advice seems to be given constantly, but that\u2019s because it\u2019s solid and it\u2019s not followed enough. Your menu labels, page names, and headings need to reflect the word choice of your users. Think how much harder it would have been to sort food if the boxes were labeled according to nutritional content, grocery store aisle number, or Latin name. How much would it slow sorting down if the Tomatoes box were labeled Nightshades? It sounds silly, but it\u2019s not that different from sites that use industry jargon, company lingo, acronyms (oh, yes, I\u2019ve seen it), or other internally focused language when trying to provide wayfinding for users. Choose words that your audience knows\u2014not only will they be more likely to spot what they\u2019re looking for on your site or app, but you\u2019ll turn up more often in search results.\n\n\nCreate consistency in all things. Missteps in consistency look like my earlier broth example\u2014changing up how something looks, sounds, or functions creates a moment of cognitive dissonance, and those moments add up. The names of products, the names of brands, the names of files and forms and pages, the names of processes and procedures and concepts\u2014these all need to be consistently spelled, punctuated, linked, and referenced, no matter what section or level the user is in. If submenus are visible in one section, they should be visible in all. If calls-to-action are a graphic button in one section, they are the same graphic button in all. Every affordance, every module, every design choice sets up user expectations; consistency keeps those expectations afloat, making for a smoother experience overall.\n\nMake the system transparent. By this, I do not mean that every piece of content should be elevated at all times. The horror. But I do mean that we should make an effort to communicate the boundaries of the digital space from any given corner within. Navigation structures operate just as much as a table of contents as they do as a method of moving from one place to another. Page hierarchies help explain content relationships, communicating conceptual relevancy and relative importance. Submenus illustrate which related concepts may be found within a given site section. Take care to show information that conveys the depth and breadth of the system, rather than obscuring it.\n\nThis idea of transparency was perhaps the biggest challenge we experienced in food sorting. Imagine us volunteers as users, each looking for a specific piece of information in the larger system. 
Like any new visitor to a website, we came into the system not knowing the full picture. We didn\u2019t know every category label around the conveyer belt, nor what criteria each category warranted. \nThe system wasn\u2019t transparent for us, so we had to make it transparent as we went. We had to stop what we were doing and ask questions. We\u2019d ask staff members. We\u2019d ask more seasoned volunteers. We\u2019d ask each other. We\u2019d make guesses, and guess wrongly, and mess up the boxes, and correct our mistakes, and learn.\nThe more we learned, the easier the sorting became. That is, we were able to sort more quickly, more efficiently, more accurately. The better we understood the system, the better we were at interacting with it.\nThe same is true of our users: the better they understand digital spaces, the more effective they are at using them. But visitors to our apps and websites do not have the luxury of learning the whole system. The fumbling trial-and-error method that I used at the food bank can, on a website, drive users away\u2014or, worse, misinform or hurt them. \nThis is why we must make choices that prioritize transparency, consistency, and familiarity. Our users want to know if X is a Y\u2014well-sorted content can give them the answer.", "year": "2018", "author": "Lisa Maria Martin", "author_slug": "lisamariamartin", "published": "2018-12-16T00:00:00+00:00", "url": "https://24ways.org/2018/the-system-the-search-and-the-food-bank/", "topic": "content"}
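The criteria matching described in the article above is easy to picture as code. A playful sketch, not from the article itself: the food bank's stated rule for protein bars, written as an "Is X a Y?" check (the function name and the 90-day reading of "three months" are my own assumptions).

from datetime import date, timedelta

def counts_as_snack(protein_grams, expiry_date):
    # Protein bars count as Snacks only if they are under 10 grams
    # of protein or will expire within three months (about 90 days)
    return protein_grams < 10 or expiry_date <= date.today() + timedelta(days=90)

print(counts_as_snack(12, date.today() + timedelta(days=30)))   # True: expires soon
print(counts_as_snack(12, date.today() + timedelta(days=365)))  # False: fails both criteria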
{"rowid": 250, "title": "Build up Your Leadership Toolbox", "contents": "Leadership. It can mean different things to different people and vary widely between companies. Leadership is more than just a job title. You won\u2019t wake up one day and magically be imbued with all you need to do a good job at leading. If we don\u2019t have a shared understanding of what a Good Leader looks like, how can we work on ourselves towards becoming one? How do you know if you even could be a leader? Can you be a leader without the title?\nWhat even is it?\nI got very frustrated way back in my days as a senior developer when I was given \u201cadvice\u201d about my leadership style; at the time I didn\u2019t have the words to describe the styles and ways in which I was leading to be able to push back. I heard these phrases a lot:\n\nyou need to step up\nyou need to take charge\nyou need to grab the bull by its horns\nyou need to have thicker skin\nyou need to just be more confident in your leading\nyou need to just make it happen\n\nI appreciate some people\u2019s intent was to help me, but honestly it did my head in. WAT?! What did any of this even mean. How exactly do you \u201cstep up\u201d and how are you evaluating what step I\u2019m on? I am confident, what does being even more confident help achieve with leading? Does that not lead you down the path of becoming an arrogant door knob? >___<\nWhile there is no One True Way to Lead, there is an overwhelming pattern of people in positions of leadership within tech industry being held by men. It felt a lot like what people were fundamentally telling me to do was to be more like an extroverted man. I was being asked to demonstrate more masculine associated qualities (#notallmen). I\u2019ll leave the gendered nature of leadership qualities as an exercise in googling for the reader.\nI\u2019ve never had a good manager and at the time had no one else to ask for help, so I turned to my trusted best friends. Books.\nI <3 books\nI refused to buy into that style of leadership as being the only accepted way to be. There had to be room for different kinds of people to be leaders and have different leadership styles.\nThere are three books that changed me forever in how I approach and think about leadership.\n\nPrimal leadership, by Daniel Goleman, Richard Boyatzis and Annie McKee\nQuiet, by Susan Cain\nDaring Greatly - How the Courage to be Vulnerable transforms the way we live, love, parent and Lead, by Bren\u00e9 Brown\n\nI recommend you read them. Ignore the slightly cheesy titles and trust me, just read them.\nPrimal leadership helped to give me the vocabulary and understanding I needed about the different styles of leadership there are, how and when to apply them.\nQuiet really helped me realise how much I was being undervalued and misunderstood in an extroverted world. If I\u2019d had managers or support from someone who valued introverts\u2019 strengths, things would\u2019ve been very different. I would\u2019ve had someone telling others to step down and shut up for a change rather than pushing on me to step up and talk louder over everyone else. It\u2019s OK to be different and needing different things like time to recharge or time to think before speaking. It also improved my ability to work alongside my more extroverted colleagues by giving me an understanding of their world so I could communicate my needs in a language they would get.\nBren\u00e9 Brown\u2019s book I am forever in debt to. 
Her work gave me the courage to stand up and be my own kind of leader. Even when no-one around me looked or sounded like me, I found my own voice.\nIt takes great courage to be vulnerable and open about what you can and can\u2019t do. Open about your mistakes. Vocalising what you don\u2019t know and asking for help. In some lights, these are seen as weaknesses and many have tried to use them against me, to pull me down and exclude me for talking about them. Dear reader, it did not work; they failed. The truth is, they are my greatest strengths. The privileges I have, I use for good as best and as often as I can.\nJust like gender, leadership is not binary\nIf you google for what a leader is, you\u2019ll get many different answers. I personally think Bren\u00e9\u2019s version is the best as it is one that can apply to a wider range of people, irrespective of job title or function.\n\nI define a leader as anyone who takes responsibility for finding potential in people and processes, and who has the courage to develop that potential.\nBren\u00e9 Brown\n\nBeing a leader isn\u2019t about being the loudest in a room, having veto power, talking over people or ignoring everyone else\u2019s ideas. It\u2019s not about \u201ctelling people what to do\u201d. It\u2019s not about an elevated status that makes you better than others. Nor is it about creating a hand-wavey, far-away vision and forgetting to support people in how to get there.\nBeing a Good Leader is about having a toolbox of leadership styles and skills to choose from depending on the situation. Knowing how and when to apply them is part of the challenge and difficulty in becoming good at it. It is something you will have to continuously work on, forever. There is no Done.\nLeaders are Made, they are not Born.\nBe flexible in your leadership style\n\nTypically, the best, most effective leaders act according to one or more of six distinct approaches to leadership and skillfully switch between the various styles depending on the situation.\n\nThe book Primal Leadership gives a summary of six leadership styles:\n\nVisionary\nCoaching\nAffiliative\nDemocratic\nPacesetting\nCommanding\n\nVisionary, moves people toward a shared dream or future. When change requires a new vision or a clear direction is needed, using a visionary style of leadership helps communicate that picture. By learning how to effectively communicate a story you can help people to move in that direction and give them clarity on why they\u2019re doing what they\u2019re doing.\nCoaching, is about connecting what a person wants and helping to align that with the organisation\u2019s goals. It\u2019s a balance of helping someone improve their performance to fulfil their role and their potential beyond.\nAffiliative, creates harmony by connecting people to each other and requires effective communication to aid facilitation of those connections. This style can be very impactful in healing rifts in a team or in strengthening connections within and across teams. During stressful times, having a positive and supportive connection to those around us really helps see us through.\nDemocratic, values people\u2019s input and gets commitment through participation. Taking this approach can help build buy-in or consensus and is a great way to get valuable input from people. 
The tricky part about this style, I find, is that gathering and listening to everyone\u2019s input doesn\u2019t mean the end result has to please everyone.\nThe next two, sadly, are the ones wielded far too often and have the greatest negative impact. This is where the \u201ctelling people what to do\u201d comes from. When used sparingly and in the right situations, they can be a force for good. However, they must not be your default style.\nPacesetting, when used well, is about meeting challenging and exciting goals. When you need to get high-quality results from a motivated and well-performing team, this can be great to help achieve real focus and drive. Sadly, it is so overused and so poorly executed that it becomes the \u201cjust make it happen\u201d approach and a driver of unrealistic workloads, which contributes to burnout.\nCommanding, when used appropriately, soothes fears by giving clear direction in an emergency or crisis. When shit is on fire, you want to know that your leadership ability can help kick-start a turnaround and bring clarity. Then switch to another style. This approach is also required when dealing with problematic employees or unacceptable behaviour.\nThe commanding style seems to be what a lot of people think being a leader is: taking control and commanding a situation. It should be used sparingly and only when absolutely necessary.\nBe responsible for the power you wield\nIf, reading through those, you find yourself feeling a bit guilty that maybe you overuse some of the styles, or overwhelmed that you haven\u2019t got all of these down and ready to use in your toolbox\u2026\nTake a breath. Take responsibility. Take action.\nNo one is perfect, and it\u2019s OK. You can start right now working on those. You can have a conversation with your team and try being open about how you\u2019re going to try some different styles. You can be vulnerable and own up to mistakes you might\u2019ve made, followed by an apology. You can order those books and read them. Those books will give you more examples of those leadership styles and help you to find your own voice.\nThe impact you can have on the lives of those around you when you\u2019re a leader is huge. You can help be that positive impact, and help discover and develop potential in someone.\n\nTime spent understanding people is never wasted.\nCate Huston.\n\nI believe in you. <3 Mazz.", "year": "2018", "author": "Mazz Mosley", "author_slug": "mazzmosley", "published": "2018-12-10T00:00:00+00:00", "url": "https://24ways.org/2018/build-up-your-leadership-toolbox/", "topic": "business"}
{"rowid": 249, "title": "Fast Autocomplete Search for Your Website", "contents": "Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive.\nI\u2019m going to build a search engine for 24 ways that\u2019s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I\u2019ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface.\n\nTry it out here, then read on to see how I built it.\nFirst step: crawling the data\nThe first step in building a search engine is to grab a copy of the data that you plan to make searchable.\nThere are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don\u2019t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need.\nI\u2019m going to do exactly that against 24 ways: I\u2019ll build a simple crawler using wget, a command-line tool that features a powerful \u201crecursive\u201d mode that\u2019s ideal for scraping websites.\nWe\u2019ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running.\nThen we\u2019ll tell wget to recursively crawl the website, using the --recursive flag.\nWe don\u2019t want to fetch every single page on the site - we\u2019re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017\nWe want to be polite, so let\u2019s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2\nThe first time I ran this, I accidentally downloaded the comments pages as well. We don\u2019t want those, so let\u2019s exclude them from the crawl using -X \"/*/*/comments\".\nFinally, it\u2019s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this.\nTie all of those options together and we get this command:\nwget --recursive --wait 2 --no-clobber \n -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 \n -X \"/*/*/comments\" \n https://24ways.org/archives/ \nIf you leave this running for a few minutes, you\u2019ll end up with a folder structure something like this:\n$ find 24ways.org\n24ways.org\n24ways.org/2013\n24ways.org/2013/why-bother-with-accessibility\n24ways.org/2013/why-bother-with-accessibility/index.html\n24ways.org/2013/levelling-up\n24ways.org/2013/levelling-up/index.html\n24ways.org/2013/project-hubs\n24ways.org/2013/project-hubs/index.html\n24ways.org/2013/credits-and-recognition\n24ways.org/2013/credits-and-recognition/index.html\n...\nAs a quick sanity check, let\u2019s count the number of HTML pages we have retrieved:\n$ find 24ways.org | grep index.html | wc -l\n328\nThere\u2019s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. 
They aren\u2019t linked in the /archives/ yet so we need to point our crawler at the site\u2019s front page instead:\nwget --recursive --wait 2 --no-clobber \n -I /2018 \n -X \"/*/*/comments\" \n https://24ways.org/\nThanks to --no-clobber, this is safe to run every day in December to pick up any new content.\nWe now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let\u2019s use them to build ourselves a search index.\nBuilding a search index using SQLite\nThere are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch, or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL.\nI\u2019m going to use something that\u2019s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite.\nSQLite is the world\u2019s most widely deployed database, even though many people have never even heard of it. That\u2019s because it\u2019s designed to be used as an embedded database: it\u2019s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch!\nSQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn\u2019t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications.\nThis doesn\u2019t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year.\nIt turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension.\nI\u2019ve been doing a lot of work with SQLite recently, and as part of that, I\u2019ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It\u2019s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that\u2019s similar to the Observable notebooks Natalie described on 24 ways yesterday.\nIf you haven\u2019t used Jupyter before, here\u2019s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn\u2019t clash with any other installed packages:\n$ python3 -m venv ./jupyter-venv\n$ ./jupyter-venv/bin/pip install jupyter\n# ... lots of installer output\n# Now let's install some extra packages we will need later\n$ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib\n# And start the notebook web application\n$ ./jupyter-venv/bin/jupyter-notebook\n# This will open your browser to Jupyter at http://localhost:8888/\nYou should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook.\nA neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I\u2019ve published the notebook I used to build the search index on my GitHub account.\nHere\u2019s the Python code I used to scrape the relevant data from the downloaded HTML files. 
Check out the notebook for a line-by-line explanation of what\u2019s going on.\nfrom pathlib import Path\nfrom bs4 import BeautifulSoup as Soup\nbase = Path(\"/Users/simonw/Dropbox/Development/24ways-search\")\narticles = list(base.glob(\"*/*/*/*.html\"))\n# articles is now a list of paths that look like this:\n# PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html')\ndocs = []\nfor path in articles:\n year = str(path.relative_to(base)).split(\"/\")[1]\n url = 'https://' + str(path.relative_to(base).parent) + '/'\n soup = Soup(path.open().read(), \"html5lib\")\n author = soup.select_one(\".c-continue\")[\"title\"].split(\n \"More information about\"\n )[1].strip()\n author_slug = soup.select_one(\".c-continue\")[\"href\"].split(\n \"/authors/\"\n )[1].split(\"/\")[0]\n published = soup.select_one(\".c-meta time\")[\"datetime\"]\n contents = soup.select_one(\".e-content\").text.strip()\n title = soup.find(\"title\").text.split(\" \u25c6\")[0]\n try:\n topic = soup.select_one(\n '.c-meta a[href^=\"/topics/\"]'\n )[\"href\"].split(\"/topics/\")[1].split(\"/\")[0]\n except TypeError:\n topic = None\n docs.append({\n \"title\": title,\n \"contents\": contents,\n \"year\": year,\n \"author\": author,\n \"author_slug\": author_slug,\n \"published\": published,\n \"url\": url,\n \"topic\": topic,\n })\nAfter running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this:\n[\n {\n \"title\": \"Why Bother with Accessibility?\",\n \"contents\": \"Web accessibility (known in other fields as inclus...\",\n \"year\": \"2013\",\n \"author\": \"Laura Kalbag\",\n \"author_slug\": \"laurakalbag\",\n \"published\": \"2013-12-10T00:00:00+00:00\",\n \"url\": \"https://24ways.org/2013/why-bother-with-accessibility/\",\n \"topic\": \"design\"\n },\n {\n \"title\": \"Levelling Up\",\n \"contents\": \"Hello, 24 ways. I\u2019m Ashley and I sell property ins...\",\n \"year\": \"2013\",\n \"author\": \"Ashley Baxter\",\n \"author_slug\": \"ashleybaxter\",\n \"published\": \"2013-12-06T00:00:00+00:00\",\n \"url\": \"https://24ways.org/2013/levelling-up/\",\n \"topic\": \"business\"\n },\n ...\nMy sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here\u2019s how to do that using this list of dictionaries.\nimport sqlite_utils\ndb = sqlite_utils.Database(\"/tmp/24ways.db\")\ndb[\"articles\"].insert_all(docs)\nThat\u2019s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table.\n(I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.)\nYou can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this:\n$ sqlite3 /tmp/24ways.db\nsqlite> .headers on\nsqlite> .mode column\nsqlite> select title, author, year from articles;\ntitle author year \n------------------------------ ------------ ----------\nWhy Bother with Accessibility? Laura Kalbag 2013 \nLevelling Up Ashley Baxte 2013 \nProject Hubs: A Home Base for Brad Frost 2013 \nCredits and Recognition Geri Coady 2013 \nManaging a Mind Christopher 2013 \nRun Ragged Mark Boulton 2013 \nGet Started With GitHub Pages Anna Debenha 2013 \nCoding Towards Accessibility Charlie Perr 2013 \n...\n\nThere\u2019s one last step to take in our notebook. 
We know we want to use SQLite\u2019s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this:\ndb[\"articles\"].enable_fts([\"title\", \"author\", \"contents\"])\nIntroducing Datasette\nDatasette is the open-source tool I\u2019ve been building that makes it easy to both explore SQLite databases and publish them to the internet.\nWe\u2019ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn\u2019t it be nice if we could use a more human-friendly interface for that?\nIf you don\u2019t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I\u2019ll show you how to deploy Datasette to Heroku like this later in the article.\nIf you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter:\n./jupyter-venv/bin/pip install datasette\nThis will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette.\nNow you can run Datasette against the 24ways.db file we created earlier like so:\n./jupyter-venv/bin/datasette /tmp/24ways.db\nThis will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application.\nIf you want to try out Datasette without creating your own 24ways.db file you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db\nPublishing the database to the internet\nOne of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. Here\u2019s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku\u2019s free tier) using datasette publish:\n$ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways\n-----> Python app detected\n-----> Installing requirements with pip\n\n-----> Running post-compile hook\n-----> Discovering process types\n Procfile declares types -> web\n\n-----> Compressing...\n Done: 47.1M\n-----> Launching...\n Released v8\n https://search-24ways.herokuapp.com/ deployed to Heroku\nIf you try this out, you\u2019ll need to pick a different --name, since I\u2019ve already taken search-24ways.\nYou can run this command as many times as you like to deploy updated versions of the underlying database.\nSearching and faceting\nDatasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action.\nSQLite search supports wildcards, so if you want autocomplete-style search where you don\u2019t need to enter full words to start getting results you can add a * to the end of your search term. Here\u2019s a search for acces* which returns articles on accessibility:\nhttp://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A\nA neat feature of Datasette is the ability to calculate facets against your data. 
Here\u2019s a page showing search results for svg with facet counts calculated against both the year and the topic columns:\nhttp://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic\nEvery page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL:\nhttp://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A\nBetter search using custom SQL\nThe search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they\u2019re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page.\nThis exposes the underlying SQL query, which looks like this:\nselect rowid, * from articles where rowid in (\n select rowid from articles_fts where articles_fts match :search\n) order by rowid limit 101\nWe can do better than this by constructing a custom SQL query. Here\u2019s the query we will use instead:\nselect\n snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\n articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nfrom articles\n join articles_fts on articles.rowid = articles_fts.rowid\nwhere articles_fts match :search || \"*\"\n order by rank limit 10;\nYou can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it\u2019s safe to allow users to provide arbitrary SQL select queries for Datasette to execute.\nThere\u2019s a lot going on here! Let\u2019s break the SQL down line-by-line:\nselect\n snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\nWe\u2019re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. We use two unique strings that I made up to mark the beginning and end of each match - you\u2019ll see why in the JavaScript later on.\n articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nThese are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table.\nfrom articles\n join articles_fts on articles.rowid = articles_fts.rowid\narticles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it.\nwhere articles_fts match :search || \"*\"\n order by rank limit 10;\n:search || \"*\" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results.\nHow do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we\u2019re going to use ?_shape=array to get back a plain array of objects:\nJSON API call to search for articles matching SVG\nThe HTML version of that page shows the time taken to execute the SQL in the footer. 
Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature.\nA simple JavaScript autocomplete search interface\nI considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own.\nWe need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh:\nfunction debounce(func, wait, immediate) {\n let timeout;\n return function() {\n let context = this, args = arguments;\n let later = () => {\n timeout = null;\n if (!immediate) func.apply(context, args);\n };\n let callNow = immediate && !timeout;\n clearTimeout(timeout);\n timeout = setTimeout(later, wait);\n if (callNow) func.apply(context, args);\n };\n};\nWe\u2019ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing.\nSince we\u2019re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I\u2019m amazed that browsers still don\u2019t bundle a default one of these:\nconst htmlEscape = (s) => s.replace(\n /&/g, '&amp;'\n).replace(\n />/g, '&gt;'\n).replace(\n /</g, '&lt;'\n);\nWe\u2019ll also need a little markup for the page: a search box and an empty element to render the results into:\n<h1>Autocomplete search</h1>\n<p><input id=\"searchbox\" type=\"search\" placeholder=\"Search 24ways\"></p>\n<div id=\"results\"></div>\nAnd now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript:\n// Embed the SQL query in a multi-line backtick string:\nconst sql = `select\n snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\n articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nfrom articles\n join articles_fts on articles.rowid = articles_fts.rowid\nwhere articles_fts match :search || \"*\"\n order by rank limit 10`;\n\n// Grab a reference to the <input> search box\nconst searchbox = document.getElementById(\"searchbox\");\n\n// Used to avoid race-conditions:\nlet requestInFlight = null;\n\nsearchbox.onkeyup = debounce(() => {\n const q = searchbox.value;\n // Construct the API URL, using encodeURIComponent() for the parameters\n const url = (\n \"https://search-24ways.herokuapp.com/24ways-866073b.json?sql=\" +\n encodeURIComponent(sql) +\n `&search=${encodeURIComponent(q)}&_shape=array`\n );\n // Unique object used just for race-condition comparison\n let currentRequest = {};\n requestInFlight = currentRequest;\n fetch(url).then(r => r.json()).then(d => {\n if (requestInFlight !== currentRequest) {\n // Avoid race conditions where a slow request returns\n // after a faster one.\n return;\n }\n let results = d.map(r => `\n <div class=\"result\">\n <h3><a href=\"${r.url}\">${htmlEscape(r.title)}</a></h3>\n <p><small>${htmlEscape(r.author)} - ${r.year}</small></p>\n <p>${highlight(r.snippet)}</p>\n </div>\n `).join(\"\");\n document.getElementById(\"results\").innerHTML = results;\n });\n}, 100); // debounce every 100ms\nThere\u2019s just one more utility function, used to help construct the HTML results:\nconst highlight = (s) => htmlEscape(s).replace(\n /b4de2a49c8/g, '<b>'\n).replace(\n /8c94a2ed4b/g, '</b>'\n);\nThis is what those unique strings passed to the snippet() function were for.\nAvoiding race conditions in autocomplete\nOne trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case:\n\nUser types acces\nBrowser sends request A - querying documents matching acces*\nUser continues to type accessibility\nBrowser sends request B - querying documents matching accessibility*\nRequest B returns. It was fast, because there are fewer documents matching the full term\nThe results interface updates with the documents from request B, matching accessibility*\nRequest A returns results (this was the slower of the two requests)\nThe results interface updates with the documents from request A - results matching acces*\n\nThis is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with those results from earlier on.\nThankfully there\u2019s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null.\nAny time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well.\nWhen the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request we can detect that and avoid updating the results.
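\nAn alternative way to handle this, in browsers that support it, is to cancel the stale request outright with AbortController rather than ignoring its response. A minimal sketch of that option - it reuses the sql, searchbox and debounce() definitions from above, and is not the approach the code above takes:\nlet controller = null;\n\nsearchbox.onkeyup = debounce(() => {\n const q = searchbox.value;\n const url = (\n \"https://search-24ways.herokuapp.com/24ways-866073b.json?sql=\" +\n encodeURIComponent(sql) +\n `&search=${encodeURIComponent(q)}&_shape=array`\n );\n // Cancel any request still in flight before starting a new one\n if (controller) {\n controller.abort();\n }\n controller = new AbortController();\n fetch(url, { signal: controller.signal }).then(r => r.json()).then(d => {\n // render the results exactly as above\n }).catch(err => {\n // fetch() rejects with an AbortError when cancelled - that is expected\n if (err.name !== \"AbortError\") throw err;\n });\n}, 100);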
It\u2019s not a lot of code, really\nAnd that\u2019s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don\u2019t think it matters. You\u2019re welcome to refactor it as much as you like.\nHow good is this search implementation? I\u2019ve been building search engines for a long time using a wide variety of technologies and I\u2019m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it\u2019s based on SQL makes it easy and flexible to work with.\nA surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite.\nMore importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you\u2019re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now.\nFor more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter.", "year": "2018", "author": "Simon Willison", "author_slug": "simonwillison", "published": "2018-12-19T00:00:00+00:00", "url": "https://24ways.org/2018/fast-autocomplete-search-for-your-website/", "topic": "code"}
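The FTS5-plus-wildcard pattern at the heart of the article above can be tried end to end with nothing but Python's standard library. A minimal sketch: the table name and sample rows are illustrative, and it assumes your Python's bundled SQLite was compiled with FTS5, which is typical of modern builds.

import sqlite3

# Build a tiny in-memory FTS5 index (illustrative sample data)
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE articles_fts USING fts5(title, contents)")
db.executemany(
    "INSERT INTO articles_fts (title, contents) VALUES (?, ?)",
    [
        ("Why Bother with Accessibility?", "Web accessibility matters to everyone"),
        ("Levelling Up", "Property insurance for web folk"),
    ],
)

# Append * for autocomplete-style prefix matching; order by rank
# so the strongest matches come back first
q = "acces"
rows = db.execute(
    "SELECT title, rank FROM articles_fts "
    "WHERE articles_fts MATCH ? ORDER BY rank LIMIT 10",
    (q + "*",),
).fetchall()
print(rows)  # the Accessibility article matches via the acces* prefix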
{"rowid": 248, "title": "How to Use Audio on the Web", "contents": "I know what you\u2019re thinking. I never never want to hear sound anywhere near a browser, ever ever, wow! \ud83d\ude49\nYou\u2019re having flashbacks, flashbacks to the days of yore, when we had a element and yup did everyone think that was the most rad thing since