\nThis code only worked in Internet Explorer and in some versions of Safari, but that was plenty of people to befriend. However, MySpace was prepared for this: they also filtered the word javascript from user-submitted markup.
\nSamy discovered that by inserting a line break into his code, MySpace would not filter out the word javascript. The browser would continue to run the code just fine! Samy had now broken past MySpace\u2019s first line of defence and was able to start running code on his profile page. Now he started looking at what he could do with that code.\nalert(document.body.innerHTML)\nSamy wondered if he could inspect the page\u2019s source to find the details of other MySpace users to befriend. To do this, you would normally use document.body.innerHTML, but MySpace had filtered this too.\nalert(eval('document.body.inne' + 'rHTML'))\nThis isn\u2019t a problem if you build up JavaScript code inside a string and execute it using the eval() function. This trick also worked with XMLHttpRequest.onreadystatechange, which allowed Samy to send friend requests to the MySpace API and install the JavaScript code on his new friends\u2019 pages.\nOne final obstacle stood in his way. The same origin policy is a security mechanism that prevents scripts hosted on one domain interacting with sites hosted on another domain.\nif (location.hostname == 'profile.myspace.com') {\n document.location = 'http://www.myspace.com'\n + location.pathname + location.search\n}\nSamy discovered that only the http://www.myspace.com domain would accept his API requests, and requests from http://profile.myspace.com were being blocked by the browser\u2019s same-origin policy. By redirecting the browser to http://www.myspace.com, he discovered that he could load profile pages and successfully make requests to MySpace\u2019s API. Samy installed this code on his profile page, and he waited.\n\nOver the course of the next day, over a million people unwittingly installed Samy\u2019s code into their MySpace profile pages and invited their friends. The load of friend requests on MySpace was so large that the site buckled and shut down. It took them two hours to remove Samy\u2019s code and patch the security holes he exploited. Samy was raided by the United States Secret Service and sentenced to do 90 days of community service.\nThis is the power of installing a little bit of JavaScript on someone else\u2019s website. It is called cross-site scripting, and its effects can be devastating. It is suspected that cross-site scripting was to blame for the 2018 British Airways breach that leaked the credit card details of 380,000 people.\nSo how can you help protect yourself from cross-site scripting?\nAlways sanitise user input when it comes in, using a library such as sanitize-html. Open source tools like this benefit from hundreds of hours of work from dozens of experienced contributors. Don\u2019t be tempted to roll your own protection. MySpace was prepared, but they were not prepared enough. It makes no sense to turn this kind of help down.\nYou can also use an auto-escaping templating language to make sure nobody else\u2019s HTML can get into your pages. Both Angular and React will do this for you, and they are extremely convenient to use.\nYou should also implement a content security policy to restrict the domains that content like scripts and stylesheets can be loaded from. Loading content from sites not under your control is a significant security risk, and you should use a CSP to lock this down to only the sources you trust. CSP can also block the use of the eval() function.\nFor content not under your control, consider setting up sub-resource integrity protection. 
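In practice, sub-resource integrity is a single attribute on the tag that loads the resource. A sketch of what it looks like (the URL and hash below are illustrative, not a real file and digest):\n<script src=\"https://cdn.example.com/library.min.js\"\n integrity=\"sha384-oqVuAfXRKap7fdgcCY5uykM6\"\n crossorigin=\"anonymous\"></script>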
This allows you to add hashes to stylesheets and scripts you include on your website. Hashes are like fingerprints for digital files; if the content changes, so does the fingerprint. Adding hashes will allow your browser to keep your site safe if the content changes without you knowing.\nnpm audit: Protecting yourself from code you don\u2019t own\nJavaScript and npm run the modern web. Together, they make it easy to take advantage of the world\u2019s largest public registry of open source software. How do you protect yourself from code written by someone you\u2019ve never met? Enter npm audit.\nnpm audit reviews the security of your website\u2019s dependency tree. You can start using it by upgrading to the latest version of npm:\nnpm install npm -g\nnpm audit\nWhen you run npm audit, npm submits a description of your dependencies to the Registry, which returns a report of known vulnerabilities for the packages you have installed.\n\nIf your website has a known cross-site scripting vulnerability, npm audit will tell you about it. What\u2019s more, if the vulnerability has been patched, running npm audit fix will automatically install the patched package for you!\nSecuring your site like it\u2019s 2019\nThe truth is that since the early days of the web, the stakes of a security breach have become much, much higher. The web is so much more than fandom and mailing DVDs - online banking is now mainstream, social media and dating websites store intimate information about our personal lives, and we are even inviting the internet into our homes.\nHowever, we have powerful new allies helping us stay safe. There are more resources than ever before to teach us how to write secure code. Tools like Angular and React are designed with security features baked-in from the start. We have a new generation of security tools like npm audit to watch over our dependencies.\nAs we roll over into 2019, let\u2019s take the opportunity to reflect on the security of the code we write and be grateful for everything we\u2019ve learned in the last twenty years.", "year": "2018", "author": "Katie Fenn", "author_slug": "katiefenn", "published": "2018-12-01T00:00:00+00:00", "url": "https://24ways.org/2018/securing-your-site-like-its-1999/", "topic": "code"}
{"rowid": 252, "title": "Turn Jekyll up to Eleventy", "contents": "Sometimes it pays not to over complicate things. While many of the sites we use on a daily basis require relational databases to manage their content and dynamic pages to respond to user input, for smaller, simpler sites, serving pre-rendered static HTML is usually a much cheaper \u2014 and more secure \u2014 option. \nThe JAMstack (JavaScript, reusable APIs, and prebuilt Markup) is a popular marketing term for this way of building websites, but in some ways it\u2019s a return to how things were in the early days of the web, before developers started tinkering with CGI scripts or Personal HomePage. Indeed, my website has always served pre-rendered HTML; first with the aid of Movable Type and more recently using Jekyll, which Anna wrote about in 2013.\nBy combining three approachable languages \u2014 Markdown for content, YAML for data and Liquid for templating \u2014 the ergonomics of Jekyll found broad appeal, influencing the design of the many static site generators that followed. But Jekyll is not without its faults. Aside from notoriously slow build times, it\u2019s also built using Ruby. While this is an elegant programming language, it is yet another ecosystem to understand and manage, and often alongside one we already use: JavaScript. For all my time using Jekyll, I would think to myself \u201cthis, but in Node\u201d. Thankfully, one of Santa\u2019s elves (Zach Leatherman) granted my Atwoodian wish and placed such a static site generator under my tree.\nIntroducing Eleventy\nEleventy is a more flexible alternative Jekyll. Besides being written in Node, it\u2019s less strict about how to organise files and, in addition to Liquid, supports other templating languages like EJS, Pug, Handlebars and Nunjucks. Best of all, its build times are significantly faster (with future optimisations promising further gains).\nAs content is saved using the familiar combination of YAML front matter and Markdown, transitioning from Jekyll to Eleventy may seem like a reasonable idea. Yet as I\u2019ve discovered, there are a few gotchas. If you\u2019ve been considering making the switch, here are a few tips and tricks to help you on your way1.\nNote: Throughout this article, I\u2019ll be converting Matt Cone\u2019s Markdown Guide site as an example. If you want to follow along, start by cloning the git repository, and then change into the project directory:\ngit clone https://github.com/mattcone/markdown-guide.git\ncd markdown-guide\nBefore you start\nIf you\u2019ve used tools like Grunt, Gulp or Webpack, you\u2019ll be familiar with Node.js but, if you\u2019ve been exclusively using Jekyll to compile your assets as well as generate your HTML, now\u2019s the time to install Node.js and set up your project to work with its package manager, NPM:\n\nInstall Node.js:\n\nMac: If you haven\u2019t already, I recommend installing Homebrew, a package manager for the Mac. Then in the Terminal type brew install node.\nWindows: Download the Windows installer from the Node.js website and follow the instructions.\n\nInitiate NPM: Ensure you are in the directory of your project and then type npm init. This command will ask you a few questions before creating a file called package.json. Like RubyGems\u2019s Gemfile, this file contains a list of your project\u2019s third-party dependencies.\n\nIf you\u2019re managing your site with Git, make sure to add node_modules to your .gitignore file too. 
Unlike RubyGems, NPM stores its dependencies alongside your project files. This folder can get quite large, and as it contains binaries compiled to work with the host computer, it shouldn\u2019t be version controlled. Eleventy will also honour the contents of this file, meaning anything you want Git to ignore, Eleventy will ignore too.\nInstalling Eleventy\nWith Node.js installed and your project set up to work with NPM, we can now install Eleventy as a dependency:\nnpm install --save-dev @11ty/eleventy\nIf you open package.json you should see the following:\n\u2026\n\"devDependencies\": {\n \"@11ty/eleventy\": \"^0.6.0\"\n}\n\u2026\nWe can now run Eleventy from the command line using NPM\u2019s npx command. For example, to convert the README.md file to HTML, we can run the following:\nnpx eleventy --input=README.md --formats=md\nThis command will generate a rendered HTML file at _site/README/index.html. Like Jekyll, Eleventy shares the same default name for its output directory (_site), a pattern we will see repeatedly during the transition.\nConfiguration\nWhereas Jekyll uses the declarative YAML syntax for its configuration file, Eleventy uses JavaScript. This allows its options to be scripted, enabling some powerful possibilities as we\u2019ll see later on.\nWe\u2019ll start by creating our configuration file (.eleventy.js), copying the relevant settings in _config.yml over to their equivalent options:\nmodule.exports = function(eleventyConfig) {\n return {\n dir: {\n input: \"./\", // Equivalent to Jekyll's source property\n output: \"./_site\" // Equivalent to Jekyll's destination property\n }\n };\n};\nA few other things to bear in mind:\n\n\nWhereas Jekyll allows you to list folders and files to ignore under its exclude property, Eleventy looks for these values inside a file called .eleventyignore (in addition to .gitignore).\n\nBy default, Eleventy uses markdown-it to parse Markdown. If your content uses advanced syntax features (such as abbreviations, definition lists and footnotes), you\u2019ll need to pass Eleventy an instance of this (or another) Markdown library configured with the relevant options and plugins.\n\nLayouts\nOne area in which Eleventy currently lacks flexibility is the location of layouts, which must reside within the _includes directory (see this issue on GitHub).\nWanting to keep our layouts together, we\u2019ll move them from _layouts to _includes/layouts, and then update references to incorporate the layouts sub-folder. We could update the layout: frontmatter property in each of our content files, but another option is to create aliases in Eleventy\u2019s config:\nmodule.exports = function(eleventyConfig) {\n // Aliases are in relation to the _includes folder\n eleventyConfig.addLayoutAlias('about', 'layouts/about.html');\n eleventyConfig.addLayoutAlias('book', 'layouts/book.html');\n eleventyConfig.addLayoutAlias('default', 'layouts/default.html');\n\n return {\n dir: {\n input: \"./\",\n output: \"./_site\"\n }\n };\n}\nDetermining which template language to use\nEleventy will transform Markdown (.md) files using Liquid by default, but we\u2019ll need to tell Eleventy how to process other files that are using Liquid templates. There are a few ways to achieve this, but the easiest is to use file extensions. In our case, we have some files in our api folder that we want to process with Liquid and output as JSON. By appending the .liquid file extension (i.e. 
basic-syntax.json becomes basic-syntax.json.liquid), Eleventy will know what to do.\nVariables\nOn the surface, Jekyll and Eleventy appear broadly similar, but as each models its content and data a little differently, some template variables will need updating.\nSite variables\nAlongside build settings, Jekyll lets you store common values in its configuration file which can be accessed in our templates via the site.* namespace. For example, in our Markdown Guide, we have the following values:\ntitle: \"Markdown Guide\"\nurl: https://www.markdownguide.org\nbaseurl: \"\"\nrepo: http://github.com/mattcone/markdown-guide\ncomments: false\nauthor:\n name: \"Matt Cone\"\nog_locale: \"en_US\"\nEleventy\u2019s configuration uses JavaScript which is not suited to storing values like this. However, like Jekyll, we can use data files to store common values. If we add our site-wide values to a JSON file inside a folder called _data and name this file site.json, we can keep the site.* namespace and leave our variables unchanged.\n{\n \"title\": \"Markdown Guide\",\n \"url\": \"https://www.markdownguide.org\",\n \"baseurl\": \"\",\n \"repo\": \"http://github.com/mattcone/markdown-guide\",\n \"comments\": false,\n \"author\": {\n \"name\": \"Matt Cone\"\n },\n \"og_locale\": \"en_US\"\n}\nPage variables\nThe table below shows a mapping of common page variables. As a rule, frontmatter properties are accessed directly, whereas derived metadata values (things like URLs, dates etc.) get prefixed with the page.* namespace:\n\nJekyll \u2192 Eleventy\npage.url \u2192 page.url\npage.date \u2192 page.date\npage.path \u2192 page.inputPath\npage.id \u2192 page.outputPath\npage.name \u2192 page.fileSlug\npage.content \u2192 content\npage.title \u2192 title\npage.foobar \u2192 foobar\n\nWhen iterating through pages, frontmatter values are available via the data object while content is available via templateContent:\n\nJekyll \u2192 Eleventy\nitem.url \u2192 item.url\nitem.date \u2192 item.date\nitem.path \u2192 item.inputPath\nitem.name \u2192 item.fileSlug\nitem.id \u2192 item.outputPath\nitem.content \u2192 item.templateContent\nitem.title \u2192 item.data.title\nitem.foobar \u2192 item.data.foobar\n\nIdeally the discrepancy between page and item variables will change in a future version (see this GitHub issue), making it easier to understand the way Eleventy structures its data.\nPagination variables\nWhereas Jekyll\u2019s pagination feature is limited to paginating posts on one page, Eleventy allows you to paginate any collection of documents or data. Given this disparity, the changes to pagination are more significant, but this table shows a mapping of equivalent variables:\n\nJekyll \u2192 Eleventy\npaginator.page \u2192 pagination.pageNumber\npaginator.per_page \u2192 pagination.size\npaginator.posts \u2192 pagination.items\npaginator.previous_page_path \u2192 pagination.previousPageHref\npaginator.next_page_path \u2192 pagination.nextPageHref\n\nFilters\nAlthough Jekyll uses Liquid, it provides a set of filters that are not part of the core Liquid library. There are quite a few \u2014 more than can be covered by this article \u2014 but you can replicate them by using Eleventy\u2019s addFilter configuration option. Let\u2019s convert two used by our Markdown Guide: jsonify and where.\nThe jsonify filter outputs an object or string as valid JSON. As JavaScript provides a native JSON method, we can use this in our replacement filter. 
addFilter takes two arguments; the first is the name of the filter and the second is the function to which we will pass the content we want to transform:\n// {{ variable | jsonify }}\neleventyConfig.addFilter('jsonify', function (variable) {\n return JSON.stringify(variable);\n});\nJekyll\u2019s where filter is a little more complicated in that it takes two additional arguments: the key to look for, and the value it should match:\n{{ site.members | where: \"graduation_year\",\"2014\" }}\nTo account for this, instead of passing one value to the second argument of addFilter, we can instead pass three: the array we want to examine, the key we want to look for and the value it should match:\n// {{ array | where: key,value }}\neleventyConfig.addFilter('where', function (array, key, value) {\n return array.filter(item => {\n const keys = key.split('.');\n const reducedKey = keys.reduce((object, key) => {\n return object[key];\n }, item);\n\n return (reducedKey === value ? item : false);\n });\n});\nThere\u2019s quite a bit going on within this filter, but I\u2019ll try to explain. Essentially we\u2019re examining each item in our array, reducing key (passed as a string using dot notation) so that it can be parsed correctly (as an object reference) before comparing its value to value. If it matches, item remains in the returned array, else it\u2019s removed. Phew!\nIncludes\nAs with filters, Jekyll provides a set of tags that aren\u2019t strictly part of Liquid either. This includes one of the most useful, the include tag. LiquidJS, the library Eleventy uses, does provide an include tag, but one using the slightly different syntax defined by Shopify. If you\u2019re not passing variables to your includes, everything should work without modification. Otherwise, note that whereas with Jekyll you would do this:\n\n{% include include.html value=\"key\" %}\n\n\n{{ include.value }}\nin Eleventy, you would do this:\n\n{% include \"include.html\", value: \"key\" %}\n\n\n{{ value }}\nA downside of Shopify\u2019s syntax is that variable assignments are no longer scoped to the include and can therefore leak; keep this in mind when converting your templates as you may need to make further adjustments.\nTweaking Liquid\nYou may have noticed in the above example that LiquidJS expects the names of included files to be quoted (else it treats them as variables). We could update our templates to add quotes around file names (the recommended approach), but we could also disable this behaviour by setting LiquidJS\u2019s dynamicPartials option to false. Additionally, Eleventy doesn\u2019t support the include_relative tag, meaning you can\u2019t include files relative to the current document. However, LiquidJS does let us define multiple paths to look for included files via its root option. \nThankfully, Eleventy allows us to pass options to LiquidJS:\neleventyConfig.setLiquidOptions({\n dynamicPartials: false,\n root: [\n '_includes',\n '.'\n ]\n});\nCollections\nJekyll\u2019s collections feature lets authors create arbitrary collections of documents beyond pages and posts. Eleventy provides a similar feature, but in a far more powerful way.\nCollections in Jekyll\nIn Jekyll, creating collections requires you to add the name of your collections to _config.yml and create corresponding folders in your project. 
Our Markdown Guide has two collections:\ncollections:\n - basic-syntax\n - extended-syntax\nThese correspond to the folders _basic-syntax and _extended-syntax whose content we can iterate over like so:\n{% for syntax in site.extended-syntax %}\n {{ syntax.title }}\n{% endfor %}\nCollections in Eleventy\nThere are two ways you can set up collections in 11ty. The first, and most straightforward, is to use the tag property in content files:\n---\ntitle: Strikethrough\nsyntax-id: strikethrough\nsyntax-summary: \"~~The world is flat.~~\"\ntag: extended-syntax\n---\nWe can then iterate over tagged content like this:\n{% for syntax in collections.extended-syntax %}\n {{ syntax.data.title }}\n{% endfor %}\nEleventy also allows us to configure collections programmatically. For example, instead of using tags, we can search for files using a glob pattern (a way of specifying a set of filenames to search for using wildcard characters):\neleventyConfig.addCollection('basic-syntax', collection => {\n return collection.getFilteredByGlob('_basic-syntax/*.md');\n});\n\neleventyConfig.addCollection('extended-syntax', collection => {\n return collection.getFilteredByGlob('_extended-syntax/*.md');\n});\nWe can extend this further. For example, say we wanted to sort a collection by the display_order property in our document\u2019s frontmatter. We could take the results of collection.getFilteredByGlob and then use JavaScript\u2019s sort method to sort the result:\neleventyConfig.addCollection('example', collection => {\n return collection.getFilteredByGlob('_examples/*.md').sort((a, b) => {\n return a.data.display_order - b.data.display_order;\n });\n});\nHopefully, this gives you just a hint of what\u2019s possible using this approach.\nUsing directory data to manage defaults\nBy default, Eleventy will maintain the structure of your content files when generating your site. In our case, that means /_basic-syntax/lists.md is generated as /_basic-syntax/lists/index.html. Like Jekyll, we can change where files are saved using the permalink property. For example, if we want the URL for this page to be /basic-syntax/lists.html we can add the following:\n---\ntitle: Lists\nsyntax-id: lists\napi: \"no\"\npermalink: /basic-syntax/lists.html\n---\nAgain, this is probably not something we want to manage on a file-by-file basis, but once again Eleventy has features that can help: directory data and permalink variables.\nFor example, to achieve the above for all content stored in the _basic-syntax folder, we can create a JSON file that shares the name of that folder and sits inside it, i.e. _basic-syntax/_basic-syntax.json and set our default values. For permalinks, we can use Liquid templating to construct our desired path:\n{\n \"layout\": \"syntax\",\n \"tag\": \"basic-syntax\",\n \"permalink\": \"basic-syntax/{{ title | slug }}.html\"\n}\nHowever, Markdown Guide doesn\u2019t publish syntax examples at individual permanent URLs; it merely uses content files to store data. So let\u2019s change things around a little. No longer tied to Jekyll\u2019s rules about where collection folders should be saved and how they should be labelled, we\u2019ll move them into a folder called _content:\nmarkdown-guide\n\u2514\u2500\u2500 _content\n \u251c\u2500\u2500 basic-syntax\n \u251c\u2500\u2500 extended-syntax\n \u251c\u2500\u2500 getting-started\n \u2514\u2500\u2500 _content.json\nWe will also add a directory data file (_content.json) inside this folder. 
As directory data is applied recursively, setting permalink to false will mean all content in this folder and its children will no longer be published:\n{\n \"permalink\": false\n}\nStatic files\nEleventy only transforms files whose template language it\u2019s familiar with. But often we may have static assets that don\u2019t need converting, but do need copying to the destination directory. For this, we can use pass-through file copy. In our configuration file, we tell Eleventy what folders/files to copy with the addPassthroughCopy option. Then in the return statement, we enable this feature by setting passthroughFileCopy to true:\nmodule.exports = function(eleventyConfig) {\n \u2026\n\n // Copy the `assets` directory to the compiled site folder\n eleventyConfig.addPassthroughCopy('assets');\n\n return {\n dir: {\n input: \"./\",\n output: \"./_site\"\n },\n passthroughFileCopy: true\n };\n}\nFinal considerations\nAssets\nUnlike Jekyll, Eleventy provides no support for asset compilation or bundling scripts \u2014 we have plenty of choices in that department already. If you\u2019ve been using Jekyll to compile Sass files into CSS, or CoffeeScript into JavaScript, you will need to research alternative options, which are beyond the scope of this article, sadly.\nPublishing to GitHub Pages\nOne of the benefits of Jekyll is its deep integration with GitHub Pages. Publishing an Eleventy generated site \u2014 or any site not built with Jekyll \u2014 to GitHub Pages can be quite involved; it typically means copying the generated site to the gh-pages branch or including that branch as a submodule. Alternatively, you could use a continuous integration service like Travis or CircleCI and push the generated site to your web server. It\u2019s enough to make your head spin! Perhaps for this reason, a number of specialised static site hosts have emerged such as Netlify and Google Firebase. But remember: you can publish a static site almost anywhere!\n\nGoing one louder\nIf you\u2019ve been considering making the switch, I hope this brief overview has been helpful. But it also serves as a reminder of why it can be prudent to avoid jumping aboard bandwagons. \nWhile it\u2019s fun to try new software and emerging technologies, doing so can require a lot of work and compromise. For all of Eleventy\u2019s appeal, it\u2019s only a year old so has little in the way of an ecosystem of plugins or themes. It also only has one maintainer. Jekyll on the other hand is a mature project with a large community of maintainers and contributors supporting it.\nI moved my site to Eleventy because the slowness and inflexibility of Jekyll were preventing me from doing the things I wanted to do. But I also had time to invest in the transition. After reading this guide, and considering the specific requirements of your project, you may decide to stick with Jekyll, especially if the output will essentially stay the same. And that\u2019s perfectly fine! \nBut these go to 11.\n\n\n\n\nInformation provided is correct as of Eleventy v0.6.0 and Jekyll v3.8.5\u00a0\u21a9", "year": "2018", "author": "Paul Lloyd", "author_slug": "paulrobertlloyd", "published": "2018-12-11T00:00:00+00:00", "url": "https://24ways.org/2018/turn-jekyll-up-to-eleventy/", "topic": "content"}
{"rowid": 241, "title": "Jank-Free Image Loads", "contents": "There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then \u2014\nYour browser doesn\u2019t support HTML5 video. Here is\n a link to the video instead.\n\n\u2014 oops! \u2014 an image pops in above it, pushing said text down the page, and our poor reader loses their place.\nBy default, partially-loaded pages have the user experience of a slippery fish, or spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I\u2019ll chart a path into a jank-free future \u2013 one in which it\u2019s easy and natural to author
elements that load like this:\n(video: the same image loads into space reserved for it, and the text never moves)\nJank is a very old problem, and there is a very old solution to it: the width and height attributes on img
. The idea is: if we stick an image\u2019s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives.\n\nwidth\nSpecifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network.\n\n\u2014The HTML 3.2 Specification, published on January 14 1997\nUnfortunately for us, when width and height were first spec\u2019d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird:\nSee the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen.\n\nwidth and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths.\nI have good news, bad news, and great news.\nThe good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them.\nThe bad news is that these techniques are all terrible, cumbersome hacks. They\u2019re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways.\nSo, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform.\naspect-ratio in CSS\nThe first proposed feature? An aspect-ratio property in CSS!\nThis would allow us to write CSS like this:\nimg {\n width: 100%;\n}\n\n.thumb {\n aspect-ratio: 1/1;\n}\n\n.hero {\n aspect-ratio: 16/9;\n}\nThis\u2019ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above.\nAlas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It\u2019s images \u2013 possibly user generated images \u2013 that can have any aspect ratio. The really tricky problem is unknown-when-you\u2019re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles:\n
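A sketch of what that means in practice (the URL and ratio here are placeholders):\n<img src=\"image.jpg\" style=\"aspect-ratio: 4/3\" />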
\nAnd inline styles give me the heebie-jeebies! As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style=\"\" attribute. And you know what? The old man has a point! By sticking super-high-specificity inline styles in my content, I\u2019m cutting off my (or anyone else\u2019s) ability to change those aspect ratios, for whatever reason, later.\nHow might we specify aspect ratios at a lower level? How might we give browsers information about an image\u2019s dimensions, without giving them explicit instructions about how to style it?\nI\u2019ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio!\nA brief note on intrinsic and extrinsic sizing\nWhat do I mean by \u201cintrinsic\u201d and \u201cextrinsic?\u201d\nThe intrinsic size of an image is, put simply, how big it\u2019d be if you plopped it onto a page and applied no CSS to it whatsoever. An 800\u00d7600 image has an intrinsic width of 800px.\nThe extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800\u00d7600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn\u2019t change: its intrinsic size is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px.\nIt surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity).\nCSS aspect-ratio lets us avoid setting extrinsic heights and widths \u2013 and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it.\nThe last tool I\u2019m going to talk about gets us out of the extrinsic sizing game altogether \u2014 which, I think, is only appropriate for a feature that we\u2019re going to be using in HTML.\nintrinsicsize in HTML\nThe proposed intrinsicsize attribute will let you do this:\n
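Something along these lines (a sketch based on the proposal; image.jpg is a placeholder):\n<img src=\"image.jpg\" intrinsicsize=\"800x600\" />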
\nThat tells the browser, \u201chey, this image.jpg that I\u2019m using here \u2013 I know you haven\u2019t loaded it yet but I\u2019m just going to let you know right away that it\u2019s going to have an intrinsic size of 800\u00d7600.\u201d This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image\u2019s intrinsic size.\nYou may ask (I did!): wait, what if my img
references multiple resources, which all have different intrinsic sizes? Well, if you\u2019re using srcset, intrinsicsize is a bit of a misnomer \u2013 what the attribute will do then is specify an intrinsic aspect ratio:\n
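Perhaps something like this (a sketch; the file names and widths are placeholders):\n<img srcset=\"small.jpg 300w, medium.jpg 600w, large.jpg 1200w\"\n sizes=\"75vw\"\n intrinsicsize=\"3x2\" />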
\nIn the future (and behind the \u201cExperimental Web Platform Features\u201d flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 \u2013 it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2.\nCan\u2019t wait\nI seem to have gotten myself into the weeds a bit. Sizing on the web is complicated!\nDon\u2019t let all of these details bury the big takeaway here: sometime soon (\ud83e\udd1e 2019\u203d \ud83e\udd1e), we\u2019ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. I can\u2019t wait!", "year": "2018", "author": "Eric Portis", "author_slug": "ericportis", "published": "2018-12-21T00:00:00+00:00", "url": "https://24ways.org/2018/jank-free-image-loads/", "topic": "code"}
{"rowid": 264, "title": "Dynamic Social Sharing Images", "contents": "Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you\u2019d written a blog post and you wanted to share it with those who follow you, you could post a link and your followers would see it in their streams. Oh heady days! \nWith so many social channels and a proliferation of content and promotions flying past in everyone\u2019s streams, it\u2019s no longer enough to share content on social media, you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader\u2019s attention if you\u2019re trying to get as many eyes as possible on that sweet, sweet social content.\nOne of the best ways to grab attention with your posts or tweets is to include an image. There\u2019s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures from anything from 35% to 150% improvement from just having image in a post. Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much.\nSo without hard stats to quote, we\u2019ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them.\nAdding sharing images\nThe process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified.\n
\nThere\u2019s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing.\nThis is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don\u2019t necessarily have an image? One approach is to use stock photography, but that\u2019s not going to be right for every situation.\nThis was something we faced with 24 ways in 2017. We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don\u2019t usually lend themselves directly to being the main \u2018hero\u2019 image for an article.\nPutting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article.\nOne of the hand-made sharing images from 2017\nEach day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this.\nHatching a new plan\nOne initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility.\nThe more I thought about this and how much I wish graphic design tools worked just a little bit more like CSS, the obvious solution hit me. We should just build it with CSS!\nIn fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn\u2019t this just be another page on the site generated by the CMS?\nBreaking it down, I figured the steps needed would be something like:\n\nCreate the CSS to lay out a component that could be turned into an image\nAdd a new field to articles in the CMS to hold a handpicked quote\nBuild a new article template in the CMS to output the author name and quote dynamically for any article\n\u2026 um \u2026 screenshot?\n\nI thought I\u2019d get cracking and see if I could figure out the final steps later.\nBuilding the page\nThe first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. Everyone should have a Paul.\nPaul\u2019s code uses a fixed dimension container in the HTML, set to 600 \u00d7 315px. This is to make it the correct aspect ratio for Facebook\u2019s recommended image size. It\u2019s useful to remember here that it doesn\u2019t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot and a fixed size in a known browser.\nWith the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. 
It only included the author details and the quote, along with Paul\u2019s markup.\nI also added the quote as a new field on the article in the CMS, so each \u2018image\u2019 could be quickly and easily customised in the editing process.\nI added a new field to the article template to capture the quote.\nWith very little effort, we quickly had a page to dynamically generate our \u2018image\u2019 right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article.\nOur automatically generated layout direct from the CMS\nIt soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one? An obvious route is to screenshot the page by hand, but that\u2019s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought\u2026 how could I automatically take a screenshot?\nEnter Puppeteer\nPuppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It\u2019s just a version of the Chrome browser that runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface.\nHeadless Chrome can be used for all sorts of things such as running automated tests on front-end code after making changes, or\u2026 get this\u2026 rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node.\nUsing Puppeteer, I can write a small script that will repeatably turn a URL into an image. So simple is it to do this, that it\u2019s actually Puppeteer\u2019s \u2018hello world\u2019 example.\nFirst you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependency, so you don\u2019t need to worry about installing that. At the command line:\nnpm i puppeteer\nThen save a new file as example.js with this code:\nconst puppeteer = require('puppeteer');\n\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto('https://example.com');\n await page.screenshot({path: 'example.png'});\n await browser.close();\n})();\nand then run it using Node:\nnode example.js\nThis will output an image file example.png to disk, which contains a screenshot of, in this case, https://example.com. The logic of the code is reasonably easy to follow:\n\nLaunch a browser\nOpen up a new page\nGoto a URL\nTake a screenshot\nClose the browser\n\nThe async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That\u2019s useful with actions like loading a web page that might take some time to complete. They\u2019re used with Promises, and the effect is to make asynchronous code behave as if it\u2019s synchronous. You can read more about async and await at MDN if you\u2019re interested.\nThat\u2019s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! 
But what happens if I put the URL of my new special page in there?\nOur content is up in the corner of the image with lots of empty space.\nThat\u2019s not great. It\u2019s okay, but not great. It looks like, by default, Puppeteer takes a screenshot with a resolution of 800 \u00d7 600, so we need to find out how to adjust that. Fortunately, the docs aren\u2019t the worst and I was able to find the page.setViewport() method pretty easily.\nconst puppeteer = require('puppeteer');\n\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing');\n await page.setViewport({\n width: 600,\n height: 315\n });\n await page.screenshot({path: 'example.png'});\n await browser.close();\n})();\nThis worked! The screenshot is now 600 \u00d7 315 as expected. That\u2019s exactly what we asked for. Trouble is, that\u2019s a bit low res and it is nearly 2019 after all. While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels.\n await page.setViewport({\n width: 600,\n height: 315,\n deviceScaleFactor: 2\n });\nPerfect! We now have a programmatic way of turning a URL into an image.\nImproving the script\nRather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. That way we can run it manually, or hook it into part of our site\u2019s build process to automate the generation of the image.\nOur goal is to call the script like this:\nnode shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/\nAnd I want the image to come out with the name clip-paths-know-no-bounds.png. To do that, I need to have my script look for command arguments, and then to split the URL up to grab the slug from it.\n// Get the URL and the slug segment from it\nconst url = process.argv[2];\nconst segments = url.split('/');\n// Get the second-to-last segment (the slug)\nconst slug = segments[segments.length-2];\nWe can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page.\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto(url + 'sharing');\n await page.setViewport({\n width: 600,\n height: 315,\n deviceScaleFactor: 2\n });\n await page.screenshot({path: slug + '.png'});\n await browser.close();\n})();\nOnce you\u2019re generating the image with Node, there\u2019s all sorts of things you can do with it. An obvious step is to move it to the correct location within your site or project.\nYou can also run optimisations on the file. I\u2019m using imagemin with pngquant to reduce the file size a little.\nconst imagemin = require('imagemin');\nconst imageminPngquant = require('imagemin-pngquant');\n\nawait imagemin([slug + '.png'], 'build', {\n plugins: [\n imageminPngquant({quality: '75-90'})\n ]\n});\n\nYou can see the completed example as a gist.\nIntegrating it with your CMS\nSo we now have a command we can run to take a URL and generate a custom image for that URL. It\u2019s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. 
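One sketch of that wiring, assuming an npm-based project (the script name here is arbitrary), is to expose the command in package.json:\n\"scripts\": {\n \"sharing-image\": \"node shoot-sharing-image.js\"\n}\nand then call it from a build step or publish hook, with npm forwarding the URL argument after the -- separator:\nnpm run sharing-image -- https://24ways.org/2018/clip-paths-know-no-bounds/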
Exactly how you do that is going to depend on the way your site is built and the technology stack you\u2019re using, but it\u2019s likely not too hard as long as you can run a command as part of the process.\nFor 24 ways this year, I\u2019ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I\u2019ll look to automate this one step further.\nWe may also look at having a few slightly different layouts to choose from, so that each day isn\u2019t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there\u2019s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities!\n\nBy using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we\u2019ve been able to reliably produce dynamic social media sharing images for all of our articles. In doing so, we reduced the dependency on any individual for producing those images, and opened up a world of possibilities in how we use those images.\nI hope you\u2019ll give it a try!", "year": "2018", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2018-12-24T00:00:00+00:00", "url": "https://24ways.org/2018/dynamic-social-sharing-images/", "topic": "code"}
{"rowid": 256, "title": "Develop Your Naturalist Superpowers with Observable Notebooks and iNaturalist", "contents": "We\u2019re going to level up your knowledge of what animals you might see in an area at a particular time of year - a skill every naturalist* strives for - using technology! Using iNaturalist and Observable Notebooks we\u2019re going to prototype seasonality graphs for particular species in an area, and automatically create a guide to what animals you might see in each month.\n*(a Naturalist is someone who likes learning about nature, not someone who\u2019s a fan of being naked, that\u2019s a \u2018Naturist\u2019\u2026 different thing!)\nLooking for critters in rocky intertidal habitats\nOne of my favourite things to do is going rockpooling, or as we call it over here in California, \u2018tidepooling\u2019. Amounting to the same thing, it\u2019s going to a beach that has rocks where the tide covers then uncovers little pools of water at different times of the day. All sorts of fun creatures and life can be found in this \u2018rocky intertidal habitat\u2019\nA particularly exciting creature that lives here is the Nudibranch, a type of super colourful \u2018sea slug\u2019. There are over 3000 species of Nudibranch worldwide. (The word \u201cnudibranch\u201d comes from the Latin nudus, naked, and the Greek \u03b2\u03c1\u03b1\u03bd\u03c7\u03b9\u03b1 / brankhia, gills.)\n\u200b\n\nThey are however quite tricky to find! Even though they are often brightly coloured and interestingly shaped, some of them are very small, and in our part of the world in the Bay Area in California their appearance in our rockpools is seasonal. We see them more often in Summer months, despite the not-as-low tides as in our Winter and Spring seasons.\nMy favourite place to go tidepooling here is Pillar Point in Half Moon bay (at other times of the year more famously known for the surf competition \u2018Mavericks\u2019). The rockpools there are rich in species diversity, of varied types and water-coverage habitat zones as well as being relatively accessible.\n\u200b\n\nI was rockpooling at Pillar Point recently with my parents and we talked to a lady who remarked that she hadn\u2019t seen any Nudibranchs on her visit this time. I realised that having an idea of what species to find where, and at what time of year is one of the many superpower goals of every budding Naturalist. \nUsing technology and the croudsourced species observations of the iNaturalist community we can shortcut our way to this superpower!\nFinding nearby animals with iNaturalist\nWe\u2019re going to be getting our information about what animals you can see in Pillar Point using iNaturalist. iNaturalist is a really fun platform that helps connect people to nature and report their findings of life in the outdoors. It is also a community of nature-loving people who help each other identify and confirm those observations. iNaturalist is a project run as a joint initiative by the California Academy of Sciences and the National Geographic Society.\nI\u2019ve been using iNaturalist for over two years to record and identify plants and animals that I\u2019ve found in the outdoors. I use their iPhone app to upload my pictures, which then uses machine learning algorithms to make an initial guess at what it is I\u2019ve seen. The community is really active, and I often find someone else has verified or updated my species guess pretty soon after posting. 
\nThis process is great because once an observation has been identified by at least two people it becomes \u2018verified\u2019 and is considered research grade. Research grade observations get exported and used by scientists, as well as being indexed by the Global Biodiversity Information Facility, GBIF.\n\niNaturalist has a great API and API explorer, which makes interacting and prototyping using iNaturalist data really fun. For example, if you go to the API explorer and expand the Observations : Search and fetch section and then the GET /observations API, you get a selection of input boxes that allow you to play with options that you can then pass to the API when you click the \u2018Try it out\u2019 button.\n\nYou\u2019ll then get a URL that looks a bit like\nhttps://api.inaturalist.org/v1/observations?captive=false&geo=true&verifiable=true&taxon_id=47113&lat=37.495461&lng=-122.499584&radius=5&order=desc&order_by=created_at \nwhich you can call and interrogate using a programming language of your choice.\nIf you would like to see an all-JavaScript application that uses the iNaturalist API, take a look at OwlsNearMe.com which Simon and I built one weekend earlier this year. It gets your location and shows you all iNaturalist observations of owls near you and lists which species you are likely to see (not adjusted for season).\nRapid development using Observable Notebooks\nWe\u2019re going to be using Observable Notebooks to prototype our examples, pulling data down from iNaturalist. I really like using visual notebooks like Observable; they are great for learning and building things quickly. You may be familiar with Jupyter notebooks for Python, which are similar but take a bit of setup to get going - I often use these for prototyping too. Observable is amazing for querying and visualising data with JavaScript and since it is a hosted product it doesn\u2019t require any setup at all.\nYou can follow along and play with this example on my Observable notebook. If you create an account there you can fork my notebook and create your own version of this example. \nEach \u2018notebook\u2019 consists of a page with a column of \u2018cells\u2019, similar to what you get in a spreadsheet. A cell can contain Markdown text or JavaScript code and the output of evaluating the cell appears above the code that generated it. There are lots of tutorials out there on Observable Notebooks; I like this code introduction one from Observable (and D3) creator Mike Bostock.\nDeveloping your Naturalist superpowers\nIf you have an idea of what plants and critters you might see in a place at the time you visit, you can hone in on what you want to study and train your Naturalist eye to better identify the life around you.\nFor our example, we care about wildlife we can see at Pillar Point, so we need a way of letting the iNaturalist API know which area we are interested in.\nWe could use a latitude, longitude and radius for this, but a rectangular bounding box is a better shape for the reef. 
We can use this tool to draw the area we want to search within: boundingbox.klokantech.com\n\nThe tool lets you export the bounding box in several forms using the dropdown at the bottom left under the map. We are going to use the \u2018DublinCore\u2019 format as it\u2019s closest to the format needed by the iNaturalist API.\n westlimit=-122.50542; southlimit=37.492805; eastlimit=-122.492738; northlimit=37.499811\nA quick map primer:\nThe higher the latitude the more north it is\nThe lower the latitude the more south it is\nLatitude 0 = the equator\n\nThe higher the longitude the more east it is of Greenwich\nThe lower the longitude the more west it is of Greenwich\nLongitude 0 = Greenwich\nIn the iNaturalist API we want to use the parameters nelat, nelng, swlat, swlng to create a query that looks inside a bounding box of Pillar Point near Half Moon Bay in California:\nnelat = highest latitude = north limit = 37.499811\nnelng = highest longitude = east limit = -122.492738\nswlat = smallest latitude = south limit = 37.492805\nswlng = smallest longitude = west limit = -122.50542\nAs API parameters these look like this:\n?nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542\nThese parameters in this format can be used for most of the iNaturalist API methods.\nNudibranch seasonality in Pillar Point\nWe can use the iNaturalist observation_histogram API to get a count of Nudibranch observations per week-of-year across all time and within our Pillar Point bounding box.\nIn addition to the geographic parameters that we just worked out, we are also sending the taxon_id of 47113, which is iNaturalist\u2019s internal number associated with the Nudibranch taxon. By using this we can get all species which are under the parent \u2018Order Nudibranchia\u2019. \nAnother useful piece of naturalist knowledge is understanding the biological classification scheme of Taxonomic Rank - roughly, when a species has a Latin name of two words eg \u2018Glaucus Atlanticus\u2019 the first Latin word is the \u2018Genus\u2019 like a family name \u2018Glaucus\u2019, and the second word identifies that particular species, like a given name \u2018Atlanticus\u2019. \nThe two Latin words together indicate a specific species; the term we use colloquially to refer to a type of animal often differs wildly region to region, and sometimes the same common name in two countries can refer to two different species. The common names for the Glaucus Atlanticus (which incidentally is my favourite sea slug) include: sea swallow, blue angel, blue glaucus, blue dragon, blue sea slug and blue ocean slug! Because this gets super confusing, scientists like using this Latin name format instead.\nThe following piece of code asks the iNaturalist Histogram API to return per-week counts for verified observations of Nudibranchs within our Pillar Point bounding box:\npillar_point_counts_per_week = fetch(\n \"https://api.inaturalist.org/v1/observations/histogram?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=week_of_year&verifiable=true\"\n ).then(response => {\n return response.json();\n})\nOur next step is to take this data and draw a graph! We\u2019ll be using Vega-Lite for this, which is a fab JavaScript graphing library that is also easy and fun to use with Observable Notebooks. 
\n(Here is a great tutorial on exploring data and drawing graphs with Observable and Vega-Lite)\nThe iNaturalist API returns data that looks like this:\n{\n "total_results": 53,\n "page": 1,\n "per_page": 53,\n "results": {\n "week_of_year": {\n "1": 136,\n "2": 20,\n "3": 150,\n "4": 65,\n "5": 186,\n "6": 74,\n "7": 47,\n "8": 87,\n "9": 64,\n "10": 56,\n ...\nBut for our Vega-Lite graph we need data that looks like this:\n[{\n "week": "Wk 1",\n "observations": 136\n}, {\n "week": "Wk 2",\n "observations": 20\n}, ...]\nWe can convert what we get back from the API to the second format using a loop that iterates over the object keys:\nobjects_to_plot = {\n let objects = [];\n Object.keys(pillar_point_counts_per_week.results.week_of_year).forEach(function(week_index) {\n objects.push({\n week: `Wk ${week_index}`,\n observations: pillar_point_counts_per_week.results.week_of_year[week_index]\n });\n })\n return objects;\n}\nWe can then plug this into Vega-Lite to draw us a graph:\nvegalite({\n data: {values: objects_to_plot},\n mark: "bar",\n encoding: {\n x: {field: "week", type: "nominal", sort: null},\n y: {field: "observations", type: "quantitative"}\n },\n width: width * 0.9\n})\n\nIt\u2019s worth noting that we have a lot of observations of Nudibranchs at Pillar Point, due in no small part to the intertidal monitoring research that Alison Young and Rebecca Johnson facilitate for the California Academy of Sciences.\nSo, what if we want to look for the seasonality of observations of a particular species of adorable sea slug? We want our interface to have a select box with a list of all the species you might find at any time of year. We can do this using the species_counts API to create us an object with the iNaturalist species ID and common & Latin names.\npillar_point_nudibranches = {\n let api_results = await fetch(\n "https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&verifiable=true"\n ).then(r => r.json())\n\n let species_list = api_results.results.map(i => ({\n value: i.taxon.id,\n label: `${i.taxon.preferred_common_name} (${i.taxon.name})`\n }));\n\n return species_list\n}\nWe can create an interactive select box by importing code from Jeremy Ashkenas\u2019 Observable Notebook: add import {select} from "@jashkenas/inputs" to a cell anywhere in our notebook. Observable is magic: like a spreadsheet, the order of the cells doesn\u2019t matter - if one cell is referenced by any other cell, then when that cell updates all the other cells refresh themselves. You can also import and reference one notebook from another!\nviewof select_species = select({\n title: "Which Nudibranch do you want to see seasonality for?",\n options: [{value: "", label: "All the Nudibranchs!"}, ...pillar_point_nudibranches],\n value: ""\n})\nThen we go back to our old favourite, the histogram API, just like before, only this time we are calling it with the value created by our select box ${select_species} as taxon_id instead of the number 47113.\npillar_point_counts_per_month_per_species = fetch(\n `https://api.inaturalist.org/v1/observations/histogram?taxon_id=${select_species}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&date_field=observed&interval=month_of_year&verifiable=true`\n).then(r => r.json())\nNow for the fun graph bit!
As we did before, we re-format the result of the API into a format compatible with Vega-Lite:\nobjects_to_plot_species_month = {\n let objects = [];\n Object.keys(pillar_point_counts_per_month_per_species.results.month_of_year).forEach(function(month_index) {\n objects.push({\n month: (new Date(2018, (month_index - 1), 1)).toLocaleString("en", {month: "long"}),\n observations: pillar_point_counts_per_month_per_species.results.month_of_year[month_index]\n });\n })\n return objects;\n}\n(Note that in the above code we are creating a date object with our specific month in it, and using toLocaleString() to get the longer English name for the month. Because the JavaScript Date object counts January as 0, we use month_index - 1 to get the correct month.)\nAnd we draw the graph as we did before, only now if you interact with the select box in Observable the graph will dynamically update!\nvegalite({\n data: {values: objects_to_plot_species_month},\n mark: "bar",\n encoding: {\n x: {field: "month", type: "nominal", sort: null},\n y: {field: "observations", type: "quantitative"}\n },\n width: width * 0.9\n})\nNow we can see the best time of year to plan a tidepooling trip to Pillar Point if we want to find a specific species of Nudibranch.\n\nThis tool is great for planning when to go rockpooling at Pillar Point, but what if you are going this month and want to pre-train your eye with what to look for, in order to impress your friends with your knowledge of Nudibranchs?\nWell\u2026 we can create ourselves a dynamic guide with a list of the species, their photos, names and how many times they have been observed in that month of the year!\nOur select box this time is simpler than before, but it assigns the month value to the variable selected_month.\nviewof selected_month = select({\n title: "When do you want to see Nudibranchs?",\n options: [\n { label: "Whenever", value: "" },\n { label: "January", value: "1" },\n { label: "February", value: "2" },\n { label: "March", value: "3" },\n { label: "April", value: "4" },\n { label: "May", value: "5" },\n { label: "June", value: "6" },\n { label: "July", value: "7" },\n { label: "August", value: "8" },\n { label: "September", value: "9" },\n { label: "October", value: "10" },\n { label: "November", value: "11" },\n { label: "December", value: "12" },\n ],\n value: ""\n })\nWe can then use the species_counts API to get all the relevant information about which species we can see in month=${selected_month}. We\u2019ll be able to reference this response object and its values later with the variable we just created, eg: all_species_data.results[0].taxon.name.\nall_species_data = fetch(\n `https://api.inaturalist.org/v1/observations/species_counts?taxon_id=47113&month=${selected_month}&nelat=37.499811&nelng=-122.492738&swlat=37.492805&swlng=-122.50542&verifiable=true`\n).then(r => r.json())\nYou can render HTML directly in a notebook cell using Observable\u2019s html tagged template literal. The exact markup is up to you; something like the following works, using each taxon\u2019s default_photo for the species image:
html`<h3>If you go to Pillar Point ${\n {"": "",\n "1": "in January",\n "2": "in February",\n "3": "in March",\n "4": "in April",\n "5": "in May",\n "6": "in June",\n "7": "in July",\n "8": "in August",\n "9": "in September",\n "10": "in October",\n "11": "in November",\n "12": "in December",\n }[selected_month]\n} you might see\u2026</h3>\n<div>${all_species_data.results.map(s => `<div>\n <h4>${s.taxon.name}</h4>\n <img src="${s.taxon.default_photo.medium_url}">\n <p>Seen ${s.count} times</p>\n</div>`)}</div>`
\nThese few lines of HTML are all you need to get this exciting dynamic guide to what Nudibranchs you will see in each month!\n\nPlay with it yourself in this Observable Notebook.\nConclusion\nI hope by playing with these examples you have an idea of how powerful it can be to prototype using Observable Notebooks and how you can use the incredible crowdsourced community data and APIs from iNaturalist to augment your naturalist skills and impress your friends with your new \u2018knowledge of nature\u2019 superpower.\nLastly I strongly encourage you to get outside on a low tide to explore your local rocky intertidal habitat, and all the amazing critters that live there.\nHere is a great introduction video to tidepooling / rockpooling, by Rebecca Johnson and Alison Young from the California Academy of Sciences.", "year": "2018", "author": "Natalie Downe", "author_slug": "nataliedowne", "published": "2018-12-18T00:00:00+00:00", "url": "https://24ways.org/2018/observable-notebooks-and-inaturalist/", "topic": "code"}
{"rowid": 254, "title": "What I Learned in Six Years at GDS", "contents": "When I joined the Government Digital Service in April 2012, GOV.UK was just going into public beta. GDS was a completely new organisation, part of the Cabinet Office, with a mission to stop wasting government money on over-complicated and underperforming big IT projects and instead deliver simple, useful services for the public.\nLots of people who were experts in their fields were drawn in by this inspiring mission, and I learned loads from working with some true leaders. Here are three of the main things I learned.\n1. What is the user need?\n\u2028The main discipline I learned from my time at GDS was to always ask \u2018what is the user need?\u2019 It\u2019s very easy to build something that seems like a good idea, but until you\u2019ve identified what problem you are solving for the user, you can\u2019t be sure that you are building something that is going to help solve an actual problem.\nA really good example of this is GOV.UK Notify. This service was originally conceived of as a status tracker; a \u201cwhere\u2019s my stuff\u201d for government services. For example, if you apply for a passport online, it can take up to six weeks to arrive. After a few weeks, you might feel anxious and phone the Home Office to ask what\u2019s happening. The idea of the status tracker was to allow you to get this information online, saving your time and saving government money on call centres.\nThe project started, as all GDS projects do, with a discovery. The main purpose of a discovery is to identify the users\u2019 needs. At the end of this discovery, the team realised that a status tracker wasn\u2019t the way to address the problem. As they wrote in this blog post: \n\nStatus tracking tools are often just \u2018channel shift\u2019 for anxiety. They solve the symptom and not the problem. They do make it more convenient for people to reduce their anxiety, but they still require them to get anxious enough to request an update in the first place.\n\nWhat would actually address the user need would be to give you the information before you get anxious about where your passport is. For example, when your application is received, email you to let you know when to expect it, and perhaps text you at various points in the process to let you know how it\u2019s going. So instead of a status tracker, the team built GOV.UK Notify, to make it easy for government services to incorporate text, email and even letter notifications into their processes.\nMaking sure you know your user\nAt GDS user needs were taken very seriously. We had a user research lab on site and everyone was required to spend two hours observing user research every six weeks. Ideally you\u2019d observe users working with things you\u2019d built, but even if they weren\u2019t, it was an incredibly valuable experience, and something you should seek out if you are able to.\nEven if we think we understand our users very well, it is very enlightening to see how users actually use your stuff. Partly because in technology we tend to be power users and the average user doesn\u2019t use technology the same way we do. But even if you are building things for other developers, someone who is unfamiliar with it will interact with it in a way that may be very different to what you have envisaged.\nUser needs is not just about building things\nAsking the question \u201cwhat is the user need?\u201d really helps focus on why you are doing what you are doing. 
It keeps things on track, and helps the team think about what the actual desired end goal is (and should be).\nThinking about user needs has helped me with lots of things, not just building services. For example, you are raising a pull request. What\u2019s the user need? The reviewer needs to be able to easily understand what the change you are proposing is, why you are proposing that change, and any areas where you need particular help with the review.\nOr you are writing an email to a colleague. What\u2019s the user need? What are you hoping the reader will learn, understand or do as a result of your email?\n2. Make things open: it makes things better\nThe second important thing I learned at GDS was \u2018make things open: it makes things better\u2019. This works on many levels: being open about your strategy, blogging about what you are doing and what you\u2019ve learned (including mistakes), and \u2013 the part that I got most involved in \u2013 coding in the open.\nTalking about your work helps clarify it\nOne thing we did really well at GDS was blogging \u2013 a lot \u2013 about what we were working on. Blogging about what you are working on is really valuable for the writer because it forces you to think logically about what you are doing in order to tell a good story. If you are blogging about upcoming work, it makes you think clearly about why you\u2019re doing it, and it also means that people can comment on the blog post. Often people had really useful suggestions or clarifying questions.\nIt\u2019s also really valuable to blog about what you\u2019ve learned, especially if you\u2019ve made a mistake. It makes sure you\u2019ve learned the lesson and helps others avoid making the same mistakes. As well as blogging about lessons learned, GOV.UK also publishes incident reports when there is an outage or service degradation. Being open about things like this really engenders an atmosphere of trust and safe learning, which helps make things better.\nCoding in the open has a lot of benefits\nIn my last year at GDS I was the Open Source Lead, and one of the things I focused on was the requirement that all new government source code should be open. From the start, GDS coded in the open (the GitHub organisation still has the non-intuitive name alphagov, because it was created by the team doing the original Alpha of GOV.UK, before GDS was even formed).\nWhen I first joined GDS I was a little nervous about the fact that anyone could see my code. I worried about people seeing my mistakes, or receiving critical code reviews. (Setting people\u2019s minds at rest about these things is why it\u2019s crucial to have good standards around communication and positive behaviour - even a critical code review should be considerately given).\nBut I quickly realised there were huge advantages to coding in the open. In the same way as blogging your decisions makes you think carefully about whether they are good ones and what evidence you have, the fact that anyone in the world could see your code (even if, in practice, they probably won\u2019t be looking) makes everyone raise their game slightly. The very fact that you know it\u2019s open makes you make it a bit better.\nIt helps with lots of other things as well; for example, it makes it easier to collaborate with people and share your work.
And now that I\u2019ve left GDS, it\u2019s so useful to be able to look back at code I worked on to remember how things worked.\nShare what you learn\nIt\u2019s sometimes hard to know where to start with being open about things, but it gets easier and becomes more natural as you practice. It helps you clarify your thoughts and follow through on what you\u2019ve decided to do. Working at GDS when this was a very important principle really helped me learn how to do this well.\n3. Do the hard work to make it simple (tech edition)\n\u2018Start with user needs\u2019 and \u2018Make things open: it makes things better\u2019 are two of the excellent government design principles. They are all good, but the third thing that I want to talk about is number 4: \u2018Do the hard work to make it simple\u2019, and specifically, how this manifests itself in the way we build technology.\nAt GDS, we worked very hard to do the hard work to make the code, systems and technology we built simple for those who came after us. For example, writing good commit messages was taken very seriously. There is commit message guidance, and it was not unusual for a pull request review to ask for a commit message to be rewritten to make it clearer.\nWe worked very hard on making pull requests good, keeping the reviewer in mind and making it clear to the reviewer how best to review it.\nReviewing others\u2019 pull requests was the highest priority so that no-one was blocked; teams had screens showing the status of open pull requests (using fourth wall), and we even had a \u2018pull request seal\u2019, a bot that publishes pull requests to Slack and gets angry if they are uncommented on for more than two days.\nMaking it easier for developers to support the site\nAnother example of doing the hard work to make it simple was the opsmanual. I spent two years on the web operations team on GOV.UK, and one of the things I loved about that team was the huge efforts everyone went to in order to be open and inclusive to developers.\nThe team had some people who were really expert in web ops, but they were all incredibly helpful when bringing me on board as a developer with no previous experience of web ops, and also patiently explained things whenever other devs in similar positions came with questions.\nThe main artefact of this was the opsmanual, which contained write-ups of how to do lots of things. One of the best things was that every alert that might lead to someone being woken up in the middle of the night had a link to documentation on the opsmanual which detailed what the alert meant and some suggested actions that could be taken to address it.\nThis was important because most of the devs on GOV.UK were on the on-call rota, so if they were woken at 3am by an alert they\u2019d never seen before, the opsmanual information might give them everything they needed to solve it, without the years of web ops training and the deep familiarity with the GOV.UK infrastructure that came with working on it every day.\nDevelopers are users too\nDoing the hard work to make it simple means that users can do what they need to do, and this applies even when the users are your developer peers. At GDS I really learned how to focus on simplicity for the user, and how much better this makes things work.\nThese three principles help us make great things\nI learned so much more in my six years at GDS. For example, the civil service has a very fair way of interviewing.
I learned about the importance of good comms, working late responsibly, and the value of content design.\nAnd the real heart of what I learned, the guiding principles that help us deliver great products, is encapsulated by the three things I\u2019ve talked about here: think about the user need, make things open, and do the hard work to make it simple.", "year": "2018", "author": "Anna Shipman", "author_slug": "annashipman", "published": "2018-12-08T00:00:00+00:00", "url": "https://24ways.org/2018/what-i-learned-in-six-years-at-gds/", "topic": "business"}
{"rowid": 242, "title": "Creating My First Chrome Extension", "contents": "Writing a Chrome Extension isn\u2019t as scary at it seems!\nNot too long ago, I used a Chrome extension called 20 Cubed. I\u2019m far-sighted, and being a software engineer makes it difficult to maintain distance vision. So I used 20 Cubed to remind myself to look away from my screen and rest my eyes. I loved its simple interface and design. I loved it so much, I often forgot to turn it off in the middle of presentations, where it would take over my entire screen. Oops.\nUnfortunately, the developer stopped updating the extension and removed it from Chrome\u2019s extension library. I was so sad. None of the other eye rest extensions out there matched my design aesthetic, so I decided to create my own! Want to do the same?\nFortunately, Google has some respectable documentation on how to create an extension. And remember, Chrome extensions are just HTML, CSS, and JavaScript. You can add libraries and frameworks, or you can just code the \u201cold-fashioned\u201d way. Sky\u2019s the limit!\nSetup\nBut first, some things you\u2019ll need to know about before getting started:\n\nCallbacks\nTimeouts\nChrome Dev Tools\n\nDeveloping with Chrome extension methods requires a lot of callbacks. If you\u2019ve never experienced the joy of callback hell, creating a Chrome extension will introduce you to this concept. However, things can get confusing pretty quickly. I\u2019d highly recommend brushing up on that subject before getting started.\nHyperbole and a Half\nTimeouts and Intervals are another thing you might want to brush up on. While creating this extension, I didn\u2019t consider the fact that I\u2019d be juggling three timers. And I probably would\u2019ve saved time organizing those and reading up on the Chrome extension Alarms documentation beforehand. But more on that in a bit.\nOn the note of organization, abstraction is important! You might have any combination of the following:\n\nThe Chrome extension options page\nThe popup from the Chrome Menu\nThe windows or tabs you create\nThe background scripts\n\nAnd that can get unwieldy. You might also edit the existing tabs or windows in the browser, which you\u2019ll probably want as a separate script too. Note that this tutorial only covers creating your own customized window rather than editing existing windows or tabs.\nAlright, now that you know all that up front, let\u2019s get going!\nDocumentation\nTL;DR READ THE DOCS.\nA few things to get started:\n\nRead Google\u2019s primer on browser extensions\nHave a look at their Getting started tutorial\nCheck out their overview on Chrome Extensions\n\nThis overview discusses the Chrome extension files, architecture, APIs, and communication between pages. Funnily enough, I only discovered the Overview page after creating my extension.\nThe manifest.json file gives the browser information about the extension, including general information, where to find your extension files and icons, and API permissions required. Here\u2019s what my manifest.json looked like, for example:\nhttps://github.com/jennz0r/eye-rest/blob/master/manifest.json\nBecause I\u2019m a visual learner, I found the images that describe the extension\u2019s architecture most helpful.\n\nTo clarify this diagram, the background.js file is the extension\u2019s event handler. It\u2019s constantly listening for browser events, which you\u2019ll feed to it using the Chrome Extension API. 
Google says that an effective background script is only loaded when it is needed and unloaded when it goes idle.\nThe Popup is the little window that appears when you click on an extension\u2019s icon in the Chrome Menu. It consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under page_action: { "default_popup": FILE_NAME_HERE }.\nThe Options page is exactly as it says. This displays customizable options only visible to the user when they right-click on the extension in the Chrome menu and choose \u201cOptions\u201d. This also consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under options_page: FILE_NAME_HERE.\nContent scripts are any scripts that will interact with any web windows or tabs that the user has open. These scripts will also interact with any tabs or windows opened by your extension.\nDebugging\nA quick note: don\u2019t forget the debugging tutorial!\nJust like any other Chrome window, every piece of an extension has an inspector and dev tools. If (read: when) you run into errors (as I did), you\u2019re likely to have several inspector windows open \u2013 one for the background script, one for the popup, one for the options, and one for the window or tab the extension is interacting with.\nFor example, I kept seeing the error \u201cThis request exceeds the MAX_WRITE_OPERATIONS_PER_HOUR quota.\u201d Well, it turns out there are limitations on how often you can sync stored information.\nAnother error I kept seeing was \u201cAlarm delay is less than minimum of 1 minutes. In released .crx, alarm \u201cALARM_NAME_HERE\u201d will fire in approximately 1 minutes\u201d. Well, it turns out there are minimum interval times for alarms.\nChrome Extension creation definitely benefits from debugging skills. Especially with callbacks and listeners, good old-fashioned console.log can really help!\nMe adding a ton of `console.log`s while trying to debug my alarms.\nEye Rest Functionality\nOk, so what is the extension I created? Again, it\u2019s a way to rest your eyes every twenty minutes for twenty seconds. So, the basic functionality should look like the following:\n\nIf the extension is running AND\nIf the user has not clicked Pause in the Popup HTML AND\nIf the counter in the Popup HTML is down to 00:00 THEN\n\nOpen a new window with Timer HTML AND\nStart a 20 sec countdown in Timer HTML AND\nReset the Popup HTML counter to 20:00\n\nIf the Timer HTML is down to 0 sec THEN\n\nClose that window. Rinse. Repeat.\n\n\nSounds simple enough, but wow, these timers became convoluted! Of all the Chrome extensions I decided to create, I decided to make one that\u2019s heavily dependent on time, intervals, and having those in sync with each other. In other words, I made this unnecessarily complicated and didn\u2019t realize until I started coding.\nFor visual reference of my confusion, check out the GitHub repository for Eye Rest. (And yes, it\u2019s a pun.)\nAPI\nNow let\u2019s discuss the APIs that I used to build this extension.\nAlarms\nWhat even are alarms? I didn\u2019t know either.\nAlarms are basically Chrome\u2019s setTimeout and setInterval.
They exist because, as Google says\u2026\n\nDOM-based timers, such as window.setTimeout() or window.setInterval(), are not honored in non-persistent background scripts if they trigger when the event page is dormant.\n\nFor more information, check out this background migration doc.\nOne interesting note about alarms in Chrome extensions is that they are persistent. Garbage collection with Chrome extension alarms seems unreliable at best. I didn\u2019t have much luck using the clearAll method to remove alarms I created on previous extension loads or installs. A workaround (read: hack) is to specify a unique alarm name every time your extension is loaded and clear any other alarms without that unique name.\nBackground Scripts\nFor Eye Rest, I have two background scripts. One is my actual initializer and event listener, and the other is a helpers file.\nI wanted to share a couple of functions between my Background and Popup scripts. Specifically, the clearAndCreateAlarm function. I wanted my background script to clear any existing alarms, create a new alarm, and add remaining time until the next alarm to local storage immediately upon extension load. To make the function available to the Background script, I added helpers.js as the first item under background > scripts in my manifest.json.\nI also wanted my Popup script to do the same things when the user has unpaused the extension\u2019s functionality. To make the function available to the Popup script, I just include the helpers script in the Popup HTML file.\nOther APIs\nWindows\nI use the Windows API to create the Timer window when the time of my alarm is up. The window creation is initiated by my Background script.\nOne day, while coding late into the evening, I found it very confusing that the windows.create method included url as an option. I assumed it was meant to be an external web address. A friend pondered that there must be an option to specify the window\u2019s HTML. Until then, it hadn\u2019t dawned on me that the url could be relative. Duh. I was tired!\nI pass the timer.html as the url option, as well as type, size, position, and other visual options.\nStorage\nMaybe you want to pass information back and forth between the Background script and your Popup script? You can do that using Chrome or local storage. One benefit of using local storage over Chrome\u2019s storage is avoiding quotas and write operation maximums.\nI wanted to pass the time at which the latest alarm was set, the time to the next alarm, and whether or not the timer is paused between the Background and Popup scripts. Because the countdown should change every second, it\u2019s quite complicated and requires lots of writes. That\u2019s why I went with the user\u2019s local storage. You can see me getting and setting those variables in my Background, Helper, and Popup scripts. Just search for date, nextAlarmTime, and isPaused.\nDeclarative Content\nThe Declarative Content API allows you to show your extension\u2019s page action based on several types of matches, without needing to take a host permission or inject a content script. So you\u2019ll need this to get your extension to work in the browser!\nYou can see me set this in my Background script.
Because I want my extension\u2019s popup to appear on every page one is browsing, I leave the page matchers empty.\nThere are many more APIs for Chrome apps and extensions, so make sure to surf around and see what features are available!\nThe Extension\nHere\u2019s what my original Popup looked like before I added styles.\nAnd here\u2019s what it looks like with new styles. I guess I\u2019m going for a Nickelodeon feel.\nAnd here\u2019s the Timer window and Popup together!\nPublishing\nPublishing is a cinch. You just zip up your files, create a new Google Developer account (or use an existing one), upload the files, add some details, and pay a one-time $5 fee. That\u2019s all! Then your extension will be available on the Chrome extension store! Neato :D\nMy extension is now available for you to install.\nConclusion\nI thought creating a time based Chrome Extension would be quick and easy. I was wrong. It was more complicated than I thought! But it\u2019s definitely achievable with some time, persistence, and good ole Google searches.\nEventually, I\u2019d like to add more interactive elements to Eye Rest. For example, hitting the YouTube API to grab a silly or cute video as a reward for looking away during the 20 sec countdown and not closing the timer window. This harkens back to one of my first web projects, Toothtimer, from 2012. Or maybe a way to change the background colors of the Timer and Popup!\nEither way, with Eye Rest\u2019s framework built out, I\u2019m feeling fearless about future feature adds! Building this Chrome extension took some broken nails, achy shoulders, and tired eyes, but now Eye Rest can tell me to give my eyes a break every 20 minutes.", "year": "2018", "author": "Jennifer Wong", "author_slug": "jenniferwong", "published": "2018-12-05T00:00:00+00:00", "url": "https://24ways.org/2018/my-first-chrome-extension/", "topic": "code"}
{"rowid": 257, "title": "The (Switch)-Case for State Machines in User Interfaces", "contents": "You\u2019re tasked with creating a login form. Email, password, submit button, done.\n\u201cThis will be easy,\u201d you think to yourself.\nLogin form by Selecto\nYou\u2019ve made similar forms many times in the past; it\u2019s essentially muscle memory at this point. You\u2019re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. Sure, you\u2019ll have to translate the pixels to meaningful, responsive CSS values, but that\u2019s the least of your problems.\nAs you\u2019re writing up the HTML structure and CSS layout and styles for this form, you realize that you don\u2019t know what the successful \u201clogged in\u201d page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work.\n\nWhat if login fails? Where do those errors show up?\nShould we show errors differently if the user forgot to enter their email, or password, or both?\nOr should the submit button be disabled?\nShould we validate the email field?\nWhen should we show validation errors \u2013 as they\u2019re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.)\nWhen should the errors disappear?\nWhat do we show during the login process? Some loading spinner?\nWhat if loading takes too long, or a server error occurs?\n\nMany more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases.\nModeling Behavior\nDescribing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story \u2013 they often leave out potential edge-cases or small yet important bits of information.\nHowever, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine.\nThe general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. For example, we can describe our login form above in these states:\n\nstart - not submitted yet\nloading - submitted and logging in\nsuccess - successfully logged in\nerror - login failed\n\nAdditionally, we can describe an application as accepting a finite number of events \u2013 that is, all the possible events that can be \u201csent\u201d to the application, either from the user or some other external entity:\n\nSUBMIT - pressing the submit button\nRESOLVE - the server responds, indicating that login is successful\nREJECT - the server responds, indicating that login failed\n\nThen, we can combine these states and events to describe the transitions between them. 
That is, when the application is in one state and an event occurs, we can specify what the next state should be:\n\nFrom the start state, when the SUBMIT event occurs, the app should be in the loading state.\nFrom the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state.\nIf login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state.\nFrom the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.\nOtherwise, if any other event occurs, don\u2019t do anything and stay in the same state.\n\nThat\u2019s a pretty thorough description, similar to a user story! It\u2019s also a bit more symbolic than a user story (e.g., \u201cwhen the SUBMIT event occurs\u201d instead of \u201cwhen the user presses the submit button\u201d), and that\u2019s for a reason. By representing states, events, and transitions symbolically, we can visualize what this state machine looks like:\n\nEvery state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event.\nFrom Visuals to Code\nDrawing a state machine doesn\u2019t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn\u2019t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application.\nWith the state machine model described above, the same visual description can be mapped directly to code. Traditionally, and as the title suggests, this is done using switch/case statements:\nfunction loginMachine(state, event) {\n switch (state) {\n case 'start':\n if (event === 'SUBMIT') {\n return 'loading';\n }\n break;\n case 'loading':\n if (event === 'RESOLVE') {\n return 'success';\n } else if (event === 'REJECT') {\n return 'error';\n }\n break;\n case 'success':\n // Accept no further events\n break;\n case 'error':\n if (event === 'SUBMIT') {\n return 'loading';\n }\n break;\n default:\n // Unknown state - this should never occur\n return undefined;\n }\n // No matching transition: stay in the current state\n return state;\n}\n\nconsole.log(loginMachine('start', 'SUBMIT'));\n// => 'loading'\nThis is fine (I suppose) but personally, I find it much easier to use objects:\nconst loginMachine = {\n initial: "start",\n states: {\n start: {\n on: { SUBMIT: 'loading' }\n },\n loading: {\n on: {\n REJECT: 'error',\n RESOLVE: 'success'\n }\n },\n error: {\n on: {\n SUBMIT: 'loading'\n }\n },\n success: {}\n }\n};\n\nfunction transition(state, event) {\n return (loginMachine\n .states[state] // Look up the state\n .on || {})[event] // Look up the next state based on the event\n || state; // If not found, stay in the current state\n}\n\nconsole.log(transition('start', 'SUBMIT'));\n// => 'loading'\nconsole.log(transition('success', 'SUBMIT'));\n// => 'success' (no transitions leave the success state)\nAs you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool, as demonstrated here:\n\nA Common Language Between Designers and Developers\nAlthough finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more.
By designing a state machine visually and with code, designers and developers alike can:\n\nidentify all possible states, and potentially missing states\ndescribe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?)\neliminate impossible states and identify states that are \u201cunreachable\u201d (have no entry transition) or \u201csunken\u201d (have no exit transition)\nadd features with full confidence of knowing what other states it might affect\nsimplify redundant states or complex user flows\ncreate test paths for almost every possible user flow, and easily identify edge cases\ncollaborate better by understanding the entire application model equally.\n\nNot a New Idea\nI\u2019m not the first to suggest that state machines can help bridge the gap between design and development.\n\nVince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines\nUser flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them.\nIn 1999, Ian Horrocks wrote a book titled \u201cConstructing the User Interface with Statecharts\u201d, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. The ideas in the book are still relevant today.\nMore than a decade earlier, David Harel published \u201cStatecharts: A Visual Formalism for Complex Systems\u201d, in which the statechart - an extended hierarchical state machine model - is born.\n\nState machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits:\n\nVisualized modeling\nPrecise diagrams\nAutomatic code generation\nComprehensive test coverage\nAccommodation of late-breaking requirements changes\n\nMoving Forward\nIt\u2019s time that we improve how we communicate between designers and developers, much less improve the way we develop UIs to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources:\n\nThe World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications\nThe Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling\nI gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases.\nI have also been working on XState, which is a library for \u201cstate machines and statecharts for the modern web\u201d. 
You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages).\n\nI\u2019m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience.", "year": "2018", "author": "David Khourshid", "author_slug": "davidkhourshid", "published": "2018-12-12T00:00:00+00:00", "url": "https://24ways.org/2018/state-machines-in-user-interfaces/", "topic": "code"}