{"rowid": 247, "title": "Managing Flow and Rhythm with CSS Custom Properties", "contents": "An important part of designing user interfaces is creating consistent vertical rhythm between elements. Creating consistent, predictable space doesn\u2019t just make your web pages and views look better; it can also improve their scannability. \nBrowsers ship with default CSS, and these styles often create consistent rhythm for flow elements out of the box. The problem is that we often wipe those styles out with a reset. Elements such as
<div> and <article>
also have no default margin or padding associated with them. \nI\u2019ve tried all sorts of weird and wonderful techniques to find a balance between using inherited CSS while also levelling the playing field for component driven front-ends with very little success. This experimentation is how I landed on the flow utility, though and I\u2019m going to show you how it works. Let\u2019s dive in!\nThe Flow utility\nWith the ever-growing number of folks working with component libraries and design systems, we could benefit from a utility that creates space for us, only when it\u2019s appropriate to do so. The problem with my previous attempts at fixing this is that the spacing values were very rigid. \nThat\u2019s fine for 90% of contexts, but sometimes, it\u2019s handy to be able to tweak the values based on the exact context of your component. This is where CSS Custom Properties come in handy.\nThe code\n.flow {\n --flow-space: 1em;\n} \n\n.flow > * + * { \n margin-top: var(--flow-space);\n}\nWhat this code does is enable you to add a class of flow to an element which will then add margin-top to sibling elements within that element. We use the lobotomised owl selector to select these siblings. This approach enables an almost anonymous and automatic system which is ideal for component library based front-ends where components probably don\u2019t have any idea what surrounds them. \nThe other important part of this utility is the usage of the --flow-space custom property. We define it in the .flow component and each element within it will be spaced by --flow-space, by default. The beauty about setting this as a custom property is that custom properties also participate in the cascade, so we can utilise specificity to change it if we need it. Pretty cool, right? Let\u2019s look at some examples.\nA basic example\nSee the Pen CSS Flow Utility: Basic implementation by Andy Bell (@hankchizljaw) on CodePen.\nhttps://codepen.io/hankchizljaw/pen/LXqerj\nWhat we\u2019ve got in this example is some basic HTML content that has a class of flow on the parent article element. Because there\u2019s a very heavy-handed reset added as a dependency, all of the content would have been squished together without the flow utility. \nBecause our --flow-space custom property is set to 1em, the space between elements is 1X the font size of the element in question. This means that a
<h2>
in this context has a calculated margin-top value of 28.8px, because it has an assigned font size of 1.8rem. If we were to globally change the --flow-space value to 1.1em for example, we\u2019d affect everything because margin values would be calculated as 1.1X the font size. \nThis example looks great because using font size as the basis of rhythm works really well. What if we wanted to to tweak certain elements within this article, though? \nSee the Pen CSS Flow Utility: Tweaked Basic implementation by Andy Bell (@hankchizljaw) on CodePen.\nhttps://codepen.io/hankchizljaw/pen/qQgxaY\nI like lots of whitespace with my article layouts, so the 1em space isn\u2019t going to cut it for all elements. I like to provide plenty of space between headed sections, so I increase the --flow-space in these instances:\nh2 {\n --flow-space: 3rem;\n}\nNotice also how I also switch over to using rem units? I want to make sure that these overrides are always based on the root font size. This is a personal preference of mine and you can use whatever units you want. Just be aware that it\u2019s better for accessibility to use flexible units like em, rem and %, so that a user\u2019s font size preferences are honoured. \nA more advanced example\nAlthough the flow utility is super useful for a plethora of contexts, it really shines when working with a few unrelated components. Instead of having to write specific layout CSS just for your particular context, you can use flow and --flow-space to create predictable and contextual space.\nSee the Pen CSS Flow Utility: Unrelated components by Andy Bell (@hankchizljaw) on CodePen.\nhttps://codepen.io/hankchizljaw/pen/ZmPGyL\nIn this example, we\u2019ve got ourselves a little prototype layout that features a media element, followed by a grid of features. By using flow, it was really quick and easy to generate space between those two main elements. It was also easy to create space within the components. For example, I added it to the .media__content element, so that the article\u2019s content would space itself:\n
<div class=\"media__content flow\"> <!-- element name illustrative -->
 ...
</div>
\nSomething to remember though: the custom properties cascade in the same way that other CSS values do, so you\u2019ve got to keep that in mind. We\u2019ve got a great example of that in this example where because we\u2019ve got the flow utility on our .features component, which has a --flow-space override: the child elements of .features will inherit that value, so we\u2019ve had to set another value on the .features__list element.\n\u201cBut what about old browsers?\u201d, I hear you cry\nWe\u2019re using CSS Custom Properties that at the time of writing, have about 88% support. One thing we can do to remedy the other 12% of browsers is to set a default, traditional margin-top value of 1em, so it calculates itself based on the element\u2019s font-size:\n.flow {\n --flow-space: 1em;\n}\n\n.flow > * + * { \n margin-top: 1em;\n margin-top: var(--flow-space);\n}\nThanks to the cascading and declarative nature of CSS, we can set that default margin-top value and then immediately set it to use the custom property instead. Browsers that understand Custom Properties will automatically apply them\u2014those that don\u2019t will ignore them. Yay for the cascade and progressive enhancement! \nWrapping up\nThis tiny little utility can bring great power for when you want to consistently space elements, vertically. It also\u2014thanks to the power of the modern web\u2014allows us to create contextual overrides without creating modifier classes or shame CSS. \nIf you\u2019ve got other methods of doing this sort of work, please let me know on Twitter. I\u2019d love to see what you\u2019re working on!", "year": "2018", "author": "Andy Bell", "author_slug": "andybell", "published": "2018-12-07T00:00:00+00:00", "url": "https://24ways.org/2018/managing-flow-and-rhythm-with-css-custom-properties/", "topic": "code"} {"rowid": 246, "title": "Designing Your Site Like It\u2019s 1998", "contents": "It\u2019s 20 years to the day since my wife and I started Stuff & Nonsense, our little studio and my outlet for creative ideas on the web. To celebrate this anniversary\u2014and my fourteenth contribution to 24 ways\u2014 I\u2019d like to explain how I would\u2019ve developed a design for Planes, Trains and Automobiles, one of my favourite Christmas films.\nMy design for Planes, Trains and Automobiles is fixed at 800px wide.\nDeveloping a framework\nI\u2019ll start by using frames to set up the framework for this new website. Frames are individual pages\u2014one for navigation, the other for my content\u2014pulled together to form a frameset. Space is limited on lower-resolution screens, so by using frames I can ensure my navigation always remains visible. I can include any number of frames inside a element.\nI add two rows to my ; the first is for my navigation and is 50px tall, the second is for my content and will resize to fill any available space. As I don\u2019t want frame borders or any space between my frames, I set frameborder and framespacing attributes to 0:\n\n[\u2026]\n\nNext I add the source of my two frame documents. I don\u2019t want people to be able to resize or scroll my navigation, so I add the noresize attribute to that frame:\n\n\n\n\nI do want links from my navigation to open in the content frame, so I give each a name so I can specify where I want links to open:\n\n\n\n\nThe framework for this website is simple as it contains only two horizontal rows. 
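Put together, the two-row frameset looks something like this (the frame document file names here are placeholders):
<frameset rows=\"50,*\" frameborder=\"0\" framespacing=\"0\">
 <frame src=\"navigation.html\" name=\"navigation\" noresize>
 <frame src=\"content.html\" name=\"content\">
</frameset>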
Should I need a more complex layout, I can nest as many framesets\u2014and as many individual documents\u2014as I need:\n\n \n \n \n \n \n\nLetterbox framesets were common way to deal with multiple screen sizes. In a letterbox, the central frameset had a fixed height and width, while the frames on the top, right, bottom, and left expanded to fill any remaining space.\nHandling older browsers\nSadly not every browser supports frames, so I should send a helpful message to people who use older browsers asking them to upgrade. Happily, I can do that using noframes content:\n\n<body>\n<p>This page uses frames, but your browser doesn\u2019t support them. \n Please upgrade your browser.</p>\n</body>\n\nForcing someone back into a frame\nSometimes, someone may follow a link to a page from a portal or search engine, or they might attempt to open it in a new window or tab. If that page properly belongs inside a , people could easily miss out on other parts of a design. This short script will prevent this happening and because it\u2019s vanilla Javascript, it doesn\u2019t require a library such as jQuery:\n\n\nLaying out my page\nBefore starting my layout, I add a few basic background and colour styles. I must include these attributes in every page on my website:\n\nI want absolute control over how people experience my design and don\u2019t want to allow it to stretch, so I first need a which limits the width of my layout to 800px. The align attribute will keep this
in the centre of someone\u2019s screen:\n
\n \n \n \n
[\u2026]
\nAlthough they were developed for displaying tabular information, the cells and rows which make up the element make it ideal for the precise implementation of a design. I need several tables\u2014often nested inside each other\u2014to implement my design. These include tables for a banner and three rows of content:\n
\n
[\u2026]
\n \n
\n
[\u2026]
\n \n \n [\u2026]
\n [\u2026]
\n\nThe width of the first table\u2014used for my banner\u2014is fixed to match the logo it contains. As I don\u2019t need borders, padding, or spacing between these cells, I use attributes to remove them:\n\n \n \n \n
\"Logo\"
\nThe next table\u2014which contains the largest image, introduction, and a call-to-action\u2014is one of the most complex parts of my design, so I need to ensure its layout is pixel perfect. To do that I add an extra row at the top of this table and fill each of its cells with tiny transparent images:\n\n \n \n\nThe height and width of these \u201cshims\u201d or \u201cspacers\u201d is only 1px but they will stretch to any size without increasing their weight on the page. This makes them perfect for performant website development.\nFor the hero of this design, I splice up the large image into three separate files and apply each slice as a background to the table cells. I also match the height of those cells to the background images:\n\n \u00a0\n [\u2026]\n\n\n\n \u00a0\n\nI use tables and spacer images throughout the rest of this design to lay out the various types of content with perfect precision. For example, to add a single-pixel border around my two columns of content, I first apply a blue background to an outer table along with 1px of cellspacing, then simply nest an inner table\u2014this time with a white background\u2014inside it:\n\n \n \n \n
\n \n[\u2026]\n
\n
\nAdding details\nTables are fabulous tools for laying out a page, but they\u2019re also useful for implementing details on those pages. I can use a table to add a gradient background, rounded corners, and a shadow to the button which forms my \u201cBuy the DVD\u201d call-to-action. First, I splice my button graphic into three slices; two fixed-width rounded ends, plus a narrow gradient which stretches and makes this button responsive. Then, I add those images as backgrounds and use spacers to perfectly size my button:\n\n \n \n\n \n\n \n \n
\n
\n Buy the DVD\n
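Filled out, a button like that looks something like this (the slice, spacer, and link file names, and the pixel sizes, are placeholders):
<table cellpadding=\"0\" cellspacing=\"0\" border=\"0\">
 <tr>
  <td background=\"buy-left.gif\" width=\"12\" height=\"32\"><img src=\"spacer.gif\" width=\"12\" height=\"1\" alt=\"\"></td>
  <td background=\"buy-middle.gif\" height=\"32\" align=\"center\"><a href=\"dvd.html\">Buy the DVD</a></td>
  <td background=\"buy-right.gif\" width=\"12\" height=\"32\"><img src=\"spacer.gif\" width=\"12\" height=\"1\" alt=\"\"></td>
 </tr>
</table>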
\n
\nI use those same elements to add details to headlines and lists too. Adding a \u201cbullet\u201d to each item in a list needs only two additional table cells, a circular graphic, and a spacer:\n\n \n \n \n \n \n
\u00a0\u00a0Directed by John Hughes
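One of those list rows, filled out, looks something like this (the bullet and spacer file names and sizes are placeholders):
<tr>
 <td width=\"12\"><img src=\"bullet.gif\" width=\"12\" height=\"12\" alt=\"\"></td>
 <td width=\"6\"><img src=\"spacer.gif\" width=\"6\" height=\"1\" alt=\"\"></td>
 <td>Directed by John Hughes</td>
</tr>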
\nImplementing a typographic hierarchy\nSo far I\u2019ve explained how to use frames, tables, and spacers to develop a layout for my content, but what about styling that content? I use elements to change the typeface from the browser\u2019s default to any font installed on someone\u2019s device:\nPlanes, Trains and Automobiles is a comedy film [\u2026]\nTo adjust the size of those fonts, I use the size attribute and a value between the smallest (1) and the largest (7) where 3 is the browser\u2019s default. I use a size of 4 for this headline and 2 for the text which follows:\nSteve Martin\n\nAn American actor, comedian, writer, producer, and musician.\nWhen I need to change the typeface, perhaps from a sans-serif like Arial to a serif like Times New Roman, I must change the value of the face attribute on every element on all pages on my website.\nNB: I use as many
elements as needed to create space between headlines and paragraphs.\nView the final result (and especially the source.)\nMy modern day design for Planes, Trains and Automobiles.\nI can imagine many people reading this and thinking \u201cThis is terrible advice because we don\u2019t develop websites like this in 2018.\u201d That\u2019s true.\nWe have the ability to embed any number of web fonts into our products and websites and have far more control over type features, leading, ligatures, and sizes:\nfont-variant-caps: titling-caps;\nfont-variant-ligatures: common-ligatures;\nfont-variant-numeric: oldstyle-nums;\nGrid has simplified the implementation of even the most complex compound grid down to just a few lines of CSS:\nbody {\n display: grid;\n grid-template-columns: 3fr 1fr 2fr 2fr 1fr 3fr;\n grid-template-rows: auto;\n grid-column-gap: 2vw;\n grid-row-gap: 1vh;\n}\nFlexbox has made it easy to develop flexible components such as navigation links:\nnav ul { display: flex; }\nnav li { flex: 1; }\nJust one line of CSS can create multiple columns of fluid type:\nmain { column-width: 12em; }\nCSS Shapes enable text to flow around irregular shapes including polygons:\n[src*=\"main-img\"] {\n float: left;\n shape-outside: polygon(\u2026);\n}\nToday, we wouldn\u2019t dream of using images and a table to add a gradient, rounded corners, and a shadow to a button or link, preferring instead:\n.btn {\n background: linear-gradient(#8B1212, #DD3A3C);\n border-radius: 1em;\n box-shadow: 0 2px 4px 0 rgba(0,0,0,0.50), inset 0 -1px 1px 0 rgba(0,0,0,0.50);\n}\nCSS Custom Properties, feature and media queries, filters, pseudo-elements, and SVG; the list of advances in HTML, CSS, and other technologies goes on. So does our understanding of how best to use them by separating content, structure, presentation, and behaviour. As 2018 draws to a close, we\u2019re certain we know how to design and develop products and websites better than we did at the end of 1998.\nStrange as it might seem looking back, in 1998 we were also certain our techniques and technologies were the best for the job. That\u2019s why it\u2019s dangerous to believe with absolute certainty that the frameworks and tools we increasingly rely on today\u2014tools like Bootstrap, Bower, and Brunch, Grunt, Gulp, Node, Require, React, and Sass\u2014will be any more relevant in the future than elements, frames, layout tables, and spacer images are today.\nI have no prediction for what the web will be like twenty years from now. However, I want to believe we\u2019ll build on what we\u2019ve learned during these past two decades about the importance of accessibility, flexibility, and usability, and that the mistakes we made while infatuated by technologies won\u2019t be repeated.\n\nHead over to my website if you\u2019d like to read about how I\u2019d implement my design for \u2018Planes, Trains and Automobiles\u2019 today.", "year": "2018", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2018-12-23T00:00:00+00:00", "url": "https://24ways.org/2018/designing-your-site-like-its-1998/", "topic": "code"} {"rowid": 244, "title": "It\u2019s Beginning to Look a Lot Like XSSmas", "contents": "I dread the office Secret Santa. I have a knack for choosing well-meaning but inappropriate presents, like a bottle of port for a teetotaller, a cheese-tasting experience for a vegan, or heaven forbid, Spurs socks for an Arsenal supporter. Ok, the last one was intentional.\nIt\u2019s the same with gifting code. 
Once, I made a pattern library for A List Apart which I open sourced, and a few weeks later, a glaring security vulnerability was found in it. My gift was so generous that it enabled unrestricted access to any file on any public-facing server that hosted it.\nWith platforms like GitHub and npm, giving the gift of code is so easy it\u2019s practically a no-brainer. This giant, open source yankee swap helps us do our jobs without starting from scratch with every project. But like any gift-giving, it\u2019s also risky.\nVulnerabilities and Open Source\nOpen source code is not inherently more or less vulnerable than closed-source code. What makes it higher risk is that the same piece of code gets reused in lots of places, meaning a hacker can use the same exploit mechanism on the same vulnerable code in different apps.\nGraph showing the number of open source vulnerabilities published per year, from the State of Open Source Security 2017\nIn the first 24 ways article this year, Katie referenced a few different types of vulnerability:\n\nCross-site Request Forgery (also known as CSRF)\nSQL Injection\nCross-site Scripting (also known as XSS)\n\nThere are many more types of vulnerability, and those that live under the same category share similarities. \nFor example, my favourite \u2013 is it weird to have a favourite vulnerability? \u2013 is Cross Site Scripting (XSS), which allows for the injection of scripts into web pages. This is a really common vulnerability often unwittingly added by developers. OWASP (the Open Web Application Security Project) wrote a great article about how to prevent opening the door to XSS attacks \u2013 share it generously with your colleagues.\nMost vulnerabilities like this are not added intentionally \u2013 they\u2019re doors left ajar due to the way something has been scripted, like the over-generous code in my pattern library. \nOthers, though, are added intentionally. A few months ago, a hacker, disguised as a helpful elf, offered to take over the maintenance of a popular npm package that had been unmaintained for a couple of years. The owner had moved onto other projects, and was keen to see it continue to be maintained by someone else, so transferred ownership. Fast-forward 3 months, it was discovered that the individual had quietly added a malicious package to the codebase, and the obfuscated code in it had been unwittingly installed onto thousands of apps. The code added was designed to harvest Bitcoin if it was run alongside another application. It was only spotted due to a developer\u2019s curiosity.\nAnother tactic to get developers to unwittingly install malicious packages into their codebase is \u201ctyposquatting\u201d \u2013 back in August last year, npm reported that a user had been publishing packages with very similar names to popular packages (for example, crossenv instead of cross-env). \nThis is a big wakeup call for open source maintainers. Techniques like this are likely to be used more as the maintenance of open source libraries becomes an increasing burden to their owners. After all, starting a new project often has a greater reward than maintaining an existing one, but remember, an open source library is for life, not just for Christmas.\nSanta\u2019s on his sleigh\nIf you use open source libraries, chances are that these libraries also use open source libraries. 
Your app may only have a handful of dependencies, but tucked in the back of that sleigh may be a whole extra sack of dependencies known as deep dependencies (ones that you didn\u2019t directly install, but are dependencies of that dependency), and these can contain vulnerabilities too.\nLet\u2019s look at the npm package santa as an example. santa has 8 direct dependencies listed on npm. That seems pretty manageable. But that\u2019s just the tip of the iceberg \u2013 have a look at the full dependency tree which contains 109 dependencies \u2013 more dependencies than there are Christmas puns in this article. Only one of these direct dependencies has a vulnerability (at the time of writing), but there are actually 13 other known vulnerabilities in santa, which have been introduced through its deeper dependencies.\nFixing vulnerabilities \u2013 the ultimate christmas gift\nIf you\u2019re a maintainer of open source libraries, taking good care of them is the ultimate gift you can give. Keep your dependencies up to date, use a security tool to monitor and alert you when new vulnerabilities are found in your code, and fix or patch them promptly. This will help keep the whole open source ecosystem healthy.\nWhen you find out about a new vulnerability, you have some options:\nFix the vulnerability via an upgrade\nYou can often fix a vulnerability by upgrading the library to the latest version. Make sure you\u2019re using software that monitors your dependencies for new security issues and lets you know when a fix is ready, otherwise you may be unwittingly using a vulnerable version.\nPatch the vulnerable code\nSometimes, a fix for a vulnerable library isn\u2019t possible. This is often the case when a library is no longer being maintained, or the version of the library being used might be so out of date that upgrading it would cause a breaking change. Patches are bits of code that will fix that particular issue, but won\u2019t change anything else.\nSwitch to a different library\nIf the library you\u2019re using has no fix or patch, you may be better of switching it out for another one, particularly if it looks like it\u2019s being unmaintained.\nResponsibly disclosing vulnerabilities\nKnowing how to responsibly disclose vulnerabilities is something I\u2019m ashamed to admit that I didn\u2019t know about before I joined a security company. But it\u2019s so important! On discovering a new vulnerability, a developer has a few options: \n\nA malicious developer will exploit that vulnerability for their own gain. \nA reckless (or inexperienced) developer will disclose that vulnerability to the world without following a responsible disclosure process. This opens the door to an unethical developer exploiting the vulnerability. At Snyk, we monitor social media for mentions of newly found vulnerabilities so we can add them to our database and share fixes before they get exploited.\nAn ethical and aware developer will follow what\u2019s known as a \u201cresponsible disclosure process\u201d. They will contact the maintainer of the code privately, allowing reasonable time for them to release a fix for the issue and to give others who use that vulnerable code a chance to fix it too.\n\nIt\u2019s important to understand this process if you\u2019re a maintainer or contributor of code. It can be daunting when a report comes in, but understanding and following the right steps will help reduce the risk to the people who use that code.\nSo what does responsible disclosure look like? 
I\u2019ll take Node.js\u2019s security disclosure policy as an example. They ask that all security issues that are found in Node.js are reported there. (There\u2019s a separate process for bug found in third-party npm packages). Once you\u2019ve reported a vulnerability, they promise to acknowledge it within 24 hours, and to give a more detailed response within 48 hours. If they find that the issue is indeed a security bug, they\u2019ll give you regular updates about the progress they\u2019re making towards fixing it. As part of this, they\u2019ll figure out which versions are affected, and prepare fixes for them. They\u2019ll assign the vulnerability a CVE (Common Vulnerabilities and Exposures) ID and decide on an embargo date for public disclosure. On the date of the embargo, they announce the vulnerability in their Node.js security mailing list and deploy fixes to nodejs.org.\nTim Kadlec published an in-depth article about responsible disclosures if you\u2019re interested in knowing more. It has some interesting horror stories of what happened when the disclosure process was not followed.\nEncourage responsible disclosure\nAdd a SECURITY.md file to your project so someone who wants to message you about a vulnerability can do so without having to hunt around for contact details. Last year, Snyk published a State of Open Source Security report that found 79.5% of maintainers do not have a public disclosure policy. Those that did were considerably more likely to get notified privately about a vulnerability \u2013 73% of maintainers who had one had been notified, vs 21% of maintainers who hadn\u2019t published one one.\nStats from the State of Open Source Security 2017\nBug bounties\nSome companies run bug bounties to encourage the responsible disclosure of vulnerabilities. By offering a reward for finding and safely disclosing a vulnerability, it also reduces the enticement of exploiting a vulnerability over reporting it and getting a quick cash reward. Hackerone is a community of ethical hackers who pentest apps that have signed up for the scheme and get paid when they find a new vulnerability. Wordpress is one such participant, and you can see the long list of vulnerabilities that have been disclosed as part of that program.\nIf you don\u2019t have such a bounty, be prepared to get the odd vulnerability extortion email. Scott Helme, who founded securityheaders.com and report-uri.com, wrote a post about some of the requests he gets for a report about a critical vulnerability in exchange for money. \n\nOn one hand, I want to be as responsible as possible and if my users are at risk then I need to know and patch this issue to protect them. On the other hand this is such irresponsible and unethical behaviour that interacting with this person seems out of the question.\n\nA gift worth giving\nIt\u2019s time to brush the dust off those old gifts that we shared and forgot about. 
Practice good hygiene and run them through your favourite security tool \u2013 I\u2019m just a little biased towards Snyk, but as Katie mentioned, there\u2019s also npm audit if you use Node.js, and most source code managers like GitHub and GitLab have basic vulnerability alert capabilities.\nStats from the State of Open Source Security 2017\nMost importantly, patch or upgrade away those vulnerabilities away, and if you want to share that Christmas spirit, open fixes for your favourite open source projects, too.", "year": "2018", "author": "Anna Debenham", "author_slug": "annadebenham", "published": "2018-12-17T00:00:00+00:00", "url": "https://24ways.org/2018/its-beginning-to-look-a-lot-like-xssmas/", "topic": "code"} {"rowid": 253, "title": "Clip Paths Know No Bounds", "contents": "CSS Shapes are getting a lot of attention as browser support has increased for properties like shape-outside and clip-path. There are a few ways that we can use CSS Shapes, in particular with the clip-path property, that are not necessarily evident at first glance.\nThe basics of a clip path\nBefore we dig into specific techniques to expand on clip paths, we should first take a look at a basic shape and clip-path. Clip paths can apply a CSS Shape such as a circle(), ellipse(), inset(), or the flexible polygon() to any element. Everywhere in the element that is not within the bounds of our shape will be visually removed.\nUsing the polygon shape function, for example, we can create triangles, stars, or other straight-edged shapes as on Bennett Feely\u2019s Clippy. While fixed units like pixels can be used when defining vertices/points (where the sides meet), percentages will give more flexibility to adapt to the element\u2019s dimensions.\nSee the Pen Clip Path Box by Dan Wilson (@danwilson) on CodePen.\n\nSo for an octagon, we can set eight x, y pairs of percentages to define those points. In this case we start 30% into the width of the box for the first x and at the top of the box for the y and go clockwise. The visible area becomes the interior of the shape made by connecting these points with straight lines.\nclip-path: polygon(\n 30% 0%,\n 70% 0%,\n 100% 30%,\n 100% 70%,\n 70% 100%,\n 30% 100%,\n 0% 70%,\n 0% 30%\n);\nA shape with less vertices than the eye can see\nIt\u2019s reasonable to look at the polygon() function and assume that we need to have one pair of x, y coordinates for every point in our shape. However, we gain some flexibility by thinking outside the box \u2014 or more specifically when we think outside the range of 0% - 100%.\nOur element\u2019s box model will be the ultimate boundary for a clip-path, but we can still define points that exist beyond that natural box for an element.\nSee the Pen CSS Shapes Know No Bounds by Dan Wilson (@danwilson) on CodePen.\n\nBy going beyond the 0% - 100% range we can turn a polygon with three points into a quadrilateral, a pentagon, or a hexagon. 
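As a quick sketch (these values are mine, not the ones used in the Pen), a polygon() with only three points can produce a six-sided visible area, because the corners of the triangle sit outside the element box and the box edges do the rest of the cutting:
.clipped {
  /* a triangle whose three points all lie outside the box */
  clip-path: polygon(50% -50%, 150% 100%, -50% 100%);
}
The Pen linked above plays the same trick: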
In this example the shapes used are all similar triangles defining three points, but due to exceeding the bounds for our element box we visually see one triangle and two pentagons.\nOur earlier octagon can similarly be made with only four points.\nSee the Pen Octagon with four points by Dan Wilson (@danwilson) on CodePen.\n\nMultiple shapes, one clip path\nWe can lean on this power of going beyond the bounds of our element to also create more than one visual shape with a single polygon().\nSee the Pen Multiple shapes from one clip-path by Dan Wilson (@danwilson) on CodePen.\n\nDepending on how we lay it out we can make each shape directly, but since we know we can move around in the space beyond the element\u2019s box, we can draw extra lines to help us get where we need to go next as needed.\nIt can also help us in slicing an element. Combined with CSS Variables, we can work with overlapping elements and clip each one into alternating strips. This example is two elements, each divided into a few rectangles.\nSee the Pen 24w: Sliced Icon by Dan Wilson (@danwilson) on CodePen.\n\nDifferent shapes with fill rules\nA polygon() is not just a collection of points. There is one more key piece to its puzzle according to the specification \u2014 the Fill Rule. The default value we have been using so far is nonzero, and the second option is evenodd. These two values help determine what is considered inside and outside the shape.\nSee the Pen A Star Multiways by Dan Wilson (@danwilson) on CodePen.\n\nAs lines intersect we can get into situations where pieces seemingly on the inside can be considered outside the shape boundary. When using the evenodd fill rule, we can determine if a given point is inside or outside the boundary by drawing a ray from the point in any direction. If the ray crosses an even number of the clip path\u2019s lines, the point is considered outside, and if it crosses an odd number the point is inside.\nOrder of operations\nIt is important to note that there are many CSS properties that affect the final composited appearance of an element via CSS Filters, Blend Modes, and more.\nThese compositing effects are applied in the order:\n\nCSS Filters (e.g. filter: blur(2px))\nClipping (e.g. what this article is about)\nMasking (Clipping\u2019s cousin)\nBlend Modes (e.g. mix-blend-mode: multiply)\nOpacity\n\nThis means if we want to have a star shape and blur it, the blur will happen before the clip. And since blurs are most noticeable around the edge of an element box, the effect might be completely lost since we have clipped away the element\u2019s box edges.\nSee the Pen Order of Filter + Clip by Dan Wilson (@danwilson) on CodePen.\n\nIf we want the edges of the star to be blurred, we do have the option to wrap our clipped element in a blurred parent element. The inner element will be rendered first (with its star clip) and then the parent will blur its contents normally.\nRevealing content with animation\nCSS Shapes can be transitioned and animated, allowing us to animate the visual area of our element without affecting the content within. For example, we can start with visually hidden content (fully clipped) and grow the clip path to reveal the content within. The important caveat for polygon() is that the number of points need to be the same for each keyframe, as well as the fill rule. Otherwise the browser will not have enough information to interpolate the intermediate values. 
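A reveal along those lines might look something like this (the class names and timing are illustrative):
.reveal {
  /* collapsed: four points, all sitting on the top edge */
  clip-path: polygon(0 0, 100% 0, 100% 0, 0 0);
  transition: clip-path 500ms ease;
}

.reveal.is-visible {
  /* expanded: still four points and the same default nonzero fill rule */
  clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%);
}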
\nSee the Pen Clip Path Shape Reveal by Dan Wilson (@danwilson) on CodePen.\n\nDon\u2019t keep CSS Shapes in a box\nClip paths give us some interesting new possibilities, especially when we think of them as more than just basic shapes. We may be heavily modifying the visual representation of our elements with clip-path, but the underlying content remains unchanged and accessible which makes this property fairly powerful.", "year": "2018", "author": "Dan Wilson", "author_slug": "danwilson", "published": "2018-12-20T00:00:00+00:00", "url": "https://24ways.org/2018/clip-paths-know-no-bounds/", "topic": "code"} {"rowid": 257, "title": "The (Switch)-Case for State Machines in User Interfaces", "contents": "You\u2019re tasked with creating a login form. Email, password, submit button, done.\n\u201cThis will be easy,\u201d you think to yourself.\nLogin form by Selecto\nYou\u2019ve made similar forms many times in the past; it\u2019s essentially muscle memory at this point. You\u2019re working closely with a designer, who gives you a beautiful, detailed mockup of a login form. Sure, you\u2019ll have to translate the pixels to meaningful, responsive CSS values, but that\u2019s the least of your problems.\nAs you\u2019re writing up the HTML structure and CSS layout and styles for this form, you realize that you don\u2019t know what the successful \u201clogged in\u201d page looks like. You remind the designer, who readily gives it to you. But then you start thinking more and more about how the login form is supposed to work.\n\nWhat if login fails? Where do those errors show up?\nShould we show errors differently if the user forgot to enter their email, or password, or both?\nOr should the submit button be disabled?\nShould we validate the email field?\nWhen should we show validation errors \u2013 as they\u2019re typing their email, or when they move to the password field, or when they click submit? (Note: many, many login forms are guilty of this.)\nWhen should the errors disappear?\nWhat do we show during the login process? Some loading spinner?\nWhat if loading takes too long, or a server error occurs?\n\nMany more questions come up, and you (and your designer) are understandably frustrated. The lack of upfront specification opens the door to scope creep, which readily finds itself at home in all the unexplored edge cases.\nModeling Behavior\nDescribing all the possible user flows and business logic of an application can become tricky. Ironically, user stories might not tell the whole story \u2013 they often leave out potential edge-cases or small yet important bits of information.\nHowever, one important (and very old) mathematical model of computation can be used for describing the behavior and all possible states of a user interface: the finite state machine.\nThe general idea, as it applies to user interfaces, is that all of our applications can be described (at some level of abstraction) as being in one, and only one, of a finite number of states at any given time. 
For example, we can describe our login form above in these states:\n\nstart - not submitted yet\nloading - submitted and logging in\nsuccess - successfully logged in\nerror - login failed\n\nAdditionally, we can describe an application as accepting a finite number of events \u2013 that is, all the possible events that can be \u201csent\u201d to the application, either from the user or some other external entity:\n\nSUBMIT - pressing the submit button\nRESOLVE - the server responds, indicating that login is successful\nREJECT - the server responds, indicating that login failed\n\nThen, we can combine these states and events to describe the transitions between them. That is, when the application is in one state, an an event occurs, we can specify what the next state should be:\n\nFrom the start state, when the SUBMIT event occurs, the app should be in the loading state.\nFrom the loading state, when the RESOLVE event occurs, login succeeded and the app should be in the success state.\nIf login fails from the loading state (i.e., when the REJECT event occurs), the app should be in the error state.\nFrom the error state, the user should be able to retry login: when the SUBMIT event occurs here, the app should go to the loading state.\nOtherwise, if any other event occurs, don\u2019t do anything and stay in the same state.\n\nThat\u2019s a pretty thorough description, similar to a user story! It\u2019s also a bit more symbolic than a user story (e.g., \u201cwhen the SUBMIT event occurs\u201d instead of \u201cwhen the user presses the submit button\u201d), and that\u2019s for a reason. By representing states, events, and transitions symbolically, we can visualize what this state machine looks like:\n\nEvery state is represented by a box, and every event is connected to a transition arrow that connects two states. This makes it intuitive to follow the flow and understand what the next state should be given the current state and an event.\nFrom Visuals to Code\nDrawing a state machine doesn\u2019t require any special software; in fact, using paper and pencil (in case anything changes!) does the job quite nicely. However, one common problem is handoff: it doesn\u2019t matter how detailed a user story or how well-designed a visualization is, it eventually has to be coded in order for it to become part of a real application.\nWith the state machine model described above, the same visual description can be mapped directly to code. 
Traditionally, and as the title suggests, this is done using switch/case statements:\nfunction loginMachine(state, event) {\n switch (state) {\n case 'start':\n if (event === 'SUBMIT') {\n return 'loading';\n }\n break;\n case 'loading':\n if (event === 'RESOLVE') {\n return 'success';\n } else if (event === 'REJECT') {\n return 'error';\n }\n break;\n case 'success':\n // Accept no further events\n break;\n case 'error':\n if (event === 'SUBMIT') {\n return 'loading';\n }\n break;\n default:\n // This should never occur\n return undefined;\n }\n}\n\nconsole.log(loginMachine('start', 'SUBMIT'));\n// => 'loading'\nThis is fine (I suppose) but personally, I find it much easier to use objects:\nconst loginMachine = {\n initial: \"start\",\n states: {\n start: {\n on: { SUBMIT: 'loading' }\n },\n loading: {\n on: {\n REJECT: 'error',\n RESOLVE: 'success'\n }\n },\n error: {\n on: {\n SUBMIT: 'loading'\n }\n },\n success: {}\n }\n};\n\nfunction transition(state, event) {\n return machine\n .states[state] // Look up the state\n .on[event] // Look up the next state based on the event\n || state; // If not found, return the current state\n}\n\nconsole.log(transition('start', 'SUBMIT'));\nAs you might have noticed, the loginMachine is a plain JS object, and can be written in JSON. This is important because it allows the machine to be visualized by a 3rd-party tool, as demonstrated here:\n\nA Common Language Between Designers and Developers\nAlthough finite state machines are a fundamental part of computer science, they have an amazing potential to bridge the application specification gap between designers and developers, as well as project managers, stakeholders, and more. By designing a state machine visually and with code, designers and developers alike can:\n\nidentify all possible states, and potentially missing states\ndescribe exactly what should happen when an event occurs on a given state, and prevent that event from having unintended side-effects in other states (ever click a submit button more than once?)\neliminate impossible states and identify states that are \u201cunreachable\u201d (have no entry transition) or \u201csunken\u201d (have no exit transition)\nadd features with full confidence of knowing what other states it might affect\nsimplify redundant states or complex user flows\ncreate test paths for almost every possible user flow, and easily identify edge cases\ncollaborate better by understanding the entire application model equally.\n\nNot a New Idea\nI\u2019m not the first to suggest that state machines can help bridge the gap between design and development.\n\nVince MingPu Shao wrote an article about designing UI states and communicating with developers effectively with finite state machines\nUser flow diagrams, which visually describe the paths that a user can take through an app to achieve certain goals, are essentially state machines. Numerous tools, from Sketch plugins to standalone apps, exist for creating them.\nIn 1999, Ian Horrocks wrote a book titled \u201cConstructing the User Interface with Statecharts\u201d, which takes state machines to the next level and describes the inherent difficulties (and solutions) with creating complex UIs. 
The ideas in the book are still relevant today.\nMore than a decade earlier, David Harel published \u201cStatecharts: A Visual Formalism for Complex Systems\u201d, in which the statechart - an extended hierarchical state machine model - is born.\n\nState machines and statecharts have been used for complex systems and user interfaces, both physical and digital, for decades, and are especially prevalent in other industries, such as game development and embedded electronic systems. Even NASA uses statecharts for the Curiosity Rover and more, citing many benefits:\n\nVisualized modeling\nPrecise diagrams\nAutomatic code generation\nComprehensive test coverage\nAccommodation of late-breaking requirements changes\n\nMoving Forward\nIt\u2019s time that we improve how we communicate between designers and developers, much less improve the way we develop UIs to deliver the best, bug-free, optimal user experience. There is so much more to state machines and statecharts than just being a different way of designing and coding. For more resources:\n\nThe World of Statecharts is a comprehensive guide by Erik Mogensen in using statecharts in your applications\nThe Statechart Community on Spectrum is always full of interesting ideas and questions related to state machines, statecharts, and software modeling\nI gave a talk at React Rally over a year ago about how state machines (finite automata) can improve the way we develop applications. The latest one is from Reactive Conf, where I demonstrate how statecharts can be used to automatically generate test cases.\nI have also been working on XState, which is a library for \u201cstate machines and statecharts for the modern web\u201d. You can create and visualize statecharts in JavaScript, and use them in any framework (and soon enough, multiple different languages).\n\nI\u2019m excited about the future of developing web and mobile applications with statecharts, especially with regard to faster design/development cycles, auto-generated testing, better error prevention, comprehensive analytics, and even the use of model-based reinforcement learning and artificial intelligence to greatly improve the user experience.", "year": "2018", "author": "David Khourshid", "author_slug": "davidkhourshid", "published": "2018-12-12T00:00:00+00:00", "url": "https://24ways.org/2018/state-machines-in-user-interfaces/", "topic": "code"} {"rowid": 264, "title": "Dynamic Social Sharing Images", "contents": "Way back when social media was new, you could be pretty sure that whatever you posted would be read by those who follow you. If you\u2019d written a blog post and you wanted to share it with those who follow you, you could post a link and your followers would see it in their streams. Oh heady days! \nWith so many social channels and a proliferation of content and promotions flying past in everyone\u2019s streams, it\u2019s no longer enough to share content on social media, you have to actively sell it if you want it to be seen. You really need to make the most of every opportunity to catch a reader\u2019s attention if you\u2019re trying to get as many eyes as possible on that sweet, sweet social content.\nOne of the best ways to grab attention with your posts or tweets is to include an image. There\u2019s heaps of research that says that having images in your posts helps them stand out to followers. Reports I found showed figures from anything from 35% to 150% improvement from just having image in a post. 
Unfortunately, the details were surrounded with gross words like engagement and visual marketing assets and so I had to close the page before I started to hate myself too much.\nSo without hard stats to quote, we\u2019ll call it a rule of thumb. The rule of thumb is that posts with images will grab more attention than those without, so it makes sense that when adding pages to a website, you should make sure that they have social media sharing images associated with them.\nAdding sharing images\nThe process for declaring an image to be used in places like Facebook and Twitter is very simple, and at this point is familiar to many of us. You add a meta tag to the head of the page to point to the location of the image to use. When a link to the page is added to a post, the social network will fetch the page, look for the meta tag and then use the image you specified.\n\nThere\u2019s a good post on this over at CSS-Tricks if you need to bone up on the details of this and other similar meta tags for social media sharing.\nThis is all fine and well for content that has a very obvious choice of image to go along with it, but what if you don\u2019t necessarily have an image? One approach is to use stock photography, but that\u2019s not going to be right for every situation.\nThis was something we faced with 24 ways in 2017. We wanted to add images to the tweets we post each day announcing a new article. Some articles have images, but not all, and there tended not to be any consistency in terms of imagery from one article to the next. We always have an author photograph, but those don\u2019t usually lend themselves directly to being the main \u2018hero\u2019 image for an article.\nPutting his thinking cap on, Paul came up with a design for an image that used the author photo along with a quote extracted from the article.\nOne of the hand-made sharing images from 2017\nEach day we would pick a quote from the article, and Paul would manually compose an image to be uploaded to the site. The results were great, but the whole process was a bit too labour intensive and relied on an individual (Paul) being available each day to do the work. I thought we could probably improve this.\nHatching a new plan\nOne initial idea I came up with was to script the image editor to dynamically build a new image by pulling content from our database. Sketch has plugins available to pull JSON content into a design, and our CMS can easily output JSON data, so that was one possibility.\nThe more I thought about this and how much I wish graphic design tools worked just a little bit more like CSS, the obvious solution hit me. We should just build it with CSS!\nIn fact, as the author name and image already exist in our CMS, and the visual styling is based on the design of the website, couldn\u2019t this just be another page on the site generated by the CMS?\nBreaking it down, I figured the steps needed would be something like:\n\nCreate the CSS to lay out a component that could be turned into an image\nAdd a new field to articles in the CMS to hold a handpicked quote\nBuild a new article template in the CMS to output the author name and quote dynamically for any article\n\u2026 um \u2026 screenshot?\n\nI thought I\u2019d get cracking and see if I could figure out the final steps later.\nBuilding the page\nThe first thing to tackle was the basic HTML and CSS to lay out the components for our image. That bit was really easy, as I just asked Paul to do it. 
Everyone should have a Paul.\nPaul\u2019s code uses a fixed dimension container in the HTML, set to 600 \u00d7 315px. This is to make it the correct aspect ratio for Facebook\u2019s recommended image size. It\u2019s useful to remember here that it doesn\u2019t need to be responsive or robust, as the page only needs to lay out correctly for a screenshot and a fixed size in a known browser.\nWith the markup and CSS in place, I turned this into a new template. Our CMS can easily display content through any number of templates, so I created a version of the article template that was totally stripped down. It only included the author details and the quote, along with Paul\u2019s markup.\nI also added the quote as a new field on the article in the CMS, so each \u2018image\u2019 could be quickly and easily customised in the editing process.\nI added a new field to the article template to capture the quote.\nWith very little effort, we quickly had a page to dynamically generate our \u2018image\u2019 right from the CMS. You can see any of them by adding /sharing onto the end of an article URL for any 2018 article.\nOur automatically generated layout direct from the CMS\nIt soon became clear that the elusive Step 4 was going to be the tricky part. I can create a small page on the site that looks like an image, but how should I go about turning it into one? An obvious route is to screenshot the page by hand, but that\u2019s going back to some of the manual steps I was trying to eliminate, and also opens up a possibility for errors to be made. But it did lead me to the thought\u2026 how could I automatically take a screenshot?\nEnter Puppeteer\nPuppeteer is a Node.js library that provides a nice API onto Headless Chrome. What is Headless Chrome, you ask? It\u2019s just a version of the Chrome browser than runs from the command line without ever drawing anything to a user interface window. It loads pages, renders CSS, runs JavaScript, pretty much every normal thing that Chrome on the desktop does, but without a clicky user interface.\nHeadless Chrome can be used for all sorts of things such as running automated tests on front-end code after making changes, or\u2026 get this\u2026 rendering pages that can be used for screenshots. The actual process of writing some code to control Chrome and to take the screenshot is where Puppeteer comes in. Puppeteer puts a friendly layer in front of big old scary Chrome to enable us to interact with it using simple JavaScript code running in Node.\nUsing Puppeteer, I can write a small script that will repeatably turn a URL into an image. So simple is it to do this, that\u2019s it\u2019s actually Puppeteer\u2019s \u2018hello world\u2019 example.\nFirst you install Puppeteer. It downloads a compatible headless browser (actually Chromium) as a dependancy, so you don\u2019t need to worry about installing that. At the command line:\nnpm i puppeteer\nThen save a new file as example.js with this code:\nconst puppeteer = require('puppeteer');\n\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto('https://example.com');\n await page.screenshot({path: 'example.png'});\n await browser.close();\n})();\nand then run it using Node:\nnode example.js\nThis will output an image file example.png to disk, which contains a screenshot of, in this case https://example.com. 
The logic of the code is reasonably easy to follow:\n\nLaunch a browser\nOpen up a new page\nGoto a URL\nTake a screenshot\nClose the browser\n\nThe async function and await keywords are a way to have the script pause and wait for normally asynchronous code to return before proceeding. That\u2019s useful with actions like loading a web page that might take some time to complete. They\u2019re used with Promises, and the effect is to make asynchronous code behave as if it\u2019s synchronous. You can read more about async and await at MDN if you\u2019re interested.\nThat\u2019s a good proof-of-concept using the basic Puppeteer example. I can take a screenshot of a URL! But what happens if I put the URL of my new special page in there?\nOur content is up in the corner of the image with lots of empty space.\nThat\u2019s not great. It\u2019s okay, but not great. It looks like that, by default, Puppeteer takes a screenshot with a resolution of 800 \u00d7 600, so we need to find out how to adjust that. Fortunately, the docs aren\u2019t the worst and I was able to find the page.setViewport() method pretty easily.\nconst puppeteer = require('puppeteer');\n\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto('https://24ways.org/2018/clip-paths-know-no-bounds/sharing');\n await page.setViewport({\n width: 600,\n height: 315\n });\n await page.screenshot({path: 'example.png'});\n await browser.close();\n})();\nThis worked! The screenshot is now 600 \u00d7 315 as expected. That\u2019s exactly what we asked for. Trouble is, that\u2019s a bit low res and it is nearly 2019 after all. While in those docs, I noticed the deviceScaleFactor option that can be passed to page.setViewport(). Setting that to 2 gives us an image of the same area of the screen, but with twice as many pixels.\n await page.setViewport({\n width: 600,\n height: 315,\n deviceScaleFactor: 2\n });\nPerfect! We now have a programmatic way of turning a URL into an image.\nImproving the script\nRather than having a script with a fixed URL in it that outputs an image called example.png, the next step is to make that a bit more dynamic. The aim here is to have a script that we can run with a URL as an argument and have it output an image for that one page. That way we can run it manually, or hook it into part of our site\u2019s build process to automate the generation of the image.\nOur goal is to call the script like this:\nnode shoot-sharing-image.js https://24ways.org/2018/clip-paths-know-no-bounds/\nAnd I want the image to come out with the name clip-paths-know-no-bounds.png. To do that, I need to have my script look for command arguments, and then to split the URL up to grab the slug from it.\n// Get the URL and the slug segment from it\nconst url = process.argv[2];\nconst segments = url.split('/');\n// Get the second-to-last segment (the slug)\nconst slug = segments[segments.length-2];\nWe can then use these variables later in the script, remembering to add sharing back onto the end of the URL to get our dedicated page.\n(async () => {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await page.goto(url + 'sharing');\n await page.setViewport({\n width: 600,\n height: 315,\n deviceScaleFactor: 2\n });\n await page.screenshot({path: slug + '.png'});\n await browser.close();\n})();\nOnce you\u2019re generating the image with Node, there\u2019s all sorts of things you can do with it. 
An obvious step is to move it to the correct location within your site or project.\nYou can also run optimisations on the file. I\u2019m using imagemin with pngquant to reduce the file size a little.\nconst imagemin = require('imagemin');\nconst imageminPngquant = require('imagemin-pngquant');\n\nawait imagemin([slug + '.png'], 'build', {\n plugins: [\n imageminPngquant({quality: '75-90'})\n ]\n});\n\nYou can see the completed example as a gist.\nIntegrating it with your CMS\nSo we now have a command we can run to take a URL and generate a custom image for that URL. It\u2019s in a format that can be called by any sort of build script, or triggered from a publishing hook in a CMS. Exactly how you do that is going to depend on the way your site is built and the technology stack you\u2019re using, but it\u2019s likely not too hard as long as you can run a command as part of the process.\nFor 24 ways this year, I\u2019ve been running the script by hand once each article is ready. My script adds the file to a git repo and pushes to a deployment remote that is configured to automatically deploy static assets to our server. Along with our theme of making incremental improvements, next year I\u2019ll look to automate this one step further.\nWe may also look at having a few slightly different layouts to choose from, so that each day isn\u2019t exactly the same as the last. Interestingly, we could even try some A/B tests to see if there\u2019s any particular format of image or type of quote that does a better job of grabbing attention. There are lots of possibilities!\n\nBy using a bit of ingenuity, some custom CMS templates, and the very useful Puppeteer project, we\u2019ve been able to reliably produce dynamic social media sharing images for all of our articles. In doing so, we reduced the dependancy on any individual for producing those images, and opened up a world of possibilities in how we use those images.\nI hope you\u2019ll give it a try!", "year": "2018", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2018-12-24T00:00:00+00:00", "url": "https://24ways.org/2018/dynamic-social-sharing-images/", "topic": "code"} {"rowid": 241, "title": "Jank-Free Image Loads", "contents": "There are a few fundamental problems with embedding images in pages of hypertext; perhaps chief among them is this: text is very light and loads rather fast; images are much heavier and arrive much later. Consequently, millions (billions?) of times a day, a hapless Web surfer will start reading some text on a page, and then \u2014\nYour browser doesn\u2019t support HTML5 video. Here is\n a link to the video instead.\n\n\u2014 oops! \u2014 an image pops in above it, pushing said text down the page, and our poor reader loses their place.\nBy default, partially-loaded pages have the user experience of a slippery fish, or spilled jar of jumping beans. For the rest of this article, I shall call that jarring, no-good jumpiness by its name: jank. And I\u2019ll chart a path into a jank-free future \u2013 one in which it\u2019s easy and natural to author elements that load like this:\nYour browser doesn\u2019t support HTML5 video. Here is\n a link to the video instead.\n\nJank is a very old problem, and there is a very old solution to it: the width and height attributes on . 
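Those attributes live on the img element itself. For example (the file name and dimensions here are placeholders):
<img src=\"dog.jpg\" width=\"800\" height=\"600\" alt=\"A dog\">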
The idea is: if we stick an image\u2019s dimensions right into the HTML, browsers can know those dimensions before the image loads, and reserve some space on the layout for it so that nothing gets bumped down the page when the image finally arrives.\n\nwidth\nSpecifies the intended width of the image in pixels. When given together with the height, this allows user agents to reserve screen space for the image before the image data has arrived over the network.\n\n\u2014The HTML 3.2 Specification, published on January 14 1997\nUnfortunately for us, when width and height were first spec\u2019d and implemented, layouts were largely fixed and images were usually only intended to render at their fixed, actual dimensions. When image sizing gets fluid, width and height get weird:\nSee the Pen fluid width + fixed height = distortion by Eric Portis (@eeeps) on CodePen.\n\nwidth and height are too rigid for the responsive world. What we need, and have needed for a very long time, is a way to specify fixed aspect ratios, to pair with our fluid widths.\nI have good news, bad news, and great news.\nThe good news is, there are ways to do this, now, that work in every browser. Responsible sites, and responsible developers, go through the effort to do them.\nThe bad news is that these techniques are all terrible, cumbersome hacks. They\u2019re difficult to remember, difficult to understand, and they can interact with other pieces of CSS in unexpected ways.\nSo, the great news: there are two on-the-horizon web platform features that are trying to make no-jank, fixed-aspect-ratio, fluid-width images a natural part of the web platform.\naspect-ratio in CSS\nThe first proposed feature? An aspect-ratio property in CSS!\nThis would allow us to write CSS like this:\nimg {\n width: 100%;\n}\n\n.thumb {\n aspect-ratio: 1/1;\n}\n\n.hero {\n aspect-ratio: 16/9;\n}\nThis\u2019ll work wonders when we need to set aspect ratios for whole classes of images, which are all sized to fit within pre-defined layout slots, like the .thumb and .hero images, above.\nAlas, the harder problem, in my experience, is not images with known-ahead-of-time aspect ratios. It\u2019s images \u2013 possibly user generated images \u2013 that can have any aspect ratio. The really tricky problem is unknown-when-you\u2019re-writing-your-CSS aspect ratios that can vary per-image. Using aspect-ratio to reserve space for images like this requires inline styles:\n\nAnd inline styles give me the heebie-jeebies! As a web developer of a certain age, I have a tiny man in a blue beanie permanently embedded deep within my hindbrain, who cries out in agony whenever I author a style=\"\" attribute. And you know what? The old man has a point! By sticking super-high-specificity inline styles in my content, I\u2019m cutting off my, (or anyone else\u2019s) ability to change those aspect ratios, for whatever reason, later.\nHow might we specify aspect ratios at a lower level? How might we give browsers information about an image\u2019s dimensions, without giving them explicit instructions about how to style it?\nI\u2019ll tell you: we could give browsers the intrinsic aspect ratio of the image in our HTML, rather than specifying an extrinsic aspect ratio!\nA brief note on intrinsic and extrinsic sizing\nWhat do I mean by \u201cintrinsic\u201d and \u201cextrinsic?\u201d\nThe intrinsic size of an image is, put simply, how big it\u2019d be if you plopped it onto a page and applied no CSS to it whatsoever. 
An 800\u00d7600 image has an intrinsic width of 800px.\nThe extrinsic size of an image, then, is how large it ends up after CSS has been applied. Stick a width: 300px rule on that same 800\u00d7600 image, and its intrinsic size (accessible via the Image.naturalWidth property, in JavaScript) doesn\u2019t change: its intrinsic size is still 800px. But this image now has an extrinsic size (accessible via Image.clientWidth) of 300px.\nIt surprised me to learn this year that height and width are interpreted as presentational hints and that they end up setting extrinsic dimensions (albeit ones that, unlike inline styles, have absolutely no specificity).\nCSS aspect-ratio lets us avoid setting extrinsic heights and widths \u2013 and instead lets us give images (or anything else) an extrinsic aspect ratio, so that as soon as we set one dimension (possibly to a fluid width, like 100%!), the other dimension is set automatically in relation to it.\nThe last tool I\u2019m going to talk about gets us out of the extrinsic sizing game all together \u2014 which, I think, is only appropriate for a feature that we\u2019re going to be using in HTML.\nintrinsicsize in HTML\nThe proposed intrinsicsize attribute will let you do this:\n\nThat tells the browser, \u201chey, this image.jpg that I\u2019m using here \u2013 I know you haven\u2019t loaded it yet but I\u2019m just going to let you know right away that it\u2019s going to have an intrinsic size of 800\u00d7600.\u201d This gives the browser enough information to reserve space on the layout for the image, and ensures that any and all extrinsic sizing instructions, specified in our CSS, will layer cleanly on top of this, the image\u2019s intrinsic size.\nYou may ask (I did!): wait, what if my references multiple resources, which all have different intrinsic sizes? Well, if you\u2019re using srcset, intrinsicsize is a bit of a misnomer \u2013 what the attribute will do then, is specify an intrinsic aspect ratio:\n\nIn the future (and behind the \u201cExperimental Web Platform Features\u201d flag right now, in Chrome 71+), asking this image for its .naturalWidth would not return 3 \u2013 it will return whatever 75vw is, given the current viewport width. And Image.naturalHeight will return that width, divided by the intrinsic aspect ratio: 3/2.\nCan\u2019t wait\nI seem to have gotten myself into the weeds a bit. Sizing on the web is complicated!\nDon\u2019t let all of these details bury the big takeaway here: sometime soon (\ud83e\udd1e 2019\u203d \ud83e\udd1e), we\u2019ll be able to toss our terrible aspect-ratio hacks into the dustbin of history, get in the habit of setting aspect-ratios in CSS and/or intrinsicsizes in HTML, and surf a less-frustrating, more-performant, less-janky web. I can\u2019t wait!", "year": "2018", "author": "Eric Portis", "author_slug": "ericportis", "published": "2018-12-21T00:00:00+00:00", "url": "https://24ways.org/2018/jank-free-image-loads/", "topic": "code"} {"rowid": 260, "title": "The Art of Mathematics: A Mandala Maker Tutorial", "contents": "In front-end development, there\u2019s often a great deal of focus on tools that aim to make our work more efficient. But what if you\u2019re new to web development? When you\u2019re just starting out, the amount of new material can be overwhelming, particularly if you don\u2019t have a solid background in Computer Science. 
But the truth is, once you\u2019ve learned a little bit of JavaScript, you can already make some pretty impressive things.\nA couple of years back, when I was learning to code, I started working on a side project. I wanted to make something colorful and fun to share with my friends. This is what my app looks like these days:\nMandala Maker user interface\nThe coolest part about it is the fact that it\u2019s a tool: anyone can use it to create something original and brand new. \nIn this tutorial, we\u2019ll build a smaller version of this app \u2013 a symmetrical drawing tool in ES5, JavaScript and HTML5. The tutorial app will have eight reflections, a color picker and a Clear button. Once we\u2019re done, you\u2019re on your own and can tweak it as you please. Be creative!\nPreparations: a blank canvas\nThe first thing you\u2019ll need for this project is a designated drawing space. We\u2019ll use the HTML5 canvas element and give it a width and a height of 600px (you can set the dimensions to anything else if you like).\nFiles\nCreate 3 files: index.html, styles.css, main.js. Don\u2019t forget to include your JS and CSS files in your HTML. \n\n\n\n \n \n \n\n\n \n

<canvas width="600" height="600">
  Your browser doesn't support canvas.
</canvas>
\n\n\nI\u2019ll ask you to update your HTML file at a later point, but the CSS file we\u2019ll start with will stay the same throughout the project. This is the full CSS we are going to use:\nbody {\n background-color: #ccc;\n text-align: center;\n}\n\ncanvas {\n touch-action: none;\n background-color: #fff;\n}\n\nbutton {\n font-size: 110%;\n}\nNext steps\nWe are done with our preparations and ready to move on to the actual tutorial, which is made up of 4 parts:\n\nBuilding a simple drawing app with one line and one color \nAdding a Clear button and a color picker\nAdding more functionality: 2 line drawing (add the first reflection)\nAdding more functionality: 8 line drawing (add 6 more reflections!)\n\nInteractive demos\nThis tutorial will be accompanied by four CodePens, one at the end of each section. In my own app I originally used mouse events, and only added touch events when I realized mobile device support was (A) possible, and (B) going to make my app way more accessible. For the sake of code simplicity, I decided that in this tutorial app I will only use one event type, so I picked a third option: pointer events. These are supported by some desktop browsers and some mobile browsers. An up-to-date version of Chrome is probably your best bet.\nPart 1: A simple drawing app\nLet\u2019s get started with our main.js file. Our basic drawing app will be made up of 6 functions: init, drawLine, stopDrawing, recordPointerLocation, handlePointerMove, handlePointerDown. It also has nine variables:\nvar canvas, context, w, h,\n prevX = 0, currX = 0, prevY = 0, currY = 0,\n draw = false;\nThe variables canvas and context let us manipulate the canvas. w is the canvas width and h is the canvas height. The four coordinates are used for tracking the current and previous location of the pointer. A short line is drawn between (prevX, prevY) and (currX, currY) repeatedly many times while we move the pointer upon the canvas. For your drawing to appear, three conditions must be met: the pointer (be it a finger, a trackpad or a mouse) must be down, it must be moving and the movement has to be on the canvas. If these three conditions are met, the boolean draw is set to true. \n1. init\nResponsible for canvas set up, this listens to pointer events and the location of their coordinates and sets everything in motion by calling other functions, which in turn handle touch and movement events. \nfunction init() {\n canvas = document.querySelector(\"canvas\");\n context = canvas.getContext(\"2d\");\n w = canvas.width;\n h = canvas.height;\n\n canvas.onpointermove = handlePointerMove;\n canvas.onpointerdown = handlePointerDown;\n canvas.onpointerup = stopDrawing;\n canvas.onpointerout = stopDrawing;\n}\n2. drawLine\nThis is called to action by handlePointerMove() and draws the pointer path. It only runs if draw = true. It uses canvas methods you can read about in the canvas API documentation. You can also learn to use the canvas element in this tutorial.\nlineWidth and linecap set the properties of our paint brush, or digital pen, but pay attention to beginPath and closePath. Between those two is where the magic happens: moveTo and lineTo take canvas coordinates as arguments and draw from (a,b) to (c,d), which is to say from (prevX,prevY) to (currX,currY).\nfunction drawLine() {\n var a = prevX,\n b = prevY,\n c = currX,\n d = currY;\n\n context.lineWidth = 4;\n context.lineCap = \"round\";\n\n context.beginPath();\n context.moveTo(a, b);\n context.lineTo(c, d);\n context.stroke();\n context.closePath();\n}\n3. 
stopDrawing\nThis is used by init when the pointer is not down (onpointerup) or is out of bounds (onpointerout).\nfunction stopDrawing() {\n draw = false;\n}\n4. recordPointerLocation\nThis tracks the pointer\u2019s location and stores its coordinates. Also, you need to know that in computer graphics the origin of the coordinate space (0,0) is at the top left corner, and all elements are positioned relative to it. When we use canvas we are dealing with two coordinate spaces: the browser window and the canvas itself. This function converts between the two: it subtracts the canvas offsetLeft and offsetTop so we can later treat the canvas as the only coordinate space. If you are confused, read more about it.\nfunction recordPointerLocation(e) {\n prevX = currX;\n prevY = currY;\n currX = e.clientX - canvas.offsetLeft;\n currY = e.clientY - canvas.offsetTop;\n}\n5. handlePointerMove\nThis is set by init to run when the pointer moves. It checks if draw = true. If so, it calls recordPointerLocation to get the path and drawLine to draw it.\nfunction handlePointerMove(e) {\n if (draw) {\n recordPointerLocation(e);\n drawLine();\n }\n}\n6. handlePointerDown\nThis is set by init to run when the pointer is down (finger is on touchscreen or mouse it clicked). If it is, calls recordPointerLocation to get the path and sets draw to true. That\u2019s because we only want movement events from handlePointerMove to cause drawing if the pointer is down.\nfunction handlePointerDown(e) {\n recordPointerLocation(e);\n draw = true;\n}\nFinally, we have a working drawing app. But that\u2019s just the beginning!\nSee the Pen Mandala Maker Tutorial: Part 1 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 2: Add a Clear button and a color picker\nNow we\u2019ll update our HTML file, adding a menu div with an input of the type and class color and a button of the class clear.\n\n \n

<canvas width="600" height="600">
  Your browser doesn't support canvas.
</canvas>
<div class="menu">
  <input type="color" class="color">
  <button class="clear">Clear</button>
</div>
\n\nColor picker\nThis is our new color picker function. It targets the input element by its class and gets its value. \nfunction getColor() {\n return document.querySelector(\".color\").value;\n}\nUp until now, the app used a default color (black) for the paint brush/digital pen. If we want to change the color we need to use the canvas property strokeStyle. We\u2019ll update drawLine by adding strokeStyle to it and setting it to the input value by calling getColor.\nfunction drawLine() {\n //...code... \n context.strokeStyle = getColor();\n context.lineWidth = 4;\n context.lineCap = \"round\";\n\n //...code... \n}\nClear button\nThis is our new Clear function. It responds to a button click and displays a dialog asking the user if she really wants to delete the drawing.\nfunction clearCanvas() {\n if (confirm(\"Want to clear?\")) {\n context.clearRect(0, 0, w, h);\n }\n}\nThe method clearRect takes four arguments. The first two (0,0) mark the origin, which is actually the top left corner of the canvas. The other two (w,h) mark the full width and height of the canvas. This means the entire canvas will be erased, from the top left corner to the bottom right corner. \nIf we were to give clearRect a slightly different set of arguments, say (0,0,w/2,h), the result would be different. In this case, only the left side of the canvas would clear up.\nLet\u2019s add this event handler to init:\nfunction init() {\n //...code...\n canvas.onpointermove = handleMouseMove;\n canvas.onpointerdown = handleMouseDown;\n canvas.onpointerup = stopDrawing;\n canvas.onpointerout = stopDrawing;\n document.querySelector(\".clear\").onclick = clearCanvas;\n}\nSee the Pen Mandala Maker Tutorial: Part 2 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 3: Draw with 2 lines\nIt\u2019s time to make a line appear where no pointer has gone before. A ghost line! \nFor that we are going to need four new coordinates: a', b', c' and d' (marked in the code as a_, b_, c_ and d_). In order for us to be able to add the first reflection, first we must decide if it\u2019s going to go over the y-axis or the x-axis. Since this is an arbitrary decision, it doesn\u2019t matter which one we choose. Let\u2019s go with the x-axis. \nHere is a sketch to help you grasp the mathematics of reflecting a point across the x-axis. The coordinate space in my sketch is different from my explanation earlier about the way the coordinate space works in computer graphics (more about that in a bit!). \nNow, look at A. It shows a point drawn where the pointer hits, and B shows the additional point we want to appear: a reflection of the point across the x-axis. This is our goal.\nA sketch illustrating the mathematics of reflecting a point.\nWhat happens to the x coordinates?\nThe variables a/a' and c/c' correspond to prevX and currX respectively, so we can call them \u201cthe x coordinates\u201d. We are reflecting across x, so their values remain the same, and therefore a' = a and c' = c. \nWhat happens to the y coordinates?\nWhat about b' and d'? Those are the ones that have to change, but in what way? Thanks to the slightly misleading sketch I showed you just now (of A and B), you probably think that the y coordinates b' and d' should get the negative values of b and d respectively, but nope. This is computer graphics, remember? 
The origin is at the top left corner and not at the canvas center, and therefore we get the following values: b = h - b, d' = h - d, where h is the canvas height.\nThis is the new code for the app\u2019s variables and the two lines: the one that fills the pointer\u2019s path and the one mirroring it across the x-axis.\nfunction drawLine() {\n var a = prevX, a_ = a,\n b = prevY, b_ = h-b,\n c = currX, c_ = c,\n d = currY, d_ = h-d;\n\n //... code ...\n\n // Draw line #1, at the pointer's location\n context.moveTo(a, b);\n context.lineTo(c, d);\n\n // Draw line #2, mirroring the line #1\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n //... code ...\n}\nIn case this was too abstract for you, let\u2019s look at some actual numbers to see how this works.\nLet\u2019s say we have a tiny canvas of w = h = 10. Now let a = 3, b = 2, c = 4 and d = 3.\nSo b' = 10 - 2 = 8 and d' = 10 - 3 = 7.\nWe use the top and the left as references. For the y coordinates this means we count from the top, and 8 from the top is also 2 from the bottom. Similarly, 7 from the top is 3 from the bottom of the canvas. That\u2019s it, really. This is how the single point, and a line (not necessarily a straight one, by the way) is made up of many, many small segments that are similar to point in behavior.\nIf you are still confused, I don\u2019t blame you. \nHere is the result. Draw something and see what happens.\nSee the Pen Mandala Maker Tutorial: Part 3 by Hagar Shilo (@hagarsh) on CodePen.\n\nPart 4: Draw with 8 lines\nI have made yet another confusing sketch, with points C and D, so you understand what we\u2019re trying to do. Later on we\u2019ll look at points E, F, G and H as well. The circled point is the one we\u2019re adding at each particular step. The circled point at C has the coordinates (-3,2) and the circled point at D has the coordinates (-3,-2). Once again, keep in mind that the origin in the sketches is not the same as the origin of the canvas. \nA sketch illustrating points C and D.\nThis is the part where the math gets a bit mathier, as our drawLine function evolves further. We\u2019ll keep using the four new coordinates: a', b', c' and d', and reassign their values for each new location/line. Let\u2019s add two more lines in two new locations on the canvas. Their locations relative to the first two lines are exactly what you see in the sketch above, though the calculation required is different (because of the origin points being different).\nfunction drawLine() {\n\n //... code ... \n\n // Reassign values\n a_ = w-a; b_ = b;\n c_ = w-c; d_ = d;\n\n // Draw the 3rd line\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n // Reassign values\n a_ = w-a; b_ = h-b;\n c_ = w-c; d_ = h-d;\n\n // Draw the 4th line\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\n\n //... code ... \nWhat is happening?\nYou might be wondering why we use w and h as separate variables, even though we know they have the same value. Why complicate the code this way for no apparent reason? That\u2019s because we want the symmetry to hold for a rectangular canvas as well, and this way it will. \nAlso, you may have noticed that the values of a' and c' are not reassigned when the fourth line is created. Why write their value assignments twice? It\u2019s for readability, documentation and communication. Maintaining the quadruple structure in the code is meant to help you remember that all the while we are dealing with two y coordinates (current and previous) and two x coordinates (current and previous). 
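If it helps to see all four locations at once, here is a small side sketch (separate from the tutorial code) that reuses the tiny 10x10 canvas example from earlier and lists the endpoints of the four segments drawn for a single pointer movement:
// Side sketch: the four mirrored segments on a 10x10 canvas,
// with a = 3, b = 2, c = 4 and d = 3.
var w = 10, h = 10;
var a = 3, b = 2, c = 4, d = 3;
var segments = [
  [a, b, c, d],                  // line 1: the pointer's own path
  [a, h - b, c, h - d],          // line 2: reflected across the x-axis
  [w - a, b, w - c, d],          // line 3: reflected across the y-axis
  [w - a, h - b, w - c, h - d]   // line 4: reflected across both axes
];
console.log(segments);
// [[3, 2, 4, 3], [3, 8, 4, 7], [7, 2, 6, 3], [7, 8, 6, 7]]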
\nWhat happens to the x coordinates?\nAs you recall, our x coordinates are a (prevX) and c (currX).\nFor the third line we are adding, a' = w - a and c' = w - c, which means\u2026\nFor the fourth line, the same thing happens to our x coordinates a and c.\nWhat happens to the y coordinates?\nAs you recall, our y coordinates are b (prevY) and d (currY).\nFor the third line we are adding, b' = b and d' = d, which means the y coordinates are the ones not changing this time, making this is a reflection across the y-axis. \nFor the fourth line, b' = h - b and d' = h - d, which we\u2019ve seen before: that\u2019s a reflection across the x-axis.\nWe have four more lines, or locations, to define. Note: the part of the code that\u2019s responsible for drawing a micro-line between the newly calculated coordinates is always the same:\n context.moveTo(a_, b_);\n context.lineTo(c_, d_);\nWe can leave it out of the next code snippets and just focus on the calculations, i.e, the reassignments. \nOnce again, we need some concrete examples to see where we\u2019re going, so here\u2019s another sketch! The circled point E has the coordinates (2,3) and the circled point F has the coordinates (2,-3). The ability to draw at A but also make the drawing appear at E and F (in addition to B, C and D that we already dealt with) is the functionality we are about to add to out code.\nA sketch illustrating points E and F.\nThis is the code for E and F:\n // Reassign for 5\n a_ = w/2+h/2-b; b_ = w/2+h/2-a;\n c_ = w/2+h/2-d; d_ = w/2+h/2-c;\n\n // Reassign for 6\n a_ = w/2+h/2-b; b_ = h/2-w/2+a;\n c_ = w/2+h/2-d; d_ = h/2-w/2+c;\nTheir x coordinates are identical and their y coordinates are reversed to one another.\nThis one will be out final sketch. The circled point G has the coordinates (-2,3) and the circled point H has the coordinates (-2,-3).\nA sketch illustrating points G and H.\nThis is the code:\n // Reassign for 7\n a_ = w/2-h/2+b; b_ = w/2+h/2-a;\n c_ = w/2-h/2+d; d_ = w/2+h/2-c;\n\n // Reassign for 8\n a_ = w/2-h/2+b; b_ = h/2-w/2+a;\n c_ = w/2-h/2+d; d_ = h/2-w/2+c;\n //...code... \n}\nOnce again, the x coordinates of these two points are the same, while the y coordinates are different. And once again I won\u2019t go into the full details, since this has been a long enough journey as it is, and I think we\u2019ve covered all the important principles. But feel free to play around with the code and change it. I really recommend commenting out the code for some of the points to see what your drawing looks like without them.\nI hope you had fun learning! This is our final app:\nSee the Pen Mandala Maker Tutorial: Part 4 by Hagar Shilo (@hagarsh) on CodePen.", "year": "2018", "author": "Hagar Shilo", "author_slug": "hagarshilo", "published": "2018-12-02T00:00:00+00:00", "url": "https://24ways.org/2018/the-art-of-mathematics/", "topic": "code"} {"rowid": 242, "title": "Creating My First Chrome Extension", "contents": "Writing a Chrome Extension isn\u2019t as scary at it seems!\nNot too long ago, I used a Chrome extension called 20 Cubed. I\u2019m far-sighted, and being a software engineer makes it difficult to maintain distance vision. So I used 20 Cubed to remind myself to look away from my screen and rest my eyes. I loved its simple interface and design. I loved it so much, I often forgot to turn it off in the middle of presentations, where it would take over my entire screen. Oops.\nUnfortunately, the developer stopped updating the extension and removed it from Chrome\u2019s extension library. I was so sad. 
None of the other eye rest extensions out there matched my design aesthetic, so I decided to create my own! Want to do the same?\nFortunately, Google has some respectable documentation on how to create an extension. And remember, Chrome extensions are just HTML, CSS, and JavaScript. You can add libraries and frameworks, or you can just code the \u201cold-fashioned\u201d way. Sky\u2019s the limit!\nSetup\nBut first, some things you\u2019ll need to know about before getting started:\n\nCallbacks\nTimeouts\nChrome Dev Tools\n\nDeveloping with Chrome extension methods requires a lot of callbacks. If you\u2019ve never experienced the joy of callback hell, creating a Chrome extension will introduce you to this concept. However, things can get confusing pretty quickly. I\u2019d highly recommend brushing up on that subject before getting started.\nHyperbole and a Half\nTimeouts and Intervals are another thing you might want to brush up on. While creating this extension, I didn\u2019t consider the fact that I\u2019d be juggling three timers. And I probably would\u2019ve saved time organizing those and reading up on the Chrome extension Alarms documentation beforehand. But more on that in a bit.\nOn the note of organization, abstraction is important! You might have any combination of the following:\n\nThe Chrome extension options page\nThe popup from the Chrome Menu\nThe windows or tabs you create\nThe background scripts\n\nAnd that can get unwieldy. You might also edit the existing tabs or windows in the browser, which you\u2019ll probably want as a separate script too. Note that this tutorial only covers creating your own customized window rather than editing existing windows or tabs.\nAlright, now that you know all that up front, let\u2019s get going!\nDocumentation\nTL;DR READ THE DOCS.\nA few things to get started:\n\nRead Google\u2019s primer on browser extensions\nHave a look at their Getting started tutorial\nCheck out their overview on Chrome Extensions\n\nThis overview discusses the Chrome extension files, architecture, APIs, and communication between pages. Funnily enough, I only discovered the Overview page after creating my extension.\nThe manifest.json file gives the browser information about the extension, including general information, where to find your extension files and icons, and API permissions required. Here\u2019s what my manifest.json looked like, for example:\nhttps://github.com/jennz0r/eye-rest/blob/master/manifest.json\nBecause I\u2019m a visual learner, I found the images that describe the extension\u2019s architecture most helpful.\n\nTo clarify this diagram, the background.js file is the extension\u2019s event handler. It\u2019s constantly listening for browser events, which you\u2019ll feed to it using the Chrome Extension API. Google says that an effective background script is only loaded when it is needed and unloaded when it goes idle.\nThe Popup is the little window that appears when you click on an extension\u2019s icon in the Chrome Menu. It consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under page_action: { \"default_popup\": FILE_NAME_HERE }.\nThe Options page is exactly as it says. This displays customizable options only visible to the user when they either right-click on the Chrome menu and choose \u201cOptions\u201d under an extension. 
This also consists of markup and scripts, and you can tell the browser where to find it in the manifest.json under options_page: FILE_NAME_HERE.\nContent scripts are any scripts that will interact with any web windows or tabs that the user has open. These scripts will also interact with any tabs or windows opened by your extension.\nDebugging\nA quick note: don\u2019t forget the debugging tutorial!\nJust like any other Chrome window, every piece of an extension has an inspector and dev tools. If (read: when) you run into errors (as I did), you\u2019re likely to have several inspector windows open \u2013 one for the background script, one for the popup, one for the options, and one for the window or tab the extension is interacting with.\nFor example, I kept seeing the error \u201cThis request exceeds the MAX_WRITE_OPERATIONS_PER_HOUR quota.\u201d Well, it turns out there are limitations on how often you can sync stored information.\nAnother error I kept seeing was \u201cAlarm delay is less than minimum of 1 minutes. In released .crx, alarm \u201cALARM_NAME_HERE\u201d will fire in approximately 1 minutes\u201d. Well, it turns out there are minimum interval times for alarms.\nChrome Extension creation definitely benefits from debugging skills. Especially with callbacks and listeners, good old fashioned console.log can really help!\nMe adding a ton of `console.log`s while trying to debug my alarms.\nEye Rest Functionality\nOk, so what is the extension I created? Again, it\u2019s a way to rest your eyes every twenty minutes for twenty seconds. So, the basic functionality should look like the following:\n\nIf the extension is running AND\nIf the user has not clicked Pause in the Popup HTML AND\nIf the counter in the Popup HTML is down to 00:00 THEN\n\nOpen a new window with Timer HTML AND\nStart a 20 sec countdown in Timer HTML AND\nReset the Popup HTML counter to 20:00\n\nIf the Timer HTML is down to 0 sec THEN\n\nClose that window. Rinse. Repeat.\n\n\nSounds simple enough, but wow, these timers became convoluted! Of all the Chrome extensions I decided to create, I decided to make one that\u2019s heavily dependent on time, intervals, and having those in sync with each other. In other words, I made this unnecessarily complicated and didn\u2019t realize until I started coding.\nFor visual reference of my confusion, check out the GitHub repository for Eye Rest. (And yes, it\u2019s a pun.)\nAPI\nNow let\u2019s discuss the APIs that I used to build this extension.\nAlarms\nWhat even are alarms? I didn\u2019t know either.\nAlarms are basically Chrome\u2019s setTimeout and setInterval. They exist because, as Google says\u2026\n\nDOM-based timers, such as window.setTimeout() or window.setInterval(), are not honored in non-persistent background scripts if they trigger when the event page is dormant.\n\nFor more information, check out this background migration doc.\nOne interesting note about alarms in Chrome extensions is that they are persistent. Garbage collection with Chrome extension alarms seems unreliable at best. I didn\u2019t have much luck using the clearAll method to remove alarms I created on previous extension loads or installs. A workaround (read: hack) is to specify a unique alarm name every time your extension is loaded and clearing any other alarms without that unique name.\nBackground Scripts\nFor Eye Rest, I have two background scripts. 
One is my actual initializer and event listener, and the other is a helpers file.\nI wanted to share a couple of functions between my Background and Popup scripts. Specifically, the clearAndCreateAlarm function. I wanted my background script to clear any existing alarms, create a new alarm, and add remaining time until the next alarm to local storage immediately upon extension load. To make the function available to the Background script, I added helpers.js as the first item under background > scripts in my manifest.json.\nI also wanted my Popup script to do the same things when the user has unpaused the extension\u2019s functionality. To make the function available to the Popup script, I just include the helpers script in the Popup HTML file.\nOther APIs\nWindows\nI use the Windows API to create the Timer window when the time of my alarm is up. The window creation is initiated by my Background script.\nOne day, while coding late into the evening, I found it very confusing that the window.create method included url as an option. I assumed it was meant to be an external web address. A friend pondered that there must be an option to specify the window\u2019s HTML. Until then, it hadn\u2019t dawned on me that the url could be relative. Duh. I was tired!\nI pass the timer.html as the url option, as well as type, size, position, and other visual options.\nStorage\nMaybe you want to pass information back and forth between the Background script and your Popup script? You can do that using Chrome or local storage. One benefit of using local storage over Chrome\u2019s storage is avoiding quotas and write operation maximums.\nI wanted to pass the time at which the latest alarm was set, the time to the next alarm, and whether or not the timer is paused between the Background and Popup scripts. Because the countdown should change every second, it\u2019s quite complicated and requires lots of writes. That\u2019s why I went with the user\u2019s local storage. You can see me getting and setting those variables in my Background, Helper, and Popup scripts. Just search for date, nextAlarmTime, and isPaused.\nDeclarative Content\nThe Declarative Content API allows you to show your extension\u2019s page action based on several type of matches, without needing to take a host permission or inject a content script. So you\u2019ll need this to get your extension to work in the browser!\nYou can see me set this in my Background script. Because I want my extension\u2019s popup to appear on every page one is browsing, I leave the page matchers empty.\nThere are many more APIs for Chrome apps and extensions, so make sure to surf around and see what features are available!\nThe Extension\nHere\u2019s what my original Popup looked like before I added styles.\nAnd here\u2019s what it looks like with new styles. I guess I\u2019m going for a Nickelodeon feel.\nAnd here\u2019s the Timer window and Popup together! \nPublishing\nPublishing is a cinch. You just zip up your files, create a new or use an existing Google Developer account, upload the files, add some details, and pay a one time $5 fee. That\u2019s all! Then your extension will be available on the Chrome extension store! Neato :D\nMy extension is now available for you to install.\nConclusion\nI thought creating a time based Chrome Extension would be quick and easy. I was wrong. It was more complicated than I thought! 
But it\u2019s definitely achievable with some time, persistence, and good ole Google searches.\nEventually, I\u2019d like to add more interactive elements to Eye Rest. For example, hitting the YouTube API to grab a silly or cute video as a reward for looking away during the 20 sec countdown and not closing the timer window. This harkens back to one of my first web projects, Toothtimer, from 2012. Or maybe a way to change the background colors of the Timer and Popup!\nEither way, with Eye Rest\u2019s framework built out, I\u2019m feeling fearless about future feature adds! Building this Chrome extension took some broken nails, achy shoulders, and tired eyes, but now Eye Rest can tell me to give my eyes a break every 20 minutes.", "year": "2018", "author": "Jennifer Wong", "author_slug": "jenniferwong", "published": "2018-12-05T00:00:00+00:00", "url": "https://24ways.org/2018/my-first-chrome-extension/", "topic": "code"} {"rowid": 258, "title": "Mistletoe Offline", "contents": "It\u2019s that time of year, when we gather together as families to celebrate the life of the greatest person in history. This man walked the Earth long before us, but he left behind words of wisdom. Those words can guide us every single day, but they are at the forefront of our minds during this special season.\nI am, of course, talking about Murphy, and the golden rule he gave unto us:\n\nAnything that can go wrong will go wrong.\n\nSo true! I mean, that\u2019s why we make sure we\u2019ve got nice 404 pages. It\u2019s not that we want people to ever get served a File Not Found message, but we acknowledge that, despite our best efforts, it\u2019s bound to happen sometime. Murphy\u2019s Law, innit?\nBut there are some Murphyesque situations where even your lovingly crafted 404 page won\u2019t help. What if your web server is down? What if someone is trying to reach your site but they lose their internet connection? These are all things than can\u2014and will\u2014go wrong.\nI guess there\u2019s nothing we can do about those particular situations, right?\nWrong!\nA service worker is a Murphy-battling technology that you can inject into a visitor\u2019s device from your website. Once it\u2019s installed, it can intercept any requests made to your domain. If anything goes wrong with a request\u2014as is inevitable\u2014you can provide instructions for the browser. That\u2019s your opportunity to turn those server outage frowns upside down. Take those network connection lemons and make network connection lemonade.\nIf you\u2019ve got a custom 404 page, why not make a custom offline page too?\nGet your server in order\nStep one is to make \u2026actually, wait. There\u2019s a step before that. Step zero. Get your site running on HTTPS, if it isn\u2019t already. You won\u2019t be able to use a service worker unless everything\u2019s being served over HTTPS, which makes sense when you consider the awesome power that a service worker wields.\nIf you\u2019re developing locally, service workers will work fine for localhost, even without HTTPS. But for a live site, HTTPS is a must.\nMake an offline page\nAlright, assuming your site is being served over HTTPS, then step one is to create an offline page. Make it as serious or as quirky as is appropriate for your particular brand. If the website is for a restaurant, maybe you could put the telephone number and address of the restaurant on the custom offline page (unsolicited advice: you could also put this on the home page, you know). 
Here\u2019s an example of the custom offline page for this year\u2019s Ampersand conference.\nWhen you\u2019re done, publish the offline page at suitably imaginative URL, like, say /offline.html.\nPre-cache your offline page\nNow create a JavaScript file called serviceworker.js. This is the script that the browser will look to when certain events are triggered. The first event to handle is what to do when the service worker is installed on the user\u2019s device. When that happens, an event called install is fired. You can listen out for this event using addEventListener:\naddEventListener('install', installEvent => {\n// put your instructions here.\n}); // end addEventListener\nIn this case, you want to make sure that your lovingly crafted custom offline page is put into a nice safe cache. You can use the Cache API to do this. You get to create as many caches as you like, and you can call them whatever you want. Here, I\u2019m going to call the cache Johnny just so I can refer to it as JohnnyCache in the code:\naddEventListener('install', installEvent => {\n installEvent.waitUntil(\n caches.open('Johnny')\n .then( JohnnyCache => {\n JohnnyCache.addAll([\n '/offline.html'\n ]); // end addAll\n }) // end open.then\n ); // end waitUntil\n}); // end addEventListener\nI\u2019m betting that your lovely offline page is linking to a CSS file, maybe an image or two, and perhaps some JavaScript. You can cache all of those at this point:\naddEventListener('install', installEvent => {\n installEvent.waitUntil(\n caches.open('Johnny')\n .then( JohnnyCache => {\n JohnnyCache.addAll([\n '/offline.html',\n '/path/to/stylesheet.css',\n '/path/to/javascript.js',\n '/path/to/image.jpg'\n ]); // end addAll\n }) // end open.then\n ); // end waitUntil\n}); // end addEventListener\nMake sure that the URLs are correct. If just one of the URLs in the list fails to resolve, none of the items in the list will be cached.\nIntercept requests\nThe next event you want to listen for is the fetch event. This is probably the most powerful\u2014and, let\u2019s be honest, the creepiest\u2014feature of a service worker. Once it has been installed, the service worker lurks on the user\u2019s device, waiting for any requests made to your site. Every time the user requests a web page from your site, a fetch event will fire. Every time that page requests a style sheet or an image, a fetch event will fire. 
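If you want to witness just how chatty that is, a quick throwaway sketch like this (not part of the finished script) will log every intercepted request to the service worker's console:
addEventListener('fetch', fetchEvent => {
  console.log('Intercepted a request for', fetchEvent.request.url);
}); // end addEventListener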
You can provide instructions for what should happen each time:\naddEventListener('fetch', fetchEvent => {\n// What happens next is up to you!\n}); // end addEventListener\nLet\u2019s write a fairly conservative script with the following logic:\n\nWhenever a file is requested,\nFirst, try to fetch it from the network,\nBut if that doesn\u2019t work, try to find it in the cache,\nBut if that doesn\u2019t work, and it\u2019s a request for a web page, show the custom offline page instead.\n\nHere\u2019s how that translates into JavaScript:\n// Whenever a file is requested\naddEventListener('fetch', fetchEvent => {\n const request = fetchEvent.request;\n fetchEvent.respondWith(\n // First, try to fetch it from the network\n fetch(request)\n .then( responseFromFetch => {\n return responseFromFetch;\n }) // end fetch.then\n // But if that doesn't work\n .catch( fetchError => {\n // try to find it in the cache\n caches.match(request)\n .then( responseFromCache => {\n if (responseFromCache) {\n return responseFromCache;\n // But if that doesn't work\n } else {\n // and it's a request for a web page\n if (request.headers.get('Accept').includes('text/html')) {\n // show the custom offline page instead\n return caches.match('/offline.html');\n } // end if\n } // end if/else\n }) // end match.then\n }) // end fetch.catch\n ); // end respondWith\n}); // end addEventListener\nI am fully aware that I may have done some owl-drawing there. If you need a more detailed breakdown of what\u2019s happening at each point in the code, I\u2019ve written a whole book for you. It\u2019s the perfect present for Murphymas.\nHook up your service worker script\nYou can publish your service worker script at /serviceworker.js but you still need to tell the browser where to look for it. You can do that using JavaScript. Put this in an existing JavaScript file that you\u2019re calling in to every page on your site, or add this in a script element at the end of every page\u2019s HTML:\nif (navigator.serviceWorker) {\n navigator.serviceWorker.register('/serviceworker.js');\n}\nThat tells the browser to start installing the service worker, but not without first checking that the browser understands what a service worker is. When it comes to JavaScript, feature detection is your friend.\nYou might already have some JavaScript files in a folder like /assets/js/ and you might be tempted to put your service worker script in there too. Don\u2019t do that. If you do, the service worker will only be able to handle requests made to for files within /assets/js/. By putting the service worker script in the root directory, you\u2019re making sure that every request can be intercepted.\nGo further!\nNicely done! You\u2019ve made sure that if\u2014no, when\u2014a visitor can\u2019t reach your website, they\u2019ll get your hand-tailored offline page. You have temporarily defeated the forces of chaos! You have briefly fought the tide of entropy! You have made a small but ultimately futile gesture against the inevitable heat-death of the universe!\nThis is just the beginning. You can do more with service workers.\nWhat if, every time you fetched a page from the network, you stored a copy of that page in a cache? Then if that person tries to reach that page later, but they\u2019re offline, you could show them the cached version.\nOr, what if instead of reaching out the network first, you checked to see if a file is in the cache first? 
You could serve up that cached version\u2014which would be blazingly fast\u2014and still fetch a fresh version from the network in the background to pop in the cache for next time. That might be a good strategy for images.\nSo many options! The hard part isn\u2019t writing the code, it\u2019s figuring out the steps you want to take. Once you\u2019ve got those steps written out, then it\u2019s a matter of translating them into JavaScript.\nInevitably there will be some obstacles along the way\u2014usually it\u2019s a misplaced curly brace or a missing parenthesis. Don\u2019t be too hard on yourself if your code doesn\u2019t work at first. That\u2019s just Murphy\u2019s Law in action.", "year": "2018", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2018-12-04T00:00:00+00:00", "url": "https://24ways.org/2018/mistletoe-offline/", "topic": "code"} {"rowid": 263, "title": "Securing Your Site like It\u2019s 1999", "contents": "Running a website in the early years of the web was a scary business. The web was an evolving medium, and people were finding new uses for it almost every day. From book stores to online auctions, the web was an expanding universe of new possibilities.\nAs the web evolved, so too did the knowledge of its inherent security vulnerabilities. Clever tricks that were played on one site could be copied on literally hundreds of other sites. It was a normal sight to log in to a website to find nothing working because someone had breached its defences and deleted its database. Lessons in web security in those days were hard-earned.\nWhat follows are examples of critical mistakes that brought down several early websites, and how you can help protect yourself and your team from the same fate.\nBad input validation: Trusting anything the user sends you\nOur story begins in the most unlikely place: Animal Crossing. Animal Crossing was a 2001 video game set in a quaint town, filled with happy-go-lucky inhabitants that co-exist peacefully. Like most video games, Animal Crossing was the subject of many fan communities on the early web.\nOne such unofficial web forum was dedicated to players discussing their adventures in Animal Crossing. Players could trade secrets, ask for help, and share pictures of their virtual homes. This might sound like a model community to you, but you would be wrong.\nOne day, a player discovered a hidden field in the forum\u2019s user profile form. Normally, this page allows users to change their name, their password, or their profile photo. This person discovered that the hidden field contained their unique user ID, which identifies them when the forum\u2019s backend saves profile changes to its database. They discovered that by modifying the form to change the user ID, they could make changes to any other player\u2019s profile.\nNeedless to say, this idyllic online community descended into chaos. Users changed each other\u2019s passwords, deleted each other\u2019s messages, and attacked each-other under the cover of complete anonymity. What happened?\nThere aren\u2019t any official rules for developing software on the web. But if there were, my golden rule would be:\nNever trust user input. Ever.\nAlways ask yourself how users will send you data that isn\u2019t what it seems to be. If the nicest community of gamers playing the happiest game on earth can turn on each other, nowhere on the web is safe.\nMake sure you validate user input to make sure it\u2019s of the correct type (e.g. 
string, number, JSON string) and that it\u2019s the length that you were expecting. Don\u2019t forget that user input doesn\u2019t become safe once it is stored in your database; any data that originates from outside your network can still be dangerous and must be escaped before it is inserted into HTML.\nMake sure to check a user\u2019s actions against what they are allowed to do. Create a clear access control policy that defines what actions a user may take, and to whose data they are allowed access to. For example, a newly-registered user should not be allowed to change the user profile of a web forum\u2019s owner.\nFinally, never rely on client-side validation. Validating user input in the browser is a convenience to the user, not a security measure. Always assume the user has full control over any data sent from the browser and make sure you validate any data sent to your backend from the outside world.\nSQL injection: Allowing the user to run their own database queries\nA long time ago, my favourite website was a web forum dedicated to the Final Fantasy video game series. Like the users of the Animal Crossing forum, I\u2019d while away many hours arguing with other people on the internet about my favourite characters, my favourite stories, and the greatest controversies of the day.\nOne day, I noticed people were acting strangely. Users were being uncharacteristically nasty and posting in private areas of the forum they wouldn\u2019t normally have access to. Then messages started disappearing, and user accounts for well-respected people were banned.\nIt turns out someone had discovered a way of logging in to any other user account, using a secret password that allowed them to do literally anything they wanted. What was this password that granted untold power to those who wielded it?\n' OR '1'='1\nSQL is a computer language that is used to query databases. When you fill out a login form, just like the one above, your username and your password are usually inserted into an SQL query like this:\n\nSELECT COUNT(*)\nFROM USERS\nWHERE USERNAME='Alice'\nAND PASSWORD='hunter2'\nThis query selects users from the database that match the username Alice and the password hunter2. If there is at least one user matching record, the user will be granted access. Let\u2019s see what happens when we use our magic password instead!\n\nSELECT COUNT(*)\nFROM USERS\nWHERE USERNAME='Admin'\nAND PASSWORD='' OR '1'='1'\nDoes the password look like part of the query to you? That\u2019s because it is! This password is a deliberate attempt to inject our own SQL into the query, hence the term SQL injection. The query is now looking for users matching the username Admin, with a password that is blank, or 1=1. In an SQL query, 1=1 is always true, which makes this query select every single record in the database. As long as the forum software is checking for at least one matching user, it will grant the person logging in access. This password will work for any user registered on the forum!\nSo how can you protect yourself from SQL injection?\nNever build SQL queries by concatenating strings. Instead, use parameterised query tools. PHP offers prepared statements, and Node.JS has the knex package. Alternatively, you can use an ORM tool, such as Propel or sequelize.\nExpert help in the form of language features or software tools is a key ally for securing your code. Get all the help you can!\nCross site request forgery: Getting other users to do your dirty work for you\nDo you remember Netflix? 
Not the Netflix we have now, the Netflix that used to rent you DVDs by mailing them to you. My next story is about how someone managed to convince Netflix users to send him their DVDs - free of charge.\nHave you ever clicked on a hyperlink, only to find something that you weren\u2019t expecting? If you were lucky, you might have just gotten Rickrolled. If you were unlucky\u2026\nLet\u2019s just say there are older and fouler things than Rick Astley in the dark places of the web.\nWhat if you could convince people to visit a page you controlled? And what if those people were Netflix users, and they were logged in? In 2006, Dave Ferguson did just that. He created a harmless-looking page with an image on it:\n\nDid you notice the source URL of the image? It\u2019s deliberately crafted to add a particular DVD to your queue. Sprinkle in a few more requests to change the user\u2019s name and shipping address, and you could ship yourself DVDs completely free of charge!\nThis attack is possible when websites unconditionally trust a user\u2019s session cookies without checking where HTTP requests come from.\nThe first check you can make is to verify that a request\u2019s origin and referer headers match the location of the website. These headers can\u2019t be programmatically set.\nAnother check you can use is to add CSRF tokens to your web forms, to verify requests have come from an actual form on your website. Tokens are long, unpredictable, unique strings that are generated by your server and inserted into web forms. When users complete a form, the form data sent to the server can be checked for a recently generated token. This is an effective deterrent of CSRF attacks because CSRF tokens aren\u2019t stored in cookies.\nYou can also set SameSite=Strict when setting cookies with the Set-Cookie HTTP header. This communicates to browsers that cookies are not to be sent with cross-site requests. This is a relatively new feature, though it is well supported in evergreen browsers.\nCross site scripting: Someone else\u2019s code running on your website\nIn 2005, Samy Kamkar became famous for having lots of friends. Lots and lots of friends.\nSamy enjoyed using MySpace which, at the time, was the world\u2019s largest social network. Social networks at that time were more limited than today. For instance, MySpace let you upload photos to your photo gallery, but capped the limit at twelve. Twelve photos. At least you didn\u2019t have to wade through photos of avocado toast back then\u2026\nSamy discovered that MySpace also locked down the kinds of content that you could post on your MySpace page. He discovered he could inject and
tags into his headline, but