{"rowid": 39, "title": "Meet for Learning", "contents": "\u201cI\u2019ve never worked in a place like this,\u201d said one of my direct reports during our daily stand-up meeting.\n\nAnd with that statement, my mind raced to the most important thing about lawyering that I\u2019ve learned from decades of watching lawyers lawyer on TV: don\u2019t ask a question you don\u2019t know the answer to.\n\nBut I couldn\u2019t stop myself. I wanted to learn more. The thought developed in my mind. The words formed in my mouth. And the vocalization occurred: \u201cA place like this?\u201d\n\n\u201cI\u2019ve never worked where people are so honest and transparent about things.\u201d\n\nDesigning a learning-centered culture\n\nBefore we started Center Centre, Jared Spool and I discussed both the larger goals and the smaller details of this new UX design school. We talked about things like user experience, curriculum, and structure.\n\nWe discussed the pattern we saw in our research. Hiring managers told us time and again that great designers have excellent technical and interpersonal skills. But, more importantly, the best designers are lifelong learners\u2014they are willing and able to learn how to do new things. Learning this led us to ask a critical question: how would we intentionally design a learning-centered experience?\n\nTo craft the experience we were aiming for, we knew we had to create a learning-centered culture for our students and our employees. We knew that our staff would need to model the behaviors our students needed to learn. We knew the best way to shape the culture was to work with our direct reports\u2014our directs\u2014to develop the behaviors we wanted them to exemplify.\n\nTo craft the experience we were aiming for, we knew we had to create a learning-centered culture for our students and our employees. We knew that our staff would need to model the behaviors our students needed to learn.\n\nBuilding a learning team\n\nOur learning-centered culture starts with our staff. We believe in transparency. Transparency builds trust. Effective organizations have effective teams who trust each other as individuals.\n\nOne huge way we build that trust and provide opportunities for transparency is in our meetings. (I know, I know\u2014meetings! Yuck!) But seriously, running and participating in effective meetings is a great opportunity to build a learning-centered culture.\n\nMeetings\u2014when done well\u2014allow individuals time to come together, to share, and to listen. These behaviors, executed on a consistent and regular basis, build honest and trusting relationships.\n\nAn effective meeting is one that achieves the desired outcomes of that meeting. While different meetings aim for different results, at Center Centre all meetings have a secondary goal: meet for learning.\n\nA framework for learning-centered meetings\n\nWe\u2019ve developed a framework for our meetings. We use it for all our meetings, which means attendees know what to expect. It also saves us from reinventing the wheel in each meeting.\n\nThese basic steps help our meetings focus on the valuable face-to-face interaction we\u2019re having, and help us truly begin to learn from one another.\n\n An agenda for a staff meeting.\n\nUse effective meeting basics\n\n\n\tPrepare for the meeting before the meeting.\n\tIf you\u2019re running the meeting, prepare a typed agenda and share it before the meeting. Agendas have start times for each item.\n\tStart the meeting on time. 
Don\u2019t wait for stragglers.\n\tDefine ground rules. Get input from attendees. Recurring meetings don\u2019t have to do this every time.\n\tKeep to the meeting agenda. Put off-topic questions and ideas in a parking lot, a visual document that everyone can see, so you can address the questions and ideas later.\n\tFinish on time. And if you\u2019ve reached the meeting\u2019s goals, finish early.\n\n\n Parking lots where ideas on sticky notes can be posted for later consideration.\n\nFocus to learn\n\n\n\tHave tech-free meetings: no laptops, no phones, no things with notifications.\n\tBring a notebook and a pen.\n\tTake notes by hand. You\u2019re not taking minutes, you\u2019re writing to learn.\n\n\nCome with a learning mindset\n\n\n\tAsk: what are our goals for this meeting? (Hopefully answered by the meeting agenda.)\n\tAsk: what can I learn overall?\n\tAsk: what can I learn from each of my colleagues?\n\tAsk: what can I share that will help the team learn overall?\n\tAsk: what can I share that will help each of my colleagues learn?\n\n\nInvesting in regularly scheduled learning-centered meetings\n\nAt Center Centre, we have two types of recurring all-staff meetings: daily stand-ups and weekly staff meetings. (We are a small organization, so it makes sense to meet as an entire group.)\n\nYes, that means we spend thirty minutes each day in stand-up, for a total of two and a half hours of stand-up meeting time each week. And, yes, we also have a weekly ninety-minute sit-down staff meeting on top of that. This investment in time is an investment in learning.\n\nWe use these meetings to build our transparency, and, therefore, our trust. The regularity of these meetings helps us maintain ongoing, open sharing about our responsibilities, our successes, and our learning.\n\nFor instance, we answer five questions in our stand-up:\n\n\n\tWhat did I get done since the last stand-up (I reported at)?\n\tWhat is my goal to accomplish before the next stand-up?\n\tWhat\u2019s preventing me from getting these things done, if anything?\n\tWhat\u2019s the highest risk or most unknown thing right now about what I\u2019m trying to get done?\n\tWhat is the most important thing I learned since the last time we met and how will what I learned change the way I approach things in the future?\n\n\nEach person writes out their answers to these questions before the meeting. Each person brings their answers printed on paper to the meeting. And each person brings a pen to jot down notes.\n\n Notes compiled for a stand-up meeting.\n\nDuring the stand-up, each person shares their answers to the five questions. To sustain a learning-centered culture, the fifth question is the most important question to answer. It allows individual reflection focused on learning. Sometimes this isn\u2019t an easy question to answer. It makes us stretch. It makes us think.\n\nBy sharing our individual answers to the fifth question, we open ourselves up to the group. When we honestly share what we\u2019ve learned, we openly admit that we didn\u2019t know something. Sharing like this would be scary (and even risky) if we didn\u2019t have a learning-centered culture.\n\nWe often share the actual process of how we learned something. 
By listening, each of us is invited to learn more about the topic at hand, consider what more there is to learn about that topic, and even gain insights into other methods of learning\u2014which can be applied to other topics.\n\nSharing the answers to the fifth question also allows opportunities for further conversations. We often take what someone has individually learned and find ways to apply it for our entire team in support of our organization. We are, after all, learning together.\n\nBuilding individual learners\n\nWe strive to grow together as a team at Center Centre, but we don\u2019t lose sight of the importance of the individuals who form our team. As individuals, we bring our goals, dreams, abilities, and prior knowledge to the team.\n\nTo build learning teams, we must build individual learners. A team made up of lifelong learners, who share their learning and learn from each other, is a team that will continually produce better results.\n\nAs a manager, I need to meet each direct where they are with their current abilities and knowledge. Then, I can help them take their skills and knowledge base to the next levels. This process requires each individual direct to engage in professional development.\n\nWe believe effective managers help their directs engage in behaviors that support growth and development. Effective managers encourage and support learning.\n\n\n\nOur weekly one-on-ones\n\nOne way we encourage learning is through weekly one-on-ones. Each of my directs meets with me, individually, for thirty minutes each week. The meeting is their meeting. It is not my meeting.\n\nMy direct sets the agenda. They talk about what they want to talk about. They can talk about work. They can talk about things outside of work. They can talk about their health, their kids, and even their cat. Whatever is important to them is important to me. I listen. I take notes.\n\nAlthough the direct sets the specific agenda, the meeting has three main parts. Approximately ten minutes for them (the direct), ten minutes for me (the manager), and ten minutes for us to talk about their future within\u2014and beyond\u2014our organization.\n\nCoaching for future performance\n\nThe final third of our one-on-one is when I coach my directs. Coaching looks to the direct\u2019s future performance. It focuses on developing the direct\u2019s skills.\n\nCoaching isn\u2019t hard. It doesn\u2019t take much time. For me, it usually takes less than five minutes a week during a one-on-one.\n\nThe first time I coach one of my directs, I ask them to brainstorm about the skills they want to improve. They usually already have an idea about this. It\u2019s often something they\u2019ve wanted to work on for some time, but didn\u2019t think they had the time or the knowhow to improve.\n\nIf a direct doesn\u2019t know what they want to improve, we discuss their job responsibilities\u2014specifically the aspects of the job that concern them.\n\nCoaching provides an opportunity for me to ask, \u201cIn your job, what are the required skills that you feel like you don\u2019t have (or know well enough, or perform effectively, or use with ease)?\u201d\n\nSometimes I have to remind a direct that it\u2019s okay not to know how to do something (even if it\u2019s a required part of their job). After all, our organization is a learning organization. 
In a learning organization, no one knows everything but everyone is willing to learn anything.\n\nAfter we review the job responsibilities together, I ask my direct what skill they\u2019d like to work to improve. Whatever they choose, we focus on that skill for coaching\u2014I\u2019ve found my directs work better when they\u2019re internally motivated.\n\nSometimes the first time I talk with a direct about coaching, they get a bit anxious. If this happens, I share a personal story about my professional learning journey. I say something like:\n\n\n\tI didn\u2019t know how to make a school before we started to make Center Centre.\n\n\tI didn\u2019t know how to manage an entire team of people\u2014day in and day out\u2014until I started managing a team of people every day.\n\n\tWhen I realized that I was the boss\u2014and that the success of the school would hinge, at least in part, on my skills as a manager\u2014I was a bit terrified. I was missing an important skill set that I needed to know (and I needed to know well).\n\n\tWhen I first understood this, I felt bad\u2014like I should have already known how to be a great manager. But then I realized, I\u2019d never faced this situation. I\u2019d never needed to know how to use this skill set in this way.\n\n\tI worked through my anxiety about feeling inadequate. I decided I\u2019d better learn how to be an effective manager because the school needed me to be one. You needed me to be one.\n\n\tEvery day, I work to improve my management skills. You\u2019ve probably noticed that some days I\u2019m better at it than others. I try not to beat myself up about this, although it\u2019s hard\u2014I\u2019d like to be perfect at it. But I\u2019m not.\n\n\tI know that if I make a conscious, daily effort to learn how to be a better manager, I\u2019ll continue to improve. So that\u2019s what I do.\n\n\tEvery day I learn. I learn by doing. I learn how to be better than I was the day before. That\u2019s what I ask of you.\n\n\nOnce we determine the skill the direct wants to learn, we figure out how they can go about learning it. I ask: \u201cHow could you learn this skill?\u201d\n\nWe brainstorm for two or three minutes about this. We write down every idea that comes to mind, and we write it so both of us can easily see the options (both whiteboards and sticky notes on the wall work well for this exercise).\n\n\n\tRead a book. Research online. Watch a virtual seminar. Listen to a podcast. Talk to a mentor. Reach out to an expert. Attend a conference. Shadow someone else while they do the skill. Join a professional organization.\n\n\nThe goal is to get the direct on a path of self-development. I\u2019m coaching their development, but I\u2019m not the main way my direct will learn this new skill.\n\nI ask my direct which path seems like the best place to start. I let them choose whatever option they want (as long as it works with our budget). They are more likely to follow through if they are in control of this process.\n\nNext, we work to break down the selected path into tasks. We only plan one week\u2019s worth of tasks. The tasks are small, and the deadlines are short. My direct reports when each task is completed.\n\nAt our next one-on-one, I ask my direct about their experience learning this new skill.\n\nRinse. Repeat.\n\nThat\u2019s it. I spend five minutes a week talking with each direct about their individual learning. 
They develop their professional skills, and together we\u2019re creating a learning-centered culture.\n\nAsking questions I don\u2019t know the answer to\n\nWhen my direct said, \u201cI\u2019ve never worked where people are so honest and transparent about things,\u201d it led me to believe that all this is working. We are building a learning-centered culture.\n\nThis week I was reminded that creating a learning-centered culture starts not just with the staff, but with me. When I challenge myself to learn and then share what I\u2019m currently learning, my directs want to learn more about what I\u2019m learning about.\n\nFor example, I decided I needed to improve my writing skills. A few weeks ago, I realized that I was sorely out of practice and I felt like I had lost my voice. So I started to write. I put words on paper. I felt overwhelmed. I felt like I didn\u2019t know how to write anymore (at least not well or effectively).\n\nI bought some books on writing (mostly Peter Elbow\u2019s books like Writing with Power, Writing Without Teachers, and Vernacular Eloquence), and I read them. I read them all. Reading these books was part of my personal coaching. I used the same steps to coach myself as I use with my directs when I coach them.\n\nIn stand-ups, I started sharing what I accomplished (like I completed one of the books) and what I learned by doing\u2014specific things, like engaging in freewriting and an open-ended writing process.\n\nThis week, I went to lunch with one of my directs. She said, \u201cYou\u2019ve been talking about freewriting a lot. You\u2019re really excited about it. Freewriting seems like it\u2019s helping your writing process. Would you tell me more about it?\u201d\n\nSo I shared the details with her. I shared the reasons why I think freewriting is helping. I\u2019m not focused on perfection. Instead, each day I\u2019m focused on spending ten, uninterrupted minutes writing down whatever comes to my mind. It\u2019s opening my writing mind. It\u2019s allowing my words to flow more freely. And it\u2019s helping me feel less self-conscious about my writing.\n\nShe said, \u201cLeslie, when you say you\u2019re self-conscious about your writing, I laugh. Not because it\u2019s funny. But because when I read what you write, I think, \u2018What is there to improve?\u2019 I think you\u2019re a great writer. It\u2019s interesting to know that you think you can be a better writer. I like learning about your learning process. I think I could do freewriting. I\u2019m going to give it a try.\u201d\n\nThere\u2019s something magical about all of this. I\u2019m not even sure I can eloquently put it into words. I just know that our working environment is something very different. I\u2019ve never experienced anything quite like it. Somehow, by sharing that I don\u2019t know everything and that I\u2019m always working to learn more, I invite my directs to be really open about what they don\u2019t know. And they see it\u2019s possible always to learn and grow.\n\nI\u2019m glad I ignore all the lawyering I\u2019ve learned from watching TV. I\u2019m glad I ask the questions I don\u2019t know the answers to. And I\u2019m glad my directs do the same. When we meet for learning, we accelerate and amplify the learning process\u2014building individual learners and learning teams. 
Embracing the unknown and working toward understanding is what makes our culture a learning-centered culture.\n\nPhotos by Summer Kohlhorst.", "year": "2014", "author": "Leslie Jensen-Inman", "author_slug": "lesliejenseninman", "published": "2014-12-20T00:00:00+00:00", "url": "https://24ways.org/2014/meet-for-learning/", "topic": "process"} {"rowid": 28, "title": "Why You Should Design for Open Source", "contents": "Let\u2019s be honest. Most designers don\u2019t like working for nothing. We rally against spec work and make a stand for contracts and getting paid. That\u2019s totally what you should do as a professional designer in the industry. It\u2019s your job. It\u2019s your hard-working skill. It\u2019s your bread and butter. Get paid.\n\nHowever, I\u2019m going to make a case for why you could also consider designing for open source. First, I should mention that not all open source work is free work. Some companies hire open source contributors to work on their projects full-time, usually because that project is used by said company. There are other companies that encourage open source contribution and even offer 20%-time for these projects (where you can spend one day a week contributing to open source). These are super rad situations to be in. However, whether you\u2019re able to land a gig doing this type of work, or you\u2019ve decided to volunteer your time and energy, designing for open source can be rewarding in many other ways.\n\nPortfolio building\n\nNew designers often find themselves in a catch-22 situation: they don\u2019t have enough work experience showcased in their portfolio, which leads to them not getting much work because their portfolio is bare. These new designers often turn to unsolicited redesigns to fill their portfolio. An unsolicited redesign is a proof of concept in which a designer attempts to redesign a popular website. You can see many of these concepts on sites like Dribbble and Behance and there are even websites dedicated to showcasing these designs, such as Uninvited Designs. There\u2019s even a subreddit for them.\n\nThere are quite a few negative opinions on unsolicited redesigns, though some people see things from both sides. If you feel like doing one or two of these to fill your portfolio, that\u2019s of course up to you. But here\u2019s a better suggestion. Why not contribute design for an open source project instead?\n\nYou can easily find many projects in great need of design work, from branding to information design, documentation, and website or application design. The benefits to doing this are far better than an unsolicited redesign. You get a great portfolio piece that actually has greater potential to get used (especially if the core team is on board with it). It\u2019s a win-win situation.\n\nNot all designers are in need of portfolio filler, but there are other benefits to contributing design.\n\nGiving back to the community\n\nMy first experience with voluntary work was when I collaborated with my friend, Vineet Thapar, on a pro bono project for the W3C\u2019s Web Accessibility Initiative redesign project back in 2004. I was very excited to contribute CSS to a website that would get used by the W3C! Unfortunately, it decided to go a different direction and my work did not get used. However, it was still pretty exciting to have the opportunity, and I don\u2019t regret a moment of that work. 
I learned a lot about accessibility from this experience and it helped me land some of the jobs I\u2019ve had since.\n\nAlmost a decade later, I got super into Sass. One of the core maintainers, Chris Eppstein, lamented on Twitter one day that the Sass website and brand was in dire need of design help. That led to the creation of an open source task force, Team Sass Design, and we revived the brand and the website, which launched at SassConf in 2013.\n\nIt helped me in my current job. I showed it during my portfolio review when I interviewed for the role. Then I was able to use inspiration from a technique I\u2019d tried on the Sass website to help create the more feature-rich design system that my team at work is building. But most importantly, I soon learned that it is exhilarating to be a part of the Sass community. This is the biggest benefit of all. It feels really good to give back to the technology I love and use for getting my work done.\n\nBen Werdmuller writes about the need for design in open source. It\u2019s great to see designers contributing to open source in awesome ways. When A List Apart\u2019s website went open source, Anna Debenham contributed by helping build its pattern library. Bevan Stephens worked with FontForge on the design of its website. There are also designers who have created their own open source projects. There\u2019s Dan Cederholm\u2019s Pears, which shares common patterns in markup and style. There\u2019s also Brad Frost\u2019s Pattern Lab, which shares his famous method of atomic design and applies it to a design system. These systems and patterns have been used in real-world projects, such as RetailMeNot, so designers have contributed to the web in an even larger way simply by putting their work out there for others to use. That\u2019s kind of fun to think about.\n\nHow to get started\n\nSo are you stoked about getting into the open source community? That\u2019s great!\n\nInitially, you might get worried or uncomfortable in getting involved. That\u2019s okay. But first consider that the project is open source for a reason. Your contribution (no matter how large or small) can help in a big way.\n\nIf you find a project you\u2019re interested in helping, make sure you do your research. Sometimes project team members will be attached to their current design. Is there already a designer on the core team? Reach out to that designer first. Don\u2019t be too aggressive with why you think your design is better than theirs. Rather, offer some constructive feedback and a proposal of what would make the design better. Chances are, if the designer cares about the project, and you make a strong case, they\u2019ll be up for it.\n\nAre there contribution guidelines? It\u2019s proper etiquette to read these and follow the community\u2019s rules. You\u2019ll have a better chance of getting your work accepted, and it shows that you take the time to care and add to the overall quality of the project. Does the project lack guidelines? Consider starting a draft for that before getting started in the design.\n\nWhen contributing to open source, use your initiative to solve problems in a manageable way. Huge pull requests are hard to review and will often either get neglected or rejected. Work in small, modular, and iterative contributions.\n\nSo this is my personal take on what I\u2019ve learned from my experience and why I love open source. I\u2019d love to hear from you if you have your own experience in doing this and what you\u2019ve learned along the way as well. 
Please share in the comments!\n\nThanks Drew McLellan, Eric Suzanne, Kyle Neath for sharing their thoughts with me on this!", "year": "2014", "author": "Jina Anne", "author_slug": "jina", "published": "2014-12-19T00:00:00+00:00", "url": "https://24ways.org/2014/why-you-should-design-for-open-source/", "topic": "design"} {"rowid": 48, "title": "A Holiday Wish", "contents": "A friend and I were talking the other day about why clients spend more on toilet cleaning than design, and how the industry has changed since the mid-1990s, when we got our starts. Early in his career, my friend wrote a fine CSS book, but for years he has called himself a UX designer. And our conversation got me thinking about how I reacted to that title back when I first started hearing it.\n\n\u201cJust what this business needs,\u201d I said to myself, \u201canother phony expert.\u201d\n\nOkay, so I was wrong about UX, but my touchiness was not altogether unfounded. In the beginning, our industry was divided between freelance jack-of-all-trade punks, who designed and built and coded and hosted and Photoshopped and even wrote the copy when the client couldn\u2019t come up with any, and snot-slick dot-com mega-agencies that blew up like Alice and handed out titles like impoverished nobles in the years between the world wars. \n\nI was the former kind of designer, a guy who, having failed or just coasted along at a cluster of other careers, had suddenly, out of nowhere, blossomed into a web designer\u2014an immensely curious designer slash coder slash writer with a near-insatiable lust to shave just one more byte from every image. We had modems back then, and I dreamed in sixteen colors. My source code was as pretty as my layouts (arguably prettier) and I hoovered up facts and opinions from newsgroups and bulletin boards as fast as any loudmouth geek could throw them. It was a beautiful life.\n\nBut soon, too soon, the professional digital agencies arose, buying loft buildings downtown, jacking in at T1 speeds, charging a hundred times what I did, and communicating with their clients in person, in large artfully bedecked rooms, wearing hand-tailored Barney\u2019s suits and bringing back the big city bullshit I thought I\u2019d left behind when I quit advertising to become a web designer. \n\nJust like the big bad ad agencies of my early career, the new digital agencies stocked every meeting with a totem pole worth of ranks and titles. If the client brought five upper middle managers to the meeting, the agency did likewise. If fifteen stakeholders got to ask for a bigger logo, fifteen agency personnel showed up to take notes on the percentage of enlargement required.\n\nBut my biggest gripe was with the titles.\n\nThe bigger and more expensive the agency, the lousier it ran with newly invented titles. Nobody was a designer any more. Oh, no. Designer, apparently, wasn\u2019t good enough. Designer was not what you called someone you threw that much money at.\n\nInstead of designers, there were user interaction leads and consulting middleware integrators and bilabial experience park rangers and you name it. 
At an AIGA Miami event where I was asked to speak in the 1990s, I once watched the executive creative director of the biggest dot-com agency of the day make a presentation where he spent half his time bragging that the agency had recently shaved down the number of titles for people who basically did design stuff from forty-six to just twenty-three\u2014he presented this as though it were an Einsteinian coup\u2014and the other half of his time showing a film about the agency\u2019s newly opened branch in Oslo. The Oslo footage was shot in December. I kept wondering which designer in the audience who lived in the constant breezy balminess of Miami they hoped to entice to move to dark, wintry Norway. But I digress.\n\nShortly after I viewed this presentation, the dot-com world imploded, brought about largely by the euphoric excess of the agencies and their clients. But people still needed websites, and my practice flourished\u2014to the point where, in 1999, I made the terrifying transition from guy in his underwear working freelance out of his apartment to head of a fledgling design studio. (Note: you never stop working on that change.)\n\nI had heard about experience design in the 1990s, but assumed it was a gig for people who only knew one font. \n\nBut sometime around 2004 or 2005, among my freelance and small-studio colleagues, like a hobbit in the Shire, I began hearing whispers in the trees of a new evil stirring. The fires of Mordor were burning. Web designers were turning in their HTML editing tools and calling themselves UXers.\n\nI wasn\u2019t sure if they pronounced it \u201cuck-sir,\u201d or \u201cyou-ex-er,\u201d but I trusted their claims to authenticity about as far as I trusted the actors in a Doctor Pepper commercial when they claimed to be Peppers. I\u2019m an UXer, you\u2019re an UXer, wouldn\u2019t you like to be an UXer too? No thanks, said I. I still make things. With my hands.\n\nSuch was my thinking. I may have earned an MFA at the end of some long-past period of soul confusion, but I have working-class roots and am profoundly suspicious of, well, everything, but especially of anything that smacks of pretense. I got exporting GIFs. I didn\u2019t get how white papers and bullet points helped anybody do anything.\n\nI was wrong. And gradually I came to know I was wrong. And before other members of my tribe embraced UX, and research, and content strategy, and the other airier consultant services, I was on board. It helped that my wife of the time was a librarian from Michigan, so I\u2019d already bought into the cult of information architecture. And if I wasn\u2019t exactly the seer who first understood how borderline academic practices related to UX could become as important to our medium and industry as our craft skills, at least I was down a lot faster than Judd Apatow got with feminism. But I digress.\n\nI love the web and all the people in it. Today I understand design as a strategic practice above all. The promise of the web, to make all knowledge accessible to all people, won\u2019t be won by HTML5, WCAG 2, and responsive web design alone. \n\nWe are all designers. You may call yourself a front-end developer, but if you spend hours shaving half-seconds off an interaction, that\u2019s user experience and you, my friend, are a designer. If the client asks, \u201cCan you migrate all my old content to the new CMS?\u201d and you answer, \u201cOf course we can, but should we?\u201d, you are a designer. Even our users are designers. Think about it. 
\n\nOnce again, as in the dim dumb dot-com past, we seem to be divided by our titles. But, O, my friends, our varied titles are only differing facets of the same bright gem. Sisters, brothers, we are all designers. Love on! Love on!\n\nAnd may all your web pages, cards, clusters, clumps, asides, articles, and relational databases be bright.", "year": "2014", "author": "Jeffrey Zeldman", "author_slug": "jeffreyzeldman", "published": "2014-12-18T00:00:00+00:00", "url": "https://24ways.org/2014/a-holiday-wish/", "topic": "ux"} {"rowid": 43, "title": "Content Production Planning", "contents": "While everyone agrees that getting the content of a website right is vital to its success, unless you\u2019re lucky enough to have an experienced editor or content strategist on board, planning content production often seems to fall through the cracks. One reason is that, for most of the team, it feels like someone else\u2019s problem. Not necessarily a specific person\u2019s problem. Just someone else\u2019s. It\u2019s only when everyone starts urgently asking when the content is going to be ready, that it becomes clear the answer is, \u201cNot as soon as we\u2019d like it\u201d.\n\nThe good news is that there are some quick and simple things you can do, even if you\u2019re not the official content person on a project, to get everyone on the same content planning page. \n\nContent production planning boils down to answering three deceptively simple questions:\n\n\n\tWhat content do you need?\n\tHow much of it do you need?\n\tWho\u2019s going to make it?\n\n\nEven if it\u2019s not your job to come up with the answers, by asking these questions early enough and agreeing who is going to come up with the answers, you\u2019ll be a long way towards avoiding the last-minute content problems which so often plague projects.\n\nHow much content do we need?\n\nPeople tend to underestimate two crucial things about content: how much content they need, and how long that content takes to produce.\n\nWhen I ask someone how big their website is \u2013 how many pages it contains \u2013 I usually double or triple the answer I get. That\u2019s because almost everyone\u2019s mental model of their website greatly underestimates its true size. You can see the problem for yourself if you look at a site map. Site maps are great at representing a mental model of a website. But because they\u2019re a deliberate simplification they naturally lead us to underestimate how much content is involved in populating them.\n\nSeveral years ago I was asked to help a client create a new microsite (their word) which they wanted ready in two weeks for a conference they were attending. Here\u2019s the site map they had in mind. At first glance it looks like a pretty small website. Maybe twenty to thirty pages?\n\n\n\nThat\u2019s what the client thought.\n\nBut see those boxes which are multiple boxes stacked on top of one another, for product categories, descriptions and supporting material? They\u2019re known as page stacks, and page stacks are the content strategy equivalent of Here Be Dragons. \n\n\n\nSay we have:\n\n\n\tfive product categories\n\teach with five products\n\twhich all have two or three supporting documents\n\n\nThose are still fairly small numbers. 
But small numbers multiplied by other small numbers tend to lead to big numbers.\n\n\n\n5 categories = 5 category descriptions\n\nplus\n\n5 categories \u00d7 5 products each = 25 product descriptions\n\nplus\n\n25 products \u00d7 2.5 (average) supporting documents = 63 supporting documents\n\nequals\n\n93 pages\n\n\n\nSuddenly our twenty- or thirty-page website is running towards one hundred.\n\nThat\u2019s probably enough to get most project teams to sit up and take notice. But there\u2019s still the danger of underestimating how long it\u2019s going to take to create the content. After all, assuming the supporting documents already exist in some form, there are only about twenty-five to thirty pages of new copy to write.\n\nHow much work is it?\n\nAgain, we have the problem that small numbers when multiplied by other small numbers tend to lead to big numbers. Let\u2019s make a rough guess that it\u2019ll take four hours to write each product category and description page we need. That feels a little conservative if we\u2019re writing stuff from scratch, but assuming the person doing it already knows the products fairly well it\u2019s not unreasonable.\n\n\n\n30 pages \u00d7 4 hours each = 120 hours\n\n120 hours \u00f7 7.5 working hours a day = 16 days\n\n\n\nOuch.\n\nAt this point it\u2019s pretty clear we\u2019re not getting this site launched in two weeks. \n\nThe goal is the conversation\n\nBy breaking down the site into its content components, and putting some rough estimates on how long each might take to produce, the client instantly realised that there was no way they would be ready to launch it in two weeks. Although we still didn\u2019t know exactly when it would be ready, getting to that realisation right at the start of the project was a major win for everybody. Without it, the design agency would have bust a gut to get the design, front-end and CMS all done in double-quick time, only to find it was all for nothing as barely half the content was ready. As it was, an early discussion about content, albeit a brief one, bought everyone time to tackle the project properly, without pulling any long nights or working weekends.\n\nIf you haven\u2019t been able to get people to discuss content plans for the project, these kinds of rough estimates should give you enough evidence to get everyone to start taking it seriously. Your goal is to get everyone on the project to a place where they are ready to talk in detail about who is going to create this content, and how long it\u2019s really going to take them, and to get to those conversations before lack of content becomes a problem.\n\nBe careful though. It\u2019s best to talk in ranges and round numbers when your estimates are this uncertain. And watch those multipliers. Given small numbers multiplied by other small numbers lead to big numbers, changing just one number can greatly change the overall estimate. I like to run a couple of different scenarios to check what things look like if I\u2019ve under- or overestimated either how many pages we\u2019re going to need, or how long they\u2019re going to take to create. For example:\n\n\n\nTop end: 30 pages \u00d7 5 hours = 150 hours, or 20 days\n\nBottom end: 25 pages \u00d7 4 hours = 100 hours, or 13.3 days\n\n\n\nSo rather than say, \u201cI estimate the content will take around sixteen days to produce\u201d, I\u2019m going to say, \u201cI think the content will take about three to four weeks to produce\u201d. 
Even with qualifiers like estimate and around, sixteen days sounds too precise. Whereas three to four weeks instantly conveys that this is just a rough figure.\n\nWho\u2019s going to make it?\n\nSo, people tend to underestimate two crucial things about content: how much content they need, and how long content takes to write. At this stage, you\u2019re still in danger of the latter, because it\u2019s tempting to simply estimate how much time content takes to write (or record, if we\u2019re talking audio or visual content), and overlook all the other work that needs to goes on around it. \n\nTake 24 ways as an example. In terms of our three deceptively simple questions: what is practical articles about web design; how many is twenty-four, one for each day of Advent; and who are experts working on the web, one to write each article. \n\nBut there\u2019s another who you might not have considered. \n\nSomeone needs to select those authors in the first place, make sure they deliver their articles on time (and find someone to replace them if they don\u2019t), review drafts, copy-edit and proofread final versions, upload them to the site, promote them, keep an eye on the comments and make sure there are still presents under the tree on Christmas morning.\n\nEven if each of those tasks only takes an hour or so, it then needs multiplying by twenty-four (except the presents, obviously). And as we\u2019ve already seen, small numbers multiplied by small numbers quickly turn into much bigger numbers. Just a few hours per article, when multiplied by twenty-four articles, easily multiplies up to days or even weeks of effort.\n\nTo get a more accurate estimate of how long the different kinds of content are going to take, you need to break down the content production work into its constituent stages, starting with planning, moving on through the main work of creation, to reviewing, approvals and finally publishing. You need to think about who needs to be involved at each step, and how much time they\u2019ll need to do their bit. \n\nTaken together, these things make up your content workflow. The workflow will be different for each organisation, but might look something like this:\n\n\n\tEddie the web editor will work out the key messages and objectives for each page, and agree them with Mo the marketing director.\n\tEddie will then get Cal, the copywriter, to write the first draft.\n\tAs part of that, Cal will interview Sam the subject expert to understand the intricacies of the subject and get all the facts straight.\n\tOnce Cal\u2019s done the first draft, it\u2019ll go to Sam to check for accuracy, while Eddie reviews it for style and message.\n\tOnce Cal has incorporated their feedback it\u2019s time to get Mo to have a look at the final draft.\n\tIf Mo\u2019s happy, it\u2019ll get a final proofread, be uploaded to the CMS, and Mo will give the final sign-off and release it for publishing.\n\n\nYou can plot this on a table, with the stages of the content production process down the side, and the key roles or personnel along with top. 
Then the team can estimate how much time they think each of them needs at each stage.\n\nWith the four people from the workflow above \u2013 Mo (marketing director), Sam (subject expert), Eddie (web editor) and Cal (copywriter) \u2013 the stage-by-stage estimates might look like this:\n\n\n\tOutline: define key messages and objectives \u2013 Eddie: 30 min\n\tReview outline \u2013 Mo: 15 min\n\tFirst draft \u2013 Sam: 30 min; Cal: 3 hours\n\tReview 1st draft \u2013 Sam: 30 min; Eddie: 30 min\n\t2nd draft \u2013 Cal: 1 hour\n\tReview 2nd draft \u2013 Mo: 15 min; Sam: 15 min; Eddie: 15 min\n\tFinal amendments \u2013 Cal: 30 min\n\tProofread \u2013 Eddie: 15 min\n\tUpload \u2013 Cal: 15 min\n\tSign-off \u2013 Mo: 10 min\n\tTOTAL \u2013 Mo: 40 min; Sam: 1 hour 15 min; Eddie: 1 hour 30 min; Cal: 4 hours 45 min\n\n\nYou can then bring out your calculator again, and come up with some more big scary numbers showing how much time it\u2019s going to take for the whole team to get all the content needed not just written, but also planned, reviewed, approved and published.\n\nWith an experienced team you can run this exercise as a group workshop and get some fairly accurate estimates pretty quickly. If this is all a bit new to you, check out Gather Content\u2019s Content Production Planning for Agencies ebook for a useful guide to common content roles, ballpark estimates for how much time each one needs on a typical piece of content, and how to run a process and estimating workshop to dig into them in more detail. \n\nOn a small team, one person might play many roles, but you should still sanity-check your estimates by breaking down the process and putting a rough estimate on each stage. With only a couple of people involved, it\u2019s even easier to only include the core activity like writing or recording in your estimates, and forget to allow time for the planning, reviewing, proofreading, publishing and promoting you\u2019ll still need to do. And even in a team of one, if at all possible you should find at least one other person to act as a second pair of eyes, and give anything you produce a quick once-over and proofread before it\u2019s published.\n\nDepending on the kind of content you\u2019re making, you should also consider what will happen after it\u2019s published. The full content life cycle should include promotion, monitoring and regular reviews to make sure content stays accurate and up to date. Making sure you have the time and resources available to do all those things for each piece of content is essential for creating a sustainable content programme.\n\nThe proof of the pudding\n\nEven after digging into workflow and getting the whole team involved in estimating, you\u2019re still largely in the realm of the guesstimate. The good news, though, is that you can quite quickly start finding out if your guesstimates are right or not. As soon as you can, pilot the production process with some real content. This is a double-win: you start finding out how long it really takes to produce all this fab new content, and you get real content to work with in designs and prototypes.\n\nOnce you\u2019ve run a few things through your process, you\u2019ll be able to refine your estimates, confirm your workflow, and give everyone involved a clear idea of when it will all be ready, and what you need from them.\n\nKeeping it all on track\n\nAt this point I like to pull everything together into the content strategist\u2019s favourite tool: the spreadsheet.\n\nA simple content production checklist is a bit like a content inventory or audit, but for the content you don\u2019t yet have, not the stuff already done. 
You can grab an example here.\n\nEach piece of content gets its own row, with columns for basic information like page title, ID (which should match the site map), and who\u2019s responsible for making it. You can capture simple details like target audience and key messages here too, though for more complex content, page description tables like those described by Relly Annett-Baker in \u201cExtracting the Content\u201d may be a better tool to use. Just adapt these columns to whatever makes sense for your content.\n\nI then have columns to track where each piece is in the production process. I usually keep this simple, with a column each to mark whether it\u2019s draft, final or uploaded. The status column on the left automatically shows the item\u2019s status, using a simple traffic light colour scheme for whether the item is still to do (red), in draft (amber), or done (green). Seeing the whole thing slowly turn from red to green is a nice motivator.\n\nIf you want to track the workflow in more detail, a kanban board in a tool like Trello is a great way for a team to collaborate on content production, track each item\u2019s progress, and keep an eye out for bottlenecks and delays. \n\nGetting to the content strategy conversation\n\nIt\u2019s a relatively simple exercise, then, to decide not just what kinds of pages you need, but also how many of them: put some rough estimates of effort on the tasks needed to create those pages \u2013 not just the writing, but all the other stages of planning, reviewing, approving, publishing and promoting \u2013 and then multiply all those things together. This will quickly bring some reality to grand visions and overambitious plans. Do it early enough, and even when the final big scary number is a lot bigger and scarier than everyone thought, you\u2019ll still have time to do something about it.\n\nAs well as getting everyone on board for some proper content planning activities, that big scary number is your opportunity to get to the real core questions of content strategy: do we really need all this content? Where can existing content be reused and repurposed? How do we prioritise our efforts? What really matters to our readers and users?\n\nTime and again, case studies show that less content delivers more: more leads, more sales, more self-service support and savings in the call centre. Although that argument is primarily one you should make from a good-for-the-users perspective, it doesn\u2019t hurt to be able to make it from the cheaper-for-the-business perspective as well, and to have some big scary numbers to back that up.", "year": "2014", "author": "Sophie Dennis", "author_slug": "sophiedennis", "published": "2014-12-17T00:00:00+00:00", "url": "https://24ways.org/2014/content-production-planning/", "topic": "content"} {"rowid": 42, "title": "An Overview of SVG Sprite Creation Techniques", "contents": "SVG can be used as an icon system to replace icon fonts. The reasons why SVG makes for a superior icon system are numerous, but we won\u2019t be going over them in this article. 
If you don\u2019t use SVG icons and are interested in knowing why you may want to use them, I recommend you check out \u201cInline SVG vs Icon Fonts\u201d by Chris Coyier \u2013 it covers the most important aspects of both systems and compares them with each other to help you make a better decision about which system to choose.\n\nOnce you\u2019ve made the decision to use SVG instead of icon fonts, you\u2019ll need to think of the best way to optimise the delivery of your icons, and ways to make the creation and use of icons faster.\n\nJust like bitmaps, we can create image sprites with SVG \u2013 they don\u2019t look or work exactly alike, but the basic concept is pretty much the same.\n\nThere are several ways to create SVG sprites, and this article will give you an overview of three of them. While we\u2019re at it, we\u2019re going to take a look at some of the available tools used to automate sprite creation and fallback for us.\n\nPrerequisites\n\nThe content of this article assumes you are familiar with SVG. If you\u2019ve never worked with SVG before, you may want to look at some of the introductory tutorials covering SVG syntax, structure and embedding techniques. I recommend the following:\n\n\n\tSVG basics: Using SVG.\n\tStructure: Structuring, Grouping, and Referencing in SVG \u2014 The , , and Elements. We\u2019ll mention and quite a bit in this article.\n\tEmbedding techniques: Styling and Animating SVGs with CSS. The article covers several topics, but the section linked focuses on embedding techniques.\n\tA compendium of SVG resources compiled by Chris Coyier \u2014 contains resources to almost every aspect of SVG you might be interested in.\n\n\nAnd if you\u2019re completely new to the concept of spriting, Chris Coyier\u2019s CSS Sprites explains all about them.\n\nAnother important SVG feature is the viewBox attribute. For some of the techniques, knowing your way around this attribute is not required, but it\u2019s definitely more useful if you understand \u2013 even if just vaguely \u2013 how it works. The last technique mentioned in the article requires that you do know the attribute\u2019s syntax and how to use it. To learn all about viewBox, you can refer to my blog post about SVG coordinate systems.\n\nWith the prerequisites in place, let\u2019s move on to spriting SVGs!\n\nBefore you sprite\u2026\n\nIn order to create an SVG sprite with your icons, you\u2019ll of course need to have these icons ready for use.\n\nSome spriting tools require that you place your icons in a folder to which a certain spriting process is to be applied. As such, for all of the upcoming sections we\u2019ll work on the assumption that our SVG icons are placed in a folder named SVG.\n\nEach icon is an individual .svg file.\n\nYou\u2019ll need to make sure each icon is well-prepared and optimised for use \u2013 make sure you\u2019ve cleaned up the code by running it through one of the optimisation tools or processes available (or doing it manually if it\u2019s not tedious).\n\nAfter prepping the icon files and placing them in a folder, we\u2019re ready to create our SVG sprite.\n\nHTML inline SVG sprites\n\nSince SVG is XML code, it can be embedded inline in an HTML document as a code island using the element. Chris Coyier wrote about this technique first on CSS-Tricks.\n\nThe embedded SVG will serve as a container for our icons and is going to be the actual sprite we\u2019re going to use. 
So we\u2019ll start by including the SVG in our document.\n\nNext, we\u2019re going to place the icons inside the <svg>. Each icon will be wrapped in a <symbol> element we can then reference and use elsewhere in the page using the SVG <use> element. The <symbol> element has many benefits, and we\u2019re using it because it allows us to define a symbol (which is a convenient markup for an icon) without rendering that symbol on the screen. The elements defined inside <symbol> will only be rendered when they are referenced \u2013 or called \u2013 by the <use> element.\n\nMoreover, <symbol> can have its own viewBox attribute, which makes it possible to control the positioning of its content inside its container at any time.\n\nBefore we move on, I\u2019d like to shed some light on the style=\"display:none;\" part of the snippet above. Without setting the display of the SVG to none, and even though its contents are not rendered on the page, the SVG will still take up space in the page, resulting in a big empty area. In order to avoid that, we\u2019re hiding the SVG entirely with CSS.\n\nNow, suppose we have a Twitter icon in the icons folder. twitter.svg might look something like this:\n\nWe don\u2019t need the root svg element, so we\u2019ll strip the code and only keep the parts that make up the Twitter icon\u2019s shape, which in this example is just the <path> element. Let\u2019s drop that into the sprite container like so:\n\nRepeat for the other icons.\n\nThe value of the <symbol> element\u2019s viewBox attribute depends on the size of the SVG. You don\u2019t need to know how the viewBox works to use it in this case. Its value is made up of four parts: the first two will almost always be \u201c0 0\u201d; the second two will be equal to the size of the icon. For example, our Twitter icon is 32px by 32px (see twitter.svg above), so the viewBox value is \u201c0 0 32 32\u201d.\n\nThat said, it is certainly useful to understand how the viewBox works \u2013 it can help you troubleshoot SVG sometimes and gives you better control over it, allowing you to scale, position and even crop SVGs manually without having to resort to an editor. My blog post explains all about the viewBox attribute and its related attributes.\n\nOnce you have your SVG sprite ready, you can display the icons anywhere on the page by referencing them using the SVG <use> element:\n\nAnd that\u2019s all there is to it!\n\nHTML-inline SVG sprites are simple to create and use, but when you have a lot of icons (and the more icon sets you create) it can easily become daunting if you have to manually transfer the icons into the <svg>. Fortunately, you don\u2019t have to do that. Fabrice Weinberg created a Grunt plugin called grunt-svgstore which takes the icons in your SVG folder and generates the SVG sprites for you; all you have to do is just drop the sprites into your page and use the icons like we did earlier.\n\nThis technique works in all browsers supporting SVG. There seems to be a bug in Safari on iOS which causes the icons not to show up when the SVG sprite is defined at the bottom of the document after the references to the icons, so it\u2019s safest to include the sprite before you use the icons until this bug is fixed.\n\nThis technique has one disadvantage: the SVG sprite cannot be cached. We\u2019re saving an extra HTTP request here but the browser cannot cache the image, so we aren\u2019t speeding up any subsequent page loads by inlining the SVG. 
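To recap the inline-sprite technique, here is a minimal, reconstructed sketch of the markup described above. Treat it as an illustration rather than an original listing: the id, the class name and the path data are placeholder assumptions; only the display:none style, the <symbol>/<use> pairing and the \u201c0 0 32 32\u201d viewBox come from the walkthrough.\n\n<svg xmlns=\"http://www.w3.org/2000/svg\" style=\"display:none;\">\n <symbol id=\"twitter-icon\" viewBox=\"0 0 32 32\">\n <!-- the contents of twitter.svg (its path element) go here -->\n <path d=\"...\"/>\n </symbol>\n <!-- one symbol per icon -->\n</svg>\n\n<!-- anywhere an icon is needed -->\n<svg class=\"icon\">\n <use xlink:href=\"#twitter-icon\"></use>\n</svg>\n\nIn markup of this era the reference is usually written with xlink:href, as shown; current browsers also accept a plain href attribute on <use>.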
There must be a better way \u2013 and there is.\n\nStyling the icons is possible, but getting deep into the styles becomes a bit harder owing to the nature of the contents of the element \u2013 these contents are cloned into a shadow DOM, and hence selecting elements in CSS the traditional way is not possible. However, some techniques to work around that do exist, and give us slightly more styling flexibility. Animations work as expected.\n\nReferencing an external SVG sprite in HTML\n\nInstead of including the SVG inline in the document, you can reference the sprite and the icons inside it externally, taking advantage of fragment identifiers to select individual icons in the sprite.\n\nFor example, the above reference to the Twitter icon would look something like this instead:\n\n\n \n\n\n\nicons.svg is the name of the SVG file that contains all of our icons as symbols, and the fragment identifier #twitter-icon is the reference to the wrapping the Twitter icon\u2019s contents. Very convenient, isn\u2019t it? The browser will request the sprite and then cache it, speeding up subsequent page loads. Win!\n\nThis technique also works in all browsers supporting SVG except Internet Explorer \u2013 not even IE9+ with SVG support permits this technique. No version of IE supports referencing an external SVG in .\n\nFortunately (again), Jonathan Neil has created a plugin called svg4everybody which fills this gap in IE; you can reference an external sprite in and also provide fallback for browsers that do not support SVG. However, it requires you to have the fallback images (PNG or JPEG, for example) available to do so. For details, refer to the plugin\u2019s Github repository\u2019s readme file.\n\nCSS inline SVG sprites\n\nAnother way to create an SVG sprite is by inlining the SVG icons in a style sheet using data URIs, and providing fallback for non-supporting browsers \u2013 also within the CSS.\n\nUsing this approach, we\u2019re turning the style sheet into the sprite that includes our icons. The style sheet is normally cached by the browser, so we have that concern out of the way.\n\nThis technique is put into practice in Filament Group\u2019s icon system approach, which uses their Grunticon plugin \u2013 or its sister Grumpicon web app \u2013 for generating the necessary CSS for the sprite. As such, we\u2019re going to cover this technique by following a workflow that uses one of these tools.\n\nAgain, we start with our icon SVG files. To focus on the actual spriting method and not on the tooling, I\u2019ll go over the process of sprite creation using the Grumpicon web app, instead of the Grunticon plugin. Both tools generate the same resources that we\u2019re going to use for the icon system. Whether you choose the web app or the Grunt set-up, after processing your SVG folder you\u2019re going to end up with the same set of resources that we\u2019ll be using throughout this section.\n\nThe first step is to drop your icons into the Grumpicon web app.\n\n Grumpicon homepage screenshot.\n\nThe application will then show you a preview of your icons, and a download button will allow you to download the generated files. 
These files will contain everything you need for your icon system – all that's left is for you to drop the generated files and code into your project as recommended, and you'll have your sprite and icons ready to use anywhere you want in your page.

Grumpicon generates five files and one folder in the downloaded package: a png folder containing PNG versions of your icons; three style sheets (that we'll go over briefly); a loader script file; and preview.html, which is a live example showing you the other files in action.

The script in the loader file goes into the <head> of your page. This script handles browser and feature detection, and requests the necessary style sheet depending on browser support for SVG and base64 data URIs. If you view the source code of the preview page, you can see exactly how the script is added.

icons.data.svg.css is the style sheet that contains your icons – the sprite. The icons are embedded inline inside the style sheet using data URIs, and applied to elements of your choice as background images, using class names. For example:

.twitter-icon{
 background-image: url('data:image/svg+xml;…'); /* the ellipsis is where the icon's data would go */
 background-repeat: no-repeat;
 background-position: 50% 50%;
 height: 2em;
 width: 2em;
 /* etc. */
}

Then, you only have to apply the twitter-icon class name to an element in your HTML to apply the icon as a background to it – something as simple as <div class="twitter-icon"></div> will do.

And that's all you need to do to get an icon on the page.

icons.data.svg.css, along with the other two style sheets and the png folder, should be added to your CSS folder.

icons.data.png.css is the style sheet the script will load in browsers that don't support SVG, such as IE8. Fallback for the inline SVG is provided as a base64-encoded PNG. For instance, the fallback for the Twitter icon from our example would look like so:

.twitter-icon{
 background-image: url('data:image/png;base64,…');
 /* etc. */
}

icons.fallback.css is the style sheet required for browsers that don't support base64-encoded PNGs – the PNG images are loaded as usual using the image's URL. The script will load this style sheet for IE6 and IE7, for example.

.twitter-icon{
 background-image: url(png/twitter-icon.png);
 /* etc. */
}

This technique is very different from the previous one. The sprite in this case is literally the style sheet, not an SVG container, and the icon usage is very similar to that of a CSS sprite – the icons are provided as background images.

This technique has advantages and disadvantages. For the sake of brevity, I won't go into further details, but the main limitations worth mentioning are that SVGs embedded as background images cannot be styled with CSS, and animations are restricted to those defined inside the <svg> for each icon. CSS interactions (such as hover effects) don't work either.
Thus, to apply an effect for an icon that changes its colour on hover, for example, you'll need to export a set of SVGs for each colour in order for Grumpicon to create matching fallback PNG images that can then be used for the animation.

For more details about the Grumpicon workflow, I recommend you check out "A Designer's Guide to Grumpicon" on Filament Group's website.

Using SVG fragment identifiers and views

This spriting technique is, again, different from the previous ones, and it is my personal favourite.

SVG comes with a standard way of cropping to a specific area in a particular SVG image. If you've ever worked with CSS sprites before then this will definitely sound familiar: it's almost exactly what we do with CSS sprites – the image containing all of the icons is cropped, so to speak, to show only the one icon that we want in the background positioning area of the element, using background size and positioning properties.

Instead of using background properties, we'll be using SVG's viewBox attribute to crop our SVG to the specific icon we want.

What I like about this technique is that it is more visual than the previous ones. Using this technique, the SVG sprite is treated like an actual image containing other images (the icons), instead of being treated as a piece of code containing other code.

Again, our SVG icons are placed inside a main SVG container that is going to be our SVG sprite. If you're working in a graphics editor, position or arrange your icons inside the canvas any way you want them to be, and then export the graphic as is. Of course, the less empty space there is in your SVG, the better.

In our example, the sprite contains three icons as shown in the following image. The sprite is open in Sketch. Notice how the SVG is just big enough to fit the icons inside it. It doesn't have to be like this, but it's cleaner this way.

 Screenshot showing the SVG sprite containing our icons.

Now, suppose you want to display only the Instagram icon. Using the SVG viewBox attribute, we can crop the SVG to the icon. The Instagram icon is positioned at 64px along the positive x-axis, and zero pixels along the y-axis. It is also 32px by 32px in size.

 Screenshot showing the position (offset) of the Instagram icon inside the SVG sprite, and its size.

Using this information, we can specify the value of the viewBox as: 64 0 32 32. This area of the view box contains only the Instagram icon. 64 0 specifies the top-left corner of the view box area, and 32 32 specifies its dimensions.

Now, if we were to change the viewBox value on the SVG sprite to this value, only the Instagram icon would be visible inside the SVG viewport. Great. But how do we use this information to display the icon in our page using our sprite?

SVG comes with a native way to link to portions or areas of an image using fragment identifiers. Fragment identifiers are used to link into a particular view area of an SVG document. Thus, using a fragment identifier and the boundaries of the area that we want (from the viewBox), we can link to that area and display it.

For example, if you want to display the icon from the sprite using an <img> tag, you can reference the icon by adding a fragment identifier to the sprite's URL. The fragment identifier – #svgView(viewBox(64, 0, 32, 32)) – is the important part; a sketch of the full reference follows below.
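Assuming the sprite has been saved as icons-sprite.svg (the file name and alt text here are only for illustration), the reference would look something like this:

<img src="icons-sprite.svg#svgView(viewBox(64,0,32,32))" alt="Instagram icon" />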
This will result in only the Instagram icon's area of the sprite being displayed.

There is also another way to do this, using the SVG <view> element. The <view> element can be used to define a view area and then reference that area somewhere else. For example, to define the view box containing the Instagram icon, we can add a <view> to the sprite with an id (say, instagram-icon-view) and a viewBox of 64 0 32 32.

Then, we can reference this view from our <img> element by using that id as the fragment identifier – src="icons-sprite.svg#instagram-icon-view", for example.

The best part about this technique – besides the ability to reference an external SVG and hence make use of browser caching – is that it allows us to use practically any SVG embedding technique and does not restrict us to specific tags.

It goes without saying that this feature can be used for more than just icon systems, owing to viewBox's power in controlling an SVG's viewable area.

SVG fragment identifiers have decent browser support, but the technique is buggy in Safari: there is a bug that causes problems when an SVG file is loaded from a server and fragment identifiers are then used with it. Bear Travis has documented the issue and a workaround.

Where to go from here

Pick the technique that works best for your project. Each technique has its own pros and cons, relating to convenience and maintainability, performance, and styling and scripting. Each technique also requires its own fallback mechanism.

The spriting techniques mentioned here are not the only techniques available. Other methods exist, such as SVG stacks, and others may surface in future, but these are the three main ones today.

The third technique, using SVG's built-in viewBox features, is my favourite, and with better browser support and fewer (ideally, no) bugs, I believe it is the most likely to become the standard way to create and use SVG sprites. Fallback techniques can be created, of course, in one of many possible ways.

Do you use SVG for your icon system? If so, which is your favourite technique? Do you know of, or have you worked with, other ways of creating SVG sprites?", "year": "2014", "author": "Sara Soueidan", "author_slug": "sarasoueidan", "published": "2014-12-16T00:00:00+00:00", "url": "https://24ways.org/2014/an-overview-of-svg-sprite-creation-techniques/", "topic": "code"} {"rowid": 35, "title": "SEO in 2015 (and Why You Should Care)", "contents": "If your business is healthy, you can always find plenty of reasons to leave SEO on your to-do list in perpetuity. After all, SEO is technical, complicated, time-consuming and potentially dangerous. The SEO industry is full of self-proclaimed gurus whose lack of knowledge can be deadly. There's the terrifying fact that even if you dabble in SEO in the most gentle and innocent way, you might actually end up in a worse state than you were to begin with.

To make matters worse, Google keeps changing the rules. There have been a bewildering number of major updates, which despite their cuddly names have had a horrific impact on website owners worldwide.

Fear aside, there's also the issue of time. It's probably tricky enough to find the time to read this article. Setting up, planning and executing an SEO campaign might well seem like an insurmountable obstacle.

So why should you care enough about SEO to do it anyway?

The main reason is that you probably already see between 30% and 60% of your website traffic come from the search engines. That might make you think that you don't need to bother, because you're already doing so well.
But you\u2019re almost certainly wrong.\n\nIf you have a look through the keyword data in your Google Webmaster Tools account, you\u2019ll probably see that around 30\u201350% of the keywords used to find your website are brand names \u2013 the names of your products or companies. These are searches carried out by people who already know about you. But the people who don\u2019t know who you are but are searching for what you sell aren\u2019t finding you right now. This is your opportunity.\n\nIf a person goes looking for a company or product by name, Google will steer them towards what they\u2019re looking for. Their intelligence does have limits, however, and even though they know your name they won\u2019t be completely clear about what you sell. That\u2019s where SEO would come in.\n\nStill need more convincing? How about the fact that the seeming complexities of SEO mean that your competition are almost certainly neglecting it too. They have the same reservations as you about complexity, time and danger, and hopefully they aren\u2019t reading this article and so are none the wiser of the well-kept secret: that 70% of SEO is easy.\n\nI\u2019m going to lead you through what you need to do to tap into that stream of people looking for what you sell right now.\n\nWhat is real SEO?\n\nReal SEO is all about helping Google understand the content of your website. It\u2019s about steering, guiding and assisting Google. Not manipulating it.\n\nIt\u2019s easy to assume that Google already understands the content and relevance of each and every page on your website, but the fact is that it needs a fair amount of hand-holding. Fortunately, helping Google along really isn\u2019t very difficult at all.\n\nRest assured that real SEO has nothing to do with keyword stuffing, keyword density, hacks, tricks or cunning techniques. If you hear any of these terms from your SEO advisor, run away from them as quickly as you can.\n\nUnderstanding your current situation \u2013 Google Analytics\n\nBefore you can do anything to improve your SEO status, you need to get an idea of how you\u2019re already doing. Below is a very quick and easy way of doing so.\n\n1. Open up your Google Analytics account.\n\n2. Click on the date range selector on the top-right of the interface and change the year of the first date to last year. So 12 Dec 2014 will become 12 Dec 2013. Then click on Apply.\n\n3. Click on the All Sessions rectangle towards the top-left, click once on Organic Traffic and click Apply.\n\n4. Click the little black-and-white squares icon that has now appeared under the date selector on the top-right, and drag the slider all the way over to Higher Precision.\n\n5. Change the interval buttons on the top-right of the graph to Week to make this easier to digest.\n\nAt this point your graph should look something like this:\n\n\n\nIt\u2019s worth noting the approximate proportion of your visitors that currently come from organic sources.\n\n6. Click the little downwards arrow to the right of the All Sessions rectangle and choose Remove, so that we\u2019re only looking at the organic traffic on its own.\n\n7. Click on Select a metric next to the Sessions button above the graph and select Pages / Session. You should then see something like this:\n\n\n\nIn the example above we can see that the quantity of traffic has been increasing since the middle of August, but the quality of the traffic (as measured by the number of pages per session) has fallen significantly. 
\n\nHow you choose to view this is down to your own graph, recent history and interpretation of events, but this should give you an indication of how things stand at the present time. Trends are often much more revealing than a snapshot of a brief moment in time.\n\nYour Google Webmaster Tools data\n\nIf you\u2019re not very familiar with your Google Webmaster Tools account, it\u2019s really worth taking ten to fifteen minutes to see what\u2019s on offer. I can\u2019t recommend this enough. From the point of view of an SEO health check, I\u2019d advise you to look into the HTML Improvements, Crawl Errors and Crawl Stats, and most importantly the Search Queries.\n\nFrom what you see here and the trends shown in your Analytics data, you should now have a good idea of your current status. If you want to explore further, I recommend Screaming Frog as a good diagnostics tool, or Botify if your website is large or unusually complex.\n\nCombining the data into something useful\n\nYour Google Analytics session will have shown you how you\u2019re doing from an SEO point of view in terms of the quantity and, to some extent, the quality of your visitors. But it\u2019s only showing you what is already working. In other words: the people who are finding you on the search engines, and clicking on your links.\n\nThe Google Webmaster Tools search query data, on the other hand, will give you a better idea of what isn\u2019t working. It will show you the keyword searches that are getting you listed in the results, but which aren\u2019t necessarily getting clicked. And it doesn\u2019t take much by the way of expertise to see why.\n\nFor example, if you see your targeted keyword, which you feel is extremely relevant, has generated over 2,000 impressions in the last month but produced only two clicks, you\u2019ll probably find a very low average position. Bear in mind that an average position of fourteen will mean being around halfway down the second page of results. Think about how rarely you go beyond the first two or three listings, never mind to the second page of results, and you\u2019ll understand why the click-through rate is so low.\n\nSo now you have an idea of what you\u2019re being found for at the present time. But what about the other terms?\n\nWhat would you like to be found for?\n\nThis is one of the more common SEO mistakes, on a number of different levels. \n\nMany businesses assume that they don\u2019t need to worry about keyword research. They think they know what terms people use to find what they sell, and they also assume that Google understands the content on their website. This is incorrect on all counts.\n\nA better starting point is to brainstorm a small number of your most obvious keywords, then run them through Google\u2019s Keyword Planner. Ignore the information in the Ad group ideas tab, and instead go straight to the Keyword ideas tab. Rather than wade through the very unfriendly interface, I recommend downloading the data as a spreadsheet, in which not only is more detail included, but you can also slice, dice, sort and report the data as required.\n\nFrom there you can delete all the irrelevant columns, and start working your way through the list, deleting any irrelevant keywords as you go along.\n\nIt\u2019s around this stage that you may hit a problem in terms of where to focus your efforts. The number of reported searches for a given keyword is of course important, but so is the level of competition. 
Ideally, you\u2019d like keywords with plenty of searches but not too much competition.\n\nI personally like to factor both together by adding a column that simply divides the number of searches squared by the level of competition:\n\n(number of searches \u00d7 number of searches) \u00f7 competition\n\nThere are plenty of alternatives to this basic formula, but I like it for ease of use and simplicity. Once I\u2019ve added this column, I then sort the data by this value (largest to smallest) and I then only usually need ten to fifteen keywords at most to give me plenty of ideas to work with.\n\nThis is a slightly involved but effective methodology for keyword research, as what you\u2019re left with is a list of keywords that both Google and you consider to be relevant to the content of your website. And relevance is an important concept in SEO.\n\nReal SEO keyword research is about making sure that your customers, website and Google are all in agreement and alignment over the content of your website. Other sources of inspiration and ideas include having a look at what terms your competition are targeting, Google Trends and, of course, Google Suggest. If you\u2019re not sure where to find these things, you can probably work out where to search for them!\n\nIf you want to dive further into understanding your current search engine status, search for some of the better keywords that you just discovered and see where you rank compared to your competition. Note that it\u2019s vital to avoid Google serving up personalised results, so either use the privacy, incognito or anonymous mode of your browser for the searches, or use a browser that you don\u2019t normally use. I hope this is Internet Explorer. If what you find isn\u2019t great, don\u2019t despair: everything in SEO is fixable (terms and conditions may apply).\n\nPutting it all together\n\nYou should now have a good idea of where things stand with your current search engine traffic, and a solid list of keywords that you\u2019re not getting visitors for but very much want.\n\nAll that\u2019s left now is to work out how to use these keywords. But before we do, let\u2019s take a quick step back.\n\nIf you have in any way kept up with what\u2019s been happening in SEO over the last couple of years, you\u2019ll have probably heard about Google updates with names like Panda, Hummingbird, Phantom, Pirate and more.\n\nI won\u2019t go into the technical details of what Google is doing, but it is important to understand why they\u2019re trying to do it. At the most basic level, Google understands that there\u2019s a very real problem with people who are trying manipulate its index. In response to this, Google is trying to clean up its results. They don\u2019t want people getting fed up with bad results and considering other options \u2013 have you even tried Bing?\n\nThis is extremely important. Remember earlier when I said that 70% of SEO was easy? That rule still applies. So, for example, if you have a list of keywords that you know are relevant to what you sell, then all you need to do is create great content for them. 
Incredibly, that\u2019s all there is to it (terms and conditions apply again, unfortunately \u2013 see below).\n\nThere is, however, one simple rule to be consistently followed without exception: that the content you create should not only be good quality and completely original, but it should also be written primarily for the human visitor and not the search engine spider.\n\nIn other words, if you create some fantastic content for a keyword like \u201cchoosing a small business HR service\u201d, then the article should not only make perfect sense if read out loud (as opposed to the same phrase being repeated fifteen times), but also provide real value to the person reading it.\n\nSo the process is simple:\n\n\n\tChoose your keywords\n\tCreate spectacular content\n\n\nWait. Is it really that simple?\n\nUnfortunately there\u2019s a lot more to the other 30% of SEO than just creating great content and waiting for the visitors. There are issues like helping Google understand the content on your pages and website, incoming links, page authority, domain authority, usage patterns, spam factors, canonical issues and much more.\n\nBut there\u2019s the often overlooked fact about Google: it actually does a reasonable job of working out what\u2019s on your website and (to some extent) understanding the gist of it. If you\u2019ve never done any SEO on your website but still get some traffic from Google, this is why.\n\nEven without dabbling in the other 30% of SEO, by creating the right content for the right visitors using the precise language and terminology that your potential customers are using, you\u2019re significantly better off than your competition. And you can only gain from this.\n\nWhen you\u2019ve checked this off your to-do list and made it an ingrained part of your content creation process, then you\u2019re ready to delve into the other 30% of SEO. The not-so-easy side.\n\nUntil then, work on understanding your current situation, exploring the opportunities, creating a list of good keywords, creating the right content for them, and starting 2015 with a little bit of smart, safe and real SEO.", "year": "2014", "author": "Dave Collins", "author_slug": "davecollins", "published": "2014-12-15T00:00:00+00:00", "url": "https://24ways.org/2014/seo-in-2015-and-why-you-should-care/", "topic": "business"} {"rowid": 33, "title": "Five Ways to Animate Responsibly", "contents": "It\u2019s been two years since I wrote about \u201cFlashless Animation\u201d on this very site. Since then, animation has steadily begun popping up on websites, from sleek app-like user interfaces to interactive magazine-like spreads. It\u2019s an exciting time for web animation wonks, interaction developers, UXers, UI designers and a host of other acronyms! \n\nBut in our rush to experiment with animation it seems that we\u2019re having fewer conversations about whether or not we should use it, and more discussions about what we can do with it. We spend more time fretting over how to animate all the things at 60fps than we do devising ways to avoid incapacitating users with vestibular disorders.\n\nI love web animation. I live it. And I make adorably silly things with it that have no place on a self-respecting production website. I know it can be abused. We\u2019ve all made fun of Flash-turbation. But how quickly we forget the lessons we learned from that period of web design. Parallax scrolling effects may be the skip intro of this generation. 
Surely we have learned better in the sobering up period between Flash and the web animation API.\n\nSo here are five bits of advice we can use to pull back from the edge of animation abuse. With these thoughts in mind, we can make 2015 the year web animation came into its own. \n\nAnimate deliberately\n\nSadly, animation is considered decorative by the bulk of the web development community. UI designers and interaction developers know better, of course. But when I\u2019m teaching a workshop on animation for interaction, I know that my students face an uphill battle against decision makers who consider it nice to have, and tack it on at the end of a project, if at all. \n\nThis stigma is hard to shake. But it starts with us using animation deliberately or not at all. Poorly considered, tacked-on animation will often cause more harm than good. Users may complain that it\u2019s too slow or too fast, or that they have no idea what just happened.\n\nWhen I was at Chrome Dev Summit this year, I had the privilege to speak with Roma Sha, the UX lead behind Polymer\u2019s material design (with the wonderful animation documentation). I asked her what advice she\u2019d give to people using animation and transitions in their own designs. She responded simply: animate deliberately. If you cannot afford to slow down to think about animation and make well-informed and well-articulated decisions on behalf of the user, it is better that you not attempt it at all. Animation takes energy to perform, and a bad animation is worse than none at all. \n\nIt takes more than twelve principles\n\nWe always try to draw correlations between disparate things that spark our interest. Recently it feels like more and more people are putting the The Illusion of Life on their reading shelf next to Understanding Comics. These books give us so many useful insights from other industries. However, we should never mistake a website for a comic book or an animated feature film. Some of these concepts, while they help us see our work in a new light, can be more or less relevant to producing said work. \n\n\nThe illusion of life from cento lodigiani on Vimeo.\n\nI am specifically thinking of the twelve principles of animation put forth by Disney studio veterans in that great tome The Illusion of Life. These principles are very useful for making engaging, lifelike animation, like a ball bouncing or a squirrel scampering, or the physics behind how a lightbox should feel transitioning off a page. But they provide no direction at all for when or how something should be animated as part of a greater interactive experience, like how long a drop-down should take to fully extend or if a group of manipulable objects should be animated sequentially or as a whole.\n\nThe twelve principles are a great place to start, but we have so much more to learn. I\u2019ve documented at least six more functions of interactive animation that apply to web and app design. When thinking about animation, we should consider why and how, not just what, the physics. Beautiful physics mean nothing if the animation is superfluous or confusing.\n\nUseful and necessary, then beautiful\n\nThere is a Shaker saying: \u201cDon\u2019t make something unless it is both necessary and useful; but if it is both necessary and useful, don\u2019t hesitate to make it beautiful.\u201d When it comes to animation and the web, currently there is very little documentation about what makes it useful or necessary. We tend to focus more on the beautiful, the delightful, the aesthetic. 
And while aesthetics are important, they take a back seat to the user\u2019s overall experience. \n\n\n\nThe first time I saw the load screen for Pokemon Yellow on my Game Boy, I was enthralled. By the sixth time, I was mashing the start button as soon as Game Freak\u2019s logo hit the screen. What\u2019s delightful and meaningful to us while working on a project is not always so for our users. And even when a purely delightful animation is favorably received, as with Pokemon Yellow\u2019s adorable opening screen, too many repetitions of the cutest but ultimately useless animation, and users start to resent it as a hindrance.\n\n \n\nIf an animation doesn\u2019t help the user in some way, by showing them where they are or how two elements on a page relate to each other, then it\u2019s using up battery juice and processing cycles solely for the purpose of delight. Hardly the best use of resources.\n\nRather than animating solely for the sake of delight, we should first be able to articulate two things the animation does for the user. As an example, take this menu icon from Finethought.com (found via Use Your Interface). The menu icon does two things when clicked: \n\n\n\tIt gives the user feedback by animating, letting the user know its been clicked.\n\tIt demonstrates its changed relationship to the page\u2019s content by morphing into a close button.\n\n\n\n\nAssuming we have two good reasons to animate something, there is no reason our third cannot be to delight the user. \n\nGo four times faster\n\nThere is a rule of thumb in the world of traditional animation which is applicable to web animation: however long you think your animation should last, take that time and halve it. Then halve it again! When we work on an animation for hours, our sense of time dilates. What seems fast to us is actually unbearably slow for most users. In fact, the most recent criticism from users of animated interfaces on websites seems to be, \u201cIt\u2019s so slow!\u201d A good animation is unobtrusive, and that often means running fast.\n\nWhen getting your animations ready for prime time, reduce those durations to 25% of their original speed: a four-second fade out should be over in one. \n\nInstall a kill switch\n\nNo matter how thoughtful and necessary an animation, there will be people who become physically sick from seeing it. For these people, we must add a way to turn off animations on the website. \n\nFortunately, web designers are already thinking of ways to empower users to make their own decisions about how they experience the web. As an example, this site for the animated film Little from the Fish Shop allows users to turn off most of the parallax effects. While it doesn\u2019t remove the animation entirely, this website does reduce the most nauseating of the animations. \t\n\n\n\n\n\nAnimation is a powerful tool in our web design arsenal. But we must take care: if we abuse animation it might get a bad reputation; if we underestimate it, it won\u2019t be prioritized. 
But if we wield it thoughtfully, use it where it is both necessary and useful, and empower users to turn it off, animation is a tool that will help us build things that are easier to use and more delightful for years to come.\n\nLet\u2019s make 2015 the year web animation went to work for users.", "year": "2014", "author": "Rachel Nabors", "author_slug": "rachelnabors", "published": "2014-12-14T00:00:00+00:00", "url": "https://24ways.org/2014/five-ways-to-animate-responsibly/", "topic": "ux"} {"rowid": 25, "title": "The Introvert Owner\u2019s Manual", "contents": "Nobody realizes that some people expend tremendous energy merely to be normal.\nAlbert Camus\n\n\n\u201cWhatever you plan, just make sure there are lots of people there,\u201d said my husband in the run-up to his birthday last year. A few months later, before my own birthday, I uttered, \u201cWhatever you plan, just make sure it is only me and you.\u201d\n\nI am an introvert. It is very likely some of you are too, or that you live, work or fraternise with one. Despite there being quite a few of us out there \u2013 some say as many as one third of the population, others as little as ten per cent \u2013 I think our professional and social lives are biased towards a definition of normality that is more accepting of the extrovert. I hope that by reading this article you will gain some insight to what goes on inside the head of the introvert(s) that you know and understand how to relate to them in a way that respects their disposition.\n\nBefore we go any further, I should define what exactly being an introvert means, and, equally important, what it does not. Only once this is established will you be able to handle your introvert correctly.\n\nWhat defines an introvert\n\nThe simplest and most accurate way of describing an introvert is that she uses up energy in social situations and needs to be in solitude to recharge.\n\nTo explain what I mean, let us take the example of the The Sims: when you create a Sim, you can choose (among other characteristics) whether it will be outgoing or not. If the Sim is outgoing, when you play the game you need to make sure it interacts as much as possible with other Sims or its mood indicator (the plumbob) will become red and that is a bad thing. Conversely, if your Sim is not outgoing, when you put it in too many social situations its plumbob will become red too.\n\nSo your (real life) introvert might think you are great (you might even be her best friend, her spouse or her child), but if her plumbob is red, or nearly, she might just need a little time and space to recharge before she is ready to interact.\n\nThis is not the same thing as being shy or in a bad mood all the time. We are not necessarily awkward in social situations, but, if we have not had the time to recharge, being social might be almost impossible. This explains why your introvert will likely ask who will be at the gathering you have planned, for how long she will have to stay there, and what she will be doing before and after the event. 
It is the equivalent of you wanting to know if there will be power sockets in the restaurant to charge your iPhone \u2013 asking this does not mean you don\u2019t want to attend.\n\nThe explanation above might be a simplistic way of looking at things, but I would say it is one that introverts can relate to; call it a minimalist approach to socialisation.\n\nCaring for your introvert\n\nArticles and conversations about introversion usually focus on how to fix the condition and how to make introverts more outgoing: a clear example of our society\u2019s bias towards the normality of extroversion. Avoid this. You will not be able to convert your introvert into an extrovert. Believe it or not, there is nothing wrong with her.\n\nIn her 2012 TED talk, \u201cThe power of introverts\u201d, Susan Cain pointed to the fact that places like school and work are designed for extroverts: students and workers are required to constantly work in groups and speaking up is highly valued. Both types are evaluated against the same criteria and more often than not people are expected to excel at being outspoken to be considered well rounded.\n\nObviously, this is not the right way to appraise your introvert. Comparing your introvert with an extrovert using the same parameters and simply asking her to behave more like an extrovert is a mistake and something that will only perpetuate an introvert\u2019s idea that the problem lies with her.\n\nSpeaking up\n\nYour introvert is likely to have strong opinions and ideas, and to have been listening to other people speak at meetings and workshops. Help her voice those thoughts by creating an environment where everyone stops and listens when someone speaks instead of one which fosters interruptions. Show her that it is acceptable for someone to take time to think before they speak: silences are OK. Allow her the freedom to be herself instead of pressuring her to change an innate quality.\n\nIt is not uncommon to find an introvert who likes to express ideas in writing. The world of web professionals excels in the spread of knowledge that is shared and sought through the written word. Give your introvert the necessary time and tools to write about the job, if she is that way inclined; this might be a good alternative to asking her to speak out.\n\nGroup work\n\nI remember the sinking feeling whenever I heard my teachers say the dreaded words: \u201cAnd now you\u2019re going to break out into groups of\u2026\u201d Being an introvert does not mean you do not like people (or like to be around or work with others). It is just that activities such as group work will invariably drain your introvert\u2019s energy rapidly. Your introvert\u2019s batteries will need to be fully charged for her to be at her best and afterwards she will most likely need to recharge.\n\nQuiet time\n\nThese days, one of the things that I value most at work is the ability to have moments to create and to think in solitude. When I am able to have those moments at the right time I will in turn be happy to participate in group conversations and tasks. Allow your introvert to have those moments: this does not mean she will have to work from home one day a week (but maybe it will); it might simply mean allowing her to take her laptop and her notebook and work from the empty side of the office, or from the coffee shop downstairs for an hour or two. 
In all likelihood she will come back fully recharged and ready to engage in more social activities \u2013 her plumbob will again be bright green.\n\nLeadership\n\nDo not think that your introvert cannot lead. Cain notes that introverted leaders are more likely to let their proactive employees run with their ideas instead of taking the ideas as their own. I would say that is a positive attribute in a leader. Maybe next time a project starts, talk to your introvert about the possibility of her being in a leadership position or of having more responsibility: you might be surprised at her ability to plan and foresee potential obstacles in the project.\n\nFinal thoughts\n\nYou would not tell someone with dyslexia to get better at spelling without giving her the right tools and enough time to do so. Equally, do not ask your introvert to be more outgoing, or to turn her frown upside down, without giving her the space to do so.\n\nI believe that everyone is an introvert at some point. Everyone needs a moment of solitude now and then, and the work we do requires frequent periods of deep focus and concentration. Striving to create workplaces, classrooms, homes that allow introverts to shine and be comfortable in their skin has the potential to also make those places more balanced for everyone else.\n\nResources and further reading\n\n\n\tThe power of introverts\n\t10 myths about introverts\n\tSusan Cain\u2019s 2014 TED Talk | Announcing the Quiet Revolution\n\tHelp Shy Kids \u2014 Don\u2019t Punish Them\n\tThe Introvert Advantage\n\t6 Things You Thought Wrong About Introverts\n\tExtraversion and introversion", "year": "2014", "author": "Inayaili de Le\u00f3n Persson", "author_slug": "inayailideleon", "published": "2014-12-13T00:00:00+00:00", "url": "https://24ways.org/2014/the-introvert-owners-manual/", "topic": "process"} {"rowid": 45, "title": "Is Agile Harder for Agencies?", "contents": "I once sat in a pitch meeting and watched a new business exec tell a potential client that his agency followed an agile workflow process at all times. The potential client nodded wisely, and they both agreed that agile was indeed the way to go.\n\nThe meeting progressed and they signed off on a contract for a massive project, to be delivered in a standard waterfall fashion, with all manner of phases and key deliverables.\n\nOf course both of them left the meeting perfectly happy, because neither of them knew nor cared what an agile workflow process might be.\n\nThat was about five years ago. As 2015 heaves into view I think it\u2019s fair to say that attitudes have changed. Perhaps the same number of people claim to do Agile\u2122 now as in 2010, but I think more of them are telling the truth.\n\nAs a developer in an agency that works primarily with larger organisations, this year I have started to see a shift from agencies pushing agile methodologies with their clients, to clients requesting and even demanding agile practices from their agencies. Only a couple of years ago this would have been unusual behaviour.\n\nSo what\u2019s the problem?\n\nWe should be happy then, no? Those of us in agencies will get to spend more time delivering great products, and less time arguing over out-of-date functional specs or battling through an adversarial change management procedure because somebody had a good idea during development rather than planning. 
We get to be a little bit more like our brothers and sisters in vaunted teams like the Government Digital Service, which is using agile approaches to great effect on projects that have a real benefit to their users.\n\nAlmost. Unfortunately, it seems to be the case that adhering to an agile framework such as scrum is more difficult within an agency/client structure than it is for an in-house development team.\n\nThis is no surprise. The Agile Manifesto was written in 2001 by a group of software developers for their own use. Many of the underlying principles of a framework like Scrum assume the existence of an in-house team, working on a highly technical project, and working for the business that employs them. The agency/client model must to some extent be retrofitted into agile frameworks. It can be done though, and there are plenty of agencies out there doing it well.\n\nThis article isn\u2019t meant to be another introduction to agile techniques \u2013 there are too many of those online already. This article is for people just dipping their toes into this way of working. I\u2019ve laid out a few of the key reasons why adopting a more fully agile approach seems difficult, at least initially, for those of us working in agencies.\n\n1. Agile asks more of your clients\n\nWhen a team adopts Scrum everyone has to get used to a number of unfamiliar roles and rituals. Few team members have a steeper learning curve than the person designated as the product owner.\n\nThe product owner carries a lot of weight on their shoulders. They have to uphold the overall vision for the project. They are also meant to be the primary author of the project\u2019s user stories (short atomic descriptions of project features which are testable and relate to a real business need). They should own this list of stories (called a backlog) and should be able to prioritise the order in which the stories are developed, to ensure that the project is delivering real value to the business early and often.\n\nWhen a burst of work is completed (bursts of work in Scrum are called sprints), the product owner leads a review or show-and-tell session with the wider project stakeholders. The product owner needs to understand the work that has been completed, and must champion it to the business. Finally, and most importantly, the product owner is responsible for managing the feedback and requests from stakeholders in such a way that they don\u2019t derail the project team\u2019s agreed workload for any given sprint, without upsetting or offending any of the stakeholders \u2013 some of whom may outrank the product owner.\n\nIf you follow that spec, this is a job for a superhuman in any organisational context. And within the agency/client structure this superhuman needs to be client-side for the process to be at its most effective.\n\nSo your client, who in the past might have briefed a project to an agency team and then had the work presented back to them every few weeks, is now asked to be involved with the team on a daily basis; to fight on behalf of the team when new or difficult requests come in from senior figures within their organisation; and to present the agency\u2019s work to their own colleagues after each sprint. It\u2019s a big change if all that gets dropped into someone\u2019s lap without warning.\n\nThere are several ways agencies can mitigate this issue. The ScrumAlliance suggests some alternative ways to structure the product owner role. 
The approach I have taken in the past is simply to start slow, and gradually move more of the product owner role over to the client side as and when they feel comfortable with it. If you\u2019re working together long-term on a project, and you both see tangible improvements in the quality of the work after adopting an agile process, then your client is more likely to be open to further changes as the partnership progresses.\n\n2. My client wants fixed costs, fixed deadlines and a fixed scope\n\nI know. Mine too. Of course they do \u2013 it is the way that agencies and clients have agreed to work in digital and other creative service industries for a very long time. On both sides of the fence we\u2019re used to thinking about projects in this way.\n\nOf the three, fixing scope is the one that agile purists would rail hardest against. The more time we spend working on digital projects, the less sense it makes. James Archer, CEO of UI/UX design agency Forty puts it like this:\n\n\n\tFor me, the Agile approach is really about acknowledging that disturbing truth that every project manager knows, but has trouble admitting. The truth that the project plan is wrong. Scope creep. Change orders. Shifting priorities. New directions. We act shocked and appalled when those things happen during our carefully planned project, even though they happen on every project ever.\n\n\nSuccessful relationships require trust and honesty, and we shouldn\u2019t be afraid of discussing this aspect of project management. If you do move away from a fixed scope of work, then the other two items (costs and timings) can be fixed \u2013 more or less. If you can get your clients to buy into this from a standing start then you are doing well. In fact you probably deserve a promotion. For most of us this is a continual discussion.\n\nAnyway, as soon as you\u2019ve made headway on the argument that it makes little or no sense to try and fix the scope of a digital project, you usually run into a related concern, which we\u2019ll look at next.\n\n3. Fear of uncontrolled costs\n\nWe all know that a dog is for life, not just for Christmas. At this time of year perhaps we should reiterate to everyone that digital products and services also need support and love once we have taken the decision to bring them into the world.\n\nMore organisations are realising that their investment in digital platforms should be viewed as an operational expenditure rather than a capital expenditure. But from time to time we will find ourselves working on projects for people who have a finite amount of money to invest in a product at a given point in time. When agencies start talking about these projects as rolling investments those responsible can understandably worry about their costs running out of control.\n\nThere\u2019s another factor at play here. Agile, on the whole, prefers to derive a cost for services from the hours a team spends working on a project. In other industries this is referred to as charging for time and materials, and there seems to be an ingrained distrust in this approach among people in general. 
See, for example, the Citizens Advice Bureau\u2019s \u201cTop tips for employing a builder\u201d:\n\n\n\t\u201cBear in mind that if you pay a daily rate, this makes it easier for a builder to string the work out and get more money so agree what you will do if the job takes longer than expected.\u201d\n\n\nIt\u2019s hard not to feel stung if you are in the builder\u2019s shoes here, as we are when we\u2019re talking about our role as an agency. But if you\u2019ve ever haggled with a builder over time and materials, and also moaned about your clients misunderstanding agile methods, take a moment to reflect on the similarities from your client\u2019s point of view.\n\nAgain, there are some things we can do to mitigate this issue. Some agencies put in place a service level agreement around their team\u2019s velocity (an agile-related term related to how much work a team delivers in any given sprint) and this can help.\n\nAs the industry moves further towards a long-term approach to investment in digital I hope this fear will subside. But that shift in approach leads to the final concern I want to address.\n\n4. Agency structures need shaking up\n\nIf you work for a company that has spent many years developing a business model around the waterfall process, you may have to break through many layers of entrenched thinking in order to establish new practices and effect organisational change.\n\nThere are consultancies that exist specifically to help agencies through their own agile transformation. One of these companies, AgencyAgile, provides a helpful list of common pitfalls. They emphasise the need to look at your whole agency\u2019s structure, rather than simply encouraging project teams to adopt new workflows.\n\n\n\tEven awesomely run Agile projects can have a limited impact on the overall organization.\n\n\nIf you\u2019re serious about changing the way your company approaches projects then try talking to people who sit outside the usual project delivery team. Speak to the finance department if you have one, and try to convince your senior management team if they\u2019re not already on board. And definitely speak to your new business people, who go out there and win the projects you get to work on.\n\nIt\u2019s these people who need to understand the potential business benefits of working in a new way, and also which of their existing habits and behaviours they might need to change to accommodate a new approach.\n\nOtherwise you\u2019ll find yourself with a team of designers, developers and project managers who are ready and waiting to deliver work in an iterative and collaborative way, but by the time they get hold of the project a cost has already been agreed, a deadline has been imposed, and a functional requirements document has been painstakingly put together. Nobody wins in this situation.\n\nConclusion\n\nSo where should we go from here? I certainly don\u2019t have hard and fast answers \u2013 I\u2019m not sure that they exist in a one-size-fits-all approach for agencies.\n\nThere are plenty of smart people thinking about this problem. It\u2019s a hot topic right now. Earlier in the year a London-based meetup was established called Agile for Agencies. If you\u2019re in the capital and want to discuss these issues with your peers it\u2019s a great opportunity to do so.\n\nI\u2019ve mentioned James Archer and Forty already. Both James and Paul Boag have written in the last twelve months on this subject. 
They both come out on the side of the argument that suggests you adopt agile principles, but don\u2019t have to worry about the rituals if they don\u2019t fit in with your practices.\n\nPersonally, I think the rituals and the discipline mandated by an agile framework like Scrum can provide a great deal of value to your team, even it if is hard to implement within an agency culture that has traditionally structured its work and its services in another way.\n\nIn whatever way you figure out the details, when your teams collaborate with your clients rather than work for them at arm\u2019s length, and when everyone prioritises frequent delivery, reflection and iteration over exhaustive scoping and planning, I believe you\u2019ll see a tangible difference in the quality of the work that you create.", "year": "2014", "author": "Charlie Perrins", "author_slug": "charlieperrins", "published": "2014-12-12T00:00:00+00:00", "url": "https://24ways.org/2014/is-agile-harder-for-agencies/", "topic": "process"} {"rowid": 27, "title": "Putting Design on the Map", "contents": "The web can leave us feeling quite detached from the real world. Every site we make is really just a set of abstract concepts manifested as tools for communication and expression. At any minute, websites can disappear, overwritten by a newfangled version or simply gone. I think this is why so many of us have desires to create a product, write a book, or play with the internet of things. We need to keep in touch with the physical world and to prove (if only to ourselves) that we do make real things.\n\nI could go on and on about preserving the web, the challenges of writing a book, or thoughts about how we can deal with the need to make real things. Instead, I\u2019m going to explore something that gives us a direct relationship between a website and the physical world \u2013 maps.\n\n\n\tA map does not just chart, it unlocks and formulates meaning; it forms bridges between here and there, between disparate ideas that we did not know were previously connected.\nReif Larsen, The Selected Works of T.S. Spivet\n\n\nThe simplest form of map on a website tends to be used for showing where a place is and often directions on how to get to it. That\u2019s an incredibly powerful tool. So why is it, then, that so many sites just plonk in a default Google Map and leave it as that? You wouldn\u2019t just use dark grey Helvetica on every site, would you? Where\u2019s the personality? Where\u2019s the tailored experience? Where is the design?\n\nJumping into design\n\nLet\u2019s keep this simple \u2013 we all want to be better web folk, not cartographers. We don\u2019t need to go into the history, mathematics or technology of map making (although all of those areas are really interesting to research). For the sake of our sanity, I\u2019m going to gloss over some of the technical areas and focus on the practical concepts.\n\nTiles\n\nIf you\u2019ve ever noticed a map loading in sections, it\u2019s because it uses tiles that are downloaded individually instead of requiring the user to download everything that they might need. These tiles come in many styles and can be used for anything that covers large areas, such as base maps and data. You\u2019ve seen examples of alternative base maps when you use Google Maps as Google provides both satellite imagery and road maps, both of which are forms of base maps. They are used to provide context for the real world, or any other world for that matter. 
A marker on a blank page is useless.\n\nThe tiles are representations of the physical; they do not have to be photographic imagery to provide context. This means you can design the map itself. The easiest way to conceive this is by comparing Google\u2019s road maps with Ordnance Survey road maps. Everything about the two maps is different: the colours, the label fonts and the symbols used. Yet they still provide the exact same context (other maps may provide different context such as terrain contours).\n\n Comparison of Google Maps (top) and the Ordnance Survey (bottom).\n\nCarefully designing the base map tiles is as important as any other part of the website. The most obvious, yet often overlooked, aspect are aesthetics and branding. Maps could fit in with the rest of the site; for example, by matching the colours and line weights, they can enhance the full design rather than inhibiting it. You\u2019re also able to define the exact purpose of the map, so instead of showing everything you could specify which symbols or labels to show and hide.\n\nI\u2019ve not done any real research on the accessibility of base maps but, having looked at some of the available options, I think a focus on the typography of labels and the colour of the various elements is crucial. While you can choose to hide labels, quite often they provide the data required to make sense of the map. Therefore, make sure each zoom level is not too cluttered and shows enough to give context. Also be as careful when choosing the typeface as you are in any other design work. As for colour, you need to pay closer attention to issues like colour-blindness when using colour to convey information. Quite often a spectrum of colour will be used to show data, or to show the topography, so you need to be aware that some people struggle to see colour differences within a spectrum.\n\nA nice example of a customised base map can be found on Michael K Owens\u2019 check-in pages:\n\n One of Michael K Owens\u2019 check-in pages.\n\nAs I\u2019ve already mentioned, tiles are not just for base maps: they are also for data. In the screenshot below you can see how Plymouth Marine Laboratory uses tiles to show data with a spectrum of colour.\n\n A map from the Marine Operational Ecology data portal, showing data of adult cod in the North Sea.\n\nTechnical\n\nYou\u2019re probably wondering how to design the base layers. I will briefly explain the concepts here and give you tools to use at the end of the article. If you\u2019re worried about the time it takes to design the maps, don\u2019t be \u2013 you can automate most of it. You don\u2019t need to manually draw each tile for the entire world!\n\nWe\u2019ve learned the importance of web standards the hard way, so you\u2019ll be glad (and I won\u2019t have to explain the advantages) of the standard for web mapping from the Open Geospatial Consortium (OGC) called the Web Map Service (WMS). You can use conventional file formats for the imagery but you need a way to query for the particular tiles to show for the area and zoom level, that is what WMS does.\n\nFeatures\n\nTiles are great for covering large areas but sometimes you need specific smaller areas. We call these features and they usually consist of polygons, lines or points. Examples include postcode boundaries and routes between places, or even something more dynamic such as borders of nations changing over time.\n\nShowing features on a map presents interesting design challenges. 
If the colour or shape conveys some kind of data beyond geographical boundaries then it needs to be made obvious. This is actually really hard, without building complicated user interfaces. For example, in the image below, is it obvious that there is a relationship between the colours? Does it need a way of showing what the colours represent?\n\n Choropleth map showing ranked postcode areas, using ViziCities.\n\n\n\tFeatures are represented by means of lines or colors; and the effective use of lines or colors requires more than knowledge of the subject \u2013 it requires artistic judgement.\nErwin Josephus Raisz, cartographer (1893\u20131968)\n\n\nWhere lots of boundaries are small and close together (such as a high street or shopping centre) will it be obvious where the boundaries are and what they represent? When designing maps, the hardest challenge is dealing with how the data is represented and how it is understood by the user.\n\nTechnical\n\nAs you probably gathered, we use WMS for tiles and another standard called the web feature service (WFS) for specific features. I need to stress that the difference between the two is that WMS is for tiling, whereas WFS is for specific features. Both can use similar file formats but should be used for their particular use cases. You may be wondering why you can\u2019t just use a vector format such as KML, GeoJSON (or even SVG) \u2013 and you can \u2013 but the issue is the same as for WMS: you need a way to query the data to get the correct area and zoom level.\n\nUser interface\n\nThere is of course never a correct way to design an interface as there are so many different factors to take into consideration for each individual project. Maps can be used in a variety of ways, to provide simple information about directions or for complex visualisations to explain large amounts of data. I would like to just touch on matters that need to be taken into account when working with maps.\n\nAs I mentioned at the beginning, there are so many Google Maps on the web that people seem to think that its UI is the only way you can use a map. To some degree we don\u2019t want to change that, as people know how to use them; but does every map require a zoom slider or base map toggle? In fact, does the user need to zoom at all? The answer to that one is generally yes, zooming does provide more context to where the map is zoomed in on.\n\nIn some cases you will need to let users choose what goes on the map (such as data layers or directions), so how do they show and hide the data? Does a simple drop-down box work, or do you need search? Google\u2019s base map toggle is quite nice since it doesn\u2019t offer many options yet provides very different contexts and styling.\n\nIt isn\u2019t until we get to this point that we realise just plonking a quick Google map is really quite ridiculous, especially when compared to the amount of effort we make in other areas such as colour, typography or how the CSS is written. Each of these is important but we need to make sure the whole site is designed, and that includes the maps as much as any other content.\n\nPutting it into practice\n\nI could ramble on for ages about what we can do to customise maps to fit a site\u2019s personality and correctly represent the data. I wanted to focus on concepts and standards because tools constantly change and it is never good to just rely on a tool to do the work. That said, there are a large variety of tools that will help you turn these concepts into reality. 
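Before we get to the tools, it's worth seeing how little magic there is in the standards themselves. A WMS map request, for instance, is nothing more than a URL with a handful of well-known parameters. The endpoint and layer name below are invented for illustration, but the parameter names come straight from the standard:

https://example.org/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=sea_surface_temperature&STYLES=&SRS=EPSG:4326&BBOX=-10,49,2,61&WIDTH=256&HEIGHT=256&FORMAT=image/png

Several of the tools below – Leaflet and OpenLayers, for example – can take a WMS endpoint like this and construct the individual tile requests for you.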
This is not a comparison; I just want to show you a few of the many options you have for maps on the web.\n\nGoogle\n\nOK, I\u2019ve been quite critical so far about Google Maps but that is only because there is such a large amount of the default maps across the web. You can style them almost as much as anything else. They may not allow you to use custom WMS layers but Google Maps does have its own version, called styled maps. Using an array of map features (in the sense of roads and lakes and landmarks rather than the kind WFS is used for), you can style the base map with JavaScript. It even lets you toggle visibility, which helps to avoid the issue of too much clutter on the map. As well as lacking WMS, it doesn\u2019t support WFS, but it does support GeoJSON and KML so you can still show the features on the map. You should also check out Google Maps Engine (the new version of My Maps), which provides an interface for creating more advanced maps with a selection of different base maps. A premium version is available, essentially for creating map-based visualisations, and it provides a step up from the main Google Maps offering. A useful feature in some cases is that it gives you access to many datasets.\n\nLeaflet\n\nYou have probably seen Leaflet before. It isn\u2019t quite as popular as Google Maps but it is definitely used often and for good reason. Leaflet is a lightweight open source JavaScript library. It is not a service so you don\u2019t have to worry about API throttling and longevity. It gives you two options for tiling, the ability to use WMS, or to directly get the file using variables in the filename such as /{z}/{x}/{y}.png. I would recommend using WMS over dynamic file names because it is a standard, but the ability to use variables in a file name could be useful in some situations. Leaflet has a strong community and a well-documented API.\n\nMapbox\n\nAs a freemium service, Mapbox may not be perfect for every use case but it\u2019s definitely worth looking into. The service offers incredible customisation tools as well as lots of data sources and hosting for the maps. It also provides plenty of libraries for the various platforms, so you don\u2019t have to only use the maps on the web.\n\nMapbox is a service, though its map design tool is open source. Mapbox Studio is a vector-only version of their previous tool called Tilemill. Earlier I wrote about how typography and colour are as important to maps as they are to the rest of a website; if you thought, \u201cYes, but how on earth can I design those parts of a map?\u201d then this is the tool for you. It is incredibly easy to use. Essentially each map has a stylesheet.\n\nIf you do not want to open a paid-for Mapbox account, then you can export the tiles (as PNG, SVG etc.) to use with other map tools.\n\nOpenLayers\n\nAfter a long wait, OpenLayers 3 has been released. It is similar to Leaflet in that it is a library not a service, but it has a much broader scope. During the last year I worked on the GIS portal at Plymouth Marine Laboratory (which I used to show the data tiles earlier), it essentially used OpenLayers 2 to create a web-based geographic information system, taking a large amount of data and permitting analysis (such as graphs) without downloading entire datasets and complicated software. OpenLayers 3 has improved greatly on the previous version in both performance and accessibility. 
It is the ideal tool for complex map-based web apps, though it can be used for the simple use cases too.\n\nOpenStreetMap\n\nI couldn\u2019t write an article about maps on the web without at least mentioning OpenStreetMap. It is the place to go for crowd-sourced data about any location, with complete road maps and a strong API.\n\nViziCities\n\nThe newest project on this list is ViziCities by Robin Hawkes and Peter Smart. It is a open source 3-D visualisation tool, currently in the very early stages of development. The basic example shows 3-D buildings around the world using OpenStreetMap data. Robin has used it to create some incredible demos such as real-time London underground trains, and planes landing at an airport. Edward Greer and I are currently working on using ViziCities to show ideal housing areas based on particular personas. We chose it because the 3-D aspect gives us interesting possibilities for the data we are able to visualise (such as bar charts on the actual map instead of in the UI). Despite not being a completely stable, fully featured system, ViziCities is worth taking a look at for some use cases and is definitely going to go from strength to strength.\n\n\n\nSo there you have it \u2013 a whistle-stop tour of how maps can be customised. Now please stop plonking in maps without thinking about it and design them as you design the rest of your content.", "year": "2014", "author": "Shane Hudson", "author_slug": "shanehudson", "published": "2014-12-11T00:00:00+00:00", "url": "https://24ways.org/2014/putting-design-on-the-map/", "topic": "design"} {"rowid": 30, "title": "Making Sites More Responsive, Responsibly", "contents": "With digital projects we\u2019re used to shifting our thinking to align with our target audience. We may undertake research, create personas, identify key tasks, or observe usage patterns, with our findings helping to refine our ongoing creations.\u00a0A product\u2019s overall experience can make or break its success, and when it comes to defining these experiences our development choices play a huge role alongside more traditional user-focused activities.\n\nThe popularisation of responsive web design is a great example of how we are able to shape the web\u2019s direction through using technology to provide better experiences. If we think back to the move from table-based layouts to CSS, initially our clients often didn\u2019t know or care about the difference in these approaches, but\u00a0we\u00a0did. Responsive design was similar in this respect \u2013 momentum grew through the web industry choosing to use an approach that we felt would give a better experience, and which was more future-friendly.\u00a0\n\nWe tend to think of responsive design as a means of displaying content appropriately across a range of devices, but the technology and our implementation of it can facilitate much more. A responsive layout not only helps your content work when the newest smartphone comes out, but it also ensures your layout suitably adapts if a visually impaired user drastically changes the size of the text.\n\n The 24 ways site at 400% on a Retina MacBook Pro displays a layout more typically used for small screens.\n\nWhen we think more broadly, we realise that our technical choices and approaches to implementation can have knock-on effects for the greater good, and beyond our initial target audiences. 
We can make our experiences more\u00a0responsive to people\u2019s needs, enhancing their usability and accessibility along the way.\n\nBeing responsibly responsive\n\nOf course, when we think about being more responsive, there\u2019s a fine line between creating useful functionality and becoming intrusive and overly complex. In the excellent Responsible Responsive Design, Scott Jehl states that:\n\n\nA responsible responsive design equally considers the following throughout a project:\n\nUsability: The way a website\u2019s user interface is presented to the user, and how that UI responds to browsing conditions and user interactions.\nAccess: The ability for users of all devices, browsers, and assistive technologies to access and understand a site\u2019s features and content.\nSustainability: The ability for the technology driving a site or application to work for devices that exist today and to continue to be usable and accessible to users, devices, and browsers in the future.\nPerformance: The speed at which a site\u2019s features and content are perceived to be delivered to the user and the efficiency with which they operate within the user interface.\n\n\n\nScott\u2019s book covers these ideas in a lot more detail than I\u2019ll be able to here (put it on your Christmas list if it\u2019s not there already), but for now let\u2019s think a bit more about our roles as digital creators\u00a0and the power this gives us.\n\nOur choices around technology and the decisions we have to make can be extremely wide-ranging. Solutions will vary hugely depending on the needs of each project, though we can further explore the concept of making our creations more responsive through the use of humble web technologies.\n\nThe power of the web\n\nWe all know that under the HTML5 umbrella are some great new capabilities, including a number of JavaScript APIs such as geolocation, web audio, the file API and many more. We often use these to enhance the functionality of our sites and apps, to add in new features, or to facilitate device-specific interactions.\n\nYou\u2019ll have seen articles with flashy titles such as \u201cTop 5 JavaScript APIs You\u2019ve Never Heard Of!\u201d, which you\u2019ll probably read, think \u201cThat\u2019s quite cool\u201d, yet never use in any real work.\n\nThere is great potential for technologies like these\u00a0to be misused, but there are also great prospects for them to be used well to enhance experiences. Let\u2019s have a look at a few\u00a0examples you may not have considered.\n\nOffline first\n\nWhen we make websites, many of us follow a process which involves user stories \u2013 standardised snippets of context explaining who needs what, and why.\n\n\u201cAs a student I want to pay online for my course so I don\u2019t have to visit the college in person.\u201d\n\n\u201cAs a retailer I want to generate unique product codes so I can manage my stock.\u201d\n\nWe very often focus heavily on what\u00a0needs doing, but may not consider carefully how it will be done. 
As in Scott\u2019s list, accessibility is extremely important, not only in terms of providing a great experience to users of assistive technologies, but also to make your creation more accessible in the general sense \u2013 including under different conditions.\n\nOffline first is yet another \u2018first\u2019 methodology (my personal favourite being \u2018tea first\u2019), which encourages us to develop so that connectivity\u00a0itself is an enhancement \u2013 letting\u00a0users continue with tasks even when they\u2019re offline. Despite the rapid growth in public Wi-Fi, if we consider data costs and connectivity in developing countries, our travel habits with planes, underground trains and roaming (or simply if you live in the UK\u2019s signal-barren East Anglian wilderness as I do), then you\u2019ll realise that connectivity isn\u2019t as ubiquitous as our internet-addled brains would make us believe. Take a scenario that I\u2019m sure we\u2019re all familiar with \u2013 the digital conference. Your venue may be in a city served by high-speed networks, but after overloading capacity with a full house of hashtag-hungry attendees, each carrying several devices, then everyone\u2019s likely to be offline after all. Wouldn\u2019t it be better if we could do something like this instead?\n\n\n\tSomeone visits our conference website.\n\tOn this initial run, some assets may be cached for future use: the conference schedule, the site\u2019s CSS, photos of the speakers.\n\tWhen the attendee revisits the site on the day, the page shell loads up from the cache.\n\tIf we have cached content (our session timetable, speaker photos or anything else), we can load it directly from the cache. We might then try to update this, or get some new content from the internet, but the conference attendee already has a base experience to use.\n\tIf we don\u2019t have something cached already, then we can try\u00a0grabbing it online.\n\tIf for any reason our requests for new content fail (we\u2019re offline), then we can display a pre-cached error message from the initial load, perhaps providing our users with alternative suggestions from what is\u00a0cached.\n\n\nThere are a number of ways we can make something like this, including using the application cache (AppCache) if you\u2019re that way inclined. However, you may want to look into service workers\u00a0instead. There are also some great resources on Offline First!\u00a0if you\u2019d like to find out more about this.\n\nBuilding in offline functionality isn\u2019t necessarily about starting offline first, and it\u2019s also perfectly possible to retrofit sites and apps to catch offline scenarios, but this kind of graceful degradation can end up being more complex than if we\u2019d considered it from the start. By treating connectivity as an enhancement, we can improve the experience and provide better performance than we can when waiting to counter failures. Our websites can respond to connectivity and usage scenarios, on top of adapting how we present our content. 
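As a very rough sketch of that flow, a service worker along these lines could handle the caching and the fallback. Support is still far from universal and the file names here are made up, so treat it as an illustration of the idea rather than production code:

// sw.js, registered elsewhere with navigator.serviceWorker.register('/sw.js')
var CACHE = 'conference-v1';

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      // Cache the page shell and anything useful offline on the first visit
      return cache.addAll([
        '/',
        '/schedule/',
        '/css/site.css',
        '/offline.html' // the pre-cached fallback message
      ]);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      // Use the cache when we can, fall back to the network,
      // and show the pre-cached message if both let us down
      return cached || fetch(event.request).catch(function () {
        return caches.match('/offline.html');
      });
    })
  );
});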
Thinking in this way can enhance each point in Scott\u2019s criteria.\n\nAs I mentioned, this isn\u2019t necessarily the kind of development choice that our clients will ask us for, but it\u2019s one we may decide is simply the right way to build based on our project, enhancing the experience we provide to people, and making it more responsive to their situation.\n\nEven more accessible\n\nWe\u2019ve looked at accessibility in terms of broadening when we can interact with a website, but what about how? Our user stories and personas are often of limited use. We refer in very general terms to students, retailers, and sometimes just users. What if we have a student whose needs are very different from another student? Can we make our sites even more usable and accessible through our development choices?\n\nAgain using JavaScript to illustrate this concept, we can do a lot more with the ways people interact with our websites, and with the feedback we provide, than simply accepting keyboard, mouse and touch inputs and displaying output on a screen.\n\nInput\n\nAmbient light detection is one of those features that looks great in simple demos, but which we struggle to put to practical use. It\u2019s not new \u2013 many satnav systems automatically change the contrast for driving at night or in tunnels, and our laptops may alter the screen brightness or keyboard backlighting to better adapt to our surroundings. Using web technologies we can adapt our presentation to be better suited to ambient light levels.\n\nIf our device has an appropriate light sensor and runs a browser that supports the API, we can grab the ambient light in units using ambient light events, in JavaScript. We may then change our presentation based on different bandings, perhaps like this:\n\nwindow.addEventListener('devicelight', function(e) {\n var lux = e.value;\n\n if (lux < 50) {\n //Change things for dim light\n }\n if (lux >= 50 && lux <= 10000) {\n //Change things for normal light\n }\n if (lux > 10000) {\n //Change things for bright light\n }\n});\n\nLive demo\u00a0(requires light sensor and supported browser).\n\nSoon we may also be able to do such detection through CSS, with light-level being cited in the Media Queries Level 4 specification. If that becomes the case, it\u2019ll probably look something like this:\n\n@media (light-level: dim) {\n /*Change things for dim light*/\n}\n\n@media (light-level: normal) {\n /*Change things for normal light*/\n}\n\n@media (light-level: washed) {\n /*Change things for bright light*/\n}\n\nWhile we may be quick to dismiss this kind of detection as being a gimmick, it\u2019s important to consider that apps such as Light Detector, listed on Apple\u2019s accessibility page, provide important context around exactly this functionality.\n\n\n\t\u201cIf you are blind, Light Detector helps you to be more independent in many daily activities. At home, point your iPhone towards the ceiling to understand where the light fixtures are and whether they are switched on. In a room, move the device along the wall to check if there is a window and where it is. You can find out whether the shades are drawn by moving the device up and down.\u201d\n\n\teverywaretechnologies.com/apps/lightdetector\n\n\nInput can be about so much more than what we enter through keyboards. Both an ever increasing amount of available sensors and more APIs being supported by the major browsers will allow us to cater for more scenarios and respond to them accordingly. 
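Because not every device or browser will offer this, it makes sense to treat it as an enhancement and check for support before listening for the event. A small sketch, with made-up class names and a feature test that may vary between browsers:

if ('ondevicelight' in window) {
  window.addEventListener('devicelight', function (e) {
    // Map the reading to a class on the root element,
    // so the actual presentation changes stay in the CSS
    var level = 'light-normal';
    if (e.value < 50) {
      level = 'light-dim';
    } else if (e.value > 10000) {
      level = 'light-bright';
    }
    document.documentElement.className = document.documentElement.className
      .replace(/light-(dim|normal|bright)/g, '')
      .trim() + ' ' + level;
  });
}

The CSS can then respond to those classes in much the same way the proposed light-level media queries would.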
This can be as complex or simple as you need; for instance, while x-webkit-speech has been deprecated, the web speech API is available for a number of browsers, and research into sign language detection is also being performed by organisations such as Microsoft.\n\nOutput\n\nWeb technologies give us some great enhancements around input, allowing us to adapt our experiences accordingly. They also provide us with some nice ways to provide feedback to users.\n\nWhen we play video games, many of our modern consoles come with the ability to have rumble effects on our controller pads. These are a great example of an enhancement, as they provide a level of feedback that is entirely optional, but which can give a great deal of extra information to the player in the right circumstances, and broaden the scope of our comprehension beyond what we\u2019re seeing and hearing.\n\nHaptic feedback is possible on the web as well. We could use this in any number of responsible applications, such as alerting a user to changes or using different patterns as a communication mechanism. If you find yourself in a pickle, here\u2019s how to print out SOS in Morse code through the vibration API. The following code indicates the length of vibration in milliseconds, interspersed by pauses in milliseconds.\n\nnavigator.vibrate([100, 300, 100, 300, 100, 300, 600, 300, 600, 300, 600, 300, 100, 300, 100, 300, 100]);\n\nLive demo\u00a0(requires supported browser)\n\nWith great power\u2026\n\nWhat you\u2019ve no doubt come to realise by now is that these are just more examples of progressive enhancement, whose inclusion will provide a better experience if the capabilities are available, but which we should not rely on. This idea isn\u2019t new, but the most important thing to remember, and what I would like you to take away from this article, is that it is up to us to decide to include these kind of approaches within our projects \u2013 if we don\u2019t root for them, they probably won\u2019t happen. This is where our professional responsibility comes in.\n\nWe won\u2019t necessarily be asked to implement solutions for the scenarios above, but they illustrate how we can help to push the boundaries of experiences. Maybe we\u2019ll have to switch our thinking about how we build, but we can create more usable products for a diverse range of people and usage scenarios through the choices we make around technology. Let\u2019s stop thinking simply in terms of features inside a narrow view of our target users, and work out how we can extend these to cater for a wider set of situations.\n\nWhen you plan your next digital project, consider the power of the web and the enhancements we can use, and try to make your projects even more responsive and responsible.", "year": "2014", "author": "Sally Jenkinson", "author_slug": "sallyjenkinson", "published": "2014-12-10T00:00:00+00:00", "url": "https://24ways.org/2014/making-sites-more-responsive-responsibly/", "topic": "code"} {"rowid": 46, "title": "Responsive Enhancement", "contents": "24 ways has been going strong for ten years. That\u2019s an aeon in internet timescales. Just think of all the changes we\u2019ve seen in that time: the rise of Ajax, the explosion of mobile devices, the unrecognisably changed landscape of front-end tooling.\n\nTools and technologies come and go, but one thing has remained constant for me over the past decade: progressive enhancement.\n\nProgressive enhancement isn\u2019t a technology. It\u2019s more like a way of thinking. 
Instead of thinking about the specifics of how a finished website might look, progressive enhancement encourages you to think about the fundamental meaning of what the website is providing. So instead of thinking of a website in terms of its ideal state in a modern browser on a nice widescreen device, progressive enhancement allows you to think about the core functionality in a more abstract way.\n\nOnce you\u2019ve figured out what the core functionality is \u2013 adding an item to a shopping cart, posting a message, sharing a photo \u2013 then you can enable that functionality in the simplest possible way. That usually means starting with good old-fashioned HTML. Links and forms are often all you need. Then, once you have the core functionality working in a basic way, you can start to enhance to make a progressively better experience for more modern browsers.\n\nThe advantage of working this way isn\u2019t just that your site will work in older browsers (albeit in a rudimentary way). It also ensures that if anything goes wrong in a modern browser, it won\u2019t be catastrophic.\n\nThere\u2019s a common misconception that progressive enhancement means that you\u2019ll spend your time dealing with older browsers, but in fact the opposite is true. Putting the basic functionality into place doesn\u2019t take very long at all. And once you\u2019ve done that, you\u2019re free to spend all your time experimenting with the latest and greatest browser technologies, secure in the knowledge that even if they aren\u2019t universally supported yet, that\u2019s OK: you\u2019ve already got your fallback in place.\n\nThe key to thinking about web development this way is realising that there isn\u2019t one final interface \u2013 there could be many, slightly different interfaces depending on the properties and capabilities of any particular user agent at any particular moment. And that\u2019s OK. Websites do not need to look the same in every browser.\n\nOnce you truly accept that, it\u2019s an immensely liberating idea. Instead of spending your time trying to make websites look the same in wildly varying browsers, you can spend your time making sure that the core functionality of what you build works everywhere, while providing the best possible experience for more capable browsers.\n\nAllow me to demonstrate with a simple example: navigation.\n\nStep one: core functionality\n\nLet\u2019s say we have a straightforward website about the twelve days of Christmas, with a page for each day. The core functionality is pretty clear:\n\n\n\tTo read about any particular day.\n\tTo browse from day to day.\n\n\nThe first is easily satisfied by marking up the text with headings, paragraphs and all the usual structural HTML elements. The second is satisfied by providing a list of good ol\u2019 hyperlinks.\n\nNow where\u2019s the best place to position this navigation list? Personally, I\u2019m a big fan of the jump-to-footer pattern. This puts the content first and the navigation second. At the top of the page there\u2019s a link with an href attribute pointing to the fragment identifier for the navigation.\n\n\n
<body>\n <main role=\"main\">\n  <a href=\"#menu\" class=\"control\">Menu</a>\n  ...\n </main>\n <nav role=\"navigation\" id=\"menu\">\n  ...\n </nav>\n</body>\n
See the footer-anchor pattern in action.\n\nBecause it\u2019s nothing more than a hyperlink, this works in just about every browser since the dawn of the web. Following hyperlinks is what web browsers were made to do (hence the name).\n\nStep two: layout as an enhancement\n\nThe footer-anchor pattern is a particularly neat solution on small-screen devices, like mobile phones. Once more screen real estate is available, I can use the magic of CSS to reposition the navigation above the content. I could use position: absolute, flexbox or, in this case, display: table.\n\n@media all and (min-width: 35em) {\n .control {\n display: none;\n }\n body {\n display: table;\n }\n [role=\"navigation\"] {\n display: table-caption;\n columns: 6 15em;\n }\n}\n\nSee the styles for wider screens in action\n\nStep three: enhance!\n\nRight. At this point I\u2019m providing core functionality to everyone, and I\u2019ve got nice responsive styles for wider screens. I could stop here, but the real advantage of progressive enhancement is that I don\u2019t have to. From here on, I can go crazy adding all sorts of fancy enhancements for modern browsers, without having to worry about providing a fallback for older browsers \u2013 the fallback is already in place.\n\nWhat I\u2019d really like is to provide a swish off-canvas pattern for small-screen devices. Here\u2019s my plan:\n\n\n\tPosition the navigation under the main content.\n\tListen out for the .control links being activated and intercept that action.\n\tWhen those links are activated, toggle a class of .active on the body.\n\tIf the .active class exists, slide the content out to reveal the navigation.\n\n\nHere\u2019s the CSS for positioning the content and navigation:\n\n@media all and (max-width: 35em) {\n [role=\"main\"] {\n transition: all .25s;\n width: 100%;\n position: absolute;\n z-index: 2;\n top: 0;\n right: 0;\n }\n [role=\"navigation\"] {\n width: 75%;\n position: absolute;\n z-index: 1;\n top: 0;\n right: 0;\n }\n .active [role=\"main\"] {\n transform: translateX(-75%);\n }\n}\n\nIn my JavaScript, I\u2019m going to listen out for any clicks on the .control links and toggle the .active class on the body accordingly:\n\n(function (win, doc) {\n 'use strict';\n var linkclass = 'control',\n activeclass = 'active',\n toggleClassName = function (element, toggleClass) {\n var reg = new RegExp('(\\s|^)' + toggleClass + '(\\s|$)');\n if (!element.className.match(reg)) {\n element.className += ' ' + toggleClass;\n } else {\n element.className = element.className.replace(reg, '');\n }\n },\n navListener = function (ev) {\n ev = ev || win.event;\n var target = ev.target || ev.srcElement;\n if (target.className.indexOf(linkclass) !== -1) {\n ev.preventDefault();\n toggleClassName(doc.body, activeclass);\n }\n };\n doc.addEventListener('click', navListener, false);\n}(this, this.document));\n\nI\u2019m all set, right? Not so fast!\n\nCutting the mustard\n\nI\u2019ve made the assumption that addEventListener will be available in my JavaScript. That isn\u2019t a safe assumption. That\u2019s because JavaScript \u2013 unlike HTML or CSS \u2013 isn\u2019t fault-tolerant. If you use an HTML element or attribute that a browser doesn\u2019t understand, or if you use a CSS selector, property or value that a browser doesn\u2019t understand, it\u2019s no big deal. The browser will just ignore what it doesn\u2019t understand: it won\u2019t throw an error, and it won\u2019t stop parsing the file.\n\nJavaScript is different.
If you make an error in your JavaScript, or use a JavaScript method or property that a browser doesn\u2019t recognise, that browser will throw an error, and it will stop parsing the file. That\u2019s why it\u2019s important to test for features before using them in JavaScript. That\u2019s also why it isn\u2019t safe to rely on JavaScript for core functionality.\n\nIn my case, I need to test for the existence of addEventListener:\n\n(function (win, doc) {\n if (!win.addEventListener) {\n return;\n }\n ...\n}(this, this.document));\n\nThe good folk over at the BBC call this kind of feature test cutting the mustard. If a browser passes the test, it cuts the mustard, and so it gets the enhancements. If a browser doesn\u2019t cut the mustard, it doesn\u2019t get the enhancements. And that\u2019s fine because, remember, websites don\u2019t need to look the same in every browser.\n\nI want to make sure that my off-canvas styles are only going to apply to mustard-cutting browsers. I\u2019m going to use JavaScript to add a class of .cutsthemustard to the document:\n\n(function (win, doc) {\n if (!win.addEventListener) {\n return;\n }\n ...\n var enhanceclass = 'cutsthemustard';\n doc.documentElement.className += ' ' + enhanceclass;\n}(this, this.document));\n\nNow I can use the existence of that class name to adjust my CSS:\n\n@media all and (max-width: 35em) {\n .cutsthemustard [role=\"main\"] {\n transition: all .25s;\n width: 100%;\n position: absolute;\n z-index: 2;\n top: 0;\n right: 0;\n }\n .cutsthemustard [role=\"navigation\"] {\n width: 75%;\n position: absolute;\n z-index: 1;\n top: 0;\n right: 0;\n }\n .cutsthemustard .active [role=\"main\"] {\n transform: translateX(-75%);\n }\n}\n\nSee the enhanced mustard-cutting off-canvas navigation. Remember, this only applies to small screens so you might have to squish your browser window.\n\nEnhance all the things!\n\nThis was a relatively simple example, but it illustrates the thinking behind progressive enhancement: once you\u2019re providing the core functionality to everyone, you\u2019re free to go crazy with all the latest enhancements for modern browsers.\n\nProgressive enhancement doesn\u2019t mean you have to provide all the same functionality to everyone \u2013 quite the opposite. That\u2019s why it\u2019s key to figure out early on what the core functionality is, and make sure that it can be provided with the most basic technology. But from that point on, you\u2019re free to add many more features that aren\u2019t mission-critical. You should reward more capable browsers by giving them more of those features, such as animation in CSS, geolocation in JavaScript, and new input types in HTML.\n\nLike I said, progressive enhancement isn\u2019t a technology. It\u2019s a way of thinking. Once you start thinking this way, you\u2019ll be prepared for whatever the next ten years throws at us.", "year": "2014", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2014-12-09T00:00:00+00:00", "url": "https://24ways.org/2014/responsive-enhancement/", "topic": "code"} {"rowid": 38, "title": "Websites of Christmas Past, Present and Future", "contents": "The websites of Christmas past\n\nThe first website was created at CERN. It was launched on 20 December 1990 (just in time for Christmas!), and it still works today, after twenty-four years. Isn\u2019t that incredible?!\n\nWhy does this website still work after all this time? I can think of a few reasons.\n\nFirst, the authors of this document chose HTML. 
Of course they couldn\u2019t have known back then the extent to which we would be creating documents in HTML, but HTML always had a lot going for it. It\u2019s built on top of plain text, which means it can be opened in any text editor, and it\u2019s pretty readable, even without any parsing.\n\nDespite the fact that HTML has changed quite a lot over the past twenty-four years, extensions to the specification have always been implemented in a backwards-compatible manner. Reading through the 1992 W3C document HTML Tags, you\u2019ll see just how it has evolved. We still have h1 \u2013 h6 elements, but I\u2019d not heard of the element before. Despite being deprecated since HTML2, it still works in several browsers. You can see it in action on my website.\n\nAs well as being written in HTML, there is no run-time compilation of code; the first website simply consists of HTML files transmitted over the web. Due to its lack of complexity, it stood a good chance of surviving in the turbulent World Wide Web.\n\nThat\u2019s all well and good for a simple, static website. But websites created today are increasingly interactive. Many require a login and provide experiences that are tailored to the individual user. This type of dynamic website requires code to be executed somewhere.\n\nTraditionally, dynamic websites would execute such code on the server, and transmit a simple HTML file to the user. As far as the browser was concerned, this wasn\u2019t much different from the first website, as the additional complexity all happened before the document was sent to the browser.\n\nDoing it all in the browser\n\nIn 2003, the first single page interface was created at slashdotslash.com. A single page interface or single page app is a website where the page is created in the browser via JavaScript. The benefit of this technique is that, after the initial page load, subsequent interactions can happen instantly, or very quickly, as they all happen in the browser.\n\nWhen software runs on the client rather than the server, it is often referred to as a fat client. This means that the bulk of the processing happens on the client rather than the server (which can now be thin).\n\nA fat client is preferred over a thin client because:\n\n\n\tIt takes some processing requirements away from the server, thereby reducing the cost of servers (a thin server requires cheaper, or fewer servers).\n\tThey can often continue working offline, provided no server communication is required to complete tasks after initial load.\n\tThe latency of internet communications is bypassed after initial load, as interactions can appear near instantaneous when compared to waiting for a response from the server.\n\n\nBut there are also some big downsides, and these are often overlooked:\n\n\n\tThey can\u2019t work without JavaScript. Obviously JavaScript is a requirement for any client-side code execution. And as the UK Government Digital Service discovered, 1.1% of their visitors did not receive JavaScript enhancements. Of that 1.1%, 81% had JavaScript enabled, but their browsers failed to execute it (possibly due to dropping the internet connection). If you care about 1.1% of your visitors, you should care about the non-JavaScript experience for your website.\n\tThe browser needs to do all the processing. This means that the hardware it runs on needs to be fast. 
It also means that we require all clients to have largely the same capabilities and browser APIs.\n\tThe initial payload is often much larger, and nothing will be rendered for the user until this payload has been fully downloaded and executed. If the connection drops at any point, or the code fails to execute owing to a bug, we\u2019re left with the non-JavaScript experience.\n\tThey are not easily indexed as every crawler now needs to run JavaScript just to receive the content of the website.\n\n\nThese are not merely edge case issues to shirk off. The first three issues will affect some of your visitors; the fourth affects everyone, including you.\n\nWhat problem are we trying to solve?\n\nSo what can be done to address these issues? Whereas fat clients solve some inherent issues with the web, they seem to create as many problems. When attempting to resolve any issue, it\u2019s always good to try to uncover the original problem and work forwards from there. One of the best ways to frame a problem is as a user story. A user story considers the who, what and why of a need. Here\u2019s a template:\n\n\n\tAs a {who} I want {what} so that {why}\n\n\nI haven\u2019t got a specific project in mind, so let\u2019s refer to the who as user. Here\u2019s one that could explain the use of thick clients.\n\n\n\tAs a user I want the site to respond to my actions quickly so that I get immediate feedback when I do something.\n\n\nThis user story could probably apply to a great number of websites, but so could this:\n\n\n\tAs a user I want to get to the content quickly, so that I don\u2019t have to wait too long to find out what the site is all about or get the content I need.\n\n\nA better solution\n\nHow can we balance both these user needs? How can we have a website that loads fast, and also reacts fast? The solution is to have a thick server, that serves the complete document, and then a thick client, that manages subsequent actions and replaces parts of the page. What we\u2019re talking about here is simply progressive enhancement, but from the user\u2019s perspective.\n\nThe initial payload contains the entire document. At this point, all interactions would happen in a traditional way using links or form elements. Then, once we\u2019ve downloaded the JavaScript (asynchronously, after load) we can enhance the experience with JavaScript interactions. If for whatever reason our JavaScript fails to download or execute, it\u2019s no biggie \u2013 we\u2019ve already got a fully functioning website. If an API that we need isn\u2019t available in this browser, it\u2019s not a problem. We just fall back to the basic experience.\n\nThis second point, of having some minimum requirement for an enhanced experience, is often referred to as cutting the mustard, first used in this sense by the BBC News team. Essentially it\u2019s an if statement like this:\n\nif('querySelector' in document\n && 'localStorage' in window\n && 'addEventListener' in window) {\n // bootstrap the JavaScript application\n }\n\nThis code states that the browser must support the following methods before downloading and executing the JavaScript:\n\n\n\tdocument.querySelector (can it find elements by CSS selectors)\n\twindow.localStorage (can it store strings)\n\twindow.addEventListener (can it bind to events in a standards-compliant way)\n\n\nThese three properties are what the BBC News team decided to test for, as they are present in their website\u2019s JavaScript. Each website will have its own requirements. 
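What happens inside that if statement is up to you. One simple, commonly used approach is to only inject the enhanced JavaScript once the test has passed, so browsers that do not cut the mustard never download it. A hedged sketch, with a made-up file name:

if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  // This browser cuts the mustard: fetch the enhancements asynchronously
  var enhancements = document.createElement('script');
  enhancements.src = '/js/enhancements.js'; // hypothetical bundle
  enhancements.async = true;
  document.getElementsByTagName('head')[0].appendChild(enhancements);
}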
The last method, window.addEventListener, is an interesting one. Although it\u2019s simple to bind to events on IE8 and earlier, these browsers have very inconsistent support for standards. Making any JavaScript-heavy website work on IE8 and earlier is a painful exercise, and comes at a cost to all users on other browsers, as they\u2019ll download unnecessary code to patch support for IE.\n\n JavaScript API support by browser.\n\nI discovered that IE8 supports 12% of the current JavaScript APIs, while IE9 supports 16%, and IE10 51%. It seems, then, that IE10 could be the earliest version of IE that I\u2019d like to develop JavaScript for. That doesn\u2019t mean that users on browsers earlier than 10 can\u2019t use the website. On the contrary, they get the core experience, and because it\u2019s just HTML and CSS, it\u2019s much more likely to be bug-free, and could even provide a better experience than trying to run JavaScript in their browser. They receive the thin client experience.\n\nBy reducing the number of platforms that our enhanced JavaScript version supports, we can better focus our efforts on those platforms and offer an even greater experience to those users. But we can only do that if we use progressive enhancement. Otherwise our website would be completely broken for all other users.\n\nSo what we have is a thick server, capable of serving the entire website to our users, complete with all core functionality needed for our users to complete their tasks; and we have a thick client on supported browsers, which can bring an even greater experience to those users.\n\nThis is all transparent to users. They may notice that the website seems snappier on the new iPhone they received for Christmas than on the Windows 7 machine they got five years ago, but then they probably expected it to be faster on their iPhone anyway.\n\nIsn\u2019t this just more work?\n\nIt\u2019s true that making a thick server and a thick client is more work than just making one or the other. But there are some big advantages:\n\n\n\tThe website works for everyone.\n\tYou can decide when users get the enhanced experience.\n\tYou can enhance features in an iterative (or agile) manner.\n\tWhen the website breaks, it doesn\u2019t break down.\n\tThe more you practise this approach, the quicker you will become.\n\n\nThe websites of Christmas present\n\nThe best way to discover websites using this technique of progressive enhancement is to disable JavaScript and see if the website breaks. I use the Web Developer extension, which is available for Chrome and Firefox. It lets me quickly disable JavaScript.\n\n Web Developer extension.\n\n24 ways works with and without JavaScript. Try using the menu icon to view the navigation. Without JavaScript, it\u2019s a jump link to the bottom of the page, but with JavaScript, the menu slides in from the right.\n\n 24 ways navigation with JavaScript disabled.\n\n 24 ways navigation with working JavaScript.\n\nGoogle search will also work without JavaScript. You won\u2019t get instant search results or any prerendering, because those are enhancements.\n\nFor a more app-like example, try using Twitter. Without JavaScript, it still works, and looks nearly identical. But when you load JavaScript, links open in modal windows and all pages are navigated much quicker, as only the content that has changed is loaded.
You can read about how they achieved this in Twitter\u2019s blog posts Improving performance on twitter.com and Implementing pushState for twitter.com.\n\nUnfortunately Facebook doesn\u2019t use progressive enhancement, which not only means that the website doesn\u2019t work without JavaScript, but it takes longer to load. I tested it on WebPagetest and if you compare the load times of Twitter and Facebook, you\u2019ll notice that, despite putting similar content on the page, Facebook takes two and a half times longer to render the core content on the page.\n\n Facebook takes two and a half times longer to load than Twitter.\n\nWebsites of Christmas yet to come\n\nEvery project is different, and making a website that enjoys a long life, or serves a larger number of users may or may not be a high priority. But I hope I\u2019ve convinced you that it certainly is possible to look to the past and future simultaneously, and that there can be significant advantages to doing so.", "year": "2014", "author": "Josh Emerson", "author_slug": "joshemerson", "published": "2014-12-08T00:00:00+00:00", "url": "https://24ways.org/2014/websites-of-christmas-past-present-and-future/", "topic": "code"} {"rowid": 34, "title": "Collaborative Responsive Design Workflows", "contents": "Much has been written about workflow and designer-developer collaboration in web design, but many teams still struggle with this issue; either with how to adapt their internal workflow, or how to communicate the need for best practices like mobile first and progressive enhancement to their teams and clients. Christmas seems like a good time to have another look at what doesn\u2019t work between us and how we can improve matters.\n\nWhy is it so difficult?\n\nWe\u2019re still beginning to understand responsive design workflows, acknowledging the need to move away from static design tools and towards best practices in development. It\u2019s not that we don\u2019t want to change\u00a0\u2013 so why is it so difficult?\n\nChanging the way we do something that has become routine is always problematic, even with small things, and the changes today\u2019s web environment requires from web design and development teams are anything but small.\n\nAlthough developers also have a host of new skills to learn and things to consider, designers are probably the ones pushed furthest out of their comfort zones: as well as graphic design, a web designer today also needs an understanding of interaction design and ergonomics, because more and more websites are becoming tools rather than pages meant to be read like a book or magazine. In addition to that there are thousands of different devices and screen sizes on the market today that layout and interactions need to work on.\n\nThese aspects make it impossible to design in a static design tool, so beyond having to learn about new aspects of design, the designer has to either learn how to code or learn to work with a responsive design tool.\n\nWhy do it\n\nThat alone is enough to leave anyone overwhelmed, as learning a new skill takes time and slows you down in a project \u2013 and on most projects time is in short supply. Yet we have to make time or fall behind in the industry as others pitch better, interactive designs. For an efficient workflow, both designers and developers must familiarise themselves with new tools and techniques.\n\nA designer has to be able to play with ideas, make small adjustments here and there, look at the result, go back to the settings and make further adjustments, and so on. 
You can only realistically do that if you are able to play with all the elements of a design, including interactivity, accessibility and responsiveness.\n\nFiguring out the right breakpoints in a layout is one of the foremost reasons for designing in a responsive design tool. Even if you create layouts for three viewport sizes (i.e. smartphone, tablet and the most common desktop size), you\u2019d only cover around 30% of visitors and you might miss problems like line breaks and padding at other viewport sizes.\n\nAnother advantage is consistency. In static design tools changes will not be applied across all your other layouts. A developer referring back to last week\u2019s comps might work with outdated metrics. Furthermore, you cannot easily test what impact changes might have on previously designed areas. In a dynamic design tool such changes will be applied to the entire design and allow you to test things in site areas you had already finished.\n\nNo static design tool allows you to do this, and having somebody else produce a mockup from your static designs or wireframes will duplicate work and is inefficient.\n\nHow to do it\n\nWhen working in a responsive design tool rather than in the browser, there is still the question of how and when to communicate with the developer. I have found that working with Sass in combination with a visual style guide is very efficient, but it does need careful planning: fundamental metrics for padding, margins and font sizes, but also design elements like sliders, forms, tabs, buttons and navigational elements, should be defined at the beginning of a project and used consistently across the site. Working with a grid can help you develop a consistent design language across your site.\n\nCreate a visual style guide that shows what the elements look like and how they behave across different screen sizes \u2013 and when interacted with. Put all metrics on paddings, margins, breakpoints, widths, colours and so on in a text document, ideally with names that your developer can use as Sass variables in the CSS. For example:\n\n$padding-default-vertical: 1.5em;\n\nDevelopers, too, need an efficient workflow to keep code maintainable and speed up the time needed for more complex interactions with an eye on accessibility and performance. CSS preprocessors like Sass allow you to work with variables and mixins for default rules, as well as style sheet partials for different site areas or design elements. Create your own boilerplate to use for your projects and then update your variables with the information from your designer for each individual project.\n\nHow to get buy-in\n\nOne obstacle when implementing responsive design, accessibility and content strategy is the logistics of learning new skills and iterating on your workflow. Another is how to sell it. You might expect everyone on a project (including the client) to want to design and develop the best website possible: ultimately, a great site will lead to more conversions. However, we often hear that people find it difficult to convince their teammates, bosses or clients to implement best practices.\n\nWhy is that? Well, I believe a lot of it is down to how we sell it. You will have experienced this yourself: some people you trust to know what they are talking about, and others you don\u2019t. Think about why you trust that first person but don\u2019t buy what the other one is telling you. 
It is likely because person A has a self-assured, calm and assertive demeanour, while person B seems insecure and apologetic. To sell our ideas, we need to become person A! For a timid designer or developer suffering from imposter syndrome (like many of us do in this industry) that is a difficult task. So how can we become more confident in selling our expertise?\n\nWrite\n\nWe need to become experts. And I mean not just in writing great code or coming up with beautiful designs but at explaining why we\u2019re doing what we\u2019re doing. Why do you code this way or that? Why is this the best layout? Why does a website have to be accessible and responsive? Write about it. Putting your thoughts down on paper or screen is a really efficient way of getting your head around a topic and learning to make a case for something. You may even find that you come up with new ideas as you are writing, so you\u2019ll become a better designer or developer along the way.\n\nTalk\n\nThen, talk about it. Start out in front of your team, then do a lightning talk at a web event near you, then a longer talk or workshop. Having to talk about a topic is going to help you put into spoken words the argument that you\u2019ve previously put together in writing. Writing comes more easily when you\u2019re starting out but we use a different register when writing than talking and you need to learn how to speak your case. Do the talk a couple of times and after each talk make adjustments where you found it didn\u2019t work well. By this time, you are more than ready to make your case to the client. In fact, you\u2019ve been ready since that first talk in front of your colleagues ;)\n\nPitch\n\nPitches used to be based on a presentation of static layouts for for three to five typical pages and three different designs. But if we want to sell interactivity, structure, usability, accessibility and responsiveness, we need to demonstrate these things and I believe that it can only do us good. I have seen a few pitches sitting in the client\u2019s chair and static layouts are always sort of dull. What makes a website a website is the fact that I can interact with it and smooth interactions or animations add that extra sparkle.\n\nI can\u2019t claim personal experience for this one but I\u2019d be bold and go for only one design. One demo page matching the client\u2019s corporate design but not any specific page for the final site. Include design elements like navigation, photography, typefaces, article layout (with real content), sliders, tabs, accordions, buttons, forms, tables (yes, tables) \u2013 everything you would include in a style tiles document, only interactive. Demonstrate how the elements behave when clicked, hovered and touched, and how they change across different screen sizes. You may even want to demonstrate accessibility features like tabbed navigation and screen reader use.\n\nObviously, there are many approaches that will work in different situations but don\u2019t give up on finding a process that works for you and that ultimately allows you to build delightful, accessible, responsive user experiences for the web. Make time to try new tools and techniques and don\u2019t just work on them on the side \u2013 start using them on an actual project. It is only when we use a tool or process in the real world that we become true experts. 
Remember your driving lessons: once the instructor had explained how to operate the car, you were sent to practise driving on the road in actual traffic!", "year": "2014", "author": "Sibylle Weber", "author_slug": "sibylleweber", "published": "2014-12-07T00:00:00+00:00", "url": "https://24ways.org/2014/collaborative-responsive-design-workflows/", "topic": "process"} {"rowid": 40, "title": "Don\u2019t Push Through the Pain", "contents": "In 2004, I lost my web career. In a single day, it was gone. I was in too much pain to use a keyboard, a Wacom tablet (I couldn\u2019t even click the pen), or a trackball. Switching my mouse to use my left (non-dominant) hand only helped a bit; then that hand went, too. I tried all the easy-to-find equipment out there, except for expensive gizmos with foot pedals. I had tingling in my fingers\u2014which, when I was away from the computer, would rhythmically move as if some other being controlled them. I worried about Parkinson\u2019s because the movements were so dramatic. Pen on paper was painful. Finally, I discovered one day that I couldn\u2019t even turn a doorknob.\n\nThe only highlight was that I couldn\u2019t dust, scrub, or vacuum. We were forced to hire someone to come in once a week for an hour to whip through the house. You can imagine my disappointment. \n\nMy injuries had gradually slithered into my life without notice. I\u2019d occasionally have sore elbows, or my wrist might ache for a day, or my shoulders feel tight. But nothing to keyboard home about. That\u2019s the critical bit of news. One day, you\u2019re pretty fine. The next day, you don\u2019t have your job\u2014or any job that requires the use of your hands and wrists. \n\nI had to walk away from the computer for over four months\u2014and partially for several months more. That\u2019s right: no income. If I hadn\u2019t found a gifted massage therapist, the right book of stretches, the equipment I should have been using all along, and learned how to pay attention to my body\u2014even just a little bit more\u2014I quite possibly wouldn\u2019t be writing this article today. I wouldn\u2019t be writing anything, anywhere. \n\nMost of us have heard of (and even claimed to have read all of) Mihaly Csikszentmihalyi, author of Flow: The Psychology of Optimal Experience, who describes the state of flow\u2014the place our minds go when we are fully engaged and in our element. This lovely state of highly focused activity is deeply satisfying, often creative, and quite familiar to many of us on the web who just can\u2019t quit until the copy sings or the code is untangled or we get our highest score yet in Angry Birds. Our minds may enter that flow, but too often as our brains take flight, all else recedes. And we leave something very important behind. \n\nOur bodies. \n\nMy body wasn\u2019t made to make the same minute movements thousands of times a day, most days of the year, for decades, and neither was yours. The wear and tear sneaks up on you, especially if you\u2019re the obsessive perfectionist that we all pretend not to be. Oh? You\u2019re not obsessed? I wasn\u2019t like this all the time, but I remember sitting across from my husband, eating dinner, and I didn\u2019t hear a word he said. I\u2019d left my brain upstairs in my office, where it was wrestling in a death match with the box model or, God help us all, IE 5.2. I was a writer, too, and I was having my first inkling that I was a content strategist. Work was exciting. 
I could sit up late, in the flow, fingers flying at warp speed. I could sit until those wretched birds outside mocked me with their damn, cheerful \u201cHurray, it\u2019s morning!\u201d songs. Suddenly, while, say, washing dishes, the one magical phrase that captured the essence of a voice or idea would pop up, and I would have mowed down small animals and toddlers to get to my computer and hammer out that website or article, to capture that thought before it escaped. Note my use of the word hammer. Sound at all familiar? \n\nBut where was my body during my work? Jaw jutting forward to see the screen, feet oddly positioned\u2014and then left in place like chunks of marble\u2014back unsupported, fingers pounding the keys, wrists and arms permanently twisted in unnatural angles that we thought were natural. And clicking. Clicking, clicking, clicking that mouse. Thumbing tiny keyboards on phones. A lethal little gesture for tiny little tendons. Though I was fine from, say 1997 to 2004, by the end of 2004 this behavior culminated in disaster. I had repetitive stress injuries, aka repetitive motion injuries. As the Apple site says, \u201cA brief exposure to these conditions would not cause harm. But a prolonged exposure may, in some people, result in reduced ability to function.\u201d I\u2019ll say. \n\nI frantically turned to people on lists and forums. \u201cTry a track ball.\u201d Already did that. \u201cTry a tablet.\u201d Worse. One person wrote, \u201cI still come here once in a while and can type a couple sentences, but I\u2019ve permanently got thoracic outlet syndrome and I\u2019ll never work again.\u201d Oh, beauteous web, oh, long-distance friends, farewell. \n\nThe Wrist Bone\u2019s Connected to the Brain Bone\n\nThat variation on the old song tells part of the story. Most people (and many of their physicians) believe that tingling fingers and aching wrists MUST be carpel tunnel syndrome. Nope. If your neck juts forward, it tenses and stays tense the entire time you work in that position. Remember how your muscles felt after holding a landline phone with your neck tilted to one side for a long client meeting? Regrettable. Tensing your shoulders because your chair\u2019s not designed properly puts you at risk for thoracic outlet syndrome, a career-killer if ever there was one. The nerves and tendons in your neck and shoulder refer down your arms, and muscles swell around nerves, causing pain and dysfunction. Your elbows have a tendon that is especially vulnerable to repetitive movements (think tennis elbow). Your wrists are performing something akin to a circus act with one thousand shows a day. \n\nSo, all the fine tendons and ligaments in your fingers have problems that may not start at your wrists at all. Though some people truly do have carpal tunnel syndrome, my finger and wrist problems weren\u2019t solved by heavily massaging my fingers (though, that was helpful, too) or my wrists. They were fixed by work on my neck, upper back, shoulders, arms, and elbows. This explains why many people have surgery for carpal tunnel syndrome and just months later say, \u201cWhat?! How can I possibly have it again? I had an operation!\u201d Well, fellow buckaroo, you may never have had carpel tunnel syndrome. You may have had\u2014or perhaps will have\u2014one long disaster area from your neck to your fingertips. \n\nHow to Crawl Back \n\nBefore trying extreme measures, you may be able to function again even if you feel hopeless. 
I managed to heal, and so have others, but I\u2019ll always be at risk. \n\nAs Jen Simmons, of The Web Ahead podcast and other projects told me, \u201cIt took a long time to injure myself. It took a long time to get back to where I was. My right arm between my elbow and wrist would start aching intermittently. Eventually, my arm even ached at night. I started each day with yesterday\u2019s pain.\u201d Simple measures, used consistently, helped her back. \n\n1. Massage therapy\n\nI don\u2019t remember what the rest of the world is like, but in Portland, Oregon, we have more than one massage therapy college. (Of course we do.) I saw a former teacher at the most respected school. This is not your \u201cIt was all so soothing. Why, I fell asleep!\u201d massage. This is \u201cHoly crap, he\u2019s grinding his elbow into my armpit!\u201c massage therapy, with the emphasis on therapy. I owe him everything. Make sure you have someone who really knows what they\u2019re doing. Get many referrals. Try a question, \u201cDoes my psoas muscle affect my back?\u201d If they can\u2019t answer it, flee. Regularly see the one you choose and after a while, depending on how injured you are, you may be able to taper off. \n\n2. Change your equipment\n\nYou may need to be hands-on with several pieces of equipment before you find the ones that don\u2019t cause more pain. Many companies have restocking fees, charges to ship the equipment you want to return, and other retail atrocities. Always be sure to ask what the return policies are at any company before purchasing.\n\nMice \n\nYou may have more success than I did with equipment such as the Wacom tablet. Mine came with a pen, and it hurt to repetitively click it. Trackballs are another option but, for many, they are better at prevention than recovery. But let\u2019s get to the really effective stuff. One of the biggest sources of pain is using your mouse. One major reason is that your hand and wrist are in a perpetually unnatural position and you\u2019re also moving your arm quite a bit. Each time you move the mouse, it is placing stress on your neck, shoulders and arms, because you need to lift them slightly in order to move the mouse and you need to angle your wrist. You may also be too injured to use the trackpad all the time, and this mouse, the vertical mouse is a dandy preventative measure, too. Shaking up your patterns is a wise move. I have long fingers, not especially thin, yet the small size works best for me. (They have larger choices available.) What?! A sideways mouse? Yep. All the weight of your hand will be resting on it in the handshake position. Your forearms aren\u2019t constantly twisting over hill and dale. You aren\u2019t using any muscles in your wrist or hand. They are relaxing. You\u2019ll adapt in a day, and oh, oh, what a relief it is. \n\nKeyboards\n\nI really liked doing business with the people at Kinesis-Ergo. (I\u2019m not affiliated with them in any way.) They have the vertical mouse and a number of keyboards. The one that felt the most natural to me, and, once again, it only takes a day to adapt, is the Freestyle2 for the Mac. They have several options. I kept the keyboard halves attached to each other at first, and then spread them apart a little more. I recommend choosing one that slants and can separate. You can adjust the angle. For a little extra, they\u2019ll make sure it\u2019s all set up and ready to go for you. I\u2019m guessing that some Googling will find you similar equipment, wherever you live. 
\n\nWarning: if you use the ergonomic keyboards, you may have fewer USB ports. The laptop will be too far away to see unless you find a satisfactory setup using a stand. This is the perfect excuse for purchasing a humongous display. \n\nYou may not look cool while jetting coast to coast in your skinny jeans and what appears to be the old-time orthopedic shoe version of computing gear. But once you have rested and used many of these suggestions consistently, you may be able to use your laptop or other device in all its lovely sleekness during the trip. \n\nOther doohickies\n\nThe Kinesis site and The Human Solution have a wide selection of ergonomic products: standing desks, ergonomically correct chairs, and, yes, even things with foot pedals. Explore! \n\n3. Stop clicking, at least for a while\n\nUse keyboard shortcuts, but use them slowly. This is not the time to show off your skillz. You\u2019ll be sort of like a recovering alcoholic, in that you\u2019ll be a recovering repetitive stress survivor for the rest of your life, once you really injure yourself. Always be vigilant. There\u2019s also a bit of software sold by The Human Solution and other places, and it was my salvation. It\u2019s called the McNib for Macs, and the Nib for PCs. (I\u2019ve only used the McNib.) It\u2019s for click-free mousing. I found it tricky to use when writing markup and code, but you may become quite adept at it. A little rectangle pops up on your screen, you mouse over it and choose, let\u2019s say, \u201cDouble-click.\u201d Until you change that choice, if you mouse over a link or anything else, it will double-click it for you. All you do is glide your mouse around. Awkward for a day or two, but you\u2019ll pick it up quickly. Though you can use it all day for work, even if you just use this for browsing LOLcats or Gary Vaynerchuk\u2019s YouTube videos, it will help you by giving your fingers a sweet break. \n\nBut here\u2019s the sad news. The developer who invented this died a few years ago. (Yes, I used to speak to him on the phone.) While it is for sale, it isn\u2019t compatible with Mac OS X Lion or anything subsequent. PowerPC strikes again. His site is still up. Demos for use with older software can be downloaded free at his old site, or at The Human Solution. Perhaps an enterprising developer can invent something that would provide this help, without interfering with patents. Rumor has it among ergonomic retailers (yes, I\u2019m like a police dog sniffing my way to a criminal once I head down a trail) that his company was purchased by a company in China, with no update in sight. \n\n4. Use built-in features\n\nThat little microphone icon that comes up alongside the keyboard on your iPhone allows you to speak your message instead of incessantly thumbing it. I believe it works in any program that uses the keyboard. It\u2019s not Siri. She\u2019s for other things, like having a personal relationship with an inanimate object. Apple even has a good section on ergonomics. You think I\u2019m intense about this subject? To improve your repetitive stress, Apple doesn\u2019t want you to use oral contraceptives, alcohol, or tobacco, to which I say, \u201cHave as much sex, bacon, and chocolate as possible to make up for it.\u201d \n\nApple\u2019s info even has illustrations of things like a faucet dripping into what is labeled a bucket full of \u201cTRAUMA.\u201d Sounds like upgrading to Yosemite, but I digress. \n\n5. 
Take breaks \n\nIf it\u2019s a game or other non-essential activity, take a break for a month. Fine, now that I\u2019ve called games non-essential, I suppose you\u2019ll all unfollow me on Twitter. \n\n6. Whether you are sore or not, do stretches throughout the day \n\nThis is a big one. Really big. The best book on the subject of repetitive stress injuries is Conquering Carpal Tunnel Syndrome and Other Repetitive Strain Injuries: A Self-Care Program by Sharon J. Butler. Don\u2019t worry, most of it is illustrations. Pretend it\u2019s a graphic novel. \n\nI\u2019m notorious for never reading instructions, and who on earth reads the introduction of a book, unless they wrote it? I wrote a book a long time ago, and I bet my house, husband, and life savings that my own parents never read the intro. Well, I did read the intro to this book, and you should, too. Stretching correctly, in a way that doesn\u2019t further hurt you, that keeps you flexible if you aren\u2019t injured, that actually heals you, calls for precision. Read and you\u2019ll see. The key is to stretch just until you start to feel the stretch, even if that\u2019s merely a tiny movement. Don\u2019t force anything past that point. Kindly nurse yourself back to health, or nurture your still-healthy body by stretching. Over the following days, weeks, months, you\u2019ll be moving well past that initial stretch point. \n\nThe book is brimming with examples. You only have to pick a few stretches, if this is too much to handle. Do it every single day. I can tell you some of the best ones for me, but it depends on the person. You\u2019ll also discover in Butler\u2019s book that areas that you think are the problem are sometimes actually adjacent to the muscle or tendon that is the source of the problem. Add a few stretches or two for that area, too. \n\nBut please follow the instructions in the introduction. If you overdo it, or perform some other crazy-ass hijinks, as I would be tempted to do, I am not responsible for your outcome. I give you fair warning that I am not a healthcare provider. I\u2019m just telling you as a friend, an untrained one, at that, who has been through this experience. \n\n7. Follow good habits\n\nDevelop habits like drinking lots of water (which helps with lactic acid buildup in muscles), looking away from the computer for twenty seconds every twenty to thirty minutes, eating right, and probably doing everything else your mother told you to do. Maybe this is a good time to bring up flossing your teeth, and going outside to play instead of watching TV. As your mom would say, \u201cIt\u2019s a beautiful day outside, what are you kids doing in here?\u201d \n\n8. Speak instead of writing, if you can \n\nAmber Simmons, who is very smart and funny, once tweeted in front of the whole world that, \u201c@carywood is a Skype whore.\u201d I was always asking people on Twitter if we could Skype instead of using iChat or exchanging emails. (I prefer the audio version so I don\u2019t have to, you know, do something drastic like comb my hair.) Keyboarding is tough on hands, whether you notice it or not at the time, and when doing rapid-fire back-and-forthing with people, you tend to speed up your typing and not take any breaks. This is a hand-killer. Voice chats have made such a difference for me that I am still a rabid Skype whore. Wait, did I say that out loud? \n\nSpeak your text or emails, using Dragon Dictate or other software. 
In about 2005, accessibility and user experience design expert, Derek Featherstone, in Canada, and I, at home, chatted over the internet, each of us using a different voice-to-text program. The programs made so many mistakes communicating with each other that we began that sort of endless, tearful laughing that makes you think someone may need to call an ambulance. This type of software has improved quite a bit over the years, thank goodness. Lack of accessibility of any kind isn\u2019t funny to Derek or me or to anyone who can\u2019t use the web without pain. \n\n9. Watch your position \n\nFor example, if you lift up your arms to use the computer, or stare down at your laptop, you\u2019ll need to rearrange your equipment. The internet has a lot of information about ideal ergonomic work areas. Please use a keyboard drawer. Be sure to measure the height carefully so that even a tented keyboard, like the one I recommend, will fit. I also recommend getting the version of the Freestyle with palm supports. Just these two measures did much to help both Jen Simmons and me. \n\n10. If you need to take anti-inflammatories, stop working\n\nIf you are all drugged up on ibuprofen, and pounding and clicking like mad, your body will not know when you are tired or injuring yourself. I don\u2019t recommend taking these while using your computing devices. Perhaps just take it at night, though I\u2019m not a fan of that category of medications. Check with your healthcare provider. At least ibuprofen is an anti-inflammatory, which may help you. In contrast, acetaminophen (paracetamol) only makes your body think it\u2019s not in pain. Ice is great, as is switching back and forth between ice and heat. But again, if you need ice and ibuprofen you really need to take a major break. \n\n11. Don\u2019t forget the rest of your body\n\nI\u2019ve zeroed in on my personal area of knowledge and experience, but you may be setting yourself up for problems in other areas of your body. There\u2019s what is known to bad writers as \u201ca veritable cornucopia\u201d of information on the web about how to help the rest of your body. A wee bit of research on the web and you\u2019ll discover simple exercises and stretches for the rest of your potential catastrophic areas: your upper back, your lower back, your legs, ankles, and eyes. Do gentle stretches, three or four times a day, rather than powering your way through. Ease into new equipment such as standing desks. Stretch those newly challenged areas until your body adapts. Pay attention to your body, even though I too often forget mine. \n\n12. Remember the children\n\nKids are using equipment to play highly addictive games or to explore amazing software, and if these call for repetitive motions, children are being set up for future injuries. They\u2019ll grab hold of something, as parents out there know, and play it 3,742 times. That afternoon. Perhaps by the time they are adults, everything will just be holograms and mind-reading, but adult fingers and hands are used for most things in life, not just computing devices and phones with keyboards sized for baby chipmunks. \n\nI\u2019ll be watching you\n\nQuickly now, while I (possibly) have your attention. Don\u2019t move a muscle. Is your neck tense? Are you unconsciously lifting your shoulders up? How long since you stopped staring at the screen? How bright is your screen? Are you slumping (c\u2019mon now, \u2018fess up) and inviting sciatica problems? 
Do you have to turn your hands at an angle relative to your wrist in order to type? Uh-oh. That\u2019s a bad one. Your hands, wrists, and forearms should be one straight line while keyboarding. Future you is begging you to change your ways. Don\u2019t let your #ThrowbackThursday in 2020 say, \u201cHere\u2019s a photo from when I used to be able to do so many wonderful things that I can\u2019t do now.\u201d And, whatever you do, don\u2019t try for even a nanosecond to push through the pain, or the next thing you know, you\u2019ll be an unpaid extra in The Expendables 7.", "year": "2014", "author": "Carolyn Wood", "author_slug": "carolynwood", "published": "2014-12-06T00:00:00+00:00", "url": "https://24ways.org/2014/dont-push-through-the-pain/", "topic": "business"} {"rowid": 41, "title": "What Is Vagrant and Why Should I Care?", "contents": "If you run a web server, a database server and your scripting language(s) of choice on your main machine and you have not yet switched to using virtualisation in your workflow then this essay may be of some value to you.\n\nI know you exist because I bump into you daily: freelancers coming in to work on our projects; internet friends complaining about reinstalling a development environment because of an operating system upgrade; fellow agency owners who struggle to brief external help when getting a particular project up and running; or even hardcore back-end developers who \u201cdon\u2019t do ops\u201d and prefer to run their development stack of choice locally.\n\nThere are many perfectly reasonable arguments as to why you may not have already made the switch, from being simply too busy, all the way through to a distrust of the new. I\u2019ll admit that there are many new technologies or workflows that I hear of daily and instantly disregard because I have tool overload, that feeling I get when I hear about a new shiny thing and think \u201cWell, what I do now works \u2013 I\u2019ll leave it for others to play with.\u201d If that\u2019s you when it comes to Vagrant then I hope you\u2019ll hear me out. The business case is compelling enough for you to make that switch; as a bonus it\u2019s also really easy to get going.\n\nIn this article we\u2019ll start off by going through the high level, the tools available and how it all fits together. Then we\u2019ll touch on the justification for making the switch, providing a few use cases that might resonate with you. Finally, I\u2019ll provide a very simple example that you can follow to get yourself up and running.\n\nWhat?\n\nYou already know what virtualisation is. You use the ability to run an operating system within another operating system every day. Whether that\u2019s Parallels or VMware on your laptop or similar server-based tools that drive the \u2018cloud\u2019, squeezing lots of machines on to physical hardware and making it really easy to copy servers and even clusters of servers from one place to another. It\u2019s an amazing technology which has changed the face of the internet over the past fifteen years.\n\nSimply put, Vagrant makes it really easy to work with virtual machines. According to the Vagrant docs:\n\n\n\tIf you\u2019re a designer, Vagrant will automatically set everything up that is required for that web app in order for you to focus on doing what you do best: design. Once a developer configures Vagrant, you don\u2019t need to worry about how to get that app running ever again. No more bothering other developers to help you fix your environment so you can test designs. 
Just check out the code, vagrant up, and start designing.\n\n\nWhile I\u2019m not sure I agree with the implication that all designers would get others to do the configuring, I think you\u2019ll agree that the \u201cJust check out the code\u2026 and start designing\u201d premise is very compelling.\n\nYou don\u2019t need Vagrant to develop your web applications on virtual machines. All you need is a virtualisation software package, something like VMware Workstation or VirtualBox, and some code. Download the half-gigabyte operating system image that you want and install it. Then download and configure the stack you\u2019ll be working with: let\u2019s say Apache, MySQL, PHP. Then install some libraries, CuRL and ImageMagick maybe, and finally configure the ability to easily copy files from your machine to the new virtual one, something like Samba, or install an FTP server. Once this is all done, copy the code over, import the database, configure Apache\u2019s virtual host, restart and cross your fingers.\n\nIf you\u2019re a bit weird like me then the above is pretty easy to do and secretly quite fun. Indeed, the amount of traffic to one of my more popular blog posts proves that a lot of people have been building themselves development servers from scratch for some time (or at least trying to anyway), whether that\u2019s on virtual or physical hardware.\n\nOr you could use Vagrant. It allows you, or someone else, to specify in plain text how the machine\u2019s virtual hardware should be configured and what should be installed on it. It also makes it insanely easy to get the code on the server. You check out your project, type vagrant up and start work.\n\nWhy?\n\nIt\u2019s worth labouring the point that Vagrant makes it really easy; I mean look-no-tangle-of-wires-or-using-vim-and-loads-of-annoying-command-line-stuff easy to run a development environment.\n\nThat\u2019s all well and good, I hear you say, but there\u2019s a steep learning curve, an overhead to switch. You\u2019re busy and this all sounds great but you need to get on; you\u2019ve got a career to build or a business to run and you don\u2019t have time to learn new stuff right now.\n\nIn short, what\u2019s the business case?\n\nThe business case involves saved time, a very low barrier to entry and the ability to give the exact same environment to somebody else.\n\nGetting your first development virtual machine running will take minutes, not counting download time. Seriously, use pre-built Vagrant files and provisioners (we\u2019ll touch on this below) and you can start developing immediately.\n\nOnce you\u2019ve finished developing you can check in your changes, ask a colleague or freelancer to check them out, and then they run the code on the exact same machine \u2013 even if they are on the other side of the world and regardless of whether they are on Windows, Linux or Apple OS X.\n\nThe configuration to build the machine isn\u2019t a huge binary disk image that\u2019ll take ages to download from Git; it\u2019s two small text files that can be version controlled too, so you can see any changes made to the config and roll back if needed.\n\nNo more \u2018It works for me\u2019 reports; no \u2018Oh, I was using PHP 5.3.3, not PHP 5.3.11\u2019 \u2013 you\u2019re both working on exact same copies of the development environment. 
With a tested and verified provisioning file you\u2019ll have the confidence that when you brief your next freelancer in to your team there won\u2019t be that painful to and fro of getting the system up and running, where you\u2019re on a Skype call and they are uttering the immortal words, \u2018It still doesn\u2019t work\u2019. You know it works because you can run it too.\n\nThis portability becomes even more important when you\u2019re working on larger sites and systems. Need a load balancer? Multiple front-end servers and a clustered database back-end? No problem. Add each server into the same Vagrant file and a single command will build all of them. As you\u2019ll know if you work on larger, business critical systems, keeping the operating systems in sync is a real problem: one server with a slightly different library causing sporadic and hard to trace issues is a genuine time black hole. Well, the good news is that you can use the same provisioning files to keep test and production machines in sync using your current build workflow.\n\nLet\u2019s also not forget the most simple use case: a single developer with multiple websites running on a single machine. If that\u2019s you and you switch to using Vagrant-managed virtual machines then the next time you upgrade your operating system or do a fresh install there\u2019s no chance that things will all stop working. The server config is all tucked away in version control with your code. Just pull it down and carry on coding.\n\nOK, got it. Show me already\n\nIf you want to try this out you\u2019ll need to install the latest VirtualBox and Vagrant for your platform. If you already have VMware Workstation or another supported virtualisation package installed you can use that instead but you may need to tweak my Vagrant file below. Depending on your operating system, a reboot might also be wise.\n\nNote: the commands below were executed on my MacBook, but should also work on Windows and Linux. If you\u2019re using Windows make sure to run the command prompt as Administrator or it\u2019ll fall over when trying to update the hosts file.\n\nAs a quick sanity check let\u2019s just make sure that we have the vagrant command in our path, so fire up a terminal and check the version number:\n\n$ vagrant -v\nVagrant 1.6.5\n\nWe\u2019ve one final thing to install and that\u2019s the vagrant-hostsupdater plugin. Once again, in your terminal:\n\n$ vagrant plugin install vagrant-hostsupdater\nInstalling the 'vagrant-hostsupdater' plugin. This can take a few minutes...\nInstalled the plugin 'vagrant-hostsupdater (0.0.11)'!\n\nHopefully that wasn\u2019t too painful for you.\n\nThere are two things that you need to manage a virtual machine with Vagrant:\n\n\n\ta Vagrant file: this tells Vagrant what hardware to spin up\n\ta provisioning file: this tells Vagrant what to do on the machine\n\n\nTo save you copying and pasting I\u2019ve supplied you with a simple example (ZIP) containing both of these. Unzip it somewhere sensible and in your terminal make sure you are inside the Vagrant folder:\n\n$ cd where/you/placed/it/24ways\n\n$ ls -l\n-rw-r--r--@ 1 bealers staff 11055 9 Nov 09:16 bealers-24ways.md\n-rw-r--r--@ 1 bealers staff 118152 9 Nov 10:08 it-works.png\ndrwxr-xr-x 5 bealers staff 170 8 Nov 22:54 vagrant\n\n$ cd vagrant/\n\n$ ls -l\n-rw-r--r--@ 1 bealers staff 1661 8 Nov 21:50 Vagrantfile\n-rwxr-xr-x@ 1 bealers staff 3841 9 Nov 08:00 provision.sh\n\nThe Vagrant file tells Vagrant how to configure the virtual hardware of your development machine. 
Skipping over some of the finer details, here\u2019s what\u2019s in that Vagrant file:\n\nwww.vm.box = \"ubuntu/trusty64\" \n\nUse Ubuntu 14.04 for the VM\u2019s OS. Vagrant will only download this once. If another project uses the same OS, Vagrant will use a cached version.\n\nwww.vm.hostname = \"bealers-24ways.dev\" \n\nSet the machine\u2019s hostname. If, like us, you\u2019re using the vagrant-hostsupdater plugin, this will also get added to your hosts file, pointing to the virtual machine\u2019s IP address.\n\nwww.vm.provider :virtualbox do |vb|\n vb.customize [\"modifyvm\", :id, \"--cpus\", \"2\" ]\nend\n\nHere\u2019s an example of configuring the virtual machine\u2019s hardware on the fly. In this case we want two virtual processors.\n\nNote: this is specific for the VirtualBox provider, but you could also have a section for VMware or other supported virtualisation software.\n\nwww.vm.network \"private_network\", ip: \"192.168.13.37\" \n\nThis specifies that we want a private networking link between your computer and the virtual machine. It\u2019s probably best to use a reserved private subnet like 192.168.0.0/16 or 10.0.0.0/8.\n\nwww.vm.synced_folder \"../\", \"/var/www/24ways\",\n owner: \"www-data\", group: \"www-data\"\n\nA particularly handy bit of Vagrant magic. This maps your local 24ways parent folder to /var/www/24ways on the virtual machine. This means the virtual machine already has direct access to your code and so do you. There\u2019s no messy copying or synchronisation \u2013 just edit your files and immediately run them on the server.\n\nwww.vm.provision :shell, :path => \"provision.sh\"\n\nThis is where we specify the provisioner, the script that will be executed on the machine.\n\nIf you open up the provisioner you\u2019ll see it\u2019s a bash script that does things like:\n\n\n\tinstall Apache, PHP, MySQL and related libraries\n\tconfigure the libraries: set permissions, enable logging\n\tcreate a database and grant some access rights\n\tset up some code for us to develop on; in this case, fire up a vanilla WordPress installation\n\n\nTo get this all up and running you simply need to run Vagrant from within the vagrant folder:\n\n$ vagrant up\n\nYou should now get a Matrix-like stream of stuff shooting up the screen. If this is the first time Vagrant has used this particular operating system image \u2013 remember we\u2019ve specified the latest version of Ubuntu \u2013 it\u2019ll download the disc image and cache it for future reuse. Then all the packages are downloaded and installed, and finally all our configuration steps occur, including the download and configuration of WordPress.\n\nHalfway through proceedings it\u2019s likely that the process will halt at a prompt something like this:\n\n==> www: adding to (/etc/hosts) : 192.168.13.37 bealers-24ways.dev # VAGRANT: 2dbfbced1b1e79d2a0942728a0a57ece (www) / 899bd80d-4251-4f6f-91a0-d30f2d9918cc\nPassword:\n\nYou need to enter your password to give vagrant sudo rights to add the IP address and hostname mapping to your local hosts file.\n\nOnce finished, fire up your browser and go to http://bealers-24ways.dev. You should see a default WordPress installation. 
The username for wp-admin is admin and the password is 24ways.\n\n\n\nIf you take a look at your local filesystem the 24ways folder should now look like:\n\n$ cd ../\n\n$ ls -l\n\n-rw-r--r--@ 1 bealers staff 13074 9 Nov 10:14 bealers-24ways.md\ndrwxr-xr-x 21 bealers staff 714 9 Nov 10:06 code\ndrwxr-xr-x 3 bealers staff 102 9 Nov 10:06 etc\n-rw-r--r--@ 1 bealers staff 118152 9 Nov 10:08 it-works.png\ndrwxr-xr-x 5 bealers staff 170 9 Nov 10:03 vagrant\n-rwxr-xr-x 1 bealers staff 1315849 9 Nov 10:06 wp-cli\n\n$ cd vagrant/\n\n$ ls -l\n-rw-r--r--@ 1 bealers staff 1661 9 Nov 09:41 Vagrantfile\n-rwxr-xr-x@ 1 bealers staff 3836 9 Nov 10:06 provision.sh\n\nThe code folder contains all the WordPress files. You can edit these directly and refresh that page to see your changes instantly.\n\nStaying in the vagrant folder, we\u2019ll now SSH to the machine and have a quick poke around.\n\n$ vagrant ssh\nWelcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-39-generic x86_64)\n\n* Documentation: https://help.ubuntu.com/\n\nSystem information as of Sun Nov 9 10:03:38 UTC 2014\n\nSystem load: 1.35 Processes: 102\nUsage of /: 2.7% of 39.34GB Users logged in: 0\nMemory usage: 16% IP address for eth0: 10.0.2.15\nSwap usage: 0%\n\nGraph this data and manage this system at:\nhttps://landscape.canonical.com/\n\nGet cloud support with Ubuntu Advantage Cloud Guest:\nhttp://www.ubuntu.com/business/services/cloud\n\n0 packages can be updated.\n0 updates are security updates.\n\nvagrant@bealers-24ways:~$\n\nYou\u2019re now logged in as the Vagrant user; if you want to become root this is easy:\n\nvagrant@bealers-24ways:~$ sudo su -\nroot@bealers-24ways:~# \n\nOr you could become the webserver user, which is a good idea if you\u2019re editing the web files directly on the server:\n\nroot@bealers-24ways:~# su - www-data\nwww-data@bealers-24ways:~$\n\nwww-data\u2019s home directory is /var/www so we should be able to see our magically mapped files:\n\nwww-data@bealers-24ways:~$ ls -l\ntotal 4\ndrwxr-xr-x 1 www-data www-data 306 Nov 9 10:09 24ways\ndrwxr-xr-x 2 root root 4096 Nov 9 10:05 html\n\nwww-data@bealers-24ways:~$ cd 24ways/\n\nwww-data@bealers-24ways:~/24ways$ ls -l\ntotal 1420\n-rw-r--r-- 1 www-data www-data 13682 Nov 9 10:19 bealers-24ways.md\ndrwxr-xr-x 1 www-data www-data 714 Nov 9 10:06 code\ndrwxr-xr-x 1 www-data www-data 102 Nov 9 10:06 etc\n-rw-r--r-- 1 www-data www-data 118152 Nov 9 10:08 it-works.png\ndrwxr-xr-x 1 www-data www-data 170 Nov 9 10:03 vagrant\n-rwxr-xr-x 1 www-data www-data 1315849 Nov 9 10:06 wp-cli\n\nWe can also see some of our bespoke configurations:\n\nwww-data@bealers-24ways:~/24ways$ cat /etc/php5/mods-available/siftware.ini \nupload_max_filesize = 15M\nlog_errors = On\ndisplay_errors = On\ndisplay_startup_errors = On\nerror_log = /var/log/apache2/php.log\nmemory_limit = 1024M\ndate.timezone = Europe/London\n\nwww-data@bealers-24ways:~/24ways$ ls -l /etc/apache2/sites-enabled/\ntotal 0\nlrwxrwxrwx 1 root root 43 Nov 9 10:06 bealers-24ways.dev.conf -> /var/www/24ways/etc/bealers-24ways.dev.conf\n\nIf you want to leave the server, simply type Ctrl+D a few times and you\u2019ll be back where you started.\n\nwww-data@bealers-24ways:~/24ways$ logout\nroot@bealers-24ways:~# logout\nvagrant@bealers-24ways:~$ logout\nConnection to 127.0.0.1 closed.\n$ \n\nYou can now halt the machine:\n\n$ vagrant halt\n==> www: Attempting graceful shutdown of VM...\n==> www: Removing hosts\n\nBonus level\n\nThe example I\u2019ve provided isn\u2019t very realistic. 
In the real world I\u2019d expect the Vagrant file and provisioner to be included with the project and for it not to create the directory structure, which should already exist in your project. The same goes for the Apache VirtualHost file. You\u2019ll also probably have a default SQL script to populate the database.\n\nAs you work with Vagrant you might start to find bash provisioning to be quite limiting, especially if you are working on larger projects which use more than one server. In that case I would suggest you take a look at Ansible, Puppet or Chef. We use Ansible because we like YAML but they all do the same sort of thing. The main benefit is being able to use the same Vagrant provisioning scripts to also provision test, staging and production environments using your build workflows.\n\nHaving to supply a password so the hosts file can be updated gets annoying very quickly, so you can give Vagrant sudo rights:\n\n$ sudo visudo\n\nAdd these lines to the bottom (Shift+G then i then Ctrl+V then Esc then :wq)\n\nCmnd_Alias VAGRANT_HOSTS_ADD = /bin/sh -c echo \"*\" >> /etc/hosts\nCmnd_Alias VAGRANT_HOSTS_REMOVE = /usr/bin/sed -i -e /*/ d /etc/hosts\n%staff ALL=(root) NOPASSWD: VAGRANT_HOSTS_ADD, VAGRANT_HOSTS_REMOVE\n\nVagrant caches the operating system images that you download but it\u2019ll download the installed software packages every time. You can get around this by using a plugin like vagrant-cachier or, if you\u2019re really keen, maintain local Apt repositories (or whatever the equivalent is for your server architecture).\n\nAt some point you might start getting a large number of virtual machines running on your poor hardware all at the same time, especially if you\u2019re switching between projects a lot and each of those projects uses lots of servers. We\u2019re just getting to that stage now, so are considering a medium-term move to a containerised option like Docker, which seems to be maturing now.\n\nIf you are keen not to use any command line tools whatsoever and you\u2019re on OS X then you could check out Vagrant Manager as it looks quite shiny.\n\nFinally, there are a huge number of resources to give you pre-built Vagrant machines from the likes of VVV for WordPress, something similar for Perch, PuPHPet for generating various configurations, and a long list of pre-built operating systems at VagrantBox.es.\n\nWrapping up\n\nHopefully you can now see why it might be worthwhile to add Vagrant to your development workflow, whether you\u2019re an agency drafting in freelancers or a one-person band running lots of sites on your laptop using MAMP or something similar.\n\nVagrant makes it easy to launch exact copies of the same machine in a repeatable and version-controlled way. The learning curve isn\u2019t too steep and, once configured, you can forget about it and focus on getting your work done.", "year": "2014", "author": "Darren Beale", "author_slug": "darrenbeale", "published": "2014-12-05T00:00:00+00:00", "url": "https://24ways.org/2014/what-is-vagrant-and-why-should-i-care/", "topic": "process"} {"rowid": 47, "title": "Developing Robust Deployment Procedures", "contents": "Once you have developed your site, how do you make it live on your web hosting? For many years the answer was to log on to your server and upload the files via FTP. Over time most hosts and FTP clients began to support SFTP, ensuring your files were transmitted over a secure connection. The process of deploying a site, however, remained the same.\n\nThere are issues with deploying a site in this way. 
You are essentially transferring files one by one to the server without any real management of that transfer. If the transfer fails for some reason, you may end up with a site that is only half updated. It can then be really difficult to work out what hasn\u2019t been replaced or added, especially where you are updating an existing site. If you are updating some third-party software your update may include files that should be removed, but that may not be obvious to you and you risk leaving outdated files littering your file system. Updating using (S)FTP is a fragile process that leaves you open to problems caused by both connectivity and human error. Is there a better way to do this?\n\nYou\u2019ll be glad to know that there is. A modern professional deployment workflow should have you moving away from fragile manual file transfers to deployments linked to code committed into source control.\n\nThe benefits of good practice\n\nYou may never have experienced any major issues while uploading files over FTP, and good FTP clients can help. However, there are other benefits to moving to modern deployment practices.\n\nNo surprises when you launch\n\nIf you are deploying in the way I suggest in this article you should have no surprises when you launch because the code you committed from your local environment should be the same code you deploy \u2013 and to staging if you have a staging server. A missing vital file won\u2019t cause things to start throwing errors on updating the live site.\n\nBeing able to work collaboratively\n\nSource control and good deployment practice makes working with your clients and other developers easy. Deploying first to a staging server means you can show your client updates and then push them live. If you subcontract some part of the work, you can give your subcontractor the ability to deploy to staging, leaving you with the final push to launch, once you know you are happy with the work.\n\nHaving a proper backup of site files with access to them from anywhere\n\nThe process I will outline requires the use of hosted, external source control. This gives you a backup of your latest commit and the ability to clone those files and start working on them from any machine, wherever you are.\n\nBeing able to jump back into a site quickly when the client wants a few changes\n\nWhen doing client work it is common for some work to be handed over, then several months might go by without you needing to update the site. If you don\u2019t have a good process in place, just getting back to work on it may take several hours for what could be only a few hours of work in itself. A solid method for getting your local copy up to date and deploying your changes live can cut that set-up time down to a few minutes.\n\nThe tool chain\n\nIn the rest of this article I assume that your current practice is to deploy your files over (S)FTP, using an FTP client. You would like to move to a more robust method of deployment, but without blowing apart your workflow and spending all Christmas trying to put it back together again. Therefore I\u2019m selecting the most straightforward tools to get you from A to B.\n\nSource control\n\nPerhaps you already use some kind of source control for your sites. Today that is likely to be Git but you might also use Subversion or Mercurial. If you are not using any source control at all then I would suggest you choose Git, and that is what I will be working with in this article.\n\nWhen you work with Git, you always have a local repository. 
This is where your changes are committed. You also have the option to push those changes to a remote repository; for example, GitHub. You may well have come across GitHub as somewhere you can go to download open source code. However, you can also set up private repositories for sites whose code you don\u2019t want to make publicly accessible.\n\nA hosted Git repository gives you somewhere to push your commits to and deploy from, so it\u2019s a crucial part of our tool chain.\n\nA deployment service\n\nOnce you have your files pushed to a remote repository, you then need a way to deploy them to your staging environment and live server. This is the job of a deployment service.\n\nThis service will connect securely to your hosting, and, either automatically or at the click of a button, transfer files from your Git commit to the hosting server. If files need removing, the service should do this too, so you can be absolutely sure that your various environments are the same.\n\nTools to choose from\n\nWhat follows are not exhaustive lists, but any of these should allow you to deploy your sites without FTP.\n\nHosted Git repositories\n\n\n\tGitHub\n\tBeanstalk\n\tBitbucket\n\n\nStandalone deployment tools\n\n\n\tDeploy\n\tdploy.io\n\tFTPloy\n\n\nI\u2019ve listed Beanstalk as a hosted Git repository, though it also includes a bundled deployment tool. Dploy.io is a standalone version of that tool just for deployment. In this tutorial I have chosen two separate services to show how everything fits together, and because you may already be using source control. If you are setting up all of this for the first time then using Beanstalk saves having two accounts \u2013 and I can personally recommend them.\n\nPutting it all together\n\nThe steps we are going to work through are:\n\n\n\tGetting your local site into a local Git repository\n\tPushing the files to a hosted repository\n\tConnecting a deployment tool to your web hosting\n\tSetting up a deployment\n\n\nGet your local site into a local Git repository\n\nDownload and install Git for your operating system.\n\nOpen up a Terminal window and tell Git your name using the following command (use the name you will set up on your hosted repository).\n\n> git config --global user.name \"YOUR NAME\"\n\n\nUse the next command to give Git your email address. This should be the address that you will use to sign up for your remote repository.\n\n> git config --global user.email \"YOUR EMAIL ADDRESS\"\n\n\nStaying in the command line, change to the directory where you keep your site files. If your files are in /Users/rachel/Sites/mynicewebsite you would type:\n\n> cd /Users/rachel/Sites/mynicewebsite\n\n\nThe next command tells Git that we want to create a new Git repository here.\n\n> git init\n\n\nWe then add our files:\n\n> git add .\n\n\nThen commit the files:\n\n> git commit -m \"Adding initial files\"\n\n\nThe bit in quotes after -m is a message describing what you are doing with this commit. It\u2019s important to add something useful here to remind yourself later why you made the changes included in the commit.\n\nYour local files are now in a Git repository! However, everything should be just the same as before in terms of working on the files or viewing them in a local web server. The only difference is that you can add and commit changes to this local repository.\n\nWant to know more about Git? 
There are some excellent resources in a range of formats here.\n\nSetting up a hosted Git repository\n\nI\u2019m going to use Atlassian Bitbucket for my first example as they offer a free hosted and private repository.\n\nCreate an account on Bitbucket. Then create a new empty repository and give it a name that will identify the repository easily.\n\nClick Getting Started and under Command Line select \u201cI have an existing project\u201d. This will give you a set of instructions to run on the command line. The first instruction is just to change into your working directory as we did before. We then add a remote repository, and run two commands to push everything up to Bitbucket.\n\ncd /path/to/my/repo\ngit remote add origin https://myuser@bitbucket.org/myname/24ways-tutorial.git\ngit push -u origin --all \ngit push -u origin --tags \n\n\nWhen you run the push command you will be asked for the password that you set for Bitbucket. Having entered that, you should be able to view the files of your site on Bitbucket by selecting the navigation option Source in the sidebar.\n\nYou will also be able to see commits. When we initially committed our files locally we added the message \u201cAdding initial files\u201d. If you select Commits from the sidebar you\u2019ll see we have one commit, with the message we set locally. You can imagine how useful this becomes when you can look back and see why you made certain changes to a project that perhaps you haven\u2019t worked on for six months.\n\nBefore working on your site locally you should run:\n\n> git pull\n\n\nin your working directory to make sure you have all of the most up-to-date files. This is especially important if someone else might work on them, or you just use multiple machines.\n\nYou then make your changes and add any changed or modified files, for example:\n\n> git add index.php\n\n\nCommit the change locally:\n\n> git commit -m \"updated the homepage\"\n\n\nThen push it to Bitbucket:\n\n> git push origin master\n\n\nIf you want to work on your files on a different computer you clone them using the following command:\n\n> git clone https://myuser@bitbucket.org/myname/24ways-tutorial.git\n\n\nYou then have a copy of your files that is already a Git repository with the Bitbucket repository set up as a remote, so you are all ready to start work.\n\nConnecting a deployment tool to your repository and web hosting\n\nThe next step is deploying files. I have chosen to use a deployment tool called Deploy as it has support for Bitbucket. It does have a monthly charge \u2013 but offers a free account for open source projects.\n\nSign up for your account then log in and create your first project. Select Create an empty project. Under Configure Repository Details choose Bitbucket and enter your username and password.\n\nIf Deploy can connect, it will show you your list of projects. Select the one you want.\n\nThe next screen is Add New Server and here you need to configure the server that you want to deploy to. You might set up more than one server per project. In an ideal world you would deploy to a staging server for your client to preview changes and then deploy once everything is signed off. For now I\u2019ll assume you just want to set up your live site.\n\nGive the server a name; I usually use Production for the live web server. Then choose the protocol to connect with. 
Unless your host really does not support SFTP (which is pretty rare) I would choose that instead of FTP.\n\nYou now add the same details your host gave you to log in with your SFTP client, including the username and password. The Path on server should be where your files are on the server. When you log in with an SFTP client and you get put in the directory above public_html then you should just be able to add public_html here.\n\nOnce your server is configured you can deploy. Click Deploy now and choose the server you just set up. Then choose the last commit (which will probably be selected for you) and click Preview deployment. You will then get a preview of which files will change if you run the deployment: the files that will be added and any that will be removed. At the very top of that screen you should see the commit message you entered right back when you initially committed your files locally.\n\nIf all looks good, run the deployment.\n\nYou have taken the first steps to a more consistent and robust way of deploying your websites. It might seem like quite a few steps at first, but you will very soon come to realise how much easier deploying a live site is through this process.\n\nYour new procedure step by step\n\n\n\tEdit your files locally as before, testing them through a web server on your own computer.\n\tCommit your changes to your local Git repository.\n\tPush changes to the remote repository.\n\tLog into the deployment service.\n\tHit the Deploy now button.\n\tPreview the changes.\n\tRun the deployment and then check your live site.\n\n\nTaking it further\n\nI have tried to keep things simple in this article because so often, once you start to improve processes, it is easy to get bogged down in all the possible complexities. If you move from deploying with an FTP client to working in the way I have outlined above, you\u2019ve taken a great step forward in creating more robust processes. You can continue to improve your procedures from this point.\n\nStaging servers for client preview\n\nWhen we added our server we could have added an additional server to use as a staging server for clients to preview their site on. This is a great use of a cheap VPS server, for example. You can set each client up with a subdomain \u2013 clientname.yourcompany.com \u2013 and this becomes the place where they can view changes before you deploy them.\n\nIn that case you might deploy to the staging server, let the client check it out and then go back and deploy the same commit to the live server.\n\nUsing Git branches\n\nAs you become more familiar with using Git, and especially if you start working with other people, you might need to start developing using branches. You can then have a staging branch that deploys to staging and a production branch that is always a snapshot of what has been pushed to production. This guide from Beanstalk explains how this works.\n\nAutomatic deployment to staging\n\nI wouldn\u2019t suggest doing automatic deployment to the live site. It\u2019s worth having someone on hand hitting the button and checking that everything worked nicely. If you have configured a staging server, however, you can set it up to deploy the changes each time a commit is pushed to it.\n\nIf you use Bitbucket and Deploy you would create a deployment hook on Bitbucket to post to a URL on Deploy when a push happens to deploy the code. This can save you a few steps when you are just testing out changes. 
Even if you have made lots of changes to the staging deployment, the commit that you push live will include them all, so you can do that manually once you are happy with how things look in staging.\n\nFurther Reading\n\n\n\tThe tutorials from Git Client Tower, already mentioned in this article, are a great place to start if you are new to Git.\n\tA presentation from Liam Dempsey showing how to use the GitHub App to connect to Bitbucket\n\tTry Git from Code School\n\tThe Git Workbook, a self-study guide to Git from Lorna Mitchell\n\n\nGet set up for the new year\n\nI love to start the New Year with a clean slate and improved processes. If you are still wrangling files with FTP then this is one thing you could tick off your list to save you time and energy in 2015. Post to the comments if you have suggestions of tools or ideas for ways to enhance this type of set-up for those who have already taken the first steps.", "year": "2014", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2014-12-04T00:00:00+00:00", "url": "https://24ways.org/2014/developing-robust-deployment-procedures/", "topic": "process"} {"rowid": 37, "title": "JavaScript Modules the ES6 Way", "contents": "JavaScript admittedly has plenty of flaws, but one of the largest and most prominent is the lack of a module system: a way to split up your application into a series of smaller files that can depend on each other to function correctly. \n\nThis is something nearly all other languages come with out of the box, whether it be Ruby\u2019s require, Python\u2019s import, or any other language you\u2019re familiar with. Even CSS has @import! JavaScript has nothing of that sort, and this has caused problems for application developers as they go from working with small websites to full client-side applications. Let\u2019s be clear: it doesn\u2019t mean the new module system in the upcoming version of JavaScript won\u2019t be useful to you if you\u2019re building smaller websites rather than the next Instagram.\n\nThankfully, the lack of a module system will soon be a problem of the past. The next version of JavaScript, ECMAScript 6, will bring with it a full-featured module and dependency management solution for JavaScript. The bad news is that it won\u2019t be landing in browsers for a while yet \u2013 but the good news is that the specification for the module system and how it will look has been finalised. The even better news is that there are tools available to get it all working in browsers today without too much hassle. In this post I\u2019d like to give you the gift of JS modules and show you the syntax, and how to use them in browsers today. It\u2019s much simpler than you might think.\n\nWhat is ES6?\n\nECMAScript is a scripting language that is standardised by a standards organisation called Ecma International. JavaScript is an implementation of ECMAScript. ECMAScript 6 is simply the next version of the ECMAScript standard and, hence, the next version of JavaScript. The spec aims to be fully confirmed and complete by the end of 2014, with a target initial release date of June 2015. It\u2019s impossible to know when we will have full feature support across the most popular browsers, but already some ES6 features are landing in the latest builds of Chrome and Firefox. 
You shouldn\u2019t expect to be able to use the new features across browsers without some form of additional tooling or library for a while yet.\n\nThe ES6 module spec\n\nThe ES6 module spec was fully confirmed in July 2014, so all the syntax I will show you in this article is not expected to change. I\u2019ll first show you the syntax and the new APIs being added to the language, and then look at how to use them today. There are two parts to the new module system. The first is the syntax for declaring modules and dependencies in your JS files, and the second is a programmatic API for loading in modules manually. The first is what most people are expected to use most of the time, so it\u2019s what I\u2019ll focus on more.\n\nModule syntax\n\nThe key thing to understand here is that modules have two key components. First, they have dependencies. These are things that the module you are writing depends on to function correctly. For example, if you were building a carousel module that used jQuery, you would say that jQuery is a dependency of your carousel. You import these dependencies into your module, and we\u2019ll see how to do that in a minute. Second, modules have exports. These are the functions or variables that your module exposes publicly to anything that imports it. Using jQuery as the example again, you could say that jQuery exports the $ function. Modules that depend on and hence import jQuery get access to the $ function, because jQuery exports it.\n\nAnother important thing to note is that when I discuss a module, all I really mean is a JavaScript file. There\u2019s no extra syntax to use other than the new ES6 syntax. Once ES6 lands, modules and files will be analogous.\n\nNamed exports\n\nModules can export multiple objects, which can be either plain old variables or JavaScript functions. You denote something to be exported with the export keyword:\n\nexport function double(x) {\n return x + x;\n};\n\n\nYou can also store something in a variable then export it. If you do that, you have to wrap the variable in a set of curly braces.\n\nvar double = function(x) {\n return x + x;\n}\n\nexport { double };\n\nA module can then import the double function like so:\n\nimport { double } from 'mymodule';\ndouble(2); // 4\n\nAgain, curly braces are required around the variable you would like to import. It\u2019s also important to note that from 'mymodule' will look for a file called mymodule.js in the same directory as the file you are requesting the import from. There is no need to add the .js extension.\n\nThe reason for those extra braces is that this syntax lets you export multiple variables:\n\nvar double = function(x) {\n return x + x;\n}\n\nvar square = function(x) {\n return x * x;\n}\n\nexport { double, square }\n\nI personally prefer this syntax over the export function \u2026, but only because it makes it much clearer to me what the module exports. Typically I will have my export {\u2026} line at the bottom of the file, which means I can quickly look in one place to determine what the module is exporting.\n\nA file importing both double and square can do so in just the way you\u2019d expect:\n\nimport { double, square } from 'mymodule';\ndouble(2); // 4\nsquare(3); // 9\n\nWith this approach you can\u2019t easily import an entire module and all its methods. This is by design \u2013 it\u2019s much better and you\u2019re encouraged to import just the functions you need to use.\n\nDefault exports\n\nAlong with named exports, the system also lets a module have a default export. 
This is useful when you are working with a large library such as jQuery, Underscore, Backbone and others, and just want to import the entire library. A module can define its default export (it can only ever have one default export) like so:\n\nexport default function(x) {\n return x + x;\n}\n\nAnd that can be imported:\n\nimport double from 'mymodule';\ndouble(2); // 4\n\n\nThis time you do not use the curly braces around the name of the object you are importing. Also notice how you can name the import whatever you\u2019d like. Default exports are not named, so you can import them as anything you like:\n\nimport christmas from 'mymodule';\nchristmas(2); // 4\n\nThe above is entirely valid.\n\nAlthough it\u2019s not something that is used too often, a module can have both named exports and a default export, if you wish.\n\nOne of the design goals of the ES6 modules spec was to favour default exports. There are many reasons behind this, and there is a very detailed discussion on the ES Discuss site about it. That said, if you find yourself preferring named exports, that\u2019s fine, and you shouldn\u2019t change that to meet the preferences of those designing the spec.\n\nProgrammatic API\n\nAlong with the syntax above, there is also a new API being added to the language so you can programmatically import modules. It\u2019s pretty rare you would use this, but one obvious example is loading a module conditionally based on some variable or property. You could easily import a polyfill, for example, if the user\u2019s browser didn\u2019t support a feature your app relied on. An example of doing this is:\n\nif(someFeatureNotSupported) {\n System.import('my-polyfill').then(function(myPolyFill) {\n // use the module from here\n });\n}\n\nSystem.import will return a promise, which, if you\u2019re not familiar, you can read about in this excellent article on HTML5 Rocks by Jake Archibald. A promise basically lets you attach callback functions that are run when the asynchronous operation (in this case, System.import) is complete.\n\nThis programmatic API opens up a lot of possibilities and will also provide hooks to allow you to register callbacks that will run at certain points in the lifetime of a module. Those hooks and that syntax are slightly less set in stone, but when they are confirmed they will provide really useful functionality. For example, you could write code that would run every module that you import through something like JSHint before importing it. In development that would provide you with an easy way to keep your code quality high without having to run a command line watch task.\n\nHow to use it today\n\nIt\u2019s all well and good having this new syntax, but right now it won\u2019t work in any browser \u2013 and it\u2019s not likely to for a long time. Maybe in next year\u2019s 24 ways there will be an article on how you can use ES6 modules with no extra work in the browser, but for now we\u2019re stuck with a bit of extra work.\n\nES6 module transpiler\n\nOne solution is to use the ES6 module transpiler, a compiler that lets you write your JavaScript using the ES6 module syntax (actually a subset of it \u2013 not quite everything is supported, but the main features are) and have it compiled into either CommonJS-style code (CommonJS is the module specification that NodeJS and Browserify use), or into AMD-style code (the spec RequireJS uses). 
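To give a rough idea of what that conversion produces, here is a simplified sketch (not the transpiler\u2019s literal output) of how the earlier double example might look once compiled to CommonJS:\n\n// mymodule.js compiled to CommonJS \u2013 illustrative sketch only\nfunction double(x) {\n return x + x;\n}\nexports.double = double;\n\n// a file that imported double, after compilation\nvar double = require('./mymodule').double;\ndouble(2); // 4\n\nThe named export becomes a property on the exports object and the import becomes a require call, which is the style of code NodeJS and Browserify already understand. 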
There are also plugins for all the popular build tools, including Grunt and Gulp.\n\nThe advantage of using this transpiler is that if you are already using a tool like RequireJS or Browserify, you can drop the transpiler in, start writing in ES6 and not worry about any additional work to make the code work in the browser, because you should have that set up already. If you don\u2019t have any system in place for handling modules in the browser, using the transpiler doesn\u2019t really make sense. Remember, all this does is convert ES6 module code into CommonJS- or AMD-compliant JavaScript. It doesn\u2019t do anything to help you get that code running in the browser, but if you have that part sorted it\u2019s a really nice addition to your workflow. If you would like a tutorial on how to do this, I wrote a post back in June 2014 on using ES6 with the ES6 module transpiler.\n\nSystemJS\n\nAnother solution is SystemJS. It\u2019s the best solution in my opinion, particularly if you are starting a new project from scratch, or want to use ES6 modules on a project where you have no current module system in place. SystemJS is a spec-compliant universal module loader: it loads ES6 modules, AMD modules, CommonJS modules, as well as modules that just add a variable to the global scope (window, in the browser).\n\nTo load in ES6 files, SystemJS also depends on two other libraries: the ES6 module loader polyfill, and Traceur. Traceur is best accessed through the bower-traceur package, as the main repository doesn\u2019t have an easy-to-find downloadable version. The ES6 module loader polyfill implements System.import, and lets you load in files using it. Traceur is an ES6-to-ES5 compiler. It takes code written in ES6, the newest version of JavaScript, and transpiles it into ES5, the version of JavaScript widely implemented in browsers. The advantage of this is that you can play with the new features of the language today, even though they are not supported in browsers. The drawback is that you have to run all your files through Traceur every time you save them, but this is easily automated. Additionally, if you use SystemJS, the Traceur compilation is done automatically for you.\n\nAll you need to do to get SystemJS running is to add a <script> element to load SystemJS into your webpage. It will then automatically load the ES6 module loader and Traceur files when it needs them. In your HTML you then need to use System.import to load in your module:\n\n<script>\n System.import('./app');\n</script>\n\nWhen you load the page, app.js will be asynchronously loaded. Within app.js, you can now use ES6 modules. SystemJS will detect that the file is an ES6 file, automatically load Traceur, and compile the file into ES5 so that it works in the browser. It does all this dynamically in the browser, but there are tools to bundle your application in production, so it doesn\u2019t make a lot of requests on the live site. In development though, it makes for a really nice workflow.\n\nWhen working with SystemJS and modules in general, the best approach is to have a main module (in our case app.js) that is the main entry point for your application. app.js should then be responsible for loading all your application\u2019s modules. 
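As an illustration, a minimal app.js might do little more than import the modules that make up the application and start things off. The module names here are made up for the example:\n\n// app.js - the single entry point for the application\nimport { double, square } from 'maths';\nimport setupCarousel from 'carousel';\n\n// wire things together once the modules have loaded\nsetupCarousel(document.querySelector('.carousel'));\nconsole.log(double(2), square(3)); // 4 9\n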
This forces you to keep your application organised by only loading one file initially, and having the rest dealt with by that file.\n\nSystemJS also provides a workflow for bundling your application together into one file.\n\nConclusion\n\nES6 modules may be at least six months to a year away (if not more) but that doesn\u2019t mean they can\u2019t be used today. Although there is an overhead to using them now \u2013 with the work required to set up SystemJS, the module transpiler, or another solution \u2013 that doesn\u2019t mean it\u2019s not worthwhile. Using any module system in the browser, whether that be RequireJS, Browserify or another alternative, requires extra tooling and libraries to support it, and I would argue that the effort to set up SystemJS is no greater than that required to configure any other tool. It also comes with the extra benefit that when the syntax is supported in browsers, you get a free upgrade. You\u2019ll be able to remove SystemJS and have everything continue to work, backed by the native browser solution.\n\nIf you are starting a new project, I would strongly advocate using ES6 modules. It is a syntax and specification that is not going away at all, and will soon be supported in browsers. Investing time in learning it now will pay off hugely further down the road.\n\nFurther reading\n\nIf you\u2019d like to delve further into ES6 modules (or ES6 generally) and using them today, I recommend the following resources:\n\n\n\tECMAScript 6 modules: the final syntax by Axel Rauschmayer\n\tPractical Workflows for ES6 Modules by Guy Bedford\n\tECMAScript 6 resources for the curious JavaScripter by Addy Osmani\n\tTracking ES6 support by Addy Osmani\n\tES6 Tools List by Addy Osmani\n\tUsing Grunt and the ES6 Module Transpiler by Thomas Boyt\n\tJavaScript Modules and Dependencies with jspm by myself\n\tUsing ES6 Modules Today by Guy Bedford", "year": "2014", "author": "Jack Franklin", "author_slug": "jackfranklin", "published": "2014-12-03T00:00:00+00:00", "url": "https://24ways.org/2014/javascript-modules-the-es6-way/", "topic": "code"} {"rowid": 31, "title": "Dealing with Emergencies in Git", "contents": "The stockings were hung by the chimney with care,\nIn hopes that version control soon would be there.\n\nThis summer I moved to the UK with my partner, and the onslaught of the Christmas holiday season began around the end of October (October!). It does mean that I\u2019ve had more than a fair amount of time to come up with horrible Git analogies for this article. Analogies, metaphors, and comparisons help the learner hook into existing mental models about how a system works. They only help, however, if the learner has enough familiarity with the topic at hand to make the connection between the old and new information.\n\nLet\u2019s start by painting an updated version of Clement Clarke Moore\u2019s Christmas living room. Empty stockings are hung up next to the fireplace, waiting for Saint Nicholas to come down the chimney and fill them with small treats. Holiday treats are scattered about. A bowl of mixed nuts, the holiday nutcracker, and a few clementines. A string of coloured lights winds its way up an evergreen.\n\nPerhaps a few of these images are familiar, or maybe they\u2019re just settings you\u2019ve seen in a movie. It doesn\u2019t really matter what the living room looks like though. The important thing is to ground yourself in your own experiences before tackling a new subject. 
Instead of trying to brute-force your way into new information, as an adult learner constantly ask yourself: \u2018What is this like? What does this remind me of? What do I already know that I can use to map out this new territory?\u2019 It\u2019s okay if the map isn\u2019t perfect. As you refine your understanding of a new topic, you\u2019ll outgrow the initial metaphors, analogies, and comparisons.\n\nWith apologies to Mr. Moore, let\u2019s give it a try.\n\nGetting Interrupted in Git\n\nWhen on the roof there arose such a clatter!\n\nYou\u2019re happily working on your software project when all of a sudden there are freaking reindeer on the roof! Whatever you\u2019ve been working on is going to need to wait while you investigate the commotion.\n\nIf you\u2019ve got even a little bit of experience working with Git, you know that you cannot simply change what you\u2019re working on in times of emergency. If you\u2019ve been doing work, you have a dirty working directory and you cannot change branches, or push your work to a remote repository while in this state.\n\nUp to this point, you\u2019ve probably dealt with emergencies by making a somewhat useless commit with a message something to the effect of \u2018switching branches for a sec\u2019. This isn\u2019t exactly helpful to future you, as commits should really contain whole ideas of completed work. If you get interrupted, especially if there are reindeer on the roof, the chances are very high that you weren\u2019t finished with what you were working on.\n\nYou don\u2019t need to make useless commits though. Instead, you can use the stash command. This command allows you to temporarily set aside all of your changes so that you can come back to them later. In this sense, stash is like setting your book down on the side table (or pushing the cat off your lap) so you can go investigate the noise on the roof. You aren\u2019t putting your book away though, you\u2019re just putting it down for a moment so you can come back and find it exactly the way it was when you put it down.\n\nLet\u2019s say you\u2019ve been working in the branch waiting-for-st-nicholas, and now you need to temporarily set aside your changes to see what the noise was on the roof:\n\n$ git stash\n\nAfter running this command, all uncommitted work will be temporarily removed from your working directory, and you will be returned to whatever state you were in the last time you committed your work.\n\nWith the book safely on the side table, and the cat safely off your lap, you are now free to investigate the noise on the roof. It turns out it\u2019s not reindeer after all, but just your boss who thought they\u2019d help out by writing some code on the project you\u2019ve been working on. Bless. Rolling your eyes, you agree to take a look and see what kind of mischief your boss has gotten themselves into this time.\n\nYou fetch an updated list of branches from the remote repository, locate the branch your boss had been working on, and checkout a local copy:\n\n$ git fetch\n$ git branch -r\n$ git checkout -b helpful-boss-branch origin/helpful-boss-branch\n\nYou are now in a local copy of the branch where you are free to look around, and figure out exactly what\u2019s going on.\n\nYou sigh audibly and say, \u2018Okay. 
Tell me what was happening when you first realised you\u2019d gotten into a mess\u2019 as you look through the log messages for the branch.\n\n$ git log --oneline\n$ git log\n\nBy using the log command you will be able to review the history of the branch and find out the moment right before your boss ended up stuck on your roof.\n\nYou may also want to compare the work your boss has done to the main branch for your project. For this article, we\u2019ll assume the main branch is named master.\n\n$ git diff master\n\nLooking through the commits, you may be able to see that things started out okay but then took a turn for the worse.\n\nChecking out a single commit\n\nUsing commands you\u2019re already familiar with, you can rewind through history and take a look at the state of the code at any moment in time by checking out a single commit, just like you would a branch.\n\nUsing the log command, locate the unique identifier (commit hash) of the commit you want to investigate. For example, let\u2019s say the unique identifier you want to checkout is 25f6d7f.\n\n$ git checkout 25f6d7f\n\nNote: checking out '25f6d7f'.\n\nYou are in 'detached HEAD' state. You can look around,\nmake experimental changes and commit them, and you can\ndiscard any commits you make in this state without\nimpacting any branches by performing another checkout.\n\nIf you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example:\n\n$ git checkout -b new_branch_name\n\nHEAD is now at 25f6d7f... Removed first paragraph.\n\nThis is usually where people start to panic. Your boss screwed something up, and now your HEAD is detached. Under normal circumstances, these words would be a very good reason to panic.\n\nTake a deep breath. Nothing bad is going to happen. Being in a detached HEAD state just means you\u2019ve temporarily disconnected from a known chain of events. In other words, you\u2019re currently looking at the middle of a story (or branch) about what happened \u2013 and you\u2019re not at the endpoint for this particular story.\n\nGit allows you to view the history of your repository as a timeline (technically it\u2019s a directed acyclic graph). When you make commits which are not associated with a branch, they are essentially inaccessible once you return to a known branch. If you make commits while you\u2019re in a detached HEAD state, and then try to return to a known branch, Git will give you a warning and tell you how to save your work.\n\n$ git checkout master\n\nWarning: you are leaving 1 commit behind, not connected to\nany of your branches:\n\n 7a85788 Your witty holiday commit message.\n\nIf you want to keep them by creating a new branch, this may be a good time to do so with:\n\n$ git branch new_branch_name 7a85788\n\nSwitched to branch 'master'\nYour branch is up-to-date with 'origin/master'.\n\nSo, if you want to save the commits you\u2019ve made while in a detached HEAD state, you simply need to put them on a new branch.\n\n$ git branch saved-headless-commits 7a85788\n\nWith this trick under your belt, you can jingle around in history as much as you\u2019d like. It\u2019s not like sliding around on a timeline though. When you checkout a specific commit, you will only have access to the history from that point backwards in time. 
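Put together, and reusing the example commit hashes from above, the whole detour compresses into just a few commands:\n\n$ git checkout 25f6d7f\n# look around, and commit any experimental changes you want to keep\n$ git checkout master\n$ git branch saved-headless-commits 7a85788\n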
If you want to move forward in history, you\u2019ll need to move back to the branch tip by checking out the branch again.\n\n$ git checkout helpful-boss-branch\n\nYou\u2019re now back to the present. Your HEAD is now pointing to the endpoint of a known branch, and so it is no longer detached. Any changes you made while on your adventure are safely stored in a new branch, assuming you\u2019ve followed the instructions Git gave you. That wasn\u2019t so scary after all, now, was it?\n\nBack to our reindeer problem.\n\nIf your boss is anything like the bosses I\u2019ve worked with, chances are very good that at least some of their work is worth salvaging. Depending on how your repository is structured, you\u2019ll want to capture the good work using one of several different methods.\n\nBack in the living room, we\u2019ll use our bowl of nuts to illustrate how you can rescue a tiny bit of work.\n\nSaving just one commit\n\nAbout that bowl of nuts. If you\u2019re like me, you probably had some favourite kinds of nuts from an assorted collection. Walnuts were generally the most satisfying to crack open. So, instead of taking the entire bowl of nuts and dumping it into a stocking (merging the stocking and the bowl of nuts), we\u2019re just going to pick out one nut from the bowl. In Git terms, we\u2019re going to cherry-pick a commit and save it to another branch.\n\nFirst, checkout the main branch for your development work. From this branch, create a new branch where you can copy the changes into.\n\n$ git checkout master\n$ git checkout -b rescue-the-boss\n\nFrom your boss\u2019s branch, helpful-boss-branch, locate the commit you want to keep.\n\n$ git log --oneline helpful-boss-branch\n\nLet\u2019s say the commit ID you want to keep is e08740b. From your rescue branch, use the command cherry-pick to copy the changes into your current branch.\n\n$ git cherry-pick e08740b\n\nIf you review the history of your current branch again, you will see you now also have the changes made in the commit in your boss\u2019s branch.\n\nAt this point you might need to make a few additional fixes to help your boss out. (You\u2019re angling for a bonus out of all this. Go the extra mile.) Once you\u2019ve made your additional changes, you\u2019ll need to add that work to the branch as well.\n\n$ git add [filename(s)]\n$ git commit -m \"Building on boss's work to improve feature X.\"\n\nGo ahead and test everything, and make sure it\u2019s perfect. You don\u2019t want to introduce your own mistakes during the rescue mission!\n\nUploading the fixed branch\n\nThe next step is to upload the new branch to the remote repository so that your boss can download it and give you a huge bonus for helping them fix their branch.\n\n$ git push -u origin rescue-the-boss\n\nCleaning up and getting back to work\n\nWith your boss rescued, and your bonus secured, you can now delete the local temporary branches.\n\n$ git branch --delete rescue-the-boss\n$ git branch --delete helpful-boss-branch\n\nAnd settle back into your chair to wait for Saint Nicholas with your book, your branch, and possibly your cat.\n\n$ git checkout waiting-for-st-nicholas\n$ git stash pop\n\nYour working directory has been returned to exactly the same state you were in at the beginning of the article.\n\nHaving fun with analogies\n\nI\u2019ve had a bit of fun with analogies in this article. 
But sometimes those little twists on ideas can really help someone pick up a new idea (git stash: it\u2019s like when Christmas comes around and everyone throws their fashion sense out the window and puts on a reindeer sweater for the holiday party; or git bisect: it\u2019s like trying to find that one broken light on the string of Christmas lights). It doesn\u2019t matter if the analogy isn\u2019t perfect. It\u2019s just a way to give someone a temporary hook into a concept in a way that makes the concept accessible while the learner becomes comfortable with it. As the learner\u2019s comfort increases, the analogies can drop away, making room for the technically correct definition of how something works.\n\nOr, if you\u2019re like me, you can choose to never grow old and just keep mucking about in the analogies. I\u2019d argue it\u2019s a lot more fun to play with a string of Christmas lights and some holiday cheer than a directed acyclic graph anyway.", "year": "2014", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2014-12-02T00:00:00+00:00", "url": "https://24ways.org/2014/dealing-with-emergencies-in-git/", "topic": "code"} {"rowid": 29, "title": "What It Takes to Build a Website", "contents": "In 1994 we lost Kurt Cobain and got the world wide web as a weird consolation prize. In the years that followed, if you\u2019d asked me if I knew how to build a website I\u2019d have said yes, I know HTML, so I know how to build a website. If you\u2019d then asked me what it takes to build a website, I\u2019d have had to admit that HTML would hardly feature.\n\nAmong the design nerdery and dev geekery it\u2019s easy to think that the nuts and bolts of building a page just need to be multiplied up and Ta-da! There\u2019s your website. That can certainly be true with weekend projects and hackery for fun. It works for throwing something together on GitHub or experimenting with ideas on your personal site. But what about working professionally on client projects?\n\nThe web is important, so we need to build it right.\n\nIt\u2019s 2015 \u2013 your job involves people paying you money for building websites. What does it take to build a website and to do it right? What practices should we adopt to make really great, successful and professional web projects in 2015? I put that question to some friends and 24 ways authors to see what they thought.\n\nGetting the tech right\n\nInevitably, it all starts with the technology. We work in a technical medium, after all. From Notepad and WinFTP through to continuous integration and deployment \u2013 how do you build sites?\n\nCreate a stable development environment\n\nThere\u2019s little more likely to send a web developer into a wild panic and a client into a wild rage than making a new site live and things just not working. That\u2019s why it\u2019s important to have realistic development and staging environments that mimic the live server as closely as possible.\n\nAre you in the habit of developing new sites right on the client\u2019s server? Or maybe in a subfolder on your local machine? It\u2019s time to reconsider.\n\nCharlie Perrins writes:\n\n\n\tDon\u2019t work on a live server \u2013 this feels like one of those gear-changing moments for a developer\u2019s growth. Build something that works just as well locally on your own machine as it does on a live server, and capture the differences in the code between the local and live version in a single config file. 
Ultimately, if you can get all the differences between environments down to a config level then you\u2019ll be in a really good position to automate the deployment process at some point in the future.\n\n\nAnything that creates a significant difference between the development and the live environments has the potential to cause problems you won\u2019t know about until the site goes live \u2013 and at that point the problems are very public and very embarrassing, not to mention unprofessional.\n\nA reasonable solution is to use a tool like MAMP PRO which enables you to set up an individual local website for each project you work on. Importantly, individual sites give you both consistency of paths between development and live, but also the ability to configure server options (like PHP versions and configuration, for example) to match the live site.\n\nBetter yet is to use a virtual machine, managed with a tool such as Vagrant. If you\u2019re interested in learning more about that, we have an article on that subject later in the series.\n\nUse source control\n\nTrent Walton writes:\n\n\n\tWe use source control, and it\u2019s become the centerpiece for how we handle collaboration, enhancements, and issues. It drives our process.\n\n\nI\u2019m hoping by now that you\u2019re either using source control for all your work, or feeling a nagging guilt that you should be. Be it Git, Mercurial, Subversion (name your poison), a revision control system enables you to keep track of changes, revert anything that breaks, and keep rolling backups of your project.\n\nThe benefits only start there, and Charlie Perrins recommends using source control \u201cnot just as a personal backup of your code, but as a way to play nicely with other developers.\u201c\n\nNoting the benefits when collaborating with other developers, he adds:\n\n\n\tGraduating from being the sole architect of your codebase to contributing to a shared codebase is a huge leap for a developer. Perhaps a practical way for people who tend to work on their own to do that would be to submit a pull request or a patch to an open source project or plugin.\u201d\n\n\nRichard Rutter of Clearleft sees clear advantages for the client, too. He recommends using source control \u201cpreferably in some sort of collaborative environment that you can open up or hand over to the client\u201d \u2013 a feature found with hosted services such as GitHub.\n\nIf you\u2019d like to hone your Git skills, Emma Jane Westby wrote Git for Grown-ups in last year\u2019s 24 ways.\n\nDon\u2019t repeat, automate!\n\nTim Kadlec is a big proponent of automating your build process:\n\n\n\tI\u2019ve been hammering that home to every client I\u2019ve had this year. It\u2019s amazing how many companies don\u2019t really have a formal build/deployment process in place. So many issues on the web (performance, accessibility, etc.) can be greatly improved just by having a layer of automation involved.\n\n\tFor example, graphic editing software spits out ridiculously bloated images. Very frequently, that\u2019s what ends up getting put on a site. If you have a build process, you can have the compression automated and start seeing immediate gains for no effort. On a recent project, they were able to shave around 1.5MB from their site weight simply by automating compression.\n\n\nOnce you have your code in source control, some of that automation can be made easier. 
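As a small, hypothetical example of what that can look like, a Git pre-commit hook can run your checks before any commit is accepted. The hook lives at .git/hooks/pre-commit and needs to be executable; the paths and tools below are placeholders for whatever your own project uses:\n\n#!/bin/bash\n# .git/hooks/pre-commit - refuse the commit if the quick checks fail\n\n# lint the JavaScript\njshint app/js/ || exit 1\n\n# make sure the Less still compiles\nlessc app/css/site.less > /dev/null || exit 1\n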
Brian Suda writes:\n\n\n\tWe have a few bash scripts that run on git commit: they compile the less, jslint and remove white-space, basically the 3 Cs, Compress, Concatenate, Combine. This is now part of our workflow without even realising it.\n\n\nOne great way to get started with a build process is to use a tool like Grunt, and a great way to get started with Grunt is to read Chris Coyier\u2019s Grunt for People Who Think Things Like Grunt are Weird and Hard.\n\nTim reinforces:\n\n\n\tIssues like [image compression] \u2014 or simple accessibility issues like alt tags on images \u2014 should never be able to hit a live server. If you can detect it, you can automate it. And if you can automate it, you can free up time for designers and developers to focus on more challenging \u2014 and interesting \u2014 problems.\n\n\nA clear call to arms to tighten up and formalise development and deployment practices. The less that has to be done manually or is susceptible to change, the less that can go wrong when a site is built and deployed. Any procedures that are automated are no longer dependant on a single person\u2019s knowledge, making it easier to build your team or just cope when someone important is out of the office or leaves.\n\nIf you\u2019re interested in kicking the FTP habit and automating your site deployments, we have an article later in the series just for you.\n\nBuild systems, not sites\n\nOne big theme arising this year was that of building websites as systems, not as individual pages.\n\nBrad Frost:\n\n\n\tFor me, teams making websites in 2015 shouldn\u2019t be working on just-another-redesign redesign. People are realizing that in order to make stable, future-friendly, scalable, extensible web experiences they\u2019re going to need to think more systematically. That means crafting deliberate and thoughtful design systems. That means establishing front-end style guides. That means killing the out-dated, siloed, assembly-line waterfall process and getting cross-disciplinary teams working together in meaningful ways. That means treating development as design. That means treating performance as design. That means taking the time out of the day to establish the big picture, rather than aimlessly crawling along quarter by quarter.\n\n\nDesigner and developer Jina Bolton also advocates the use of style guides, and recommends making the guide a project deliverable:\n\n\n\tConsider adding on a style guide/UI library to your project as a deliverable for maintainability and thinking through all UI elements and components.\n\n\nVal Head agrees: \u201cbuild and maintain a style guide for each project\u201d she wrote. On the subject of approaching a redesign, she added:\n\n\n\tA UI inventory goes a long way to helping get your head around what a design system needs in the early stages of a redesign project.\n\n\nSo what about that old chestnut, responsive web design? Should we be making sites responsive by default? How about mobile first?\n\nRichard Rutter:\n\n\n\tThink mobile first unless you have a very good reason not to. Remember to take the client with you on this principle, otherwise it won\u2019t work as a convincing piece of design.\n\n\nTrent Walton adds:\n\n\n\tThe more you can test and sort of skew your perception for what is typical on the web, the better. 4k displays hooked up to 100Mbps connections can make one extremely unsympathetic.\n\n\nThe value of testing with real devices is something Ruth John appreciates. 
She wrote:\n\n\n\tI still have my own small device lab at home, even though I work permanently for a well-established company (which has a LOT of devices at its disposal) \u2013 it just means I can get a good overview of how things are looking during development.\n\n\nAnd speaking of systems, Mark Norman Francis recommends the use of measuring tools to aid the design process; \u201c[U]se analytics and make decisions from actual data\u201d he suggests, rather than relying totally on intuition.\n\nTim Kadlec adds a word on performance planning:\n\n\n\tI think having a performance budget in place should now be a given on any project. We\u2019ve proven pretty conclusively through a hundred and one case studies that performance matters. And over the last year or so, we\u2019ve really seen a lot of great tools emerge to help track and enforce performance budgets. There\u2019s not really a good excuse for not using one any more.\n\n\nIt\u2019s clear that in the four years since Ethan Marcotte\u2019s Responsive Web Design article the diversity of screen sizes, network connection speeds and input methods has only increased. New web projects should presume visitors will be using anything from a watch up to a big screen desktop display, and from being offline, through to GPRS, 3G and fast broadband.\n\nWill it take more time to design and build for those constraints? Yes, it most likely will. If Internet Explorer is brave enough to ask to be your default browser, you can be brave enough to tell your client they need to build responsively.\n\nWorking collaboratively\n\nA big part of delivering a successful website project is how we work together, both as a design team and a wider project team with the client.\n\nVal Head recommends an open line of communication:\n\n\n\tKeep conversations going. With clients, with teammates. Talking is so important with the way we work now. A good team conversation place, like Slack, is slowly becoming invaluable for me too.\n\n\nRuth John agrees:\n\n\n\tWe\u2019ve recently opened up our lines of communication by using Slack. It has transformed the way we work. We\u2019re easily more productive and collaborative on projects, as well as making it a lot easier for us all to work remotely (including freelancers).\n\n\nShe goes on to point out how tools can be combined to ease team communication without adding further complications:\n\n\n\tWe have a private GitHub organisation (which everyone who works with us is granted access to), which not only holds all our project code but also a team wiki. This has lots of information to get you set up within the team, as well as coding guidelines and best practices and other admin info, like contact numbers/emails for the team.\n\n\nSmall-A agile is also the theme of the day, with Mark Norman Francis suggesting an approach of \u201csmall iterations with constant feedback around individual features, not spec-it-all-first\u201d. He also encourages you to review as you go, at each stage of the project:\n\n\n\tAlways reflect on what went well and what went badly, and how you can learn from that, even if not Doing Agile\u2122. Ultimately \u201cbest practices\u201d should come from learning lessons (both good and bad).\n\n\nRichard Rutter echoes this, warning against working in isolation from the client for too long:\n\n\n\tAvoid big reveals. Your engagement with the client should be participatory. 
In business no one likes surprises.\n\n\nThis experience rings true for Ruth John who recommends involving real users in the feedback loop, not just the client:\n\n\n\tWe also try and get feedback on what we\u2019re building as soon and as often as we can with our stakeholders/clients and real users.\n\n\nWe should also remember that our role is to serve the client\u2019s needs, not just bill them for whatever we can. Brian Suda adds:\n\n\n\tDon\u2019t sell clients on things they don\u2019t need. We can spout a lot of jargon and scare clients into thinking you are a god. We can do things few can now, but you can\u2019t rip people off because they are unknowledgeable.\n\n\nBut do clients know what they\u2019re getting, even when they see it? Trent Walton has an interesting take:\n\n\n\tWe focus on prototypes over image-based comps at all costs, especially when meetings are involved. It\u2019s much easier to assess a prototype, and too often with image-based comps, discussions devolve into how something might feel when actually live, or how a layout could change to fit a given viewport.\n\n\nVal Head also likes to get work into the browser:\n\n\n\tSketch design ideas with any software you like, but get to the browser as soon as possible.\n\n\nBeyond your immediate team, Emma Jane Westby has advice for looking further afield:\n\n\n\tInvest time into building relationships within your (technical) community. You never know when you might be able to lend a hand; or benefit from someone who\u2019s able to lend theirs.\n\n\nAnd when things don\u2019t go according to plan, Brian Suda has the following advice:\n\n\n\tIf something doesn\u2019t work out, be professional and don\u2019t burn bridges. It will always come back to you.\n\n\nThe best work comes from working collaboratively, not just as a team within an agency or department, but with the client and stakeholders too. If doing your job and chucking it over the fence ever worked, it certainly doesn\u2019t fly any more. You can work in isolation, but doing really great work requires collaboration.\n\nThe business end\n\nWhen you\u2019re building sites professionally, every team member has to think about the business aspects. Estimating time, setting billing rates, and establishing deliverables are all part of the job.\n\nIn 2008, Andrew Clarke gave us the Contract Killer sample contract we could use to establish a working agreement for a web design project. Richard Rutter agrees that contracts are still an essential part of business:\n\n\n\tThey are there for both parties\u2019 protection. Make sure you know what will happen if you decide you don\u2019t want to work with the client any more (it happens) and, of course, what circumstances mean they can stop taking your services.\n\n\nHaving a contract is one thing, but does it adequately protect both you and the client? Emma Jane Westby adds:\n\n\n\tFind a good IP lawyer/legal counsel. I routinely had an IP lawyer read all of my contracts to find loopholes I wouldn\u2019t have noticed. I didn\u2019t always change the contract, but at least I knew what might come back to bite me.\n\n\nSo, you have a contract in place, and know what the project is. Brian Suda recommends keeping track of time and making sure you bill fairly for the hours the project costs you:\n\n\n\tIf I go to a meeting and they are 15 minutes late, the billing clock has already started. They can\u2019t expect me to be in the 1h meeting and not bill for the extra 15\u201330 minutes they wasted. It goes both ways too. 
You need to do your best to respect their deadlines and time frame \u2013 this is always hard to get right.\n\n\nAs ever, it\u2019s good business to do good business. Perhaps we can at last shed the old image of web designers being snowboarding layabouts and demonstrate to clients that we care as much about conducting professional business as they do.\n\nTime to review\n\nIt\u2019s a lot to take in. Some of these ideas and practices will be familiar, others new and yet to be evaluated. The web moves at a fast pace, and we need to be constantly reexamining our tools, techniques and working practices. The most important thing is not to blindly adopt any and all suggestions, but to carefully look at what the benefits might be and decide how they apply to your work.\n\nCould you benefit from more formalised development and deployment procedures? Would your design projects run more smoothly and have a longer maintainable life if you approached the solution as a componentised system rather than a series of pages? Are your teams structured in a way that enables the most fluid communication, or are there changes you could make? Are your billing procedures and business agreements serving you and your clients in the best way possible?\n\nThe new year is a good time to look at your working practices and see what can be improved, and maybe this time next year you\u2019ll look back and think \u201cthank goodness we don\u2019t work like that any more\u201d.", "year": "2014", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2014-12-01T00:00:00+00:00", "url": "https://24ways.org/2014/what-it-takes-to-build-a-website/", "topic": "business"} {"rowid": 6, "title": "Run Ragged", "contents": "You care about typography, right? Do you care about words and how they look, read, and are understood? If you pick up a book or magazine, you notice the moment something is out of place: an orphan, rivers within paragraphs of justified prose, or caps masquerading as small caps. So why, I ask you, is your stance any different on the web?\n\nWe\u2019re told time and time again that as a person who makes websites we have to get comfortable with our lack of control. On the web, this is a feature, not a bug. But that doesn\u2019t mean we have to lower our standards, or not strive for the same amount of typographic craft of our print-based cousins. We shouldn\u2019t leave good typesetting at the door because we can\u2019t control the line length.\n\nWhen I typeset books, I\u2019d spend hours manipulating the text to create a pleasurable flow from line to line. A key aspect of this is manicuring the right rag \u2014 the vertical line of words on ranged-left text. Maximising the space available, but ensuring there are no line breaks or orphaned words that disrupt the flow of reading. Setting a right rag relies on a bunch of guidelines \u2014 or as I was first taught to call them, violations! \n\nViolation 1. Never break a line immediately following a preposition\n\nPrepositions are important, frequently used words in English. They link nouns, pronouns and other words together in a sentence. And links should not be broken if you can help it. Ending a line on a preposition breaks the join from one word to another and forces the reader to work harder joining two words over two lines.\n\nFor example: \n\n\n\tThe container is for the butter\n\n\nThe preposition here is for and shows the relationship between the butter and the container. 
If this were typeset on a line and the line break was after the word for, then the reader would have to carry that through to the next line. The sentence would not flow.\n\nThere are lots of prepositions in English \u2013 about 150 \u2013 but only 70 or so in use.\n\nViolation 2. Never break a line immediately following a dash\n\nA dash \u2014 either an em-dash or en-dash \u2014 can be used as a pause in the reading, or as used here, a point at which you introduce something that is not within the flow of the sentence. Like an aside. Ending with a pause on the end of the line would have the same effect as ending on a preposition. It disrupts the flow of reading.\n\nViolation 3. No small words at the end of a line\n\nDon\u2019t end a line with small words. Most of these will actually be covered by violation \u21161. But there will be exceptions. My general rule of thumb here is not to leave words of two or three letters at the end of a line.\n\nViolation 4. Hyphenation\n\nIn print, hyphens are used at the end of lines to join words broken over a line break. Mostly, this is used in justified body text, and no doubt you will be used to seeing it in newspapers or novels. A good rule of thumb is to not allow more than two consecutive lines to end with a hyphen.\n\nOn the web, of course, we can use the CSS hyphens property. It\u2019s reasonably supported with the exception of Chrome. Of course, it works best when combined with justified text to retain the neat right margin.\n\nViolation 5. Don\u2019t break emphasised phrases of three or fewer words\n\nIf you have a few words emphasised, for example:\n\n\n\tHe calls this problem definition escalation\n\n\n\u2026then try not to break the line among them. It\u2019s important the reader reads through all the words as a group.\n\nHow do we do all of that on the web?\n\nAll of those guidelines are relatively easy to implement in print. But what about the web? Where content is poured into a template from a CMS? Well, there are things we can do. Meet your new friend, the non-breaking space, or as you may know them: \u00a0.\n\nThe guidelines above are all based on one decision for the typesetter: when should the line break? \n\nWe can simply run through a body of text and add the \u00a0 based on these sets of questions:\n\n\n\tAre there any prepositions in the text? If so, add a \u00a0 after them.\n\tAre there any dashes? If so, add a \u00a0 after them.\n\tAre there any words of fewer than three characters that you haven\u2019t already added spaces to? If so, add a \u00a0 after them.\n\tAre there any emphasised groups of words either two or three words long? If so, add a \u00a0 in between them.\n\n\nFor a short piece of text, this isn\u2019t a big problem. But for longer bodies of text, this is a bit arduous. Also, as I said, lots of websites use a CMS and just dump the text into a template. What then? We can\u2019t expect our content creators to manually manicure a right rag based on these guidelines. In this instance, we really need things to be automatic.\n\nThere isn\u2019t any reason why we can\u2019t just pass the question of when to break the line straight to the browser by way of a script which compares the text against a set of rules. In plain English, this script could be to scan the text for:\n\n\n\tPrepositions. If found, add \u00a0 after them.\n\tDashes. If found, add \u00a0 after them.\n\tWords fewer than three characters long that aren\u2019t prepositions. 
If found, add \u00a0 after them.\n\tEmphasised phrases of up to three words in length. If found, add \u00a0 between all of the words.\n\n\nAnd there we have it.\n\nA note on fluidity\n\nAn important consideration of this script is that it doesn\u2019t scan the text to see what is at the end of a line. It just looks for prepositions, dashes, words fewer than three characters long, and emphasised words within paragraphs and applies the \u00a0 accordingly regardless of where the thing lives. This is because in a fluid layout a word might appear in the beginning, middle or the end of a line depending on the width of the browser. And we want it to behave in the right way when it does find itself at the end.\n\nSee it in action!\n\nMy friend and colleague, Nathan Ford, has written a small JavaScript called Ragadjust that does all of this automatically. The script loops through a webpage, compares the text against the conditions, and then inserts \u00a0 in the places that violate the conditions above.\n\nYou can get the script from GitHub and see it in action on my own website.\n\nSome caveats\n\nAs my friend Jon Tan says, \u201cThere are no rules in typography, just good or bad decisions\u201d, and typesetting the right rag is no different. \n\n\n\tThe guidelines for the violations above are useful for justified text, too. But we need to be careful here. Too stringent adherence to these violations could lead to ugly gaps in our words \u2014 called rivers \u2014 as the browser forces justification.\n\tThe violation regarding short words at the end of sentences is useful for longer line lengths, or measures, of text. When the measure gets shorter, maybe five or six words, then we need to be more forgiving as to what wraps to the next line and what doesn\u2019t. In fact, you can see this happening on my site where I\u2019ve not included a check on the size of the browser window (purposefully, for this demo, of course. Ahem).\n\tThis article is about applying these guidelines to English. Some of them will, no doubt, cross over to other languages quite well. But for those languages, like German for instance, where longer words tend to be in more frequent use, then some of the rules may result in a poor right rag.\n\n\nMarginal gains\n\nIn 2007, I spoke with Richard Rutter at SXSW on web typography. In that talk, Richard and I made a point that good typographic design \u2014 on the web, in print; anywhere, in fact \u2014 relies on small, measurable improvements across an entire body of work. From heading hierarchy to your grid system, every little bit helps. In and of themselves, these little things don\u2019t really mean that much. You may well have read this article, shrugged your shoulders and thought, \u201cHuh. So what?\u201d But these little things, when added up, make a difference. 
A difference between good typographic design and great typographic design.\n\n \n\nAppendix\n\nPreposition whitelist\n\naboard\nabout\nabove\nacross\nafter\nagainst\nalong\namid\namong\nanti\naround\nas\nat\nbefore\nbehind\nbelow\nbeneath\nbeside\nbesides\nbetween\nbeyond\nbut\nby\nconcerning\nconsidering\ndespite\ndown\nduring\nexcept\nexcepting\nexcluding\nfollowing\nfor\nfrom\nin\ninside\ninto\nlike\nminus\nnear\nof\noff\non\nonto\nopposite\noutside\nover\npast\nper\nplus\nregarding\nround\nsave\nsince\nthan\nthrough\nto\ntoward\ntowards\nunder\nunderneath\nunlike\nuntil\nup\nupon\nversus\nvia\nwith\nwithin\nwithout", "year": "2013", "author": "Mark Boulton", "author_slug": "markboulton", "published": "2013-12-24T00:00:00+00:00", "url": "https://24ways.org/2013/run-ragged/", "topic": "design"} {"rowid": 14, "title": "The Command Position Principle", "contents": "Living where I do, in a small village in rural North Wales, getting anywhere means driving along narrow country roads. Most of these are just about passable when two cars meet. \n\nIf you\u2019re driving too close to the centre of the road, when two drivers meet you stop, glare at each other and no one goes anywhere. Drive too close to your nearside and in summer you\u2019ll probably scratch your paintwork on the hedgerows, or in winter you\u2019ll sink your wheels into mud. \n\nDriving these lanes requires a balance between caring for your own vehicle and consideration for someone else\u2019s, but all too often, I\u2019ve seen drivers pushed towards the hedgerows and mud when someone who\u2019s inconsiderate drives too wide because they don\u2019t want to risk scratching their own paintwork or getting their wheels dirty.\n\nIf you learn to ride a motorcycle,\u00a0you\u2019ll be taught about the command position:\n\n\n\tApproximate central position, or any position from which the rider can exert control over invitation space either side.\n\n\nThe command position helps motorcyclists stay safe, because when they ride in the centre of their lane it prevents other people, usually car drivers, from driving alongside, either forcing them into the curb or potentially dangerously close to oncoming traffic. \n\nTaking the command position isn\u2019t about motorcyclists being aggressive, it\u2019s about them being confident. It\u2019s them knowing their rightful place on the road and communicating that through how they ride.\n\nI\u2019ve recently been trying to take that command position when driving my car on our lanes. When I see someone coming in the opposite direction, instead of instinctively moving closer to my nearside \u2014 and in so doing subconsciously invite them into my space on the road \u2014 I hold both my nerve and a central position in my lane. Since I done this I\u2019ve noticed that other drivers more often than not stay in their lane or pull closer to their nearside so we occupy equal space on the road. Although we both still need to watch our wing mirrors, neither of us gets our paint scratched or our wheels muddy.\n\nWe can apply this principle to business too, in particular to negotiations and the way we sell. Here\u2019s how we might do that.\n\nCommanding negotiations\n\nWhen a customer\u2019s been sold to well \u2014\u00a0more on that in just a moment \u2014 and they\u2019ve made the decision to buy, the thing that usually stands in the way of us doing business is a negotiation over price. Some people treat negotiations as the equivalent of driving wide. 
They act offensively, because their aim is to force the other person into getting less, usually in return for giving more.\n\nIn encounters like this, it\u2019s easy for us to act defensively. We might lack confidence in the price we ask for, or the value of the product or service we offer. We might compromise too early because of that. When that happens, there\u2019s a pretty good chance that we\u2019ll drive away with less than we deserve unless we use the command position principle to help us.\n\nBefore we start any negotiation it\u2019s important to know that both sides ultimately want to reach an agreement. This isn\u2019t always obvious. If one side isn\u2019t already committed, at least in principle, then it\u2019s not a negotiation at that point, it\u2019s something else. \n\nFor example, a prospective customer may be looking to learn our lowest price so that they can compare it to our competitors. When that\u2019s the case, we\u2019ve probably failed to qualify that prospect properly as, after all, who wants to be chosen simply because they\u2019re the cheapest? In this situation, negotiating is a waste of time since we don\u2019t yet know that it will result in us making a deal. We should enter into a negotiation only when we know where we stand. So ask confidently: \u201cAre you looking to [make a decision]?\u201d\n\nWhen that\u2019s been confirmed, it\u2019s down to everyone to compromise until a deal\u2019s been reached. That\u2019s because good negotiations aren\u2019t about one side beating the other, they\u2019re about achieving a good deal for both. Using the command position principle helps us to maintain control over our negotiating space and affords us the opportunity to give ground only if we need to and only when we\u2019re ready. It can also ensure that the person we\u2019re negotiating with gives up some of their space.\n\nCommanding sales\n\nIt\u2019s not always necessary to negotiate when we\u2019re doing a business deal, but we should always be prepared to sell. One of the most important parts of our sales process should be controlling when and how we tell someone our price. \n\nUnless it\u2019s impossible to avoid, don\u2019t work out a price for someone on the spot. When we do that we lose control over the time and place for presenting our price alongside the value factors that will contribute to the prospective customer accepting that price. For the same reason, never give a ballpark or, worse, a guesstimate figure. If the question of price comes up before we\u2019re fully prepared, we should say politely that we need more time to work out a meaningful cost. \n\nWhen we are ready, we shouldn\u2019t email a price for our prospective customer to read unaccompanied. Instead, create an opportunity to talk a prospect through our figures, demonstrate how we arrived at them and, most importantly, explain the value of what we\u2019re selling to their business. Agree a time and place to do this and, if possible, do it all face-to-face. \n\nWe shouldn\u2019t hesitate when we give someone a price. When we sound even the slightest bit unsure or apologetic, we give the impression that we\u2019ll be flexible in our position before negotiations have even begun.\n\nThink about the command position principle, know the price and present it confidently. That way we send a clear signal that we know our business and how we deal with people. 
The command position principle isn\u2019t about being cocky, it\u2019s about showing other people respect, asking for it in return and showing it to ourselves.\n\n \n\nEarlier, I mentioned selling well, because we sometimes hear people say that they dislike being sold to. In my experience, it\u2019s not that people dislike the sales process, it\u2019s that we dislike it done badly.\n\nTaking part in a good sales process, either by selling or being sold to, can be a pleasurable experience. Try to be confident \u2014 after all, we understand how our skills will benefit a customer better than anyone else. Our confidence will inspire confidence in others. \n\nSelf-confidence isn\u2019t the same as arrogance, just as the command position isn\u2019t the same as riding without consideration for others. The command position principle preserves others\u2019 space as well as our own. By the same token, we should be considerate of others\u2019 time and not waste it and our own by attempting to force them into buying something that\u2019s inappropriate.\n\nTo prevent this from happening, evaluate them well to ensure that they\u2019re the right customer for us. If they\u2019re not, let them go on their way. They\u2019ll thank us for it and may well become customers the next time we meet.\n\nThe business of closing a deal can be made an enjoyable experience for everyone if we take control by guiding someone through the sales process by asking the right questions to uncover their concerns, then allaying them by being knowledgeable and confident. This is riding in the command position.\n\nJust like demonstrating we know our rightful position on the road, knowing our rightful place in a business relationship and communicating that through how we deal with people will help everyone achieve an equitable balance. When that happens in business, as well as on the road, no one gets their paintwork scratched or their wheels muddy.", "year": "2013", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2013-12-23T00:00:00+00:00", "url": "https://24ways.org/2013/the-command-position-principle/", "topic": "business"} {"rowid": 17, "title": "Bringing Design and Research Closer Together", "contents": "The \u2018should designers be able to code\u2019 debate has raged for some time, but I\u2019m interested in another debate: should designers be able to research? \n\nAre you a designer who can do research? Good research and the insights you uncover inspire fresh ways of thinking and get your creative juices flowing. Good research brings clarity to a woolly brief. Audience insight helps sharpen your focus on what\u2019s really important. Experimentation through research and design brings a sense of playfulness and curiosity to your work. Good research helps you do good design.\n\nBeing a web designer today is pretty tough, particularly if you\u2019re a freelancer and work on your own. There are so many new ideas, approaches to workflow and trends and tools to keep up with. How do you decide which things to do and which to ignore? A modern web designer needs to be able to consider the needs of the audience, design appropriate IAs and layouts, choose colour palettes, pick appropriate typefaces and type layouts, wrangle with content, style, code, dabble in SEO, and the list goes on and on. Not only that, but today\u2019s web designer also has to keep up with the latest talking points in the industry: responsive design, Agile, accessibility, Sass, Git, lean UX, content first, mobile first, blah blah blah. 
Any good web designer doesn\u2019t need to be persuaded about the merits of including research in their toolkit, but do you really have time to include research too? \n\nWho is responsible for research?\n\nGenerally, research in the web industry forms part of other disciplines and isn\u2019t so much a discipline in its own right. It\u2019s very often thought of as part of UX, or activities that make up a process such as IA or content strategy. Research is often undertaken by UX designers, information architects or content strategists and isn\u2019t something designers or developers get that involved in. Some people lump all of these activities together and label it design research and have design researchers to do it. Some companies, such as the one I run with my husband Mark, are lucky enough to have someone with specialist research knowledge (yup, that would be me folks) who can lead all or most of the research work undertaken by the company. See also Mule Design, GOV.UK, the BBC, Mailchimp, Facebook and Twitter. \n\nWhat if you\u2019re not lucky enough to have your own researcher or team of researchers? Often research is the kind of thing that\u2019s nice to have, or it can be cut from scope when doing the budget dance with a client. It often forms part of the discovery phase of a project and sometimes just becomes a tick-box exercise. But research isn\u2019t just user testing and it shouldn\u2019t just live in a report on Basecamp that no one reads. I would argue that research and experimentation is a way of working or an approach to how you design. Research can be used during the whole design process and must be a vital part of a designer\u2019s workflow on every project. Even if you work in a small studio, you can still create a culture of audience insight. Even if you work on your own, you can still absorb yourself in as much audience data as you can throughout the project life cycle. Here\u2019s how.\n\nResearch is everyone\u2019s job\n\nThere is a subtle difference between writing a research report and delivering it to a client, and them actually using it and applying the insights to their thought process. In my experience of working in the audiences team at the BBC, research was most effective when the role was embedded in the production team and insights were used as part of the editorial process.\n\nIn this section I\u2019ll talk through some common problems you might encounter in a typical project life cycle and show you ways you can use research to help you. For the sake of this article, let\u2019s imagine that we\u2019re talking about a particular project here and not ongoing product development. The same principles can of course be applied then, but even if you work in-house rather than on the agency side, you\u2019re probably used to working on distinct projects or phases of work.\n\n1. Problem: I want to come up with a new product idea. \n\nSolution: Inspiration through insights.\n\nBefore you begin a new project, a good way of quickly absorbing all the existing knowledge that there maybe about a theme, product type or website is to literally surround yourself with it. This is especially relevant for new ideas or product development. Create an incident room if you can: fill the walls of your meeting room, the walls near your desk, or even just use a pinboard or online pinboard if space is tight or you\u2019re working with a dispersed team. The same process can be used throughout a project\u2019s or product life cycle \u2014 read about how MailChimp has applied this idea. 
\n\nLet\u2019s take a new product idea as an example. Say you wanted to develop a responsive tool for web designers but you weren\u2019t sure what aspect of responsive design to focus on. First of all, you should pose a hypothesis or problem statement to gather ideas around. For example: \u201cHow to speed up a designer\u2019s responsive workflow.\u201d You would then need to gather insights around this topic. You could run some interviews with freelance designers about how they work responsively. You could shadow a development team for the day to understand their processes. You could observe conversations on Twitter or IRC or wherever your target audience interact to see what people talk about. You could search out industry data and articles currently available.\n\nThe next stage is to comb through this data and extract insights from it. You can use good old Post-it notes and a sharpie: capture one insight or thought per Post-it. If one insight leads into another, use two Post-its. The objective is volume. Try to ensure clarity in each Post-it so you don\u2019t have to go back and reference material again (maybe you could use a key if you think it\u2019ll get confusing).\n\n\n\nAfter this, stick them all up and synthesise the same way you would for any kind of cluster or affinity sort. Organise into broad themes. These themes then become springboards for further exploration and idea generation. You might see a gap or opportunity in one particular area, both from a workflow perspective but also from a business perspective. Bingo. Your insights then become the fuel for ideas generation.\n\nThis method doesn\u2019t just have to be used for new products \u2014 it works particularly well in a discovery phase for new projects or for new features in an existing product. We\u2019re doing something similar for our own responsive tool, Gridset at the moment.\n\nResources:\n\n\n\tSticky Wisdom by Dave Allan, Matt Kingdon, Kris Murrin, Daz Rudkin\n\tThe Science of Serendipity by Matt Kingdon\n\tThe Art of Innovation by Tom Kelley\n\n\n2. Problem: You\u2019re starting a new project and need to know the basics before you get headlong into designing or building. \n\nSolution: Quantitative survey.\n\nCommon questions might be:\n\n\n\tWho are the users?\n\tHow many are there?\n\tWhat are they like?\n\tWhy do they use the site?\n\tWhat do they need from the site?\n\tWhat are their goals?\n\n\nPrint out and stick up what you already know and have in your project space or \u2018incident room\u2019: any reports you have found or been given, analytics graphs, personas, pen portraits, as well as screengrabs of the current website, product or branding. Spend time looking through it all and identify the gaps. \n\nIf you have very little existing audience data, a quick and easy way to get some baseline information is to run a quick user survey on a current website. You can establish basic demographic information, appreciation and views of the website as it stands, as well as delve a little deeper into needs and wants. This is also vital if you want some kind of trackable measures to go back to once you have designed and built your shiny new website for your client \u2014 read more in my article for 24 ways last year.)\n\n\n\nWe use surveys a lot at Mark Boulton Design for our client work. Here\u2019s a screen grab of one we ran in March on http://info.cern.ch before we redesigned the site and did the work on the First Website Project. 
We repeated the survey after the new website went live and were able to compare the results. Both surveys were a great source of insight to the project team as well as for the project stakeholders who needed to pitch the idea of the hack days and fundraise for them.\n\n\n\nOnce you\u2019ve run your survey, you should always write up a short summary for yourself and your client to refer to. If you\u2019re not a trained researcher, you should try to read up on analysis techniques or data visualisation. It can be easy to misinterpret data and make it bend to the story you are trying to tell. You should be looking for the story in the data and present it without bias. \n\nIf you\u2019re using the \u2018incident room\u2019 method I mentioned earlier on, you can also extract the insights onto post it notes and add them to your growing body of knowledge.\n\nResources: \n\n\n\tUsing Questionnaires for Design Research by Emma Boulton\n\tData-driven Design with an Annual Survey by Aarron Walter\n\tResearch Methods for Product Design by Alex Milton and Paul Rodgers\n\tA Practical Guide to Designing with Data by Brian Suda\n\n\n3. Problem: You have a prototype of a new design and you need some feedback from real users. \n\nSolution: User interviews and task based testing.\n\nInterviewing is a staple research method that every designer should master as it can be used throughout a project life cycle. Erika Hall recently wrote a great article on the basics for A List Apart. From stakeholder interviews in a discovery phase, to initial user research, right through to task based testing and iteration, interviews can be enormously helpful. They are very time-consuming, however, and although speaking to someone is better than speaking to no one, it\u2019s always better to plan to do a few interviews at once, rather than one or two. I generally find that patterns only start to emerge after I\u2019ve spoken to 4 or 5 people. Interviews are another thing we do a lot of at Mark Boulton Design. Most of the interviews we do are remote due to the location of our clients and their users. \n\n\n\nRigour is an important consideration in all research activities and especially if you\u2019re a non-researcher. Interviews particularly can be easily skewed by an inexperienced facilitator, which is why pairing can be a good approach. Building rapport, questioning, time keeping, note taking and thinking on your feet can be difficult to do all at once, so having a colleague take notes while you concentrate on leading the conversation can work really well. It\u2019s important for the note taker to sit in on more than one interview so that they get a more rounded view of the feedback. The same person should also be involved in the analysis of the data. \n\n\n\nInterviews can be analysed and written up in a report or summary as with other types of research. I often use the same kind of collaborative process detailed earlier for deciding on themes, particularly if multiple members of the team have been involved in interviewing. \n\nInterviews are particularly useful for our incident room and can provide much colour and insight to an exploratory process. I often find verbatim quotes to be the most insightful type of data. You might find that an inexperienced researcher (or designer who is used to solving problems) will jump to interpretation too soon and forget to just listen to what the interviewee is saying. 
Capturing the exact form of words a person uses can help get away from this.\n\nResources: \n\n\n\tInterviewing Humans by Erika Hall\n\tA Pocket Guide to Interviewing for Research by Andrew Travers\n\tInterviewing Users by Steve Portigal\n\n\n4. Problem: How successful have I been with this new design? \n\nSolution: Key performance indicators\n\nOnce your new design has been realised, it\u2019s important to evaluate it. What works, what doesn\u2019t work so well? As well as a straightforward design crit, don\u2019t forget to introduce audience insights into a review meeting or project wash up. \n\n\n\nWork out what your KPIs \u2014 your key performance indicators \u2014 will be beforehand and then you can start to track them over time. For example, number of visits, appreciation of the site, willingness to recommend the site to a friend, number of sales, and number of conversions are all sensible measures to track. Interviews can again be helpful but cold, hard numbers are often better here. Read Corey Vilhauer\u2019s take on this on A List Apart.\n\nConsistency is key here. If you have looked at your analytics and done a survey beforehand, you will have a baseline to start from. Don\u2019t keep changing your measures and questions, or your data will not be comparable. Pick a few key questions or a set of measures, create a survey and then run it once a month, once a quarter, every six months or annually. You\u2019ll start to see changes over time as the design beds in. You may see seasonal trends and spot patterns in the data related to other activities like marketing, promotion and so on. Keeping a record of all of this will increase your understanding of your audience. We\u2019ve created a satisfaction survey for Gridset with a number of measures that we track on an ongoing basis. MailChimp has also created an annual survey with the aim of tracking their audience measures over time\n\nResources:\n\n\n\tSearch Analytics by Louis Rosenfeld\n\tA Primer on A/B Testing by Lara Swanson\n\tLean UX by Jeff Gothelf\n\n\nAnyone can do research\n\nResearch can be brought into the project life cycle at any stage. And of course, anyone can do research \u2014 you don\u2019t need to be a researcher. Some of the main skills most designers possess are also key research skills: inquisitive nature, problem solving, playfulness, empathy, and so on.\n\nWe have a small team at Mark Boulton Design. Most of the team are designers and the rest of us focus on supporting the team and clients both in terms of billable work (research, content strategy, project management) as well as the non-billable things like finance and studio management.\n\nDespite my best intentions, in the past I\u2019ve undertaken research for clients in isolation \u2014 first being briefed by the design lead, carrying out the research and then delivering the findings back, trusting the design team to take the findings on board. This was often due to time and availability of resources.\n\nWe\u2019ve been trying hard to join up our processes and collaborate even more across the team. Undertaking heuristic or design reviews collaboratively; taking part in frequent critiques of our work and the work of others together; pairing a researcher and a designer to run interviews; workshopping results from interviews to come up with recommendations; working closely together on questionnaire design; shadowing each other on tasks that don\u2019t fall within our core skills. 
A little thing like moving our desks around has also helped us have more conversations that we can all be a part of.\n\n\n\nI\u2019ve come to the conclusion that my role as the research director at Mark Boulton Design is actually a facilitator of research. As well as carrying out research, I am responsible for ensuring that research happens consistently across the team. I am responsible for empowering and training our designers so they feel confident in carrying out their own user, audience or design research for clients. So they know what to look for, when to listen, when to probe and when to take note of something. So they know how to look for themes, how to synthesise insights from research and how to apply them to their work.\n\nBetter research leads to better design\n\nSo, are you a designer who can do research? Are you a researcher who can design? The best designers are a lucky combination of researcher and designer. If you\u2019re not one of those, look at ways of enhancing the skills you lack. Because there\u2019s no doubt in my mind, that becoming a better researcher will make you a better designer.\n\nGeneral resources: \n\n\n\tSeeing the Elephant by Louis Rosenfeld\n\tConnected UX by Aarron Walter\n\tBeyond Usability Testing by Devan Goldstein\n\tJust Enough Research by Erika Hall\n\tThe User Experience Team of One by Leah Buley\n\tUndercover User Experience Design by Cennydd Bowles and James Box\n\tA Pocket Guide to Psychology for Designers by Joe Leech\n\tA Pocket Guide to International User Research by Chui Chui Tan\n\tRemote Research by Nate Bolt and Tony Tulathimutte\n\tA Pocket Guide to Experiments for Designers by Colin McFarland", "year": "2013", "author": "Emma Boulton", "author_slug": "emmaboulton", "published": "2013-12-22T00:00:00+00:00", "url": "https://24ways.org/2013/bringing-design-and-research-closer-together/", "topic": "ux"} {"rowid": 5, "title": "Managing a Mind", "contents": "On 21 May 2013, I woke in a hospital bed feeling exhausted, disorientated and ashamed. The day before, I had tried to kill myself.\n\nIt\u2019s very hard to write about this and share it. It feels like I\u2019m opening up the deepest recesses of my soul and laying everything bare, but I think it\u2019s important we share this as a community. Since starting tentatively to write about my experience, I\u2019ve had many conversations about this: sharing with others; others sharing with me. I\u2019ve been surprised to discover how many people are suffering similarly, thinking that they\u2019re alone. They\u2019re not.\n\nDue to an insane schedule of teaching, writing, speaking, designing and just generally trying to keep up, I reached a point where my buffers completely overflowed. I was working so hard on so many things that I was struggling to maintain control. I was living life on fast-forward and my grasp on everything was slowly slipping.\n\nOn that day, I reached a low point \u2013 the lowest point of my life \u2013 and in that moment I could see only one way out. I surrendered. I can\u2019t really describe that moment. I\u2019m still grappling with it. All I know is that I couldn\u2019t take it any more and I gave up.\n\nI very nearly died.\n\nI\u2019m very fortunate to have survived. I was admitted to hospital, taken there unconscious in an ambulance. On waking, I felt overwhelmed with shame and overcome with remorse, but I was resolved to grasp the situation and address it. 
The experience has forced me to confront a great deal of issues in my life; it has also encouraged me to seek a deeper understanding of my situation and, in particular, the mechanics of the mind.\n\nThe relentless pace of change\n\nWe work in a fast-paced industry: few others, if any, confront the daily challenges we face. The landscape we work within is characterised by constant flux. It\u2019s changing and evolving at a rate we have never experienced before. Few industries reinvent themselves yearly, monthly, weekly\u2026 Ours is one of these industries. Technology accelerates at an alarming rate and keeping abreast of this change is challenging, to say the least.\n\nAs designers it can be difficult to maintain a knowledge bank that is relevant and fit for purpose. We\u2019re on a constant rollercoaster of endless learning, trying to maintain the pace as, daily, new ideas and innovations emerge \u2014 in some cases fundamentally changing our medium.\n\nUnder the pressure of client work or product design and development, it can be difficult to find the time to focus on learning the new skills we need to remain relevant and functionally competent. The result, all too often, is that the edges of our days have eroded. We no longer work nine to five; instead we work eight to six, and after the working day is over we regroup to spend our evenings learning. It\u2019s an unsustainable situation.\n\nFrom the workshop to the web\n\nAdded to this pressure to keep up, our work is now undertaken under a global gaze, conducted under an ever-present spotlight. Tools like Dribbble, Twitter and others, while incredibly powerful, have an unfortunate side effect, that of unfolding your ideas in public. This shift, from workshop to web, brings with it additional pressure.\n\nIn the past, the early stages of creativity took place within the relative safety of the workshop, an environment where one could take risks and gather feedback from a trusted few. We had space to make and space to break. No more. Our industry\u2019s focus (and society\u2019s focus) on sharing, leads us now to play out our decisions in public. This shift has changed us culturally, slowly but surely easing every aspect of our process \u2013 and lives \u2013 from private to public. This is at once liberating and debilitating.\n\nIf you\u2019re not careful, an addiction to followers, likes, retweets, page views and other forms of measurement can overwhelm you. When you release your work into the wild and all it\u2019s greeted with is silence, it can cripple you.\n\nReflecting on this, in an insightful article titled Derailed, Rogie King asks, \u201cCan social popularity take us off the course of growth and where we were intended to go?\u201d He makes a powerful point, that perhaps we might focus on what really matters, setting aside statistics. He concludes that to grow as practitioners we might be best served by seeking out critique through other avenues, away from the social spotlight.\n\nOn status anxiety and impostor syndrome\n\nFollowing my experience I embarked on a period of self-reflection. I wanted to discover what had driven me to take the course of action I had. I wanted to ensure it never happened again. 
I wanted to understand how the mind works and, in so doing, learn a little more about myself.\n\nI\u2019ve only begun this journey, but two things I discovered resonated with me: the twin pressures of status anxiety and impostor syndrome.\n\nIn his excellent book Status Anxiety, the philosopher Alain de Botton explores a growing concern with status anxiety, a worry about how others perceive us and how this shapes our relationship with the world. He states:\n\n\n\tWe all worry about what others think of us. We all long to succeed and fear failure. We all suffer \u2013 to a greater or lesser degree, usually privately and with embarrassment \u2013 from status anxiety. [\u2026] This is an almost universal anxiety that rarely gets mentioned directly: an anxiety about what others think of us; about whether we\u2019re judged a success or a failure, a winner or a loser.\n\n\nWe see these pressures played out and amplified in the social sphere we all inhabit. We are social animals and we cannot help but react to the landscape we live and work within. Even if your work receives the public praise you so secretly desire, you find yourself questioning this praise.\n\nA psychological phenomenon in which sufferers are unable to internalise their accomplishments, impostor syndrome is far more widespread than you\u2019d imagine. The author Leigh Buchanan describes it as \u201cA fear that one is not as smart or capable as others think.\u201d As she puts it, \u201cPeople who feel like frauds chalk up their accomplishments to external factors such as luck and timing, or worry they are coasting on charm and personality rather than on talent.\u201d\n\nAt the bottom, this was all I could see. I felt overwhelmed by others\u2019 perception of me. Was I a success or a failure? Would I be discovered as the fraud I\u2019d convinced myself that I was? These twin pressures \u2013 that I was unconscious of at the time \u2013 had led me to a place of crippling self-doubt, questioning my very existence.\n\nThe act of discovery, of investigating how the mind functions, led me to a deeper understanding of myself. Developing an awareness of psychology and learning about conditions like status anxiety and impostor syndrome helped me to understand and recognise how my mind worked, enabling me to manage it more effectively.\n\n\n\nMake it count\n\nReflecting upon my experience, I began to regroup, to focus on what really mattered. I\u2019d taken on too much \u2014 as I believe many of us do. I was guilty of wanting to do all the things. I started to introduce pauses. Before blindly saying yes to everything, I forced myself to pause and ask: \u201cIs this important?\u201d\n\nOur community offers us huge benefits, but an always-on culture in which we\u2019re bombarded daily by opportunity places temptation in our paths. It\u2019s easy to get sucked into a vortex of wanting to be a part of everything. It\u2019s important, however, to focus. As Simon Collison puts it:\n\n\n\tI cull and surrender topics. Then I focus on my strengths, mastering my core skills.\n\n\nWe only have so much time and we can only do so much. It\u2019s impossible, indeed futile, to try to do everything.
Sometimes we need to step back a little and just enjoy life, enjoy others\u2019 achievements, without feeling the need to be actively involved ourselves.\n\nAs Mahatma Gandhi put it:\n\nA \u2018no\u2019 uttered from deepest conviction is better and greater than a \u2018yes\u2019 merely uttered to please, or what is worse, to avoid trouble.\nYoung India, volume 9, 1927\n\n\nWe need to learn to say no a little more often. We need to focus on the work that matters. This, coupled with an understanding of the mind and how it works, can help us achieve a happier balance between work and life.\n\nDon\u2019t waste your time. You only have one life. Make it count.", "year": "2013", "author": "Christopher Murphy", "author_slug": "christophermurphy", "published": "2013-12-21T00:00:00+00:00", "url": "https://24ways.org/2013/managing-a-mind/", "topic": "process"} {"rowid": 12, "title": "Untangling Web Typography", "contents": "When I was a carpenter, I noticed how homeowners often had this deer-in-the-headlights look when the contractor I worked for would ask them to make tons of decisions, seemingly all at once.\n\nSquare or subway tile? Glass or ceramic? Traditional or modern trim details? Flat face or picture frame cabinets? Real wood or laminate flooring? Every day the decisions piled up and were usually made in the context of that room, or that part of that room. Rarely did the homeowner have the benefit of taking that particular decision in full view of the larger context of the project. And architectural plans? Sure, they lay out the broad strokes, but there is still so much to decide.\n\nTypography is similar. Designers try to make sites that are easy to use and understand visually. They labour over the details of line height, font size, line length, and font weights. They consider the relative merits of different typographical scales for applications versus content-driven sites. Frequently, designers consider all of this in the context of one page, feature, or view of an application. They are asked to make a million tiny decisions.\n\nSometimes designers just bump up the font size until it looks right.\n\nI don\u2019t see anything wrong with that. Instincts are important. Designing in context is easier. It\u2019s OK to leave the big picture until later. Design a bunch of things, and then look for the patterns. You can\u2019t always know everything up front. How does the current feature relate to all the other features on the site? For a large site, just like for a substantial remodel, the number of decisions you would need to internalize to make that knowable would be prohibitively large.\n\nWhen typography goes awry\n\nI should be honest. I know very little about typography. I struggle to understand vertical rhythm, and the math in Tim Ahrens\u2019s talks about the interaction between type design and rendering technology kind of melted my brain. I have an unusual perspective because I\u2019m not the one making the design decisions, but I am the one implementing them and often cleaning up when a project goes off the rails.\n\nI\u2019ve seen projects with thousands of font-size declarations and headings. One project even had over ten thousand margin declarations. So while I appreciate creative exploration, I\u2019m also eager to establish patterns in typography and make sure we aren\u2019t choosing not to choose. Or, choosing all the things.\n\nAnalyzing a site\u2019s typography\n\nMost of my projects start out with an evaluation of the client\u2019s existing CSS.
I look for duplication in the CSS by using Grep, though functionality is landing soon in CSS Lint to do the same thing automatically. The goal is to find the underlying missing abstractions that, once in place, would allow developers to create new functionality without needing to write additional CSS. In addition to that, my team and I would comb through each site (generally, around ten pages is enough to get the big picture), and take screenshots of each of the components we found.\n\nIn this way, we could look for subtle visual differences that were unlikely to add value to the user. By correcting these differences, we could help make the design more consistent, and at the same time the code leaner and more performant. Typography is much like a homeowner who chooses to incorporate too many disparate design elements, pairing a mid-century modern sofa with flowered country cottage curtains. Often the typography of a site ends up collecting an endless array of new typefaces as the site\u2019s overall styles evolve. Designers come and go on a project, and eventually no one can remember how the 16px Verdana got into the codebase.\n\nAutomation\n\nWe used to do this work by hand. It was incredibly tedious. We\u2019d go through the site, taking screenshots and meticulously documenting the style information we found. We didn\u2019t have to do that many times before it became incredibly clear that the task needed to be automated. So we built a little tool called the Type-o-matic that could do it for us.\n\nTo try it on your site:\n\n\n\tDownload and install the Firebug extension to Firefox\n\tDownload and install the Type-o-matic extension to Firebug (I know, I fully intend to port it to Chrome)\n\tNow, visit the site you\u2019d like to test\n\tRight click and choose Inspect element with Firebug\n\tNow click on the Typography tab\n\tClick Persist\n\tClick Generate Report\n\tChoose which pages to analyze (we\u2019ve found that ten is a good number to get the big picture, but you can analyze as many as you\u2019d like\u200a\u2014\u200ait will even work on just one page!)\n\tNow navigate to other pages, and on each subsequent page, click Generate Report\n\tThe table of results can be a bit difficult to interact with, so you can always click Copy to clipboard, and copy the results (JSON).\n\n\n \n \nA screenshot of Type-o-matic in action\n\n\nWhat does this data mean?\n\nWhen you\u2019ve analyzed as many pages or different views as you\u2019d like, you\u2019ll start to see some interesting patterns emerge in the data. In the right-hand column, you\u2019ll see examples of how each kind of typography we found has been used in a real context on your site. It is organized by color and then by size so you can easily see how you are using typography.\n\nThe next thing you\u2019ll want to take a look at is in the first column, called \u201cCount\u201d. We\u2019ve counted how many times you\u2019ve used each combination of typographical styles. This can be incredibly helpful when deciding which styles were intentional, versus one-off color pick errors or experiments that never got removed from the code base. If you\u2019ve used one color blue 1,400 times, and another just 23, it\u2019s pretty obvious which is more in line with broader site-wide styles.\n\nConsistency before perfection\n\nIt can be really tempting to try to make everything perfect\u200a\u2014\u200ato try to make every decision final. 
When you use the data you can collect from this tool, I\u2019d recommend trying to get to consistent before you try to make things perfect. Stop using fifteen different shades of blue type first, and then if you want to change to a new blue, go for it! You\u2019ll be able to make design changes much more easily once you\u2019ve reduced the total number of typographical styles you rely on.\n\nLower the importance of the decisions you are making. Our sites, like ourselves, are always a work in progress. Or, as a carpenter I used to work with said, \u201cYou\u2019re not building a fucking piano.\u201d We\u2019re not building houses. We can choose one typeface today and a different one tomorrow. It is OK to experiment. Be brave.", "year": "2013", "author": "Nicole Sullivan", "author_slug": "nicolesullivan", "published": "2013-12-20T00:00:00+00:00", "url": "https://24ways.org/2013/untangling-web-typography/", "topic": "design"} {"rowid": 9, "title": "How to Write a Book", "contents": "Were you recently inspired to write a book after reading Owen Gregory\u2019s compendium of author insights? Maybe so inspired to strike out on your own and self-publish? \n\nBased on personal experience, writing a book is hard. It requires a great deal of research, experience, and patience. To be able to consolidate your thoughts and what you\u2019ve learned into a sensible and readable tome is an admirable feat. To decide to self-publish and take on yourself all of the design, printing, distribution, and so much more is tantamount to insanity. Again, based on personal experience.\n\nSo, why might you want to self-publish?\n\nIf you\u2019ve spent many a late night doing cross-browser testing just to know that your site works flawlessly in twenty-four different browsers \u2014 including Mosaic, of course \u2014 then maybe you\u2019ll understand the fun that comes from doing it all.\n\nWorking with a publisher, you\u2019re left to focus on one core thing: writing. That\u2019s a good thing. A good publisher has the right resources to help you get your idea polished and the distribution network to get your book on store shelves around the world. It\u2019s a very proud moment to be able to walk into a book store and see your book sitting there on the shelf.\n\nSelf-publishing can also be a wonderful process as you get to own it from beginning to end. Every decision is yours and if you\u2019re a control freak like me, this can be a very rewarding experience. \n\nWhile there are many aspects to self-publishing, I\u2019m going to speak to just one of them: creating an ebook.\n\nFormats \n\nIn creating an ebook, you first need to decide what formats you wish to support. There are three main formats, each with their own pros and cons:\n\n\n\tPDF\n\tEPUB\n\tMOBI\n\n\nPDFs are supported on almost every device (Windows, Mac, Kindle, iPad, Android, etc.) and can even be a stepping stone to creating a print version of your book. PDFs allow for full typographic and design control, but at the cost of needing to fit things into a predefined page layout. Is it US Letter or A4? Or is it a format that isn\u2019t easily printed by readers on their home printers?\n\nEPUB is a more fluid format that is supported by the Apple iPad, iPhone, and now on the desktop with OS X Mavericks. It\u2019s also supported by Google Play for Android devices. While EPUB is supported on other devices, you\u2019re likely to choose EPUB because you\u2019re targeting your book at the Apple audience. 
The EPUB format is HTML-based with support for some CSS and even video and interactive elements. You can create very rich and exciting experiences using the EPUB format that just aren\u2019t possible with PDF or MOBI. However, if you decide to support multiple file formats, you\u2019ll likely find \u2014 as I did \u2014 that a consistent experience between all formats is easier to build and maintain, and therefore the extra benefits of interactivity go out the window.\n\nMOBI is a format originally developed for the Mobipocket Reader but more popularly supported by the Amazon Kindle. If you\u2019re looking to attract the Kindle audience or publish to Amazon via the Kindle Direct Publishing platform, then the HTML-based MOBI format is the format you\u2019ll want to go with. \n\nDistribution will probably factor heavily into which format you decide to go with. Many people I know who self-publish go with PDF only due to its ubiquity. \n\nIf you want to garner a wider audience by distributing via Amazon or the iBookstore then you\u2019ll need to think about supporting all three formats (as I did).\n\nWhat tools should I use?\n\nI spent a lot of time figuring out the right toolset and finally got something that suits me just right.\n\nIn the past, when working with a publisher, I was given a Microsoft Word template that was passed back and forth between myself, the editor, and tech reviewer. This template has been the bane of any book writer that I\u2019ve spoken to. Not every publisher is like that, though. Some publishers, like O\u2019Reilly, use DocBook, an XML-based format that can be converted into PDF, EPUB, and MOBI.\n\nPublishers already have a style guide and whether it\u2019s DocBook or a Word template, they have the tools already in place to easily convert your work into multiple formats.\n\nSelf-publishing means that you\u2019ll likely have to do a lot of tweaking to get things looking and working the way you want them to. I tried DocBook and the open source export tools didn\u2019t create HTML to my liking. Fixing even the most mundane things required fiddling with XSL transformations for hours on end. Not the way I like to spend my time. I can only imagine the hoops I would\u2019ve had to go through to get a PDF to look half-decent.\n\nTools like Pages or Scrivener offer up the ability to publish to multiple formats, too, but none offered me the control over the output that I truly desired. Have I mentioned that I\u2019m a control freak?\n\nI ended up writing my book using a technology that I already knew quite well: HTML. By writing in HTML, I already had something that I could post on my website, use for the EPUB and use for the MOBI format. All without having to change a thing. (That\u2019s right: the same HTML that is used on SMACSS.com is used in the EPUB and is used in the MOBI.)\n\nWhat about PDF? I could open up the HTML in a web browser, choose Save as PDF and be done with it, but let\u2019s face it: the filename and date attached to every single page doesn\u2019t exactly scream professional. Web browsers actually do a surprisingly poor job of supporting the CSS paged media spec.\n\nI had resorted to copying and pasting the content into Pages and saving as PDF from there. It wasn\u2019t elegant but it worked. However, any changes to my HTML source required redoing those changes in Pages, as well. \n\nThen I met my Prince Charming: Prince XML. It\u2019s pricey but it works incredibly well.
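\n\nTo make that concrete, here\u2019s roughly what driving Prince looks like. This is a hedged sketch of my own rather than the actual files behind the book, with book.html and pdf.css standing in for your real source and print stylesheet:\n\nprince book.html -s pdf.css -o book.pdf\n\nAnd the sort of paged media CSS Prince understands might look something like this (the page size and margins here are just examples):\n\n@page {\n size: 148mm 210mm;\n margin: 20mm;\n @bottom-center {\n content: counter(page);\n }\n}\n\n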
It takes HTML and CSS (that very format I\u2019ve been using for all of my other file formats) and will generate a PDF via a command line interface. Prince supports CSS paged media including headers, footers, page counts, and alternating page styles. \n\nFrom one format, HTML, I can now easily publish to PDF, MOBI, and EPUB, and even my website. I use the PDF version to send to the printer along with cover art to be bound and ready to ship around the world. It\u2019s amazing how versatile HTML (and CSS) is.\n\nTo learn more about writing books with HTML and CSS, I recommend reading Building Books with CSS3 over at A List Apart.\n\nCreating an EPUB\n\nLet\u2019s take a step back. Prince gets us from HTML to PDF but how do we make an EPUB out of the HTML? \n\nAn EPUB file is essentially a ZIP file with a renamed extension. There are some core files that you need to start with:\n\nRoot\n META-INF\n container.xml\n mimetype\n content.opf\n toc.ncx\n\nAfter that, you can start adding your content to the project. Be sure to update the toc.ncx (Table of Contents) and content.opf (the ebook manifest) with any changes you make to your project.\n\nYou can learn more about the file formats with the EPUB Format Construction Guide.\n\nOnce all your files are in place, you\u2019ll need to create the EPUB file by running two commands (on OS X, at least):\n\nzip -X0 your-ebook.epub mimetype\nzip -Xur9D your-ebook.epub *\n\nThe mimetype needs to be the first file inside the ZIP file and therefore gets added first. Then, the rest of the files are added. \n\nI\u2019ve added a function to my .bash_profile to make this even easier:\n\nfunction epub()\n{\n zip -q0X $@ mimetype; zip -qXr9D $@ *\n}\n\nThen, within the folder from which I want to create an ebook, I just run epub your-ebook.epub from the Terminal command line and the EPUB file should be ready to go.\n\nCreating the MOBI\n\nWe have our EPUB and we have our PDF. The last step is the MOBI file. For this, I call upon Calibre. Calibre can be used as a reader and as a library but I use it exclusively to export my EPUB files to MOBI. \n\nCalibre includes a command line utility to convert from EPUB to MOBI. (To install the command line tools, go to Preferences > Advanced > Miscellaneous and click Install Command Line Tools.)\n\nebook-convert your-ebook.epub your-ebook.mobi \n\nSpread the joy\n\nNow that you have all of your different file formats, you need to get them into the hands of people who want to (ho-ho-hopefully) buy your book!\n\nThere are a number of marketplaces such as Amazon\u2019s Kindle Direct Publishing, iBookstore, Google Play, and NOOK Press.\n\nSome publishers, like PragProg and O\u2019Reilly will also add self-published books to their roster if they feel it\u2019s a good fit for their audience.\n\nWith any distribution, you\u2019ll have to give up a percentage of your sales\u2014from 30% to 70% of each sale, so consider your options wisely.\n\nOf course, you can always open your own online store and reap as much of the revenue as possible, assuming you can get the traffic to your site. 
Handling your own distribution allows you to create a deeper one-on-one connection with your customers, something that is impossible with other distribution channels since you don\u2019t get customer information through other services\u2014even though you are giving them a huge chunk of your sales!\n\nGo forth and prosper\n\nThere\u2019s a lot of thought and time that goes into writing a book and just as much thought and time can go into creating, publishing, and marketing your book once you\u2019re done. \n\nIn the end, self-publishing can be a very rewarding process and well worth the time that goes into it.", "year": "2013", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2013-12-19T00:00:00+00:00", "url": "https://24ways.org/2013/how-to-write-a-book/", "topic": "content"} {"rowid": 7, "title": "Get Started With GitHub Pages (Plus Bonus Jekyll)", "contents": "After several failed attempts at getting set up with GitHub Pages, I vowed that if I ever figured out how to do it, I\u2019d write it up. Fortunately, I did eventually figure it out, so here is my write-up.\n\nWhy I think GitHub Pages is cool\n\nNormally when you host stuff on GitHub, you\u2019re just storing your files there. If you push site files, what you\u2019re storing is the code, and when you view a file, you\u2019re viewing the code rather than the output. What GitHub Pages lets you do is store those files, and if they\u2019re HTML files, you can view them like any other website, so there\u2019s no need to host them separately yourself.\n\nGitHub Pages accepts static HTML but can\u2019t execute languages like PHP, or use a database in the way you\u2019re probably used to, so you\u2019ll need to output static HTML files. This is where templating tools such as Jekyll come in, which I\u2019ll talk about later.\n\nThe main benefit of GitHub Pages is ease of collaboration. Changes you make in the repository are automatically synced, so if your site\u2019s hosted on GitHub, it\u2019s as up-to-date as your GitHub repository. This really appeals to me because when I just want to quickly get something set up, I don\u2019t want to mess around with hosting; and when people submit a pull request, I want that change to be visible as soon as I merge it without having to set up web hooks.\n\nBefore you get started\n\nIf you\u2019ve used GitHub before, already have an account and know the basics like how to set up a repository and clone it to your computer, you\u2019re good to go. If not, I recommend getting familiar with that first. The GitHub site has extensive documentation on getting started, and if you\u2019re not a fan of using the command line, the official GitHub apps for Mac and Windows are great.\n\nI also found this tutorial about GitHub Pages by Thinkful really useful, and it contains details on how to turn an existing repository into a GitHub Pages site.\n\nAlthough this involves a bit of using the command line, it\u2019s minimal, and I\u2019ll guide you through the basics.\n\nSetting up GitHub Pages\n\nFor this demo we\u2019re going to build a Christmas recipe site \u2014 nothing complex, just a place to store recipes so we can share them with people, and they can fork or suggest changes to ones they like. 
My GitHub username is maban, and the project I\u2019ve set up is called christmas-recipes, so once I\u2019ve set up GitHub Pages, the site can be found here: http://maban.github.io/christmas-recipes/\n\nYou can set up a custom domain, but by default, the URL for your GitHub Pages site is your-username.github.io/your-project-name.\n\nSet up the repository\n\nThe first thing we\u2019re going to do is create a new GitHub repository, in exactly the same way as normal, and clone it to the computer. Let\u2019s give it the name christmas-recipes. There\u2019s nothing in it at the moment, but that\u2019s OK.\n\n\n\nAfter setting up the repository on the GitHub website, I cloned it to my computer in my Sites folder using the GitHub app (you can clone it somewhere else, if you want), and now I have a local repository synced with the remote one on GitHub.\n\nNavigate to the repository\n\nNow let\u2019s open up the command line and navigate to the local repository. The easiest way to do this in Terminal is by typing cd and dragging and dropping the folder into the terminal window and pressing Return. You can refer to Chris Coyier\u2019s GIF illustrating this very same thing, from last week\u2019s 24 ways article \u201cGrunt for People Who Think Things Like Grunt are Weird and Hard\u201d (which is excellent).\n\nSo, for me, that\u2019s\u2026\n\ncd /Users/Anna/Sites/christmas-recipes \n\nCreate a special GitHub Pages branch\n\nSo far we haven\u2019t done anything different from setting up a regular repository, but here\u2019s where things change.\n\nNow we\u2019re in the right place, let\u2019s create a gh-pages branch. This tells GitHub that this is a special branch, and to treat the contents of it differently.\n\nMake sure you\u2019re still in the christmas-recipes directory, and type this command to create the gh-pages branch:\n\ngit checkout --orphan gh-pages\n\nThat --orphan option might be new to you. An orphaned branch is an empty branch that\u2019s disconnected from the branch it was created off, and it starts with no commits, making it a special standalone branch. checkout switches us from the branch we were on to that branch.\n\nIf all\u2019s gone well, we\u2019ll get a message saying Switched to a new branch \u2018gh-pages\u2019.\n\nYou may get an error message saying you don\u2019t have admin privileges, in which case you\u2019ll need to type sudo at the start of that command.\n\nMake gh-pages your default branch (optional)\n\nThe gh-pages branch is separate to the master branch, but by default, the master branch is what will show up if we go to our repository\u2019s URL on GitHub. To change this, go to the repository settings and select gh-pages as the default branch.\n\n\n\nIf, like me, you only want the one branch, you can delete the master branch by following Oli Studholme\u2019s tutorial. 
It\u2019s actually really easy to do, and means you only have to worry about keeping one branch up to date.\n\nIf you prefer to work from master but push updates to the gh-pages branch, Lea Verou has written up a quick tutorial on how to do this, and it basically involves working from the master branch, and using git rebase to bring one branch up to date with another.\n\nAt the moment, we\u2019ve only got that branch on the local machine, and it\u2019s empty, so to be able to see something at our special GitHub Pages URL, we\u2019ll need to create a page and push it to the remote repository.\n\nMake a page\n\nOpen up your favourite text editor, create a file called index.html in your christmas-recipes folder, and put some exciting text in it. Don\u2019t worry about the markup: all we need is text because right now we\u2019re just checking it works.\n\n\n\nNow, let\u2019s commit and push our changes. You can do that in the command line if you\u2019re comfortable with that, or you can do it via the GitHub app. Don\u2019t forget to add a useful commit message.\n\n\n\nNow we\u2019re ready to see our gorgeous new site! Go to your-username.github.io/your-project-name and, hopefully, you\u2019ll see your first GitHub Pages site. If not, don\u2019t panic, it can take up to ten minutes to publish, so you could make a quick cake in a cup while you wait.\n\nAfter a short wait, our page should be live! Hopefully that wasn\u2019t too traumatic. Now we know it works, we can add some proper markup and CSS and even some more pages.\n\nIf you\u2019re feeling brave, how about we take it to the next level\u2026\n\nSetting up Jekyll\n\nSince GitHub Pages can\u2019t execute languages like PHP, we need to give it static HTML files. This is fine if there are only a few pages, but soon we\u2019ll start to miss things like PHP includes for content that\u2019s the same on every page, like headers and footers.\n\nJekyll helps set up templates and turn them into static HTML. It separates markup from content, and makes it a lot easier for people to edit pages collaboratively. With our recipe site, we want to make it really easy for people to fix typos or add notes, without having to understand PHP. Also, there\u2019s the added benefit that static HTML pages load really fast.\n\nJekyll isn\u2019t officially supported on Windows, but it is still possible to run it if you\u2019re prepared to get your hands dirty.\n\nInstall Jekyll\n\nBack in Terminal, we\u2019re going to install Jekyll\u2026\n\ngem install jekyll\n\n\u2026and wait for the script to run. This might take a few moments. It might take so long that you get worried its broken, but resist the urge to start mashing your keyboard like I did.\n\nGet Jekyll to run on the repository\n\nFingers crossed nothing has gone wrong so far. If something did go wrong, don\u2019t give up! Check this helpful post by Andy Taylor \u2013 you probably just need to install something else first. \n\nNow we\u2019re going to tell Jekyll to set up a new project in the repository, which is in my Sites folder (yours may be in a different place). Remember, we can drag the directory into the terminal window after the command.\n\njekyll new /Users/Anna/Sites/christmas-recipes\n\nIf everything went as expected, we should get this error message: Conflict: /Users/Anna/Sites/christmas-recipes exists and is not empty.\n\nBut that\u2019s OK. It\u2019s just upset because we\u2019ve got that index.html file and possibly also a README.md in there that we made earlier. 
So move those onto your desktop for the moment and run the command again.\n\njekyll new /Users/Anna/Sites/christmas-recipes\n\nIt should say that the site has installed.\n\nCheck you\u2019re in the repository, and if you\u2019re not, navigate to it by typing cd and dragging the christmas-recipes directory into the terminal window\u2026\n\ncd /Users/Anna/Sites/christmas-recipes\n\n\u2026and type this command to tell Jekyll to run:\n\njekyll serve --watch\n\nBy adding --watch at the end, we\u2019re forcing Jekyll to rebuild the site every time we hit Save, so we don\u2019t have to keep telling it to update every time we want to view the changes. We\u2019ll need to run this every time we start work on the project, otherwise changes won\u2019t be applied. For now, wait while it does its thing. \n\nUpdate the config file\n\nWhen it\u2019s finished, we\u2019ll see the text \u201cpress ctrl-c to stop\u201d. Don\u2019t do that, though. Instead, open up the directory. You\u2019ll notice some new files and folders in there. There\u2019s one called _site, and that\u2019s where all the site files are saved when they\u2019re turned into static HTML. Don\u2019t touch the files in here \u2014 they\u2019re the generated files and will get overwritten every time we make changes to pages and layouts.\n\nThere\u2019s a file in our directory called _config.yml. This has some settings we can change, one of them being what our base URL is. GitHub Pages will assume the base URL is above the project repository, so changing the settings here will help further down the line when setting up navigation links.\n\nReplace the contents of the _config.yml file with this:\n\nname: Christmas Recipes\nmarkdown: redcarpet\npygments: true\nbaseurl: /christmas-recipes\n\nSet up your files\n\nOverwrite the index.html file in the root with the one we made earlier (you might want to pop the README.md back in there, too). \n\nDelete the files in the css folder \u2014 we\u2019ll add our own later.\n\nView the Jekyll site\n\nOpen up your favourite browser and type http://localhost:4000/christmas-recipes in the address bar.\n\n\n\nCheck it out, that\u2019s your site! But it could do with a bit more love.\n\nSet up the _includes files\n\nIt\u2019s always useful to be able to pull in snippets of content onto pages, such as the header and footer, so they only need to be updated in one place. That\u2019s what an _includes folder is for in Jekyll. Create a folder in the root called _includes, and within it add two files: head.html and foot.html. \n\nIn head.html, paste the following:\n\n<!DOCTYPE html>\n<html>\n <head>\n <meta charset=\"utf-8\">\n <title>{{ page.title }}</title>\n <link rel=\"stylesheet\" href=\"{{ site.baseurl }}/css/main.css\" >\n </head>\n <body>\n\nand in foot.html:\n\n</body>\n</html>\n\nWhenever we want to pull in something from the _includes folder, we can use {% include filename.html %} in the layout file \u2014 I\u2019ll show you how to set that up in the next step.\n\nMaking layouts\n\nIn our directory, there\u2019s a folder called _layouts and this lets us create a reusable template for pages. Inside that is a default.html file. \n\nDelete everything in default.html and paste in this instead:\n\n{% include head.html %}\n\n <h1>{{ page.title }}</h1>\n\n {{ content }}\n\n{% include foot.html %}\n\nThat\u2019s a very basic page with a header, footer, page title and some content.
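\n\nBefore we wire a page up to this layout, it can help to picture what Jekyll will do with it. Here\u2019s a rough sketch (an assumption of mine, not output copied from a real _site folder) of the kind of HTML Jekyll generates for a page that uses this layout, once the includes and Liquid tags have been resolved:\n\n<!DOCTYPE html>\n<html>\n <head>\n <meta charset=\"utf-8\">\n <title>Home</title>\n <link rel=\"stylesheet\" href=\"/christmas-recipes/css/main.css\" >\n </head>\n <body>\n\n <h1>Home</h1>\n\n <p>Welcome to the Christmas recipe site.</p>\n\n</body>\n</html>\n\nThe head.html and foot.html snippets are dropped in where the include tags were, {{ page.title }} becomes whatever title we give the page (we\u2019ll set that in the next step), and {{ content }} is replaced with the page\u2019s own content.\n\n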
To apply this template to a page, go back into the index.html page and add this snippet to the very top of the file:\n\n---\nlayout: default\ntitle: Home\n---\n\nNow save the index.html file and hit Refresh in the browser. We should see a heading where {{ page.title }} was in the layout, which matches what comes after title: on the page itself (in this case, Home). So, if we wanted a subheading to appear on every page, we could add {{ page.subheading }} to where we want it to appear in our layout file, and a line that says subheading: This is a subheading in between the dashes at the top of the page itself.\n\nUsing Markdown for templates\n\nAnything on a page that sits under the closing dashes is output where {{ content }} appears in the template file. At the moment, this is being output as HTML, but we can use Markdown instead, and Jekyll will convert that into HTML. For this recipe site, we want to make it as easy as possible for people to be able to collaborate, and also have the markup separate from the content, so let\u2019s use Markdown instead of HTML for the recipes.\n\nTelling a page to use Markdown instead of HTML is incredibly simple. All we need to do is change the filename from .html to .md, so let\u2019s rename the index.html to index.md. Now we can use Markdown, and Jekyll will output that as HTML.\n\nCreate a new layout\n\nWe\u2019re going to create a new layout called recipe which is going to be the template for any recipe page we create. Let\u2019s keep it super simple.\n\nIn the _layouts folder, create a file called recipe.html and paste in this:\n\n{% include head.html %}\n\n\t<main>\n\n \t<h1>{{ page.title }}</h1>\n\n \t{{ content }}\n\n \t<p>Recipe by <a href=\"{{ page.recipe-attribution-link }}\">{{ page.recipe-attribution }}</a>.</p>\n\n\t</main>\n\n\t{% include nav.html %}\n\n{% include foot.html %}\n\nThat\u2019s our template. The content that goes on the recipe layout includes a page title, the recipe content, a recipe attribution and a recipe attribution link.\n\nAdding some recipe pages\n\nCreate a new file in the root of the christmas-recipes folder and call it gingerbread.md. Paste the following into it:\n\n---\nlayout: recipe\ntitle: Gingerbread\nrecipe-attribution: HungryJenny\nrecipe-attribution-link: http://www.opensourcefood.com/people/HungryJenny/recipes/soft-christmas-gingerbread-cookies\n---\nMakes about 15 small cookies.\n\n## Ingredients\n\n* 175g plain flour\n* 90g brown sugar\n* 50g unsalted butter, diced, at room temperature\n* 2 tbsp golden syrup\n* 1 egg, beaten\n* 1 tsp ground ginger\n* 1 tsp cinnamon\n* 1 tsp bicarbonate of soda\n* Icing sugar to dust\n\n## Method\n\n1. Sift the flour, ginger, cinnamon and bicarbonate of soda into a large mixing bowl.\n2. Use your fingers to rub in the diced butter. Mix in the sugar.\n3. Mix the egg with the syrup then pour into the flour mixture. Fold in well to form a dough.\n4. Tip some flour onto the work surface and knead the dough until smooth.\n5. Preheat the oven to 180\u00b0C.\n6. Roll the dough out flat and use a shaped cutter to make as many cookies as you like.\n7. Transfer the cookies to a tray and bake in the oven for 15 minutes. Lightly dust the cookies with icing sugar.\n\nThe content is in Markdown, and when we hit Save, it\u2019ll be converted into HTML in the _site folder. 
Save the file, and go to http://localhost:4000/christmas-recipes/gingerbread.html in your favourite browser.\n\n \n\nAs you can see, the Markdown content has been converted into HTML, and the attribution text and link has been inserted in the right place.\n\n\nAdd some navigation\n\nIn your _includes folder, create a new file called nav.html. Here is some code that will generate your navigation:\n\n<nav class=\"nav-primary\" role=\"navigation\" >\n <ul>\n {% for p in site.pages %}\n <li>\n \t<a {% if p.url == page.url %}class=\"active\"{% endif %} href=\"{{ site.baseurl }}{{ p.url }}\">{{ p.title }}</a>\n </li>\n {% endfor %}\n </ul>\n</nav>\n\nThis is going to look for all pages and generate a list of them, and give the navigation item that is currently active a class of active so we can style it differently.\n\nNow we need to include that navigation snippet in our layout. Paste {% include nav.html %} in default.html file, under {{ content }}.\n\nPush the changes to GitHub Pages\n\nNow we\u2019ve got a couple of pages, it\u2019s time to push our changes to GitHub. We can do this in exactly the same way as before. Check your special GitHub URL (your-username.github.io/your-project-name) and you should see your site up and running.\n\nIf you quit Terminal, don\u2019t forget to run jekyll serve --watch from within the directory the next time you want to work on the files.\n\nNext steps\n\nNow we know the basics of creating Jekyll templates and publishing them as GitHub Pages, we can have some fun adding more pages and styling them up.\n\n \n \n Here\u2019s one I made earlier\n\n\nI\u2019ve only been using Jekyll for a matter of weeks, mainly for prototyping. It\u2019s really good as a content management system for blogs, and a lot of people host their Jekyll blogs on GitHub, such as Harry Roberts\n\n\n\tBy hosting the code so openly it will make me take more pride in it and allow me to work on it much more easily; no excuses now!\n\n\nOverall, the documentation for Jekyll feels a little sparse and geared more towards blogs than other sites, but as more people discover the benefits of it, I\u2019m sure this will improve over time.\n\nIf you\u2019re interested in poking about with some code, all the files from this tutorial are available on GitHub.", "year": "2013", "author": "Anna Debenham", "author_slug": "annadebenham", "published": "2013-12-18T00:00:00+00:00", "url": "https://24ways.org/2013/get-started-with-github-pages/", "topic": null} {"rowid": 3, "title": "Project Hubs: A Home Base for Design Projects", "contents": "SCENE: A design review meeting. Laptop screens. Coffee cups.\n\nProject manager: Hey, did you get my email with the assets we\u2019ll be discussing? \n\nClient: I got an email from you, but it looks like there\u2019s no attachment.\n\nPM: Whoops! OK. I\u2019m resending the files with the attachments. Check again?\n\nClient: OK, I see them. It\u2019s homepage_v3_brian-edits_FINAL_for-review.pdf, right? \n\nPM: Yeah, that\u2019s the one.\n\nClient: OK, hang on, Bill\u2019s going to print them out. (3-minute pause. Small talk ensues.)\n\nClient: Alright, Bill\u2019s back. We\u2019re good to start. \n\nBrian: Oh, actually those homepage edits we talked about last time are in the homepage_v4_brian_FINAL_v2.pdf document that I posted to Basecamp earlier today.\n\nClient: Oh, OK. What message thread was that in? \n\nBrian: Uh, I\u2019m pretty sure it\u2019s in \u201cHomepage Edits and Holiday Schedule.\u201d\n\nClient: Alright, I see them. Bill\u2019s going back to the printer. 
Hang on a sec\u2026\n\n\n\nThis is only a slightly exaggerated version of my experience in design review meetings. \n\nThe design project dance is a sloppy one. It involves a slew of email attachments, PDFs, PSDs, revisions, GitHub repos, staging environments, and more. And while tools like Basecamp can help manage all these moving parts, it can still be incredibly challenging to extract only the important bits, juggle deliverables, and see how your project is progressing.\n\nEnter project hubs. \n\nProject hubs\n\nA project hub consolidates all the key design and development materials onto a single webpage presented in reverse chronological order. The timeline lives online (either publicly available or password protected), so that everyone involved in the team has easy access to it.\n\n A project hub.\n\nI was introduced to project hubs after seeing Dan Mall\u2019s open redesign of Reading Is Fundamental. Thankfully, I had a chance to work with Dan on two projects where I got to see firsthand how beneficial a project hub can be. Here\u2019s what makes a project hub great:\n\n\n\tServes as a centralized home base for the project\n\tTrains clients and teams to decide in the browser\n\tEasily and visually view project\u2019s progress\n\tProvides an archive for project artifacts\n\n\nA home base\n\nYour clients and colleagues can expect to get the latest and greatest updates to your project when visiting the project hub, the same way you\u2019d expect to get the latest information on a requested topic when you visit a Wikipedia page. That\u2019s the beauty of URIs that don\u2019t change. \n\nCreating a project hub reduces a ton of email volley nonsense, and eliminates the need to produce files and directories with staggeringly ridiculous names like design/12.13.13/team/brian/for_review/_FINAL/styletile_121313_brian-edits-final_v2_FINAL.pdf. The team can simply visit the project hub\u2019s URL and click the link to whatever artifact they need. Need to make an update? Simply update the link on the project hub. No more email tango and silly file names. \n\nDeciding in the browser\n\n\n\tLet\u2019s change the phrase \u201cdesigning in the browser\u201d to \u201cdeciding in the browser.\u201d\nDan Mall\n\n\nWe make websites, but all too often we find ourselves looking at web design artifacts in abstractions. We email PDFs to each other, glance at mockup JPGs on our desktops, and of course kill trees in order to print out designs so that we can scribble in the margins. All of these practices subtly take everyone further and further away from the design\u2019s eventual final resting place: the browser.\n\nBecause a project hub is just a simple webpage, reviewing designs is as easy as clicking some links, which keep your clients and teams in the browser. \n\nYou can keep people in the browser with yet another clever trick from the wily Dan Mall: instead of sending clients PDFs or JPGs, he created a simple webpage and tossed his static visuals into the template (you can view an example here). This forces clients to review web design work in the browser rather than launching a PDF viewer or Preview. \n\nNow this all might sound trivial to you (\u201cOf course my client knows that we\u2019re designing a website!\u201d), but keeping the design artifacts in the browser subconsciously helps remind everyone of the medium for which you\u2019re designing, which helps everyone focus on the right aspects of the design and have the right conversations. 
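\n\nIf you want to try that trick yourself, the review page needn\u2019t be anything more than a bare-bones HTML document with the static visual dropped straight in. Here\u2019s a minimal sketch (the file name and title are made up for illustration, not taken from Dan\u2019s example):\n\n<!DOCTYPE html>\n<html>\n<head>\n\t<meta charset=\"utf-8\">\n\t<title>Homepage design, round 3</title>\n\t<style>\n\t\t/* Just enough styling to present the comp neutrally */\n\t\tbody { margin: 0; background: #333; }\n\t\timg { display: block; max-width: 100%; margin: 0 auto; }\n\t</style>\n</head>\n<body>\n\t<!-- the static visual, reviewed in the browser rather than a PDF viewer -->\n\t<img src=\"homepage-round-3.png\" alt=\"Homepage design, round 3\">\n</body>\n</html>\n\nBecause it\u2019s just a webpage, it can live at a URL on the project hub like any other artifact.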
\n\nProgress over time\n\nWhen you\u2019re in the trenches, it\u2019s often hard to visualize how a project is progressing. Tools like Basecamp include discussions, files, to-dos, and more, which are all great tools but also make things a bit noisy. Project hubs provide you and your clients a quick and easy way to see at a glance how things are coming along. Teams can rest assured they\u2019re viewing the most current versions of designs, and managers can share progress with stakeholders simply by providing a link to the project hub. \n\nOver time, a project hub becomes an easily accessible archive of all the design decisions, which makes it easy to compare and contrast different versions of designs and prototypes.\n\nSetting up a project hub\n\nSetting up your own project hub is pretty simple. Simply create a webpage with some basic styles and branding. I\u2019ve created a project hub template that\u2019s available on GitHub if you want a jump-start.\n\nPublish the webpage to a URL somewhere that makes sense (we\u2019ve found that a subdomain of your site works quite well) and share it with everyone involved in the project. Bookmark it. Let everyone know that this is where design updates will be shared, and that they can always come back to the project hub to track the project\u2019s progress.\n\nWhen it comes time to share new updates, simply add a new node to the timeline and republish the webpage. Simple FTPing works just fine, but it might make sense to keep track of changes using version control. Our project hub for our open redesign of the Pittsburgh Food Bank is managed on GitHub, which means that I can make edits to the hub right from GitHub. Thanks to the magical wizardry of webhooks, I can automatically deploy the project hub so that everything stays in sync. That\u2019s the fancy-pants way to do it, and is certainly not a requirement. As long as you\u2019re able to easily make edits and keep your project hub up to date, you\u2019re good to go. \n\nSo that\u2019s the hubbub\n\nProject hubs can help tame the chaos of the design process by providing a home base for all key design and development materials. Keep the design artifacts in the browser and give clients and colleagues quick insight into your project\u2019s progress.\n\nHappy hubbing!", "year": "2013", "author": "Brad Frost", "author_slug": "bradfrost", "published": "2013-12-17T00:00:00+00:00", "url": "https://24ways.org/2013/project-hubs/", "topic": "process"} {"rowid": 4, "title": "Credits and Recognition", "contents": "A few weeks ago, I saw a friendly little tweet from a business congratulating a web agency on being nominated for an award. The business was quite happy for them and proud to boot \u2014 they commented on how the same agency designed their website, too.\n\nWhat seemed like a nice little shout-out actually made me feel a little disappointed. Why? In reality, I knew that the web agency didn\u2019t actually design the site \u2014 I did, when I worked at a different agency responsible for the overall branding and identity.\n\nI certainly wasn\u2019t disappointed at the business \u2014 after all, saying that someone designed your site when they were responsible for development is an easy mistake to make. Chances are, the person behind the tweets and status updates might not even know the difference between words like design and development. \n\nWhat really disappointed me was the reminder of how many web workers out there never explain their roles in a project when displaying work in a portfolio. 
If you\u2019re strictly a developer and market yourself as such, there might be less room for confusion, but things can feel a little deceptive if you offer a wide range of services yet never credit the other players when collaboration is part of the game. Unfortunately, this was the case in this situation. Whatever happened to credit where credit\u2019s due?\n\nAdvertising attribution\n\nHave you ever thumbed through an advertising annual or browsed through the winners of an advertising awards website, like the campaign below from Kopenhagen Chocolate on Advertising Age? If so, it\u2019s likely that you\u2019ve noticed some big differences in how the work is credited.\n\n Everyone involved in a creative advertising project is mentioned.\n\nArt directors, writers, creative directors, photographers, illustrators and, of course, the agency all get a fair shot at fifteen minutes of fame. Why can\u2019t we take this same idea and introduce it to our own showcases?\n\nCrediting on client sites\n\nAh, the good old days of web rings, guestbooks, and under construction GIFs, when slipping in a cheeky \u201cdesigned by\u201d link in the footer of your masterpiece was just another common practice. These days most clients, especially larger companies and corporations, aren\u2019t willing to have any names on their site except their own. \n\nIf you\u2019d still like to leave a little proof of authorship on a website, consider adding a humans.txt file to the root of the site and, if possible, add an author tag in the <head> of the site:\n\n<link type=\"text/plain\" rel=\"author\" href=\"http://domain/humans.txt\">\n\nIt\u2019s a great way to add more detailed information than just a meta name without being intrusive. The example on the humanstxt.org website serves to act as a guideline, but how much detail you add is completely up to you and your team.\n\n Part of the humans.txt file on humanstxt.org\n\nAlternatively, you can use the HTML5 rel=\"author\" attribute to link to information about the author of the page in the form of a mailto: address, a link to a contact form, or a separate authors page.\n\nCrediting in portfolios\n\nWhile humans.txt is a great approach when you\u2019re authoring a site, it\u2019s even more important to clearly define your role in your own portfolio. \n\nWhile I believe it\u2019s proper etiquette to include the names of folks you collaborated with, sometimes it might not be necessary (or even possible) to list every single person, especially if you\u2019ve worked with a large agency. \n\n\u201cFake it till you make it\u201d is not a term that should apply to your portfolio. Clearly stating your own responsibilities means that nobody else browsing your work samples will assume that you did more than your actual share, and being ambiguous about your role isn\u2019t fair to yourself, or others. \n\nBefore adding any work to your portfolio, ensure that you have permission from your client. Even if you included a clause in your contract about being allowed to post your work online, it\u2019s always best to double-check. Sometimes you might not know if your work has been officially launched, and leaking something before it\u2019s ready is bound to make a client frown.\n\nExamples\n\nThere are plenty of portfolios out there that we can use for inspiration. 
Here are some examples that I like from other folks in the web industry:\n\nAnna Debenham\n\n In her portfolio, Anna outlines her responsibilities and those of others.\n\nIn the description, Anna clearly explains her duties of doing the HTML and CSS, along with performing research and testing the prototype in schools. She also credits Laura Kalbag for the design work.\n\nNaomi Atkinson Design\n\nThe work portfolio of Naomi Atkinson Design is short and to the point \u2014 they were responsible for the iPhone app design and IA for Artspotter.\n\n The portfolio of Naomi Atkinson Design states clearly what they did.\n\nAmber Weinberg\n\nAmber Weinberg is strictly a developer, but a potential client could see her portfolio and assume she might be a designer as well. To avoid any misunderstandings, she states her roles up front in a section called \u201cWhat I Did,\u201d supported by examples of her code.\n\n Amber Weinberg sets out all her roles in each of her portfolio\u2019s case studies.\n\nWhat if someone doesn\u2019t want to be credited?\n\nLet\u2019s face it \u2014 we\u2019ve all been there. A project, for whatever reason, turns out to be an absolute disaster and we don\u2019t feel like it\u2019s an accurate representation of the quality of our work. \n\nIf you\u2019re crediting someone else but suspect they might rather pretend it never happened, be sure to drop them a line and ask if they\u2019d like to be included. And, if someone contacts you and asks to remove their name, don\u2019t feel offended \u2014 just politely remove it.\n\nGet updating!\n\nNow that the holiday season is almost here, many of you might be planning to set aside some time for personal projects. Grab yourself a gingerbread latte and get those portfolios up to date. Remember, It doesn\u2019t have to be long-winded, just honest. Happy holidays!", "year": "2013", "author": "Geri Coady", "author_slug": "gericoady", "published": "2013-12-16T00:00:00+00:00", "url": "https://24ways.org/2013/credits-and-recognition/", "topic": "process"} {"rowid": 19, "title": "In Their Own Write: Web Books and their Authors", "contents": "The currency of written communication \u2014 words on the page, words on the screen \u2014 comprises many denominations. To further our ends in web design and development, we freely spend and receive several: tweets aphoristic and trenchant, banal and perfunctory; blog posts and articles that call us to action or reflection; anecdotes, asides, comments, essays, guides, how-tos, manuals, musings, notes, opinions, stories, thoughts, tips pro and not-so-pro. So many, many words.\n\nOur industry (so much more than this, but what on earth are we, collectively?), our community thrives on writing and sharing knowledge and experience. 24 ways is a case in point. Everyone can learn and contribute through reading and writing \u2014 it\u2019s what we\u2019ve always done.\n\nTo web authors and readers seeking greater returns, though, broader culture has vouchsafed an enduring and singular artefact: the book.\n\nLast month I asked a small sample of web book authors if they would be prepared to answer a few questions; most of them kindly agreed. In spirit, the survey was informal: I had neither hypothesis nor unground axe. 
I work closely with writers \u2014 and yes, I\u2019ve edited or copy-edited books by several of the authors I surveyed \u2014 and wanted to share their thoughts about what it was like to write a book (\u201c\u2026it was challenging to find a coherent narrative\u201d), why they did it (\u201cWho wouldn\u2019t want to?\u201d) and what they learned from the experience (\u201cThat I could!\u201d).\n\nReasons for writing a book\n\nIn web development the connection between authors and readers is unusually close and immediate. Working in our medium precipitates a unity that\u2019s rare elsewhere. Yet writing and publishing a book, even during the current books revolution, is something only a few of us attempt and it remains daunting and a little remote. What spurs an author to try it? For some, it\u2019s a deeply held resistance to prevailing trends:\n\nI felt that designers and developers needed to be shaken out of what seemed to me had been years of stagnation.\n\u2014Andrew Clarke\n\n\nOr even a desire to protect us from ourselves:\n\nI felt that without a book that clearly defined progressive enhancement in a very approachable and succinct fashion, the web was at risk. I was seeing Tim Berners-Lee\u2019s vision of universal availability slip away\u2026\n\u2014Aaron Gustafson\n\n\nSometimes, there\u2019s a knowledge gap to be filled by an author with the requisite excitement and need to communicate. Jon Hicks took his \u201cpet subject\u201d and was \u201centhused enough to want to spend all that time writing\u201d, particularly because:\n\n\n\t\u2026there was a gap in the market for it. No one had done it before, and it\u2019s still on its own out there, with no competition. It felt like I was able to contribute something.\n\n\nCennydd Bowles felt a professional itch at a particular point in his career, understanding that\n\n\n\t[a]s a designer becomes more senior, they start looking for ways to scale the effects of their work. For some, that leads into management. For others, into writing.\n\n\nOften, though, it\u2019s also simply a personal challenge and ambition to explore a subject at length and create something substantial. Anna Debenham describes a motivation shared by several authors:\n\nTo be able to point to something more tangible than an article and be able to say \u201cI did that.\u201d\n\n\nThat sense of a book\u2019s significance, its heft and gravity even, stems partly from the cultural esteem which honours books and their authors. Books have a long history as sources of wisdom, truth and power. Even with more books being published each year than ever before, writing one is still commonly considered a laudable achievement, including in our field.\n\nChallenges of writing a book\n\nReceived wisdom has it that writing online should be brief and chunky and approachable: get to the point; divide it all up; subheadings and lists are our friends; write like you\u2019re talking; no one has time to read. Much of such advice is true. Followed well, it lends our writing punch and pith, vigour and vim. The web is nimble, the web keeps up, and it suits what we write about developing for it. It\u2019s perfect for delivering our observations, queries and investigations into all the various aspects of the work, professional and personal.\n\nYet even for digital natives like web authors, books printed and electronic retain an attractive glister. 
\n\nIdeas can be developed more fully, their consequences explored to greater depth and extended with more varied examples, and the whole conveyed with more eloquence, more style. Why shouldn\u2019t authors delay their conclusions if the intervening text is apposite, rich with value and helps to flesh out the skeleton of an argument? Conclusions might or might not be reached, of course, but a writer is at greater liberty in a book to digress in tangential and interesting ways.\n\nWriting a book involves committing time, energy, thought and money. As Brian Suda found, it can be tough \u201cgetting the ideas out of my head into a cohesive blob of text.\u201d Some authors end up talking to themselves\u2026\n\nIt helps me to keep a real person in mind, someone who I\u2019m talking to as I write. Sometimes I have the same conversations over and over in my head.\n\u2014Andrew Clarke\n\n\n\u2026while others are thinking ahead, concerned with how their book will be received:\n\nWould anyone want to read it? Would they care? Would it be respected by my peers?\n\u2014Joe Leech\n\n\nChallenges that arose time and again included \u201cstarting\u201d and \u201cgetting words on the page\u201d as well as \u201cknowing when to stop\u201d or \u201cletting go\u201d. Personal organization problems and those caused by publishers were also widely mentioned. Time loomed large. Making time, finding time. Giving up \u201csleep and some sanity\u201d and realizing \u201cit will take you far, far, far longer than you naively assumed\u201d. Importantly, writing time is time away from gainful employment: Aaron Gustafson found the hardest thing about writing a book to be \u201cthe loss of income while I was writing.\u201d\n\nPerils and pleasures of editing\n\nEditing, be it structural, technical or copy editing, is founded on reciprocity. Without openness and a shared belief that the book is worthwhile, work can founder in acrimony and mistrust. Editors are a book\u2019s first and most critical (in every sense) readers. Effective and perceptive editing makes a book as good as it can be, finding the book within the draft like sculpture reveals the statue in the stone.\n\nA good editor calls you out on poor assumptions and challenges you to really clarify your thinking. Whilst it can be difficult during the process to have your thinking challenged, it\u2019s always been worth it \u2014 for me personally \u2014 in the long run. A good editor also reins you in when you\u2019ve perhaps wandered off track or taken a little too long to make a point.\n\u2014Christopher Murphy\n\n\nAndy Croll found editing \u201call positive\u201d and Aaron Gustafson loves \u201cworking with a strong editor [\u2026] I want someone to tell it to me straight.\u201d But it can be a rollercoaster, \u201cboth terrifying and the real moment of elation\u201d. Mixed emotions during the editing process are common:\n\nIt was very uncomfortable! I knew it was making the work stronger, but it was awkward having my inconsistencies and waffle picked apart.\n\u2014Jon Hicks\n\n\nIt can be distressing to have written work looked over by a professional, particularly for first-time book authors whose expertise lies elsewhere:\n\nI was a little nervous because I don\u2019t consider myself a skilled writer \u2014 I never dreamed of becoming an author. 
I\u2019m a designer, after all.\n\u2014Geri Coady\n\n\nCommunication is key, particularly when it comes to checking or changing the author\u2019s words.\n\nI like a good banter between me and the tech editor \u2014 if we can have a proper argument in Word comments, that\u2019s great.\n\u2014Rachel Andrew\n\n\nBut if handled poorly, small battles can break out. Rachel Andrew again:\n\n\n\tHowever, having had plenty of times where the technical editor has done nothing more than give a cursory glance, I started to leave little issues in for them to spot. If they picked them up I knew they were actually testing the code and I could be sure the work was being properly tech edited. If they didn\u2019t spot them, I\u2019d find someone myself to read through and check it!\n\n\nA major concern for writers is that their voices will be altered, filtered, mangled or otherwise obscured by the editing process. Good copy editing must remain unnoticed while enhancing the author\u2019s voice in print. Donna Spencer appreciated the way her editor \u201ctidied up my work and made it a million times better, but left it sounding exactly like me.\u201d Similarly, Andrew Travers \u201cwas incredibly impressed at how well my editor tightened up my own writing without it feeling like another\u2019s voice\u201d and Val Head sums up the consensus that:\n\n\n\tthe editor was able to help me express what I was trying to say in a better way [\u2026] I want to have editors for everything now.\n\n\nAt the keyboard, keep your friends close, but your editors closer.\n\nPublishing and publishers\n\nConditions ought to militate against the allure of writing a book about web design and development. More books are published each year than ever before, so readerships elude new authors and readers can struggle to find authors to trust in their fields of interest. New spaces for more expansive online writing about working on and with the web are opening up (sites like Contents Magazine and STET), and seminal online web development texts are emerging. Publishing online is simple, far-reaching and immediate.\n\nMuch more so than articles and blog posts, books take time to research, write and read; add the complexity of commissioning, editing, designing, proofreading, printing, marketing and distribution processes, and it can take many months, even years to publish. The ceaseless headlong momentum of the web can leave articles more than a few weeks old whimpering in its wake, but updating them at least is straightforward; printed books about web development can depreciate as rapidly as the technology and techniques they describe, while retaining the \u201cterrifying permanence that print bestows: your opinions will follow you forever\u201d.\n\nSo much moves on, and becomes out of date. Companies featured get bought by larger companies and die, techniques improve and solutions featured become terribly out of date. Unlike a website, which could be updated continuously, a book represents the thinking \u2018at that time\u2019.\n\u2014Jon Hicks\n\n\nPublishers work hard to mitigate these issues, promoting new books and new authors, bringing authors and readers together under a trusted banner. When a publisher packages up and releases a writer\u2019s words, it confers a seal of approval and \u201cbadge of quality\u201d, very important to new authors.\n\nPublishers have other benefits to offer, from expert knowledge:\n\nMy publisher was extraordinarily supportive (and patient). 
Her expertise in my chosen subject was both a pressure (I didn\u2019t want to let her down) and a reassurance (if she liked it, I knew it was going to be fine).\n\u2014Andrew Travers\n\n\n\u2026to systems and support mechanisms set up specifically to encourage writers and publish books:\n\nWorking as a team means you\u2019re bringing in everyone\u2019s expertise.\n\u2014Chui Chui Tan\n\n\nAs a writer, the best part about writing for a publisher was the writing infrastructure offered.\n\u2014Christopher Murphy\n\n\nThere can be drawbacks, however, and the occasional horror story:\n\nWe were just one small package on a huge conveyor belt. The publisher\u2019s process ruled all.\n\u2014Cennydd Bowles\n\n\nIt\u2019s only looking back I realise how poorly some publishers treat writers \u2014 especially when the work is so poorly remunerated.My worst experience was when a publisher decided, after I had completed the book, that they wanted to push a different take on the subject than the brief I had been given. Instead of talking to me, they rewrote chunks of my words, turning my advice into something that I would never have encouraged. Ultimately, I refused to let the book go out under my name alone, and I also didn\u2019t really promote the book as I would have had to point out the things I did not agree with that had been inserted!\n\u2014Rachel Andrew\n\n\nSelf-publishing is now a realistic option for web authors, and can offers \u201ccomplete control over the end product\u201d as well as the possibility of earning more than a \u201cpathetic author revenue percentage\u201d. There can be substantial barriers, of course, as self-publishing authors must face for themselves the risks and challenges conventional publishers usually bear. Ideally, creating a book is a collaboration between author and publisher. Geri Coady found that \u201cworking with my publisher felt more like working with a partner or co-worker, rather than working for a boss.\u201d\n\nWise words\n\nSo, after meeting the personal costs of writing and publishing a web book \u2014 fear, uncertainty, doubt, typing (so much typing) \u2014 and then smelling the roses of success, what\u2019s left for an author to say? Some words, perhaps, to people thinking of writing a book.\n\nDonna Spencer identifies a stumbling block common to many writers with an insight into the writing process:\n\n\n\tHaving talked to a lot of potential authors, I think most have the problem that they haven\u2019t actually figured out the \u2018answer\u2019 to their premise yet. They feel like they are stuck in the writing, but they are actually stuck in the thinking.\n\n\nFor some no-nonsense, straightforward advice to cut through any anxiety or inadequacy, Rachel Andrew encourages authors to \u201ctreat it like any other work. There is no mystery to writing, you just have to write. Schedule the time, sit down, write words.\u201d Tim Brown notes the importance of the editing process to refine a book and help authors reach their readers:\n\n\n\tHire good editors. 
Editors are amazing thinkers who can vastly improve the quality and clarity of a piece of writing.\n\n\nWe are too much beholden to the practical demands and challenges of technology, so Aaron Gustafson suggests a writer should \u201cfavor philosophies over techniques and your book will have a longer shelf life.\u201d\n\nMost intimations of renown and recognition are nipped in the bud by Joe Leech\u2019s warning: \u201cDon\u2019t expect fame and fortune.\u201d Although Cennydd Bowles\u2019 bitter experience can be discouraging:\n\n\n\tThe sacrifices required are immense. You probably won\u2019t make it.\n\n\n\u2026he would do things differently for a future book:\n\n\n\tI would approach the book with [\u2026] far more concern about conveying the damn joy of what I do for a living.\n\n\nThe pleasure of writing, not just having written is captured by James Chudley when he recalls:\n\n\n\tHow much I enjoy writing and also how much I enjoy the discipline or having a side project like this. It\u2019s a really good supplement to working life.\n\n\nAnd Jon Hicks has words that any author will find comforting:\n\n\n\tIt will be fine. Everything will be fine. Just get on with it!\n\n\n\n\nAs the web expands effortlessly and ceaselessly to make room for all our words, yet it can also discourage the accumulation of any particular theme in one space, dividing rich seams and scattering knowledge across the web\u2019s surface and into its deepest reaches. How many words become weightless and insubstantial, signals lost in the constant white noise of indistinguishable voices, unloved, unlinked? The web forgets constantly, despite the (somewhat empty) promise of digital preservation: articles and data are sacrificed to expediency, profit and apathy; online attention, acknowledgement and interest wax and wane in days, hours even.\n\nBooks can encourage deeper engagement in readers, and foster faith in an author, particularly if released under the imprint of a recognized publisher within the field. And books are changing. Although still not widely adopted, EPUB3 is the new standard in ebooks, bringing with it new possibilities for interaction and connection: readers with the text; readers with readers; and readers with authors. EPUB3 is built on HTML, CSS and JavaScript \u2014 sound familiar? In the past, we took what we could from the printed page to make the web; now books are rubbing up against what we\u2019ve made.\n\nSo: a book.\n\nEver thought you could write one? Should write one? 
Would?\n\n\n\nI\u2019d like to thank all the authors who wrote their books and answered my questions.\n\n\n\tRachel Andrew \u00b7 CSS3 Layout Modules, The CSS3 Anthology and more\n\tCennydd Bowles \u00b7 Undercover User Experience Design, with James Box\n\tTim Brown \u00b7 Combining Typefaces\n\tJames Chudley \u00b7 Usability of Web Photos\n\tAndrew Clarke \u00b7 Hardboiled Web Design\n\tGeri Coady \u00b7 Colour Accessibility\n\tAndy Croll \u00b7 HTML Email\n\tAnna Debenham \u00b7 Front-end Style Guides\n\tAaron Gustafson \u00b7 Adaptive Web Design\n\tVal Head \u00b7 CSS Animations\n\tJon Hicks \u00b7 The Icon Handbook\n\tJoe Leech \u00b7 Psychology for Designers\n\tChristopher Murphy \u00b7 The Craft of Words, with Niklas Persson\n\tDonna Spencer \u00b7 Information Architecture, Card Sorting and How to Write Great Copy for the Web\n\tBrian Suda \u00b7 Designing with Data\n\tChui Chui Tan \u00b7 International User Research\n\tAndrew Travers \u00b7 Interviewing for Research", "year": "2013", "author": "Owen Gregory", "author_slug": "owengregory", "published": "2013-12-15T00:00:00+00:00", "url": "https://24ways.org/2013/web-books/", "topic": "content"} {"rowid": 10, "title": "Home Kanban for Domestic Bliss", "contents": "My wife is an architect. I\u2019m a leader of big technical teams these days, but for many years after I was a dev I was a project/program manager. Our friends and family used to watch Grand Designs and think that we would make the ideal team \u2014 she could design, I could manage the project of building or converting whatever dream home we wanted.\n\nThen we bought a house.\n\nA Victorian terrace in the north-east of England that needed, well, a fair bit of work. The big decisions were actually pretty easy: yes, we should knock through a double doorway from the dining room to the lounge; yes, we should strip out everything from the utility room and redo it; yes, we should roll back the hideous carpet in the bedrooms upstairs and see if we could restore the original wood flooring.\n\nThose could be managed like a project.\n\nWhat couldn\u2019t be was all the other stuff. Incremental improvements are harder to schedule, and in a house that\u2019s over a hundred years old you never know what you\u2019re going to find when you clear away some tiles, or pull up the carpets, or even just spring-clean the kitchen (\u201cErm, hon? The paint seems to be coming off. Actually, so does the plaster\u2026\u201d). A bit like going in to fix bugs in code or upgrade a machine \u2014 sometimes you end up quite far down the rabbit hole.\n\nAnd so, as we tried to fit in those improvements in our evenings and weekends, we found ourselves disagreeing. Arguing, even. We were both trying to do the right thing (make the house better) but since we were fitting it in where we could, we often didn\u2019t get to talk and agree in detail what was needed (exactly how to make the house better). And it\u2019s really frustrating when you stay up late doing something, just to find that your other half didn\u2019t mean that they meant this instead, and so your effort was wasted.\n\nThen I saw this tweet from my friend and colleague Jamie Arnold, who was using the same kanban board approach at home as we had instituted at the UK Government Digital Service to manage our portfolio.\n\nMrs Arnold embraces Kanban wall at home. Disagreements about work in progress and priority significantly reduced.. 
;) pic.twitter.com/407brMCH\u2014 Jamie Arnold (@itsallgonewrong) October 27, 2012\n\nAnd despite Jamie\u2019s questionable taste in fancy dress outfits (look closely at that board), he is a proper genius when it comes to processes and particularly agile ones. So I followed his example and instituted a home kanban board.\n\nWhat is this kanban of which you speak?\n\nKanban boards are an artefact from lean manufacturing \u2014 basically a visualisation of a production process. They are used to show you where your bottlenecks are, or where one part of the process is producing components faster than another part of the process can cope. Identifying the bottlenecks leads you to set work in progress (WIP) limits, so that you get an overall more efficient system.\n\nIncreasingly kanban is used as an agile software development approach, too, especially where support work (like fixing bugs) needs to be balanced with incremental enhancement (like adding new features).\n\nI\u2019m a big advocate of kanban when you have a system that needs to be maintained and improved by the same team at the same time. Rather than the sprint-based approach of scrum (where the next sprint\u2019s stories or features to be delivered are agreed up front), kanban lets individuals deal with incidents or problems that need investigation and bug fixing when urgent and important. Then, when someone has capacity, they can just go to the board and pull down the next feature to develop or test.\n\nSo, how did we use it?\n\nOne of the key tenets of kanban is that you visualise your workflow, so we put together a whiteboard with columns: Icebox; To Do Next; In Process; Done; and also a section called Blocked. Then, for each thing that needed to happen in the house, we put it on a Post-it note and initially chucked them all in the Icebox \u2014 a collection with no priority assigned yet.\n\nEach week we looked at the Icebox and pulled out a set of things that we felt should be done next. This was pulled into the To Do Next column, and then each time either of us had some time, we could just pull a new thing over into the In Process column. We agreed to review at the end of each week and move things to Done together, and to talk about whether this kanban approach was working for us or not.\n\nWe quickly learned for ourselves why kanban has WIP limits as a key tenet \u2014 it\u2019s tempting to pull everything into the To Do Next column, but that\u2019s unrealistic. And trying to do more than one or two things each at a given time isn\u2019t terribly productive owing to the cost of task switching. So we tend to limit our To Do Next to about seven items, and our In Process to about four (a max of two each, basically).\n\nWe use the Blocked column when something can\u2019t be completed \u2014 perhaps we can\u2019t fix something because we discovered we don\u2019t have the required tools or supplies, or if we\u2019re waiting for a call back from a plumber. But it\u2019s nice to put it to one side, knowing that it won\u2019t be forgotten.\n\nWhat helped the most?\n\nIt wasn\u2019t so much the visualisation that helped us to see what we needed to do, but the conversation that happened when we were agreeing priorities, moving them to In Process and then on to Done made the biggest difference. 
Getting clear on the order of importance really is invaluable \u2014 as is getting clear on what Done really means!\n\nThe Blocked column is also great, as it helps us keep track of things we need to do outside the house to make sure we can make progress. We also found it really helpful to examine the process itself and figure out whether it was working for us. For instance, one thing we realised is it\u2019s worth tracking some regular tasks that need time invested in them (like taking recycling that isn\u2019t picked up to the recycling centre) and these used to cycle around and around. So they were moved to Done as part of our weekly review, but then immediately put back in the Icebox to float back to the top again at a relevant time.\n\nBut the best thing of all? That moment where we get to mark something as done! It\u2019s immensely satisfying to review at the end of the week and have a physical marker of the progress you\u2019ve made.\n\nAll in all, a home kanban board turned out to be a very effective way to pull tasks through stages rather than always trying to plan them out in advance, and definitely made collaboration on our home tasks significantly smoother. Give it a try!", "year": "2013", "author": "Meri Williams", "author_slug": "meriwilliams", "published": "2013-12-14T00:00:00+00:00", "url": "https://24ways.org/2013/home-kanban-for-domestic-bliss/", "topic": "process"} {"rowid": 13, "title": "Data-driven Design with an Annual Survey", "contents": "Too often, we base designs on assumptions that don\u2019t match customer perspectives. Why? Because the data we need to make informed decisions isn\u2019t available.\n\nImagine starting off the year with a treasure trove of user data that can be filtered, sliced, and diced to inform new UI designs, help you discover where users struggle the most, and expose emerging trends in your customers\u2019 needs that could lead to new features. Why, that would be useful indeed. And it\u2019s easy to obtain by conducting an annual survey.\n\nAnnual surveys may seem as exciting as receiving socks and undies for Christmas, but they\u2019re the gift that keeps on giving all year long (just like fresh socks and undies). I\u2019m not ashamed to admit it: I love surveys! Each time my design research team runs a survey, we learn so much about customer motivations, interests, and behaviors. \n\nSurveys provide an aggregate snapshot of your users that can\u2019t easily be obtained by other research methods, and they can be conducted quickly too. You can build a survey in a few hours, run a pilot test in a day, and have real results streaming in the following day. Speed is essential if design research is going to keep pace with a busy product release schedule. \n\nSurveys are also an invaluable springboard for customer interviews, which provide deep perspectives on user behavior. If you play your cards right as you construct your survey, you can capture a user ID and an email address for each respondent, making it easy to get in touch with customers whose feedback is particularly intriguing. No more recruiting customers for your research via Twitter or through a recruiting company charging a small fortune. You can filter survey responses and isolate the exact customers to talk with in moments, not months.\n\nI love this connected process of sending targeted surveys, filtering the results, and then \u2014 with surgical precision \u2014 selecting just the right customers to interview. 
Not only is it fast and cheap, but it lets design researchers do quantitative and qualitative research in a coordinated way. Aggregate survey responses help you quantify the perspectives of different user segments, and interviews help you get into the heads of your customers.\n\nAn annual survey can give your team the data needed to make more informed designs in the new year. It all starts with a plan.\n\nPlanning your survey\n\nBefore you start jotting down questions to ask users, spend some time thinking about the work your team will be doing in the coming year. Are you planning new mobile apps or a responsive redesign? Then questions about devices used and behaviors around mobile devices might be in order. Rethinking your content strategy? Then you might want to ask a few questions about how your customers consume content.\n\nYou can\u2019t predict all of the projects you\u2019ll be working on in the coming year, but tuck a couple of sections in your survey about the projects you\u2019re certain about. This will give you the research you need to start new projects with solid foundational data.\n\nGoogle Drive is a great place to start collaboratively building survey questions with colleagues. Questions that seem crystal clear in your head get challenged, refined, or even expanded quickly when the entire team can chime in. \n\nAs you craft your survey, try to consider how you\u2019ll filter it once all of the data is compiled. Do you need to see responses by industry, by age of an account, by devices used, or by size of company? Adding the right filter questions can help you discover fascinating patterns in user segments. Filtering on responses to a few questions can surface insights like: customers in non-profit companies with more than 100 employees are 17% more likely to use an Android phone and are most attracted to features A, D, and F. A designer working on the landing page for a non-profit would love to have concrete information like this. Filter questions are key, so consider them carefully. But don\u2019t go overboard \u2014 too many of them and you\u2019ll start to hurt your survey response rate.\n\nMultiple choice questions are the heart of most surveys because respondents can complete them quickly, which increases response rate, and researchers can analyze them without a lot of manual categorization. Open text field questions are valuable too, but be careful not to add too many to your survey. You\u2019ll hate yourself after the survey\u2019s done and you have to sort through and tag thousands of open responses so patterns become visible. Oy vey!\n\nAn open-ended question works well towards the end of the survey. At this point respondents have a lot of topics swirling around in their head and tend to say weird things that will pique your interest. This is where you\u2019ll find the outliers who are using your product. They\u2019ll be fascinating to interview, and on occasion will help you see your work in a brand new way.\n\nConclude your survey with a question asking permission to get in touch for a followup interview so you don\u2019t pester people who want to be left alone. \n\nWith your questions nailed down, it\u2019s time to build out that survey and get it ready for sending!\n\nBuilding your survey\n\nThere are dozens of apps you could use to build your survey, but SurveyMonkey is the one that I prefer. It lets you pass in variables for each respondent such as user ID and email address. 
Metadata about respondents is essential if you\u2019re going to do any follow-up interviews with your customers in the coming year. SurveyMonkey also makes it easy to set up question logic, showing questions to customers only if they responded in a certain way to a prior question. This helps you avoid asking irrelevant questions to some respondents.\n\nDetermining survey recipients\n\nOnce you\u2019ve chosen a survey tool and entered all of your questions, you need to gather a list of recipients. Your first instinct will be to send it to everyone. You might say, \u201cI need maximum response and metric shit tons of data!\u201d But this is rarely the best approach \u2014 broad distribution almost always leads to lower response rates, increased noise, and decreased signal in your data. Are there subsets of customers you could send to, like only those who are active, those who are paying, or have been with you for a certain length of time? Talk to the keepers of your customer database and see how they can segment it so you can be certain you\u2019re talking to just the people who will have the most relevant responses for your needs. \n\nIf you want to get super nerdy when finding the right customer sample to survey, use a [sample size calculator]. Sampling is a deep subject best explored in other articles. \n\nCrafting your survey email\n\nAfter focusing your energies on writing and building your survey, the email asking your customers to respond seems almost trivial, but it will greatly influence your response rate. Take great care when writing your subject line and the body of the email. If you can pull it off, A/B testing subject lines can greatly improve the open rate of your email and click-through to your survey. My design research team has seen a ~10% increase in open and click rates when we A/B tested. We\u2019ve found that personalizing subject lines and greetings with the recipients name (ie. \u201cHey, Aarron. How can we make our app work better for you?\u201d) gave us the best response rates. Your mileage may vary.\n\nThe tone of your email is important \u2014 be friendly, honest, and to the point. Those that are passionate about your product will be happy to share their perspective. Writing a survey email that people will actually respond to ain\u2019t easy \u2014 in fact, they\u2019re almost always annoying. But Ben Chestnut found a non-annoying way to send a survey email and improve response rates.\n\nThe email sent for the 2013 MailChimp survey let customers know what we\u2019d been up to in the previous year, and invited feedback on what we should work on in the coming year.\n\nThe link to your survey should be a clear call to action. A big button with a label like \u201cAnswer a few questions\u201d generally does the trick. The URL linking to the survey will need to include some variables like user ID and email. It might look something like this if you\u2019re using SurveyMonkey:\n\nhttp://surveymonkey.com/s/somesurveyid/?uid=*|UID|*&email=*|email|*\n\nAs each email is sent, the proper data will be populated in the variables, passing it on to the survey app for inclusion in each response. This is the magic that will help you pinpoint customers to interview down the road, so take special care to test that all is working before sending to all recipients. How you construct the survey link will vary depending on what survey tool and email service provider you use, so don\u2019t take my example as gospel. 
You\u2019ll need to read the documentation for your survey and email apps to set things up properly.\n\nPilot before sending\n\nBy now, you\u2019ve whipped yourself into a fever pitch over your brilliant survey and the data you hope to collect. Your finger is on the send button, poised for action, but there\u2019s one very important thing to do before you send to the entire list of customers: send a pilot email. How do you know if your questions are clear, your form logic is sound, and you\u2019re passing variables from the email to the survey properly? You won\u2019t, unless you send to a small segment of your recipients first. \n\nThe data collected in your pilot will make plain where your survey needs refinement. This data won\u2019t be used in your final analysis, as you\u2019re probably going to make a few changes to your questions.\n\nSend the pilot survey to enough people that you can really stress test the clarity of the questions and data you\u2019re gathering, while considering how much data can you comfortably throw out. If you\u2019re sending your final survey to a few thousand people, you might find a couple of hundred recipients for your pilot will give you enough insight into what to improve while leaving the vast majority of the recipients for your final survey.\n\nAfter you\u2019ve sent your pilot, made your survey adjustments, and ensured the variables are being passed from your email into the survey app, you\u2019re ready to send to the remainder of your customers. This is your moment of glory!\n\nAnalyzing your results\n\nAfter a couple of weeks you can probably safely close the survey so no other responses come in as you transition from data gathering to data analysis. Any survey app worth its salt will chart responses to your multiple choice questions. Reviewing these charts is a great place to start your analysis. Is there anything particularly interesting that stands out? Jot down some of your observations. I like to print screenshots of the charts for each question, highlighting areas of interest. These prints become a particularly handy reference point for the next step in your analysis. \n\nPrinting results from a survey makes comparing different customers easy.\n\nViewing aggregate data about all responses is interesting, but the deltas between different types of customers are where the real revelations happen. Remember those filter questions you added to your survey? They\u2019re the tool that\u2019ll help you compare customer segments.\n\nMost survey apps will let you filter the data based on response to a question. If the one you\u2019re using doesn\u2019t, you can always export your data and create pivot tables in Excel. Try filtering your data based on one of your filter questions, such as industry, company size, or devices used. Now compare those printed screenshots of baseline responses to the filtered data. Chances are you\u2019ll see some significant differences in how each group responded to your questions, giving you clues about the variance in interests and motivations in customer segments and a leg up as you work on future design projects. \n\nOpen-ended responses are equally interesting, but much more time-consuming to analyze. Yes, you need to read through thousands of responses, some of which are constructive and some of which are not. 
Taking the time to tag each open response will help you see trends and filter out the responses that are unhelpful.\n\nUnlike questions with predefined answers, open-ended responses let users express unique ideas and use cases you may not be looking for. The tedium of reading thousands of response is always cut by eureka moments when users tell you something fascinating that changes your perspective on your app. These are the folks you want to pull out for follow-up interviews. Because you\u2019ve already captured their email addresses when you set up your survey and your email, getting in touch will be a piece of cake.\n\nFilter, compare, interview, and summarize; then share your findings with your colleagues. Reports are great for head honchos, but if you want to really inform and inspire, create a video, a poster series, or even a comic to communicate what you\u2019ve learned. Want to get really fancy? Store your survey results in a centrally accessible location so anyone in your company can research and discover the insights they need to make more informed designs. \n\nGood design researchers discover valuable insights. Great design researchers turn those insights into stories.\n\nConclusion\n\nAs we enter the new year, it\u2019s a great time to reflect on the work we\u2019ve done in the past and how we can do better in the future. Without a doubt, designers working with a foundation of insights about customers can make more effective UIs. But designers aren\u2019t the only ones who stand to gain from the data collected in an annual survey\u2014anyone who makes things for or communicates with customers will find themselves empowered to do better work when they know more about the people they serve. The data you collect with your survey is a fantastic holiday gift to your colleagues, one that they\u2019ll appreciate throughout the year.", "year": "2013", "author": "Aarron Walter", "author_slug": "aarronwalter", "published": "2013-12-13T00:00:00+00:00", "url": "https://24ways.org/2013/data-driven-design-with-an-annual-survey/", "topic": "design"} {"rowid": 22, "title": "The Responsive Hover Paradigm", "contents": "CSS transitions and animations provide web designers with a whole slew of tools to spruce up our designs. Move over ActionScript tweens! The techniques we can now implement with CSS are reminiscent of Flash-based adventures from the pages of web history.\n\nPairing CSS enhancements with our :hover pseudo-class allows us to add interesting events to our websites. We have a ton of power at our fingertips. However, with this power, we each have to ask ourselves: just because I can do something, should I?\n\nWhy bother? \n\nWe hear a lot of mantras in the web community. Some proclaim the importance of content; some encourage methods like mobile first to support content; and others warn of the overhead and speed impact of decorative flourishes and visual images. I agree, one hundred percent. At the same time, I believe that content can reign king and still provide a beautiful design with compelling interactions and acceptable performance impacts. Maybe, just maybe, we can even have a little bit of fun when crafting these systems!\n\nYes, a site with pure HTML content and no CSS will load very fast on your mobile phone, but it leaves a lot to be desired. If you went to your local library and every book looked the same, how would you know which one to borrow? 
Imagine if every book was printed on the same paper stock with the same cover page in the same type size set at a legible point value\u2026 how would you know if you were going to purchase a cookbook about wild game or a young adult story about teens fighting to the death?\n\nFor certain audiences, seeing a site with hip, lively hovers sure beats a stale website concept. I\u2019ve worked on many higher education sites, and setting the interactive options is often a very important factor in engaging potential students, alumni, and donors. The same can go for e-commerce sites: enticing your audience with surprise and delight factors can be the difference between a successful and a lost sale. \n\nKnowing your content and audience can help you decide if an intriguing experience is appropriate for your site; if it is, then hover responses can be a real asset. \n\nWhy hover?\n\nWe have all these capabilities with CSS properties to create the aforementioned fun interactions, and it would be quite easy to fall back into some old patterns and animation abuse. The world of Flash intros and skip links could be recreated with CSS keyframes. However, I don\u2019t think any of us want to go the route of forcing users into unwanted exchanges and road blocking content. \n\nWhat\u2019s great about utilizing hover to pair with CSS powered actions is that it\u2019s user initiated. It\u2019s a well-established expectation that when a user mouses over an object, something changes. If we can identify that something as a link, then we will expect something to change as we move our mouse over it. By waiting to trigger a CSS-based response until a user chooses to engage with a target makes for a more polished experience (as opposed to barraging our screens with animations all willy-nilly). This makes it the perfect opportunity to add some unique spunk. \n\nWhat about mobile, touch, and responsive?\n\nSo, you\u2019re on board with this so far, but what about mobile and touch devices? Sure, some devices like the Samsung Galaxy S4 have some hovering capabilities, but certainly most do not. Beyond mobile devices, we also have to worry about desktops with touch capabilities. It\u2019s super difficult to detect if a user is currently using touch or hover. One option we have is to design strictly for touch only and send hover enhancements to the graveyard. However, being that I\u2019m all \u201cfuck yeah hovers!,\u201d I like to explore all options. So, let\u2019s examine four different types of hover patterns and see how they can translate to our touch devices.\n\n1. The essential text hover\n\nChanging text color on hover is something we\u2019ve done for a while and it has helped aid in identifying links. To maintain the best accessibility we can achieve, it helps to have a different visual indicator on the default :link state, such as an underline. By making sure all text links have an underline, we won\u2019t have to rely on visual changes during hover to make sure touch device users know that it is a link. For hover-enabled devices, we can add a basic color transition. Doing so creates a nice fade, which makes the change on hover less jarring. Kinda like smooth jazz. 
The code* to achieve this is quite simple: \n\na {\n\tcolor: #6dd4b1;\n\ttransition: color 0.25s linear; \n}\n\na:hover, a:focus {\n\tcolor: #357099;\n}\n\n\n\tBrowser prefixes are omitted\n\n\nYou can see in the final result that, for both touch and hover, everyone wins: \n\nSee the Pen Most Basic Link Transition by Jenn Lukas (@Jenn) on CodePen\n\n \n\n2. Visual background wizardry and animated hovers\n\nWe can take this a step further by again making changes to our aesthetic on hover, but not making any content changes. Altering image hovers for fun and personality can separate your site from others; that personality is important and can enhance our content. \n\nLet\u2019s look at a few sites that do this really well. Scroll down to the judges section of CSS Off and check out the illustrations of the judges. On hover, the illustration fades into a photo of the judge. This provides a realistic alternative to the drawing. Users without the hover can click into the detail page, where they can see the full color picture and learn more about the judges; the information is still available through a different pathway. \n\nGoing back to the higher education field, let\u2019s visit Delaware Valley College. The school had recently gone through a rebranding that included loop icons as a symbol to connect ideas. These icons are brought into the website on hover of the slideshow arrows (WebKit browsers). The hover reveals a loop animation, tying in overall themes and adding some extra pizzazz that makes me think, \u201cThis is a hip place that feels current.\u201d For visitors who can\u2019t access the hover effect, the default arrow state clearly represents a clickable link, and there is swipe functionality on mobile devices to boot. \n\nDIY.org\u2019s Frontend Dev page has a bunch of enjoyable hover actions happening, featuring scaling transforms and looping animations. Nothing new is revealed on hover, so touch devices won\u2019t miss anything, but it intrigues the user who is visiting a site about front-end dev doing cool front-end things. It backs up its claim of front-end knowledge by adding this enhancement. \n\nThe old Cowork Chicago (now redirecting) had a great example, captured here:\n\n Coop: Chicago Coworking from Jenn Lukas on Vimeo.\n\nThe code for the Join areas is quite simple: \n\n.join-buttons .daily, .join-buttons .monthly { \n height: 260px; z-index: 0; margin-top: 30px;\n\ttransition: height .2s linear,margin .2s linear;\n}\n\n.join-buttons .daily:hover, .join-buttons .monthly:hover { \n\theight: 280px; margin-top: 20px; \n}\n\nli.button:hover { \n z-index: 20; \n}\n\nThe slight rotation on the photos, and the change of color and size of the rate options on hover, add to the fun factor. The site attempts to advertise the co-working space by letting bits of their charisma show through with these transitions. They don\u2019t hit the user over the head with animations, but provide a nice addition to make sure visitors know it\u2019s a welcoming place to work. Some text is added on the hover, but the text isn\u2019t essential to determine where the link goes.\n\n3. Image block hovers\n\nThere have been more designs popping up with large image blocks acting as extensive hit area links to subsequent pages. On hover of these links, text is revealed, letting the user know where the link destination goes. 
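\n\nThe reveal in the demo below is driven by transitioning max-height. A stripped-down sketch of the idea (the class names are just for illustration, and browser prefixes are again omitted) looks something like this:\n\n.block-link .details {\n\tmax-height: 0;\n\toverflow: hidden;\n\ttransition: max-height 0.25s linear;\n}\n\n.block-link:hover .details,\n.block-link:focus .details {\n\t/* larger than the revealed text will ever need, so it can expand fully */\n\tmax-height: 300px;\n}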
\n\nSee the Pen Transitioning Max Height by Jenn Lukas (@Jenn) on CodePen\n\nThis type of link is tough for users on touch as the image might not provide enough context to reveal its target. If you weren\u2019t aware of what my illustrated avatar from 2007 looked like (or even if you did), then how would you know that this is a link to my Twitter page? Instead, if we provide enough context \u2014 such as the @jennlukas handle \u2014 you could assume the destination. Users who receive the hover can also see the Twitter bio. It won\u2019t break the experience for users that can\u2019t hover, but it will provide a nice interaction and some more information for those that can. \n\nSee the Pen Transitioning Max Height by Jenn Lukas (@Jenn) on CodePen\n\nThe Esquire site follows this same pattern, in which the title of the story is shown and the subheading is revealed on hover. Dining at Altitude took the opposite approach, where all text is shown by default and, on hover, you can see more of the image that the text sits atop. This is a nice technique to follow. For touch users, following the link will allow them to see more of the image detail that was revealed on hover. \n\n4. Drop-down navigation menu hovers\n\nMain navigation options that rely on hover have come up as a problem for touch. One way to address this is to be sure your top level items are all functional links to somewhere, and not blank anchors to trigger a submenu drop-down. This ensures that, even without the hover-triggered menu, users can still navigate to those top-level pages. From there, they should be able to access the tertiary pages shown in the drop-down. Following this arrangement, drop-down menus act as a quick shortcut and aren\u2019t necessary to the navigational structure. If the top navigation items are your most visited pages, this execution won\u2019t hinder your visitors. \n\nIf the information within the menu is vital, such as a lone account menu, another option is to show drop-down menus on click instead of hover. This pattern will allow both mouse and touch users to access the drop-downs. \n\nWhy can\u2019t we just detect hover?\n\nThis is a really tricky thing to do. Internet Explorer 10 on Windows 8 uses the aria-haspopup attribute to simulate hover on touch devices, but usually our audience stretches beyond that group. There\u2019s been discussion around using Modernizr, but false positives have come with that. A W3C draft for Media Queries Level 4 includes a hover feature, but it\u2019s not supported yet. Since some devices can hover and touch, should you rely on hover effects for those? Arguments have come up that users can be browsing your site with a mouse and then decide to switch to touch, or vice versa. That might be a large concern for you, or it might be an edge case that isn\u2019t vital to your site\u2019s success. \n\nFor one site, I used mousemove and touchstart JavaScript events in order to detect if a visitor starts to browse the site with a mouse. The design initiates for touch users, showing all text on load, but as soon as a mouse movement occurs, the text becomes hidden and is then revealed on hover. \n\nSee the Pen Detect Touch devices with mousemove and touchstart by Jenn Lukas (@Jenn) on CodePen\n\nOne downside to this approach is that the text is viewable until a mouse enters the document, but if the elements are further down the page it might not be noticed. 
A second downside is if a user on a touch- and hover-enabled device starts browsing with the mouse and then switches back to touch, the hover-centric styles will remain until a new page load. These were acceptable scenarios in the project I worked on, but might not be for every project. \n\nCan we give our visitors a choice?\n\nI\u2019ve been thinking about how we can combat the concern of not knowing if our customers are using touch or a mouse, not to mention keyboard or Wacom tablets or Minority Report screens. We can cover keyboards with our friend :focus, but that still doesn\u2019t solve our other dilemmas. \n\nRemember when we couldn\u2019t rely on browsers to zoom text and we had to use those small A, medium A, big A [AAA] buttons? On selection of one of those options, a different style sheet would load with small, medium, or large text sizes to satisfy our user\u2019s request. We could even set cookies to remember their font choices. What if we offered a similar solution, a hover/touch switcher, for our new predicament? \n\nSee the Pen cwuJf by Jenn Lukas (@Jenn) on CodePen\n\nWe could add this switcher to our design. Maybe add it to the header on smaller screens and the footer on larger screens to play the odds. Then be sure to deliver the appropriate touch- or hover-optimized adventure for our guests.\n\nHow about adding View options in the areas where we\u2019re hiding content until hover? Looking at Delta Cycle, there\u2019s logic in place to switch layouts on some mobile devices. On desktops we can see the layout shows the product and price by default, and the name of the item and an Add to cart button on hover. If you want to keep this hover, but also worry that touch users can\u2019t access it \u2014 or even if you are concerned that people might want to view it with more details up front \u2014 we could add another view switcher. \n\nSee the Pen List/Grid Views for Hover or Touch by Jenn Lukas (@Jenn) on CodePen\n\nSimilar to the list versus grid view we often see in operating systems, a choice here could cover all of our bases. \n\nConclusion\n\nThere is no one-size-fits-all solution when it comes to hover patterns. Design for your content. If you are providing important information about driving directions or healthcare, you might want to err on the side of designing for touch only. If you are behind an educational site and trying to entice more traffic and sign-ups, or a more immersive e-commerce site selling pies, then hover activity can help support your content and engage your visitors without being a detriment. While content can be our top priority, let\u2019s not forget that our designs and interactions, hovers included, can have a great positive impact on how visitors experience our site. 
Hover wisely, friends.", "year": "2013", "author": "Jenn Lukas", "author_slug": "jennlukas", "published": "2013-12-12T00:00:00+00:00", "url": "https://24ways.org/2013/the-responsive-hover-paradigm/", "topic": null} {"rowid": 18, "title": "Grunt for People Who Think Things Like Grunt are Weird and Hard", "contents": "Front-end developers are often told to do certain things:\n\n\n\tWork in as small chunks of CSS and JavaScript as makes sense to you, then concatenate them together for the production website.\n\tCompress your CSS and minify your JavaScript to make their file sizes as small as possible for your production website.\n\tOptimize your images to reduce their file size without affecting quality.\n\tUse Sass for CSS authoring because of all the useful abstraction it allows.\n\n\nThat\u2019s not a comprehensive list of course, but those are the kind of things we need to do. You might call them tasks.\n\nI bet you\u2019ve heard of Grunt. Well, Grunt is a task runner. Grunt can do all of those things for you. Once you\u2019ve got it set up, which isn\u2019t particularly difficult, those things can happen automatically without you having to think about them again.\n\nBut let\u2019s face it: Grunt is one of those fancy newfangled things that all the cool kids seem to be using but at first glance feels strange and intimidating. I hear you. This article is for you.\n\nLet\u2019s nip some misconceptions in the bud right away\n\nPerhaps you\u2019ve heard of Grunt, but haven\u2019t done anything with it. I\u2019m sure that applies to many of you. Maybe one of the following hang-ups applies to you.\n\nI don\u2019t need the things Grunt does\n\nYou probably do, actually. Check out that list up top. Those things aren\u2019t nice-to-haves. They are pretty vital parts of website development these days. If you already do all of them, that\u2019s awesome. Perhaps you use a variety of different tools to accomplish them. Grunt can help bring them under one roof, so to speak. If you don\u2019t already do all of them, you probably should and Grunt can help. Then, once you are doing those, you can keep using Grunt to do more for you, which will basically make you better at doing your job.\n\nGrunt runs on Node.js \u2014 I don\u2019t know Node\n\nYou don\u2019t have to know Node. Just like you don\u2019t have to know Ruby to use Sass. Or PHP to use WordPress. Or C++ to use Microsoft Word.\n\nI have other ways to do the things Grunt could do for me\n\nAre they all organized in one place, configured to run automatically when needed, and shared among every single person working on that project? Unlikely, I\u2019d venture.\n\nGrunt is a command line tool \u2014 I\u2019m just a designer\n\nI\u2019m a designer too. I prefer native apps with graphical interfaces when I can get them. But I don\u2019t think that\u2019s going to happen with Grunt1.\n\nThe extent to which you need to use the command line is:\n\n\n\tNavigate to your project\u2019s directory.\n\tType grunt and press Return.\n\n\nAfter set-up, that is, which again isn\u2019t particularly difficult.\n\nOK. Let\u2019s get Grunt installed\n\nNode is indeed a prerequisite for Grunt. If you don\u2019t have Node installed, don\u2019t worry, it\u2019s very easy. You literally download an installer and run it. Click the big Install button on the Node website.\n\nYou install Grunt on a per-project basis. Go to your project\u2019s folder. It needs a file there named package.json at the root level. 
You can just create one and put it there.\n\n package.json at root\n\nThe contents of that file should be this:\n\n{\n \"name\": \"example-project\",\n \"version\": \"0.1.0\",\n \"devDependencies\": {\n \"grunt\": \"~0.4.1\"\n }\n}\n\nFeel free to change the name of the project and the version, but the devDependencies thing needs to be in there just like that.\n\nThis is how Node does dependencies. Node has a package manager called NPM (Node packaged modules) for managing Node dependencies (like a gem for Ruby if you\u2019re familiar with that). You could even think of it a bit like a plug-in for WordPress.\n\nOnce that package.json file is in place, go to the terminal and navigate to your folder. Terminal rubes like me do it like this:\n\n Terminal rube changing directories\n\nThen run the command:\n\nnpm install\n\nAfter you\u2019ve run that command, a new folder called node_modules will show up in your project.\n\n Example of node_modules folder\n\nThe other files you see there, README.md and LICENSE are there because I\u2019m going to put this project on GitHub and that\u2019s just standard fare there.\n\nThe last installation step is to install the Grunt CLI (command line interface). That\u2019s what makes the grunt command in the terminal work. Without it, typing grunt will net you a \u201cCommand Not Found\u201d-style error. It is a separate installation for efficiency reasons. Otherwise, if you had ten projects you\u2019d have ten copies of Grunt CLI.\n\nThis is a one-liner again. Just run this command in the terminal:\n\nnpm install -g grunt-cli\n\nYou should close and reopen the terminal as well. That\u2019s a generic good practice to make sure things are working right. Kinda like restarting your computer after you install a new application was in the olden days.\n\nLet\u2019s make Grunt concatenate some files\n\nPerhaps in our project there are three separate JavaScript files:\n\n\n\tjquery.js \u2013 The library we are using.\n\tcarousel.js \u2013 A jQuery plug-in we are using.\n\tglobal.js \u2013 Our authored JavaScript file where we configure and call the plug-in.\n\n\nIn production, we would concatenate all those files together for performance reasons (one request is better than three). We need to tell Grunt to do this for us.\n\nBut wait. Grunt actually doesn\u2019t do anything all by itself. Remember Grunt is a task runner. The tasks themselves we will need to add. We actually haven\u2019t set up Grunt to do anything yet, so let\u2019s do that.\n\nThe official Grunt plug-in for concatenating files is grunt-contrib-concat. You can read about it on GitHub if you want, but all you have to do to use it on your project is to run this command from the terminal (it will henceforth go without saying that you need to run the given commands from your project\u2019s root folder):\n\nnpm install grunt-contrib-concat --save-dev\n\nA neat thing about doing it this way: your package.json file will automatically be updated to include this new dependency. Open it up and check it out. You\u2019ll see a new line:\n\n\"grunt-contrib-concat\": \"~0.3.0\"\n\nNow we\u2019re ready to use it. To use it we need to start configuring Grunt and telling it what to do.\n\nYou tell Grunt what to do via a configuration file named Gruntfile.js2\n\nJust like our package.json file, our Gruntfile.js has a very special format that must be just right. I wouldn\u2019t worry about what every word of this means. Just check out the format:\n\nmodule.exports = function(grunt) {\n\n // 1. 
All configuration goes here \n grunt.initConfig({\n pkg: grunt.file.readJSON('package.json'),\n\n concat: {\n // 2. Configuration for concatenating files goes here.\n }\n\n });\n\n // 3. Where we tell Grunt we plan to use this plug-in.\n grunt.loadNpmTasks('grunt-contrib-concat');\n\n // 4. Where we tell Grunt what to do when we type \"grunt\" into the terminal.\n grunt.registerTask('default', ['concat']);\n\n};\n\nNow we need to create that configuration. The documentation can be overwhelming. Let\u2019s focus just on the very simple usage example.\n\nRemember, we have three JavaScript files we\u2019re trying to concatenate. We\u2019ll list file paths to them under src in an array of file paths (as quoted strings) and then we\u2019ll list a destination file as dest. The destination file doesn\u2019t have to exist yet. It will be created when this task runs and squishes all the files together.\n\nBoth our jquery.js and carousel.js files are libraries. We most likely won\u2019t be touching them. So, for organization, we\u2019ll keep them in a /js/libs/ folder. Our global.js file is where we write our own code, so that will be right in the /js/ folder. Now let\u2019s tell Grunt to find all those files and squish them together into a single file named production.js, named that way to indicate it is for use on our real live website.\n\nconcat: { \n dist: {\n src: [\n 'js/libs/*.js', // All JS in the libs folder\n 'js/global.js' // This specific file\n ],\n dest: 'js/build/production.js',\n }\n}\n\nNote: throughout this article there will be little chunks of configuration code like above. The intention is to focus in on the important bits, but it can be confusing at first to see how a particular chunk fits into the larger file. If you ever get confused and need more context, refer to the complete file.\n\nWith that concat configuration in place, head over to the terminal, run the command:\n\ngrunt\n\nand watch it happen! production.js will be created and will be a perfect concatenation of our three files. This was a big aha! moment for me. Feel the power course through your veins. Let\u2019s do more things!\n\nLet\u2019s make Grunt minify that JavaScript\n\nWe have so much prep work done now that adding new tasks for Grunt to run is relatively easy. We just need to:\n\n\n\tFind a Grunt plug-in to do what we want\n\tLearn the configuration style of that plug-in\n\tWrite that configuration to work with our project\n\n\nThe official plug-in for minifying code is grunt-contrib-uglify. Just like we did last time, we just run an NPM command to install it:\n\nnpm install grunt-contrib-uglify --save-dev\n\nThen we alter our Gruntfile.js to load the plug-in:\n\ngrunt.loadNpmTasks('grunt-contrib-uglify');\n\nThen we configure it:\n\nuglify: {\n build: {\n src: 'js/build/production.js',\n dest: 'js/build/production.min.js'\n }\n}\n\nLet\u2019s update that default task to also run minification:\n\ngrunt.registerTask('default', ['concat', 'uglify']);\n\nSuper-similar to the concatenation set-up, right?\n\nRun grunt at the terminal and you\u2019ll get some deliciously minified JavaScript:\n\n Minified JavaScript\n\nThat production.min.js file is what we would load up for use in our index.html file.\n\nLet\u2019s make Grunt optimize our images\n\nWe\u2019ve got this down pat now. Let\u2019s just go through the motions. The official image minification plug-in for Grunt is grunt-contrib-imagemin.
Install it:\n\nnpm install grunt-contrib-imagemin --save-dev\n\nRegister it in the Gruntfile.js:\n\ngrunt.loadNpmTasks('grunt-contrib-imagemin');\n\nConfigure it:\n\nimagemin: {\n dynamic: {\n files: [{\n expand: true,\n cwd: 'images/',\n src: ['**/*.{png,jpg,gif}'],\n dest: 'images/build/'\n }]\n }\n}\n\nMake sure it runs:\n\ngrunt.registerTask('default', ['concat', 'uglify', 'imagemin']);\n\nRun grunt and watch that gorgeous squishification happen:\n\n Squished images\n\nGotta love performance increases for nearly zero effort.\n\nLet\u2019s get a little bit smarter and automate\n\nWhat we\u2019ve done so far is awesome and incredibly useful. But there are a couple of things we can get smarter on and make things easier on ourselves, as well as Grunt:\n\n\n\tRun these tasks automatically when they should\n\tRun only the tasks needed at the time\n\n\nFor instance:\n\n\n\tConcatenate and minify JavaScript when JavaScript changes\n\tOptimize images when a new image is added or an existing one changes\n\n\nWe can do this by watching files. We can tell Grunt to keep an eye out for changes to specific places and, when changes happen in those places, run specific tasks. Watching happens through the official grunt-contrib-watch plugin.\n\nI\u2019ll let you install it. It is exactly the same process as the last few plug-ins we installed. We configure it by giving watch specific files (or folders, or both) to watch. By watch, I mean monitor for file changes, file deletions or file additions. Then we tell it what tasks we want to run when it detects a change.\n\nWe want to run our concatenation and minification when anything in the /js/ folder changes. When it does, we should run the JavaScript-related tasks. And when things happen elsewhere, we should not run the JavaScript-related tasks, because that would be irrelevant. So:\n\nwatch: {\n scripts: {\n files: ['js/*.js'],\n tasks: ['concat', 'uglify'],\n options: {\n spawn: false,\n },\n } \n}\n\nFeels pretty comfortable at this point, hey? The only weird bit there is the spawn thing. And you know what? I don\u2019t even really know what that does. From what I understand from the documentation it is the smart default. That\u2019s real-world development. Just leave it alone if it\u2019s working and if it\u2019s not, learn more.\n\nNote: Isn\u2019t it frustrating when something that looks so easy in a tutorial doesn\u2019t seem to work for you? If you can\u2019t get Grunt to run after making a change, it\u2019s very likely to be a syntax error in your Gruntfile.js. That might look like this in the terminal:\n\n Errors running Grunt\n\nUsually Grunt is pretty good about letting you know what happened, so be sure to read the error message. In this case, a syntax error in the form of a missing comma foiled me. Adding the comma allowed it to run.\n\nLet\u2019s make Grunt do our preprocessing\n\nThe last thing on our list from the top of the article is using Sass \u2014 yet another task Grunt is well-suited to run for us. But wait? Isn\u2019t Sass technically in Ruby? Indeed it is. There is a version of Sass that will run in Node and thus not add an additional dependency to our project, but it\u2019s not quite up-to-snuff with the main Ruby project. So, we\u2019ll use the official grunt-contrib-sass plug-in which just assumes you have Sass installed on your machine. If you don\u2019t, follow the command line instructions.\n\nWhat\u2019s neat about Sass is that it can do concatenation and minification all by itself. 
So for our little project we can just have it compile our main global.scss file:\n\nsass: {\n dist: {\n options: {\n style: 'compressed'\n },\n files: {\n 'css/build/global.css': 'css/global.scss'\n }\n } \n}\n\nWe wouldn\u2019t want to manually run this task. We already have the watch plug-in installed, so let\u2019s use it! Within the watch configuration, we\u2019ll add another subtask:\n\ncss: {\n files: ['css/*.scss'],\n tasks: ['sass'],\n options: {\n spawn: false,\n }\n}\n\nThat\u2019ll do it. Now, every time we change any of our Sass files, the CSS will automatically be updated.\n\nLet\u2019s take this one step further (it\u2019s absolutely worth it) and add LiveReload. With LiveReload, you won\u2019t have to go back to your browser and refresh the page. Page refreshes happen automatically and in the case of CSS, new styles are injected without a page refresh (handy for heavily state-based websites).\n\nIt\u2019s very easy to set up, since the LiveReload ability is built into the watch plug-in. We just need to:\n\n\nInstall the browser plug-in\nAdd to the top of the watch configuration:\n watch: {\n options: {\n livereload: true,\n },\n scripts: { \n /* etc */\n\nRestart the browser and click the LiveReload icon to activate it.\nUpdate some Sass and watch it change the page automatically.\n\n\n Live reloading browser\n\nYum.\n\nPrefer a video?\n\nIf you\u2019re the type that likes to learn by watching, I\u2019ve made a screencast to accompany this article that I\u2019ve published over on CSS-Tricks: First Moments with Grunt\n\nLeveling up\n\nAs you might imagine, there is a lot of leveling up you can do with your build process. It surely could be a full-time job in some organizations.\n\nSome hardcore devops nerds might scoff at the simplistic setup we have going here. But I\u2019d advise them to slow their roll. Even what we have done so far is tremendously valuable. And don\u2019t forget this is all free and open source, which is amazing.\n\nYou might level up by adding more useful tasks:\n\n\n\tRunning your CSS through Autoprefixer (A+ Would recommend) instead of preprocessor add-ons.\n\tWriting and running JavaScript unit tests (example: Jasmine).\n\tBuild your image sprites and SVG icons automatically (example: Grunticon).\n\tStart a server, so you can link to assets with proper file paths and use services that require a real URL like TypeKit and such, as well as remove the need for other tools that do this, like MAMP.\n\tCheck for code problems with HTML-Inspector, CSS Lint, or JS Hint.\n\tHave new CSS be automatically injected into the browser whenever it changes.\n\tHelp you commit or push to a version control repository like GitHub.\n\tAdd version numbers to your assets (cache busting).\n\tHelp you deploy to a staging or production environment (example: DPLOY).\n\n\nYou might level up by simply understanding more about Grunt itself:\n\n\n\tRead Grunt Boilerplate by Mark McDonnell.\n\tRead Grunt Tips and Tricks by Nicolas Bevacqua.\n\tOrganize your Gruntfile.js by splitting it up into smaller files.\n\tCheck out other people\u2019s and projects\u2019 Gruntfile.js.\n\tLearn more about Grunt by digging into its source and learning about its API.\n\n\nLet\u2019s share\n\nI think some group sharing would be a nice way to wrap this up. If you are installing Grunt for the first time (or remember doing that), be especially mindful of little frustrating things you experience(d) but work(ed) through. Those are the things we should share in the comments here.
That way we have this safe place and useful resource for working through those confusing moments without the embarrassment. We\u2019re all in this thing together!\n\n \n\n1 Maybe someday someone will make a beautiful Grunt app for your operating system of choice. But I\u2019m not sure that day will come. The configuration of the plug-ins is the important part of using Grunt. Each plug-in is a bit different, depending on what it does. That means a uniquely considered UI for every single plug-in, which is a long shot.\n\nPerhaps a decent middleground is this Grunt DevTools Chrome add-on.\n\n2 Gruntfile.js is often referred to as Gruntfile in documentation and examples. Don\u2019t literally name it Gruntfile \u2014 it won\u2019t work.", "year": "2013", "author": "Chris Coyier", "author_slug": "chriscoyier", "published": "2013-12-11T00:00:00+00:00", "url": "https://24ways.org/2013/grunt-is-not-weird-and-hard/", "topic": "code"} {"rowid": 1, "title": "Why Bother with Accessibility?", "contents": "Web accessibility (known in other fields as inclusive design or universal design) is the degree to which a website is available to as many people as possible. Accessibility is most often used to describe how people with disabilities can access the web.\n\nHow we approach accessibility\n\nIn the web community, there\u2019s a surprisingly inconsistent approach to accessibility. There are some who are endlessly dedicated to accessible web design, and there are some who believe it so intrinsic to the web that it shouldn\u2019t be considered a separate topic. Still, of those who are familiar with accessibility, there\u2019s an overwhelming number of designers, developers, clients and bosses who just aren\u2019t that bothered.\n\nOver the last few months I\u2019ve spoken to a lot of people about accessibility, and I\u2019ve heard the same reasons to ignore it over and over again. Let\u2019s take a look at the most common excuses.\n\nExcuse 1: \u201cPeople with disabilities don\u2019t really use the web\u201d\n\nAccessibility will make your site available to more people \u2014 the inclusion case\n\nIn the same way that the accessibility of a building isn\u2019t just about access for wheelchair users, web accessibility isn\u2019t just about blind users and screen readers. We can affect positively the lives of many people by making their access to the web easier.\n\nThere are four main types of disability that affect use of the web:\n\n\n\tVisual\n\tBlindness, low vision and colour-blindness\n\tAuditory\n\tProfoundly deaf and hard of hearing\n\tMotor\n\tThe inability to use a mouse, slow response time, limited fine motor control\n\tCognitive\n\tLearning difficulties, distractibility, the inability to focus on large amounts of information\n\n\nNone of these disabilities are completely black and white\n\nExamining deafness, it\u2019s clear from the medical scale that there are many grey areas between full hearing and total deafness:\n\n\n\tmild\n\tmoderate\n\tmoderately severe\n\tsevere\n\tprofound\n\ttotally deaf\n\n\nFor eyesight, and brain conditions that affect what users see, there is a huge range of conditions and challenges:\n\n\n\tastigmatism\n\tcolour blindness\n\takinetopsia (motion blindness)\n\tscotopic visual sensitivity (visual stress related to light)\n\tvisual agnosia (impaired recognition or identification of objects)\n\n\nWhile we might have medical and government-recognised definitions that tell us what makes a disability, day-to-day life is not so straightforward. 
People experience varying degrees of different conditions, and often one or more conditions at a time, creating a false divide when you view disability in terms of us and them.\n\nImpairments aren\u2019t always permanent\n\nAs we age, we\u2019re more likely to experience different levels of visual, auditory, motor and cognitive impairments. We might have an accident or illness that affects us temporarily. We might struggle more earlier or later in the day. There are so many little physiological factors that affect the way people interact with the web that we can\u2019t afford to make any assumptions based on our own limited experiences.\n\nImpairments might be somewhere between the user and the website\n\nThere are also impairments that aren\u2019t directly related to the user. Environmental factors have a huge effect on the way people interact with the web. These could be:\n\n\n\tLow bandwidth, or intermittent internet connection\n\tBright light, rain, or other weather-based conditions\n\tNoisy environments, or a location where the user doesn\u2019t want to disturb their neighbours with sound\n\tBrowsing with mobile devices, games consoles and other non-desktop devices\n\tBrowsing with legacy browsers or operating systems\n\n\nSuch environmental factors show that it\u2019s not just those with physical impairments who benefit from more accessible websites. We started designing responsive websites so we could be more future-friendly, and with a shared goal of better optimised experiences, accessibility should be at the core of responsive web design.\n\nExcuse 2: \u201cWe don\u2019t want to affect the experience for the majority of our users\u201d\n\nAccessibility will improve your site for all your users \u2014 the usability case\n\nOn a basic level, the different disability groups, as shown in the inclusion case, equate to simple usability goals:\n\n\n\tVisual \u2013 make it easy to read\n\tAuditory \u2013 make it easy to hear\n\tMotor \u2013 make it easy to interact\n\tCognitive \u2013 make it easy to understand and focus\n\n\nTaking care to ensure good usability in these areas will also have an impact on accessibility. Unless your site is catering specifically to a particular disability, where extreme optimisation is most beneficial, taking care to design with accessibility in mind will rarely negatively affect the experience of your wider audience.\n\nExcuse 3: \u201cWe don\u2019t have the budget for accessibility\u201d\n\nAccessibility will make you money \u2014 the business case\n\nBy reducing your audience through ignoring accessibility, you\u2019re potentially excluding the income from those users. Designing with accessibility in mind from the beginning of a project makes it easier to make small inexpensive optimisations as part of the design and development process, rather than bolting on costly updates to increase your potential audience later on.\n\nThe following are excerpts from a white paper about companies that increased the accessibility of their websites to comply with government regulation.\n\n\n\tImprovements in accessibility doubled Legal and General\u2019s life insurance sales online.\n\n\n\n\tImprovements in accessibility increased Tesco\u2019s grocery home delivery sales by \u00a313 million in 2005\u2026 To their surprise they found that many normal visitors preferred the ease of navigation and improved simplicity of the [parallel] accessible site and switched to use it. 
Tesco have replaced their \u2018normal\u2019 site with their accessible version and expect a further increase in revenues.\n\n\n\n\tImprovements in accessibility increased Virgin.net sales by 68%.\n\n\nStatistics all from WSI white paper: Improve your website\u2019s usability and accessibility to increase sales (PDF).\n\nExcuse 4: \u201cAccessible websites are ugly\u201d\n\nAccessibility won\u2019t stop your site from being beautiful \u2014 the beauty case\n\nMany people use ugly accessible websites as proof that all accessible websites are ugly. This just isn\u2019t the case. I\u2019ve compiled some examples of beautiful and accessible websites with screenshots of how they look through the Color Oracle simulator and how they perform when run through Webaim\u2019s Wave accessibility checker tool.\n\nWhile automated tools are no substitute for real users, they can help you learn more about good practices, and give you guidance on where your site needs improvements to make it more accessible.\n\nAmazon.co.uk\n\nIt may not be a decorated beauty, but Amazon is often first in functional design. It\u2019s a huge website with a lot of interactive content, but it generates just five errors on the Wave test, and is easy to read under a Color Oracle filter.\n\n Screenshot of Amazon website\n Screenshot of Amazon\u2019s Wave results \u2013 five errors\n Screenshot of Amazon through a Color Oracle filter\n\n24 ways\n\nWhen Tim Van Damme redesigned 24 ways back in 2007, it was a striking and unusual design that showed what could be achieved with CSS and some imagination. Despite the complexity of the design, it gets an outstanding zero errors on the Wave test, and is still readable under a Color Oracle filter.\n\n Screenshot of pre-2013 24 ways website design\n Screenshot of 24 ways Wave results \u2013 zero errors\n Screenshot of 24ways through a Color Oracle filter\n\nOpera\u2019s Shiny Demos\n\nDemos and prototypes are notorious for ignoring accessibility, but Opera\u2019s Shiny Demos site shows how exploring new technologies doesn\u2019t have to exclude anyone. It only gets one error on the Wave test, and looks fine under a Color Oracle filter.\n\n Screenshot of Opera\u2019s Shiny Demos website\n Screenshot of Opera\u2019s Shiny Demos Wave results \u2013 1 error\n Screenshot of Opera\u2019s Shiny Demos through a Color Oracle filter\n\nSoundCloud\n\nWhen a site is more app-like, relying on more interaction from the user, accessibility can be more challenging. However, SoundCloud only gets one error on the Wave test, and the colour contrast holds up well under a Color Oracle filter.\n\n Screenshot of SoundCloud website\n Screenshot of SoundCloud\u2019s Wave results \u2013 one error\n Screenshot of SoundCloud through a Color Oracle filter\n\nEducation and balance\n\nAs with most web design, doing accessibility well is about combining your knowledge of accessibility with your project\u2019s context to create a balance that serves your users\u2019 needs. Your types of content and interactions will dictate one set of constraints. Your users\u2019 needs and goals will dictate another. In broad terms, web design as a practice is finding the equilibrium between these constraints.\n\nAnd then there\u2019s just caring. The web as a platform is open, affordable and available to many. 
Accessibility is our way to ensure that nobody gets shut out.", "year": "2013", "author": "Laura Kalbag", "author_slug": "laurakalbag", "published": "2013-12-10T00:00:00+00:00", "url": "https://24ways.org/2013/why-bother-with-accessibility/", "topic": "design"} {"rowid": 21, "title": "Keeping Parts of Your Codebase Private on GitHub", "contents": "Open source is brilliant, there\u2019s no denying that, and GitHub has been instrumental in open source\u2019s recent success. I\u2019m a keen open-sourcerer myself, and I have a number of projects on GitHub. However, as great as sharing code is, we often want to keep some projects to ourselves. To this end, GitHub created private repositories which act like any other Git repository, only, well, private!\n\nA slightly less common issue, and one I\u2019ve come up against myself, is the desire to only keep certain parts of a codebase private. A great example would be my site, CSS Wizardry; I want the code to be open source so that people can poke through and learn from it, but I want to keep any draft blog posts private until they are ready to go live. Thankfully, there is a very simple solution to this particular problem: using multiple remotes.\n\nBefore we begin, it\u2019s worth noting that you can actually build a GitHub Pages site from a private repo. You can keep the entire source private, but still have GitHub build and display a full Pages/Jekyll site. I do this with csswizardry.net. This post will deal with the more specific problem of keeping only certain parts of the codebase (branches) private, and expose parts of it as either an open source project, or a built GitHub Pages site.\n\nN.B. This post requires some basic Git knowledge.\n\nAdding your public remote\n\nLet\u2019s assume you\u2019re starting from scratch and you currently have no repos set up for your project. (If you do already have your public repo set up, skip to the \u201cAdding your private remote\u201d section.)\n\nSo, we have a clean slate: nothing has been set up yet, we\u2019re doing all of that now. On GitHub, create two repositories. For the sake of this article we shall call them site.com and private.site.com. Make the site.com repo public, and the private.site.com repo private (you will need a paid GitHub account).\n\nOn your machine, create the site.com directory, in which your project will live. Do your initial work in there, commit some stuff \u2014 whatever you need to do. Now we need to link this local Git repo on your machine with the public repo (remote) on GitHub. We should all be used to this:\n\n$ git remote add origin git@github.com:[user]/site.com.git\n\nHere we are simply telling Git to add a remote called origin which lives at git@github.com:[user]/site.com.git. Simple stuff. Now we need to push our current branch (which will be master, unless you\u2019ve explicitly changed it) to that remote:\n\n$ git push -u origin master\n\nHere we are telling Git to push our master branch to a corresponding master branch on the remote called origin, which we just added. The -u sets upstream tracking, which basically tells Git to always shuttle code on this branch between the local master branch and the master branch on the origin remote. Without upstream tracking, you would have to tell Git where to push code to (and pull it from) every time you ran the push or pull commands. This sets up a permanent bond, if you like.\n\nThis is really simple stuff, stuff that you will probably have done a hundred times before as a Git user. 
Now to set up our private remote.\n\nAdding your private remote\n\nWe\u2019ve set up our public, open source repository on GitHub, and linked that to the repository on our machine. All of this code will be publicly viewable on GitHub.com. (Remember, GitHub is just a host of regular Git repositories, which also puts a nice GUI around it all.) We want to add the ability to keep certain parts of the codebase private. What we do now is add another remote repository to the same local repository. We have two repos on GitHub (site.com and private.site.com), but only one repository (and, therefore, one directory) on our machine. Two GitHub repos, and one local one.\n\nIn your local repo, check out a new branch. For the sake of this article we shall call the branch dev. This branch might contain work in progress, or draft blog posts, or anything you don\u2019t want to be made publicly viewable on GitHub.com. The contents of this branch will, in a moment, live in our private repository.\n\n$ git checkout -b dev\n\nWe have now made a new branch called dev off the branch we were on last (master, unless you renamed it).\n\nNow we need to add our private remote (private.site.com) so that, in a second, we can send this branch to that remote:\n\n$ git remote add private git@github.com:[user]/private.site.com.git\n\nLike before, we are just telling Git to add a new remote to this repo, only this time we\u2019ve called it private and it lives at git@github.com:[user]/private.site.com.git. We now have one local repo on our machine which has two remote repositories associated with it.\n\nNow we need to tell our dev branch to push to our private remote:\n\n$ git push -u private dev\n\nHere, as before, we are pushing some code to a repo. We are saying that we want to push the dev branch to the private remote, and, once again, we\u2019ve set up upstream tracking. This means that, by default, the dev branch will only push and pull to and from the private remote (unless you ever explicitly state otherwise).\n\nNow you have two branches (master and dev respectively) that push to two remotes (origin and private respectively) which are public and private respectively.\n\nAny work we do on the master branch will push and pull to and from our publicly viewable remote, and any code on the dev branch will push and pull from our private, hidden remote.\n\nAdding more branches\n\nSo far we\u2019ve only looked at two branches pushing to two remotes, but this workflow can grow as much or as little as you\u2019d like. Of course, you\u2019d never do all your work in only two branches, so you might want to push any number of them to either your public or private remotes. Let\u2019s imagine we want to create a branch to try something out real quickly:\n\n$ git checkout -b test\n\nNow, when we come to push this branch, we can choose which remote we send it to:\n\n$ git push -u private test\n\nThis pushes the new test branch to our private remote (again, setting the persistent tracking with -u).\n\nYou can have as many or as few remotes or branches as you like.\n\nCombining the two\n\nLet\u2019s say you\u2019ve been working on a new feature in private for a few days, and you\u2019ve kept that on the private remote. You\u2019ve now finalised the addition and want to move it into your public repo. This is just a simple merge. 
Check out your master branch:\n\n$ git checkout master\n\nThen merge in the branch that contained the feature:\n\n$ git merge dev\n\nNow master contains the commits that were made on dev and, once you\u2019ve pushed master to its remote, those commits will be viewable publicly on GitHub:\n\n$ git push\n\nNote that we can just run $ git push on the master branch as we\u2019d previously set up our upstream tracking (-u).\n\nMultiple machines\n\nSo far this has covered working on just one machine; we had two GitHub remotes and one local repository. Let\u2019s say you\u2019ve got yourself a new Mac (yay!) and you want to clone an existing project:\n\n$ git clone git@github.com:[user]/site.com.git\n\nThis will not clone any information about the remotes you had set up on the previous machine. Here you have a fresh clone of the public project and you will need to add the private remote to it again, as above.\n\nDone!\n\nIf you\u2019d like to see me blitz through all that in one go, check the showterm recording.\n\nThe beauty of this is that we can still share our code, but we don\u2019t have to develop quite so openly all of the time. Building a framework with a killer new feature? Keep it in a private branch until it\u2019s ready for merge. Have a blog post in a Jekyll site that you\u2019re not ready to make live? Keep it in a private drafts branch. Working on a new feature for your personal site? Tuck it away until it\u2019s finished. Need a staging area for a Pages-powered site? Make a staging remote with its own custom domain.\n\nAll this boils down to, really, is the fact that you can bring multiple remotes together into one local codebase on your machine. What you do with them is entirely up to you!", "year": "2013", "author": "Harry Roberts", "author_slug": "harryroberts", "published": "2013-12-09T00:00:00+00:00", "url": "https://24ways.org/2013/keeping-parts-of-your-codebase-private-on-github/", "topic": "code"} {"rowid": 24, "title": "Kill It With Fire! What To Do With Those Dreaded FAQs", "contents": "In the mid-1640s, a man named Matthew Hopkins attempted to rid England of the devil\u2019s influence, primarily by demanding payment for the service of tying women to chairs and tossing them into lakes.\n\nUnsurprisingly, his methods garnered criticism. Hopkins defended himself\u00a0in The Discovery of Witches\u00a0in 1647, subtitled \u201cCertaine Queries answered, which have been and are likely to be objected against MATTHEW HOPKINS, in his way of finding out Witches.\u201d\n\nEach \u201cquerie\u201d was written in the voice of an imagined detractor, and answered in the voice of an imagined defender (always referring to himself as \u201cthe discoverer,\u201d or \u201chim\u201d):\n\n\n\tQuer. 14.\n\n\tAll that the witch-finder doth is to fleece the country of their money, and therefore rides and goes to townes to have imployment, and promiseth them faire promises, and it may be doth nothing for it, and possesseth many men that they have so many wizzards and so many witches in their towne, and so hartens them on to entertaine him.\n\n\tAns.\n\n\tYou doe him a great deale of wrong in every of these particulars.\n\n\nHopkins\u2019 self-defense was an early modern English FAQ.\n\nDigital beginnings\n\nQuestion and answer formatting certainly isn\u2019t new, and stretches back much further than witch-hunt days. 
But its most modern, most notorious, most reviled incarnation is the internet\u2019s frequently asked questions page.\n\nFAQs began showing up on pre-internet mailing lists\u00a0as a way for list members to answer and pre-empt newcomers\u2019 repetitive questions:\n\n\n\tThe presumption was that new users would download archived past messages through ftp. In practice, this rarely happened and the users tended to post questions to the mailing list instead of searching its archives. Repeating the \u201cright\u201d answers becomes tedious\u2026\n\n\nWhen all the users of a system can hear all the other users, FAQs make a lot of sense: the conversation needs to be managed and manageable. FAQs were a stopgap for the technological limitations of the time.\n\nBut the internet moved past mailing lists. Online information can be stored, searched, filtered, and muted; we choose and control our conversations. New users no longer rely on the established community to answer their questions for them.\n\nAnd yet, FAQs are still around. They\u2019re a content anti-pattern, replicated from site to site to solve a problem we no longer have.\n\nWhat we hate when we hate FAQs\n\nAs someone who creates and structures online content \u2013 always with the goal of making that content as useful as possible to people \u2013 FAQs drive me absolutely batty. Almost universally, FAQs represent the opposite of useful. A brief list of their sins:\n\n\nDouble trouble\nDuplicated content is practically a given with FAQs. They\u2019re written as though they\u2019ll be accessed in a vacuum \u2013 but search results, navigation patterns, and curiosity ensure that users will seek answers throughout the site. Is our goal to split their focus? To make them uncertain of where to look? To divert them to an isolated microcosm of the website? Duplicated content means user confusion (to say nothing of the duplicated workload for maintaining content).\nLeaving the job unfinished\nMany FAQs fail before they\u2019re even out of the gate, presenting a list of questions that\u2019s incomplete (too short and careless to be helpful) or irrelevant (avoiding users\u2019 real concerns in favor of soundbites). Alternately, if the right questions are there, the answers may be convoluted, jargon-heavy, or otherwise difficult to understand.\nLong lists of not-my-question\nGetting a single answer often means sifting through a haystack of questions. For each potential question, the user must read, comprehend, assess, move on, rinse, repeat. That\u2019s a lot of legwork for little reward \u2013 and a lot of opportunity for mistakes. Users may miss their question, or they may fail to recognize a differently worded version of their question, or they may not notice when their sought-after answer appears somewhere they didn\u2019t expect.\nThe ventriloquist act\nFAQs shift the point of view. While websites speak on behalf of the organization (\u201cour products,\u201d \u201cour services,\u201d \u201cyou can call us for assistance,\u201d etc.), FAQs speak as the user \u2013 \u201cI can\u2019t find my password\u201d or \u201cHow do I sign up?\u201d Both voices are written from the first-person perspective, but speak for different entities, which is disorienting: it breaks the tone and messaging across the website. It\u2019s also presumptuous: why do you get to speak for the user?\n\n\nThese all underscore FAQs\u2019 fatal flaw: they are content without context, delivered without regard for the larger experience of the website. 
You can hear the absurdity in the name itself: if users are asking the same questions so frequently, then there is an obvious gulf between their needs and the site content. (And if not, then we have a labeling problem.)\n\nInstead of sending users to a jumble of maybe-it\u2019s-here-maybe-it\u2019s-not questions, the answers to FAQs should be found naturally throughout a website. They are not separated, not isolated, not other. They are\u00a0the content.\n\nTo present it otherwise is to create a runaround, and users know it. Jay Martel\u2019s parody, \u201cF.A.Q.s about F.A.Q.s\u201d\u00a0captures the silliness and frustration of such a system:\n\n\n\tQ: Why are you so rude?\n\n\tA: For that answer, you would have to consult an F.A.Q.s about F.A.Q.s about F.A.Q.s. But your time might be better served by simply abandoning your search for a magic answer and taking responsibility for your own profound ignorance.\n\n\nFAQs aren\u2019t magic answers. They don\u2019t resolve a content dilemma or even help users. Yet they keep cropping up, defiant, weedy, impossible to eradicate.\n\nWhere are they all coming from?\n\nBlame it on this: writing is hard. When generating content, most of us do whatever it takes to get some words on the screen. And the format of question and answer makes it easy: a reactionary first stab at content development.\n\nAfter all, the point of website content is to answer users\u2019 questions. So this \u2013 to give everyone credit \u2013 is a really good move. Content creators who think in terms of questions and answers are actually thinking of their users, particularly first-time users, trying to anticipate their needs and write towards them.\n\nIt\u2019s a good start. But it\u2019s scaffolding: writing that helps you get to the writing you\u2019re supposed to be doing. It supports you while you write your way to the heart of your content. And once you get there, you have to look back and take the scaffolding down.\n\nLeaving content in the Q&A format that helped you develop it is missing the point. You\u2019re not there to build scaffolding. You have to see your content in its naked purpose and determine the best method for communicating that purpose \u2013 and it usually won\u2019t be what got you there.\n\nThe goal (to borrow a lesson from content management systems) is to separate the content from its presentation, to let the meaning of the content inform its display.\n\nThis is, of course, a nice theory.\n\nAn occasionally necessary evil\n\nI have a lot of clients who adore FAQs. They\u2019ve developed their content over a long period of time. They\u2019ve listened to the questions their users are asking. And they\u2019ve answered them all on a page that I simply cannot get them to part with.\n\nWhich means I\u2019ve had to consider that there may be occasions where an FAQ page is appropriate.\n\nAs an example: one of my clients is a financial office in a large institution. Because this office manages several third-party systems that serve a range of niche audiences, they had developed FAQs that addressed hyper-specific instances of dysfunction within systems for different users \u2013 \u00e0 la \u201cI\u2019m a financial director and my employee submitted an expense report in such-and-such system and it returned such-and-such error. What do I do?\u201d\n\nYes, this content could be removed from the question format and rewritten. But I\u2019m not sure it would be an improvement. 
It won\u2019t necessarily resolve concerns about length and searchability, and the different audiences may complicate the delivery. And since the work of rewriting it didn\u2019t fit into the client workflow (small team, no writers, pressed for time), I didn\u2019t recommend the change.\n\nI\u2019ve had to make peace with not being to torch all the FAQs on the internet. Some content, like troubleshooting information or complex procedures, may be better in that format. It may be the smartest way for a particular client to handle that particular information.\n\nOf course, this has to be determined on a case-by-case basis, taking into account the amount of content, the subject matter, the skill levels of the content creators, the publishing workflow, and the search habits of the users.\n\nIf you determine that an FAQ page is the only way to go, ask yourself:\n\n\n\tIs there a better label or more specific term for the page (support, troubleshooting, product concerns, etc.)?\n\tIs there way to structure the page, categorize the questions, or otherwise make it easier for users to navigate quickly to the answer they need?\n\tIs a question and answer format absolutely the best way to communicate this information?\n\n\nForm follows function\n\nJust as a question and answer format isn\u2019t necessarily required to deliver the content, neither is it an inappropriate method in and of itself. Content professionals have developed a knee-jerk reaction:\u00a0It\u2019s an FAQ page! Quick, burn it! Buuuuurn it!\n\nBut there\u2019s no inherent evil in questions and answers. Framing content in an interrogatory construct is no more a deal with the devil than subheads and paragraphs, or narrative arcs, or bullet points.\n\nYes, FAQs are riddled with communication snafus. They deserve, more often than not, to be tied to a chair and thrown into a lake. But that wouldn\u2019t fix our content problems. FAQs are a shiny and obvious target for our frustration, but they\u2019re not unique in their flaws. In any format, in any display, in any kind of page, weak content can rear its ugly, poorly written head.\n\nIt\u2019s not the Q&A that\u2019s to blame, it\u2019s bad content. Content without context will always fail users. That\u2019s the real witch in our midst.", "year": "2013", "author": "Lisa Maria Martin", "author_slug": "lisamariamartin", "published": "2013-12-08T00:00:00+00:00", "url": "https://24ways.org/2013/what-to-do-with-faqs/", "topic": "content"} {"rowid": 23, "title": "Animating Vectors with SVG", "contents": "It is almost 2014 and fifteen years ago the W3C started to develop a web-based scalable vector graphics (SVG) format. As web technologies go, this one is pretty old and well entrenched. \n\nSee the Pen yJflC by Drew McLellan (@drewm) on CodePen\n\n\nEmbed not working on your device? Try direct. \n\nUnlike rasterized images, SVG files will stay crisp and sharp at any resolution. With high-DPI phones, tablets and monitors, all those rasterized icons are starting to look a bit old and blocky. There are several options to get simpler, decorative pieces to render smoothly and respond to various device widths, shapes and sizes. Symbol fonts are one option; the other is SVG.\n\nI\u2019m a big fan of SVG. SVG is an XML format, which means it is possible to write by hand or to script. The most common way to create an SVG file is through the use of various drawing applications like Illustrator, Inkscape or Sketch. 
All of them open and save the SVG format.\n\nBut, if SVG is so great, why doesn\u2019t it get more attention?\n\nThe simple answer is that for a long time it wasn\u2019t well supported, so no one touched the technology. SVG\u2019s adoption has always been hampered by browser support, but that\u2019s not the case any more. Every modern browser (at least three versions back) supports SVG. Even IE9. \n\nAlthough the browsers support SVG, it is implemented in many different ways.\n\nSVG in HTML\n\nSome browsers allow you to embed SVG right in the HTML: the <svg> element. Treating SVG as a first-class citizen works \u2014 sometimes. Another way to embed SVG is via the <img> element; using the src attribute, you can refer to an SVG file. Again, this only works sometimes and leaves you in a tight space if you need to have a fallback for older browsers. The most common solution is to use the <object> element, with the data attribute referencing the SVG file. When a browser does not support this, it falls back to the content inside the <object>. This could be a rasterized fallback <img>. This method gets you the best of both worlds: a nice vector image with an alternative rasterized image for browsers that don\u2019t support SVG. The downside is that you need to manage both formats, and some browsers will download both the SVG and the rasterized version, becoming a performance problem.\n\nAlexey Ten came up with a brilliant little trick that uses inline SVG combined with an SVG <image> element. This has an SVG href pointing to the vector SVG representation and a src attribute to the rasterized version. Older browsers will rewrite the <image> element as <img> and use the rasterized src attribute, but modern browsers will show the vector SVG.\n\n<svg width=\"96\" height=\"96\">\n <image xlink:href=\"svg.svg\" src=\"svg.png\" width=\"96\" height=\"96\"/>\n</svg>\n\nIt is a great workaround for most situations. You will have to determine the browsers you want or need to support and consider performance issues to decide which method is best for you.\n\nSo it can be used in HTML. Why?\n\nThere are two compelling reasons why vector graphics in the form of icons and symbols are going to be important on the web. With higher resolution screens, going from 72dpi to 200, 300, even over 400dpi, your rasterized icons are looking a little too blocky. As we zoom and print, we expect the visuals on the site to also stay smooth and crisp.\n\nThe other main reason vector graphics are useful is scaling. As responsive websites become the norm, we need a way to dynamically readjust the heights, widths and styles of various elements. SVG handles this perfectly, since vectors remain smooth when changing size.\n\nSVG files are text-based, so they\u2019re small and can be gzipped nicely. There are also techniques for creating SVG sprites to further squeeze out performance gains. But SVG really shines when you begin to couple it with JavaScript. Since SVG elements are part of the DOM, they can be interacted with just like any other element you are used to.\n\nThe folks at Vox Media had an ingenious little trick with their SVG for a Playstation and Xbox One reviews. I\u2019ve used the same technique for the 24 ways example. Vox Media spent a lot of time creating SVG line art of the two consoles, but once in place the artwork scaled and resized beautifully. \n\nThey still had another trick up their sleeves. 
In their example, they knew each console was line art, so they used SVG\u2019s line dash property to simulate the lines being drawn by animating the growth of the line by small percentage increments until the lines were complete.\n\nThis is a great example of a situation where the alternatives wouldn\u2019t be as straightforward to implement. Using an animated GIF would create a heavy file since it would need to contain all the frames of the animation at a large size to permit scaling; even then, smooth aliasing would be lost. canvas and plenty of JavaScript would be another alternative, but this is a rasterized format. It would need be redrawn at each scale, which is certainly possible, but smoothness would be lost when zooming or printing.\n\nThe HTML, SVG and JavaScript for this example is less than 4KB! Let\u2019s have a quick look at the code:\n\n<script>\nvar current_frame = 0;\nvar total_frames = 60;\nvar path = new Array();\nvar length = new Array();\nfor(var i=0; i<4;i++){\n\tpath[i] = document.getElementById('i'+i);\n\tl = path[i].getTotalLength();\n\tlength[i] = l;\n\tpath[i].style.strokeDasharray = l + ' ' + l; \n\tpath[i].style.strokeDashoffset = l;\n}\nvar handle = 0;\n\nvar draw = function() {\n var progress = current_frame/total_frames;\n if (progress > 1) {\n window.cancelAnimationFrame(handle);\n } else {\n current_frame++;\n for(var j=0; j<path.length;j++){\n\t path[j].style.strokeDashoffset = Math.floor(length[j] * (1 - progress));\n }\n handle = window.requestAnimationFrame(draw);\n }\n};\ndraw();\n</script>\n\nFirst, we need to initialize a few variables to set the current frame, the number of frames, how fast the animation will run, and we get each of the paths based on their IDs. With those paths, we set the dash and dash offset.\n\npath[i].style.strokeDasharray = l + ' ' + l; \npath[i].style.strokeDashoffset = l;\n\nWe start the line as a dash, which effectively makes it blank or invisible.\n\nNext, we move to the draw() function. This is where the magic happens. We want to increment the frame to move us forward in the animation and check it\u2019s not finished. If it continues, we then take a percentage of the distance based on the frame and then set the dash offset to this new percentage. This gives the illusion that the line is being drawn. Then we have an animation callback, which starts the draw process over again.\n\nThat\u2019s it! It will work with any SVG <path> element that you can draw.\n\nLibraries to get you started\n\nIf you aren\u2019t sure where to start with SVG, there are several libraries out there to help. They also abstract all browser compatibility issues to make your life easier.\n\n\n\tRapha\u00ebl\n\tSnap.svg\n\tsvg.js\n\n\nYou can also get most vector applications to export SVG. This means that you can continue your normal workflows, but instead of flattening the image as a PNG or bringing it over to Photoshop to rasterize, you can keep all your hard work as vectors and reap the benefits of SVG.", "year": "2013", "author": "Brian Suda", "author_slug": "briansuda", "published": "2013-12-07T00:00:00+00:00", "url": "https://24ways.org/2013/animating-vectors-with-svg/", "topic": null} {"rowid": 2, "title": "Levelling Up", "contents": "Hello, 24 ways. I\u2019m Ashley and I sell property insurance. I\u2019m interrupting your Christmas countdown with an article about rental property software and a guy, Pete, who selflessly encouraged me to build my first web app. 
It doesn\u2019t sound at all festive, or \u2014 considering I\u2019ve used both \u201cinsurance\u201d and \u201crental property\u201d \u2014 interesting, but do stick with me. There\u2019s eggnog at the end.\n\nI run a property insurance business, Brokers Direct. It\u2019s a small operation, but well established. We\u2019ve been selling landlord insurance on the web for over thirteen years, for twelve of which we have provided our clients with third-party software for managing their rental property portfolios. Free. Of. Charge.\n\nIt sounds like a sweet deal for our customers, but it isn\u2019t. At least, not any more. The third-party software is victim to years of neglect by its vendor. Its questionable interface, garish visuals and, ahem, clip art icons have suffered from a lack of updates. While it was never a contender for software of the year, I\u2019ve steadily grown too embarrassed to associate my business with it.\n\n The third-party rental property software we distributed\n\nI wanted to offer my customers a simple, clean and lightweight alternative. In an industry that\u2019s dominated by dated and bloated software, it seemed only logical that I should build my own rental property tool.\n\nThe long learning-to-code slog\n\nLearning a programming language is daunting, the source of my frustration stemming from a non-programming background. Generally, tutorials assume a degree of familiarity with programming, whether it be tools, conventions or basic skills. I had none and, at the time, there was nothing on the web really geared towards a novice. I reached the point where I genuinely thought I was just not cut out for coding. Surrendering to my feelings of self-doubt and frustration, I sourced a local Rails developer, Pete, to build it for me.\n\nPete brought a pack of index cards to our meeting. Index cards that would represent each feature the rental property software would launch with.\n\n \n\n\u201cOK,\u201d he began. \u201cWe\u2019ll need a user model, tenant model, authentication, tenant and property relationships\u2026\u201d A dozen index cards with a dozen features lined the coffee table in a grid-like format. Logical, comprehensible, achievable. Seeing the app laid out in a digestible manner made it seem surmountable. Maybe I could do this.\n\n\u201cI\u2019ve been trying to learn Rails\u2026\u201d, I piped up.\n\nI don\u2019t know why I said it. I was fully prepared to hire Pete to do the hard work for me. But Pete, unprompted, gathered the index cards and neatly stacked them together, coasting them across the table towards me. \u201cYou should build this\u201d.\n\nPete, a full-time freelance developer at the time, was turning down a paying job in favour of encouraging me to learn to code. Looking back, I didn\u2019t realise how significant this moment was.\n\nThat evening, I took Pete\u2019s index cards home to make a start on my app, slowly evolving each of the cards into a working feature. Building the app solo, I turned to Stack Overflow to solve the inevitable coding hurdles I encountered, as well as calling on a supportive Rails community. Whether they provided direct solutions to my programming woes, or simply planted a seed on how to solve a problem, I kept coding. Many months later, and after several more doubtful moments, Lodger was born.\n\n Property overview of my app, Lodger.\n\nIf I can do it, so can you\n\nI misspent a lot of time building Twitter and blogging applications (apparently, all Rails tutorials centre around Twitter and blogging). 
If I could rewind and impart some advice to myself, this is what I\u2019d say.\n\nThere\u2019s no magic formula\n\n\u201cI haven\u2019t quite grasped Rails routing. I should tackle another tutorial.\u201d \n\nMaking excuses \u2014 or procrastination \u2014 is something we are all guilty of. I was waiting for a programming book that would magically deposit a grasp of the entire Ruby syntax in my head. I kept buying books thinking each one would be the one where it all clicked. I now have a bookshelf full of Ruby material, all of which I\u2019ve barely read, and none of which got me any closer to launching my web app. Put simply, there\u2019s no magic formula.\n\nBreak it down\n\nWhatever it is you want to build, break it down into digestible chunks. Taking Pete\u2019s method as an example, having an index card represent an individual feature helped me tremendously. Tackle one at a time. Even if each feature takes you a month to build, and you have eight features to launch with, after eight months you\u2019ll have your MVP. Remember, if you do nothing each day, it adds up to nothing.\n\nHave a tangible product to build\n\nI have a wonderful habit of writing down personal notes, usually to express my feelings at the time or to log an idea, only to uncover them months or years down the line, long after I forgot I had written them. I made a timely discovery while writing this article, discovering this gem while flicking through a battered Moleskine:\n\n\n\t\u201cI don\u2019t seem to be making good progress with learning Rails, but development still excites me. I should maybe stop doing tutorials and work towards building a specific app.\u201d\n\n\nHaving a real product to work on, like I did with Lodger, means you have something tangible to apply the techniques you are learning. I found this prevented me from flitting aimlessly between tutorials and books, which is an easy area to accidentally remain in.\n\nTeam up\n\nIf possible, team up with a designer and create something together. Designers are great at presenting features in a way you\u2019d never have considered. You will learn a lot from making their designs come to life.\n\nYour homework for the holiday\n\nDespite having a web app under my belt, I am not a programmer. I tinker with code, piecing enough bits of it together to make something functional. And that\u2019s OK! I\u2019m not excusing sloppiness, but if we aimed for perfection every time, we\u2019d never execute any of our ideas.\n\nAs the holidays approach and you\u2019ve exhausted yet another viewing of The Muppet Christmas Carol (or is that just my guilty pleasure at Christmas?), you may have time on your hands. Time to explore an idea you\u2019ve been sitting on, but \u2014 plagued with procrastination and doubt \u2014 have yet to bring to life. This holiday, I am here to say to you what Pete said to me.\n\nYou should build this.\n\nYou don\u2019t need to be the next Mark Zuckerberg or Larry Page. You just have to learn enough to get it done.\n\nPS: I lied about the eggnogg, but try capturing somebody\u2019s attention when you tell them you sell property insurance!", "year": "2013", "author": "Ashley Baxter", "author_slug": "ashleybaxter", "published": "2013-12-06T00:00:00+00:00", "url": "https://24ways.org/2013/levelling-up/", "topic": "business"} {"rowid": 11, "title": "JavaScript: Taking Off the Training Wheels", "contents": "JavaScript is the third pillar of front-end web development. 
Of those pillars, it is both the most powerful and the most complex, so it\u2019s understandable that when 24 ways asked, \u201cWhat one thing do you wish you had more time to learn about?\u201d, a number of you answered \u201cJavaScript!\u201d\n\nThis article aims to help you feel happy writing JavaScript, and maybe even without libraries like jQuery. I can\u2019t comprehensively explain JavaScript itself without writing a book, but I hope this serves as a springboard from which you can jump to other great resources.\n\nWhy learn JavaScript?\n\nSo what\u2019s in it for you? Why take the next step and learn the fundamentals?\n\nConfidence with jQuery\n\nIf nothing else, learning JavaScript will improve your jQuery code; you\u2019ll be comfortable writing jQuery from scratch and feel happy bending others\u2019 code to your own purposes. Writing efficient, fast and bug-free jQuery is also made much easier when you have a good appreciation of JavaScript, because you can look at what jQuery is really doing. Understanding how JavaScript works lets you write better jQuery because you know what it\u2019s doing behind the scenes. When you need to leave the beaten track, you can do so with confidence.\n\nIn fact, you could say that jQuery\u2019s ultimate goal is not to exist: it was invented at a time when web APIs were very inconsistent and hard to work with. That\u2019s slowly changing as new APIs are introduced, and hopefully there will come a time when jQuery isn\u2019t needed.\n\nAn example of one such change is the introduction of the very useful document.querySelectorAll. Like jQuery, it converts a CSS selector into a list of matching elements. Here\u2019s a comparison of some jQuery code and the equivalent without.\n\n$('.counter').each(function (index) {\n $(this).text(index + 1);\n});\n\nvar counters = document.querySelectorAll('.counter');\n[].slice.call(counters).forEach(function (elem, index) {\n elem.textContent = index + 1;\n});\n\nSolving problems no one else has!\n\nWhen you have to go to the internet to solve a problem, you\u2019re forever stuck reusing code other people wrote to solve a slightly different problem to your own. Learning JavaScript will allow you to solve problems in your own way, and begin to do things nobody else ever has.\n\nNode.js\n\nNode.js is a non-browser environment for running JavaScript, and it can do just about anything! But if that sounds daunting, don\u2019t worry: the Node community is thriving, very friendly and willing to help.\n\nI think Node is incredibly exciting. It enables you, with one language, to build complete websites with complex and feature-filled front- and back-ends. Projects that let users log in or need a database are within your grasp, and Node has a great ecosystem of library authors to help you build incredible things. Exciting!\n\nHere\u2019s an example web server written with Node. http is a module that allows you to create servers and, like jQuery\u2019s $.ajax, make requests. It\u2019s a small amount of code to do something complex and, while working with Node is different from writing front-end code, it\u2019s certainly not out of your reach.\n\nvar http = require('http');\nhttp.createServer(function (req, res) {\n res.writeHead(200, {'Content-Type': 'text/plain'});\n res.end('Hello World');\n}).listen(1337);\nconsole.log('Server running at http://localhost:1337/');\n\nGrunt and other website tools\n\nNode has brought in something of a renaissance in tools that run in the command line, like Yeoman and Grunt. 
Both of these rely heavily on Node, and I\u2019ll talk a little bit about Grunt here.\n\nGrunt is a task runner, and many people use it for compiling Sass or compressing their site\u2019s JavaScript and images. It\u2019s pretty cool. You configure Grunt via the gruntfile.js, so JavaScript skills will come in handy, and since Grunt supports plug-ins built with JavaScript, knowing it unlocks the bucketloads of power Grunt has to offer.\n\nWays to improve your skills\n\nSo you know you want to learn JavaScript, but what are some good ways to learn and improve? I think the answer to that is different for different people, but here are some ideas.\n\nRebuild a jQuery app\n\nConverting a jQuery project to non-jQuery code is a great way to explore how you modify elements on the page and make requests to the server for data. My advice is to focus on making it work in one modern browser initially, and then go cross-browser if you\u2019re feeling adventurous. There are many resources for directly comparing jQuery and non-jQuery code, like Jeffrey Way\u2019s jQuery to JavaScript article.\n\nFind a mentor\n\nIf you think you\u2019d work better on a one-to-one basis then finding yourself a mentor could be a brilliant way to learn. The JavaScript community is very friendly and many people will be more than happy to give you their time. I\u2019d look out for someone who\u2019s active and friendly on Twitter, and does the kind of work you\u2019d like to do. Introduce yourself over Twitter or send them an email. I wouldn\u2019t expect a full tutoring course (although that is another option!) but they\u2019ll be very glad to answer a question and any follow-ups every now and then.\n\nGo to a workshop\n\nMany conferences and local meet-ups run workshops, hosted by experts in a particular field. See if there\u2019s one in your area. Workshops are great because you can ask direct questions, and you\u2019re in an environment where others are learning just like you are \u2014 no need to learn alone!\n\nSet yourself challenges\n\nThis is one way I like to learn new things. I have a new thing that I\u2019m not very good at, so I pick something that I think is just out of my reach and I try to build it. It\u2019s learning by doing and, even if you fail, it can be enormously valuable.\n\nWhere to start?\n\nIf you\u2019ve decided learning JavaScript is an important step for you, your next question may well be where to go from here.\n\nI\u2019ve collected some links to resources I know of or use, with some discussion about why you might want to check a particular site out. I hope this serves as a springboard for you to go out and learn as much as you want.\n\nBeginner\n\nIf you\u2019re just getting started with JavaScript, I\u2019d recommend heading to one of these places. They cover the basics and, in some cases, a little more advanced stuff. They\u2019re all reputable sources (although, I\u2019ve included something I wrote \u2014 you can decide about that one!) and will not lead you astray.\n\n\n\tjQuery\u2019s JavaScript 101 is a great first resource for JavaScript that will give you everything you need to work with jQuery like a pro.\n\tCodecademy\u2019s JavaScript Track is a small but useful JavaScript course. If you like learning interactively, this could be one for you.\n\tHTMLDog\u2019s JavaScript Tutorials take you right through from the basics of code to a brief introduction to newer technology like Node and Angular. 
[Disclaimer: I wrote this stuff, so it comes with a hazard warning!]\n\tThe tuts+ jQuery to JavaScript mentioned earlier is great for seeing how jQuery code looks when converted to pure JavaScript.\n\n\nGetting in-depth\n\nFor more comprehensive documentation and help I\u2019d recommend adding these places to your list of go-tos.\n\n\n\tMDN: the Mozilla Developer Network is the first place I go for many JavaScript questions. I mostly find myself there via a search, but it\u2019s a great place to just go and browse.\n\tAxel Rauschmayer\u2019s 2ality is a stunning collection of articles that will take you deep into JavaScript. It\u2019s certainly worth looking at.\n\tAddy Osmani\u2019s JavaScript Design Patterns is a comprehensive collection of patterns for writing high quality JavaScript, particularly as you (I hope) start to write bigger and more complex applications.\n\n\nAnd finally\u2026\n\nI think the key to learning anything is curiosity and perseverance. If you have a question, go out and search for the answer, even if you have no idea where to start. Keep going and going and eventually you\u2019ll get there. I bet you\u2019ll learn a whole lot along the way. Good luck!\n\nMany thanks to the people who gave me their time when I was working on this article: Tom Oakley, Jack Franklin, Ben Howdle and Laura Kalbag.", "year": "2013", "author": "Tom Ashworth", "author_slug": "tomashworth", "published": "2013-12-05T00:00:00+00:00", "url": "https://24ways.org/2013/javascript-taking-off-the-training-wheels/", "topic": "code"} {"rowid": 15, "title": "Git for Grown-ups", "contents": "You are a clever and talented person. You create beautiful designs, or perhaps you have architected a system that even my cat could use. Your peers adore you. Your clients love you. But, until now, you haven\u2019t *&^#^! been able to make Git work. It makes you angry inside that you have to ask your co-worker, again, for that *&^#^! command to upload your work.\n\nIt\u2019s not you. It\u2019s Git. Promise.\n\nYes, this is an article about the popular version control system, Git. But unlike just about every other article written about Git, I\u2019m not going to give you the top five commands that you need to memorize; and I\u2019m not going to tell you all your problems would be solved if only you were using this GUI wrapper or that particular workflow. You see, I\u2019ve come to a grand realization: when we teach Git, we\u2019re doing it wrong.\n\nLet me back up for a second and tell you a little bit about the field of adult education. (Bear with me, it gets good and will leave you feeling both empowered and righteous.) Andragogy, unlike pedagogy, is a learner-driven educational experience. There are six main tenets to adult education: \n\n\n\tAdults prefer to know why they are learning something.\n\tThe foundation of the learning activities should include experience.\n\tAdults prefer to be able to plan and evaluate their own instruction.\n\tAdults are more interested in learning things which directly impact their daily activities.\n\tAdults prefer learning to be oriented not towards content, but towards problems.\n\tAdults relate more to their own motivators than to external ones.\n\n\nNowhere in this list does it include \u201cmemorize the five most popular Git commands\u201d. And yet this is how we teach version control: init, add, commit, branch, push. You\u2019re an expert! Sound familiar? In the hierarchy of learning, memorizing commands is the lowest, or most basic, form of learning. 
At the peak of learning you are able to not just analyze and evaluate a problem space, but create your own understanding in relation to your existing body of knowledge.\n\n\u201cFine,\u201d I can hear you saying to yourself. \u201cBut I\u2019m here to learn about version control.\u201d Right you are! So how can we use this knowledge to master Git? First of all: I give you permission to use Git as a tool. A tool which you control and which you assign tasks to. A tool like a hammer, or a saw. Yes, your mastery of your tools will shape the kinds of interactions you have with your work, and your peers. But it\u2019s yours to control. Git was written by kernel developers for kernel development. The web world has adopted Git, but it is not a tool designed for us and by us. It\u2019s no Sass, y\u2019know? Git wasn\u2019t developed out of our frustration with managing CSS files in an increasingly complex ecosystem of components and atomic design. So, as you work through the next part of this article, give yourself a bit of a break. We\u2019re in this together, and it\u2019s going to be OK.\n\nWe\u2019re going to do a little activity. We\u2019re going to create your perfect Git cheatsheet.\n\nI want you to start by writing down a list of all the people on your code team. This list may include:\n\n\n\tdevelopers\n\tdesigners\n\tproject managers\n\tclients\n\n\nNext, I want you to write down a list of all the ways you interact with your team. Maybe you\u2019re a solo developer and you do all the tasks. Maybe you only do a few things. But I want you to write down a list of all the tasks you\u2019re actually responsible for. For example, my list looks like this:\n\n\n\twriting code\n\treviewing code\n\tpublishing tested code to your server(s)\n\ttroubleshooting broken code\n\n\nThe next list will end up being a series of boxes in a diagram. But to start, I want you to write down a list of your tools and constraints. This list potentially has a lot of noun-like items and verb-like items:\n\n\n\tcode hosting system (Bitbucket? GitHub? Unfuddle? self-hosted?)\n\tserver ecosystem (dev/staging/live)\n\tautomated testing systems or review gates\n\tautomated build systems (that Jenkins dude people keep referring to)\n\n\nBrilliant! Now you\u2019ve got your actors and your actions, it\u2019s time to shuffle them into a diagram. There are many popular workflow patterns. None are inherently right or wrong; rather, some are more or less appropriate for what you are trying to accomplish.\n\nCentralized workflow\n\nEveryone saves to a single place. This workflow may mean no version control, or a very rudimentary version control system which only ever has a single copy of the work available to the team at any point in time.\n\n \n\nBranching workflow\n\nEveryone works from a copy of the same place, merging their changes into the main copy as their work is completed. Think of the branches as a motorcycle sidecar: they\u2019re along for the ride and probably cannot exist in isolation of the main project for long without serious danger coming to the either the driver or sidecar passenger. Branches are a fundamental concept in version control \u2014 they allow you to work on new features, bug fixes, and experimental changes within a single repository, but without forcing the changes onto others working from the same branch.\n\n \n\nForking workflow\n\nEveryone works from their own, independent repository. A fork is an exact duplicate of a repository that a developer can make their own changes to. 
It can be kept up to date with additional changes made in other repositories, but it cannot force its changes onto another\u2019s repository. A fork is a complete repository which can use its own workflow strategies. If developers wish to merge their work with the main project, they must make a request of some kind (submit a patch, or a pull request) which the project collaborators may choose to adopt or reject. This workflow is popular for open source projects as it enforces a review process.\n\n \n\nGitflow workflow\n\nA specific workflow convention which includes five streams of parallel coding efforts: master, development, feature branches, release branches, and hot fixes. This workflow is often simplified down to a few elements by web teams, but may be used wholesale by software product teams. The original article describing this workflow was written by Vincent Driessen back in January 2010.\n\n \n\nBut these workflows aren\u2019t about you yet, are they? So let\u2019s make the connections.\n\nFrom the list of people on your team you identified earlier, draw a little circle. Give each of these circles some eyes and a smile. Now I want you to draw arrows between each of these people in the direction that code (ideally) flows. Does your designer create responsive prototypes which are pushed to the developer? Draw an arrow to represent this.\n\nChances are high that you don\u2019t just have people on your team, but you also have some kind of infrastructure. Hopefully you wrote about it earlier. For each of the servers and code repositories in your infrastructure, draw a square. Now, add to your diagram the relationships between the people and each of the machines in the infrastructure. Who can deploy code to the live server? How does it really get there? I bet it goes through some kind of code hosting system, such as GitHub. Draw in those arrows.\n\nBut wait!\n\nThe code that\u2019s on your development machine isn\u2019t the same as the live code. This is where we introduce the concept of a branch in version control. In Git, a repository contains all of the code (sort of). A branch is a fragment of the code that has been worked on in isolation to the other branches within a repository. Often branches will have elements in common. When we compare two (or more) branches, we are asking about the difference (or diff) between these two slivers. Often the master branch is used on production, and the development branch is used on our dev server. The difference between these two branches is the untested code that is not yet deployed.\n\nOn your diagram, see if you can colour-code according to the branch names at each of the locations within your infrastructure. You might find it useful to make a few different copies of the diagram to isolate each of the tasks you need to perform. For example: our team has a peer review process that each branch must go through before it is merged into the shared development branch.\n\nFinally, we are ready to add the Git commands necessary to make sense of the arrows in our diagram. If we are bringing code to our own workstation we will issue one of the following commands: clone (the first time we bring code to our workstation) or pull. Remembering that a repository contains all branches, we will issue the command checkout to switch from one branch to another within our own workstation. If we want to share a particular branch with one of our team mates, we will push this branch back to the place we retrieved it from (the origin). 
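\n\nTo make those tasks concrete, this is roughly what they look like at the command line. It\u2019s only a sketch: the repository URL and branch name here are placeholders, not part of any real project.\n\ngit clone https://example.com/our-project.git\ngit pull\ngit checkout my-new-feature\ngit push origin my-new-feature\n\n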
Along each of the arrows in your diagram, write the name of the command you are going to use when you perform that particular task.\n\n \n\nFrom here, it\u2019s up to you to be selfish. Before asking Git what command it would like you to use, sketch the diagram of what you want. Git is your tool, you are not Git\u2019s tool. Draw the diagram. Communicate your tasks with your team as explicitly as you can. Insist on being a selfish adult learner \u2014 demand that others explain to you, in ways that are relevant to you, how to do the things you need to do today.", "year": "2013", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2013-12-04T00:00:00+00:00", "url": "https://24ways.org/2013/git-for-grownups/", "topic": "code"} {"rowid": 8, "title": "Coding Towards Accessibility", "contents": "\u201cCan we make it AAA-compliant?\u201d \u2013 does this question strike fear into your heart? Maybe for no other reason than because you will soon have to wade through the impenetrable WCAG documentation once again, to find out exactly what AAA-compliant means?\n\nI\u2019m not here to talk about that.\n\nThe Web Content Accessibility Guidelines are a comprehensive and peer-reviewed resource which we\u2019re lucky to have at our fingertips. But they are also a pig to read, and they may have contributed to the sense of mystery and dread with which some developers associate the word accessibility.\n\nThis Christmas, I want to share with you some thoughts and some practical tips for building accessible interfaces which you can start using today, without having to do a ton of reading or changing your tools and workflow.\n\nBut first, let\u2019s clear up a couple of misconceptions.\n\nDreary, flat experiences\n\nI recently built a front-end framework for the Post Office. This was a great gig for a developer, but when I found out about my client\u2019s stringent accessibility requirements I was concerned that I\u2019d have to scale back what was quite a complex set of visual designs.\n\nSites like Jakob Nielsen\u2019s old workhorse useit.com and even the pioneering GOV.UK may have to shoulder some of the blame for this. They put a premium on usability and accessibility over visual flourish. (Although, in fairness to Mr Nielsen, his new site nngroup.com is really quite a snazzy affair, comparatively.)\n\nOf course, there are other reasons for these sites\u2019 aesthetics \u2014 and it\u2019s not because of the limitations of the form. You can make an accessible site look as glossy or as plain as you want it to look. It\u2019s always our own ingenuity and attention to detail that are going to be the limiting factors.\n\nSynecdoche\n\nWe must always guard against the tendency to assume that catering to screen readers means we have the whole accessibility ballgame covered. \n\nThere\u2019s so much more to accessibility than assistive technology, as you know. And within the field of assistive technology there are plenty of other devices for us to consider.\n\nPlanning to accommodate all these users and devices can be daunting. When I first started working in this field I thought that the breadth of technology was prohibitive. I didn\u2019t even know what a screen reader looked like. (I assumed they were big and heavy, perhaps like an old typewriter, and certainly they would be expensive and difficult to fathom.) This is nonsense, of course. Screen reader emulators are readily available as browser extensions and can be activated in seconds. 
Chromevox and Fangs are both excellent and you should download one or the other right now.\n\nBut the really good news is that you can emulate many other types of assistive technology without downloading a byte. And this is where we move from misconceptions into some (hopefully) useful advice.\n\nThe mouse trap\n\nThe simplest and most effective way to improve your abilities as a developer of accessible interfaces is to unplug your mouse.\n\nKeyboard operation has its own WCAG chapter, because most users of assistive technology are navigating the web using only their keyboards. You can go some way towards putting yourself into their shoes so easily \u2014 just by ditching a peripheral.\n\nLearning this was a lightbulb moment for me. When I build interfaces I am constantly flicking between code and the browser, testing or viewing the changes I have made. Now, instead of checking a new element once, I check it twice: once with my mouse and then again without.\n\nDon\u2019t just :hover\n\nThe reality is that when you first start doing this you can find your site becomes unusable straightaway. It\u2019s easy to lose track of which element is in focus as you hit the tab key repeatedly.\n\nOne of the easiest changes you can make to your coding practice is to add :focus and :active pseudo-classes to every hover state that you write. I\u2019m still amazed at how many sites fail to provide a decent focus state for links (and despite previous 24 ways authors in 2007 and 2009 writing on this same issue!).\n\nYou may find that in some cases it makes sense to have something other than, or in addition to, the hover state on focus, but start with the hover state that your designer has taken the time to provide you with. It\u2019s a tiny change and there is no downside. So instead of this:\n\n.my-cool-link:hover {\n\tbackground-color: MistyRose ;\t\n}\n\n\u2026try writing this:\n\n.my-cool-link:hover,\n.my-cool-link:focus,\n.my-cool-link:active {\n\tbackground-color: MistyRose ;\t\n}\n\nI\u2019ve toyed with the idea of making a Sass mixin to take care of this for me, but I haven\u2019t yet. I worry that people reading my code won\u2019t see that I\u2019m explicitly defining my focus and active states so I take the hit and write my hover rules out longhand.\n\nJavaScript can play, too\n\nThis was another revelation for me. Keyboard-only navigation doesn\u2019t necessitate a JavaScript-free experience, and up-to-date screen readers can execute JavaScript. So we\u2019re able to create complex JavaScript-driven interfaces which all users can interact with.\n\nSome of the hard work has already been done for us. First, there are already conventions around keyboard-driven interfaces. Think about the last time you viewed a photo album on Facebook. You can use the arrow keys to switch between photos, and the escape key closes whichever lightbox-y UI thing Facebook is showing its photos in this week. Arrow keys (up/down as well as left/right) for progression through content; Escape to back out of something; Enter or space bar to indicate a positive intention \u2014 these are established keyboard conventions which we can apply to our interfaces to improve their accessiblity. \n\nOf course, by doing so we are improving our interfaces in general, giving all users the option to switch between keyboard and mouse actions as and when it suits them.\n\nSecond, this guy wants to help you out. Hans Hillen is a developer who has done a great deal of work around accessibility and JavaScript-powered interfaces. 
Along with The Paciello Group he has created a version of the jQuery UI library which has been fully optimised for keyboard navigation and screen reader use. It\u2019s a fantastic reference which I revisit all the time.\n\nI\u2019m not a huge fan of the jQuery UI library. It\u2019s a pain to style and the code is a bit bloated. So I\u2019ve not used this demo as a code resource to copy wholesale. I use it by playing with the various components and seeing how they react to keyboard controls. Each component is also fully marked up with the relevant ARIA roles to improve screen reader announcement where possible (more on this below).\n\nCoding for accessibility promotes good habits\n\nThis is another observation around accessibility and JavaScript. I noticed an improvement in the structure and abstraction of my code when I started adding keyboard controls to my interface elements. \n\nYour code has to become more modular and event-driven, because any number of events could trigger the same interaction. A mouse-click, the Enter key and the space bar could all conceivably trigger the same open function on a collapsed accordion element. (And you want to keep things DRY, don\u2019t you?) \n\nIf you aren\u2019t already in the habit of separating out your interface functionality into discrete functions, you will be soon.\n\nvar doSomethingCool = function(){\n\t// Do something cool here.\n};\n\n// Bind function to a button click - pretty vanilla\n$('.myCoolButton').on('click', function(){\n\tdoSomethingCool();\n\treturn false;\n});\n\n// Bind the same function to a range of keypresses\n$(document).keyup(function(e){\n\tswitch(e.keyCode) {\n\t\tcase 13: // enter\n\t\tcase 32: // spacebar\n\t\t\tdoSomethingCool();\n\t\t\tbreak;\n\t\tcase 27: // escape\n\t\t\tdoSomethingElse();\n\t\t\tbreak;\n\t}\n});\n\nTo be honest, if you\u2019re doing complex UI stuff with JavaScript these days, or if you\u2019ve been building any responsive interfaces which rely on JavaScript, then you are most likely working with an application framework such as Backbone, Angular or Ember, so an abstracted and event-driven application structure will be familiar to you. It should be super easy for you to start helping out your keyboard-only users if you aren\u2019t already \u2014 just add a few more event bindings into your UI layer!\n\nManipulating the tab order\n\nSo, you\u2019ve adjusted your mindset and now you test every change to your codebase using a keyboard as well as a mouse. You\u2019ve applied all your hover states to :focus and :active so you can see where you\u2019re tabbing on the page, and your interactive components react seamlessly to a mixture of mouse and keyboard commands. Feels good, right?\n\nThere\u2019s another level of optimisation to consider: manipulating the tab order. Certain DOM elements are naturally part of the tab order, and others are excluded. Links and input elements are the main elements included in the tab order, and static elements like paragraphs and headings are excluded. What if you want to make a static element \u2018tabbable\u2019? \n\nA good example would be in an expandable accordion component. 
Each section of the accordion should be separated by a heading, and there\u2019s no reason to make that heading into a link simply because it\u2019s interactive.\n\n<div class=\"accordion-widget\">\n\t<h3>Tyrannosaurus</h3>\n\t<p>Tyrannosaurus; meaning \"tyrant lizard\"...</p>\n\n\t<h3>Utahraptor</h3>\n\t<p>Utahraptor is a genus of theropod dinosaurs...</p>\n\n\t<h3>Dromiceiomimus</h3>\n\t<p>Ornithomimus is a genus of ornithomimid dinosaurs...</p>\n</div>\n\nAdding the heading elements to the tab order is trivial. We just set their tabindex attribute to zero. You could do this on the server or the client. I prefer to do it with JavaScript as part of the accordion setup and initialisation process.\n\n$('.accordion-widget h3').attr('tabindex', '0');\n\nYou can apply this trick in reverse and take elements out of the tab order by setting their tabindex attribute to \u22121, or change the tab order completely by using other integers. This should be done with great care, if at all. You have to be sure that the markup you remove from the tab order comes out because it genuinely improves the keyboard interaction experience. This is hard to validate without user testing. The danger is that developers will try to sweep complicated parts of the UI under the carpet by taking them out of the tab order. This would be considered a dark pattern \u2014 at least on my team!\n\nA farewell ARIA\n\nThis is where things can get complex, and I\u2019m no expert on the ARIA specification: I feel like I\u2019ve only dipped my toe into this aspect of coding for accessibility. But, as with WCAG, I\u2019d like to demystify things a little bit to encourage you to look into this area further yourself.\n\nARIA roles are of most benefit to screen reader users, because they modify and augment screen reader announcements. \n\nLet\u2019s take our dinosaur accordion from the previous section. The markup is semantic, so a screen reader that can\u2019t handle JavaScript will announce all the content within the accordion, no problem.\n\nBut modern screen readers can deal with JavaScript, and this means that all the lovely dino information beneath each heading has probably been hidden on document.ready, when the accordion initialised. It might have been hidden using display:none, which prevents a screen reader from announcing content. If that\u2019s as far as you have gone, then you\u2019ve committed an accessibility sin by hiding content from screen readers. Your user will hear a set of headings being announced, with no content in between. It would sound something like this if you were using Chromevox:\n\n> Tyrannosaurus. Heading Three.\n> Utahraptor. Heading Three.\n> Dromiceiomimus. Heading Three.\n\nWe can add some ARIA magic to the markup to improve this, using the tablist role. Start by adding a role of tablist to the widget, and roles of tab and tabpanel to the headings and paragraphs respectively. Set boolean values for aria-selected, aria-hidden and aria-expanded. 
The markup could end up looking something like this.\n\n<div class=\"accordion-widget\" role=\"tablist\">\n\t<!-- T-rex -->\t\n\t<h3 role=\"tab\" \n\t\ttabindex=\"0\" \n\t\tid=\"tab-2\" \n\t\taria-controls=\"panel-2\" \n\t\taria-selected=\"false\">Utahraptor</h3>\n\t<p\trole=\"tabpanel\" \n\t\tid=\"panel-2\" \n\t\taria-labelledby=\"tab-2\" \n\t\taria-expanded=\"false\" \n\t\taria-hidden=\"true\">Utahraptor is a genus of theropod dinosaurs...</p>\n\t<!-- Dromiceiomimus -->\t\n</div>\n\nNow, if a screen reader user encounters this markup they will hear the following:\n\n> Tyrannosaurus. Tab not selected; one of three.\n> Utahraptor. Tab not selected; two of three.\n> Dromiceiomimus. Tab not selected; three of three.\n\nYou could add arrow key events to help the user browse up and down the tab list items until they find one they like. \n\nYour accordion open() function should update the ARIA boolean values as well as adding whatever classes and animations you have built in as standard. Your users know that unselected tabs are meant to be interacted with, so if a user triggers the open function (say, by hitting Enter or the space bar on the second item) they will hear this:\n\n> Utahraptor. Selected; two of three.\n\nThe paragraph element for the expanded item will not be hidden by your CSS, which means it will be announced as normal by the screen reader.\n\nThis kind of thing makes so much more sense when you have a working example to play with. Again, I refer you to the fantastic resource that Hans Hillen has put together: this is his take on an accessible accordion, on which much of my example is based.\n\nConclusion\n\nGetting complex interfaces right for all of your users can be difficult \u2014 there\u2019s no point pretending otherwise. And there\u2019s no substitute for user testing with real users who navigate the web using assistive technology every day. This kind of testing can be time-consuming to recruit for and to conduct. On top of this, we now have accessibility on mobile devices to contend with. That\u2019s a huge area in itself, and it\u2019s one which I have not yet had a chance to research properly.\n\nSo, there\u2019s lots to learn, and there\u2019s lots to do to get it right. But don\u2019t be disheartened. If you have read this far then I\u2019ll leave you with one final piece of advice: don\u2019t wait.\n\nDon\u2019t wait until you\u2019re building a site which mandates AAA-compliance to try this stuff out. Don\u2019t wait for a client with the will or the budget to conduct the full spectrum of user testing to come along. Unplug your mouse, and start playing with your interfaces in a new way. You\u2019ll be surprised at the things that you learn and the issues you uncover. \n\nAnd the next time an true accessibility project comes along, you will be way ahead of the game.", "year": "2013", "author": "Charlie Perrins", "author_slug": "charlieperrins", "published": "2013-12-03T00:00:00+00:00", "url": "https://24ways.org/2013/coding-towards-accessibility/", "topic": "code"} {"rowid": 20, "title": "Make Your Browser Dance", "contents": "It was a crisp winter\u2019s evening when I pulled up alongside the pier. I stepped out of my car and the bitterly cold sea air hit my face. I walked around to the boot, opened it and heaved out a heavy flight case. I slammed the boot shut, locked the car and started walking towards the venue.\n\nThis was it. My first gig. 
I thought about all those weeks of preparation: editing video clips, creating 3-D objects, making coloured patterns, then importing them all into software and configuring effects to change as the music did; targeting frequency, beat, velocity, modifying size, colour, starting point; creating playlists of these\u2026 and working out ways to mix them as the music played.\n\nThis was it. This was me VJing.\n\nThis was all a lifetime (well a decade!) ago.\n\nWhen I started web designing, VJing took a back seat. I was more interested in interactive layouts, semantic accessible HTML, learning all the IE bugs and mastering the quirks that CSS has to offer. More recently, I have been excited by background gradients, 3-D transforms, the @keyframe directive, as well as new APIs such as getUserMedia, indexedDB, the Web Audio API\n\nBut wait, have I just come full circle? Could it be possible, with these wonderful new things in technologies I am already familiar with, that I could VJ again, right here, in a browser?\n\nWell, there\u2019s only one thing to do: let\u2019s try it!\n\nLet\u2019s take to the dance floor \n\nOver the past couple of years working in The Lab I have learned to take a much more iterative approach to projects than before. One of my new favourite methods of working is to create a proof of concept to make sure my theory is feasible, before going on to create a full-blown product. So let\u2019s take the same approach here.\n\nThe main VJing functionality I want to recreate is manipulating visuals in relation to sound. So for my POC I need to create a visual, with parameters that can be changed, then get some sound and see if I can analyse that sound to detect some data, which I can then use to manipulate the visual parameters. Easy, right?\n\nSo, let\u2019s start at the beginning: creating a simple visual. For this I\u2019m going to create a CSS animation. It\u2019s just a funky i element with the opacity being changed to make it flash.\n\n See the Pen Creating a light by Rumyra (@Rumyra) on CodePen\n\nA note about prefixes: I\u2019ve left them out of the code examples in this post to make them easier to read. Please be aware that you may need them. I find a great resource to find out if you do is caniuse.com. You can also check out all the code for the examples in this article\n\nStart the music\n\nWell, that\u2019s pretty easy so far. Next up: loading in some sound. For this we\u2019ll use the Web Audio API. The Web Audio API is based around the concept of nodes. You have a source node: the sound you are loading in; a destination node: usually the device\u2019s speakers; and any number of processing nodes in between. 
All this processing that goes on with the audio is sandboxed within the AudioContext.\n\nSo, let\u2019s start by initialising our audio context.\n\nvar contextClass = window.AudioContext;\nif (contextClass) {\n //web audio api available.\n var audioContext = new contextClass();\n} else {\n //web audio api unavailable\n //warn user to upgrade/change browser\n}\n\nNow let\u2019s load our sound file into the new context we created with an XMLHttpRequest.\n\nfunction loadSound() {\n\t//set audio file url\n\tvar audioFileUrl = '/octave.ogg';\n\t//create new request\n\tvar request = new XMLHttpRequest();\n\trequest.open(\"GET\", audioFileUrl, true);\n\trequest.responseType = \"arraybuffer\";\n\n\trequest.onload = function() {\n\t\t//take from http request and decode into buffer\n\t\taudioContext.decodeAudioData(request.response, function(buffer) {\n\t\t\taudioBuffer = buffer;\n\t\t});\n\t};\n\trequest.send();\n}\n\nPhew! Now we\u2019ve loaded in some sound! There are plenty of things we can do with the Web Audio API: increase volume; add filters; add spatialisation. If you want to dig deeper, the O\u2019Reilly Web Audio API book by Boris Smus is available to read online free.\n\nAll we really want to do for this proof of concept, however, is analyse the sound data. To do this we really need to know what data we have.\n\n Learning the steps\n\nLet\u2019s take a minute to step back and remember our school days and science class. I\u2019m sure if I drew a picture of a sound wave, we would all start nodding our heads.\n\nThe sound you hear is caused by pressure differences in the particles in the air. Sound pushes these particles together, causing vibrations. Amplitude is basically the strength of pressure. A simple example of change of amplitude is when you increase the volume on your stereo and the output wave increases in size.\n\nThis is great when everything is analogue, but the waveform varies continuously and it\u2019s not suitable for digital processing: there\u2019s an infinite set of values. For digital processing, we need discrete numbers.\n\nWe have to sample the waveform at set time intervals, and record data such as amplitude and frequency. Luckily for us, just the fact we have a digital sound file means all this hard work is done for us. What we\u2019re doing in the code above is piping that data into the audio context. All we need to do now is access it.\n\nWe can do this with the Web Audio API\u2019s analysing functionality. Just pop in an analysing node before we connect the source to its destination node.\n\nfunction createAnalyser(source) {\n\t//create analyser node\n\tanalyzer = audioContext.createAnalyser();\n\t//connect the source to the analyser node\n\tsource.connect(analyzer);\n\t//pipe the analysed audio on to the speakers\n\tanalyzer.connect(audioContext.destination);\n}\n\nThe data I\u2019m really interested in here is frequency. Later we could look into amplitude or time, but for now I\u2019m going to stick with frequency.\n\nThe analyser node gives us frequency data via the getByteFrequencyData method.\n\n Don\u2019t forget to count!\n\nTo collect the data from the getByteFrequencyData method, we need to pass in an empty array (a JavaScript typed array is ideal). But how do we know how many items the array will need when we create it?\n\nThis is really up to us and how high the resolution of frequencies we want to analyse is. Remember we talked about sampling the waveform; this happens at a certain rate (sample rate) which you can find out via the audio context\u2019s sampleRate attribute.
This is good to bear in mind when you\u2019re thinking about your resolution of frequencies.\n\nvar sampleRate = audioContext.sampleRate;\n\nLet\u2019s say your file sample rate is 48,000, making the maximum frequency in the file 24,000Hz (thanks to a wonderful theorem from Dr Harry Nyquist, the maximum frequency in the file is always half the sample rate). The analyser array we\u2019re creating will contain frequencies up to this point. This is ideal as the human ear hears the range 0\u201320,000hz.\n\nSo, if we create an array which has 2,400 items, each frequency recorded will be 10Hz apart. However, we are going to create an array which is half the size of the FFT (fast Fourier transform), which in this case is 2,048 which is the default. You can set it via the fftSize property.\n\n//set our FFT size\nanalyzer.fftSize = 2048;\n//create an empty array with 1024 items\nvar frequencyData = new Uint8Array(1024);\n\nSo, with an array of 1,024 items, and a frequency range of 24,000Hz, we know each item is 24,000 \u00f7 1,024 = 23.44Hz apart.\n\nThe thing is, we also want that array to be updated constantly. We could use the setInterval or setTimeout methods for this; however, I prefer the new and shiny requestAnimationFrame.\n\nfunction update() {\n \t//constantly getting feedback from data\n \trequestAnimationFrame(update);\n \tanalyzer.getByteFrequencyData(frequencyData);\n}\n\n Putting it all together\n\nSweet sticks! Now we have an array of frequencies from the sound we loaded, updating as the sound plays. Now we want that data to trigger our animation from earlier.\n\nWe can easily pause and run our CSS animation from JavaScript:\n\nelement.style.webkitAnimationPlayState = \"paused\";\nelement.style.webkitAnimationPlayState = \"running\";\n\nUnfortunately, this may not be ideal as our animation might be a whole heap longer than just a flashing light. We may want to target specific points within that animation to have it stop and start in a visually pleasing way and perhaps not smack bang in the middle.\n\nThere is no really easy way to do this at the moment as Zach Saucier explains in this wonderful article. It takes some jiggery pokery with setInterval to try to ascertain how far through the CSS animation you are in percentage terms.\n\nThis seems a bit much for our proof of concept, so let\u2019s backtrack a little. We know by the animation we\u2019ve created which CSS properties we want to change. This is pretty easy to do directly with JavaScript.\n\nelement.style.opacity = \"1\";\nelement.style.opacity = \"0.2\";\n\nSo let\u2019s start putting it all together. For this example I want to trigger each light as a different frequency plays. For this, I\u2019ll loop through the HTML elements and change the opacity style if the frequency gain goes over a certain threshold.\n\n//get light elements\nvar lights = document.getElementsByTagName('i');\nvar totalLights = lights.length;\n\nfor (var i=0; i<totalLights; i++) {\n //get frequencyData key\n var freqDataKey = i*8;\n\t//if gain is over threshold for that frequency animate light\n if (frequencyData[freqDataKey] > 160){\n //start animation on element\n lights[i].style.opacity = \"1\";\n } else {\n lights[i].style.opacity = \"0.2\";\n }\n}\n\nSee all the code in action here. I suggest viewing in a modern browser :)\n\nAwesome! It is true \u2014 we can VJ in our browser!\n\nLet\u2019s dance!\n\nSo, let\u2019s start to expand this simple example. First, I feel the need to make lots of lights, rather than just a few. 
Also, maybe we should try a sound file more suited to gigs or clubs.\n\nCheck it out!\n\nI don\u2019t know about you, but I\u2019m pretty excited \u2014 that\u2019s just a bit of HTML, CSS and JavaScript!\n\nThe other thing to think about, of course, is the sound that you would get at a venue. We don\u2019t want to load sound from a file, but rather pick up on what is playing in real time. The easiest way to do this, I\u2019ve found, is to capture what my laptop\u2019s mic is picking up and piping that back into the audio context. We can do this by using getUserMedia.\n\nLet\u2019s include this in this demo. If you make some noise while viewing the demo, the lights will start to flash.\n\n And relax :)\n\nThere you have it. Sit back, play some music and enjoy the Winamp like experience in front of you.\n\nSo, where do we go from here? I already have a wealth of ideas. We haven\u2019t started with canvas, SVG or the 3-D features of CSS. There are other things we can detect from the audio as well. And yes, OK, it\u2019s questionable whether the browser is the best environment for this. For one, I\u2019m using a whole bunch of nonsensical HTML elements (maybe each animation could be held within a web component in the future). But hey, it\u2019s fun, and it looks cool and sometimes I think it\u2019s OK to just dance.", "year": "2013", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2013-12-02T00:00:00+00:00", "url": "https://24ways.org/2013/make-your-browser-dance/", "topic": "code"} {"rowid": 16, "title": "URL Rewriting for the Fearful", "contents": "I think it was Marilyn Monroe who said, \u201cIf you can\u2019t handle me at my worst, please just fix these rewrite rules, I\u2019m getting an internal server error.\u201d Even the blonde bombshell hated configuring URL rewrites on her website, and I think most of us know where she was coming from.\n\nThe majority of website projects I work on require some amount of URL rewriting, and I find it mildly enjoyable \u2014 I quite like a good rewrite rule. I suspect you may not share my glee, so in this article we\u2019re going to go back to basics to try to make the whole rigmarole more understandable.\n\nWhen we think about URL rewriting, usually that means adding some rules to an .htaccess file for an Apache web server. As that\u2019s the most common case, that\u2019s what I\u2019ll be sticking to here. If you work with a different server, there\u2019s often documentation specifically for translating from Apache\u2019s mod_rewrite rules. I even found an automatic converter for nginx.\n\nThis isn\u2019t going to be a comprehensive guide to every URL rewriting problem you might ever have. That would take us until Christmas. If you consider yourself a trial-and-error dabbler in the HTTP 500-infested waters of URL rewriting, then hopefully this will provide a little bit more of a basis to help you figure out what you\u2019re doing. If you\u2019ve ever found yourself staring at the white screen of death after screwing up your .htaccess file, don\u2019t worry. As Michael Jackson once insipidly whined, you are not alone.\n\nThe basics\n\nRewrite rules form part of the Apache web server\u2019s configuration for a website, and can be placed in a number of different locations as part of your virtual host configuration. By far the simplest and most portable option is to use an .htaccess file in your website root. 
Provided your server has mod_rewrite available, all you need to do to kick things off in your .htaccess file is:\n\nRewriteEngine on\n\nThe general formula for a rewrite rule is:\n\nRewriteRule URL/to/match URL/to/use/if/it/matches [options]\n\nWhen we talk about URL rewriting, we\u2019re normally talking about one of two things: redirecting the browser to a different URL; or rewriting the URL internally to use a particular file. We\u2019ll look at those in turn.\n\nRedirects\n\nRedirects match an incoming URL, and then redirect the user\u2019s browser to a different address. These can be useful for maintaining legacy URLs if content changes location as part of a site redesign. Redirecting the old URL to the new location makes sure that any incoming links, such as those from search engines, continue to work. \n\nIn 1998, Sir Tim Berners-Lee wrote that Cool URIs don\u2019t change, encouraging us all to go the extra mile to make sure links keep working forever. I think that sometimes it\u2019s fine to move things around \u2014 especially to correct bad URL design choices of the past \u2014 provided that you can do so while keeping those old URLs working. That\u2019s where redirects can help.\n\nA redirect might look like this\n\nRewriteRule ^article/used/to/be/here.php$ /article/now/lives/here/ [R=301,L]\n\nRewriting\n\nBy default, web servers closely map page URLs to the files in your site. On receiving a request for http://example.com/about/history.html the server goes to the configured folder for the example.com website, and then goes into the about folder and returns the history.html file.\n\nA rewrite rule changes that process by breaking the direct relationship between the URL and the file system. \u201cWhen there\u2019s a request for /about/history.html\u201d a rewrite rule might say, \u201cuse the file /about_section.php instead.\u201d\n\nThis opens up lots of possibilities for creative ways to map URLs to the files that know how to serve up the page. Most MVC frameworks will have a single rule to rewrite all page URLs to one single file. That file will be a script which kicks off the framework to figure out what to do to serve the page.\n\nRewriteRule ^for/this/url/$ /use/this/file.php [L] \n\nMatching patterns\n\nBy now you\u2019ll have noted the weird ^ and $ characters wrapped around the URL we\u2019re trying to match. That\u2019s because what we\u2019re actually using here is a pattern. Technically, it is what\u2019s called a Perl Compatible Regular Expression (PCRE) or simply a regex or regexp. We\u2019ll call it a pattern because we\u2019re not animals.\n\nWhat are these patterns? If I asked you to enter your credit card expiry date as MM/YY then chances are you\u2019d wonder what I wanted your credit card details for, but you\u2019d know that I wanted a two-digit month, a slash, and a two-digit year. That\u2019s not a regular expression, but it\u2019s the same idea: using some placeholder characters to define the pattern of the input you\u2019re trying to match.\n\nWe\u2019ve already met two regexp characters.\n\n\n\t^\n\tMatches the beginning of a string\n\t$\n\tMatches the end of a string\n\n\nWhen a pattern starts with ^ and ends with $ it\u2019s to make sure we match the complete URL start to finish, not just part of it. There are lots of other ways to match, too:\n\n\n\t[0-9]\n\tMatches a number, 0\u20139. 
[2-4] would match numbers 2 to 4 inclusive.\n\t[a-z]\n\tMatches lowercase letters a\u2013z\n\t[A-Z]\n\tMatches uppercase letters A\u2013Z\n\t[a-z0-9]\n\tCombining some of these, this matches letters a\u2013z and numbers 0\u20139\n\n\nThese are what we call character groups. The square brackets basically tell the server to match from the selection of characters within them. You can put any specific characters you\u2019re looking for within the brackets, as well as the ranges shown above. \n\nHowever, all these just match one single character. [0-9] would match 8 but not 84 \u2014 to match 84 we\u2019d need to use [0-9] twice.\n\n[0-9][0-9]\n\nSo, if we wanted to match 1984 we could to do this:\n\n[0-9][0-9][0-9][0-9] \n\n\u2026but that\u2019s getting silly. Instead, we can do this:\n\n[0-9]{4}\n\nThat means any character between 0 and 9, four times. If we wanted to match a number, but didn\u2019t know how long it might be (for example, a database ID in the URL) we could use the + symbol, which means one or more.\n\n[0-9]+\n\nThis now matches 1, 123 and 1234567.\n\nPutting it into practice\n\nLet\u2019s say we need to write a rule to match article URLs for this website, and to rewrite them to use /article.php under the hood. The articles all have URLs like this:\n\n2013/article-title/\n\nThey start with a year (from 2005 up to 2013, currently), a slash, and then have a URL-safe version of the article title (a slug), ending in a slash. We\u2019d match it like this:\n\n^[0-9]{4}/[a-z0-9-]+/$\n\nIf that looks frightening, don\u2019t worry. Breaking it down, from the start of the URL (^) we\u2019re looking for four numbers ([0-9]{4}). Then a slash \u2014 that\u2019s just literal \u2014 and then anything lowercase a\u2013z or 0\u20139 or a dash ([a-z0-9-]) one or more times (+), ending in a slash (/$).\n\nPutting that into a rewrite rule, we end up with this:\n\nRewriteRule ^[0-9]{4}/[a-z0-9-]+/$ /article.php\n\nWe\u2019re getting close now. We can match the article URLs and rewrite them to use article.php. Now we just need to make sure that article.php knows which article it\u2019s supposed to display.\n\nCapturing groups, and replacements\n\nWhen rewriting URLs you\u2019ll often want to take important parts of the URL you\u2019re matching and pass them along to the script that handles the request. That\u2019s usually done by adding those parts of the URL on as query string arguments. For our example, we want to make sure that article.php knows the year and the article title we\u2019re looking for. That means we need to call it like this:\n\n/article.php?year=2013&slug=article-title\n\nTo do this, we need to mark which parts of the pattern we want to reuse in the destination. We do this with round brackets or parentheses. By placing parentheses around parts of the pattern we want to reuse, we create what\u2019s called a capturing group. To capture an important part of the source URL to use in the destination, surround it in parentheses.\n\nOur pattern now looks like this, with parentheses around the parts that match the year and slug, but ignoring the slashes:\n\n^([0-9]{4})/([a-z0-9-]+)/$ \n\nTo use the capturing groups in the destination URL, we use the dollar sign and the number of the group we want to use. So, the first capturing group is $1, the second is $2 and so on. (The $ is unrelated to the end-of-pattern $ we used before.)\n\nRewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 \n\nThe value of the year capturing group gets used as $1 and the article title slug is $2. 
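Since Apache's patterns are Perl Compatible Regular Expressions rather than JavaScript ones, the two aren't identical, but for a pattern this simple they behave the same way. That makes the browser console a handy place to check that the groups capture what you expect before you upload your .htaccess file. (The URL below is just this article's own address, used here as test data.)

//a quick sanity check in the browser console
var pattern = /^([0-9]{4})\/([a-z0-9-]+)\/$/;
var match = '2013/url-rewriting-for-the-fearful/'.match(pattern);

match[1]; // "2013", which is what the rule will use as $1
match[2]; // "url-rewriting-for-the-fearful", which becomes $2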
Had there been a third group, that would be $3 and so on. In regexp parlance, these are called back-references as they refer back to the pattern.\n\nOptions\n\nSeveral brain-taxing minutes ago, I mentioned some options as the final part of a rewrite rule. There are lots of options (or flags) you can set to change how the rule is processed. The most useful (to my mind) are:\n\n\n\tR=301\n\tPerform an HTTP 301 redirect to send the user\u2019s browser to the new URL. A status of 301 means a resource has moved permanently and so it\u2019s a good way of both redirecting the user to the new URL, and letting search engines know to update their indexes.\n\tL\n\tLast. If this rule matches, don\u2019t bother processing the following rules.\n\n\nOptions are set in square brackets at the end of the rule. You can set multiple options by separating them with commas:\n\nRewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 [L]\n\nor\n\nRewriteRule ^about/([a-z0-9-]+).jsp/$ /about/$1/ [R=301,L] \n\nCommon pitfalls\n\nOnce you\u2019ve built up a few rewrite rules, things can start to go wrong. You may have been there: a rule which looks perfectly good is somehow not matching. One common reason for this is hidden behind that [L] flag. \n\nL for Last is a useful option to tell the rewrite engine to stop once the rule has been matched. This is what it does \u2014 the remaining rules in the .htaccess file are then ignored. However, once a URL has been rewritten, the entire set of rules are then run again on the new URL. If the new URL matches any of the rules, that too will be rewritten and on it goes. \n\nOne way to avoid this problem is to keep your \u2018real\u2019 pages under a folder path that will never match one of your rules, or that you can exclude from the rewrite rules.\n\nUseful snippets\n\nI find myself reusing the same few rules over and over again, just with minor changes. Here are some useful examples to refer back to.\n\nExcluding a directory\n\nAs mentioned above, if you\u2019re rewriting lots of fancy URLs to a collection of real files it can be helpful to put those files in a folder and exclude it from rewrite rules. This helps solve the issue of rewrite rules reapplying to your newly rewritten URL. To exclude a directory, put a rule like this at the top of your file, before your other rules. Our files are in a folder called _source, the dash in the rule means do nothing, and the L flag means the following rules won\u2019t be applied.\n\nRewriteRule ^_source - [L]\n\nThis is also useful for excluding things like CMS folders from your website\u2019s rewrite rules\n\nRewriteRule ^perch - [L] \n\nAdding or removing www from the domain\n\nSome folk like to use a www and others don\u2019t. Usually, it\u2019s best to pick one and go with it, and redirect the one you don\u2019t want. On this site, we don\u2019t use www.24ways.org so we redirect those requests to 24ways.org.\n\nThis uses a RewriteCond which is like an if for a rewrite rule: \u201cIf this condition matches, then apply the following rule.\u201d In this case, it\u2019s if the HTTP HOST (or domain name, basically) matches this pattern, then redirect everything:\n\nRewriteCond %{HTTP_HOST} ^www.24ways.org$ [NC]\nRewriteRule ^(.*)$ http://24ways.org/$1 [R=301,L]\n\nThe [NC] flag means \u2018no case\u2019 \u2014 the match is case-insensitive. 
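A quick console experiment (using a JavaScript regexp as a rough stand-in for PCRE, so treat it as an approximation) shows why the dots in that host pattern deserve a closer look:

//with the dots left as they are, each one matches any single character
/^www.24ways.org$/i.test('www.24ways.org'); // true
/^www.24ways.org$/i.test('wwwx24waysxorg'); // also true, which probably isn't what we meant
//escaping the dots makes them literal full stops
/^www\.24ways\.org$/i.test('wwwx24waysxorg'); // false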
The dots in the domain are escaped with a backslash, as a dot is a regular expression character which means match anything, so we escape it because we literally mean a dot in this instance.\n\nRemoving file extensions\n\nSometimes all you need to do to tidy up a URL is strip off the technology-specific file extension, so that /about/history.php becomes /about/history. This is easily achieved with the help of some more rewrite conditions.\n\nRewriteCond %{REQUEST_FILENAME} !-f\nRewriteCond %{REQUEST_FILENAME} !-d\nRewriteCond %{REQUEST_FILENAME}.php -f\nRewriteRule ^(.+)$ $1.php [L,QSA]\n\nThis says if the file being asked for isn\u2019t a file (!-f) and if it isn\u2019t a directory (!-d) and if the file name plus .php is an actual file (-f) then rewrite by adding .php on the end. The QSA flag means \u2018query string append\u2019: append the existing query string onto the rewritten URL.\n\nIt\u2019s these sorts of more generic catch-all rules that you need to watch out for when your .htaccess gets rerun after a successful match. Without care they can easily rematch the newly rewritten URL.\n\nLogging for when it all goes wrong\n\nAlthough not possible within your .htaccess file, if you have access to your Apache configuration files you can enable rewrite logging. This can be useful to track down where a rule is going wrong, if it\u2019s matching incorrectly or failing to match. It also gives you an overview of the amount of work being done by the rewrite engine, enabling you to rearrange your rules and maximise performance.\n\nRewriteEngine On\nRewriteLog \"/full/system/path/to/rewrite.log\"\nRewriteLogLevel 5\n\nTo be doubly clear: this will not work from an .htaccess file \u2014 it needs to be added to the main Apache configuration files. (I sometimes work using MAMP PRO locally on my Mac, and this can be pasted into the snappily named Customized virtual host general settings box in the Advanced tab for your site.)\n\nThe white screen of death\n\nOne of the most frustrating things when working with rewrite rules is that when you make a mistake it can result in the server returning an HTTP 500 Internal Server Error. This in itself isn\u2019t an error message, of course. It\u2019s more of a notification that an error has occurred. The real error message can usually be found in your Apache error log.\n\nIf you have access to your server logs, check the Apache error log and you\u2019ll usually find a much more descriptive error message, pointing you towards your mistake. (Again, if using MAMP PRO, go to Server, Apache and the View Log button.)\n\nIn conclusion\n\nRewriting URLs can be a bear, but the advantages are clear. 
Keeping a tidy URL structure, disconnected from the technology or file structure of your site can result in URLs that are easier to use and easier to maintain into the future.\n\nIf you\u2019re redesigning a site, remember that cool URIs don\u2019t change, so budget some time to make sure that any content you move has a rewrite rule associated with it to keep any links working.\n\nFurther reading\n\nTo find out more about URL rewriting and perhaps even learn more about regular expressions, I can recommend the following resources.\n\n\n\tFrom the horse\u2019s mouth, the Apache mod_rewrite documentation\n\tParticularly useful with that documentation is the RewriteRule Flags listing\n\tYou may wish to don sunglasses to follow the otherwise comprehensive Regular-Expressions.info tutorial\n\tFriend of 24 ways, Neil Crosby has a mod_rewrite Beginner\u2019s Guide which I\u2019ve found handy over the years.\n\n\nAs noted at the start, this isn\u2019t a fully comprehensive guide, but I hope it\u2019s useful in finding your feet with a powerful but sometimes annoying technology. Do you have useful snippets you often use on projects? Feel free to share them in the comments.", "year": "2013", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2013-12-01T00:00:00+00:00", "url": "https://24ways.org/2013/url-rewriting-for-the-fearful/", "topic": "code"} {"rowid": 81, "title": "Science!", "contents": "Sometimes we want to capture people\u2019s attention at a glance to communicate something fast. At other times we want to have the interface fade away into the background, letting people paint pictures in their minds with our words (if you\u2019ll forgive a little flowery festive flourish).\n\nI tend to distinguish between these two broad objectives as designing for impact on the one hand, and designing for immersion on the other. What defines them is interruption. Impact needs an attention-grabbing interruption. Immersion requires us to remove interruption from the interface. Careful design deliberately interrupts but doesn\u2019t accidentally disrupt. If that seems to make sense to you, then you\u2019ll find the following snippets of science as useful as I did.\n\nSaccades and fixations\n\nAs you\u2019re reading this your eyes are skipping along the lines in tiny jumps. During each jump everything is blurred. Each jump ends in a small pause so your brain can take a snapshot of the letters. It arranges them into words, and then parses out the meaning \u2014 fast \u2014 in around a quarter of a second.\n\nThe jumps are called saccades. The pauses are called fixations. Sometimes we take regressive saccades, skipping back to reread. There\u2019s a simple example in the excellent little book, Detail in Typography, by Jost Hochuli.\n\n\n\nIf you want to explore the science of reading in much more depth, I recommend the excellent paper, \u201cThe Science of Word Recognition\u201d, by Dr Kevin Larson of Microsoft.\n\nTo design for legibility and readability is to design for saccades and fixations. It\u2019s the craft of making it easy for people\u2019s brains to extract meaning, using techniques like good contrast, font size, spacing and structure, and only interrupting the reading experience deliberately.\n\nScan paths\n\nAt some point when visiting 24 ways you probably scanned the screen to get orientated. The journey your eyes took is known as a scan path. Scan paths are made up of saccades and fixations. 
Right now you\u2019re following a scan path as you read, along one line, and down to the next. This is a map of the scan paths found by Olivier Le Meur from observing people looking at Rembrandt\u2019s Le\u00e7on d\u2019anatomie:\n\n\n\nFor websites, the scan path is a little different. This is an aggregate scan path of Google from LC Technologies:\n\n\n\nThe average shape of a website scan path becomes clearer in this average scan path taken by forty-six people during research by the Poynter Institute, the Estlow Center for Journalism & New Media, and Eyetools:\n\n\n\nJust like when we read text arranged left to right in a vertical column, scan paths follow a roughly Z-shaped pattern from the top left to bottom right. Sometimes we skip back to reread a word or sentence, or glance again at a specific element, but the Z-shaped scan path persists.\n\nDesigning for scan paths is to organise content to help people move through an interface to get orientated, and to read.\n\nThe elements that are important enough to need impact must interrupt the scan path and clearly call attention to themselves. However, they don\u2019t always need to clip people round the ear from multiple directions at once to get attention. It helps to list elements by importance. That gives us an interruption hierarchy to work with. Elements can then interrupt the design with degrees of contrast to the rest of the content using either positioning, treatment, or both. Ta-da! Impact achieved, but gently. No clips round the ear required.\n\nSwinging mood\n\nHuman beings are resilient. Among the immersion and occasional interruptions, we even like a little disruption, especially if it\u2019s absurd and funny. The Ling\u2019s Cars website proves it. In fact, we\u2019re so resilient that we can work around all kinds of mayhem to get a seemingly simple task done.\n\nIn one study, \u201cThe Aesthetics of Reading\u201d (PDF, 480Kb), Dr Kevin Larson of Microsoft and Dr Rosalind Picard of MIT explored the effect of good typography on mood. Two versions of the New Yorker ePeriodical were created. One was typeset well and the other poorly.\n\n\n\nThey engaged twenty volunteers \u2014 half male, half female \u2014 and showed the good version to half of the participants. The other half saw the poor version.\n\nThe good doctors found that, \u201cthere are important differences between good and poor typography that appear to have little effect on common performance measures such as reading speed and comprehension.\u201d In short, good typography didn\u2019t help people read faster or comprehend better.\n\nOh. On the face of it that seems to invalidate what we designers do. Hold your horses, though! They also found that \u201cthe participants who received the good typography performed better on relative subjective duration and on certain cognitive tasks\u201d, and that \u201cgood typography induces a good mood.\u201d\n\nThis means that even though there were no actual differences in reading speed and comprehension, the people who read the version with good typography thought that it took less time to read, and were induced into a good mood by doing so. Not only that, but by being in a good mood, people were more capable of completing creative tasks faster.\n\nThat was a revelation to me. It means that the study showed there is a positive, measurable, emotional and perceptual benefit to good typography and design. 
To paraphrase: time and tasks fly when you\u2019re having fun!\n\n\n\nSource: Nationaal Archief of the Netherlands: Cheering man after the first goal, Netherlands vs. Belgium, Amsterdam, 1931.\n\nSo, among all my talk of saccades, fixations, scan paths and typesetting, there is science, and the science helps us qualify our design decisions when we need to, and do our jobs better. The science helps us understand how people will interact with our work, and what the actual benefits are for them, and the products or organisations we serve. Good design equals a subjectively quicker experience, a good mood, objectively faster completion of tasks, and happiness for everyone. Thank you, science!", "year": "2012", "author": "Jon Tan", "author_slug": "jontan", "published": "2012-12-24T00:00:00+00:00", "url": "https://24ways.org/2012/science/", "topic": "design"} {"rowid": 90, "title": "Monkey Business", "contents": "\u201cToo expensive.\u201d \u201cOver-priced.\u201d \u201cA bit rich.\u201d\n\nThey all mean the same thing.\n\nWhen you say that something\u2019s too expensive, you\u2019re doing much more than commenting on a price. You\u2019re questioning the explicit or implicit value of a product or a service. You\u2019re asking, \u201cWill I get out of it what you want me to pay for it?\u201d You\u2019re questioning the competency, judgement and possibly even integrity of the individual or company that gave you that price, even though you don\u2019t realise it. You might not be saying it explicitly, but what you\u2019re implying is, \u201cHave you made a mistake?\u201d, \u201cAm I getting the best deal?\u201d, \u201cAre you being honest with me?\u201d, \u201cCould I get this cheaper?\u201d\n\nFinally, you\u2019re being dishonest, because deep down you know all too well that there\u2019s no such thing as too expensive. \n\nWhy? \n\nIt doesn\u2019t matter what you\u2019re questioning the price of. It could be a product, a service or the cost of an hour, day or week of someone\u2019s time. Whatever you\u2019re buying, too expensive is always an excuse. Saying it shifts acceptability of a price back to the person who gave it. What you should say, but are too afraid to admit, is:\n\n\n\t\u201cIt\u2019s more money than I wanted to pay.\u201d\n\t\u201cIt\u2019s more than I estimated it would cost.\u201d\n\t\u201cIt\u2019s more than I can afford.\u201d\n\n\nEveryone who\u2019s given a price for a product or service will have been told at some point that it\u2019s too expensive. It\u2019s never comfortable to hear that. Thoughts come thick and fast: \u201cWhat do I do?\u201d \u201cHow do I react?\u201d \u201cDo I really want the business?\u201d \u201cAm I prepared to negotiate?\u201d \u201cHow much am I willing to compromise?\u201d\n\nIt\u2019s easy to be defensive when someone questions a price, but before you react, stay calm and remember that if someone says what you\u2019re offering is too expensive, they\u2019re saying more about themselves and their situation than they are about your price. Learn to read that situation and how to follow up with the right questions.\n\nImagine you\u2019ve quoted someone for a week of your time. \u201cThat\u2019s too expensive,\u201d they respond. How should you handle that? Think about what they might otherwise be saying.\n\n\n\n\u201cIt\u2019s more money than I want to pay\u201d may mean that they don\u2019t understand the value of your service. 
How could you respond?\n\nStart by asking what similar projects they\u2019ve worked on and the type of people they worked with. Find out what they paid and what they got for their money, because it\u2019s possible what you offer is different from what they had before. Ask if they saw a return on that previous investment. Maybe their problem isn\u2019t with your headline price, but the value they think they\u2019ll receive. Put the emphasis on value and shift the conversation to what they\u2019ll gain, rather than what they\u2019ll spend.\n\nIt\u2019s also possible they can\u2019t distinguish your service from those of your competitors, so now would be a great time to explain the differences. Do you work faster? Explain how that could help them launch faster, get customers faster, make money faster. Do you include more? Emphasise that, and how unique the experience of working with you will be.\n\n\n\n\u201cIt\u2019s more than I estimated it would cost\u201d could mean that your customer hasn\u2019t done their research properly. You\u2019d never suggest that to them, of course, but you should ask how they\u2019ve arrived at their estimate. Did they base it on work they\u2019ve purchased previously? How long ago was that? Does it come from comparable work or from a different sector?\n\nHelp your customer by explaining how you arrived at your estimate. Break down each element and while you\u2019re doing that, emphasise the parts of your process that you know will appeal to them. If you know that they\u2019ve had difficulty with something in the past, explain how your approach will benefit them. People almost always value a positive experience more than the money they\u2019ll save.\n\n\n\n\u201cIt\u2019s more than I can afford\u201d could mean they can\u2019t afford what you offer at all, but it could also mean they can\u2019t afford it right now or all at once. So ask if they could afford what you\u2019re asking if they spread payment over a longer period? Ask, \u201cWould that mean you\u2019ll give me the business?\u201d\n\nIt\u2019s possible they\u2019re asking for too much for what they can afford to pay. Will they compromise? Can you reach an agreement on something less? Ask, \u201cIf we can agree what\u2019s in and what\u2019s out, will you give me the business?\u201d\n\nWhat can they afford? When you know, you\u2019re in a good position to decide if the deal makes good business sense, for both of you. Ask, \u201cIf I can match that price, will you give me the business?\u201d\n\nThere\u2019s no such thing as \u201ca bit rich\u201d, only ways for you to get to know your customer better. There\u2019s no such thing as \u201cover-priced\u201d,\u00a0only opportunities for you to explain yourself better. You should relish those opportunities. There\u2019s really also no such thing as \u201ctoo expensive\u201d, just ways to set the tone for your relationship and help you develop that relationship to a point where money will be less of a deciding factor.\n\nUnfinished Business\n\nJoin me and my co-host Anna Debenham next year for Unfinished Business, a new discussion show about the business end of working in web, design and creative industries.", "year": "2012", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2012-12-23T00:00:00+00:00", "url": "https://24ways.org/2012/monkey-business/", "topic": "business"} {"rowid": 96, "title": "Unwrapping the Wii U Browser", "contents": "The Wii U was released on 18 November 2012 in the US, and 30 November in the UK. 
It\u2019s the first eighth generation home console, the first mainstream second-screen device, and it has some really impressive browser specs.\n\nConsoles are not just for games now: they\u2019re marketed as complete entertainment solutions. Internet connectivity and browser functionality have gone from a nice-to-have feature in game consoles to a selling point. In Nintendo\u2019s case, they see it as a challenge to design an experience that\u2019s better than browsing on a desktop.\n\n\n\tLet\u2019s make a browser that users can use on a daily basis, something that can really handle everything we\u2019ve come to expect from a browser and do it more naturally.\nSasaki \u2013 Iwata Asks on Nintendo.com\n\n\nWith 11% of people using console browsers to visit websites, it\u2019s important to consider these devices right from the start of projects. Browsing the web on a TV or handheld console is a very different experience to browsing on a desktop or a mobile phone, and has many usability implications.\n\nConsole browser testing\n\nWhen I\u2019m testing a console browser, one of the first things I do is run Niels Leenheer\u2019s HTML5 test and Lea Verou\u2019s CSS3 test. I use these benchmarks as a rough comparison of the standards each browser supports.\n\nIn October, IE9 came out for the Xbox 360, scoring 120/500 in the HTML5 test and 32% in the CSS3 test. The PS Vita also had an update to its browser in recent weeks, jumping from 58/500 to 243/500 in the HTML5 test, and 32% to 55% in the CSS3 test. Manufacturers have been stepping up their game, trying to make their browsing experiences better.\n\nTo give you an idea of how the Wii U currently compares to other devices, here are the test results of the other TV consoles I\u2019ve tested. I\u2019ve written more in-depth notes on TV and portable console browsers separately.\n\n\nYear of releaseHTML5 scoreCSS3 scoreNotes\nWii U2012258/50048%Runs a Netfront browser (WebKit).\nWii200689/500Wouldn\u2019t runRuns an Opera browser.\nPS3200668/50038%Runs a Netfront browser (WebKit).\nXbox 3602005120/50032%A browser for the Xbox (IE9) was only recently released in October 2012. The Kinect provides voice and gesture support. There\u2019s also SmartGlass, a second-screen app for platforms including Android and iOS.\n\n\nThe Wii U browser is Nintendo\u2019s fifth attempt at a console browser. Based on these tests, it\u2019s already looking promising.\n\nWhy console browsers used to suck\n\nIt takes a lot of system memory to run a good browser, and the problem of older consoles is that they don\u2019t have much memory available. The original Nintendo DS needs a memory expansion pack just to run the browser, because the 4MB it has on board isn\u2019t enough. I noticed that even on newer devices, some sites fail to load because the system runs out of memory.\n\nThe Wii came out six years ago with an Opera browser. Still being used today and with such low resources available, the latest browser features can\u2019t reasonably be supported. There\u2019s also pressure to add features such as tabs, and enable gamers to use the browser while a game is paused. Nintendo\u2019s browser team have the advantage of higher specs to play with on their new console (1GB of memory dedicated to games, 1GB for the system), which makes it easier to support the latest standards. 
But it\u2019s still a challenge to fit everything in.\n\n\n\t\u2026even though we have more memory, the amount of memory we can use for the browser is limited compared to a PC, so we\u2019ve worked in ways that efficiently allocates the available memory per tab. To work on this, the experience working on the browser for the Nintendo 3DS system under a limited memory constraint helped us greatly.\nSasaki \u2013 Iwata Asks on Nintendo.com\n\n\nIn the box\n\nThe Wii U consists of a console unit which plugs into a TV (the first to support HD), and a wireless controller known as a gamepad. The gamepad is a lot bigger than typical TV console controllers, and it has a touchscreen on the front. The touchscreen is resistive, responding to pressure rather than electrical current. It\u2019s intended to be used with a stylus (provided) but fingers can be used.\n\nIt might look a bit like one, but the gamepad isn\u2019t a portable console designed to be taken out like the PS Vita. The gamepad can be used as a standalone screen with the TV switched off, as long as it\u2019s within range of the console unit \u2013 it basically piggybacks off it.\n\n\n\nIt\u2019s surprisingly lightweight for its size. It has a wealth of detectors including 9-axis control. Sensors wake the device from sleep when it\u2019s picked up. There\u2019s also a camera on the front, and a headphone port and speakers, with audio coming through both the TV and the gamepad giving a surround sound feel.\n\nUp to six tabs can be opened at once, and the browser can be used while games are paused. There\u2019s a really nice little feature here \u2013 the current game\u2019s name is saved as a search option, so it\u2019s really quick to look up contextual content such as walk-throughs.\n\nControls\n\nOnly one gamepad can be used to control the browser, but if there are Wiimotes connected, they can be used as pointers. This doesn\u2019t let the user do anything except point (they each get a little hand icon with a number on it displayed on the screen), but it\u2019s interesting that multiple people can be interacting with a site at once.\n\n\n\nSee a bigger version\n\nThe gamepad can also be used as a simple TV remote control, with basic functions such as bringing up the programme guide, adjusting volume and changing channel. I found the simplified interface much more usable than a full-featured remote control.\n\n\n\nI\u2019m used to scrolling being sluggish on consoles, but the Wii U feels almost as snappy as a desktop browser. Sites load considerably faster compared with others I\u2019ve tested.\n\nTilt-scroll\n\nHolding down ZL and ZR while tilting the screen activates an Instapaper-style tilt to scroll for going up and down the page quickly, useful for navigating very long pages.\n\nSecond screen\n\nThe TV mirrors most of what\u2019s on the gamepad, although the TV screen just displays the contents of the browser window, while the gamepad displays the site along with the browser toolbar.\n\nWhen the user with the gamepad is typing, the keyboard is hidden from the TV screen \u2013 there\u2019s just a bit of text at the top indicating what\u2019s happening on the gamepad.\n\nPressing X draws an on-screen curtain over the TV, hiding the content that\u2019s on the gamepad from the TV. Pressing X again opens the curtains, revealing what\u2019s on the gamepad. Holding the button down plays a drumroll before it\u2019s released and the curtains are opened. 
I can imagine this being used in meetings as a fun presentation tool.\n\n\n\n\n\tIn a sense, browsing is a personal activity, but you get the idea that people will be coming and going through the room. When I first saw the curtain function, it made a huge impression on me. I walked around with it all over the company saying, \u201cThey\u2019ve really come up with something amazing!\u201d\nIwata \u2013 Iwata Asks on Nintendo.com\n\n\nText\n\nWriting text\n\nUnlike the capacitive screens on smartphones, the Wii U\u2019s resistive screen needs to be pressed harder than you\u2019re probably used to for registering a touch event. The gamepad screen is big, which makes it much easier to type on this device than other handheld consoles, even without the stylus. It\u2019s still more fiddly than a full-sized keyboard though. When you\u2019re designing forms, consider the extra difficulty console users experience.\n\n\n\nAlthough TV screens are physically big, they are typically viewed from further away than desktop screens. This makes readability an issue, so Nintendo have provided not one, but four ways to zoom in and out:\n\n\n\tDouble-tapping on the screen.\n\tTapping the on-screen zoom icons in the browser toolbar.\n\tPressing the + and - buttons on the device.\n\tMoving the right analogue stick up and down.\n\n\nAs well as making it easy to zoom in and out, Nintendo have done a few other things to improve the reading experience on the TV.\n\nSystem font\n\nOne thing you\u2019ll notice pretty quickly is that the browser lacks all the fonts we\u2019re used to falling back to. Serif fonts are replaced with the system\u2019s sans-serif font. I couldn\u2019t get Typekit\u2019s font loading method to work but Fontdeck, which works slightly differently, does display custom fonts.\n\n The system font has been optimised for reading at a distance and is easy to distinguish because the lowercase e has a quirky little tilt.\n\nDon\u2019t lose :focus\n\nUsing the D-pad to navigate is similar to using a keyboard. Individual links are focused on, with a blue outline drawn around them.\n\nThe recently redesigned An Event Apart site is an example that improves the experience for keyboard and D-pad users. They\u2019ve added a yellow background colour to links on focus. It feels nicer than the default blue outline on its own.\n\n\n\nMedia\n\nThis year, television overtook PCs as the primary way to watch online video content. TV is the natural environment for video, and 42% of online TVs in the US are connected to the internet via a console. Unfortunately, the <video> tag isn\u2019t supported in most console browsers, and those that have Flash installed often have such an old version that the video won\u2019t play.\n\nI suspect this has been a big driver in getting console browsers to support web standards. The Wii U is designed with video content in mind. It doesn\u2019t support Flash but it does support the HTML5 <video> tag.\n\nSome video formats can\u2019t be played, but those that are supported bring up an optimised interface with a custom scrub bar. This is where the device switches from mirroring the TV to being a second screen. The full-screen video is displayed on the TV, and the interface on the gamepad.\n\nThe really clever bit is that while a video is playing, the gamepad user can keep the video playing on the TV screen while searching for another video or browsing the web. This is the same for images too.\n\nOn the left, the video is being shown full-screen on the TV and gamepad. 
Only the gamepad gets the scrub bar. Clicking the slide up/down button (circled) lets the gamepad user browse the web while the video on the TV continues to play full-screen, as shown in the image on the right.\n\nThere\u2019s support for SVG images, and they look great on a high-definition TV screen. However, there\u2019s currently no way to save or download files.\n\nPreparing for console users\n\nI wasn\u2019t expecting to be quite as impressed as I am by this browser. It\u2019s encouraging to see console makers investing time into improving the experience as well as the standards support. In the same way there was an explosion in mobile browser use as the experience got better, I\u2019m sure we\u2019ll see the same with console browsers as the experience improves.\n\nThe value of detection\n\nConsoles offer a range of inputs including gesture, voice and controller buttons. That means we have to think about more diverse methods of input than just touch and click.\n\nThis is where I could tell you to add some detection methods such as user agent string sniffing to target a different experience for console users. But the majority of the time, that really isn\u2019t necessary. TV console browsers are getting a lot better at compensating for the living room environment, and they\u2019re designed to work with websites that haven\u2019t been optimised for this context.\n\nRather than tighten our grip on optimising experiences for every device out there, we\u2019ve got to be pragmatic. There are so many new devices coming out every week, our designs need to be future-proof rather than fixed to a particular device in time.\n\nEven fuzzy device detection isn\u2019t reliable \u2013 the PS Vita declares itself to be mobile, a mobile device and a Kindle Fire tablet, while the two DS devices state they\u2019re neither mobile nor mobile phones nor tablets, but computers. They\u2019re weird outliers, but they\u2019re still important devices to consider.\n\nThinking broadly about how our designs will be interacted with and viewed on a TV screen can help improve that experience for everyone. This is about accessibility. Considering how someone uses a site with a D-pad, we can improve the experience for keyboard users. When we think about colour contrast and text legibility on TV screens, we can apply that for anyone who reads content on the web. So why just offer this to the TV users?\n\nPlaying with the browser\n\n\n\t\u2026we want to prove to them through this Wii U Internet Browser that browsing itself can be an entertainment.\nIwata \u2013 Iwata Asks on Nintendo.com\n\n\nAlthough I\u2019m cautious about designing experiences for specific devices, it\u2019s fun to experiment with the technology available. Nintendo have promised web developers access to the Wii U\u2019s buttons and sensors. There\u2019s already some JavaScript documentation, and a demo for you to try.\n\nIf you\u2019re interested in making your own games, thanks to web standards, a lot of HTML5 games work already on the device. Matt Hackett wrote up his experience of testing the game he built, and he talks about some of features the browser lacks. There\u2019s certainly an incentive there for console manufacturers to improve their HTML5 support so more games can be played in their browser.\n\nWhat excites me about consoles is that it\u2019s like looking at what might be available to us in future browsers. As well as thinking about how our sites work on small screens, we should also consider big screens. 
We\u2019re already figuring out how images should work at different screen widths and connection speeds, but we\u2019ve also got some interesting challenges ahead of us catering for second screen experiences and 3D-enabled devices. \n\nSo, this Christmas, if you\u2019re huddled round the game console or a smart TV, give the browser in it a try.", "year": "2012", "author": "Anna Debenham", "author_slug": "annadebenham", "published": "2012-12-22T00:00:00+00:00", "url": "https://24ways.org/2012/unwrapping-the-wii-u-browser/", "topic": "ux"} {"rowid": 91, "title": "Infinite Canvas: Moving Beyond the Page", "contents": "Remember Web 2.0? I do. In fact, that phrase neatly bifurcates my life on the internet. Pre-2.0, I was occupied by chatting on AOL and eventually by learning HTML so I could build sites on Geocities. Around 2002, however, I saw a WYSIWYG demo in Dreamweaver. The instructor was dragging boxes and images around a canvas. With a few clicks he was able to build a dynamic, single-page interface. Coming from the world of tables and inline HTML styles, I was stunned.\n\nAs I entered college the next year, the web was blossoming: broadband, Wi-Fi, mobile (proud PDA owner, right here), CSS, Ajax, Bloglines, Gmail and, soon, Google Maps. I was a technology fanatic and a hobbyist web developer. For me, the web had long been informational. It was now rapidly becoming something else, something more: sophisticated, presentational, actionable.\n\nIn 2003 we watched as the internet changed. The predominant theme of those early Web 2.0 years was the withering of Internet Explorer 6 and the triumph of web standards. Upon cresting that mountain, we looked around and collectively breathed the rarefied air of pristine HMTL and CSS, uncontaminated by toxic hacks and forks \u2013 only to immediately begin hurtling down the other side at what is, frankly, terrifying speed.\n\nTen years later, we are still riding that rocket. Our days (and nights) are spent cramming for exams on CSS3 and RWD and Sass and RESS. We are the proud, frazzled owners of tiny pocket computers that annihilate the best laptops we could have imagined, and the architects of websites that are no longer restricted to big screens nor even segregated by device. We dragoon our sites into working any time, anywhere. At this point, we can hardly ask the spec developers to slow down to allow us to catch our breath, nor should we. It is, without a doubt, a most wonderful time to be a web developer.\n\nBut despite the newfound luxury of rounded corners, gradients, embeddable fonts, low-level graphics APIs, and, glory be, shadows, the canyon between HTML and native appears to be as wide as ever. The improvements in HTML and CSS have, for the most part, been conveniences rather than fundamental shifts. What I\u2019d like to do now, if you\u2019ll allow me, is outline just a few of the remaining gaps that continue to separate web sites and applications from their native companions.\n\nWhat I\u2019d like for Christmas\n\nThere is one irritant which is the grandfather of them all, the one from which all others flow and have their being, and it is, simply, the page refresh. That\u2019s right, the foundational principle of the web is our single greatest foe. To paraphrase a patron saint of designers everywhere, if you see a page refresh, we blew it.\n\nThe page refresh brings with it, of course, many noble and lovely benefits: addressability, for one; and pagination, for another. 
(See also caching, resource loading, and probably half a dozen others.) Still, those concerns can be answered (and arguably answered more compellingly) by replacing the weary page with the young and hearty document. Flash may be dead, but it has many lessons yet to bequeath.\n\nPreparing a single document when the site loads allows us to engage the visitor in a smooth and engrossing experience. We have long known this, of course. Twitter was not the first to attempt, via JavaScript, to envelop the user in a single-page application, nor the first to abandon it. Our shared task is to move those technologies down the stack, to make them more primitive, so that the next Twitter can be built with the most basic combination of HTML and CSS rather than relying on complicated, slow, and unreliable scripted solutions.\n\nSo, let\u2019s take a look at what we can do, right now, that we might have a better idea of where our current tools fall short.\n\nA print magazine in HTML clothing\n\nLike many others, I suspect, one of my earliest experiences with publishing was laying out newsletters and newspapers on a computer for print. If you\u2019ve ever used InDesign or Quark or even Microsoft Publisher, you\u2019ll remember reflowing content from page to page. The advent of the internet signaled, in many ways, the abandonment of that model. Articles were no longer constrained by the physical limitations of paper. In shedding our chains, however, it is arguable that we\u2019ve lost something useful. We had a self-contained and complete package, a closed loop. It was a thing that could be handled and finished, and doing so provided a sense of accomplishment that our modern, infinitely scrolling, ever-fractal web of content has stolen.\n\nFor our purposes today, we will treat 24 ways as the online equivalent of that newspaper or magazine. A single year\u2019s worth of articles could easily be considered an issue. Right now, navigating between articles means clicking on the article you\u2019d like to view and being taken to that specific address via a page reload. If Drew wanted to, it wouldn\u2019t be difficult to update the page in place (via JavaScript) and change the address (again via JavaScript with the History API) to reflect the new content found at the new location. But what if Drew wanted to do that without JavaScript? And what if he wanted the site to not merely load the content but actually whisk you along the page in a compelling and delightful way, \u00e0 la the Mag+ demo we all saw a few years ago when the iPad was first introduced? Uh, no.\n\nWe\u2019re all familiar with websites that have attempted to go beyond the page by weaving many chunks of content together into a large document and for good reason. There is tremendous appeal in opening and exploring the canvas beyond the edges of our screens.\n\nIn one rather straightforward example from last year, Mozilla contacted Full Stop to build a website promoting Aza Raskin\u2019s proposal for a set of Creative Commons-style privacy icons. Like a lot of the sites we build (including our own), the amount of information we were presenting was minimal. In these instances, we encourage our clients to consider including everything on a single page. The result was a horizontally driven site that was, if not whimsical, at least clever and attractive to the intended audience. 
An experience that is taken for granted when using device-native technology is utterly, maddeningly impossible to replicate on the web without jumping through JavaScript hoops.\n\nIn another, more complex example, we again had the pleasure of working with Aza earlier this year, this time on a redesign of the Massive Health website. Our assignment was to design and build a site that communicated Massive\u2019s commitment to modern personal health. The site had to be visually and interactively stunning while maintaining a usable and clear interface for the casual visitor. Our solution was to extend the infinite company logo into a ribbon that carried the visitor through the site narrative. It also meant we\u2019d be asking the browser to accommodate something it was never designed to handle: a non-linear design. (Be sure to play around. There\u2019s a lot going on under the hood. We were also this close to a ZUI, if WebKit didn\u2019t freak out when pages were scaled beyond 10\u00d7.) Despite the apparent and deliberate design simplicity, the techniques necessary to implement it are anything but. From updating the URL to moving the visitor from section to section, we\u2019re firmly in JavaScript territory. And that\u2019s a shame.\n\nWhat can we do?\n\nWe might not be able to specify these layouts in HTML and CSS just yet, but that doesn\u2019t mean we can\u2019t learn a few new tricks while we wait. Let\u2019s see how close we can come to recreating the privacy icons design, the Massive design, or the Mag+ design without resorting to JavaScript.\n\nA horizontally paginated site\n\nThe first thing we\u2019re going to need is the concept of a page within our HTML document. Using plain old HTML and CSS, we can stack a series of <div>s sideways (with a little assist from our new friend, the viewport-width unit, not that he was strictly necessary). All we need to know is how many pages we have. (And, boy, wouldn\u2019t it be nice to be able to know that without having to predetermine it or use JavaScript?)\n\n.window {\noverflow: hidden;\n width: 100%;\n}\n.pages {\n width: 200vw;\n}\n.page {\n float: left;\n overflow: hidden;\n width: 100vw;\n}\n\nIf you look carefully, you\u2019ll see that the conceit we\u2019ll use in the rest of the demos is in place. Despite the document containing multiple pages, only one is visible at any given time. This allows us to keep the user focused on the task (or content) at hand.\n\nBy the way, you\u2019ll need to use a modern, WebKit-based browser for these demos. I recommend downloading the WebKit nightly builds, Chrome Canary, or being comfortable with setting flags in Chrome.\n\nA horizontally paginated site, with transitions\n\nAh, here\u2019s the rub. We have functional navigation, but precious few cues for the user. It\u2019s not much good shoving the visitor around various parts of the document if they don\u2019t get the pleasant whooshing experience of the journey. You might be thinking, what about that new CSS selector, target-something\u2026? Well, my friend, you\u2019re on the right track. Let\u2019s test it. We\u2019re going to need to use a bit of sleight of hand. While we\u2019d like to simply offset the containing element by the number of pages we\u2019re moving (like we did on Massive), CSS alone can\u2019t give us that information, and that means we\u2019re going to need to fake it by expanding and collapsing pages as you navigate. 
Here are the bits we\u2019re going to need:\n\n.page {\n -webkit-transition: width 1s; // Naturally you're going to want to include all the relevant prefixes here\n float: left;\n left: 0;\n overflow: hidden;\n position: relative;\n width: 100vw;\n}\n.page:not(:target) {\n width: 0;\n}\n\nAh, but we\u2019re not fooling anyone with that trick. As soon as you move beyond a single page, the visitor\u2019s disbelief comes tumbling down when the linear page transitions are unaffected by the distance the pages are allegedly traveling. And you may have already noticed an even more fatal flaw: I secretly linked you to the first page rather than the unadorned URL. If you visit the same page with no URL fragment, you get a blank screen. Sure, we could force a redirect with some server-side trickery, but that feels like cheating. Perhaps if we had the CSS4 subject selector we could apply styles to the parent based on the child being targeted by the URL. We might also need a few more abilities, like determining the total number of pages and having relative sibling selectors (e.g. nth-sibling), but we\u2019d sure be a lot closer.\n\nA horizontally paginated site, with transitions \u2013 no cheating\n\nWell, what other cards can we play? How about the checkbox hack? Sure, it\u2019s a garish trick, but it might be the best we can do today. Check it out. \n\nlabel {\n cursor: pointer;\n}\ninput {\n display: none;\n}\ninput:not(:checked) + .page {\n max-height: 100vh;\n width: 0;\n}\n\nFinally, we can see the first page thanks to the state we are able to set on the appropriate radio button. Of course, now we don\u2019t have URLs, so maybe this isn\u2019t a winning plan after all. While our HTML and CSS toolkit may feel primitive at the moment, we certainly don\u2019t want to sacrifice the addressability of the web. If there\u2019s one bedrock principle, that\u2019s it.\n\nA horizontally paginated site, with transitions \u2013 no cheating and a gorgeous homepage\n\nGorgeous may not be the right word, but our little magazine is finally shaping up. Thanks to the CSS regions spec, we\u2019ve got an exciting new power, the ability to begin an article in one place and bend it to our will. (Remember, your everyday browser isn\u2019t going to work for these demos. Try the WebKit nightly build to see what we\u2019re talking about.) As with the rest of the examples, we\u2019re clearly abusing these features. Off-canvas layouts (you can thank Luke Wroblewski for the name) are simply not considered to be normal patterns\u2026 yet.\n\nHere\u2019s a quick look at what\u2019s going on:\n\n.excerpt-container {\n float: left;\n padding: 2em;\n position: relative;\n width: 100%;\n}\n.excerpt {\n height: 16em;\n}\n.excerpt_name_article-1,\n.page-1 .article-flow-region {\n -webkit-flow-from: article-1;\n}\n.article-content_for_article-1 {\n -webkit-flow-into: article-1;\n}\n\nThe regions pattern is comprised of at least three components: a beginning; an ending; and a source. Using CSS, we\u2019re able to define specific elements that should be available for the content to flow through. If magazine-style layouts are something you\u2019re interested in learning more about (and you should be), be sure to check out the great work Adobe has been doing.\n\nLooking forward, and backward\n\nAs designers, builders, and consumers of the web, we share a desire to see the usability and enjoyability of websites continue to rise. We are incredibly lucky to be working in a time when a three-month-old website can be laughably outdated. 
Our goal ought to be to improve upon both the weaknesses and the strengths of the web platform. We seek not only smoother transitions and larger canvases, but fine-grained addressability. Our URLs should point directly and unambiguously to specific content elements, be they pages, sections, paragraphs or words. Moreover, off-screen design patterns are essential to accommodating and empowering the multitude of devices we use to access the web. We should express the desire that interpage links take advantage of the CSS transitions which have been put to such good effect in every other aspect of our designs. Transitions aren\u2019t just nice to have, they\u2019re table stakes in the highly competitive world of native applications. \n\nThe tools and technologies we have right now allow us to create smart, beautiful, useful webpages. With a little help, we can begin removing the seams and sutures that bind the web to an earlier, less sophisticated generation.", "year": "2012", "author": "Nathan Peretic", "author_slug": "nathanperetic", "published": "2012-12-21T00:00:00+00:00", "url": "https://24ways.org/2012/infinite-canvas-moving-beyond-the-page/", "topic": "code"} {"rowid": 87, "title": "Content Planning Demystified", "contents": "The first thing you learn as a junior editor is that you can\u2019t do everything yourself. You must rely on someone else to do at least part of what must be done: the long-range planning, the initial drafting or shooting or recording, the editing, the production, the final polish. All of those pieces of work that belong to someone else take quite a lot of time \u2014 days, weeks, sometimes months. If you\u2019re the sort of person who wrote college term papers the night before they were due, this can come as a bit of a shock. To my twenty-two-year-old self, it certainly did. \n\nIt turns out that the only real way to avoid a trainwreck with editorial work is to get ahead of the trouble, line everything up carefully, and leave oodles of room for all the pieces to connect on time. The same is true of content strategy, content planning, and just about everything to do with content on the web, except for the writing itself \u2014 and that, too, usually takes far longer than anyone expects. If you\u2019re not a professional editor and you suddenly find yourself dealing with content creation, you\u2019re almost certainly going to underestimate the time and effort involved, or to skip something important in the planning process that pops up to bite you later. \n\nWithout good content, it doesn\u2019t matter how well designed or coded your web project is, because it won\u2019t be doing the thing it\u2019s meant to do. And even if content is far from your specialty, you may well end up being the only one willing to coordinate it far enough in advance to avoid a chaotic ending. Whether you\u2019re hiring writers and editors for a big project, working with a small client, or coaxing some editorial help out of a co-worker, getting the planning work done correctly \u2014 and ahead of time \u2014 will allow you to orchestrate a glorious ballet of togetherness, instead of feverishly scraping together something to put on your site when the deadline looms. So get out the graph paper and the pocket protector, because we\u2019re going to go Full Nerd on this problem.\n\nKnow your poison\n\nAnyone who\u2019s seen a project delayed for six months by content trouble, or derailed by content that\u2019s bland and unhelpful, knows this stuff can make you feel like a dead sock. 
To get ahead of the problem, you\u2019re going to have to learn to spot common problems and plan your way around them. On web projects without a dedicated editorial lead, you\u2019re likely to encounter content that is:\n\n\n\tUseless \u2013 Content that doesn\u2019t serve your readers\u2019 needs in some way is pointless. And because it takes up your time and crowds out genuinely helpful things, it\u2019s actually damaging. The logic is simple: you can make content that\u2019s all about you, and that serves your stated messaging goals, but if no one is motivated to read it, it\u2019s a waste of everyone\u2019s time.\n\tBadly written \u2013 When you publish articles or instructions or other content that is too stiffly formal, overly wordy, hard to understand, offensive, unintentionally cheesy, or otherwise off in tone or style, you\u2019re doing two things. First, you\u2019re weakening the information you\u2019re trying to convey by making it obscure or annoying. Second \u2014 and this one is even more damaging \u2014 you\u2019re demonstrating bad taste. When you get the cultural elements of publishing wrong, you encourage your readers to believe that you either don\u2019t understand them or don\u2019t care about getting it wrong.\n\tGooey \u2013 Content strategists have been talking about structured content (that\u2019s chunks versus blobs) for years. If you\u2019re publishing more than a few dozen pages without thinking through the structure of your content, you\u2019re probably missing a chance to improve your long-term efficiency. If you\u2019re publishing more than a couple of thousand pages without taking care of your content structure, you\u2019re probably doing a lot more manual wrangling (or cumbersome CMS work) than you need to be, especially when it comes to cross-platform publishing.\n\tUnregulated \u2013 If you\u2019re not tracking what works and what doesn\u2019t \u2014 and especially if you don\u2019t know what \u201cworks\u201d means for your project or organization \u2014 you\u2019re almost certainly getting worse results than you should be, for more work.\n\tOverabundant \u2013 As demonstrated by the cinnamon challenge, too much of a delicious thing can be a giant and publicly embarrassing disaster. For most projects and organizations, if you\u2019re making more stuff than your readers can handle, or if you\u2019re spreading your creative and editorial resources too thinly, that\u2019s bad. Spammers, content farms, and barrel-bottom tabloids have their own special math, the side effects of which include insomnia, irritability, and crying in traffic while silently mouthing Wilson Phillips lyrics.\n\n\n\nPrevent all preventable damage\n\nOnce you know what kind of trouble to look for, you can prevent a lot of it by doing some smart planning well before someone starts writing (or recording or shooting video).\n\n\n\tTo prevent uselessness: Know your readers and decide what you\u2019re trying to accomplish \u2014 with your website as a whole, and with each piece of content, always. Once you know what you\u2019re trying to achieve, you can evaluate your work as you go to make sure that it\u2019s actually doing the right thing. (I\u2019ve written a lot more about this for A List Apart and in The Elements of Content Strategy.)\n\tTo prevent bad writing: Establish a consistent and appropriate style using examples (and a style guide if you need one), designate an editor, hire good writers, and make time for quality control. 
Kate Kiefer\u2019s style guide for MailChimp is a superb example of style-wrangling that everyone can use.\n\tTo prevent repulsive goo: Give your content as much structure as possible, and know how structure relates to your entire publishing ecosystem, including all those mobile devices. Sara Wachter-Boettcher\u2019s Content Everywhere and Karen McGrane\u2019s Content Strategy for Mobile offer brilliant yet friendly introductions to the wide world of structured content.\n\tTo prevent unregulated chaos: Measure everything that matters to your project, your client, your organization, and especially your readers \u2014 not generic measures of someone else\u2019s success. Measure it all regularly. Be disciplined. Adjust at regular intervals. Rick Allen\u2019s series on content strategy analytics is an excellent place to begin (part one; part two).\n\tTo prevent overabundance: Stop trying to do everything and focus on giving your readers just a few things they want and genuinely need. Don\u2019t establish a schedule your writers might not be able to keep, and focus on differentiating yourself with quality, not quantity. (And while you\u2019re at it, scratch the auto-posting to social networks and the cross-posting between them. It\u2019s about as engaging as an automated phone system.)\n\n\nAt a slightly higher level, pick the right content person (or team) for the work. If you really only need a few pages of copy, find a smart writer who does good work for multi-platform readers. If you\u2019re slinging tens of thousands of pages of content, get someone with field experience in high-level editorial planning and the ability to turn blobs into chunks and melted goo into Legos. If you\u2019re starting a project that involves making a lot of content over time, bring in someone with journalism experience (or get your client to do so). \n\n\u201cBut wait!\u201d you may say. \u201cI\u2019m not hiring anyone. I have to do this all myself.\u201d That\u2019s not uncommon at all. The bad news is, you have to learn a bunch of stuff. The good news is, you get to learn a bunch of awesome stuff. Figure out what the project needs, just as though you were going to hire someone, and then give yourself time to get up to speed. If it\u2019s a really complicated project, you\u2019re probably going to have trouble unless you eventually get professional help. But if it\u2019s small and you can do it in steps, you can certainly do much better by giving yourself a plan and working on the things that matter most.\n\n\nPlan for the marathon, not the sprint\n\nLaunching with awesome content is a tiny fraction of a victory, which is why it\u2019s so important that your content not be gooey or unregulated. It also means that if you don\u2019t plan for a realistic publication schedule, you are going to slam into reality in a really unpleasant way not too long after you\u2019ve begun. If you\u2019re asking people to make words (or videos or whatever) for you, they\u2019re going to have to do less of something else, so plan for that beforehand. \n\nAnd while you\u2019re at it, unless publishing is your core business, ditch the feed-the-beast plan that leads to fluffy blog posts and spiritless, unhelpful social media content. It\u2019s antisocial for your reading community, offers short-term gains at best, and will burn you out or lower your standards until you don\u2019t even know you\u2019re doing lousy work. 
Good content is expensive, no matter how you do it, but spreading yourself too thin is a much worse investment than doing a smaller thing well and gradually building up a body of superb content that people want to share and keep and return to.", "year": "2012", "author": "Erin Kissane", "author_slug": "erinkissane", "published": "2012-12-20T00:00:00+00:00", "url": "https://24ways.org/2012/content-planning-demystified/", "topic": "content"} {"rowid": 89, "title": "Direction, Distance and Destinations", "contents": "With all these new smartphones in the hands of lost and confused owners, we need a better way to represent distances and directions to destinations. The immediate examples that jump to mind are augmented reality apps which let you see another world through your phone\u2019s camera. While this is interesting, there is a simpler way: letting people know how far away they are and if they are getting warmer or colder. \n\nIn the app world, you can easily tap into the phone\u2019s array of sensors such as the GPS and compass, but what people rarely know is that you can do the same with HTML. The native versus web app debate will never subside, but at least we can show you how to replicate some of the functionality progressively in HTML and JavaScript.\n\nIn this tutorial, we\u2019ll walk through how to create a simple webpage listing distances and directions of a few popular locations around the world. We\u2019ll use JavaScript to access the device\u2019s geolocation API and also attempt to access the compass to get a heading. Both of these APIs are documented, to be included in the W3C geolocation API specification, and can be used on both desktop and mobile devices today.\n\nTo get started, we need a list of a few locations around the world. I have chosen the highest mountain peak on each continent so you can see a diverse set of distances and directions. \n\n\n\t\t\n\t\t\tMountain \n\t\t\t\u00b0Latitude \n\t\t\t\u00b0Longitude \n\t\t\n\t\t\n\t\t\tKilimanjaro\n\t\t\t-3.075833\n\t\t\t37.353333\n\t\t\n\t\t\n\t\t\tVinson Massif\n\t\t\t-78.525483\n\t\t\t-85.617147\n\t\t\n\t\t\n\t\t\tPuncak Jaya\n\t\t\t-4.078889\n\t\t\t137.158333\n\t\t\n\t\t\n\t\t\tEverest\n\t\t\t27.988056\n\t\t\t86.925278\n\t\t\n\t\t\n\t\t\tElbrus\n\t\t\t43.355\n\t\t\t42.439167\n\t\t\n\t\t\n\t\t\tMount McKinley\n\t\t\t63.0695\n\t\t\t-151.0074\n\t\t\n\t\t\n\t\t\tAconcagua\n\t\t\t-32.653431\n\t\t\t-70.011083\n\t\t\n\n\nSource: Wikipedia \n\nWe can put those into an HTML list to be styled and accessed by JavaScript to create some distance and directions calculations.\n\nThe next thing we need to do is check to see if the browser and operating system have geolocation support. To do this we test to see if the function is available or not using a single JavaScript if statement.\n\n<script>\n// If this is true, then the method is supported and we can try to access the location\nif (navigator.geolocation) {\n\tnavigator.geolocation.getCurrentPosition(geo_success, geo_error);\n}\n</script>\n\nThe if statement will be false if geolocation support is not present, and then it is up to you to do something else instead as a fallback. For this example, we\u2019ll do nothing since our page should work as is and only get progressively better if more functionality is available. \n\nThe if statement will be true if there is support and therefore will continue inside the curly brackets to try to get the location. This should prompt the reader to accept or deny the request to get their location. 
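Before we look at those two callbacks, it\u2019s worth sketching the markup the rest of the code will expect. The demo markup isn\u2019t reproduced in this article, so treat the structure below as an assumption: the class names are inferred from the jQuery selectors used later ($('.geo'), .lat, .lon, .distance and .direction), and each location gets its own list item.\n\n<ul>\n <li class=\"geo\">\n Kilimanjaro\n <span class=\"lat\">-3.075833</span>\n <span class=\"lon\">37.353333</span>\n <span class=\"distance\"></span>\n <span class=\"direction\">&uarr;</span>\n </li>\n <!-- one li per peak -->\n</ul>\n\nWith that structure in mind, back to the two callbacks.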
If they say no, the second function callback is processed, in this case a function called geo_error; whereas if the location is available, it fires the geo_success function callback.\n\nThe function geo_error(){ } isn\u2019t that exciting. You can handle this in any way you see fit. The success function is more interesting. We get a position object passed into the function which contains a series of exciting attributes, namely the latitude and longitude of the device\u2019s current location.\n\nfunction geo_success(position){\n\tgLat = position.coords.latitude;\n\tgLon = position.coords.longitude;\n}\n\nNow, in the variables gLat and gLon we have the user\u2019s approximate geographical position. We can use this information to start to calculate some distances between where they are and all the destinations.\n\nAt the time of writing, you can also get position.coords.heading, but on Windows and iOS devices this returned NULL. In the future, if and when this is supported, this is also where you can easily grab the compass information.\n\nInside the geo_success function, we want to loop through the HTML to get all of the mountain peaks\u2019 latitudes and longitudes and compute the distance.\n\n...\n$('.geo').each(function(){\n\t// Get the lat/lon from the HTML\n\ttLat = $(this).find('.lat').html()\n\ttLon = $(this).find('.lon').html()\n\n\t// compute the distances between the current location and this points location\n\tdist = distance(tLat,tLon,gLat,gLon);\n\n\t// set the return values into something useful\n\td = parseInt(dist[0]*10)/10;\n\ta = parseFloat(dist[1]);\n\n\t// display the value in the HTML and style the arrow\n\t$(this).find('.distance').html(d+' km away');\n\t$(this).find('.direction').css('-webkit-transform','rotate(-' + a + 'deg)');\n\n\t// store the arc for later use if compass is available\n\t$(this).attr('data-arc',a);\n}\n\nIn the variable d we have the distance between the current location and the location of the mountain peak based on the Haversine Formula. The variable a is the arc, which has a value from 0 to 359.99. This will be useful later if we have compass support. Given these two values we have a distance and a heading to style the HTML.\n\nThe next thing we want to do is check to see if the device has a compass and then get access to the the current heading. As we\u2019ll see, there are several ways to do this, some of which work on certain devices but not others. The W3C geolocation spec says that, along with the coordinates, there are several other attributes: accuracy; altitude; and heading. Heading is the direction to true north, which is different than magnetic north! WebKit and Windows return NULL for the heading value, but WebKit has an experimental method to fetch the heading. If you get into accessing these sensors, you\u2019ll have to try to catch a few of these methods to finally get a value. Assuming you do, we can move on to the more interesting display opportunities.\n\nIn an ideal world, this would succeed and set a variable we\u2019ll call compassHeading to get a value between 0 and 359.99 degrees. Now we know which direction north is, we also know the direction relative to north of the path to our destination, so we can can subtract the two values to get an arrow to display on the screen. But we\u2019re not finished yet: we also need to get the device\u2019s orientation (landscape or portrait) and subtract the correct amount from the angle for the arrow. 
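The distance() function called above isn\u2019t shown in the article, so here is one possible sketch of it, based on the Haversine formula it mentions. It assumes kilometres, and returns a two-element array of distance then bearing so that dist[0] and dist[1] line up with how they are used above; depending on which way your arrow graphic points, you may need to swap the argument order or negate the angle.\n\nfunction distance(lat1, lon1, lat2, lon2) {\n\t// mean radius of the Earth in kilometres\n\tvar R = 6371;\n\tvar toRad = Math.PI / 180;\n\tvar dLat = (lat2 - lat1) * toRad;\n\tvar dLon = (lon2 - lon1) * toRad;\n\n\t// Haversine formula for the great-circle distance\n\tvar a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +\n\t\tMath.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *\n\t\tMath.sin(dLon / 2) * Math.sin(dLon / 2);\n\tvar d = R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));\n\n\t// initial bearing from the first point to the second, as 0 to 359.99 degrees\n\tvar y = Math.sin(dLon) * Math.cos(lat2 * toRad);\n\tvar x = Math.cos(lat1 * toRad) * Math.sin(lat2 * toRad) -\n\t\tMath.sin(lat1 * toRad) * Math.cos(lat2 * toRad) * Math.cos(dLon);\n\tvar brng = (Math.atan2(y, x) / toRad + 360) % 360;\n\n\treturn [d, brng];\n}\n\nWith that helper accounted for, back to the heading itself.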
Once we have a value, we can use CSS to rotate the arrow the correct number of degrees.\n\n-webkit-transform: rotate(-180deg)\n\nNot all devices support a standard way to access compass information, so in the meantime we need to use a work around. On iOS, you can use the experimental event method e.webkitCompassHeading. We want the compass to update in real time as the device is moved around, so we\u2019ll put this inside an event listener.\n\nwindow.addEventListener('deviceorientation', function(e) {\n\t// Loop through all the locations on the page\n\t$('.geo').each(function(){\n\t\t// get the arc value from north we computed and stored earlier\n\t\tdestination_arc = parseInt($(this).attr('data-arc'))\n\t\tcompassHeading = e.webkitCompassHeading + window.orientation + destination_arc;\n\t\t// find the arrow element and rotate it accordingly\n\t\t$(this).find('.direction').css('-webkit-transform','rotate(-' + compassHeading + 'deg)');\t\t\n\t}\n}\n\nAs the device is rotated, the compass arrow will constantly be updated. If you want to see an example, you can have a look at this page which shows the distances to all the peaks on each continent.\n\nWith progressive enhancement, we slowly layer on additional functionality as we go. The reader will first see the list of locations with a latitude and longitude. If the device is capable and permissions allow, it will then compute the distance. If a compass is available, with the correct permissions it will then add the final layer which is direction.\n\nYou should consider this code a stub for your projects. If you are making a hyperlocal webpage with restaurant locations, for example, then consider adding these features. Knowing not only how far away a place is, but also the direction can be hugely important, and since the compass is always active, it acts as a guide to the location. \n\nFuture developments\n\nImprovements to this could include setting a timer and recalling the navigator.geolocation.getCurrentPosition() function and updating the distances. I chose very distant mountains so kilometres made sense, but you can divide again by 1,000 to convert to metres if you are dealing with much nearer places. Walking or driving would change the distances so the ability to refresh would be important. \n\nIt is outside the scope of this article, but if you manage to get this HTML to work offline, then you can make a nice web app which sits on your devices\u2019 homescreens and works even without an internet connection. This could be ideal for travellers in an unknown city looking for your destination. Just with offline storage, base64 encoding and data URIs, it is possible to embed plenty of design and functionality into a small offline webpage.\n\nNow you know how to use JavaScript to look up a destination\u2019s location and figure out the distance and direction \u2013 never get lost again.", "year": "2012", "author": "Brian Suda", "author_slug": "briansuda", "published": "2012-12-19T00:00:00+00:00", "url": "https://24ways.org/2012/direction-distance-and-destinations/", "topic": "code"} {"rowid": 95, "title": "Giving Content Priority with CSS3 Grid Layout", "contents": "Browser support for many of the modules that are part of CSS3 have enabled us to use CSS for many of the things we used to have to use images for. The rise of mobile browsers and the concept of responsive web design has given us a whole new way of looking at design for the web. However, when it comes to layout, we haven\u2019t moved very far at all. 
We have talked for years about separating our content and source order from the presentation of that content, yet most of us have had to make decisions on source order in order to get a certain visual layout. \n\nOwing to some interesting specifications making their way through the W3C process at the moment, though, there is hope of change on the horizon. In this article I\u2019m going to look at one CSS module, the CSS3 grid layout module, that enables us to define a grid and place elements on to it. This article comprises a practical demonstration of the basics of grid layout, and also a discussion of one way in which we can start thinking of content in a more adaptive way.\n\nBefore we get started, it is important to note that, at the time of writing, these examples work only in Internet Explorer 10. CSS3 grid layout is a module created by Microsoft, and implemented using the -ms prefix in IE10. My examples will all use the -ms prefix, and not include other prefixes simply because this is such an early stage specification, and by the time there are implementations in other browsers there may be inconsistencies. The implementation I describe today may well change, but is also there for your feedback.\n\nIf you don\u2019t have access to IE10, then one way to view and test these examples is by signing up for an account with Browserstack \u2013 the free trial would give you time to have a look. I have also included screenshots of all relevant stages in creating the examples.\n\nWhat is CSS3 grid layout?\n\nCSS3 grid layout aims to let developers divide up a design into a grid and place content on to that grid. Rather than trying to fabricate a grid from floats, you can declare an actual grid on a container element and then use that to position the elements inside. Most importantly, the source order of those elements does not matter. \n\nDeclaring a grid\n\nWe declare a grid using a new value for the display property: display: grid. As we are using the IE10 implementation here, we need to prefix that value: display: -ms-grid;.\n\nOnce we have declared our grid, we set up the columns and rows using the grid-columns and grid-rows properties.\n\n.wrapper {\n display: -ms-grid;\n -ms-grid-columns: 200px 20px auto 20px 200px;\n -ms-grid-rows: auto 1fr;\n}\n\nIn the above example, I have declared a grid on the .wrapper element. I have used the grid-columns property to create a grid with a 200 pixel-wide column, a 20 pixel gutter, a flexible width auto column that will stretch to fill the available space, another 20 pixel-wide gutter and a final 200 pixel sidebar: a flexible width layout with two fixed width sidebars. Using the grid-rows property I have created two rows: the first is set to auto so it will stretch to fill whatever I put into it; the second row is set to 1fr, a new value used in grids that means one fraction unit. In this case, one fraction unit of the available space, effectively whatever space is left.\n\nPositioning items on the grid\n\nNow I have a simple grid, I can pop items on to it. If I have a <div> with a class of .main that I want to place into the second row, and the flexible column set to auto I would use the following CSS:\n\n.content {\n -ms-grid-column: 3;\n -ms-grid-row: 2;\n -ms-grid-row-span: 1;\n}\n\nIf you are old-school, you may already have realised that we are essentially creating an HTML table-like layout structure using CSS. 
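To round the example out, here is a small sketch that puts a header and the two sidebars on the same grid; the class names are mine for illustration rather than taken from the demo.\n\n.header {\n -ms-grid-column: 1;\n -ms-grid-column-span: 5; /* spans all five tracks, gutters included */\n -ms-grid-row: 1;\n}\n.sidebar-left {\n -ms-grid-column: 1;\n -ms-grid-row: 2;\n}\n.sidebar-right {\n -ms-grid-column: 5;\n -ms-grid-row: 2;\n}\n\nEach element simply states which column and row it should occupy, exactly as you would place content into the cells of a table.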
I found the concept of a table the most helpful way to think about the grid layout module when trying to work out how to place elements.\n\nCreating grid systems\n\nAs soon as I started to play with CSS3 grid layout, I wanted to see if I could use it to replicate a flexible grid system like this fluid 16-column 960 grid system.\n\nI started out by defining a grid on my wrapper element, using fractions to make this grid fluid.\n\n.wrapper {\t \n width: 90%;\n margin: 0 auto 0 auto;\n display: -ms-grid;\n -ms-grid-columns: 1fr (4.25fr 1fr)[16];\n -ms-grid-rows: (auto 20px)[24];\n}\n\nLike the 960 grid system I was using as an example, my grid starts with a gutter, followed by the first actual column, plus another gutter repeated sixteen times. What this means is that if I want to span two columns, as far as the grid layout module is concerned that is actually three columns: two wide columns, plus one gutter. So this needs to be accounted for when positioning items.\n\nI created a CSS class for each positioning option: column position; rows position; and column span. For example:\n\n.grid1 {-ms-grid-column: 2;} /* applying this class positions an item in the first column (the gutter is column 1) */\n.grid2 {-ms-grid-column: 4;} /* 2nd column - gutter|column 1|gutter */\n.grid3 {-ms-grid-column: 6;} /* 3rd column - gutter|column 1|gutter|column2|gutter */\n\n.row1 {-ms-grid-row:1;}\n.row2 {-ms-grid-row:3;}\n.row3 {-ms-grid-row:5;}\n\n.colspan1 {-ms-grid-column-span:1;}\n.colspan2 {-ms-grid-column-span:3;}\n.colspan3 {-ms-grid-column-span:5;}\n\nI could then add multiple classes to each element to set the position on on the grid.\n\n\n\nThis then gives me a replica of the fluid grid using CSS3 grid layout. To see this working fire up IE10 and view Example 1.\n\nThis works, but\u2026\n\nThis worked, but isn\u2019t ideal. I considered not showing this stage of my experiment \u2013 however, I think it clearly shows how the grid layout module works and is a useful starting point. That said, it\u2019s not an approach I would take in production. First, we have to add classes to our markup that tie an element to a position on the grid. This might not be too much of a problem if we are always going to maintain the sixteen-column grid, though, as I will show you that the real power of the grid layout module appears once you start to redefine the grid, using different grids based on media queries. If you drop to a six-column layout for small screens, positioning items into column 16 makes no sense any more.\n\nCalculating grid position using LESS\n\nAs we\u2019ve seen, if you want to use a grid with main columns and gutters, you have to take into account the spacing between columns as well as the actual columns. This means we have to do some calculating every time we place an item on the grid. In my example above I got around this by creating a CSS class for each position, allowing me to think in sixteen rather than thirty-two columns. But by using a CSS preprocessor, I can avoid using all the classes yet still think in main columns.\n\nI\u2019m using LESS for my example. My simple grid framework consists of one simple mixin.\n\n.position(@column,@row,@colspan,@rowspan) {\n -ms-grid-column: @column*2;\n -ms-grid-row: @row*2-1;\n -ms-grid-column-span: @colspan*2-1;\n -ms-grid-row-span: @rowspan*2-1;\n}\n\nMy mixin takes four parameters: column; row; colspan; and rowspan. 
So if I wanted to place an item on column four, row three, spanning two columns and one row, I would write the following CSS:\n\n.box {\n .position(4,3,2,1);\n}\n\nThe mixin would return:\n\n.box {\n -ms-grid-column: 8;\n -ms-grid-row: 5;\n -ms-grid-column-span: 3;\n -ms-grid-row-span: 1;\n}\n\nThis saves me some typing and some maths. I could also add other prefixed values into my mixin as other browsers started to add support.\n\nWe can see this in action creating a new grid. Instead of adding multiple classes to each element, I can add one class; that class uses the mixin to create the position. I have also played around with row spans using my mixin and you can see we end up with a quite complicated arrangement of boxes. Have a look at example two in IE10. I\u2019ve used the JavaScript LESS parser so that you can view the actual LESS that I use. Note that I have needed to escape the -ms prefixed properties with ~\"\" to get LESS to accept them.\n\n\n\nThis is looking better. I don\u2019t have direct positioning information on each element in the markup, just a class name \u2013 I\u2019ve used grid(x), but it could be something far more semantic. We can now take the example a step further and redefine the grid based on screen width.\n\nMedia queries and the grid\n\nThis example uses exactly the same markup as the previous example. However, we are now using media queries to detect screen width and redefine the grid using a different number of columns depending on that width.\n\nI start out with a six-column grid, defining that on .wrapper, then setting where the different items sit on this grid:\n\n.wrapper {\t \n width: 90%;\n margin: 0 auto 0 auto;\n display: ~\"-ms-grid\"; /* escaped for the LESS parser */\n -ms-grid-columns: ~\"1fr (4.25fr 1fr)[6]\"; /* escaped for the LESS parser */\n -ms-grid-rows: ~\"(auto 20px)[40]\"; /* escaped for the LESS parser */\n}\n.grid1 { .position(1,1,1,1); } \n.grid2 { .position(2,1,1,1); } \n/* ... see example for all declarations ... */\n\n\n\nUsing media queries, I redefine the grid to nine columns when we hit a minimum width of 700 pixels.\n\n@media only screen and (min-width: 700px) {\n.wrapper {\n -ms-grid-columns: ~\"1fr (4.25fr 1fr)[9]\";\n -ms-grid-rows: ~\"(auto 20px)[50]\";\n}\n.grid1 { .position(1,1,1,1); } \n.grid2 { .position(2,1,1,1); } \n/* ... */\n}\n\n\n\nFinally, we redefine the grid for 960 pixels, back to the sixteen-column grid we started out with.\n\n@media only screen and (min-width: 940px) {\n.wrapper {\t \n -ms-grid-columns:~\" 1fr (4.25fr 1fr)[16]\";\n -ms-grid-rows:~\" (auto 20px)[24]\";\n}\n.grid1 { .position(1,1,1,1); } \n.grid2 { .position(2,1,1,1); } \n/* ... */\n}\n\nIf you view example three in Internet Explorer 10 you can see how the items reflow to fit the window size. You can also see, looking at the final set of blocks, that source order doesn\u2019t matter. You can pick up a block from anywhere and place it in any position on the grid.\n\nLaying out a simple website\n\nSo far, like a toddler on Christmas Day, we\u2019ve been playing with boxes rather than thinking about what might be in them. So let\u2019s take a quick look at a more realistic layout, in order to see why the CSS3 grid layout module can be really useful. At this time of year, I am very excited to get out of storage my collection of odd nativity sets, prompting my family to suggest I might want to open a museum. 
Should I ever do so, I\u2019ll need a website, and here is an example layout.\n\n\n\nAs I am using CSS3 grid layout, I can order my source in a logical manner. In this example my document is as follows, though these elements could be in any order I please:\n\n<div class=\"wrapper\">\n <div class=\"welcome\">\n ...\n </div>\n <article class=\"main\">\n ...\n </article>\n <div class=\"info\">\n ...\n </div>\n <div class=\"ads\">\n ...\n </div>\n</div>\n\nFor wide viewports I can use grid layout to create a sidebar, with the important information about opening times on the top righ,t with the ads displayed below it. This creates the layout shown in the screenshot above.\n\n@media only screen and (min-width: 940px) {\n .wrapper {\t \n -ms-grid-columns:~\" 1fr (4.25fr 1fr)[16]\";\n -ms-grid-rows:~\" (auto 20px)[24]\";\n }\n .welcome {\n .position(1,1,12,1);\n padding: 0 5% 0 0;\n }\n .info {\n .position(13,1,4,1);\n border: 0;\n padding:0;\n }\n .main {\n .position(1,2,12,1);\n padding: 0 5% 0 0;\n } \n .ads {\n .position(13,2,4,1);\n display: block;\n margin-left: 0;\n }\n}\n\nIn a floated layout, a sidebar like this often ends up being placed under the main content at smaller screen widths. For my situation this is less than ideal. I want the important information about opening times to end up above the main article, and to push the ads below it. With grid layout I can easily achieve this at the smallest width .info ends up in row two and .ads in row five with the article between.\n\n.wrapper {\t \n display: ~\"-ms-grid\";\n -ms-grid-columns: ~\"1fr (4.25fr 1fr)[4]\";\n -ms-grid-rows: ~\"(auto 20px)[40]\";\n}\n.welcome {\n .position(1,1,4,1);\n}\n.info {\n .position(1,2,4,1);\n border: 4px solid #fff;\n padding: 10px;\n}\n.content {\n .position(1,3,4,5);\n}\n.main {\n .position(1,3,4,1);\n}\n.ads {\n .position(1,4,4,1);\n}\n\n\n\nFinally, as an extra tweak I add in a breakpoint at 600 pixels and nest a second grid on the ads area, arranging those three images into a row when they sit below the article at a screen width wider than the very narrow mobile width but still too narrow to support a sidebar. \n\n@media only screen and (min-width: 600px) {\n .ads {\n display: ~\"-ms-grid\";\n -ms-grid-columns: ~\"20px 1fr 20px 1fr 20px 1fr\";\n -ms-grid-rows: ~\"1fr\";\n margin-left: -20px;\n }\n .ad:nth-child(1) {\n .position(1,1,1,1);\n }\n .ad:nth-child(2) {\n .position(2,1,1,1);\n }\n .ad:nth-child(3) {\n .position(3,1,1,1);\n }\n}\n\nView example four in Internet Explorer 10.\n\n\n\nThis is a very simple example to show how we can use CSS grid layout without needing to add a lot of classes to our document. It also demonstrates how we can mainpulate the content depending on the context in which the user is viewing it.\n\nLayout, source order and the idea of content priority\n\nCSS3 grid layout isn\u2019t the only module that starts to move us away from the issue of visual layout being linked to source order. However, with good support in Internet Explorer 10, it is a nice way to start looking at how this might work. If you look at the grid layout module as something to be used in conjunction with the flexible box layout module and the very interesting CSS regions and exclusions specifications, we have, tantalizingly on the horizon, a powerful set of tools for layout.\n\nI am particularly keen on the potential separation of source order from layout as it dovetails rather neatly into something I spend a lot of time thinking about. 
As a CMS developer, working on larger scale projects as well as our CMS product Perch, I am interested in how we better enable content editors to create content for the web. In particular, I search for better ways to help them create adaptive content; content that will work in a variety of contexts rather than being tied to one representation of that content.\n\nIf the concept of adaptive content is new to you, then Karen McGrane\u2019s presentation Adapting Ourselves to Adaptive Content is the place to start. Karen talks about needing to think of content as chunks, that might be used in many different places, displayed differently depending on context.\n\nI absolutely agree with Karen\u2019s approach to content. We have always attempted to move content editors away from thinking about creating a page and previewing it on the desktop. However at some point content does need to be published as a page, or a collection of content if you prefer, and bits of that content have priority. Particularly in a small screen context, content gets linearized, we can only show so much at a time, and we need to make sure important content rises to the top. In the case of my example, I wanted to ensure that the address information was clearly visible without scrolling around too much. Dropping it with the entire sidebar to the bottom of the page would not have been so helpful, though neither would moving the whole sidebar to the top of the screen so a visitor had to scroll past advertising to get to the article.\n\nIf our layout is linked to our source order, then enabling the content editor to make decisions about priority is really hard. Only a system that can do some regeneration of the source order on the server-side \u2013 perhaps by way of multiple templates \u2013 can allow those kinds of decisions to be made. For larger systems this might be a possibility; for smaller ones, or when using an off-the-shelf CMS, it is less likely to be. Fortunately, any system that allows some form of custom field type can be used to pop a class on to an element, and with CSS grid layout that is all that is needed to be able to target that element and drop it into the right place when the content is viewed, be that on a desktop or a mobile device.\n\nThis approach can move us away from forcing editors to think visually. At the moment, I might have to explain to an editor that if a certain piece of content needs to come first when viewed on a mobile device, it needs to be placed in the sidebar area, tying it to a particular layout and design. I have to do this because we have to enforce fairly strict rules around source order to make the mechanics of the responsive design work. If I can instead advise an editor to flag important content as high priority in the CMS, then I can make decisions elsewhere as to how that is displayed, and we can maintain the visual hierarchy across all the different ways content might be rendered.\n\nWhy frustrate ourselves with specifications we can\u2019t yet use in production?\n\nThe CSS3 grid layout specification is listed under the Exploring section of the list of current work of the CSS Working Group. While discussing a module at this stage might seem a bit pointless if we can\u2019t use it in production work, there is a very real reason for doing so. If those of us who will ultimately be developing sites with these tools find out about them early enough, then we can start to give our feedback to the people responsible for the specification. 
There is information on the same page about how to get involved with the disussions.\n\nSo, if you have a bit of time this holiday season, why not have a play with the CSS3 grid layout module? I have outlined here some of my thoughts on how grid layout and other modules that separate layout from source order can be used in the work that I do. Likewise, wherever in the stack you work, playing with and thinking about new specifications means you can think about how you would use them to enhance your work. Spot a problem? Think that a change to the specification would improve things for a specific use case? Then you have something you could post to www-style to add to the discussion around this module.\n\nAll the examples are on CodePen so feel free to play around and fork them.", "year": "2012", "author": "Rachel Andrew", "author_slug": "rachelandrew", "published": "2012-12-18T00:00:00+00:00", "url": "https://24ways.org/2012/css3-grid-layout/", "topic": "code"} {"rowid": 83, "title": "Cut Copy Paste", "contents": "Long before I got into this design thing, I was heavily into making my own music inspired by the likes of Coldcut and Steinski. I would scour local second-hand record shops in search of obscure beats, loops and bits of dialogue in the hope of finding that killer sample I could then splice together with other things to make a huge hit that everyone would love. While it did eventually lead to a record contract and getting to release a few 12\u2033 singles, ultimately I knew I\u2019d have to look for something else to pay the bills.\n\nI may not make my own records any more, but the approach I took back then \u2013 finding (even stealing) things, cutting and pasting them into interesting combinations \u2013 is still at the centre of how I work, only these days it\u2019s pretty much bits of code rather than bits of vinyl. Over the years I\u2019ve stored these little bits of code (some I\u2019ve found, some I\u2019ve created myself) in Evernote, ready to be dialled up whenever I need them. \n\nSo when Drew got in touch and asked if I\u2019d like to do something for this year\u2019s 24 ways I thought it might be kind of cool to share with you a few of these snippets that I find really useful. Think of these as a kind of coding mix tape; but remember \u2013 don\u2019t just copy as is: play around, combine and remix them into other wonderful things. \n\nSome of this stuff is dirty; some of it will make hardcore programmers feel ill. For those people, remember this \u2013 while you were complaining about the syntax, I made something.\n\nCreate unique colours\n\nLet\u2019s start right away with something I stole. Well, actually it was given away at the time by Matt Biddulph who was then at Dopplr before Nokia destroyed it. Imagine you have thousands of words and you want to assign each one a unique colour. Well, Matt came up with a crazily simple but effective way to do that using an MD5 hash. Just encode said word using an MD5 hash, then take the first six characters of the string you get back to create a hexadecimal colour representation. \n\nI can\u2019t guarantee that it will be a harmonious colour palette, but it\u2019s still really useful. The thing I love the most about this technique is the left-field thinking of using an encryption system to create colours! 
Here\u2019s an example using JavaScript:\n\n// requires the MD5 library available at http://pajhome.org.uk/crypt/md5\n\n function MD5Hex(str){\n result = MD5.hex(str).substring(0, 6);\n return result;\n }\n\nMake something breathe using a sine wave\n\nI never paid attention in school, especially during double maths. As a matter of fact, the only time I received corporal punishment \u2013 several strokes of the ruler \u2013 was in maths class. Anyway, if they had shown me then how beautiful mathematics actually is, I might have paid more attention. Here\u2019s a little example of how a sine wave can be used to make something appear to breathe. \n\nI recently used this on an Arduino project where an LED ring surrounding a button would gently breathe. Because of that it felt much more inviting. I love mathematics.\n\nfor(int i = 0; i<360; i++){ \n float rad = DEG_TO_RAD * i;\n int sinOut = constrain((sin(rad) * 128) + 128, 0, 255);\n analogWrite(LED, sinOut);\n delay(10); \n}\n\nSnap position to grid\n\nThis is so elegant I love it, and it was shown to me by Gary Burgess, or Boom Boom as myself and others like to call him. It snaps a position, in this case the X-position, to a grid. Just define your grid size (say, twenty pixels) and you\u2019re good.\n\nsnappedXpos = floor( xPos / gridSize) * gridSize;\n\nCalculate the distance between two objects\n\nFor me, interaction design is about the relationship between two objects: you and another object; you and another person; or simply one object to another. How close these two things are to each other can be a handy thing to know, allowing you to react to that information within your design. Here\u2019s how to calculate the distance between two objects in a 2-D plane:\n\ndeltaX = round(p2.x-p1.x);\ndeltaY = round(p2.y-p1.y);\ndiff = round(sqrt((deltaX*deltaX)+(deltaY*deltaY)));\n\nFind the X- and Y-position between two objects\n\nWhat if you have two objects and you want to place something in-between them? A little bit of interruption and disruption can be a good thing. This small piece of code will allow you to place an object in-between two other objects:\n\n// set the position: 0.5 = half-way\t\n\nfloat position = 0.5;\nfloat x = x1 + (x2 - x1) *position; \nfloat y = y1 + (y2 - y1) *position; \n\nDistribute objects equally around a circle \t\n\nMore fun with maths, this time adding cosine to our friend sine. Let\u2019s say you want to create a circular navigation of arbitrary elements (yeah, Jakob, you heard), or you want to place images around a circle. Well, this piece of code will do just that. You can adjust the size of the circle by changing the distance variable and alter the number of objects with the numberOfObjects variable. Example below is for use in Processing.\n\n// Example for Processing available for free download at processing.org\n\nvoid setup() {\n\n size(800,800);\n int numberOfObjects = 12;\n int distance = 100;\n float inc = (TWO_PI)/numberOfObjects;\n float x,y;\n float a = 0;\n\n for (int i=0; i < numberOfObjects; i++) {\n x = (width/2) + sin(a)*distance;\n y = (height/2) + cos(a)*distance;\n ellipse(x,y,10,10);\n a += inc;\n\n }\n}\n\nUse modulus to make a grid\n\nThe modulus operator, represented by %, returns the remainder of a division. Fallen into a coma yet? Hold on a minute \u2013 this seemingly simple function is very powerful in lots of ways. 
At a simple level, you can use it to determine if a number is odd or even, great for creating alternate row colours in a table for instance:\n\nboolean checkForEven(numberToCheck) {\n if (numberToCheck % 2 == 0) \n return true;\n } else {\n return false; \n }\n}\n\nThat\u2019s all well and good, but here\u2019s a use of modulus that might very well blow your mind. Construct a grid with only a few lines of code. Again the example is in Processing but can easily be ported to any other language.\n\nvoid setup() {\n\nsize(600,600);\nint numItems = 120;\nint numOfColumns = 12;\nint xSpacing = 40;\nint ySpacing = 40;\nint totalWidth = xSpacing*numOfColumns;\n\nfor (int i=0; i < numItems; i++) {\n\nellipse(floor((i*xSpacing)%totalWidth),floor((i*xSpacing)/totalWidth)*ySpacing,10,10);\n\n}\n}\n\nNot all the bits of code I keep around are for actual graphical output. I also have things that are very utilitarian, but which I still consider part of the design process. Here\u2019s a couple of things that I\u2019ve found really handy lately in my design workflow. They may be a little specific, but I hope they demonstrate that it\u2019s not about working harder, it\u2019s about working smarter. \n\nMerge CSV files into one file\n\nRecently, I\u2019ve had to work with huge \u2013 about 1GB \u2013 CSV text files that I then needed to combine into one master CSV file so I could then process the data. Opening up each text file and then copying and pasting just seemed really dumb, not to mention slow, so I thought there must be a better way. After some Googling I found this command line script that would combine .txt files into one file and add a new line after each: \n\nawk 1 *.txt > finalfile.txt\n\nBut that wasn\u2019t what I was ideally after. I wanted to merge the CSV files, keeping the first row of the first file (the column headings) and then ignore the first row of subsequent files. Sure enough I found the answer after some Googling and it worked like a charm. Apologies to the original author but I can\u2019t remember where I found it, but you, sir or madam, are awesome. Save this as a shell script:\n\nFIRST=\n\nfor FILE in *.csv\n do\n exec 5<\"$FILE\" # Open file\n read LINE <&5 # Read first line\n [ -z \"$FIRST\" ] && echo \"$LINE\" # Print it only from first file\n FIRST=\"no\"\n\n cat <&5 # Print the rest directly to standard output\n exec 5<&- # Close file\n # Redirect stdout for this section into file.out \n\ndone > file.out\n\nCreate a symbolic link to another file or folder\n\nOftentimes, I\u2019ll find myself hunting through a load of directories to load a file to be processed, like a CSV file. Use a symbolic link (in the Terminal) to place a link on your desktop or wherever is most convenient and it\u2019ll save you loads of time. Especially great if you\u2019re going through a Java file dialogue box in Processing or something that doesn\u2019t allow the normal Mac dialog box or aliases.\n\ncd /DirectoryYouWantShortcutToLiveIn\nln -s /Directory/You/Want/ShortcutTo/ TheShortcut\n\nYou can do it, in the mix\n\nI hope you\u2019ve found some of the above useful and that they\u2019ve inspired a few ideas here and there. Feel free to tell me better ways of doing things or offer up any other handy pieces of code. 
Most of all though, collect, remix and combine the things you discover to make lovely new things.", "year": "2012", "author": "Brendan Dawes", "author_slug": "brendandawes", "published": "2012-12-17T00:00:00+00:00", "url": "https://24ways.org/2012/cut-copy-paste/", "topic": "code"} {"rowid": 73, "title": "How to Make Your Site Look Half-Decent in Half an Hour", "contents": "Programmers like me are often intimidated by design \u2013 but a little effort can give a huge return on investment. Here are one coder\u2019s tips for making any site quickly look more professional. \n\nI am a programmer. I am not a designer. I have a degree in computer science, and I don\u2019t mind Comic Sans. (It looks cheerful. Move on.)\n\nBut although I am a programmer, I want to make my sites look attractive. This is partly out of vanity, and partly realism. Vanity because I want people to think my work is good, and realism because the research shows that people won\u2019t think a site is credible unless it also looks attractive.\n\nFor a very long time after I became a programmer, I was scared of design. Design seemed to consist of complicated rules that weren\u2019t written down anywhere, plus an unlearnable sense of taste, possessed only by a black-clad elite. \n\nBut a little while ago, I decided to do my best to hack what it took to make my own projects look vaguely attractive. And although this doesn\u2019t come close to the effect a professional designer could achieve, gathering these resources for improving a site\u2019s look and feel has been really helpful. \n\nIf I hadn\u2019t figured out some basic design shortcuts, it\u2019s unlikely that a weekend hack of mine would have ended up on page three of the Daily Mail. And too often now, I see excellent programming projects that don\u2019t reach the audience they deserve, simply because their design doesn\u2019t match their execution. \n\nSo, if you are a developer, my Christmas present to you is this: my own collection of hacks that, used rightly, can make your personal programming projects look professional, quickly. None are hard to learn, most are free, and they let you focus on writing code.\n\nOne thing to note about these tips, though. They are a personal, pragmatic compilation. They are suggestions, not a definitive guide. You will definitely get better results by working with a professional designer, and by studying design more deeply. \n\nIf you are a designer, I would love to hear your suggestions for the best tools that I\u2019ve missed, and I\u2019d love to know how we programmers can do things better.\n\nWith that, on to the tools\u2026\n\n1. Use Bootstrap\n\nIf you\u2019re not already using Bootstrap, start now. I really think that Bootstrap is one of the most significant technical achievements of the last few years: it democratizes the whole process of web design. \n\nEssentially, Bootstrap is a a grid system, with a bunch of common elements. So you can lay your site out how you want, drop in simple elements like forms and tables, and get a good-looking, consistent result, without spending hours fiddling with CSS. You just need HTML. \n\nAnother massive upside is that it makes it easy to make any site responsive, so you don\u2019t have to worry about writing media queries. Go, get Bootstrap and check out the examples. To keep your site lightweight, you can customize your download to include only the elements you want. 
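If Bootstrap is new to you, here is a minimal sketch of what \u201cyou just need HTML\u201d means in practice. The class names are Bootstrap\u2019s own (version 2, current at the time of writing): a container, a row, and columns whose spans add up to twelve. The content is placeholder.\n\n<div class=\"container\">\n <div class=\"row\">\n <div class=\"span8\">\n <h1>Main content</h1>\n <p>Laid out on the grid without writing any CSS of our own.</p>\n </div>\n <div class=\"span4\">\n <h2>Sidebar</h2>\n <p>With the responsive stylesheet included, this column stacks below the main content on small screens.</p>\n </div>\n </div>\n</div>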
\n\nIf you have more time, then Mark Otto\u2019s article on why and how Bootstrap was created at Twitter is well worth a read. \n\n2. Pimp Bootstrap\n\nUsing Bootstrap is already a significant advance on not using Bootstrap, and massively reduces the tedium of front-end development. But you also run the risk of creating Yet Another Bootstrap Site, or Hack Day Design, as it\u2019s known. \n\nIf you\u2019re really pressed for time, you could buy a theme from Wrap Bootstrap. These are usually created by professional designers, and will give a polish that we can\u2019t achieve ourselves. Your site won\u2019t be unique, but it will look good quickly. \n\nLuckily, it\u2019s pretty easy to make Bootstrap not look too much like Bootstrap \u2013 using fonts, CSS effects, background images, colour schemes and so on. Most of the rest of this article covers different ways to achieve this. \n\nWe are going to customize this Bootstrap example page.\n\nThis already has some custom CSS in the <head>. We\u2019ll pull it all out, and create a new CSS file, custom.css. Then we add a reference to it in the header. Now we\u2019re ready to start customizing things.\n\n\n\n3. Fonts\n\nWeb fonts are one of the quickest ways to make your site look distinctive, modern, and less Bootstrappy, so we\u2019ll start there. \n\nFirst, we can add some sweet fonts, from Google Web Fonts. The intimidating bit is choosing fonts that look nice together. Luckily, there are plenty of suggestions around the web: we\u2019re going to use one of DesignShack\u2019s suggested free Google Fonts pairings. Our fonts are Corben (for headings) and Nobile (for body copy). \n\nThen we add these files to our <head>. \n\n<link href=\"http://fonts.googleapis.com/css?family=Corben:bold\" rel=\"stylesheet\" type=\"text/css\">\n <link href=\"http://fonts.googleapis.com/css?family=Nobile\" rel=\"stylesheet\" type=\"text/css\">\n\n\u2026and this to custom.css: \n\nh1, h2, h3, h4, h5, h6 {\n font-family: 'Corben', Georgia, Times, serif;\n}\np, div {\n font-family: 'Nobile', Helvetica, Arial, sans-serif;\n}\n\nNow our example looks like this. It\u2019s not going to win any design awards, but it\u2019s immediately better:\n\n\n\nI also recommend the web font services Fontdeck, or Typekit \u2013 these have a wider selection of fonts, and are worth the investment if you regularly need to make sites look good. For more font combinations, Just My Type suggests appealing pairings from Typekit. Finally, you can experiment with type pairing ideas at Type Connection. For the design background on pairing fonts, Typekit\u2019s post is worth a read. \n\n4. Textures\n\nAn instant way to make a site look classy is to use textures. You know the grey, stripy, indefinably elegant background on 24ways.org? That.\n\nIf only there was a superb resource listing attractive, free, ready-to-use textures\u2026 Oh wait, there\u2019s Atle Mo\u2019s Subtle Patterns. \n\nWe\u2019re going to use Cream Dust, for an effect that can only be described as subtle. We download the file to a new /img/ directory, then add this to the CSS file:\n\nbody { \n background: url(/img/cream_dust.png) repeat 0 0;\n}\n\nBang:\n\n\n\nFor some design background on patterns, I recommend reading through Smashing Magazine\u2019s guidelines on textures. (TL;DR version: use textures to enhance beauty, and clarify the information architecture of your site; but don\u2019t overdo it, or inadvertently obscure your text.)\n\nStill more to do, though. Onwards. \n\n5. 
Icons\n\nLast year\u2019s 24 ways taught us to use icon fonts for our site icons. \n\nThis is great for the time-pressed coder, because icon fonts don\u2019t just cut down on HTTP requests \u2013 they\u2019re a lot quicker to set up than image-based icons, too. \n\nBootstrap ships with an extensive, free for commercial use icon set in the shape of Font Awesome. Its icons are safe for screen readers, and can even be made to work in IE7 if needed (we\u2019re not going to bother here). \n\nTo start using these icons, just download Font Awesome, and add the /fonts/ directory to your site and the font-awesome.css file into your /css/ directory. Then add a reference to the CSS file in your header:\n\n<link rel=\"stylesheet\" href=\"/css/font-awesome.css\">\n\nFinally, we\u2019ll add a truck icon to the main action button, as follows. Why a truck? Why not?\n\n<a class=\"btn btn-large btn-success\" href=\"#\"><i class=\"icon-truck\"></i> Sign up today</a>\n\nWe\u2019ll also tweak our CSS file to stop the icon nudging up against the button text:\n\n.jumbotron .btn i { \n margin-right: 8px; \n}\n\nAnd this is how it looks:\n\n\n\nNot the most exciting change ever, but it livens up the page a bit. The licence is CC-BY-3.0, so we also include a mention of Font Awesome and its URL in the source code. \n\nIf you\u2019d like something a little more distinctive, Shifticons lets you pay a few cents for individual icons, with the bonus that you only have to serve the icons you actually use, which is more efficient. Its icons are also friendly to screen readers, but won\u2019t work in IE7. \n\n6. CSS3\n\nThe next thing you could do is add some CSS3 goodness. It can really help the key elements of the site stand out. \n\nIf you are pressed for time, just adding box-shadow and text-shadow to emphasize headings and standouts can be useful: \n\nh1 { \n text-shadow: 1px 1px 1px #ccc;\n}\n.div-that-you want-to-stand-out { \n box-shadow: 0 0 1em 1em #ccc;\n}\n\nWe have a little more time though, so we\u2019re going to do something more subtle. We\u2019ll add a radial gradient behind the main heading, using an online gradient editor. \n\nThe output is hefty, but you can see it in the CSS. Note that we also need to add the following to our HTML, for IE9 support: \n\n<!--[if gte IE 9]>\n <style type=\"text/css\">\n .gradient {\n filter: none;\n }\n </style>\n<![endif]-->\n\nAnd the effect \u2013 I don\u2019t know what a designer would think, but I like the way it makes the heading pop.\n\n\n\nFor a crash course in useful modern CSS effects, I highly recommend CodeSchool\u2019s online course in Functional HTM5 and CSS3. It costs money ($25 a month to subscribe), but it\u2019s worth it for the time you\u2019ll save. As a bonus, you also get access to some excellent JavaScript, Ruby and GitHub courses. \n\n(Incidentally, if you find yourself fighting with basic float and display attributes in CSS \u2013 and there\u2019s no shame in it, CSS layout is not intuitive \u2013 I recommend the CSS Cross-Country course at CodeSchool.)\n\n7. Add a twist\n\nWe could leave it there, but we\u2019re going to add a background image, and give the site some personality. \n\nThis is the area of design that I think many programmers find most intimidating. How do we create the graphics and photographs that a designer would use? The answer is iStockPhoto and its competitors \u2013 online image libraries where you can find and pay for images. They won\u2019t be unique, but for our purposes, that\u2019s fine. 
\n\nWe\u2019re going to use a Christmassy image. For a twist, we\u2019re going to use Backstretch to make it responsive. \n\nWe must pay for the image, then download it to our /img/ directory. Then, we set it as our <body>\u2019s background-image, by including a JavaScript file with just the following line: \n\n$.backstretch(\"/img/winter.jpg\");\n\nWe also reset the subtle pattern to become the background for our container image. It would look much better transparent, so we can use this technique in GIMP to make it see-through:\n\n.container-narrow {\n background: url(/img/cream_dust_transparent.png) repeat 0 0;\n}\n\nWe also fiddle with the padding on body and .container-narrow a bit, and this is the result: \n\n\n\n(Aside: If this were a real site, I\u2019d want to buy images in multiple sizes and ensure that Backstretch chose the most appropriately sized image for our screen, perhaps using responsive images.)\n\nHow to find the effects that make a site interesting? I keep a set of bookmarks for interesting JavaScript and CSS effects I might want to use someday, from realistic shadows to animating grids. The JavaScript Weekly newsletter is a great source of ideas. \n\n8. Colour schemes\n\nWe\u2019re just about getting there \u2013 though we\u2019re probably past half an hour now \u2013 but that button and that menu still both look awfully Bootstrappy. \n\nReal sites, with real designers, have a colour palette, carefully chosen to harmonize and match the brand profile. For our purposes, we\u2019re just going to borrow some colours from the image. We use Gimp\u2019s colour picker tool to identify the hex values of the blue of the snow. Then we can use Color Scheme Designer to find contrasting, but complementary, colours. \n\nFinally, we use those colours for our central button. There are lots of tools to help us do this, such as Bootstrap Buttons. The new HTML is quite long, so I won\u2019t paste it all here, but you can find it in the CSS file. \n\nWe also reset the colour of the pills in the navigation menu, which is a bit easier: \n\n.nav-pills > .active > a, .nav-pills > .active > a:hover {\n background-color: #FF9473;\n}\n\nI\u2019m not sure if the result is great to be honest, but at least we\u2019ve lost those Bootstrap-blue buttons:\n\n\n\nAnother way to do it, if you didn\u2019t have an image to match, would be to borrow an attractive colour scheme. Colourlovers is a community where people create and share ready-made colour palettes. \n\nThe key thing is to find a palette with an open licence, so you can legitimately use it. Unfortunately, you can\u2019t search for palettes by licence type, but many do have open licences. Here\u2019s a popular palette with a CC-BY-SA licence that allows reuse with attribution. \n\nAs above, you can use the hex values from the palette in your custom CSS, and bask in the newly colourful results.\n\n9. Read on\n\nWith the above techniques, you can make a site that is starting to look slightly more professional, pretty quickly. \n\nIf you have the time to invest, it\u2019s well worth learning some design principles, if only so that design seems less intimidating and more like fun. As part of my design learning, I read a few introductory design books aimed at coders. 
The best I found was David Kadavy\u2019s Design for Hackers: Reverse-Engineering Beauty, which explains the basic principles behind choosing colours, fonts, typefaces and layout.\n\nIn the introduction to his book, David writes: \n\n\n\tNo group stands to gain more from design literacy than hackers do\u2026 The one subject that is exceedingly frustrating for hackers to try to learn is design. Hackers know that in order to compete against corporate behemoths with just a few lines of code, they need to have good, clear design, but the resources with which to learn design are simply hard to find.\n\n\nWell said. If you have half a day to invest, rather than half an hour, I recommend getting hold of David\u2019s book.\n\nAnd the journey is over. Perhaps that took slightly more than half an hour, but with practice, using the above techniques can become second nature. What useful tools have I missed? Designers, how would you improve on the above? I would love to know, so please give me your views in the comments.", "year": "2012", "author": "Anna Powell-Smith", "author_slug": "annapowellsmith", "published": "2012-12-16T00:00:00+00:00", "url": "https://24ways.org/2012/how-to-make-your-site-look-half-decent/", "topic": "design"} {"rowid": 75, "title": "A Harder-Working Class", "contents": "Class is only becoming more important. Focusing on its original definition as an attribute for grouping (or classifying) as well as linking HTML to CSS, recent front-end development practices are emphasizing class as a vessel for structured, modularized style packages. These patterns reduce the need for repetitive declarations that can seriously bloat file sizes, and instil human-readable understanding of how the interface, layout, and aesthetics are constructed.\n\nIn the next handful of paragraphs, we will look at how these emerging practices \u2013 such as object-oriented CSS and SMACSS \u2013 are pushing the relevance of class. We will also explore how HTML and CSS architecture can be further simplified, performance can be boosted, and CSS utility sharpened by combining class with the attribute selector.\n\nA primer on attribute selectors\n\nWhile attribute selectors were introduced in the CSS 2 spec, they are still considered rather exotic. These well-established and well-supported features give us vastly improved flexibility in targeting elements in CSS, and offer us opportunities for smarter markup. With an attribute selector, you can directly style an element based on any of its unique \u2013 or uniquely shared \u2013 attributes, without the need for an ID or extra classes. Unlike pseudo-classes, pseudo-elements, and other exciting features of CSS3, attribute selectors do not require any browser-specific syntax or prefix, and are even supported in Internet Explorer 7. \n\nFor example, say we want to target all anchor tags on a page that link to our homepage. Where otherwise we might need to manually identify and add classes to the HTML for these specific links, we could simply write:\n\n[href=index.html] { }\n\nThis selector reads: target every element that has an href attribute of \u201cindex.html\u201d. \n\nAttribute selectors are more faceted, though, as they also give us some very simple regular expression-like logic that helps further narrow (or widen) a selector\u2019s scope. In our previous example, what if we wanted to also give indicative styles to any anchor tag linking to an external site? 
With no way to know what the exact href value would be for every external link, we need to use an expression to match a common aspect of those links. In this case, we know that all external links need to start with \u201chttp\u201d, so we can use that as a hook:\n\n[href^=http] { }\n\nThe selector here reads: target every element that has an href attribute that begins with \u201chttp\u201d (which will also include \u201chttps\u201d). The ^= means \u201cstarts with\u201d. There are a few other simple expressions that give us a lot of flexibility in targeting elements, and I have found that a deep understanding of these and other selector types to be very useful.\n\nThe class-attribute selector\n\nBy matching classes with the attribute selector, CSS can be pushed to accomplish some exciting new feats. What I call a class-attribute selector combines the advantages of classes with attribute selectors by targeting the class attribute, rather than a specific class. Instead of selecting .urgent, you could select [class*=urgent]. The latter may seem like a more verbose way of accomplishing the former, but each would actually match two subtly different groups of elements.\n\nEric Meyer first explored the possibility of using classes with attribute selectors over a decade ago. While his interest in this technique mostly explored the different facets of the syntax, I have found that using class-attribute selectors can have distinct advantages over either using an attribute selector or a straightforward class selector.\n\nFirst, let\u2019s explore some of the subtleties of why we would target class before other attributes:\n\n\n\tClasses are ubiquitous. They have been supported since the HTML 4 spec was released in 1999. Newer attributes, such as the custom data attribute, have only recently begun to be adopted by browsers.\n\tClasses have multiple ways of being targeted. You can use the class selector or attribute selector (.classname or [class=classname]), allowing more flexible specificity than resorting to an ID or !important.\n\tClasses are already widely used, so adding more classes will usually require less markup than adding more attributes.\n\tClasses were designed to abstractly group and specify elements, making them the most appropriate attribute for styling using object-oriented methods (as we will learn in a moment).\n\n\nAlso, as Meyer pointed out, we can use the class-attribute selector to be more strict about class declarations. Of these two elements:\n\n<h2 class=\"very urgent\">\n\n<h2 class=\"urgent\">\n\n\u2026only the second h2 would be selected by [class=urgent], while .urgent would select both. The use of = matches any element with the exact class value of \u201curgent\u201d. Eric explores these nuances further in his series on attribute selectors, but perhaps more dramatic is the added power that class-attribute selectors can bring to our CSS.\n\nMore object-oriented, more scalable and modular\n\nNicole Sullivan has been pushing abstracted, object-oriented thinking in CSS development for years now. She has shared stacks of knowledge on how behemoth sites have seen impressive gains in maintenance overhead and CSS file sizes by leaning heavier on classes derived from common patterns. Jonathan Snook also speaks, writes and is genuinely passionate about improving our markup by using more stratified and modular class name conventions. With SMACSS, he shows this to be highly useful across sites \u2013 both complex and simple \u2013 that exhibit repeated design patterns. 
Sullivan and Snook both push the use of class for styling over other attributes, and many front-end developers are fast advocating such thinking as best practice.\n\nWith class-attribute selectors, we can further abstract our CSS, pushing its scalability. In his chapter on modules, Snook gives the example of a .pod class that might represent a certain set of styles. A .pod style set might be used in varying contexts, leading to CSS that might normally look like this:\n\n.pod { }\nform .pod { }\naside .pod { }\n\nAccording to Snook, we can make these styles more portable by targeting more verbose classes, rather than context:\n\n.pod { }\n.pod-form { }\n.pod-sidebar { }\n\n\u2026resulting in the following HTML:\n\n<div class=\"pod\">\n<div class=\"pod pod-form\">\n<div class=\"pod pod-sidebar\">\n\nThis divorces the <div>\u2019s styles from its context, making it applicable to any situation in which it is needed. The markup is clean and portable, and the classes are imbued with meaning as to what module they belong to. \n\nUsing class-attribute selectors, we can simplify this further:\n\n[class*=pod] { }\n.pod-form { }\n.pod-sidebar { }\n\nThe *= tells the browser to look for any element with a class attribute containing \u201cpod\u201d, so it matches \u201cpod\u201d, \u201cpod-form\u201d, \u201cpod-sidebar\u201d, etc. This allows only one class per element, resulting in simpler HTML:\n\n<div class=\"pod\">\n<div class=\"pod-form\">\n<div class=\"pod-sidebar\">\n\nWe could further abstract the concept of \u201cform\u201d and \u201csidebar\u201d adjustments if we knew that each of those alterations would always need the same treatment.\n\n/* Modules */\n[class*=pod] { }\n[class*=btn] { }\n\n/* Alterations */\n[class*=-form] { }\n[class*=-sidebar] { }\n\nIn this case, all elements with classes appended \u201c-form\u201d or \u201c-sidebar\u201d would be altered in the same manner, allowing the markup to stay simple:\n\n<form>\n <h2 class=\"pod-form\">\n <a class=\"btn-form\" href=\"#\">\n\n<aside>\n <h2 class=\"pod-sidebar\">\n <a class=\"btn-sidebar\" href=\"#\">\n\n50+ shades of specificity\n\nClasses are just powerful enough to override element selectors and default styling, but still leave room to be trumped by IDs and !important styles. This makes them more suitable for object-oriented patterns and helps avoid messy specificity issues that can not only be a pain for developers to maintain, but can also affect a site\u2019s performance. As Sullivan notes, \u201cIn almost every case, classes work well and have fewer unintended consequences than either IDs or element selectors\u201d. Proper use of specificity and cascade is crucial in building straightforward, efficient CSS.\n\nOne interesting aspect of attribute selectors is that they can be compounded for increasing levels of specificity. Attribute selectors are assigned a specificity level of ten, the same as class selectors, but both class and attribute selectors can be chained together, giving them more and more specificity with each link. 
Some examples:\n\n.box { } \n/* Specificity of 10 */\n\n.box.promo { } \n/* Specificity of 20 */\n\n[class*=box] { } \n/* Specificity of 10 */\n\n[class*=box][class*=promo] { } \n/* Specificity of 20 */\n\nYou can chain both types together, too:\n\n.box[class*=promo] { } \n/* Specificity of 20 */\n\nI was amused to find, though, that you can chain the exact same class and attribute selectors for infinite levels of specificity\n\n.box { } \n/* Specificity of 10 */\n\n.box.box { } \n/* Specificity of 20 */\n\n.box.box.box { } \n/* Specificity of 30 */\n\n[class*=box] { }\n/* Specificity of 10 */\n\n[class*=box][class*=box] { }\n/* Specificity of 20 */\n\n[class*=box][class*=box][class*=box] { }\n/* Specificity of 30 */\n\n.box[class*=box].box[class*=box] { } \n/* Specificity of 40 */\n\nTo override .box styles for promo, we wouldn\u2019t need to add an ID, change the order of .promo and .box in the CSS, or resort to an !important style. Granted, any issue that might need this fine level of specificity tweaking could probably be better solved with clever cascades, but having options never hurts.\n\nSmarter CSS\n\nOne of the most powerful aspects of the class-attribute selector is its ability to expand the simple logic found in CSS. When developing Gridset (an online tool for building grids and outputting them as CSS), I realized that with the right class name conventions, class-attribute selectors would allow the CSS to be smart enough to automatically adjust for column offsets without the need for extra classes. This imbued the CSS output with logic that other frameworks lacked, and makes a developer\u2019s job much easier. \n\nSay you need an element that spans column five (c5) to column six (c6) on your grid, and is preceded by an element spanning column one (c1) to column three (c3). The CSS can anticipate such a scenario:\n\n.c1-c3 + .c5-c6 {\n margin-left: 25%; /* \u2026or the width of column four plus two gutter widths */\n}\n\n\u2026but to accommodate all of the margin offsets that could span that same gap, we would need to write a rather protracted list for just a six column grid:\n\n.c1-c3 + .c5-c6,\n.c1-c3 + .c5,\n.c2-c3 + .c5-c6,\n.c2-c3 + .c5,\n.c3 + .c5-c6,\n.c3 + .c5 {\n margin-left: 25%; \n}\n\nNow imagine how the verbosity compounds when we repeat this type of declaration for every possible margin in a grid. The more columns added to the grid, the longer this selector list would get, too, making the CSS harder for the developer to maintain and slowing the load time. Using class-attribute selectors, though, this can be much simpler:\n\n[class*=c3] + [class*=c5] {\n margin-left: 25%;\n}\n\nI\u2019ve detailed how we extract as much logic as possible from as little CSS as needed on the Gridset blog.\n\nMore flexible selectors\n\nIn a recent project, I was working with Drupal-generated classes to change styles for certain special pages on a site. Without being able to change the code base, I was left trying to find some specific aspect of the generated HTML to target. I noticed that every special page was given a prefixed class, unique to the page, resulting in CSS like this:\n\n.specialpage-about,\n.specialpage-contact,\n.specialpage-info,\n\u2026\n\n\u2026and the list kept growing with each new special page. Such bloat would lead to problems down the line, and add development overhead to editorial decisions, which was a situation we were trying to avoid. 
I was easily able to fix this, though, with a concise class-attribute selector:\n\n[class*=specialpage-]\n\nThe CSS was now flexible enough to accommodate both the editorial needs of the client, and the development restrictions of the CMS.\n\nSelector performance\n\nAs Snook tells us in his chapter on Selector Performance, selectors are read by the browser from right to left, matching every element that adheres to each rule (or part of the selector). The more specific we can make the right-most rules \u2013 and every other part of your selectors \u2013 the more performant your CSS will be. So this selector:\n\n.home-page .promo .main-header\n\n\u2026would be more performant than:\n\n.home-page div header\n\n\u2026because there are likely many more header and div elements on the page, but not so many elements with those specific classes.\n\nNow, the class-attribute selector could be more general than a class selector, but not by much. I ran numerous tests based on the work of Steve Souders (and a few others) to test a class-attribute selector against a normal class selector. Given that Javascript will freeze during style rendering, I created a script that will add, then remove, a stylesheet on a page 5000 times, and measure only the time that elapses during the rendering freeze. The script runs four tests, essentially: one where a class selector and class-attribute Selector match a single element, and one they match multiple elements on the page.\n\nAfter running the test over 100 times and averaging the results, I have not seen a significant difference in rendering times. (As of this writing, the class-attribute selector has been 0.398% slower on average.) View the results here.\n\nGiven the sheer amount of bytes potentially saved by reducing selector lists, though, I am confident class-attribute selectors could shorten load times on larger sites and, at the very least, save precious development time.\n\nConclusion\n\nWith its flexibility and broad remit, class has at times been derided as too lenient, allowing CMSes and lazy developers to fill its values with presentational hacks or verbose gibberish. There have even been calls for an early retirement. Class continues, though, to be one of our most crucial tools.\n\nFront-end developers are rightfully eager to expand production abilities through innovations such as Sass or LESS, but this should not preclude us from honing the tools we already know as well. Every technique demonstrated in this article was achievable over a decade ago and most of the same thinking could be applied to IDs, rels, or any other attribute (though the reasons listed above give class an edge). The recent advent of methods such as object-oriented CSS and SMACSS shows there is still much room left to expand what simple HTML and CSS can accomplish. Progress may not always be found in the innovation of our tools, but through sharpening our understanding of them.", "year": "2012", "author": "Nathan Ford", "author_slug": "nathanford", "published": "2012-12-15T00:00:00+00:00", "url": "https://24ways.org/2012/a-harder-working-class/", "topic": "code"} {"rowid": 94, "title": "Using Questionnaires for Design Research", "contents": "How do you ask the right questions? \n\nIn this article, I share a bunch of tips and practical advice on how to write and use your own surveys for design research.\n\nI\u2019m an audience researcher \u2013 I\u2019m not a designer or developer. 
I\u2019ve spent much of the last thirteen years working with audience data both in creative agencies and on the client-side. I\u2019m also a member of the Market Research Society. I run user surveys and undertake user research for our clients at the design studio I run with my husband \u2013 Mark Boulton Design.\n\nSo let\u2019s get started!\n\nWho are you designing for?\n\nGood web designers and developers appreciate the importance of understanding the audience they are designing or building a website or app for. I\u2019m assuming that because you are reading a quality publication like 24 ways that you fall into this category, and so I won\u2019t begin this article with a lecture.\n\nSuffice it to say, it\u2019s a good idea to involve research of some sort during the life cycle of every project you undertake. I don\u2019t just mean visual or competitor research, which of course is also very important. I mean looking at or finding your own audience or user data. Whether that be auditing existing data or research available from the client, carrying out user interviews, A/B testing, or conducting a simple questionnaire with users, any research is better than none. If you create personas as a design tool, they should always be based on research, so you will need to have plenty of data to hand for that.\n\nWhere do I start?\n\nIn the initial kick-off stages of a project, it\u2019s a good idea to start by asking your client (when working in-house you still have a client \u2013 you might even be the client) what research or audience data they have available. Some will have loads \u2013 analytics, surveys, focus groups and insights \u2013 from talking to customers. Some won\u2019t have much at all and you\u2019ll be hard pressed to find out much about the audience. It\u2019s best to review existing research first without rushing headlong into doing new research. Get a picture of what the data tells you and perhaps get this into a document \u2013 who, what, why and how are they using this website or app? What gaps are there in existing research? What else do you need to know? Then you can decide what else you need to do to plug these gaps. Think about the information first before deciding on the methodology. The rest of my article talks mostly about running self-completion online surveys. You can of course do face-to-face surveys, self-completion written questionnaires or phone polls, but I won\u2019t cover those here. That\u2019s for another article.\n\nWhy run a survey?\n\nSurveys are great for getting a broad picture of your audience. As long as they are designed carefully, you can create an overview of them, how they use the site and their opinions of it, with an idea of which parts of this picture are more important than others. By using a limited amount of open-ended questions, you can also get some more qualitative feedback or insights on your website or app. The clients we work with surprisingly often don\u2019t have much in the way of audience research available, even basic analytics, so I will often suggest running a short survey, just to create a picture of who is out there.\n\nOK, what should I do first?\n\nBefore you rush into writing questions, stop and think about what you\u2019re trying to find out. Remember being in school when you studied science and you had to propose a hypothesis? This could be a starting point \u2013 something to prove or disprove. Or, even better, write a research brief. 
It doesn\u2019t have to be long; it can be just a sentence that encapsulates what you\u2019re trying to do, like a good creative brief. For the purposes of this article, I created a short, slightly silly survey on Christmas and beliefs in Father Christmas.\n\nMy research brief was:\n\n\n\tTo find out more about people\u2019s beliefs about Father Christmas and their experiences of Christmas.\n\n\nInevitably, as you start thinking of what questions to ask, you will find that you go off at tangents or your client will want you to add in everything but the kitchen sink. In order for your questionnaire not to get too long and lose focus, you could write lists of what it is and what it\u2019s not. This is how I\u2019d apply it to my Christmas questionnaire example:\n\nWhat it is about\n\n\n\tHow people communicate with Father Christmas\n\tIf someone\u2019s background has affected their likelihood of believing in Father Christmas\n\n\nWhat it is not about\n\n\n\tWhat colour to change Father Christmas\u2019s coat to\n\tFather Christmas\u2019s elves\n\n\nLet\u2019s get down to business: the questions. \n\nKinds of questions\n\nThere are two basic kinds of questions: open-ended and closed. Closed questions limit answers by giving the respondent a number of predefined lists of options to choose from. Typically, these are multiple-choice questions with a list of responses. You can either select one or tick all that apply. Another useful type of closed question I often use is a rating scale, where a respondent can assess a situation along a continuum of values. These can also be useful as a measure of advocacy or strength of feeling about something. There is a standard measure called the Net Promoter score, which measures how likely someone is to recommend your product or service to a friend or acquaintance. It\u2019s a useful benchmark as you can compare your scores to others in a similar sector.\n\n\n\nOpen-ended questions often take the form of a statement which requires a response. Generally, respondents are given a text box to fill in. It\u2019s useful to limit this in some way so that people have an idea of how long the expected response should be; for example, a single line for an email address (Q18), or a larger text area for a longer response (Q6).\n\nIf you plan to send your survey out to a large number of people, I would suggest using mostly closed questions, unless you want to spend a long time wading through comments and hand-coded responses. I\u2019d always advise adding a general request at the end of a survey (\u2018Is there anything else you\u2019d like to tell us?\u2019). You\u2019d be surprised how many interesting and insightful comments people will add.\n\nThere are times when it\u2019s better to provide an open-ended text box rather than a predefined list that makes assumptions about your audience\u2019s groupings. For example, we ran a short survey for our Gridset beta testers and rather than assume we knew who they were, we decided to ask an open-ended question: \u201cWhat is your current job title?\u201d\n\n\n\nThe analysis took quite a bit longer than responses using a predefined list, but it meant that we were able to make sure we didn\u2019t miss anyone. And next time we run a survey for Gridset, I can use the responses gathered from this survey to help create a predefined list to make analysis easier.\n\nWhat to ask\n\nThe questions to ask depend on what you want to know, but your brief and lists of what the survey is and isn\u2019t should help here.
I always ask the design team and client to give me ideas of what they are interested in finding out, and combine this with a mix of new and standard questions I have used in other surveys. I find Survey Monkey\u2019s question bank a very useful source of example questions and help with tricky wording.\n\nI always include simple demographics so I can compare my results to the population at large or internet users as a whole \u2013 just going on age, gender and location can be quite illuminating. For example, with the Christmas survey, I can see that the respondents were typical of the online design and dev community, mainly young and male.\n\nIf appropriate, I add questions on disability, ethnic background, religion and community of interest. Questions about ethnicity, religion, sexual preference, disability and other sensitive subjects can feel awkward and difficult to ask. This is not a good reason to not ask them. Perhaps you\u2019re working for a public sector client, like a local council, so it\u2019s likely you will need to consider groups of people who maybe under-represented, who may have differing views to others, or who you need to look at specifically as a subset.\n\nHow to ask\n\nAlthough they may seem clunky and wordy, it\u2019s often best to use the census wording or professional body wording for such demographic questions. For example, I used the UK census 2011 wording for Wales on my Christmas questionnaire in my questions on religion [PDF] (Q16) and ethnicity [PDF] (Q17). I had to adapt them slightly for the Survey Monkey format \u2013 self-completion online, rather than pen and paper \u2013 which is why \u201cWhite Welsh\u201d came up as the first option for the ethnicity question. For similar questions for US audiences, try the Census Bureau website.\n\nWhen conducting a survey for a project that has a global audience, you need to consider who your primary audience is. For example, I recently created a questionnaire for a global news website. A large proportion of its audience is based in the USA, so I was careful to word things in a way Americans would find familiar. I used the US ethnic background census question wording and options, and looked at data for US competitor news websites to decide which to include.\n\nYou should also consider people whose first language isn\u2019t English. Working as an audience researcher at BBC Wales, every survey we did was bilingual. I\u2019ve also recently run a user survey in Arabic using Google Forms. During this project, we found that while Survey Monkey supports different languages, including Arabic, the text ran left to right with no option to change it to right to left \u2013 an essential when it comes to reading Arabic! If research is a deliverable in a client project, and you know you\u2019ll need to conduct it in a foreign language, always build in extra time for translation at both the questionnaire design and analysis stages. Make sure you also allow for plenty of checks. In this case we had to change to Google Forms after initially creating our survey with Survey Monkey to get the functionality we needed.\n\nLook and feel\n\nThink about the survey as another way your audience will experience your brand. Take care getting the tone of voice right. There are plenty of great articles and books out there about tone of voice \u2013 try Letting Go of the Words by Ginny Redish for starters, or Brand Language by Liz Doig. The basic rule of thumb is to sound like a human, and use clear and friendly language. 
If, like me, you are lucky enough to work with journalists or copy editors, you should ask for their help, particularly in the preamble, linking text and closing statements. I find it helpful to break my questions down into sections and to have a page for each. I then have an introductory piece of text for each section to guide the respondent through the survey.\n\nYou should also make sure you check with your designers how your survey looks \u2013 use a company logo and branding, and make the typography legible. Many survey apps like Survey Monkey and Google Forms have a progress bar. This is helpful for users to see how far through your survey they are. I generally time the survey and give an indication in the preamble: \u201cThis survey will only take five minutes of your time.\u201d\n\nYou also need to think about how you will technically serve the questionnaire. For example, will it be via email, social media, a pop-up or lightbox on your website, or (not recommended but possible) in an ad space?\n\nEthical considerations\n\nSomething else to think about: any local laws that govern how you collect and store data, such as the Data Protection Act in the UK. As a member of the Market Research Society, I am also obliged to consider its guidelines, but even if you\u2019re not, it\u2019s always a good idea to deal with personal data ethically.\n\nIf you collect personal data that can identify individuals, you must ask their permission to share it with others, and store it securely for no longer than two years. If you want to contact people afterwards, you must ask for their permission. If you ask for email addresses, as I did in question 18, you have a ready-made sample for a further survey, interviews or focus groups. Remember, you shouldn\u2019t survey people under sixteen years old without the permission of their parents or legal guardians, so if you know your website is likely to be used by children, you must ask for verification of age early on, and your survey should close if someone answers that they are under sixteen. The ESOMAR guidelines for online research [PDF] are well worth reading, as they go into detail about such issues, as well as privacy guidelines \u2013 using cookies, storing IP addresses, and so on.\n\nTools\n\nUnless you work in-house and have proprietary software, or at a market research agency and you\u2019re using specialist software such as Snap or IBM SPSS Statistics (previously just SPSS), you will need to use a good tool to run your survey, collect your responses and, ideally, help with the analysis. I like Survey Monkey because of the question bank and analysis tools. The software graphs your results and does simple cross-tabbing and filtering. What this means is you can slice the data in more interesting ways and delve a bit deeper. For example, in the Gridset questionnaire I mentioned earlier, I cross-tabbed responses to questions against whether a person worked in-house, for an agency or as a freelancer. \n\nOther well known online tools that I also use from time to time are Wufoo and Google Forms. Smart Surveys is a similar service to Survey Monkey and it\u2019s used by many leading brands in the UK. Snap Surveys, mentioned above, is a well-established player in the market research scene, used a lot for face-to-face surveys and also on tablets and smartphones.\n\nAnalysis\n\nAnalysis is often overlooked but is as important as the design of the questionnaire.
Don\u2019t just rely on looking at the summary report and charts generated as standard by your form or survey software. Spend time with your data. Spend at least a week now and then if you can, looking at the data. Keep coming back to it and tweaking or cutting it a different way to see if there are any different pictures. Slice it up in different ways to reveal new insights. Here is the data from my dummy survey (apart from the open-ended responses). \n\nFor open-ended questions, you can analyse collaboratively. Print and cut out the open-ended responses and do a cluster analysis or affinity sort with a colleague. \n\n\n\nDiscussing the comments helps you to understand them. You will also find the design team are more likely to buy into the research as they have uncovered the insights for themselves. Always make sure to treat open-ended responses sensitively and don\u2019t share anything publicly in a way that identifies the respondent.\n\nWrite a report\n\nNever hand over a dataset to your client without a summary of the findings. Data on its own can be skewed to suit the reader\u2019s needs, and not everyone is able to find the story in a dataset. Even if it\u2019s not a deliverable, it\u2019s always a good idea to capture your findings in a report of some sort. Use graphs sparingly to show really interesting things or to aid the reader\u2019s understanding. I have written a quick dummy report using the data from the Christmas questionnaire so you can see how it\u2019s done.\n\nI highly recommend Brian Suda\u2019s book A Practical Guide to Designing with Data for tips on how to present data effectively, but that\u2019s a subject that benefits a whole article (indeed book) in itself. \n\nI am not a designer. I am a researcher, so I never write design recommendations in a report unless they have been talked about or suggested by the designers I work with. More often, I write up the results and we talk about them and what impact they have on the project or design. Often they lead to more questions or further research.\n\nSo that\u2019s it: a brief introduction to using questionnaires for design research. Here\u2019s a quick summary to remind you what I have talked about, and a list of resources if you\u2019re interested in reading further.\n\nTop 10 things to remember when using questionnaires for design research:\n\n\n\tStart by auditing existing research to identify gaps in data.\n\tWrite a research brief. Work out exactly what you\u2019re trying to find out \u2013 what is the survey about, and what is it not about?\n\tThe two basic kinds of questions are open-ended and closed.\n\tClosed questions limit responses by giving the respondent a number of predefined lists of options to choose from (multiple choice, rating scales, and so on).\n\tOpen-ended questions are often in the form of a statement which requires a response. Always ask one at the end of a questionnaire.\n\tAlways include simple demographics to enable you to compare your sample against the population in general.\n\tIt\u2019s best to use official census or professional body wording for questions on ethnicity, disability and religion.\n\tBe sure to think carefully about your tone of voice and the look of your questionnaire.\n\tPay attention to guidelines and laws on storing personal data, cookies and privacy.\n\tInvest plenty of time in analysis and report writing. 
Don\u2019t just look at the obvious \u2013 dig deep for more interesting insights.\n\n\nSome useful resources for further study\n\nOnline research\n\n\n\tDesign Research: Methods and Perspectives edited by Brenda Laurel\n\tOnline Research Essentials by Brenda Russell and John Purcell\n\tHandbook of Online and Social Media Research by Ray Poynter\n\tESOMAR guidelines for online research [PDF]\n\tOnline questionnaires\n\n\nMarket research books on questionnaire design\n\n\n\tUsing Questionnaires in Small-Scale Research: A Beginner\u2019s Guide by Pamela Munn\n\tQuestionnaire Design by A N Oppenheim\n\tDeveloping a Questionnaire by Bill Gillham", "year": "2012", "author": "Emma Boulton", "author_slug": "emmaboulton", "published": "2012-12-14T00:00:00+00:00", "url": "https://24ways.org/2012/using-questionnaires-for-design-research/", "topic": "business"} {"rowid": 92, "title": "Redesigning the Media Query", "contents": "Responsive web design is showing us that designing content is more important than designing containers. But if you\u2019ve given RWD a serious try, you know that shifting your focus from the container is surprisingly hard to do. There are many factors and\ninstincts working against you, and one culprit is a perpetrator you\u2019d least suspect.\n\nThe media query is the ringmaster of responsive design. It lets us establish the rules of the game and gives us what we need most: control. However, like some kind of evil double agent, the media query is actually working against you.\n\nIts very nature diverts your attention away from content and forces you to focus on the container.\n\nThe very act of choosing a media query value means choosing a screen size.\n\nLook at the history of the media query\u2014it\u2019s always been about the container. Values like screen, print, handheld and tv don\u2019t have anything to do with content. The modern media query lets us choose screen dimensions, which is great because it makes RWD possible. But it\u2019s still the act of choosing something that is completely unpredictable.\n\nContent should dictate our breakpoints, not the container. In order to get our focus back to the only thing that matters, we need a reengineered media query\u2014one that frees us from thinking about screen dimensions. A media query that works for your content, not the window. Fortunately, Sass 3.2 is ready and willing to take on this challenge.\n\nThinking in Columns\n\nFluid grids never clicked for me. I feel so disoriented and confused by their squishiness. Responsive design demands their use though, right?\n\nI was ready to surrender until I found a grid that turned my world upright again. The Frameless Grid by Joni Korpi demonstrates that column and gutter sizes can stay fixed. As the screen size changes, you simply add or remove columns to accommodate. This made sense to me and armed with this concept I was able to give Sass the first component it needs to rewrite the media query: fixed column and gutter size variables.\n\n$grid-column: 60px;\n$grid-gutter: 20px;\n\nWe\u2019re going to want some resolution independence too, so let\u2019s create a function that converts those nasty pixel values into ems.\n\n@function em($px, $base: $base-font-size) {\n\t@return ($px / $base) * 1em;\n}\n\nWe now have the components needed to figure out the width of multiple columns in ems. 
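To make the numbers concrete (assuming a $base-font-size of 16px, which is not set explicitly here), a single column plus its gutter converts like this:\n\n// 60px column + 20px gutter = 80px\n// em(80px) = (80px / 16px) * 1em = 5em\n$unit: em($grid-column + $grid-gutter); // 5em per column-and-gutter ($unit is only for illustration)\n\nSo every column we add contributes 5em of width, which is the figure the next function multiplies up.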
Let\u2019s put them together in a function that will take any number of columns and return the fixed width value of their size.\n\n@function fixed($col) {\n\t@return $col * em($grid-column + $grid-gutter)\n}\n\nWith the math in place we can now write a mixin that takes a column count as a parameter, then generates the perfect media query necessary to fit that number of columns on the screen. We can also build in some left and right margin for our layout by adding an additional gutter value (remembering that we already have one gutter built into our fixed function).\n\n@mixin breakpoint($min) {\n\t@media (min-width: fixed($min) + em($grid-gutter)) {\n\t\t@content\n\t}\n}\n\nAnd, just like that, we\u2019ve rewritten the media query. Instead of picking a minimum screen size for our layout, we can simply determine the number of columns needed. Let\u2019s add a wrapper class so that we can center our content on the screen.\n\n@mixin breakpoint($min) {\n @media (min-width: fixed($min) + em($grid-gutter)) {\n\t.wrapper {\n\t\twidth: fixed($min) - em($grid-gutter);\n\t\tmargin-left: auto; margin-right: auto;\n\t}\n\t@content\n }\n}\n\nDesigning content with a column count gives us nice, easy, whole numbers to work with. Sizing content, sidebars or widgets is now as simple as specifying a single-digit number.\n\n@include breakpoint(8) {\n\t.main { width: fixed(5); }\n\t.sidebar { width: fixed(3); }\n}\n\nThose four lines of Sass just created a responsive layout for us. When the screen is big enough to fit eight columns, it will trigger a fixed width layout. And give widths to our main content and sidebar. The following is the outputted CSS\u2026\n\n@media (min-width: 41.25em) {\n .wrapper {\n width: 38.75em;\n margin-left: auto; margin-right: auto;\n }\n .main { width: 25em; }\n .sidebar { width: 15em; }\n}\n\nDemo\n\nI\u2019ve created a Codepen demo that demonstrates what we\u2019ve covered so far. I\u2019ve added to the demo some grid classes based on Griddle by Nicolas Gallagher to create a floatless layout. I\u2019ve also added a CSS gradient overlay to help you visualize columns. Try changing the column variable sizes or the breakpoint includes to see how the layout reacts to different screen sizes.\n\nResponsive Images\n\nResponsive images are a serious problem, but I\u2019m excited to see the community talk so passionately about a solution. Now, there are some excellent stopgaps while we wait for something official, but these solutions require you to mirror your breakpoints in JavaScript or HTML. This poses a serious problem for my Sass-generated media queries, because I have no idea what the real values of my breakpoints are anymore. For responsive images to work, JavaScript needs to recognize which media query is active so that proper images can be loaded for that layout.\n\nWhat I need is a way to label my breakpoints. Fortunately, people much smarter than I have figured this out. Jeremy Keith devised a labeling method by using CSS-generated content as the storage method for breakpoint labels. We can use this technique in our breakpoint mixin by passing a label as another argument.\n\n@include breakpoint(8, 'desktop') { /* styles */ }\n\nSass can take that label and use it when writing the corresponding media query. 
We just need to slightly modify our breakpoint mixin.\n\n@mixin breakpoint($min, $label) {\n @media (min-width: fixed($min) + em($grid-gutter)) {\n\n // label our mq with CSS generated content\n\tbody::before { content: $label; display: none; }\n\n\t.wrapper {\n\t\twidth: fixed($min) - em($grid-gutter);\n\t\tmargin-left: auto; margin-right: auto;\n\t}\n\t@content\n }\n}\n\nThis allows us to label our breakpoints with a user-friendly string. Now that our media queries are defined and labeled, we just need JavaScript to step in and read which label is active.\n\n// get css generated label for active media query\nvar label = getComputedStyle(document.body, '::before')['content'];\n\nJavaScript now knows which layout is active by reading the label in the current media query\u2014we just need to match that label to an image. I prefer to store references to different image sizes as data attributes on my image tag.\n\n<img class=\"responsive-image\" data-mobile=\"mobile.jpg\" data-desktop=\"desktop.jpg\" />\n<noscript><img src=\"desktop.jpg\" /></noscript>\n\nThese data attributes have names that match the labels set in my CSS. So while there is some duplication going on, setting a keyword like \u2018tablet\u2019 in two places is much easier than hardcoding media query values. With matching labels in CSS and HTML our script can marry the two and load the right sized image for our layout.\n\n// get css generated label for active media query\nvar label = getComputedStyle(document.body, '::before')['content'];\n\n// select image\nvar $image = $('.responsive-image');\n\n// create source from data attribute\n$image.attr('src', $image.data(label));\n\nDemo\n\nWith some slight additions to our previous Codepen demo you can see this responsive image technique in action. While the above JavaScript will work, it is not nearly robust enough for production, so the demo uses a jQuery plugin that can accommodate multiple images, reloading on screen resize and fallbacks if something doesn\u2019t match up.\n\nCreating a Framework\n\nThis media query mixin and responsive image JavaScript are the centerpiece of a front-end framework I use to develop websites. It\u2019s a fluid, mobile-first foundation that uses the breakpoint mixin to structure fixed-width layouts for tablet and desktop. Significant effort was focused on making this framework completely cross-browser. For example, one of the problems with using media queries is that essential desktop structure code ends up being hidden from legacy Internet Explorer. Respond.js is an excellent polyfill, but if you\u2019re comfortable serving a single desktop layout to older IE, we don\u2019t need JavaScript. We simply need to capture layout code outside of a media query and sandbox it under an IE-only class name.\n\n// set IE fallback layout to 8 columns\n$ie-support: 8;\n\n// inside of our breakpoint mixin (but outside the media query)\n@if ($ie-support and $min <= $ie-support) {\n\t.lt-ie9 { @content; }\n}\n\nPerspective Regained\n\nThinking in columns means you are thinking about content layout. How big of a screen do you need for 12 columns? Who cares? Having Sass write media queries means you can use intuitive numbers for content layout. A fixed grid means more layout control and fewer edge cases to test than a fluid grid. Using CSS labels for activating responsive images means you don\u2019t have to duplicate breakpoints across separations of concern.
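Pulling the fragments above together (the grid variables, the em() and fixed() functions, the labelled media query and the IE fallback), a condensed sketch of the assembled mixin might look something like this. It is an illustrative assembly under those assumptions, not the actual source of the framework:\n\n// illustrative assembly of the fragments above\n// grid settings, as defined earlier\n$base-font-size: 16px;\n$grid-column: 60px;\n$grid-gutter: 20px;\n$ie-support: 8; // column count served to legacy IE\n\n@function em($px, $base: $base-font-size) {\n\t@return ($px / $base) * 1em;\n}\n\n@function fixed($col) {\n\t@return $col * em($grid-column + $grid-gutter);\n}\n\n@mixin breakpoint($min, $label) {\n\t// emit the layout for old IE, outside of any media query\n\t@if ($ie-support and $min <= $ie-support) {\n\t\t.lt-ie9 { @content; }\n\t}\n\t@media (min-width: fixed($min) + em($grid-gutter)) {\n\t\t// label the active media query so JavaScript can read it\n\t\tbody::before { content: $label; display: none; }\n\t\t.wrapper {\n\t\t\twidth: fixed($min) - em($grid-gutter);\n\t\t\tmargin-left: auto; margin-right: auto;\n\t\t}\n\t\t@content;\n\t}\n}\n\nCalling @include breakpoint(8, 'desktop') { ... } against this sketch would output the 41.25em media query shown earlier, plus a copy of the content block scoped under .lt-ie9 for browsers that ignore media queries.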
\n\nIt\u2019s a harmonious blend of approaches that gives us something we need\u2014responsive design that feels intuitive. And design that, from the very outset, focuses on what matters most. Just like our kindergarten teachers taught us: It\u2019s what\u2019s inside that counts.", "year": "2012", "author": "Les James", "author_slug": "lesjames", "published": "2012-12-13T00:00:00+00:00", "url": "https://24ways.org/2012/redesigning-the-media-query/", "topic": "code"} {"rowid": 93, "title": "Design Systems", "contents": "The most important part of responsive web design is that, no matter what the viewport width, the content is accessible in an optimum display. The best responsive designs are those that allow you to go from one optimised display to another, but with the feeling that these experiences are part of a greater product whole.\n\nResponsive design: where we\u2019ve been going wrong\n\nResponsive web design was a shock to my web designer system. Those of us who had already been designing sites for mobile probably had the biggest leap to make. We might have been detecting user agents in order to deliver a mobile-specific site, or using the slightly more familiar Bushido technique to deliver sites optimised for device type and viewport size, but either way our focus was on devices. A site was optimised for either a mobile phone or a desktop.\n\nResponsive web design brought us back to pre-table layout fluid sites that expanded or contracted to fit the viewport. This was a big difference to get our heads around when we were so used to designing for fixed-width layouts. Suddenly, an element could be any width or, at least, we needed to consider its maximum and minimum widths. Pixel perfection, while pretty, became wholly unrealistic, and a whole load of designers who prided themselves in detailed and precise designs got a bit scared.\n\nHanging on to our previous processes and typical deliverables led us to continue to optimise our sites for particular devices and provide pixel-perfect mockups for those device widths.\n\nWith all this we were concentrating on devices, not content, deliverables and not process, and making assumptions about users and their devices based on nothing but the width of the viewport.\n\nI don\u2019t think this is a crime, I think it was inevitable.\n\nWe can be up to date with our principles and ideals, but it\u2019s never as easy in practice. That\u2019s why it\u2019s more important than ever to share our successful techniques and processes. Let\u2019s drag each other into modern web design.\n\nDesign systems: the principles\n\nWhat are design systems?\n\nA visual design system is built out of the core components of typography, layout, shape or form, and colour. When considering the design of a whole product, a design system should also include patterns in user flow, content strategy, copy, and tone of voice. These concepts, design decisions or rules, created around the core components are used consistently across your product to create a cohesive feel, whether it\u2019s from one element to another, page to page, or viewport width to viewport width.\n\nResponsive design is one of the most important considerations in the components of a design system. 
For each component, you must decide what will unite the design across the viewports to maintain that consistent feel, and what parts of the design will differentiate in order to provide a flexible and optimal experience for different viewport sizes.\n\nComponents you might keep the same across viewports\n\n\n\ttypeface\n\tbase unit\n\tcolour\n\tshape/form\n\n\nComponents you might differentiate across viewports\n\n\n\tgrids\n\tlayout\n\tfont size\n\tmeasure (line length)\n\tleading (line height)\n\n\nContent: it must always be the same\n\nThe focus of a design system is the optimum display of content. As Mark Boulton put it, designing \u201ccontent out, not canvas in.\u201d Chris Armstrong puts the emphasis on not designing for viewports but for content \u2013 \u201cwe need to build on what we do know: content.\u201d In order to do this, we must share the same content across all devices and focus on how best to display and represent content through design system components.\n\nThe practical: core visual components\n\nTypography first\n\nWhen you work with a lot of text content, typography is the easiest way to set the visual tone of the design across all viewport widths. It\u2019s likely that you\u2019ll choose one or two typefaces to use across the whole system, but you might change the most legible font size, balanced with the most comfortable measure, as the viewport width changes.\n\nWhere typography meets layout\n\nThe unit on which you choose to base the grid and layout design, font sizes and leading could be based on the typeface, an optimal reading size, or something more arbitrary. Sometimes I\u2019ll choose a unit based on multiples of ten because it makes the maths in the CSS easier. Tim Brown suggests trying a modular scale. Chris Armstrong suggests basing it on your ideal measure, or the width of a fixed item of content such as an ad unit.\n\nGrids and layouts\n\nSensible grid design can be a flexible yet solid foundation for your design system layout component. But you must be wary in responsive design that a grid might not work across all widths: even four columns could make for very cramped content and one-word measures on smaller screens.\n\nMaybe the grid columns are something you differentiate across widths, but you can keep the concept of the grid consistent. If the content has blocks in groups of three, you might decide on a three-column grid which folds down to one column for narrow viewports. If the grid focuses on the idea of symmetry and has a four-column grid on larger viewports, it might fold down to two columns for narrower viewports. These consistencies may seem subtle, not at all obvious to many except the designer, but it\u2019s all these little constants and patterns across the whole of the design system that makes design decisions easier to make (as they adhere to the guiding concepts of your system), and give the product a uniform feel no matter what the device.\n\nShape or form\n\nThe shape or form components are concepts you already use in fixed-width web design for a strong, consistent look and feel. \n\nSince CSS border-radius became widely supported by browsers, a lot of designs feature circle themes. These are very distinctive and can be used across viewport widths giving them the same united feel, even if they\u2019re not used in the same way. This could also apply to border styles, consistent shadows and any number of decorative details and textures. 
These are the elements that make up the shape or form of a design system.\n\nColour\n\nColour is the most basic way to reinforce a brand and unite experiences across viewports. The same hex colour used system-wide is instantly recognisable, no matter what the viewport width.\n\nThe process\n\nWhile using a design system isn\u2019t necessarily attached to any particular process, it does lend itself to some process ideals.\n\nDetaching design considerations from viewport widths\n\nA design system allows you to focus separately on the components that make up the system, disconnecting the look and feel from the layout. This helps prevent us getting stuck in the rut of the Apple breakpoints (brilliantly coined by Simon Foster) of mobile, tablet and desktop. It also forces us to design for variation in viewport experiences side by side, not one after the other.\n\nDesign in the browser\n\nI can\u2019t start off designing in the browser \u2013 it just doesn\u2019t seem to bring out my creative side (and I\u2019m incredibly envious of you if you can; I just have to start on paper) \u2013 but static mock-ups aren\u2019t the only alternative. Style guides and style tiles are perfect for expressing the concepts of your design system. Pattern libraries could also work well.\n\nMock-ups and breakpoints\n\nAt some point, whether it\u2019s to test your system ideas, or because a client needs help visualising how your system might work, you may end up producing some static mock-ups. It\u2019s not the end of the world, but you must ensure that these consider all the viewports, not just those of the iDevices, or even the devices currently on the market. You need to decide the breakpoints where the states of your design change. The blocks within your content will always have optimum points for their display (based on their hierarchy, density, width, or type of interaction) and so your breakpoints should be based around these points.\n\nThese are probably the ideal points at which to produce static mockups; treat them as snapshots. They\u2019re not necessarily mock-ups, so much as a way of capturing how your design system would be interpreted when frozen at that particular viewport width.\n\nThe future\n\nCreating design systems will give us the flexibility we need for working with the unknown devices of the future. It may be a change in process, but it shouldn\u2019t be too much of a difference in thinking. The pioneers in responsive design have a hard job. Some of these problems may have already been solved in other technologies or industries, but it\u2019s up to the pioneers to find those connections and help us formulate solutions and standards that will make responsive design the best it can possibly be. We need to keep experimenting and communicating, particularly in the area of design, as good user experiences are the true sign of whether our products are a success.", "year": "2012", "author": "Laura Kalbag", "author_slug": "laurakalbag", "published": "2012-12-12T00:00:00+00:00", "url": "https://24ways.org/2012/design-systems/", "topic": "design"} {"rowid": 79, "title": "Responsive Images: What We Thought We Needed", "contents": "If you were to read a web designer\u2019s Christmas wish list, it would likely include a solution for displaying images responsively. 
For those concerned about users downloading unnecessary image data, or serving images that look blurry on high resolution displays, finding a solution has become a frustrating quest.\n\nHaving experimented with complex and sometimes devilish hacks, consensus is forming around defining new standards that could solve this problem. Two approaches have emerged.\n\nThe <picture> element markup pattern was proposed by Mat Marquis and is now being developed by the Responsive Images Community Group. By providing a means of declaring multiple sources, authors could use media queries to control which version of an image is displayed and under what conditions:\n\n<picture width=\"500\" height=\"500\">\n <source media=\"(min-width: 45em)\" src=\"large.jpg\">\n <source media=\"(min-width: 18em)\" src=\"med.jpg\">\n <source src=\"small.jpg\">\n <img src=\"small.jpg\" alt=\"\">\n <p>Accessible text</p>\n</picture>\n\nA second proposal put forward by Apple, the srcset attribute, uses a more concise syntax intended for use with the <img> element, although it could be compatible with the <picture> element too. This would allow authors to provide a set of images, but with the decision on which to use left to the browser:\n\n<img src=\"fallback.jpg\" alt=\"\" srcset=\"small.jpg 640w 1x, small-hd.jpg 640w 2x, med.jpg 1x, med-hd.jpg 2x \">\n\nEnter Scrooge\n\n\n\tMen\u2019s courses will foreshadow certain ends, to which, if persevered in, they must lead.\nEbenezer Scrooge\n\n\nGiven the complexity of this issue, there\u2019s a heated debate about which is the best option. Yet code belies a certain truth. That both feature verbose and opaque syntax, I\u2019m not sure either should find its way into the browser \u2013 especially as alternative approaches have yet to be fully explored.\n\nSo, as if to dampen the festive cheer, here are five reasons why I believe both proposals are largely redundant.\n\n1. We need better formats, not more markup\n\nAs we move away from designs defined with fixed pixel values, bitmap images look increasingly unsuitable. While simple images and iconography can use scalable vector formats like SVG, for detailed photographic imagery, raster formats like GIF, PNG and JPEG remain the only suitable option.\n\nThere is scope within current formats to account for varying bandwidth but this requires cooperation from browser vendors. Newer formats like JPEG2000 and WebP generate higher quality images with smaller file sizes, but aren\u2019t widely supported.\n\nWhile it\u2019s tempting to try to solve this issue by inventing new markup, the crux of it remains at the file level.\n\nDaan Jobsis\u2019s experimentation with image compression strengthens this argument. He discovered that by increasing the dimensions of a JPEG image while simultaneously reducing its quality, a smaller files could be produced, with the resulting image looking just as good on both standard and high-resolution displays.\n\nThis may be a hack in lieu of a more permanent solution, but it\u2019s applied in the right place. Easy to accomplish with existing tools and without compatibility issues, it has few downsides. Further experimentation in this area should be encouraged, with standardisation efforts more helpful if focused on developing new image formats or, preferably, extending existing ones.\n\n2. Art direction doesn\u2019t belong in markup\n\nA desired benefit of the <picture> markup pattern is to allow for greater art direction. 
For example, rather than scaling down images on smaller displays to the point that their content is hard to discern, we could present closer crops instead:\n\n\n\nThis can be achieved with CSS of course, although with a download penalty for those parts of an image not shown. This point may be negligible, however, since in the context of adaptable layouts, these hidden areas may end up being revealed anyway.\n\nArt direction concerns design, not content. If we wish to maintain a separation of concerns, including presentation within our markup seems misguided.\n\n3. The size of a display has little relation to the size of an image\n\nBy using media queries, the <picture> element allows authors to choose which characteristics of the screen or viewport to query for different images to be displayed.\n\nIn developing sites at Clearleft, we have noticed that the viewport is essentially arbitrary, with the size of an image\u2019s containing element more important. For example, look at how this grid of images may adapt at different viewport widths:\n\n\n\nAs we build more modular systems, components need to be adaptable in and of themselves. There is a case to be made for developing more contextual methods of querying, rather than those based on attributes of the display.\n\n4. We haven\u2019t lived with the problem long enough\n\nA key strength of the web is that the underlying platform can be continually iterated. This can also be problematic if snap judgements are made about what constitutes an improvement.\n\nThe early history of the web is littered with such examples, be it the perceived need for blinking text or inline typographic styling. To build a platform for the future, additions to it should be carefully considered. And if we want more consistent support across browsers, burdening vendors with an ever increasing list of features seems counterproductive.\n\nOnly once the need for a new feature is sufficiently proven, should we look to standardise it. Before we could declare hover effects, rounded corners and typographic styling in CSS, we used JavaScript as a polyfill. Sure, doing so was painful, but use cases were fully explored, and the CSS specification better reflected the needs of authors.\n\n5. Images and the web aesthetic\n\nThe srcset proposal has emerged from a company that markets its phones as being able to browse the real \u2013 yet squashed down, tapped and zoomable \u2013 web. Perhaps Apple should make its own website responsive before suggesting how the rest of us should do so.\n\nConverserly, while the <picture> proposal has the backing of a few respected developers and designers, it was born out of the work Mat Marquis and Filament Group did for the Boston Globe. As the first large-scale responsive design, this was a landmark project that ignited the responsive web design movement and proved its worth. But it was the first.\n\nIts design shares a vernacular to that of contemporary newspaper websites, with a columnar, image-laden and densely packed layout. Compared to more recent examples \u2013 Quartz, The Next Web and the New York Times Skimmer \u2013 it feels out of step with the future direction of news sites. 
In seeking out a truer aesthetic for the web in which software interfaces have greater influence, we might discover that the need for responsive images isn\u2019t as great as originally thought.\n\n\n\nBuilding for the future\n\nWith responsive design, we\u2019ve accepted the idea that a fully fluid layout, rather than a set of fixed layouts, is best suited to the web\u2019s unpredictable nature. Current responsive image proposals are antithetical to this approach.\n\nWe need solutions that lack complexity, are device-agnostic and work within existing workflows. Any proposal that requires different versions of the same image to be created, is likely to have to acquiesce under the pressure of reality.\n\nWhile it\u2019s easy to get distracted about the size and quality of an image, and how we might choose to serve it, often the simplest solution is not to include it at all. After years of gluttonous design practice, in which fast connections and expansive display sizes were an accepted norm, we have got use to filling pages with needless images and countless items of page furniture.\n\nTo design more adaptable experiences, the presence of every element needs to be questioned, for its existence requires additional data to be downloaded or futher complexity within a design system. Conditional loading techniques mean that the inclusion of images is no longer a binary choice, but can instead appear in a progressively enhanced manner.\n\nSo here is my proposal. Instead of spending the next year worrying about responsive images, let\u2019s embrace the constraints of the medium, and seek out new solutions that can work within them.", "year": "2012", "author": "Paul Lloyd", "author_slug": "paulrobertlloyd", "published": "2012-12-11T00:00:00+00:00", "url": "https://24ways.org/2012/responsive-images-what-we-thought-we-needed/", "topic": "code"} {"rowid": 78, "title": "Fluent Design through Early Prototyping", "contents": "There\u2019s a small problem with wireframes. They\u2019re not good for showing the kind of interactions we now take for granted \u2013 transitions and animations on the web, in Android, iOS, and other platforms. There\u2019s a belief that early prototyping requires a large amount of time and effort, and isn\u2019t worth an early investment. But it\u2019s not true!\n\nIt\u2019s still normal to spend a significant proportion of time working in wireframes. Given that wireframes are high-level and don\u2019t show much detail, it\u2019s tempting to give up control and responsibility for things like transitions and other things sidelined as visual considerations. These things aren\u2019t expressed well, and perhaps not expressed at all, in wireframes, yet they critically influence the quality of a product. Rapid prototyping early helps to bring sidelined but significant design considerations into focus.\n\nSpeaking fluent design\n\nFluency in a language means being able to speak it confidently and accurately. The Latin root means flow.\n\nBy design fluency, I mean using a set of skills in order to express or communicate an idea. Prototyping is a kind of fluency. It takes designers beyond the domain of grey and white boxes to consider all the elements that make up really good product design.\n\nDesigners shouldn\u2019t be afraid of speaking fluent design. 
They should think thoroughly about product decisions beyond their immediate role \u2014 not for the sake of becoming some kind of power-hungry design demigod, but because it will lead to better, more carefully considered product design.\n\nWireframes are incomplete sentences\n\nWireframes, once they\u2019ve served their purpose, are a kind of self-imposed restriction.\n\nMostly made out of grey and white boxes, they deliberately express the minimum. Important details \u2014 visuals, nuanced transitions, sounds \u2014 are missing. Their appearance bears little resemblance to the final thing. Responsibility for things that traditionally didn\u2019t matter (or exist) is relinquished. Animations and transitions in particular are increasingly relevant to the mobile designer\u2019s methods. And rather than being fanciful and superfluous visual additions to a product, they help to clarify designs and provide information about context.\n\nWireframes are useful in the early stages. As a designer trying to persuade stakeholders, clients, or peers, sometimes it will be in your interests to only tell half the story. They\u2019re ideal for gauging whether a design is taking the right direction, and they\u2019re the right medium for deciding core things, such as the overall structure and information architecture.\n\nBut spending a long time in wireframes means delaying details to a later stage in the project, or to the end, when the priority is shifted to getting designs out of the door. This leaves little time to test, finesse and perfect things which initially seemed to be less important. I think designers should move away from using wireframes as primary documentation once the design has reached a certain level of maturity.\n\nA prototype is multiple complete sentences\n\nParagraphs, even.\n\nUnlike a wireframe, a prototype is a persuasive storyteller. It can reveal the depth and range of design decisions, not just the layout, but also motion: animations and transitions. If it\u2019s a super-high-fidelity prototype, it\u2019s a perfect vessel for showing the visual design as well. It\u2019s all of these things that contribute to the impression that a product is good\u2026 and useful, and engaging, and something you\u2019d like to use.\n\nA prototype is impressive. A good prototype can help to convince stakeholders and persuade clients. With a compelling demo, people can more easily imagine that this thing could actually exist. \u201cHey\u201d, they\u2019re thinking. \u201cThis might actually be pretty good!\u201d\n\nHow to make a prototype in no time and with no effort\n\nNow, it does take time and effort to make a prototype. However, good news! It used to require a lot more effort. There are tools that make prototyping much quicker and easier.\n\nIf you\u2019re making a mobile prototype (this seems quite likely), you will want to test and show this on the actual device. This sounds like it could be a pain, but there are a few ways to do this that are quite easy.\n\nKeynote, Apple\u2019s presentation software, is an unlikely candidate for a prototyping tool, but surprisingly great and easy for creating prototypes with transitions that can be shown on different devices.\n\nKeynote enables you to do a few useful, excellent things. You can make each screen in your design a slide, which can be linked together to allow you to click through the prototype. You can add customisable transitions between screens. 
If you want to show a panel that can slide open or closed on your iPad mockup, for example, transitions can also be added to individual elements on the screen. The design can be shown on tablet and mobile devices, and interacted with like it\u2019s a real app. Another cool feature is that you can export the prototype as a video, which works as another effective format for demoing a design.\n\nOverall, Keynote offers a very quick, lightweight way to prototype a design. Once you\u2019ve learned the basics, it shouldn\u2019t take longer than a few hours \u2013 at most \u2013 to put together a respectable clickable prototype with transitions.\n\nDownload the interactive MOV example\n\nHolly icon by Megan Sheehan from The Noun Project\n\nThis is a Quicktime movie exported from Keynote. This version is animated for demonstration purposes, but download the interactive original and you can click the screen to move through the prototype. It demonstrates the basic interactivity of an iPhone app. This anonymised example was used on a project at Fjord to create a master example of an app\u2019s transitions.\n\nPrototyping drawbacks, and perceived drawbacks\n\nIf prototyping is so great, then why do we leave it to the end, or not bother with it at all? There are multiple misconceptions about prototyping: they\u2019re too difficult to make; they take too much time; or they\u2019re inaccurate (and dangerous) documentation.\n\nA prototype is a preliminary model. There should always be a disclaimer that it\u2019s not the real thing to avoid setting up false expectations.\n\nA prototype doesn\u2019t have to be the main deliverable. It can be a key one that\u2019s supported by visual and interaction specifications. And a prototype is a lightweight means of managing and reflecting changes and requirements in a project.\n\nAn actual drawback of prototyping is that to make one too early could mean being gung-ho with what you thought a client or stakeholder wanted, and delivering something inappropriate. To avoid this, communicate, iterate, and keep things simple until you\u2019re confident that the client or other stakeholders are happy with your chosen direction.\n\nThe key throughout any design project is iteration. Designers build iterative models, starting simple and becoming increasingly sophisticated. It\u2019s a process of iterative craft and evolution. There\u2019s no perfect methodology, no magic recipe to follow.\n\nWhat to do next\n\nMake a prototype! It\u2019s the perfect way to impress your friends.\n\nIt can help to advance a brilliant idea with a fraction of the effort of complete development. Sketches and wireframes are perfect early on in a project, but once they\u2019ve served their purpose, prototypes enable the design to advance, and push thinking towards clarifying other important details including transitions.\n\nFor Keynote tutorials, Keynotopia is a great resource. Axure is standard and popular prototyping software many UX designers will already be familiar with; it\u2019s possible to create transitions in Axure. POP is an iPhone app that allows you to design apps on paper, take photos with your phone, and turn them into interactive prototypes. Ratchet is an elegant iPhone prototyping tool aimed at web developers.\n\nThere are perhaps hundreds of different prototyping tools and methods. My final advice is not to get bogged down in (or limited by) any particular tool, but to remember you\u2019re making quick and iterative models. 
Experiment and play!\n\nPrototyping will push you and your designs to a scary place without limitations. No more grey and white boxes, just possibilities!", "year": "2012", "author": "Rebecca Cottrell", "author_slug": "rebeccacottrell", "published": "2012-12-10T00:00:00+00:00", "url": "https://24ways.org/2012/fluent-design-through-early-prototyping/", "topic": "ux"} {"rowid": 74, "title": "Should We Be Reactive?", "contents": "Evolution\n\nLooking at the evolution of the web and the devices we use should help remind us that the times we\u2019re adjusting to are just another step on a journey. These times seem to be telling us that we need to embrace flexibility.\n\nImagine an HTML file containing nothing but text. It\u2019s viewable on any web-capable device and reasonably readable: the notion of the universality of the web was very much a founding principle. Right from the beginning, browser vendors understood that we\u2019d want text to reflow (why wouldn\u2019t we?), so I consider the first websites to have been fluid. \n\nAs we attempted to exert more control through our designs in the early days of the web, debates about whether we should produce fixed or fluid sites raged. We could create fluid designs using tables, but what we didn\u2019t have then was a wide range of web capable devices or the ability to control this fluidity. The biggest changes occurred when stats showed enough people using a different screen resolution we could cater for.\n\nTo me, the techniques of responsive web design provide the control we were missing. Combining new approaches to layout and images with media queries empowered us to learn how to embrace the inherent flexibility of the web in ways to suit our work and the devices used by our audience.\n\nPerhaps another kind of flexibility might be found in how we use context to affect how we present our content; to consider how we might use the information we can access from people, browsers and devices to provide web experiences \u2013 effectively creating sites that react to initial or changing circumstances in the relationship between people and our content.\n\nEmbracing flexibility\n\nSo what is context? Put simply, you could think of it as a secondary piece of information that helps clarify the meaning of the first. It helps set a scene or describe circumstances. I think that Cennydd Bowles has summed it up really well through talks he\u2019s given recently, in which he\u2019s arrived at the acronym DETAILS (Device, Environment, Time, Activity, Individual, Location, Social) \u2013 I encourage you to keep an eye out for his next book due in the new year where he\u2019ll explore this idea much further. This clarity over what context could mean in terms of what we do on the web is fundamental, directing us towards ways we might use it.\n\nWhen you stop to think about it, we\u2019ve been using some basic pieces of this information right from the beginning, like bits of JavaScript or Java applets that serve an appropriate greeting to your site\u2019s visitors, or show their location, or even local weather. But what if we think of this from the beginning of our projects?\n\nWe should think about our content first. Once we know this and have a direction, perhaps then we can think about what context, or even multiple contexts, might help us to communicate more effectively.\n\nThe real world\n\nThere\u2019s always been a disconnect between the real world and the web, which is to be expected. 
But the world around us is a sea of data; every fundamental building block: people, places, events and things have information waiting to be explored. \n\nFor sites based around physical objects or locations, this divide is really apparent. We don\u2019t ordinarily take the time to describe in code the properties of a place, or consider whether your relationship to the place in the real world should have any impact on your relationship with a site about it.\n\nWhen I think about local businesses, they have such rich properties to draw on and yet we don\u2019t really explore them in any meaningful way, even through something as simple as opening hours.\n\nNow we have data\u2026\n\nWe\u2019ve long had access to the current time both on server- and client-sides. The use of geolocation is easier than ever, but when we look at the range of information we could glean to help us make some choices, maybe there\u2019s some help on the horizon from projects like the W3C Device APIs Working Group. This might prove useful to help make us aware of network and battery conditions of a device, along with the potential to gain data from other sensors, which could tell us about lighting conditions, ambient noise levels and temperature depending on the capabilities of the device.\n\nIt may be that our sites have some form of login or access to your profile from another site. Along with data from our devices and browsers, this should give us a sense of how best to talk to our audience in certain situations. We don\u2019t necessarily need to know any personal details, just enough to make decisions about how to present our sites.\n\nThe reactive web?\n\nSo why reactive web design? I\u2019m hoping that a name might help us to have a common vocabulary not only about what we mean when we talk about context, but how it could be considered through our projects, right from the early stages. How could this manifest itself?\n\nA simple example might be a location-aware panel on your site. Perhaps the space is a little down in your content hierarchy but serves a perfectly valid purpose by default. To visitors outside the country perhaps this works fine, but within your country maybe this panel could be used to communicate more effectively. Further still, if we knew the visitors were in the vicinity, we could talk to them more directly. \n\nWhat if both time and location were relevant? This space could work as before but you could consider how time could intersect with your local audience. If you know they\u2019re local and it\u2019s a certain time of day, you could communicate directly with them.\n\nThis example isn\u2019t beyond what banner ads often do and uses easily accessible information. There are more unusual combinations we may be able to find, such as movement and presence. Perhaps a site that tells a story, which changes design and content based on whether you\u2019re moving, how long you\u2019ve been on the site and how far you\u2019ve travelled. This isn\u2019t what we typically expect from websites, but we should bear in mind that what websites are now will not be what they become in the future.\n\nYou could do much of this contextual presentation through native apps, of course. The Silent History, an app novel written and designed for iPad and iPhone, uses an exploration element, providing \u201chundreds of location-based stories across the U.S. and around the world. 
These can be read only when your device\u2019s GPS matches the coordinates of the specified location.\u201d But considering the universality of the web, we could redefine what web-based experiences should be like. Not all methods would work well on the web, but that\u2019s a decision that has to be made for a specific project.\n\nBy thinking more broadly about any web-capable device, we can use what we know to provide relevant experiences for our site\u2019s visitors. We need to be sure what we mean by relevant, of course!\n\nReality bites\n\nWhile there are incredible possibilities, from a simple panel on a site to something bordering on living sites that evolve and change with our circumstances, we need to act with a degree of pragmatism and understand how much of what we could do is based on assumptions and the bias of our own experiences.\n\nWe could go wild with changing the way our content is presented based on contextual information, but if we\u2019re not careful what we end up with confuses and could provide a very fractured experience. As much as possible we need to think more ethnographically, observe and question people in the situations we think may be relevant, and test our assumptions as early as we can. Even on small projects, there may be ways we can validate our assumptions and test with our audience. The key to applying contextual content or cues is not to break the experience between contextual views (as I think we now wouldn\u2019t when hiding content on a mobile view). \n\nIt\u2019s another instance of progressive enhancement \u2013 as we know certain pieces of information, we can enhance the experience. Also, if you do change content, how can you not make a more cumbersome experience for your visitors?\n\nIt\u2019s all about communication\n\nContent is at the core of what we do, but if we consider context we need to understand the impact on that. The effect could be as subtle as an altered hierarchy, involve swapping out panels of content, or in extreme instances perhaps all of your content might change. In some ways, this extends the notion of adaptive content that Karen McGrane has been talking about, to how we write and store the content we create. Thinking about the the impact of context may require us to re-evaluate our site structure, too. Whatever we decide, we have to be clear what will happen and manage the expectations of our users.\n\nThe bottom line\n\nWhat I\u2019m proposing isn\u2019t that we go crazy and end up with a confused, disjointed set of experiences across the web. What I hope is that starting right from the beginning of a project, we think about what context is and could be, and see what relevance it might have to what we\u2019re trying to communicate. This strategic process leads us to think about design.\n\nWe are slowly adapting to what it means to be flexible through responsive and adaptive processes. What does thinking about contextual states mean to us (or designing for state in general)? Does this highlight again how difficult it\u2019ll be for our tools to keep up with our processes and output?\n\nIn terms of code, the vast majority of this data comes from the client-side through JavaScript. While we can progressively enhance, this could lead to a lot of code bloat through feature or capability detection, and potentially a lot of conditional loading of scripts. It\u2019s a real shame we don\u2019t get much we can rely on from the server-side \u2013 we know how unreliable user agents are!\n\nWe need to understand why we\u2019d do this. 
Are we trying to communicate well and be useful, or doing it to show off? Underneath it all, what do we base our decisions on? Do we have actual insight or are we proceeding from our assumptions and the bias of our own experiences? Scott Jenson summed it up best for me: (to paraphrase) the pain we put people through has to be greatly outweighed by the value we offer.\n\nI see that this could be another potential step in our evolution on the web; continuing this exploration of the flexibility the web allows us. It\u2019s amazing we can do such incredible things from what is essentially a set of disparate, linked documents.", "year": "2012", "author": "Dan Donald", "author_slug": "dandonald", "published": "2012-12-09T00:00:00+00:00", "url": "https://24ways.org/2012/should-we-be-reactive/", "topic": "design"} {"rowid": 76, "title": "Giving CSS Animations and Transitions Their Place", "contents": "CSS animations and transitions may not sit squarely in the realm of the behaviour layer, but they\u2019re stepping up into this area that used to be pure JavaScript territory. Heck, CSS might even perform better than its JavaScript equivalents in some cases. That\u2019s pretty serious! With CSS\u2019s new tricks blurring the lines between presentation and behaviour, it can start to feel bloated and messy in our CSS files. It\u2019s an uncomfortable feeling.\n\nHere are a pair of methods I\u2019ve found to be pretty helpful in keeping the potential bloat and wire-crossing under control when CSS has its hands in both presentation and behaviour.\n\nSame eggs, more baskets\n\nStructuring your CSS to have separate files for layout, typography, grids, and so on is a fairly common approach these days. But which one do you put your transitions and animations in? The initial answer, as always, is \u201cit depends\u201d.\n\nSmall effects here and there will likely sit just fine with your other styles. When you move into more involved effects that require multiple animations and some logic support from JavaScript, it\u2019s probably time to choose none of the above, and create a separate CSS file just for them.\n\nPutting all your animations in one file is a huge help for code organization. Even if you opt for a name less literal than animations.css, you\u2019ll know exactly where to go for anything CSS animation related. That saves time and effort when it comes to editing and maintenance. Keeping track of which animations are still currently used is easier when they\u2019re all grouped together as well. And as an added bonus, you won\u2019t have to look at all those horribly unattractive and repetitive prefixed @-keyframe rules unless you actually need to.\n\nAn animations.css file might look something like the snippet below. It defines each animation\u2019s keyframes and defines a class for each variation of that animation you\u2019ll be using. Depending on the situation, you may also want to include transitions here in a similar way. 
(I\u2019ve found defining transitions as their own class, or mixin, to be a huge help in past projects for me.)\n\n// defining the animation\n@keyframes catFall {\n from { background-position: center 0;}\n to {background-position: center 1000px;}\n}\n@-webkit-keyframes catFall {\n from { background-position: center 0;}\n to {background-position: center 1000px;}\n}\n@-moz-keyframes catFall {\n from { background-position: center 0;}\n to {background-position: center 1000px;}\n}\n@-ms-keyframes catFall {\n from { background-position: center 0;}\n to {background-position: center 1000px;}\n}\n\n\u2026\n\n// class that assigns the animation\n\n.catsBackground {\n height: 100%;\n background: transparent url(../endlessKittens.png) 0 0 repeat-y;\n animation: catFall 1s linear infinite;\n -webkit-animation: catFall 1s linear infinite;\n -moz-animation: catFall 1s linear infinite;\n -ms-animation: catFall 1s linear infinite;\n}\n\nIf we don\u2019t need it, why load it?\n\nHaving all those CSS animations and transitions in one file gives us the added flexibility to load them only when we want to. Loading a whole lot of things that will never be used might seem like a bit of a waste.\n\nWhile CSS has us impressed with its motion chops, it falls flat when it comes to the logic and fine-grained control. JavaScript, on the other hand, is pretty good at both those things. Chances are the content of your animations.css file isn\u2019t acting alone. You\u2019ll likely be adding and removing classes via JavaScript to manage your CSS animations at the very least. If your CSS animations are so entwined with JavaScript, why not let them hang out with the rest of the behaviour layer and only come out to play when JavaScript is supported?\n\nDynamically linking your animations.css file like this means it will be completely ignored if JavaScript is off or not supported. No JavaScript? No additional behaviour, not even the parts handled by CSS.\n\n<script>\ndocument.write('<link rel=\"stylesheet\" type=\"text/css\" href=\"animations.css\">');\n</script>\n\nThis technique comes up in progressive enhancement techniques as well, but it can help here to keep your presentation and behaviour nicely separated when more than one language is involved. The aim in both cases is to avoid loading files we won\u2019t be using.\n\nIf you happen to be doing something a bit fancier \u2013 like 3-D transforms or critical animations that require more nuanced fallbacks \u2013 you might need something like modernizr to step in to determine support more specifically. But the general idea is the same.\n\nSumming it all up\n\nUsing a couple of simple techniques like these, we get to pick where to best draw the line between behaviour and presentation based on the situation at hand, not just on what language we\u2019re using. The power of when to separate and how to reassemble the individual pieces can be even greater if you use preprocessors as part of your process. We\u2019ve got a lot of options! 
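For example, the transition-as-a-class idea mentioned earlier might look something like this hypothetical snippet, kept alongside everything else in animations.css:\n\n/* a reusable transition, assigned as a class */\n.fade {\n transition: opacity .3s ease-out;\n -webkit-transition: opacity .3s ease-out;\n -moz-transition: opacity .3s ease-out;\n -ms-transition: opacity .3s ease-out;\n}\n\n.fade.is-hidden {\n opacity: 0;\n}\n\nYour JavaScript then only has to add or remove the is-hidden class, and the how of the movement stays with the rest of your presentation code.\n\n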
The important part is to make forward-thinking choices to save your future self, and even your current self, unnecessary headaches.", "year": "2012", "author": "Val Head", "author_slug": "valhead", "published": "2012-12-08T00:00:00+00:00", "url": "https://24ways.org/2012/giving-css-animations-and-transitions-their-place/", "topic": "code"} {"rowid": 88, "title": "Think First, Code Later", "contents": "This is a story that\u2019s best told from the end, and it\u2019s probably one you\u2019re all familiar with.\n\nYou, or someone just like you, have been building a website, probably as part of a skilled and capable team. You\u2019re a front-end developer, focusing on JavaScript \u2013 it\u2019s either your sole responsibility or shared around. It\u2019s quite a big job, been going on for months, and at last it feels like you\u2019re reaching the end of it.\n\nBut, in a brief moment of downtime, you step back and take a look at the code as a whole. You notice that the folder called \u201cjQuery plugins\u201d suddenly looks rather full, and maybe there\u2019s evidence of several methods of doing the same thing; there are loads of little niggly fixes in the bug tracker; and every place you use Ajax the structure of the data is slightly different. You sigh, and your shoulders droop slightly, and you think \u201cYeah, we\u2019ll do that more cleanly next time.\u201d\n\nThe thing is, you probably already know how to rewrite the start of this story to make the ending work better. This situation is not really anyone\u2019s fault \u2013 it\u2019s just an accumulation of all the things you decided along the way, all the things you agreed you\u2019d fix later that have disappeared into the black hole of technical debt, and accomodating all the \u201ccan we just\u2026?\u201d requests from around the team and the client.\n\nSo, the solution to this is easy, right? More interminable planning meetings, more tightly controlled and documented specifications, less freedom to innovate, to try out new ideas and enjoy what you\u2019re doing.\n\nWait, that sounds even less fun than the old way.\n\nMinimum viable planning\n\nActually, planning and specifications are exactly what you need, but the way you go about them can make a real difference, both to the quality of your code, and the quality of your life as a developer. It can be as simple as being a little more thoughtful before starting on any new piece of functionality. Involve your whole team if possible, or at least those working on what you\u2019re doing. Canvass opinions and work out what the solution to the problem might look like first, rather than coding speculatively to find out.\n\nThere are easy ways you can get into this habit of putting the thought and design up front, and it doesn\u2019t have to mean spending more time on the project as a whole. It also doesn\u2019t have to result in reams of functional specifications. Instead, let the code itself form the specification.\n\nAs JavaScript applications become more complex, unit testing is becoming ever more important. So embrace it, whether you prefer QUnit, or Mocha, or any of the other JavaScript testing frameworks out there. 
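To give a flavour of what that looks like, a specification written as a test might be as small as this (a hypothetical sketch using Mocha\u2019s describe and it with an assertion style of your choosing; slugify is the function you are about to write):\n\ndescribe('slugify', function () {\n\n // each it() block is one piece of the specification\n it('lowercases the text', function () {\n assert.equal(slugify('Hello'), 'hello');\n });\n\n it('replaces spaces with hyphens', function () {\n assert.equal(slugify('Hello World'), 'hello-world');\n });\n\n});\n\n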
The TDD (or test-driven development) pattern is all about writing the tests first and then writing functional code to pass those tests; or, if you prefer, code that meets the specification given by the tests.\n\nSounds like a hassle at first, but once you get into the rhythm of it you should find that the time spent writing tests up front is no greater, and often significantly less, than the time you would have spent fixing bugs afterwards.\n\nIf what you\u2019re working on requires an API between client and server (usually Ajax but this can apply to any method of sending or receiving data) then spend a bit of time with the back-end developer to design the data contracts, before either of you cut any code. Work out what the API endpoints are going to be, and what the data structure you\u2019ll get back from a certain endpoint looks like. A mock JSON object documented on a wiki is enough and it can be atomic. Don\u2019t worry about planning the entire project at once, just plan enough to get on with your current tasks.\n\nDefinition in this way doesn\u2019t have to make your API immutable \u2013 change is still fine \u2013 but if you know roughly where you\u2019re heading, then not only can your team\u2019s efforts become more parallel, but you\u2019re far more likely to have an easier time making it all work. And again, you have a specification \u2013 the shape of the data \u2013 to write your JavaScript against.\n\nPutting everything together, you end up with a logical flow of development, from the specification agreed with the client (your backlog), to the specification agreed with your team (the API contract design), to the specification agreed with your code (your unit tests). Hopefully, there will be ample clues in all of this to inform your front-end library choices, because by then you should have a better picture of what you\u2019re going to need.\n\nWhat the framework?\n\nAs a JavaScript developer predominantly, these are the choices I\u2019m particularly interested in \u2013 how and why you use JavaScript libraries and frameworks, both what you expect from them and what you actually get.\n\nIf we look back at how web development, and specifically JavaScript development has progressed \u2013 from the earliest days of using lines and lines of Dreamweaver code-barf to make an image rollover effect, to today\u2019s large frameworks that handle working with the DOM, Ajax communication and visual effects all in one hit \u2013 the purpose of it is clear: to smooth over the inconsistent bumps between browsers and give a solid, reliable, predictable base on which to put our desired functionality.\n\nUnderstanding what we expect the language as a specification to do, and matching that to what we observe browsers actually doing, and then smoothing out the differences, is a big job. Since the language and the implementations are also changing as we go along, it also feels like a never-ending job. So make full use of this valuable effort. Use jQuery or YUI or anything else you\u2019re comfortable with, but it still pays to think early on about what you need your library to do and what the best choice is to meet that need.\n\nI\u2019ve come in to projects as a fixer and found, to take a recent example, that jQuery UI was being used just to provide a date picker and a modal effect. That\u2019s a lot of code weight to provide two fairly simple pieces of functionality that could easily be covered by smaller plugins. 
Which isn\u2019t to say that jQuery UI itself is a bad choice, but I could see that it had been included late on just to do those things, whereas a more considered approach would have been to put the library in early and use it more universally.\n\nThere are other choices, too. If you automatically throw in jQuery (or whatever your favourite main library is) to a small site with limited functionality, you might only touch a tiny fraction of its scope. In my own development I started looking at what I actually needed from a JavaScript library. For a simple project like What the Framework?, all jQuery needed to do was listen for .ready() and then perform some light DOM selection before handing over to a client-side MVC framework. So perhaps there was another way to go about this while still avoiding the cross-browser headaches.\n\nDeleting jQuery\n\nBut the jQuery pattern is compelling and familiar. And once you\u2019re comfortable with something, it\u2019s a bit of an effort to force yourself out of that comfort zone and learn. But looking back at my whole career, I realised that I\u2019ve relearned pretty much everything I do, probably several times, since I started out. So it\u2019s worth keeping in mind that learning and trying new things is how development has advanced to where it is now, and how it will keep advancing in the future.\n\nIn the end this lead me to Ender, which is billed as an NPM-style package manager for the browser, letting you search for and manage small, loosely coupled modules and their dependencies, and compile them to one file with a common API.\n\nFor What the Framework I ended up with a set of DOM tools, Underscore and Knockout, all minified into 25kb of JavaScript. This compares really well with 32kb minified for jQuery on its own, and Ender\u2019s use of the dollar variable and the jQuery-like syntax in many modules makes switching over a low-friction experience.\n\nOn more complex projects, where you\u2019re really going to use all the features of something like jQuery, but want to minimise the loading of other dependencies when you don\u2019t need them, I\u2019ve recently started looking at Jam. This uses the RequireJS pattern to compile commonly used code into a library file and then manage dependencies and bring in others on a per-page basis depending on how you need it. Again, it all comes down to thinking about what you need and using it only when you need it. And the configurability of tools like Ender or Jam allow you to be responsive to changing requirements as your project grows.\n\nThere is no right answer\n\nThat\u2019s not to say this way of working automatically makes things easier. It doesn\u2019t. On a large, long-running project or one where future functionality is unknown, it\u2019s still hard to predict and plan for everything \u2013 at least until crystal balls as a service come about. But by including strong engineering practices in your front-end, and trying to minimise technical debt, you\u2019re at least giving yourself a decent safety net to guard against the \u201ccan we just\u2026?\u201d tendencies that are a fact of life.\n\nSo, really, this is not an advocation of using a particular technology or framework, because I can\u2019t tell you what works for you or your team. But what I can tell you is that working this way round has done wonders for my productivity and enthusiasm, both for code quality and for trying out new libraries. 
Give it a go, you might like it!", "year": "2012", "author": "Stephen Fulljames", "author_slug": "stephenfulljames", "published": "2012-12-07T00:00:00+00:00", "url": "https://24ways.org/2012/think-first-code-later/", "topic": "process"} {"rowid": 86, "title": "Flashless Animation", "contents": "Animation in a Flashless world\n\nWhen I splashed down in web design four years ago, the first thing I wanted to do was animate a cartoon in the browser. I\u2019d been drawing comics for years, and I\u2019ve wanted to see them come to life for nearly as long. Flash animation was still riding high, but I didn\u2019t want to learn Flash. I wanted to learn JavaScript!\n\nSadly, animating with JavaScript was limiting and resource-intensive. My initial foray into an infinitely looping background did more to burn a hole in my CPU than amaze my friends (although it still looks pretty cool). And there was still no simple way to incorporate audio. The browser technology just wasn\u2019t there.\n\nThings are different now. CSS3 transitions and animations can do most of the heavy lifting and HTML5 audio can serve up the music and audio clips. You can do a lot without leaning on JavaScript at all, and when you lean on JavaScript, you can do so much more!\n\nIn this project, I\u2019m going to show you how to animate a simple walk cycle with looping audio. I hope this will inspire you to do something really cool and impress your friends. I\u2019d love to see what you come up with, so please send your creations my way at rachelnabors.com!\n\nNote: Because every browser wants to use its own prefixes with CSS3 animations, and I have neither the time nor the space to write all of them out, I will use the W3C standard syntaxes; that is, going prefix-less. You can implement them out of the box with something like Prefixfree, or you can add prefixes on your own. If you take the latter route, I recommend using Sass and Compass so you can focus on your animations, not copying and pasting.\n\nThe walk cycle\n\nWalk cycles are the \u201cHello world\u201d of animation. One of the first projects of animation students is to spend hours drawing dozens of frames to complete a simple loopable animation of a character walking.\n\nMost animators don\u2019t have to draw every frame themselves, though. They draw a few key frames and send those on to production animators to work on the between frames (or tween frames). This is meticulous, grueling work requiring an eye for detail and natural movement. This is also why so much production animation gets shipped overseas where labor is cheaper.\n\nLuckily, we don\u2019t have to worry about our frame count because we can set our own frames-per-second rate on the fly in CSS3. Since we\u2019re trying to impress friends, not animation directors, the inconsistency shouldn\u2019t be a problem. (Unless your friend is an animation director.)\n\nThis is a simple walk cycle I made of my comic character Tuna for my CSS animation talk at CSS Dev Conference this year:\n\n\n\nThe magic lies here:\n\nanimation: walk-cycle 1s steps(12) infinite;\n\nBreaking those properties down:\n\nanimation: <name> <duration> <timing-function> <iteration-count>;\n\nwalk-cycle is a simple @keyframes block that moves the background sprite on .tuna around:\n\n@keyframes walk-cycle { \n 0% {background-position: 0 0; }\n 100% {background-position: 0 -2391px;}\n}\n\nThe background sprite has exactly twelve images of Tuna that complete a full walk cycle. 
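With twelve drawings stacked vertically, each frame occupies roughly one twelfth of the 2391px the background travels, about 199px. 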
We\u2019re setting it to cycle through the entire sprite every second, infinitely. So why isn\u2019t the background image scrolling down the .tuna container? It\u2019s all down to the timing function steps(). Using steps() let us tell the CSS to make jumps instead of the smooth transitions you\u2019d get from something like linear. Chris Mills at dev.opera wrote in his excellent intro to CSS3 animation :\n\n\n\tInstead of giving a smooth animation throughout, [steps()] causes the animation to jump between a set number of steps placed equally along the duration. For example, steps(10) would make the animation jump along in ten equal steps. There\u2019s also an optional second parameter that takes a value of start or end. steps(10, start) would specify that the change in property value should happen at the start of each step, while steps(10, end) means the change would come at the end.\n\n\n(Seriously, go read his full article. I\u2019m not going to touch on half the stuff he does because I cannot improve on the basics any more than he already has.)\n\nThe background\n\nA cat walking in a void is hardly an impressive animation and certainly your buddy one cube over could do it if he chopped up some of those cat GIFs he keeps using in group chat. So let\u2019s add a parallax background! Yes, yes, all web designers signed a peace treaty to not abuse parallax anymore, but this is its true calling\u2014treaty be damned.\n\n\n\nAnd to think we used to need JavaScript to do this! It\u2019s still pretty CPU intensive but much less complicated. We start by splitting up the page into different layers, .foreground, .midground, and .background. We put .tuna in the .midground.\n\n.background has multiple background images, all set to repeat horizontally:\n\nbackground-image:\n url(background_mountain5.png),\n url(background_mountain4.png),\n url(background_mountain3.png),\n url(background_mountain2.png),\n url(background_mountain1.png);\nbackground-repeat: repeat-x;\n\nWith parallax, things in the foreground move faster than those in the background. Next time you\u2019re driving, notice how the things closer to you move out of your field of vision faster than something in the distance, like a mountain or a large building. We can imitate that here by making the background images on top (in the foreground, closer to us) wider than those on the bottom of the stack (in the distance).\n\nThe different lengths let us use one animation to move all the background images at different rates in the same interval of time: \n\nanimation: parallax_bg linear 40s infinite;\n\nThe shorter images have less distance to cover in the same amount of time as the longer images, so they move slower.\n\n\n\nLet\u2019s have a look at the background\u2019s animation:\n\n@keyframes parallax_bg { \n 0% {\n background-position: -2400px 100%, -2000px 100%, -1800px 100%, -1600px 100%, -1200px 100%;\n }\n 100% {\n background-position: 0 100%, 0 100%, 0 100%, 0 100%, 0 100%;\n }\n}\n\nAt 0%, all the background images are positioned at the negative value of their own widths. Then they start moving toward background-position: 0 100%. If we wanted to move them in the reverse direction, we\u2019d remove the negative values at 0% (so they would start at 2400px 100%, 2000px 100%, etc.). 
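Because every layer covers its own width in the same forty seconds, the widths do all the work: the 2400px image moves at 60px a second while the 1200px image moves at only 30px a second, and that difference in speed is the parallax. 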
Try changing the values in the codepen above or changing background-repeat to none to see how the images play together.\n\n.foreground and .midground operate on the same principles, only they use single background images.\n\nThe music\n\nAfter finishing the first draft of my original walk cycle, I made a GIF with it and posted it on YTMND with some music from the movie Paprika, specifically the track \u201cThe Girl in Byakkoya.\u201d After showing it to some colleagues in my community, it became clear that this was a winning combination sure to drive away dresscode blues. So let\u2019s use HTML5 to get a clip of that music looping in there!\n\nWarning, there is sound. Please adjust your volume or apply headphones as needed.\n\n\n\nWe\u2019re using HTML5 audio\u2019s loop and autoplay abilities to automatically play and loop a sound file on page load:\n\n<audio loop autoplay>\n <source src=\"http://music.com/clip.mp3\" />\n</audio>\n\nUnfortunately, you may notice there is a small pause between loops. HTML5 audio, thou art half-baked still. Let\u2019s hope one day the Web Audio API will be able to help us out, but until things improve, we\u2019ll have to hack our way around these shortcomings.\n\nTurns out there\u2019s a handy little script called seamlessLoop.js which we can use to patch this. Mind you, if we were really getting crazy with the Cheese Whiz, we\u2019d want to get out big guns like sound.js. But that\u2019d be overkill for a mere loop (and explaining the Web Audio API might bore, rather than impress your friends)!\n\nInstalling seamlessLoop.js will get rid of the pause, and now our walk cycle is complete.\n\n(I\u2019ve done some very rough sniffing to see if the browser can play MP3 files. If not, we fall back to using .ogg formatted clips (Opera and Firefox users, you\u2019re welcome).)\n\nReally impress your friends by adding a run cycle\n\nSo we have music, we have a walk cycle, we have parallax. It will be a snap to bring them all together and have a simple, endless animation. But let\u2019s go one step further and knock the socks off our viewers by adding a run cycle.\n\nThe run cycle\n\nTacking a run cycle on to our walk cycle will require a third animation sequence: a transitional animation of Tuna switching from walking to running. I have added all these to the sprite:\n\n\n\nLet\u2019s work on getting that transition down. We\u2019re going to use multiple animations on the same .tuna div, but we\u2019re going to kick them off at different intervals using animation-delay\u2014no JavaScript required! Isn\u2019t that magical?\n\n\n\nIt requires a wee bit of math (not much, it doesn\u2019t hurt) to line them up. We want to:\n\n\n\tLoop the walk animation twice\n\tPlay the transitional cycle once (it has a finite beginning and end perfectly drawn to pick up between the last frame of the walk cycle and the first frame of the run cycle\u2014no looping this baby)\n\tRUN FOREVER.\n\n\nUsing the pattern animation: <name> <duration> <timing-function> <delay> <iteration-count>, here\u2019s what that looks like:\n\nanimation:\n walk-cycle 1s steps(12) 2,\n walk-to-run .75s steps(12) 2s 1,\n run-cycle .75s steps(13) 2.75s infinite;\n\nI played with the times to get make the movement more realistic. You may notice that the running animation looks smoother than the walking animation. That\u2019s because it has 13 keyframes running over .75 second instead of 12 running in one second. Remember, professional animation studios use super-high frame counts. 
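In frame rate terms, the walk plays at 12 frames per second and the run at roughly 17 (thirteen frames over three quarters of a second), a long way short of the 24 frames per second of traditional film animation. 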
This little animation isn\u2019t even up to PBS\u2019s standards!\n\nThe music: extended play with HTML5 audio sprites\n\nMy favorite part in the The Girl in Byakkoya is when the calm opening builds and transitions into a bouncy motif. I want to start with Tuna walking during the opening, and then loop the running and bounciness together for infinity.\n\n\n\tThe intro lasts for 24 seconds, so we set our 1 second walk cycle to run for 24 repetitions: \nwalk-cycle 1s steps(12) 24\n\tWe delay walk-to-run by 24 seconds so it runs for .75 seconds before\u2026\n\tWe play run-cycle at 24.75 seconds and loop it infinitely\n\n\nFor the music, we need to think of it as two parts: the intro and the bouncy loop. We can do this quite nicely with audio sprites: using one HTML5 audio element and using JavaScript to change the play head location, like skipping tracks with a CD player. Although this technique will result in a small gap in music shifts, I think it\u2019s worth using here to give you some ideas.\n\n// Get the audio element\nvar byakkoya = document.querySelector('audio');\n// create function to play and loop audio\nfunction song(a){\n //start playing at 0\n a.currentTime = 0;\n a.play();\n //when we hit 64 seconds...\n setTimeout(function(){\n // skip back to 24.5 seconds and keep playing...\n a.currentTime = 24.55;\n // then loop back when we hit 64 again, or every 59.5 seconds.\n setInterval(function(){\n a.currentTime = 24.55;\n },39450);\n },64000);\n}\n\nThe load screen\n\nI\u2019ve put it off as long as I can, but now that the music and the CSS are both running on their own separate clocks, it\u2019s imperative that both images and music be fully downloaded and ready to run when we kick this thing off. So we need a load screen (also, it\u2019s nice to give people a heads-up that you\u2019re about to blast them with music, no matter how wonderful that music may be).\n\nSince the two timers are so closely linked, we\u2019d best not run the animations until we run the music:\n\n* { animation-play-state: paused; }\n\nanimation-play-state can be set to paused or running, and it\u2019s the most useful thing you will learn today.\n\nFirst we use an event listener to see when the browser thinks we can play through from the beginning to end of the music without pause for buffering:\n\nbyakkoya.addEventListener(\"canplaythrough\", function () { });\n\n(More on HTML5 audio\u2019s media events at HTML5doctor.com)\n\nInside our event listener, I use a bit of jQuery to add class of .playable to the body when we\u2019re ready to enable the play button:\n\n$(\"body\").addClass(\"playable\");\n $(\"#play-me\").html(\"Play me.\").click(function(){\n song(byakkoya);\n $(\"body\").addClass(\"playing\");\n });\n\nThat .playing class is special because it turns on the animations at the same time we start playing the song:\n\n.playing * { animation-play-state: running; }\n\nThe background\n\nWe\u2019re almost done here! When we add the background, it needs to speed up at the same time that Tuna starts running. The music picks up speed around 24.75 seconds in, and so we\u2019re going to use animation-delay on those backgrounds, too.\n\nThis will require some math. If you try to simply shorten the animation\u2019s duration at the 24.75s mark, the backgrounds will, mid-scroll, jump back to their initial background positions to start the new animation! Argh! 
So let\u2019s make a new @keyframe and calculate where the background position would be just before we speed up the animation.\n\nHere\u2019s the formula:\n\nnew 0% value = delay \u00f7 old duration \u00d7 length of image\n\nnew 100% value = new 0% value + length of image\n\nHere\u2019s the formula put to work on a smaller scale:\n\n\n\nVoil\u00e0! The finished animation!\n\n\n\nI\u2019ve always wanted to bring my illustrations to life. Then I woke up one morning and realized that I had all the tools to do so in my browser and in my head. Now I have fallen in love with Flashless animation.\n\nI\u2019m sure there will be detractors who say HTML wasn\u2019t meant for this and it\u2019s a gross abuse of the DOM! But I say that these explorations help us expand what we expect from devices and software and challenge us in good ways as artists and programmers. The browser might not be the most appropriate place for animation, but is certainly a fun place to start.\n\nThere is so much you can do with the spec implemented today, and so much of the territory is still unexplored. I have not yet begun to show you everything. In eight months I expect this demo will represent the norm, not the bleeding edge. I look forward to seeing the wonderful things you create.\n\n(Also, someone, please, do something about that gappy HTML5 audio looping. It\u2019s a crying shame!)", "year": "2012", "author": "Rachel Nabors", "author_slug": "rachelnabors", "published": "2012-12-06T00:00:00+00:00", "url": "https://24ways.org/2012/flashless-animation/", "topic": "code"} {"rowid": 84, "title": "Responsive Responsive Design", "contents": "Now more than ever, we\u2019re designing work meant to be viewed along a gradient of different experiences. Responsive web design offers us a way forward, finally allowing us to \u201cdesign for the ebb and flow of things.\u201d\n\n\nWith those two sentences, Ethan closed the article that introduced the web to responsive design. Since then, responsive design has taken the web by storm. Seemingly every day, some company is touting their new responsive redesign. Large brands such as Microsoft, Time and Disney are getting in on the action, blowing away the once common criticism that responsive design was a technique only fit for small blogs.\n\nCertainly, this is a good thing. As Ethan and John Allsopp before him, were right to point out, the inherent flexibility of the web is a feature, not a bug. The web\u2019s unique ability to be consumed and interacted with on any number of devices, with any number of input methods is something to be embraced.\n\nBut there\u2019s one part of the web\u2019s inherent flexibility that seems to be increasingly overlooked: the ability for the web to be interacted with on any number of networks, with a gradient of bandwidth constraints and latency costs, on devices with varying degrees of hardware power.\n\nA few months back, Stephanie Rieger tweeted\n\n\n\t\u201cShoot me now\u2026responsive design has seemingly become confused with an opportunity to reduce performance rather than improve it.\u201d\n\n\nI would love to disagree, but unfortunately the evidence is damning. Consider the size and number of requests for four highly touted responsive sites that were launched this year:\n\n\n\t74 requests, 1,511kb\n\t114 requests, 1,200kb\n\t99 requests, 1,298kb\n\t105 requests, 5,942kb\n\n\nAnd those numbers were for the small screen versions of each site!\n\nThese sites were praised for their visual design and responsive nature, and rightfully so. 
They\u2019re very easy on the eyes and a lot of thought went into their appearance. But the numbers above tell an inconvenient truth: for all the time spent ensuring the visual design was airtight, seemingly very little (if any) attention was given to their performance.\n\nIt would be one thing if these were the exceptions, but unfortunately they\u2019re not. Guy Podjarny, who has done a lot of research around responsive performance, discovered that 86% of the responsive sites he tested were either the same size or larger on the small screen as they were on the desktop.\n\nThe reality is that high performance should be a requirement on any web project, not an afterthought. Poor performance has been tied to a decrease in revenue, traffic, conversions, and overall user satisfaction. Case study after case study shows that improving performance, even marginally, will impact the bottom line. The situation is no different on mobile where 71% of people say they expect sites to load as quickly or faster on their phone when compared to the desktop.\n\nThe bottom line: performance is a fundamental component of the user experience.\n\nSo, given it\u2019s extreme importance in the success of any web project, why is it that we\u2019re seeing so many bloated responsive sites?\n\nFirst, I adamantly disagree with the belief that poor performance is inherent to responsive design. That\u2019s not a rule \u2013 it\u2019s a cop-out. It\u2019s an example of blaming the technique when we should be blaming the implementation. This argument also falls flat because it ignores the fact that the trend of fat sites is increasing on the web in general. While some responsive sites are the worst offenders, it\u2019s hardly an issue resigned to one technique.\n\nTo fix the issue, we need to stop making excuses and start making improvements instead. Here, then, are some things we can do to start improving the state of responsive performance, and performance in general, right now.\n\nCreate a culture of performance\n\nIf you understand just how important performance is to the success of a project, the natural next step is to start creating a culture where high performance is a key consideration. \n\nOne of the things you can do is set a baseline. Determine the maximum size and number of requests you are going to allow, and don\u2019t let a page go live if either of those numbers is exceeded. The BBC does this with its responsive mobile site.\n\nA variation of that, which Steve Souders discussed in a recent podcast is to create a performance budget based on those numbers. Once you have that baseline set, if someone comes along and wants to add a something to the page, they have to make sure the page remains under budget. If it exceeds the budget, you have three options:\n\n\n\tOptimize an existing feature or asset on the page\n\tRemove an existing feature or asset from the page\n\tDon\u2019t add the new feature or asset\n\n\nThe idea here is that you make performance part of the process instead of something that may or may not get tacked on at the end.\n\nEmbrace the pain\n\nThis troubling trend of web bloat can be blamed in part on the lack of pain associated with poor performance. Most of us work on high-speed connections with low latency. When we fire up a 4Mb site, it doesn\u2019t feel so bad. \n\nWhen I tested the previously mentioned 5,942kb site on a 3G network, it took over 93 seconds to load. A minute and a half just staring at a white screen. 
Had anyone working on that project experienced that, you can bet the site wouldn\u2019t have launched in that state.\n\nDon\u2019t just crunch numbers. Fire up your site on a slower network and see what it feels like to wait. If you don\u2019t have access to a slow network, simulate one using a tool like Slowy, Throttle or the Network Conditioner found in Mac OS X 10.7.\n\nWatch for low-hanging fruit\n\nThere are a bunch of general performance improvements that apply to any site (responsive or not) but often aren\u2019t made. A great starting point is to refer to Yahoo!\u2018s list of rules.\n\nSome of this might sound complicated or intimidating, but it doesn\u2019t have to be. You can grab an .htaccess file from HTML 5 Boilerplate or use Sergey Chernyshev\u2019s drop-in .htaccess file. You can use tools like SpriteMe to simplify the creation of sprites, and ImageOptim to compress images.\n\nJust by implementing these simple optimizations you will achieve a noticeable improvement in terms of weight and page load time.\n\nBe careful with images\n\nThe most common offender for poor responsive performance is downloading unnecessarily large images, or worse yet, multiple sizes of the same image. \n\nFor background images, simply being careful with where and how you include the image can ensure you don\u2019t get caught in the trap of multiple background images being downloaded without being used. Don\u2019t count on display:none to help. While it may hide elements from displaying on screen, those images will still be requested and downloaded.\n\nContent images can be a little trickier. Whatever you do, don\u2019t serve a large image that works on a large screen display to small screens. It\u2019s wasteful, not only in terms of adding weight to the page, but also in wasting precious memory. Instead, use a tool like Adaptive Images or src.sencha.io to make sure only appropriately sized images are being downloaded. \n\nThe new <picture> element that has been so often discussed is another excellent solution if you\u2019re feeling particularly future-oriented. A picture polyfill exists so that you can start using the element now without any worries about support.\n\nConditional loading\n\nDon\u2019t load any more than you absolutely need to. If a script isn\u2019t needed at certain sizes, use the matchMedia polyfill to ensure it only loads when needed. Use eCSSential to do the same for unnecessary CSS files.\n\nLast year on 24 ways, Jeremy Keith wrote an article about conditional loading of content in a responsive design based on the screen width. The technique was later refined by the Filament Group into what they dubbed the Ajax-Include Pattern. It\u2019s a powerful and simple way to lighten the load on small screens as well as reduce clutter.\n\nGo vanilla?\n\nIf you take a look at the HTTP Archive you\u2019ll see that other than image size, JavaScript is the heaviest asset on a page weighing in at 215kb on average. It also boasts the fifth highest correlation to load time as well as the second highest correlation to render time. \n\nMuch of the weight can be attributed to our industry\u2019s increasing reliance on frameworks. This is especially a concern on mobile devices. PPK recently exclaimed that current JavaScript libraries are just \u201ctoo heavy for mobile\u201d. \u201cResearch from Stoyan Stefanov on parse times supports this. 
On some Android and iOS devices, it can take as long as 200-300ms just to parse jQuery.\n\nThere\u2019s nothing wrong about using a framework, but the problem is that they\u2019ve become the default. Before dropping another framework or plugin into a page, we should stop to consider the value it adds and whether we could accomplish what we need to do using a combination of vanilla JavaScript and CSS instead. (This is a great example of a scenario where a performance budget could help.)\n\nStart thinking beyond visual aesthetics\n\nWe love to tout the web\u2019s universality when discussing the need for responsive design. But that universality is not limited simply to screen size. Networks and hardware capabilities must factor in as well.\n\nThe web is an incredibly dynamic and interactive medium, and designing for it demands that we consider more than just visual aesthetics. Let\u2019s not forget to give those other qualities the attention they deserve.", "year": "2012", "author": "Tim Kadlec", "author_slug": "timkadlec", "published": "2012-12-05T00:00:00+00:00", "url": "https://24ways.org/2012/responsive-responsive-design/", "topic": "design"} {"rowid": 77, "title": "Colour Accessibility", "contents": "Here\u2019s a quote from Josef Albers:\n\nIn visual perception a colour is almost never seen as it really is[\u2026] This fact makes colour the most relative medium in art.Josef Albers, Interaction of Color, 1963\n\nAlbers was a German abstract painter and teacher, and published a very famous course on colour theory in 1963. Colour is very relative \u2014 not just in the way that it appears differently across different devices due to screen quality and colour management, but it can also be seen differently by different people \u2014 something we really need to be more mindful of when designing.\n\nWhat is colour blindness?\n\nColour blindness very rarely means that you can\u2019t see any colour at all, or that people see things in greyscale. It\u2019s actually a decreased ability to see colour, or a decreased ability to tell colours apart from one another. \n\nHow does it happen?\n\nInside the typical human retina, there are two types of receptor cells \u2014 rods and cones. Rods are the cells that allow us to see dark and light, and shape and movement. Cones are the cells that allow us to perceive colour. There are three types of cones, each responsible for absorbing blue, red, and green wavelengths in the spectrum.\n\nProblems with colour vision occur when one or more of these types of cones are defective or absent entirely, and these problems can either be inherited through genetics, or acquired through trauma, exposure to ultraviolet light, degeneration with age, an effect of diabetes, or other factors.\n\nColour blindness is a sex-linked trait and it\u2019s much more common in men than in women. The most common type of colour blindness is called deuteranomaly which occurs in 7% of males, but only 0.5% of females. That\u2019s a pretty significant portion of the population if you really stop and think about it \u2014\u00a0we can\u2019t ignore this demographic.\n\nWhat does it look like?\n\nPeople with the most common types of colour blindness, like protanopia and deuteranopia, have difficulty discriminating between red and green hues. There are also forms of colour blindness like tritanopia, which affects perception of blue and yellow hues. 
Below, you can see what a colour wheel might look like to these different people.\n\n\n\nWhat can we do?\n\nHere are some things you can do to make your websites and apps more accessible to people with all types of colour blindness.\n\nInclude colour names and show examples\n\nOne of the most common annoyances I\u2019ve heard from people who are colour-blind is that they often have difficulty purchasing clothing and they will sometimes need to ask another person for a second opinion on what the colour of the clothing might actually be. While it\u2019s easier to shop online than in a physical store, there are still accessibility issues to consider on shopping websites.\n\nLet\u2019s say you\u2019ve got a website that sells T-shirts. If you only show a photo of the shirt, it may be impossible for a person to tell what colour the shirt really is. For clarification, be sure to reference the name of the colour in the description of the product.\n\n\n\nUnited Pixelworkers does a great job of following this rule. The St. John\u2019s T-shirt has a quirky palette inspired by the unofficial pink, white and green Newfoundland flag, and I can imagine many people not liking it.\n\nAnother common problem occurs when a colour filter has been added to a product search. Here\u2019s an example from a clothing website with unlabelled colour swatches, and how that might look to someone with deuteranopia-type colour blindness.\n\n\n\nThe colour search filter below, from the H&M website, is much better since it uses names instead.\n\n\n\nAt first glance, Urban Outfitters also uses unlabelled colour swatches on product pages (below), but on closer inspection, the colour name is displayed on hover. This isn\u2019t an ideal solution, because although it\u2019ll work on a desktop browser, it won\u2019t work on a touchscreen device where hovering isn\u2019t an option. \n\n\n\nUsing overly fancy colour names, like the ones you might find labelling high-end interior paint can be just as confusing as not using a colour name at all. Names like grape instead of purple don\u2019t really give the viewer any useful information about what the colour actually is on a colour wheel. Is grape supposed to be purple, or could it refer to red grapes or even green? Stick with hue names as much as possible.\n\nAvoid colour-specific instructions\n\nWhen designing forms, avoid labelling required fields only with coloured text. It\u2019s safer to use a symbol cue like the asterisk which is colour-independent. \n\n\n\nA similar example would be directing a user to click a green button to purchase a product. Label your buttons clearly and reference them in the site copy by function, not colour, to avoid confusion.\n\nDon\u2019t rely on colour coding\n\nDesigning accessible maps and infographics can be much more challenging. \n\nDon\u2019t rely on colour coding alone \u2014 try to use a combination of colour and texture or pattern, along with precise labels, and reflect this in the key or legend. Combine a blue background with a crosshatched pattern, or a pink background with a stippled dot \u2014 your users will always have two pieces of information to work with.\n\n\n\nThe map of the London subway system is an iconic image not just in London, but around the world. Unfortunately, it contains some colours that are indistinguishable from each other to a person with a vision problem. This is true not only for the London underground, but also for any other wayfinding system that relies on colour coding as the only key in a legend. 
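On the web, pairing a hue with a pattern takes only a few lines of CSS. Here is a minimal sketch of two hypothetical legend swatches (the class names and colour values are made up, and vendor prefixes are omitted); in the markup, each swatch would still sit beside a plain text label so the key reads correctly even with no colour perception at all:

.legend-swatch {
  display: inline-block;
  width: 2em;
  height: 1em;
  border: 1px solid #333;
}

/* blue, plus a crosshatch-style stripe */
.legend-swatch--rivers {
  background-color: #06c;
  background-image: repeating-linear-gradient(45deg,
    rgba(255, 255, 255, 0.6) 0,
    rgba(255, 255, 255, 0.6) 2px,
    transparent 2px,
    transparent 6px);
}

/* pink, plus a stippled dot */
.legend-swatch--parks {
  background-color: #c37;
  background-image: radial-gradient(rgba(255, 255, 255, 0.7) 1px, transparent 1.5px);
  background-size: 6px 6px;
}

Even a rough combination like this gives the reader two cues to work with instead of one.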
\n\n\n\nThere are printable versions of the map available online in black and white, using patterns and shades of black and grey that are distinguishable, but the point is that there would be no need for such a map if it were designed with accessibility in mind from the beginning. And, if you\u2019re a person who has a physical disability as well as a vision problem, the \u201cStep-Free\u201d guide map which shows stations is based on the original coloured map. \n\n\n\nProvide alternatives and customization\n\nWhile it\u2019s best to consider these issues and design your app to be accessible by default, sometimes this might not be possible. Providing alternative styles or allowing users to edit their own colours is a feature to keep in mind.\n\nThe developers of the game Faster Than Light created an alternate colour-blind mode and asked for public feedback to make sure that it passed the test. Not much needed to be done, but you can see they added stripes to the red zones and changed some outlines to blue.\n\n\n\niChat is also a good example. Although by default it uses coloured bubbles to indicate a user\u2019s status (available for chat, away or idle, or busy), included in the preferences is a \u201cUser Shapes to Indicate Status\u201d option, which changes the shape of the standard circles to green circles, yellow triangles and red squares.\n\n\n\nPay attention to contrast \n\nColours that are similar in value but different in hue may be easy to distinguish between for a user with good vision, but a person who suffers from colour blindness may not be able to tell them apart at all. Proofing your work in greyscale is a quick way to tell if there\u2019s enough contrast between the most important information in your design.\n\nCheck with a simulator\n\nThere are many tools out there for simulating different types of colour blindness, and it\u2019s worth checking your design to catch any potential problems up front. \n\nOne is called Sim Daltonism and it\u2019s available for Mac OS X. It\u2019ll show a pop-up preview next to your cursor and you can choose which type of colour blindness you want to test from a drop-down menu. \n\n\n\nYou can also proof for the two most common types of colour blindness right in Photoshop or Illustrator (CS4 and later) while you\u2019re designing. \n\n\n\nThe colour contrast check tool from designer and developer Jonathan Snook gives you the option to enter a colour code for a background, and a colour code for text, and it\u2019ll tell you if the colour contrast ratio meets the Web Content Accessibility Guidelines 2.0. You can use the built-in sliders to adjust your colours until they meet the compliant contrast ratios. This is a great tool to test your palette before going live.\n\n\n\nFor live websites, you can use the accessibility tool called WAVE, which also has a contrast checker. It\u2019s important to keep in mind, though, that while WAVE can identify contrast errors in text, other things can slip through, so a site that passes the test does not automatically mean it\u2019s accessible in reality.\n\nFor example, the contrast checker here doesn\u2019t notice that our red link in the introduction isn\u2019t underlined, and therefore could blend into the surrounding paragraph text. \n\n\n\nI know that once I started getting into the habit of checking my work in a simulator, I became more mindful of any potential problem areas and it was easier to avoid them up front. 
It\u2019s also made me question everything I see around me and it sends red flags off in my head if I think it\u2019s a serious colour blindness fail. Understanding that colour is relative in the planning stages and following these tips will help us make more accessible design for all.", "year": "2012", "author": "Geri Coady", "author_slug": "gericoady", "published": "2012-12-04T00:00:00+00:00", "url": "https://24ways.org/2012/colour-accessibility/", "topic": "design"} {"rowid": 82, "title": "Being Prepared To Contribute", "contents": "\u201cYou\u2019ll figure it out.\u201d The advice my dad gives has always been the same, whether addressing my grade school homework or paying bills after college. If I was looking for a shortcut, my dad wasn\u2019t going to be the one to provide it.\n\nWhen I was a kid it infuriated the hell out of me, but what I then perceived to be a lack of understanding turned out to be a keystone in my upbringing. As an adult, I realize the value in not receiving outright solutions, but being forced to figure things out. \n\nEven today, when presented with a roadblock while building for the web, I am temped to get by with the help of the latest grid system, framework, polyfill, or plugin. In and of themselves these resources are harmless, but before I can drop them in, those damn words still echo in the back of my mind: \u201cYou\u2019ll figure it out.\u201d\n\nI know that if I blindly implement these tools as drag and drop solutions I fail to understand the intricacies behind how and why they were built; repeatedly using them as shortcuts handicaps my skill set. When I solely rely on the tools of others, my work is at their mercy, leaving me less creative and resourceful, and, thus, less able to contribute to the advancement of our industry and community. \n\nOne of my favorite things about this community is how generous and collaborative it can be. I\u2019ve loved seeing FitVids used all over the web and regularly improved upon at Github. I bet we can all think of a time where implementing a shared resource has benefitted our own work and sanity. Because these resources are so valuable, it\u2019s important that we continue to be a part of the conversation in order to further develop solutions and ideas. It\u2019s easy to assume there\u2019s someone smarter or more up-to-date in any one area, but with a degree of understanding and perspective, we can all participate. \n\nThis open form of collaboration is in our web DNA. After all, its primary purpose was to promote the exchange and development of new ideas.\n\n\n\tTim Berners-Lee proposed a global hypertext project, to be known as the World Wide Web. Based on the earlier \u201cEnquire\u201d work, it was designed to allow people to work together by combining their knowledge in a web of hypertext documents.\n\n\nI\u2019m delighted to find that this spirit of collaborative ingenuity is alive and well on the web today. Take the story of Off Canvas as an example. I was at an ATX Dribbble meet up where I met Jason Weaver and chatted to him about his recent work on the responsive layout prototype, Off Canvas. Jason said he came across a post by Luke Wroblewski outlining the idea and saw this:\n\n\n\tIf anyone is interested in building a complete example of this approach using responsive Web design techniques, let me know!\n\n\nFrom there Luke recounts: \n\n\n\tWe went back and forth on email, with me laying out ideas and Jason doing all the hard work to see if they can be done and improving them bit by bit! 
Once we got to something we both liked, I wrote up an article explaining things and he hosted the examples.\n\n\nLuke took the time to clearly outline and diagram his ideas, and Jason responded with a solid proof of concept that has evolved into a tool we all have at our disposal. Victory!\n\nI have also benefitted from comrades who have taken an idea of mine into development. After blogging about some concerns in regards to maintaining hierarchy as media queries are used to shift layouts, Jordan Moore rebounded with some responsive demos where he used flexbox to (re)order content as viewport sizing changes.\n\nSimilar stories can be found behind the development of things like FitVids, FitText, and Molten Leading. I love this pattern of collaboration because it involves a fairly specific process:\n\n\n\tInitial idea or prototype is outlined or built, then shared\n\tDiscuss\n\tSomeone develops or improves it, then shares it\n\tDiscuss\n\tSomeone else develops or improves it, then shares it.\n\tInfinity.\n\n\nThis is what the web looks like when we build it together, and I\u2019d argue that steps 2+ are absolutely crucial. A web where everyone develops their own ideas and tools independent of one another is like a room full of people talking and no one listening. \n\n\n\nThe pattern itself mimics a literal web structure, and ideally we\u2019d be able to follow a strand from one idea to the next and so on.\n\nBlessed are the curators\n\nSometimes those lines aren\u2019t easy to find or follow. Thankfully, there are people who painstakingly log each experiment and index much of what\u2019s out there. Chris Coyier does this with CSS in general, and Brad Frost is doing this for responsive and multi-device design with his Pattern Library. Seriously, take a look at this page and imagine what it would take to find, track and organize the progression of each of these resources yourself. I\u2019d argue that ongoing collections like these are more valuable than the sum of their parts when they are updated regularly as opposed to a top ten tips blog post format.\n\nHere\u2019s my soapbox\n\nHere are a few things I appreciate about how things are shared and contributed online. And yes, I could do way better at all of them myself.\n\n\n\tConcise write-ups: honor others\u2019 time by getting to the point. Not every idea or solution needs two thousand words to convey fully. I love long-form posts, but there\u2019s a time and a place for them.\n\tVisual aids: if a quick illustration, screenshot, or graphic helps illustrate your point or problem, yes please.\n\n\nBy the way, Luke Wroblewski rules the school on both of these.\n\n\n\tDemo it: host it yourself, or put it on CodePen or JS Bin for others to see.\n\tPut it on Github: share and improve with the rest of the community. Consider, however, that because someone puts something on Github doesn\u2019t mean they\u2019re forever bound to provide support or instruction.\n\n\nThis isn\u2019t a call for everyone to learn everything all the time, but if you\u2019re curious or interested in something, skip the shortcut and get your hands dirty: sketch, prototype, question, debate, fork, and share. 
Figuring these things out on our own makes us valuable contributors to the web \u2013 the thing that ultimately we\u2019re all trying to figure out together.", "year": "2012", "author": "Trent Walton", "author_slug": "trentwalton", "published": "2012-12-03T00:00:00+00:00", "url": "https://24ways.org/2012/being-prepared-to-contribute/", "topic": "process"} {"rowid": 85, "title": "Starting Your Project on the Right Foot (and Keeping It There)", "contents": "I\u2019m not sure if anything is as terrifying as beginning a new design project. I often spend hours trying to find the best initial footing in a design, so I\u2019ve been working hard to improve my process, particularly for the earliest stages of a project. I want\u00a0to smooth out the bumps that disrupt my creative momentum and focus on the emotional highs and lows I experience, and then try to minimize the lows and ride the highs as long as possible. \n\nDesign is often a struggle broken up by blissful moments of creative clarity that provide valuable force to move your work forward. Momentum is a powerful tool in creative work, and it\u2019s something we don\u2019t always maximize when we\u2019re working because of the hectic nature of our field.\u00a0Obviously, every designer is going to have a different process, but I thought I\u2019d share some of the methods I\u2019ve begun to adopt. I hope this will spark a conversation among designers who are interested in looking at process in a new way.\n\nJump-starting a project\n\nI cannot overstate the importance of immersing yourself in design and collecting ample amounts of inspiration when beginning a project. I make it a daily practice to visit a handful of sites (Dribbble, Graphic Exchange, Web Creme, siteInspire, Designspiration, and others) and save any examples of design that I like. I then sort them into general categories (publication design, illustration, typography, web design, and so on). Enjoying a bit of fresh design every day helps me absorb it and analyze why it\u2019s effective instead of just imitating it.\u00a0\n\nMany designers are afraid to look at too much design for fear that they\u2019ll be tempted to copy it, but I feel a steady influx of design inspiration reduces that possibility. You\u2019re much more likely to take the easy way out and rip off a design if you\u2019re scrambling for inspiration after getting stuck. If you are immersed in design from a variety of mediums, you\u2019ll engage your creative brain on multiple levels and have an easier time creating something unique for your project. Looking at good design will not make you a good designer but it will make you a better designer.\n\nDesign is design\n\nTry not to limit your visual research to the medium you\u2019re working in. Websites, books, posters and packaging all have their own unique limitations and challenges, and any one of those characteristics could be useful to you. Posters need to grab the viewer and pass on a small tidbit of information; packaging needs to encourage physical interaction; and websites need to encourage exploration. If you know the challenges you\u2019ll be facing, you will know where to look for design that tackles those same problems.\n\nI find it refreshing to look at design from the turn of the nineteenth century, when type was laid out on objects without thought to aesthetics. Many vintage packages break all sorts of modern design rules, and looking at that kind of work is a great way to spark your creativity. 
Pulling yourself out of the box and away from the rules of what you\u2019re working on can reveal solutions that are innovative and unique.\u00a0After a little finessing, the warning label text from a 1940s hazardous chemical box from could have the exact type and icon arrangement you need for your project.\u00a0There\u2019s a massive pool of design to pull from that doesn\u2019t have the limitations the web has, and exploring those design worlds will help you grow your own repertoire.\n\nIf all else fails, start with the footer\n\nThe very beginning of a project is the most frustrating point in a project for me. I\u2019m trying to figure out typeface combinations, colors and the overall voice of the design, and until I find the right solutions, I\u2019m a wreck. I\u2019ve found often that my frustration stems from trying to solve too many problems at once. The beginning of a project has a lot of moving targets, nearly endless possible solutions, and constantly changing variables. You\u2019ll knock out one problem only to discover your solution doesn\u2019t jive with something you worked out earlier \u2014 you end up designing in circles.\n\nIf you find yourself getting stuck at the beginning of a website design, try working out one specific element of the site and see what emerges. I\u2019m going to recommend the footer. Why? Footers can easily be ignored in a design or become a dumping ground for items that couldn\u2019t be worked into the main layout. But, at the start of most projects, the minimum content requirements for the footer are usually established. There needs to be a certain number of links, social media buttons, copyright details, a search bar, and so on. It\u2019s a self-contained item within the design that has a specific purpose, and that\u2019s a great element to focus on when you\u2019re stuck in a design. Colors, typefaces, link styles, input fields and buttons can all be sketched out from just the footer. It\u2019s a very flexible element that can be as prominent or subtle as you want, and it\u2019s a solid starting point for setting the tone and style of a site.\n\nSave the details\n\nDesigners love details. I love details. But don\u2019t let nitpicking early on in your process kill your creative momentum.\u00a0Design is an emotional process, and being frustrated or defeated by a tricky problem or a graphical detail you just can\u2019t nail down can deflate your creative energies. If you hit a roadblock, set it aside and tackle another piece of the project. As you spend time engaged in a design, the style you develop will evolve according to the needs of the content, and you might arrive naturally at a solution that will work perfectly for the problem that had you stuck before.\u00a0\n\nIf I find myself working on one particular element for more than a half an hour without any clear movement, I shelve it. Designers often wear their obsessive detail-oriented tendencies as a badge of honor, but there\u2019s a difference between making the design better and wasting time. If you\u2019ve spent hours nudging elements around pixel by pixel and can\u2019t settle on something, it probably means what you\u2019re doing isn\u2019t making a huge improvement on the design. Don\u2019t be afraid to let it lie and come at it again with fresh eyes. 
You will be better equipped to tackle the finer points of a project once you\u2019ve got the broad strokes defined.\n\nHave a plan when you start and stop designing\n\nWe all know that creativity isn\u2019t something you can turn on effortlessly, and it\u2019s easy to forget the emotional process that goes along with design.\u00a0If you leave a project in a place of frustration, it\u2019s going to stay with you in your free time and affect you negatively, like a dark cloud of impending disaster. Try to end each design session with a victory, a small bit of definable progress that you can take with you in your downtime. Even something as small as finding the right opacity for the interior shadow on the search bar in the header of the site is a win.\u00a0Likewise, when you return to a project after a break, it can be difficult to get the ball rolling on the design again if you set it down without a clear path for the next steps. I find that I work on details best when I\u2019m returning from downtime, when I\u2019m fresh and re-energized and ready to dig in again. Try to pick out at least one element you\u2019d like to fine-tune when you are winding down in a design session and use it to kick-start your next session.\n\nContent is king\n\nI would argue there is nothing more crucial to the success of a design than having the content defined from the outset.\u00a0Designing without content is similar to designing without an audience, and designing with vague ideas of content types and character limits is going to result in a muted design that doesn\u2019t reach its full potential. Images and language go hand in hand with design, and can take a design from functional to outstanding if you have them available from the outset. We don\u2019t always have the luxury of having content to build a design around, but fight for it whenever you can. For example, if the site you are designing is full of technical jargon, your paragraphs might need a longer line length to accommodate the longer words being used.\u00a0\n\nOften, working with content will lead to design solutions you wouldn\u2019t have come to otherwise.\u00a0Design speaks to content, and content speaks to design. Lorem ipsum doesn\u2019t speak to anyone (unless you know Latin, in which case, congratulations!).\n\nEvery project has its own set of needs, and every designer has his or her own method of working. There\u2019s obviously no perfect process to design, and being dogmatic about process can be just as harmful as not having one. Exposing yourself to new design and new ways of designing is an easy way to test your skills and grow. When things are hard and you can\u2019t get any momentum going on a design, this is when your skill set is truly challenged. We all hope to get wonderful projects with great assets and ample creative possibilities, but you won\u2019t always be so blessed, and this is when the quality of your process is really going to shine.", "year": "2012", "author": "Bethany Heck", "author_slug": "bethanyheck", "published": "2012-12-02T00:00:00+00:00", "url": "https://24ways.org/2012/starting-your-project-on-the-right-foot/", "topic": "process"} {"rowid": 80, "title": "HTML5 Video Bumpers", "contents": "Video is a bigger part of the web experience than ever before. 
With native browser support for HTML5 video elements freeing us from the tyranny of plugins, and the availability of faster internet connections to the workplace, home and mobile networks, it\u2019s now pretty straightforward to publish video in a way that can be consumed in all sorts of ways on all sorts of different web devices.\n\nI recently worked on a project where the client had shot some dedicated video shorts to publish on their site. They also had some five-second motion graphics produced to top and tail the videos with context and branding. This pretty common requirement is a great idea on the web, where a user might land at your video having followed a link and be viewing a page without much context.\n\nKnown as bumpers, these short introduction clips help brand a video and make it look a lot more professional.\n\n\n\nAdding bumpers to a video\n\nThe simplest way to add bumpers to a video would be to edit them on to the start and end of the video file itself. Cooking the bumpers into the video file is easy, but should you ever want to update them it can become a real headache. If the branding needs updating, for example, you\u2019d need to re-edit and re-encode all your videos. Not a fun task.\n\nWhat if the bumpers could be added dynamically? That would enable you to use the same bumper for multiple videos (decreasing download time for users who might watch more than one) and to update the bumpers whenever you wanted. You could change them seasonally, update them for special promotions, run different advertising slots, perform multivariate testing, or even target different bumpers to different users.\n\nThe trade-off, of course, is that if you dynamically add your bumpers, there\u2019s a chance that a user in a given circumstance might not see the bumper. For example, if the main video feature was uploaded to YouTube, you\u2019d have no way to control the playback. As always, you need to weigh up the pros and cons and make your choice.\n\nHTML5 bumpers\n\nIf you wanted to dynamically add bumpers to your HTML5 video, how would you go about it? That was the question I found myself needing to answer for this particular client project.\n\nMy initial thought was to treat it just like an image slideshow. If I were building a slideshow that moved between images, I\u2019d use CSS absolute positioning with z-index to stack the images up on top of each other in a pile, with the first image on top. To transition to the second image, I\u2019d use JavaScript to fade the top image out, revealing the second image beneath it.\n\n\n\nNow that video is just a native object in the DOM, just like an image, why not do the same? Stack the videos up with the opening bumper on top, listen for the video\u2019s onended event, and fade it out to reveal the main feature behind. Good idea, right?\n\nWrong\n\nRemember that this is the web. It\u2019s never going to be that easy. The problem here is that many non-desktop devices use native, dedicated video players. Think about watching a video on a mobile phone \u2013 when you play the video, the phone often goes full-screen in its native player, leaving the web page behind. There\u2019s no opportunity to fade or switch z-index, as the video isn\u2019t being viewed in the page. Your page is left powerless. Powerless!\n\n\n\nSo what can we do? What can we control?\n\nThose of us with particularly long memories might recall a time before CSS, when we\u2019d have to use JavaScript to perform image rollovers. 
As CSS background images weren\u2019t a practical reality, we would use lots of <img> elements, and perform a rollover by modifying the src attribute of the image. \n\nTurns out, this old trick of modifying the source can help us out with video, too. In most cases, modifying the src attribute of a <video> element, or perhaps more likely the src attribute of a source element, will swap from one video to another.\n\nSwappin\u2019 it\n\nLet\u2019s take a deliberately simple example of a super-basic video tag:\n\n<video src=\"mycat.webm\" controls>no fallback coz i is lame, innit.</video>\n\nWe could very simply write a script to find all video tags and give them a new src to show our bumper.\n\n<script>\n\tvar videos, i, l;\n\tvideos = document.getElementsByTagName('video');\n\tfor(i=0, l=videos.length; i<l; i++) {\n\t\tvideos[i].setAttribute('src', 'bumper-in.webm');\n\t}\n</script>\n\nView the example in a browser with WebM support. You\u2019ll see that the video is swapped out for the opening bumper. Great!\n\nBeefing it up\n\nOf course, we can\u2019t just publish video in one format. In practical use, you need a <video> element with multiple <source> elements containing your different source formats.\n\n<video controls>\n <source src=\"mycat.mp4\" type=\"video/mp4\" />\n <source src=\"mycat.webm\" type=\"video/webm\" />\n <source src=\"mycat.ogv\" type=\"video/ogg\" />\n</video>\n\nThis time, our script needs to loop through the sources, not the videos. We\u2019ll use a regular expression replacement to swap out the file name while maintaining the correct file extension.\n\n<script>\n var sources, i, l, orig;\n sources = document.getElementsByTagName('source');\n for(i=0, l=sources.length; i<l; i++) {\n orig = sources[i].getAttribute('src');\n sources[i].setAttribute('src', orig.replace(/(w+).(w+)/, 'bumper-in.$2'));\n // reload the video\n sources[i].parentNode.load();\n }\n</script>\n\nThe difference this time is that when changing the src of a <source> we need to call the .load() method on the video to get it to acknowledge the change.\n\nSee the code in action, this time in a wider range of browsers.\n\nBut, my video!\n\nI guess we should get the original video playing again. Keeping the same markup, we need to modify the script to do two things:\n\n\n\tStore the original src in a data- attribute so we can access it later\n\tAdd an event listener so we can detect the end of the bumper playing, and load the original video back in\n\n\nAs we need to loop through the videos this time to add the event listener, I\u2019ve moved the .load() call into that loop. It\u2019s a bit more efficient to call it only once after modifying all the video\u2019s sources.\n\n<script>\nvar videos, sources, i, l, orig;\nsources = document.getElementsByTagName('source');\nfor(i=0, l=sources.length; i<l; i++) {\n orig = sources[i].getAttribute('src');\n sources[i].setAttribute('data-orig', orig);\n sources[i].setAttribute('src', orig.replace(/(w+).(w+)/, 'bumper-in.$2'));\n}\nvideos = document.getElementsByTagName('video');\nfor(i=0, l=videos.length; i<l; i++) {\n videos[i].load();\n videos[i].addEventListener('ended', function(){\n sources = this.getElementsByTagName('source');\n for(i=0, l=sources.length; i<l; i++) {\n orig = sources[i].getAttribute('data-orig');\n if (orig) {\n sources[i].setAttribute('src', orig);\n }\n sources[i].setAttribute('data-orig','');\n }\n this.load();\n this.play();\n });\n}\n</script>\n\nAgain, view the example to see the bumper play, followed by our spectacular main feature. 
(That\u2019s my cat, Widget. His interests include sleeping and internet marketing.)\n\nTidying things up\n\nThe final thing to do is add our closing bumper after the main video has played. This involves the following changes:\n\n\n\tWe need to keep track of whether the src has been changed, so we only play the video if it\u2019s changed. I\u2019ve added the modified variable to track this, and it stops us getting into a situation where the video just loops forever.\n\tAdd an else to the event listener, for when the orig is false (so the main feature has been playing) to load in the end bumper. We also check that we\u2019re not already playing the end bumper. Because looping.\n\n\n<script>\nvar videos, sources, i, l, orig, current, modified;\nsources = document.getElementsByTagName('source');\nfor(i=0, l=sources.length; i<l; i++) {\n orig = sources[i].getAttribute('src');\n sources[i].setAttribute('data-orig', orig);\n sources[i].setAttribute('src', orig.replace(/(w+).(w+)/, 'bumper-in.$2'));\n}\nvideos = document.getElementsByTagName('video');\nfor(i=0, l=videos.length; i<l; i++) {\n videos[i].load();\n modified = false;\n videos[i].addEventListener('ended', function(){\n sources = this.getElementsByTagName('source');\n for(i=0, l=sources.length; i<l; i++) {\n orig = sources[i].getAttribute('data-orig');\n if (orig) {\n sources[i].setAttribute('src', orig);\n modified = true;\n }else{\n current = sources[i].getAttribute('src');\n if (current.indexOf('bumper-out')==-1) {\n sources[i].setAttribute('src', current.replace(/([w]+).(w+)/, 'bumper-out.$2'));\n modified = true;\n }else{\n this.pause();\n modified = false;\n }\n }\n sources[i].setAttribute('data-orig','');\n }\n if (modified) {\n this.load();\n this.play();\n }\n });\n}\n</script>\n\nYo ho ho, that\u2019s a lot of JavaScript. See it in action \u2013 you should get a bumper, the cat video, and an end bumper.\n\nOf course, this code works fine for demonstrating the principle, but it\u2019s very procedural. Nothing wrong with that, but to do something similar in production, you\u2019d probably want to make the code more modular to ease maintainability. Besides, you may want to use a framework, rather than basic JavaScript. \n\nThe end credits\n\nOne really important principle here is that of progressive enhancement. If the browser doesn\u2019t support JavaScript, the user won\u2019t see your bumper, but they will get the main video. If the browser supports JavaScript but doesn\u2019t allow you to modify the src (as was the case with older versions of iOS), the user won\u2019t see your bumper, but they will get the main video.\n\nIf a search engine or social media bot grabs your page and looks for content, they won\u2019t see your bumper, but they will get the main video \u2013 which is absolutely what you want.\n\nThis means that if the bumper is absolutely crucial, you may still need to cook it into the video. 
However, for many applications, running it dynamically can work quite well.\n\nAs always, it comes down to three things:\n\n\n\tMeasure your audience: know how people access your site\n\tTest the solution: make sure it works for your audience\n\tPlan for failure: it\u2019s the web and that\u2019s how things work \u2018round these parts\n\n\nBut most of all play around with it, have fun and build something awesome.", "year": "2012", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2012-12-01T00:00:00+00:00", "url": "https://24ways.org/2012/html5-video-bumpers/", "topic": "code"} {"rowid": 272, "title": "Crafting the Front-end", "contents": "Much has been spoken and written recently about the virtues of craftsmanship in the context of web design and development. It seems that we as fabricators of the web are finally tiring of seeking out parallels between ourselves and architects, and are turning instead to the fabled specialist artisans.\n\nIdentifying oneself as a craftsman or craftswoman (let\u2019s just say craftsperson from here onward) will likely be a trend of early 2012. In this pre-emptive strike, I\u2019d like to expound on this movement as I feel it pertains to front-end development, and encourage care and understanding of the true qualities of craftsmanship (craftspersonship).\n\nThe core values\n\nI\u2019ll begin by defining craftspersonship. What distinguishes a craftsperson from a technician? Dictionaries tend to define a craftsperson as one who possesses great skill in a chosen field. The badge of a craftsperson for me, though, is a very special label that should be revered and used sparingly, only where it is truly deserved. A genuine craftsperson encompasses a few other key traits, far beyond raw skill, each of which must be learned and mastered.\n\nA craftsperson has: \n\n\n\tAn appreciation of good work, in both the work of others and their own. And not just good as in \u2018hey, that\u2019s pretty neat\u2019, I mean a goodness like a shining purity \u2013 the kind of good that feels right when you see it.\n\tA belief in quality at every level: every facet of the craftsperson\u2019s product is as crucial as any other, without exception, even those normally hidden from view.\n\tVision: an ability to visualize their path ahead, pre-empting the obstacles that may be encountered to plan a route around them.\n\tA preference for simplicity: an almost Bauhausesque devotion to undecorated functionality, with no unjustifiable parts included.\n\tSincerity: producing work that speaks directly to its purpose with flawless clarity.\n\n\nOnly when you become a custodian of such values in your work can you consider calling yourself a craftsperson. Now let\u2019s take a look at some steps we front-end developers can take on our journey of enlightenment toward craftspersonhood.\n\n Speaking of the craftsman\u2019s journey, be sure to watch out for the video of The Standardistas\u2019 stellar talk at the Build 2011 conference titled The Journey, which should be online sometime soon.\n\nBuilding your own toolbox\n\nMy grandfather was a carpenter and trained as a young apprentice under a master. After observing and practising the many foundation theories, principles and techniques of carpentry, he was tasked with creating his own set of woodworking tools, which he would use and maintain throughout his career. 
By going through the process of having to create his own tools, he would be connected at the most direct level with every piece of wood he touched, his tools being his own creations and extensions of his own skilled hands. The depth of his knowledge of these tools must have surpassed the intricate as he fathered, used, cleaned and repaired them, day in and day out over many years.\n\nAnd so it should be, ideally, with all crafts. We must understand our tools right down to the most fundamental level. I firmly believe that a level of true craftsmanship cannot be reached while there exists a layer that remains not wholly understood between a creator and his canvas. Of course, our tools as front-end developers are somewhat more complex than those of other crafts \u2013 it may seem reasonable to require that a carpenter create his or her own set of chisels, but somewhat less so to ask a front-end developer to code their own CSS preprocessor, or design their own computer.\n\nHowever, it is still vitally important that you understand how your tools work. This is particularly critical when it comes to things like preprocessors, libraries and frameworks which aim to save you time by automating common processes and functions. For the most part, anything that saves you time is a Good Thing\u2122 but it cannot be stressed enough that using tools like these in earnest should be avoided until you understand exactly what they are doing for you (and, to an extent, how they are doing it). \n\nIn particular, you must understand any drawbacks to using your tools, and any shortcuts they may be taking on your behalf. I\u2019m not suggesting that you steer clear of paid work until you\u2019ve studied each of jQuery\u2019s 9,266 lines of JavaScript source code but, all levity aside, it will further you on your journey to look at interesting or relevant bits of jQuery, and any other libraries you might want to use. Such libraries often directly link to corresponding sections of their source code on sites like GitHub from their official documentation. Better yet, they\u2019re almost always written in high level languages (easy to read), so there\u2019s no excuse not to don your pith helmet and go on something of an exploration. Any kind of tangential learning like this will drive you further toward becoming a true craftsperson, so keep an open mind and always be ready to step out of your comfort zone.\n\nDowntime and tool honing\n\nWith any craft, it is essential to keep your tools in good condition, and a good idea to stay up-to-date with the latest equipment. This is especially true on the web, which, as we like to tell anyone who is still awake more than a minute after asking what it is that we do, advances at a phenomenal pace. A tool or technique that could be considered best practice this week might be the subject of haughty derision in a comment thread within six months.\n\nI have little doubt that you already spend a chunk of time each day keeping up with the latest material from our industry\u2019s finest Interblogs and Twittertubes, but do you honestly put aside time to collect bookmarks and code snippets from things you read into a slowly evolving toolbox? At @media in 2009, Simon Collison delivered a candid talk on his \u2018Ultimate Package\u2019. Those of us who didn\u2019t flee the room anticipating a newfound and unwelcome intimacy with the contents of his trousers were shown how he maintained his own toolkit \u2013 a collection of files and folders all set up and ready to go for a new project. 
By maintaining a toolkit in this way, he has consistency across projects and a dependable base upon which to learn and improve.\n\nThe assembly and maintenance of such a personalized and familiar toolkit is probably as close as we will get to emulating the tool making stage of more traditional craft trades. Keep a master copy of your toolkit somewhere safe, making copies of it for new projects. When you learn of a way in which part of it can be improved, make changes to the master copy.\n\nSimplicity through modularity\n\nI believe that the user interfaces of all web applications should be thought of as being made up primarily of modular components. Modules in this context are patterns in design that appear repeatedly throughout the app. These can be small collections of elements, like a user profile summary box (profile picture, username, meta data), as well as atomic elements such as headings and list items.\n\nWell-crafted front-end architectures have the ability to support this kind of repeating pattern as modules, with as close to no repetition of CSS (or JavaScript) as possible, and as close to no variations in HTML between instances as possible.\n\nOne of the most fundamental and well known tenets of software engineering is the DRY rule \u2013 don\u2019t repeat yourself. It requires that \u201cevery piece of knowledge must have a single, unambiguous, authoritative representation within a system.\u201d \n\nAs craftspeople, we must hold this rule dear and apply it to the modules we have identified in our site designs. The moment you commit a second style definition for a module, the quality of your output (the front-end code) takes a huge hit. There should only ever be one base style definition for each distinct module or component. Keep these in a separate, sacred place in your CSS. I use a _modules.scss Sass include file, imported near the top of my main CSS files.\n\nBe sure, of course, to avoid making changes to this file lightly, as the smallest adjustment can affect multiple pages (hint: keep a structure list of which modules are used on which pages). Avoid the inevitable temptation to duplicate code late in the project. Sticking to this rule becomes more important the more complex the codebase becomes.\n\nIf you can stick to this rule, using sensible class names and consistent HTML, you can reach a joyous, self-fulfilling plateau stage in each project where you are assembling each interface from your own set of carefully crafted building blocks.\n\nOld school markup\n\nLet\u2019s take a step back. Before we fret about creating a divinely pure modular CSS framework, we need to know the site\u2019s design and what it is made of. The best way to gain this knowledge is to go old school. Print out every comp, mockup, wireframe, sketch or whatever you have. If there are sections of pages that are hidden until some user action takes place, or if the page has multiple states, be sure that you have everything that could become visible to the user on paper.\n\nOnce you have your wedge of paper designs, lay out all the pages on the floor, or stick them to the wall if you can, arranging them logically according to the site hierarchy, by user journey, or whatever guidelines make most sense to you. Once you have the site laid out before you, study it for a while, familiarizing yourself with every part of every interface. 
This will eliminate nasty surprises late in the project when you realize you\u2019ve duplicated something, or left an interface on the drawing board altogether.\n\nNow that you know the site like it\u2019s your best friend, get out your pens or pencils of choice and attack it. Mark it up like there\u2019s no tomorrow. Pretend you\u2019re a spy trying to identify communications from an enemy network hiding their messages in newspapers. Look for patterns and similarities, drawing circles around them. These are your modules. Start also highlighting the differences between each instance of these modules, working out which is the most basic or common type that will become the base definition from which all other representations are extended.\n\nThis simple but empowering exercise will equip you for your task of actually crafting, instead of just building, the front-end. Without the knowledge gained from this kind of research phase, you will be blundering forward, improvising as best you can, but ultimately making quality-compromising mistakes that could have been avoided.\n\nFor more on this theme, read Anna Debenham\u2019s Front-end Style Guides which recommends a similar process, and the sublime idea of extending this into a guide to refer to during development and beyond.\n\nDesign homogeneity\n\nMoving forward again, you now have your modules defined and things are looking good. I mentioned that many instances of these modules will carry minor differences. These differences must be given significant thinking time, and discussion time with your designer(s).\n\nIt should be common knowledge by now that successful software projects are not the product of distinct design and build phases with little or no bidirectional feedback. The crucial nature of the designer-developer relationship has been covered in depth this year by Paul Robert Lloyd, and a joint effort from both teams throughout the project lifecycle is pivotal to your ability to craft and ship successful products.\n\nThis relationship comes into play when you\u2019re well into the development of the site, and you start noticing these differences between instances of modules (they\u2019ll start to stand out very clearly to you and your carefully regimented modular CSS system). Before you start overriding your base styles, question the differences with the designer to work out why they exist. Perhaps they are required and are important to their context, but perhaps they were oversights from earlier design revisions, or simple mistakes.\n\nThe craftsperson\u2019s gland\n\nAs you grow towards the levels of expertise and experience where you can proudly and honestly consider yourself a craftsperson, you will find that you steadily develop what initially feels like a kind of sixth sense. I think of it more as a new hormonal gland, secreting into your bloodstream a powerful messenger chemical that can either reward or punish your brain. This gland is connected directly to your core understanding of what good quality work looks and feels like, an understanding that itself improves with experience. \n\nThis gland will make itself known to you in two ways. First, when you solve a problem in a beautifully elegant way with clean and unobtrusive code that looks good and just feels right, your craftsperson\u2019s gland will ooze something delicious that makes your brain and soul glow from the inside out. 
You will beam triumphantly at the succinct lines of code on your computer display before bounding outside with a spring in your step to swim up glittering rainbows and kiss soft fluffy puppies.\n\nThe second way that you may become aware of your craftsperson\u2019s gland, though, is somewhat less pleasurable. In an alternate reality, your parallel self is faced with the same problem, but decides to take a shortcut and get around it by some dubious means \u2013 the kind of technical method that the words hack, kludge and bodge are reserved for. As soon as you have done this, or even as you are doing it, your craftsperson\u2019s gland will damn well let you know that you took the wrong fork in the road. As your craftsperson\u2019s gland begins to secrete a toxic pus, you will at first become entranced into a vacant stare at the monstrous mess you are considering unleashing upon your site\u2019s visitors, before writhing in the horrible agony of an itch that can never be scratched, and a feeling of being coated with the devil\u2019s own deep and penetrating filth that no shower will ever cleanse.\n\nPerhaps I exaggerate slightly, but it is no overstatement to suggest that you will find yourself being guided by proverbial angels and demons perched on opposite shoulders, or a whispering voice inside your head. If you harness this sense, sharpening it as if it were another tool in your kit and letting it guide or at least advise your decision making, you will transcend the rocky realm of random trial and error when faced with problems, and tend toward the right answers instinctively.\n\nThis gland can also empower your ability to assess your own work, becoming a judge before whom all your work is cross-examined. A good craftsperson regularly takes a step back from their work, and questions every facet of their product for its precise alignment with their core values of quality and sincerity, and even the very necessity of each component.\n\nThe wrapping\n\nBy now, you may be thinking that I take this kind of thing far too seriously, but to terrify you further, I haven\u2019t even shared the half of it. Hopefully, though, this gives you an idea of the kind of levels of professionalism and dedication that it should take to get you on your way to becoming a craftsperson. It\u2019s a level of accomplishment and ability toward which we all should strive, both for our personal fulfilment and the betterment of the products we use daily. I look forward to seeing your finely crafted work throughout 2012.", "year": "2011", "author": "Ben Bodien", "author_slug": "benbodien", "published": "2011-12-24T00:00:00+00:00", "url": "https://24ways.org/2011/crafting-the-front-end/", "topic": "process"} {"rowid": 273, "title": "There\u2019s No Formula for Great Designs", "contents": "Before he combined them with fluid images and CSS3 media queries to coin responsive design, Ethan Marcotte described fluid grids \u2014 one of the most enjoyable parts of responsive design. Enjoyable that is, if you like working with math(s). But fluid grids aren\u2019t perfect and, unless we\u2019re careful when applying them, they can sometimes result in a design that feels disconnected.\n\nRecapping fluid grids\n\nIf you haven\u2019t read Ethan\u2019s Fluid Grids, now would be a good time to do that. It centres around a simple formula for converting pixel widths to percentages:\n\n(target \u00f7 context) \u00d7 100 = result\n\nHow does that work in practice? 
Well, take that Fireworks or Photoshop comp you\u2019re working on (I call them static design visuals, or just visuals.) Of course, everything on that visual \u2014 column divisions, inline images, navigation elements, everything \u2014 is measured in pixels. Now:\n\n\n\tPick something in the visual and measure its width. That\u2019s our target.\n\tTake that target measurement and divide it by the width of its parent (context).\n\tMultiply what you\u2019ve got by 100 (shift two decimal places).\n\tWhat you\u2019re left with is a percentage width to drop into your style sheets.\n\n\nFor example, divide this 300px wide sidebar division by its 948px parent and then multiply by 100: your original 300px is neatly converted to 31.646%.\n\n.content-sub {\nwidth : 31.646%; /* 300px \u00f7 948px = .31646 */ }\n\nThat formula makes it surprisingly simple for even die-hard fixed width aficionados to convert their visuals to percentage-based, fluid layouts.\n\nIt\u2019s a handy formula for those who still design using static visuals, and downright essential for those situations where one person in an organization designs in Fireworks or Photoshop and another develops with CSS. Why?\n\nWell, although I think that designing in a browser makes the best sense \u2014 particularly when designing for multiple devices \u2014 I\u2019ll wager most designers still make visuals in Fireworks or Photoshop and use them for demonstrations and get feedback and sign-off. That\u2019s OK. If you haven\u2019t made the transition to content-out designing in a browser yet, the fluid grids formula helps you carry on pushing pixels a while longer.\n\nYou can carry on moving pixel width measurements from your visuals to your style sheets, too, in the same way you always have. You can be precise to the pixel and even apply a grid image as a CSS background to help you keep everything lined up perfectly.\n\nOnce you\u2019re done, and the fixed width layout in the browser matches your visual, loop back through your style sheets and convert those pixels to percentages using the fluid grids formula. With very little extra work, you\u2019ll have a fluid implementation of your fixed width layout.\n\nThe fluid grids formula is simple and incredibly effective, but not long after I started working responsively I realized that the formula shouldn\u2019t (always) be a one-fix, set-and-forget calculation. I noticed that unless we compensate for problems it sometimes creates, the result can be a disconnected design.\n\nStaying connected\n\nGood design relies on connectedness, a feeling of natural balance between elements and the grid they\u2019re placed on. Give an element greater prominence or position in a visual hierarchy and you can fundamentally alter the balance and sometimes the meaning of a design.\n\nDifferent from a browser\u2019s page zooming feature \u2014 where images, text and overall layout change size by the same ratio \u2014 fluid grids flex a layout in response to a window or device width. Columns expand and contract, and within them fluid media (images and videos) can also change size. This can be one of the most impressive demonstrations of responsive design.\n\nBut not every element within a fluid grid can change size along with the window or device width. 
For example, type size and leading won\u2019t change along with a column\u2019s width.\n\nWhen columns and elements within them change width, all too easily a visual hierarchy can be broken and along with it the relationship between element sizes and the outer window or viewport. This can happen quickly if you make just one set of fluid grid calculations and use those percentages across every screen width, from smartphones through tablets and up to large desktops.\n\nThe answer? Make several sets of fluid grids calculations, each one at a significant window or device width breakpoint. Then apply those new percentages, when needed, to help keep elements in proportion and maintain balance and connectedness. Here\u2019s how I work.\n\nAvoiding disconnection\n\nI\u2019ve never been entirely happy with grid frameworks such as the 960 Grid System, so I start almost every project by creating a custom grid to inform my layout decisions. Here\u2019s a plain version of a grid from a recent project that I\u2019ll use as an illustration.\n\nThis project\u2019s grid comprises 84px columns and 24px gutters. This creates an odd number of columns at common tablet and desktop widths, and allows for 300px fixed width assets \u2014 useful when I need to fit advertising into a desktop layout\u2019s sidebar.\n\n Showing common advertising sizes (Larger image)\n\nFor this project I chose three 320 and Up breakpoints above 320px and, after placing as many columns as would fit those breakpoint widths, I derived three content widths:\n\n\n\t\t\n\t\t\tBreakpoint \n\t\t\tColumns \n\t\t\tContent width \n\t\t\n\t\t\n\t\t\t768px \n\t\t\t 7 \n\t\t\t 732px \n\t\t\n\t\t\n\t\t\t992px \n\t\t\t 9 \n\t\t\t 948px \n\t\t\n\t\t\n\t\t\t1,382px \n\t\t\t 13 \n\t\t\t 1,380px \n\t\t\n\n\nHere\u2019s my grid again, this time with pixel measurements and breakpoints overlaid.\n\n Showing pixel measurements and breakpoints (Larger image)\n\nNow cast your mind back to the fluid grids calculation I made earlier. I divided a 300px element by 948px and arrived at 31.646%. For some elements it\u2019s possible to use that percentage across all screen widths, but others will feel too small in relation to a narrower 768px and too large inside 1,380px.\n\nTo help maintain connectedness, I make a set of fluid grids calculations based on each of the content widths I established earlier. Now I can shift an element\u2019s percentage width up or down when I switch to a new breakpoint and content width. For example:\n\n\n\t300px is 40.984% of 732px\n\t300px is 31.646% of 948px\n\t300px is 21.739% of 1,380px\n\n\nI\u2019ll add all those fluid grid percentages to my grid image and save it for quick reference.\n\n Showing percentages at all breakpoints (Larger image)\n\nThen I can apply those different percentage widths to elements at each breakpoint using CSS3 media queries. For example, that sidebar division again:\n\n/* 732px, 7-column width */\n\n@media only screen and (min-width: 768px) {\n\n .content-sub {\n width : 40.983%; /* 300px \u00f7 732px = .40983 */ }\n\n}\n\n/* 948px, 9-column width */\n@media only screen and (min-width: 992px) {\n\n .content-sub {\n width : 31.645%; /* 300px \u00f7 948px = .31645 */ }\n\n}\n\n/* 1380px, 13-column width */\n@media only screen and (min-width: 1382px) {\n\n .content-sub {\n width : 21.739%; /* 300px \u00f7 1380px = .21739 */ }\n\n}\n\nThe number of changes you make to a layout at different breakpoints will, of course, depend on the specifics of the design you\u2019re working on. 
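\n\nIf you\u2019d rather not repeat that arithmetic by hand for every element at every breakpoint, a few lines of script can do the sums for you. This is just a convenience sketch (the toPercent function and the hard-coded widths are illustrations lifted from the examples above, not part of any framework):\n\n// Convert a pixel target to a percentage of its context: (target \u00f7 context) \u00d7 100\nfunction toPercent( target, context ) {\n return ( ( target / context ) * 100 ).toFixed(3) + '%';\n}\n\n// The 300px sidebar against each of the three content widths\n[ 732, 948, 1380 ].forEach(function( context ) {\n console.log( '300px is ' + toPercent( 300, context ) + ' of ' + context + 'px' );\n});\n// Logs 40.984%, 31.646% and 21.739% respectively\n\n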
Yes, this is additional work, but the result will be a layout that feels better balanced and within which elements remain in harmony with each other while they respond to new screen or device widths.\n\nPutting the design in responsive web design\n\nUntil now, many of the conversations around responsive web design have been about aspects of technical implementation, rather than design. I believe we\u2019re only beginning to understand what\u2019s involved in designing responsively. In future, we\u2019ll likely be making design decisions not just about proportions but also about responsive typography. We\u2019ll also need to learn how to adapt our designs to device characteristics such as touch targets and more.\n\nSometimes we\u2019ll make decisions to improve function, other times because they make a design \u2018feel\u2019 right. You\u2019ll know when you\u2019ve made a right decision. You\u2019ll feel it.\n\nAfter all, there really is no formula for making great designs.", "year": "2011", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2011-12-23T00:00:00+00:00", "url": "https://24ways.org/2011/theres-no-formula-for-great-designs/", "topic": "ux"} {"rowid": 270, "title": "From Side Project to Not So Side Project", "contents": "In the last article I wrote for 24 ways, back in 2009, I enthused about the benefits of having a pet project, suggesting that we should all have at least one so that we could collaborate with our friends, escape our day jobs, fulfil our own needs, help others out, raise our profiles, make money, and \u2014 most importantly \u2014 have fun. I don\u2019t think I need to offer any further persuasions: it seems that designers and developers are launching their own pet projects left, right and centre. This makes me very happy.\n\nHowever, there still seems to be something of a disconnect between having a side project and turning it into something that is moderately successful; in particular, the challenge of making enough money to sustain the project and perhaps even elevating it from the sidelines so that it becomes something not so on the side at all.\n\nBefore we even begin this, let\u2019s spend a moment talking about money, also known as\u2026\n\nEvil, nasty, filthy money\n\nOver the last couple of years, I\u2019ve started referring to myself as an accidental businessman. I say accidental because my view of the typical businessman is someone who is driven by money, and I usually can\u2019t stand such people. Those who are motivated by profit, obsessed with growth, and take an active interest in the world\u2019s financial systems don\u2019t tend to be folks with whom I share a beer, unless it\u2019s to pour it over them. Especially if they\u2019re wearing pinstriped suits.\n\nThat said, we all want to make money, don\u2019t we? And most of us want to make a relatively decent amount, too. I don\u2019t think there\u2019s any harm in admitting that, is there? Hello, I\u2019m Elliot and I\u2019m a capitalist.\n\nThe key is making money from doing what we love. For most people I know in our community, we\u2019ve already achieved that \u2014 I\u2019m hard-pressed to think of anyone who isn\u2019t extremely passionate about working in our industry and I think it\u2019s one of the most positive, unifying benefits we enjoy as a group of like-minded people \u2014 but side projects usually arise from another kind of passion: a passion for something other than what we do as our day jobs. 
Perhaps it\u2019s because your clients are driving you mental and you need a break; perhaps it\u2019s because you want to create something that is truly your own; perhaps it\u2019s because you\u2019re sick of seeing your online work disappear so fast and you want to try your hand at print in order to make a more permanent mark.\n\nThe three factors I listed there led me to create 8 Faces, a printed magazine about typography that started as a side project and is now a very significant part of my yearly output and income.\n\nLike many things that prove fruitful, 8 Faces\u2019 success was something of an accident, too. For a start, the magazine was never meant to be profitable; its only purpose at all was to scratch my own itch. Then, after the first issue took off and I realized how much time I needed to spend in order to make the next one decent, it became clear that I would have to cover more than just the production costs: I\u2019d have to take time out from client work as well. Doing this meant I\u2019d have to earn some money. Probably not enough to equate to the exact amount of time lost when I could be doing client work (not that you could ever describe time as being lost when you work on something you love), but enough to survive; for me to feel that I was getting paid while doing all of the work that 8 Faces entailed. The answer was to raise money through partnerships with some cool companies who were happy to be associated with my little project.\n\nA sustainable business model\n\nBusiness model! I can\u2019t believe I just wrote those words! But a business model is really just a loose plan for how not to screw up. And all that stuff I wrote in the paragraph above about partnering with companies so I could get some money in while I put the magazine together? Well, that\u2019s my business model. \n\nIf you\u2019re making any product that has some sort of production cost, whether that\u2019s physical print run expenses or up-front dev work to get an app built, covering those costs before you even release your product means that you\u2019ll be in profit from the first copy you sell. This is no small point: production expenses are pretty much the only cost you\u2019ll ever need to recoup, so having them covered before you launch anything is pretty much the best possible position in which you could place yourself. Happy days, as Jamie Oliver would say.\n\nObtaining these initial funds through partnerships has another benefit. Sure, it\u2019s a form of advertising but, done right, your partners can potentially provide you with great content, too. In the case of 8 Faces, the ads look as nice as the rest of the magazine, and a couple of our partners also provide proper articles: genuinely meaningful, relevant, reader-pleasing articles at that. You\u2019d be amazed at how many companies are willing to become partners and, as the old adage goes, if you don\u2019t ask, you don\u2019t get.\n\nWith profit comes responsibility\n\nDon\u2019t forget about the responsibility you have to your audience if you engage in a relationship with a partner or any type of advertiser: although I may have freely admitted my capitalist leanings, I\u2019m still essentially a hairy hippy, and I feel that any partnership should be good for me as a publisher, good for the partner and \u2014 most importantly \u2014 good for the reader. Really, the key word here is relevance, and that\u2019s where 99.9% of advertising fails abysmally. 
\n\n(99.9% is not a scientific figure, but you know what I\u2019m on about.)\n\nThe main grey area when a side project becomes profitable is how you share that profit, partly because \u2014 in my opinion, at least \u2014 the transition from non-profitable side project to relatively successful source of income can be a little blurred. Asking for help for nothing when there\u2019s no money to be had is pretty normal, but sometimes it\u2019s easy to get used to that free help even once you start making money. I believe the best approach is to ask for help with the promise that it will always be rewarded as soon as there\u2019s money available. (Oh, god: this sounds like one of those nightmarish client proposals. It\u2019s not, honest.) If you\u2019re making something cool, people won\u2019t mind helping out while you find your feet.\n\nEvents often think that they\u2019re exempt from sharing profit. Perhaps that\u2019s because many event organizers think they\u2019re doing the speakers a favour rather than the other way around (that\u2019s a whole separate article), but it\u2019s shocking to see how many people seem to think they can profit from content-makers \u2014 speakers, for example \u2014 and yet not pay for that content. It was for this reason that Keir and I paid all of our speakers for our Insites: The Tour side project, which we ran back in July. We probably could\u2019ve got away without paying them, especially as the gig was so informal, but it was the right thing to do.\n\nIn conclusion: money as a by-product\n\nLet\u2019s conclude by returning to the slightly problematic nature of money, because it\u2019s the pivot on which your side project\u2019s success can swing, regardless of whether you measure success by monetary gain. I would argue that success has nothing to do with profit \u2014 it\u2019s about you being able to spend the time you want on the project. Unfortunately, that is almost always linked to money: money to pay yourself while you work on your dream idea; money to pay for more servers when your web app hits the big time; money to pay for efforts to get the word out there. The key, then, is to judge success on your own terms, and seek to generate as much money as you see fit, whether it\u2019s purely to cover your running costs, or enough to buy a small country. There\u2019s nothing wrong with profit, as long as you\u2019re ethical about it. (Pro tip: if you\u2019ve earned enough to buy a small country, you\u2019ve probably been unethical along the way.)\n\nThe point at which individuals and companies fail \u2014 in the moral sense, for sure, but often in the competitive sense, too \u2014 is when money is the primary motivation. It should never be the primary motivation. If you\u2019re not passionate enough about something to do it as an unprofitable side project, you shouldn\u2019t be doing it all. \n\nEarning money should be a by-product of doing what you love. And who doesn\u2019t want to spend their life doing what they love?", "year": "2011", "author": "Elliot Jay Stocks", "author_slug": "elliotjaystocks", "published": "2011-12-22T00:00:00+00:00", "url": "https://24ways.org/2011/from-side-project-to-not-so-side-project/", "topic": "business"} {"rowid": 267, "title": "Taming Complexity", "contents": "I\u2019m going to step into my UX trousers for this one. I wouldn\u2019t usually wear them in public, but it\u2019s Christmas, so there\u2019s nothing wrong with looking silly.\n\nAnyway, to business. 
Wherever I roam, I hear the familiar call for simplicity and the denouncement of complexity. I read often that the simpler something is, the more usable it will be. We understand that simple is hard to achieve, but we push for it nonetheless, convinced it will make what we build easier to use. Simple is better, right?\n\nWell, I\u2019ll try to explore that. Much of what follows will not be revelatory to some but, like all good lessons, I think this serves as a welcome reminder that as we live in a complex world it\u2019s OK to sometimes reflect that complexity in the products we build.\n\nMyths and legends\n\nLess is more, we\u2019ve been told, ever since master of poetic verse Robert Browning used the phrase in 1855. Well, I\u2019ve conducted some research, and it appears he knew nothing of web design. Neither did modernist architect Ludwig Mies van der Rohe, a later pedlar of this worthy yet contradictory notion. Broad is narrow. Tall is short. Eggs are chips. See: anyone can come up with this stuff.\n\nTo paraphrase Einstein, simple doesn\u2019t have to be simpler. In other words, simple doesn\u2019t dictate that we remove the complexity. Complex doesn\u2019t have to be confusing; it can be beautiful and elegant. On the web, complex can be necessary and powerful. A website that simplifies the lives of its users by offering them everything they need in one site or screen is powerful. For some, the greater the density of information, the more useful the site.\n\nIn our decision-making process, principles such as Occam\u2019s razor (in a nutshell: simple is better than complex) are useful, but simple is for the user to determine through their initial impression and subsequent engagement. What appears simple to me or you might appear very complex to someone else, based on their own mental model or needs. We can aim to deliver simple, but they\u2019ll be the judge.\n\nAs a designer, developer, content alchemist, user experience discombobulator, or whatever you call yourself, you\u2019re often wrestling with a wealth of material, a huge number of features, and numerous objectives. In many cases, much of that stuff is extraneous, and goes in the dustbin. However, it can be just as likely that there\u2019s a truckload of suggested features and content because it all needs to be there. Don\u2019t be afraid of that weight.\n\nIn the right hands, less can indeed mean more, but it\u2019s just as likely that less can very often lead to, well\u2026 less.\n\nComplexity is powerful\n\nSimple is the ability to offer a powerful experience without overwhelming the audience or inducing information anxiety. Giving them everything they need, without having them ferret off all over a site to get things done, is important.\n\nIt\u2019s useful to ask throughout a site\u2019s lifespan, \u201cdoes the user have everything they need?\u201d It\u2019s so easy to let our designer egos get in the way and chop stuff out, reduce down to only the things we want to see. That benefits us in the short term, but compromises the audience long-term.\n\nThe trick is not to be afraid of complexity in itself, but to avoid creating the perception of complexity. Give a user a flight simulator and they\u2019ll crash the plane or jump out. 
Give them everything they need and more, but make it feel simple, and you\u2019re building a relationship, empowering people.\n\nThis can be achieved carefully with what some call gradual engagement, and often the sensible thing might be to unleash complexity in carefully orchestrated phases, initially setting manageable levels of engagement and interaction, gradually increasing the inherent power of the product and fostering an empowered community.\n\nThe design aesthetic\n\nHere\u2019s a familiar scenario: the client or project lead gets overexcited and skips most of the important decision-making, instead barrelling straight into a bout of creative direction Tourette\u2019s. Visually, the design needs to be minimal, white, crisp, full of white space, have big buttons, and quite likely be \u201cclean\u201d. Of course, we all like our websites to be clean as that\u2019s more hygienic.\n\nBut what do these words even mean, really? Early in a project they\u2019re abstract distractions, unnecessary constraints. This premature narrowing forces us to think much more about throwing stuff out rather than acknowledging that what we\u2019re building is complex, and many of the components perhaps necessary.\n\nSimple is not a formula. It cannot be achieved just by using a white background, by throwing things away, or by breathing a bellowsful of air in between every element and having it all float around in space. Simple is not a design treatment. Simple is hard. Simple requires deep investigation, a thorough understanding of every aspect of a project, in line with the needs and expectations of the audience.\n\nRecognizing this helps us empathize a little more with those most vocal of UX practitioners. They usually appreciate that our successes depend on a thorough understanding of the user\u2019s mental models and expected outcomes. I personally still consider UX people to be web designers like the rest of us (mainly to wind them up), but they\u2019re web designers that design every decision, and by putting the user experience at the heart of their process, they have a greater chance of finding simplicity in complexity. The visual design aesthetic \u2014 the fa\u00e7ade \u2014 is only a part of that.\n\nDivide and conquer\n\nI\u2019m currently working on an app that\u2019s complex in architecture, and complex in ambition. We\u2019ll be releasing in carefully orchestrated private phases, gradually introducing more complexity in line with the unavoidably complex nature of the objective, but my job is to design the whole, the complete system as it will be when it\u2019s out of beta and beyond.\n\nI\u2019ve noticed that I\u2019m not throwing much out; most of it needs to be there. Therefore, my responsibility is to consider interesting and appropriate methods of navigation and bring everything together logically.\n\nI\u2019m using things like smart defaults, graphical timelines and colour keys to make sense of the complexity, techniques that are sympathetic to the content. They act as familiar points of navigation and reference, yet are malleable enough to change subtly to remain relevant to the information they connect. It\u2019s really OK to have a lot of stuff, so long as we make each component work smartly.\n\nIt\u2019s a divide and conquer approach. 
By finding simplicity and logic in each content bucket, I\u2019ve made more sense of the whole, allowing me to create key layouts where most of the simplified buckets are collated and sometimes combined, providing everything the user needs and expects in the appropriate places.\n\nI\u2019m also making sure I don\u2019t reduce the app\u2019s power. I need to reflect the scale of opportunity, and provide access to or knowledge of the more advanced tools and features for everyone: a window into what they can do and how they can help. I know it\u2019s the minority who will be actively building the content, but the power is in providing those opportunities for all.\n\nMuch of this will be familiar to the responsible practitioners who build websites for government, local authorities, utility companies, newspapers, magazines, banking, and we-sell-everything-ever-made online shops. Across the web, there are sites and tools that thrive on complexity.\n\nAlas, the majority of such sites have done little to make navigation intuitive, or empower audiences. Where we can make a difference is by striving to make our UIs feel simple, look wonderful, not intimidating \u2014 even if they\u2019re mind-meltingly complex behind that fa\u00e7ade.\n\nEmbrace, empathize and tame\n\nSo, there are loads of ways to exploit complexity, and make it seem simple. I\u2019ve hinted at some methods above, and we\u2019ve already looked at gradual engagement as a way to make sense of complexity, so that\u2019s a big thumbs-up for a release cycle that increases audience power.\n\nPrior to each and every release, it\u2019s also useful to rest on the finished thing for a while and use it yourself, even if you\u2019re itching to release. \u2018Ready\u2019 often isn\u2019t, and \u2018finished\u2019 never is, and the more time you spend browsing around the sites you build, the more you learn what to question, where to add, or subtract. It\u2019s definitely worth building in some contingency time for sitting on your work, so to speak.\n\nOne thing I always do is squint at my layouts. By squinting, I get a sort of abstract idea of the overall composition, and general feel for the thing. It makes my face look stupid, but helps me see how various buckets fit together, and how simple or complex the site feels overall.\n\nI mentioned the need to put our design egos to one side and not throw out anything useful, and I think that\u2019s vital. I\u2019m a big believer in economy, reduction, and removing the extraneous, but I\u2019m usually referring to decoration, bells and whistles, and fluff. I wouldn\u2019t ever advocate the complete removal of powerful content from a project roadmap.\n\nAbove all, don\u2019t fear complexity. Embrace and tame it. Work hard to empathize with audience needs, and you can create elegant, playful, risky, surprising, emotive, delightful, and ultimately simple things.", "year": "2011", "author": "Simon Collison", "author_slug": "simoncollison", "published": "2011-12-21T00:00:00+00:00", "url": "https://24ways.org/2011/taming-complexity/", "topic": "ux"} {"rowid": 277, "title": "Raising the Bar on Mobile", "contents": "One of the primary challenges of designing for mobile devices is that screen real estate is often in limited supply. Through the advocacy of Luke W and others, we\u2019ve drawn comfort from the idea that this constraint ends up benefiting users and designers alike, from obvious advantages like portability and reach, to influencing our content strategy decisions through focus and restraint. 
But that doesn\u2019t mean we shouldn\u2019t take advantage of every last pixel of that screen we can snag!\n\nAs anyone who has designed a website for use on a smartphone can attest, there\u2019s an awful lot of space on mobile screens dedicated to browser functions that would be better off toggled out of view. Unfortunately, the visibility of some of these elements is beyond our control, such as the buttons fixed to the bottom of the viewport in iOS\u2019s Safari and the WebOS browser. However, in many devices, the address bar at the top can be manually hidden, and its absence frees up enough pixel room for a large, impactful heading, a critical piece of navigation, or even just a little more white space to air things out.\n\nSo, as my humble contribution to this most festive of web publications, today I\u2019ll dig into the approach I used to hide the address bar in a browser-agnostic fashion for sites like BostonGlobe.com, and the jQuery Mobile framework.\n\nSurveying the land\n\nFirst, let\u2019s assess the chromes of some popular, current mobile browsers. For example purposes, the following screen-captures feature the homepage of the Boston Globe site, without any address-bar-hiding logic in place.\n\nNote: these captures are just mockups \u2013 actual experience on these platforms may vary.\n\n On the left is iOS5\u2019s Safari (running on iPhone), and on the right is Windows Phone 7 (pre-Mango).\n\n BlackBerry 7 (left), and Android 2.3 (right).\n\n WebOS (left), Opera Mini (middle), and Opera Mobile (right).\n\nSome browsers, such the default browsers on WebOS and BlackBerry 5, hide the bar automatically without any developer intervention, but many of them don\u2019t. Of these, we can only manually hide the address bar on iOS Safari and Android (according to Opera Web Opener, Mike Taylor, some discussion is underway for support in Opera Mini and Mobile as well, which would be great!). This is unfortunate, but iOS and Android are incredibly popular, so let\u2019s direct our focus there.\n\nGreat API, or greatest API?\n\nAs it turns out, iOS and Android not only allow you to hide the address bar, they use the same JavaScript method to do so, too (this shouldn\u2019t be surprising, given that they are both WebKit browsers, but nothing expected happens in mobile). However, the method they use is not exactly intuitive. You might set out looking for a JavaScript API dedicated to this purpose, like, say, window.toolbar.hide(), but alas, to hide the address bar you need to use the window.scrollTo method!\n\nwindow.scrollTo(0, 0);\n\nThe scrollTo method is not new, it\u2019s just this particular use of it that is. For the uninitiated, scrollTo is designed to scroll a document to a particular set of coordinates, assuming the document is large enough to scroll to that spot. The method accepts two arguments: a left coordinate; and a top coordinate. It\u2019s both simple and supported well pretty much everywhere. In iOS and Android, these coordinates are calculated from the top of the browser\u2019s viewport, just below the address bar (interestingly, it seems that some platforms like BlackBerry 6 treat the top of the browser chrome as 0 instead, meaning the page content is closer to 20px from the top).\n\nAnyway, by passing the coordinates 0, 0 to the scrollTo method, the browser will jump to the top of the page and pull the address bar out of view! 
Of course, if a quick call to scrollTo was all we need to do to hide the address bar in iOS and Android, this article would be pretty short, and nothing new. Unfortunately, the first issue we need to deal with is that this method alone will not usually do the trick: it must be called after the page has finished loading.\n\nThe browser gives us a load event for just that purpose, so we\u2019ll wrap our scrollTo method in it and continue on our merry way! We\u2019ll use the standard, addEventListener method to bind the the load event, passing arguments for event name load, and a callback function to execute when the event is triggered.\n\nwindow.addEventListener(\"load\",function() {\n window.scrollTo(0, 0);\n});\n\nFor the sake of preventing errors in those using browsers that don\u2019t support addEventListener, such as Internet Explorer 8 and under, let\u2019s make sure that method exists before we use it:\n\nif( window.addEventListener ){\n window.addEventListener(\"load\",function() {\n window.scrollTo(0, 0);\n });\n}\n\nNow we\u2019re getting somewhere, but we must also call the method after the load event\u2019s default behavior has been applied. For this, we can use the setTimeout method, delaying its execution to after the load event has run its course.\n\nif( window.addEventListener ){\n window.addEventListener(\"load\",function() {\n setTimeout(function(){\n window.scrollTo(0, 0);\n }, 0);\n });\n}\n\nSweet sugar of Christmas! Hit this demo in iOS and watch that address bar drift up and away!\n\nNot so fast\u2026\n\nWe\u2019ve got a little problem: the approach above does work in iOS but, in some cases, it works a little too well. In the process of applying this behavior, we\u2019ve broken one of the primary tenets of responsible web development: don\u2019t break the browser\u2019s default behaviour. This usability rule of thumb is often violated by developers with even the best of intentions, from breaking the browser\u2019s back button through unrecorded Ajax page refreshes, to fancy momentum touch scrolling scripts that can wreak havoc in all but the most sophisticated of devices. In this case, we\u2019ve prevented the browser\u2019s native support of deep-linking to sections of a page (a hash identifier in the URL matching a page element\u2019s id attribute, for example, http://example.com#contact) from working properly, because our script always scrolls to the top.\n\nTo avoid this collision, we\u2019ll need to detect whether a deep link, or hash, is present in the URL before applying our logic. We can do this by ensuring that the location.hash property is falsey:\n\nif( !window.location.hash && window.addEventListener ){\n window.addEventListener( \"load\",function() {\n setTimeout(function(){\n window.scrollTo(0, 0);\n }, 0);\n });\n}\n\nStill works great! And a quick test using a hash-based URL confirms that our script will not execute when a deep anchor is in play. Now iOS is looking sharp, and we\u2019ve added our feature defensively to avoid conflicts.\n\n\n\nNow, on to Android\u2026\n\nWait. You didn\u2019t expect that we could write code for one browser and be finished, right? Of course you didn\u2019t. I mentioned earlier that Android uses the same method for getting rid of the scrollbar, but I left out the fact that the arguments it prefers vary slightly, but significantly, from iOS. Bah!\n\nDifferering from the earlier logic from iOS, to remove the address bar on Android\u2019s default browser, you need to pass a Y coordinate of 1 instead of 0. 
Aside from being just plain odd, this is particularly unfortunate because to any other browser on the planet, 1px is a very real, however small, distance from the top of the page!\n\nwindow.scrollTo( 0, 1 );\n\nLooks like we\u2019re going to need a fork\u2026\n\nR UA Android?\n\nAt this point, some developers might decide to simply not support this feature in Android, and more determined devs might decide that a quick check of the User Agent string would be a reliable way to determine the browser and tweak the scroll value accordingly. Neither of those decisions would be tragic, but in the spirit of cross-browser and future-friendly development, I\u2019ll propose an alternative.\n\nBy this point, it should be clear that neither of the implementations above offer a particularly intuitive way to hide an address bar. As such, one might be skeptical that these approaches will stick around very long in their present state in either browser. Perhaps at some point, Android will decide to use 0 like iOS, making our lives a little easier, or maybe some new browser will decide to model their address bar hiding method after one of these implementations. In any case, detecting the User Agent only allows us to apply logic based on the known present, and in the world of mobile, let\u2019s face it, the present is already the past.\n\nWriting a check\n\nIn this next step of today\u2019s technique, we\u2019ll apply some logic to quickly determine the behavior model of the browser we\u2019re using, then capitalize on that model \u2013 without caring which browser it happens to come from \u2013 by applying the appropriate scroll distance.\n\nTo do this, we\u2019ll rely on a fortunate side effect of Android\u2019s implementation, which is when you programatically scroll the page to 1 using scrollTo, Android will report that it\u2019s still at 0 because oddly enough, it is! Of course, any other browser in this situation will report a scroll distance of 1. Thus, by scrolling the page to 1, then asking the browser its scroll distance, we can use this artifact of their wacky implementation to our advantage and scroll to the location that makes sense for the browser in play.\n\nGetting the scroll distance\n\nTo pull off our test, we\u2019ll need to ask the browser for its current scroll distance. The methods for getting scroll distance are not entirely standardized across popular browsers, so we\u2019ll need to use some cross-browser logic. The following scroll distance function is similar to what you\u2019d find in a library like jQuery. It checks the few common ways of getting scroll distance before eventually falling back to 0 for safety\u2019s sake (that said, I\u2019m unaware of any browsers that won\u2019t return a numeric value from one of the first three properties).\n\n// scrollTop getter\nfunction getScrollTop(){\n return scrollTop = window.pageYOffset ||\ndocument.compatMode === \"CSS1Compat\" && document.documentElement.scrollTop ||\ndocument.body.scrollTop || 0;\n}\n\nIn order to execute that code above, the body object (referenced here as document.body) will need to be defined already, or we\u2019ll risk an error. 
To determine that it\u2019s defined, we can run a quick timer to execute code as soon as that object is defined and ready for use.\n\nvar bodycheck = setInterval(function(){\n if( document.body ){\n clearInterval( bodycheck );\n //more logic can go here!!\n } \n}, 15 );\n\nAbove, we\u2019ve defined a 15 millisecond interval called bodycheck that checks if document.body is defined and, if so, clears itself of running again. Within that if statement, we can extend our logic further to run other code, such as our check for the scroll distance, defined via the variable scrollTop below:\n\nvar scrollTop,\n bodycheck = setInterval(function(){\n if( document.body ){\n clearInterval( bodycheck );\n scrollTop = getScrollTop();\n } \n}, 15 );\n\nWith this working, we can immediately scroll to 1, then check the scroll distance when the body is defined. If the distance reports 1, we\u2019re likely in a non-Android browser, so we\u2019ll scroll back to 0 and clean up our mess.\n\nwindow.scrollTo( 0, 1 );\n\nvar scrollTop,\n bodycheck = setInterval(function(){\n if( document.body ){\n clearInterval( bodycheck );\n scrollTop = getScrollTop();\n window.scrollTo( 0, scrollTop === 1 ? 0 : 1 );\n } \n}, 15 );\n\nCashing in\n\nAll of the pieces are written now, so all we need to do is combine them with our previous logic for scrolling when the window is loaded, and we\u2019ll have a cross-browser solution of which John Resig would be proud. Here\u2019s our combined code snippet, with some formatting updates rolled in as well:\n\n(function( win ){\n\tvar doc = win.document;\n\n\t// If there\u2019s a hash, or addEventListener is undefined, stop here\n\tif( !location.hash && win.addEventListener ){\n\t\t//scroll to 1\n\t\twindow.scrollTo( 0, 1 );\n\t\tvar scrollTop = 1,\n\t\t\tgetScrollTop = function(){\n\t\t\t\treturn win.pageYOffset || doc.compatMode === \"CSS1Compat\" && doc.documentElement.scrollTop || doc.body.scrollTop || 0;\n\t\t\t},\n\t\t\t//reset to 0 on bodyready, if needed\n\t\t\tbodycheck = setInterval(function(){\n\t\t\t\tif( doc.body ){\n\t\t\t\t\tclearInterval( bodycheck );\n\t\t\t\t\tscrollTop = getScrollTop();\n\t\t\t\t\twin.scrollTo( 0, scrollTop === 1 ? 0 : 1 );\n\t\t\t\t}\t\n\t\t\t}, 15 );\n\t\twin.addEventListener( \"load\", function(){\n\t\t\tsetTimeout(function(){\n\t\t\t\t\t//reset to hide addr bar at onload\n\t\t\t\t\twin.scrollTo( 0, scrollTop === 1 ? 0 : 1 );\n\t\t\t}, 0);\n\t\t} );\n\t}\n})( this );\nView code example\n\nAnd with that, we\u2019ve got a bunch more room to play with on both iOS and Android.\n\n\n\nBreak out the eggnog\n\n\u2026because we\u2019re not done yet! In the spirit of making our script act more defensively, there\u2019s still another use case to consider. It was essential that we used the window\u2019s load event to trigger our scripting, but on pages with a lot of content, its use can come at a cost. Often, a user will begin interacting with a page, scrolling down as they read, before the load event has fired. In those situations, our script will jump the user back to the top of the page, resulting in a jarring experience.\n\nTo prevent this problem from occurring, we\u2019ll need to ensure that the page has not been scrolled beyond a certain amount. We can add a simple check using our getScrollTop function again, this time ensuring that its value is not greater than 20 pixels or so, accounting for a small tolerance.\n\nif( getScrollTop() < 20 ){\n //reset to hide addr bar at onload\n window.scrollTo( 0, scrollTop === 1 ? 
0 : 1 );\n}\n\nAnd with that, we\u2019re pretty well protected! Here\u2019s a final demo.\n\nThe completed script can be found on Github (full source: https://gist.github.com/1183357 ). It\u2019s MIT licensed. Feel free to use it anywhere or any way you\u2019d like!\n\nYour thoughts?\n\nI hope this article provides you with a browser-agnostic approach to hiding the address bar that you can use in your own projects today. Perhaps alternatively, the complications involved in this approach convinced you that doing this well is more trouble than it\u2019s worth and, depending on the use case, that could be a fair decision. But at the very least, I hope this demonstrates that there\u2019s a lot of work involved in pulling off this small task in only two major platforms, and that there\u2019s a real need for standardization in this area.\n\nFeel free to leave a comment or criticism and I\u2019ll do my best to answer in a timely fashion.\n\nThanks, everyone!\n\nSome parting notes\n\nI scream, you scream\u2026\n\nAt the time of writing, I was not able to test this method on the latest Android 4.0 (Ice Cream Sandwich) build. According to Sencha Touch\u2019s browser scorecard, the browser in 4.0 may have a different way of managing the address bar, so I\u2019ll post in the comments once I get a chance to dig into it further.\n\nShort pages get no love\n\nToday\u2019s technique only works when the page is as tall, or taller than, the device\u2019s available screen height, so that the address bar may be scrolled out of view. On a short page, you might work around this issue by applying a minimum height to the body element ( body { min-height: 460px; } ), but given the variety of screen sizes out there, not to mention changes in orientation, it\u2019s tough to find a value that makes much sense (unless you manipulate it with JavaScript).", "year": "2011", "author": "Scott Jehl", "author_slug": "scottjehl", "published": "2011-12-20T00:00:00+00:00", "url": "https://24ways.org/2011/raising-the-bar-on-mobile/", "topic": "design"} {"rowid": 278, "title": "Going Both Ways", "contents": "It\u2019s that time of the year again: Santa is getting ready to travel the world. Up until now, girls and boys from all over have sent in letters asking for what they want. I hope that Santa and his elves have\u2014unlike me\u2014learned more than just English.\n\nOn the Internet, those girls and boys want to participate in sharing their stories and videos of opening presents and of being with friends and family. Ah, yes, the wonders of user generated content. But more than that, people also want to be able to use sites in the language they know.\n\nWhile you and I might expect the text to read from left to right, not all languages do. Some go from right to left, such as Arabic and Hebrew. (Some also go from top to bottom, but for now, let\u2019s just worry about those first two directions!)\n\nIf we were building a site for girls and boys to send their letters to Santa, we need to consider having the interface in the language and direction that they prefer. On the elves\u2019 side, they may be viewing the site in one direction but reading the user generated content in the other direction. We need to build a site that supports bidirectional (or bidi) text.\n\nLet\u2019s take a look at some things to be aware of when it comes to building bidi interfaces.\n\nSetting the direction of the interface\n\nRight off the bat, we need to tell the browser what direction the text should be going in. 
To do this, we add the dir attribute to an HTML element and set it to either LTR (for left to right) or RTL (for right to left).\n\n<body dir=\"rtl\">\n\nYou can add the dir attribute to any element and it will set or change the direction for the content within that element. \n\n<body dir=\"ltr\">\n Here is English Content.\n <div dir=\"rtl\">\u0627\u0644\u0645\u0648\u0636\u0648\u0639</div>\n</body>\n\nYou can also set the direction via CSS.\n\n.rtl {\n direction: rtl;\n }\n\nIt\u2019s generally recommended that you don\u2019t use CSS to set the direction of the text. Text direction is an important part of the content that should be retained even in environments where the CSS may not be available or fails to load.\n\nHow things change with the direction attribute\n\nJust adding the dir attribute tells the browser to render the content within it differently. \n\n\n\nThe text aligns to the right of the page and, interestingly, punctuation appears at the left of the sentence. (We\u2019ll get to that in a little bit.) \n\nScrollbars in most browsers will appear on the left instead of the right. Webkit is the notable exception here which always shows the scrollbar on the right, no matter what the text direction is. Avoid having a design that has an expectation that the scrollbar will be in a specific place (and a specific size).\n\nChanging the order of text mid-way\n\nAs we saw in that previous example, the punctuation appeared at the beginning of the sentence instead of the end, even though the text was English. At Yahoo!, we have an interesting dilemma where the company name has punctuation in it. Therefore, when the name appears in the middle of (for example) Arabic text, the exclamation mark appears at the beginning of the word instead of the end.\n\n\n\nThere are two ways in which this problem can be solved:\n\n1. Use HTML around the left-to-right content, or\n\nTo solve the problem of the Yahoo! name in the midst of Arabic text, we can wrap a span around it and change the direction on that element.\n\n\n\n2. Use a text direction mark in the content.\n\nUnicode has two marks, U+200E and U+200F, that tell the browser that the text is in a particular direction. Placing this right after the punctuation will correct the placement.\n\nUsing the HTML entity:\nYahoo!\u200e\n\nTables\n\nThankfully, the cells of a data table also get reordered from right to left. Equally as nice, if you\u2019re using display:table, the content will still get reordered.\n\n\n\nCSS\n\nSo far, we\u2019ve seen that the dir attribute does a pretty decent job of getting content flowing in the direction that we need it. Unfortunately, there are huge swaths of design that is handled by CSS that the handy dir attribute has zero effect over.\n\nMany properties, like float or absolute positioning with left and right values, are unaffected and must be handled manually. Elements that were floated left must now by floated right. Left margins and paddings must now move to the right and the right margins and paddings must now move to the left.\n\nSince the browser won\u2019t handle this for us, we have a couple approaches that we can use:\n\nCSS Only\n\nWe can take advantage of the attribute selector to target CSS to apply in one direction or another.\n\n[dir=ltr] .module {\n\tfloat: left;\n\tmargin: 0 0 0 20px;\n}\n\n[dir=rtl] .module {\n\tfloat: right;\n\tmargin: 0 20px 0 0;\n}\n\nAs you can see from this example, both of the properties have been modified for the flipped interface. 
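\n\nBecause these rules key off the dir attribute, a runtime language switcher only needs to flip that attribute for the right set of rules to take over. Here\u2019s a rough sketch (the lang-select control and the list of right-to-left language codes are purely illustrative):\n\nvar rtlLanguages = [ 'ar', 'he', 'fa', 'ur' ];\n\ndocument.getElementById( 'lang-select' ).addEventListener( 'change', function( e ) {\n var isRtl = rtlLanguages.indexOf( e.target.value ) !== -1;\n document.body.setAttribute( 'dir', isRtl ? 'rtl' : 'ltr' );\n}, false );\n\n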
If your interface is rather complicated, you will have to create a lot of duplicate rules to have the site looking good in both directions while serving up a single stylesheet.\n\nCSSJanus\n\nGoogle has a tool called CSSJanus. It\u2019s a Python script that runs over the LTR versions of your CSS files and generates RTL versions. For the RTL version of the site, just serve up those CSS files instead of the LTR versions.\n\nThe script looks for keywords and value combinations and automatically swaps them so you don\u2019t have to. \n\nAt Yahoo!, CSSJanus was a huge help in speeding up our development of a bidi interface. We\u2019ve also made a number of improvements to the script to better handle border radius, background positioning, and gradients. We will be pushing those changes back into the CSSJanus project. \n\n\n\nBackground Images\n\nBackground images, especially for things like CSS sprites, also raise an interesting dilemma. Background images are positioned relative to the left of the element. In a flipped interface, however, we need to position it relative to the right. An icon that would be to the left of some text will now need to appear on the right.\n\n\n\nIf the x position of the background is percentage-based, then it\u2019s fairly easy to swap the values. 0 becomes 100%, 10% becomes 90% and so on. If the x position is pixel-based, then we\u2019re in a bit of a pickle. There\u2019s no way to say that the image should be a certain number of pixels from the right.\n\nTherefore, you\u2019ll need to ensure that any background image that needs to be swapped should be percentage-based. (99.9% of the the time, the background position will need to be 0 so that it can be changed to 100% for RTL.)\n\nIf you\u2019re taking an existing implementation, background positioning will likely be the biggest hurdle you\u2019ll have to overcome in swapping your interface around. If you make sure your x position is always percentage-based from the beginning, you\u2019ll have a much smoother process ahead of you!\n\nFlipping Images\n\nThis is a more subtle point and one where you\u2019ll really want an expert with the region to weigh in on. In RTL interfaces, users may expect certain icons to also be flipped. Pencil icons that skew to the right in LTR interfaces might need to be swapped to skew to the left, instead. Chat bubbles that come from the left will need to come from the right.\n\nThe easiest way to handle this is to create new images. Name the LTR versions with -ltr in the name and name the RTL versions with -rtl in the name. CSSJanus will automatically rename all file references from -ltr to -rtl.\n\nThe Future\n\nThankfully, those within the W3C recognize that CSS should be more agnostic. As a result, they\u2019ve begun introducing new properties that allow the browser to manage the swapping from left to right for us.\n\nThe CSS3 specification for backgrounds allows for the background-position to be relative to other corners other than the top left by specifying keywords before each position.\n\nThis will position the background 5px from the bottom right of the element.\n\nbackground-position: right 5px bottom 5px;\n\nOpera 11.60 is currently the only browser that supports this syntax.\n\nFor margin and padding, we have margin-start and margin-end. In LTR interfaces, margin-start would be the same as margin-left and in RTL interfaces, margin-start would be the same as margin-right. 
\n\nFirefox and Webkit support these but with vendor prefixes right now:\n\n-webkit-margin-start: 20px;\n-moz-margin-start: 20px;\n\nIn the CSS3 Images working draft specification, there\u2019s an image() property that allows us to specify image fallbacks and whether those fallbacks are for LTR or RTL interfaces.\n\nbackground: image('sprite.png' ltr, 'sprite-rtl.png' rtl);\n\nUnfortunately, no browser supports this yet but it\u2019s nice to be able to dream of how much easier this will be in the future!\n\nHo Ho Ho\n\nHopefully, after all of this, you\u2019re full of cheer knowing that you\u2019re well on your way to creating interfaces that can go both ways!", "year": "2011", "author": "Jonathan Snook", "author_slug": "jonathansnook", "published": "2011-12-19T00:00:00+00:00", "url": "https://24ways.org/2011/going-both-ways/", "topic": "ux"} {"rowid": 268, "title": "Getting the Most Out of Google Analytics", "contents": "Something a bit different for today\u2019s 24 ways article. For starters, I\u2019m not a designer or a developer. I\u2019m an evil man who sells things to people on the internet. Second, this article will likely be a little more nebulous than you\u2019re used to, since it covers quite a number of points in a relatively short space. \n\nThis isn\u2019t going to be the complete Google Analytics Conversion University IQ course compressed into a single article, obviously. What it will be, however, is a primer on setting up and using Google Analytics in real life, and a great deal of what I\u2019ve learned using Google Analytics nearly every working day for the past six (crikey!) years.\n\nAlso, to be clear, I\u2019ll be referencing new Google Analytics here; old Google Analytics is for loooosers (and those who want reliable e-commerce conversion data per site search term, natch).\n\nYou may have been running your Analytics account for several years now, dipping in and out, checking traffic levels, seeing what\u2019s popular\u2026 and that\u2019s about it. Google Analytics provides so much more than that, but the number of reports available can often intimidate users, and documentation and case studies on their use are minimal at best. \n\nLet\u2019s start! Setting up your Analytics profile\n\nBefore we plough on, I just want to run through a quick checklist that some basic settings have been enabled for your profile. If you haven\u2019t clicked it, click the big cog on the top-right of Google Analytics and we\u2019ll have a poke about.\n\n\n\tIf you have an e-commerce site, e-commerce tracking has been enabled\u2028\n\tIf your site has a search function, site search tracking has been enabled.\n\tQuery string parameters that you do not want tracked as separate pages have been excluded (for example, any parameters needed for your platform to function, otherwise you\u2019ll get multiple entries for the same page appearing in your reports)\n\tFilters have been enabled on your main profile to exclude your office IP address and any IPs of people who frequently access the site for work purposes. In decent numbers they tend to throw data off a tad.\u2028\n\tYou may also find the need to set up multiple profiles prefiltered for specific audience segments. For example, at Lovehoney we have seventeen separate profiles that allow me quick access to certain countries, devices and traffic sources without having to segment first. You\u2019ll also find load time for any complex reports much improved. 
Use the same filter screen as above to set up a series of profiles that only include, say, mobile visits, or UK visitors, so you can quickly analyse important segments.\n\n\nMatt, what\u2019s a segment?\n\nA segment is a subsection of your visitor base, which you define and then call on in reports to see specific data for that subsection. For example, in this report I\u2019ve defined two segments, the first for IE6 users and the second for IE7.\n\n\n\nSegments are easily created by clicking the Advanced Segments tabs at the top of any report and clicking +New Custom Segment.\n\n\n\nWhat does your site do?\n\nUnderstanding the goals of your site is an oft-covered topic, but it\u2019s necessary not just to form a better understand of your business and prioritize your time. Understanding what you wish visitors to do on your site translates well into a goal-driven analytics package like Google Analytics. \n\nEvery site exists essentially to sell something, either financially through e-commerce, or to sell an idea or impart information, get people to download a CV or enquire about service, or to sell space on that website to advertisers. If the site did not provide a positive benefit to its owners, it would not have a reason for being. \n\nOnce you have understood the reason why you have a site, you can map that reason on to one of the three goal types Google Analytics provides. \n\nE-commerce \n\nThis conversion type registers transactions as part of a sales process which requires a monetary value, what products have been bought, an SKU (stock keeping unit), affiliation (if you\u2019re then attributing the sale to a third party or franchise) and so on.\n\nThe benefit of e-commerce tracking is not only assigning non-arbitrary monetary value to behaviour of visitors on your site, as well as being able to see ancillary costs such as shipping, but seeing product-level information, like which products are preferred from various channels, popular categories, and so on.\n\n\n\nHowever, I find the e-commerce tracking options also useful for non-e-commerce sites. For example, if you\u2019re offering downloads or subscriptions and having an email address or user\u2019s details is worth something to you, you can set up e-commerce tracking to understand how much value your site is bringing. For example, an email address might be worth 20p to you, but if it also includes a name it\u2019s worth 50p. A contact telephone number is worth \u00a32, and so on.\n\nPage goals\n\nPage goals, unsurprisingly, track a visit to a page (often with a sequence of pages leading up to that page). This is what\u2019s referred to as a goal funnel, and is generally used to track how visitors behave in a multistep checkout. \n\n\n\nInterestingly, the page doesn\u2019t have to actually exist. For example, if you have a single page checkout, you can register virtual page views using trackPageview() when a visitor clicks into a particular section of the checkout or other form. 
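Here is roughly what that looks like with the asynchronous tracking snippet, assuming _gaq is already set up on the page (the element ID and the virtual path below are invented for the sketch):

// When the visitor opens the payment section of a one-page checkout,
// record a virtual page view so it can act as a step in the goal funnel.
document.getElementById('payment-section').onclick = function () {
  _gaq.push(['_trackPageview', '/checkout/payment']);
};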
If your site is geared towards getting someone to a particular page, but where there isn\u2019t a transaction (for example, a subscription page) this is for you.\n\nThere are also behavioural goals, such as time on site and number of pages viewed, which are geared towards sites that make money from advertising.\n\nBut, going back to the page goals, these can be abstracted using regular expressions, meaning that you can define a funnel based on page type rather than having to set individual folders.\n\n\n\nIn this example, I\u2019ve created regexes for the main page types on my site, so I can create a wide funnel that captures visitors from where they enter through to checkout.\n\nEvents\n\nEvent tracking registers a predefined event, such as playing a video, or some interaction that can trigger JavaScript, such as a Tweet This button. Events can then be triggered using the trackEvent() call. If you want someone to complete watching a video, you would code your player to fire trackEvent() upon completion. \n\nWhile I don\u2019t use events as goals, I use events elsewhere to see how well a video play aids to conversion. This not only helps me justify the additional spend on creating video content, but also quickly highlights which videos are underperforming as sales tools.\n\n\n\nWhat a visitor can tell you\n\n\u2028Now you have some proper goals set up, we can start to see how changes in content (on-site and external) affect those goals. \n\nUltimately, when a visitor comes to your site, they bring information with them:\n\n\n\twhere they came from (a search engine \u2013 including: keyword searched for; a referral; direct; affiliate; or ad campaign)\n\tdemographics (country; whether they\u2019re new or returning, within thirty days)\n\ttechnical information (browser; screen size; device; bandwidth)\n\tsite-specific information (landing page; next click; previous values assigned to them as custom variables*)\n\n\n * A note about custom variables. There\u2019s no hope in hell that I can cover custom variables in this article. Go research them. Custom variables are the single best way to hack Google Analytics and bend it to your will. Custom variables allow you to record anything you want about a visitor, which that visitor will then carry around with them between visits. It\u2019s also great for plugging other services into Google Analytics (as shown by the marvelous way Visual Website Optimizer allows you to track and segment tests within the GA interface). Just make sure not to breach the terms of service, eh?\n\nCSI your website\n\nPolice procedural TV shows are all the same: the investigators are called to a crime and come across a clue; there\u2019s then an autopsy; new evidence leads them to a new location; they find a new clue; they put two and two together; they solve the mystery.\n\nThis is your life now. Exciting!\n\nSo, now you\u2019re gathering a wealth of information about what sort of people visit your site, what they do when they\u2019re there, and what eventually gets them to drive value to you. It\u2019s now your job to investigate all these little clues to see which types of people drive the most value, and what you can change to improve it.\n\nMaybe not that exciting.\n\nHowever, Google Analytics comes pre-armed with extensive reports for you to delve into. 
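As an aside, firing one of the events described above is just another JavaScript call; a rough sketch using the same asynchronous _gaq syntax (the category, action and label names are invented):

// Called by your video player when playback finishes
function onVideoComplete() {
  _gaq.push(['_trackEvent', 'Videos', 'Completed', 'product-demo']);
}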
As an e-commerce guy (as opposed to a page goal guy) my day pretty much follows the pattern below.\n\n\n\tLook at e-commerce conversion rate by traffic source compared to the same day in the previous week and previous month. As ours is an e-commerce site, we have weekly and monthly trends. A big spike on Sundays and Mondays, and payday towards the end of the month is always good; on the third week of a month there tends to be a lull. Spend time letting your Google Analytics data brew, understand your own trends and patterns, and you\u2019ll start to get a feel for when something isn\u2019t quite right.\n\t\n\t\tTraffic Sources \u2192 Sources \u2192 All Traffic\n\t\n\tLook at the conversion rate by landing page for any traffic source that feels significantly different to what\u2019s expected. Check bounce rates, drill down to likely landing pages and check search keyword or referral site to see if it\u2019s a particular subset of visitor. You can do this by clicking Secondary Dimension and choosing Keyword or Source. If it\u2019s direct, choose Visitor Type to break down by new or returning visitor.\n\t\n\t\tContent \u2192 Site Content \u2192 Landing Pages\n\t\n\tI then tend to flip into Content Drilldown to see what the next clicks were from those landing pages, and whether they changed significantly to the date I\u2019m comparing with. If they have, that\u2019s usually an indicator of changed content (or its relevancy). Remember, if a bunch of people have found their way to your page via a method you\u2019re not expecting (such as a mention on a Spanish radio station \u2013 this actually happened to me once), while the content hasn\u2019t changed, the relevancy of it to the audience may have.\n\t\n\t\tContent \u2192 Site Content \u2192 Content Drilldown\n\t\n\tOnce I have an idea of what content was consumed, and whether it was relevant to the user, I then look at the visitor specifics, such as browser or demographic data, to see again whether the change was limited to a specific subset. Site speed, for example, is normally a good factor towards bounce rate, so compare that with previous data as well.\n\n\nNow, to be investigating at this level you still need a serious amount of data, in order to tell what\u2019s a significant change or not. If you\u2019re struggling with a small number of visitors, you might find reporting on a weekly or fortnightly basis more appropriate. \n\nHowever, once you\u2019ve looked into the basics of why changes happen to the value of your site, you\u2019ll soon find yourself limited by the reports offered in Standard Reporting. So, it\u2019s time to build your own. Hooray!\n\nCustom reporting\n\nGoogle Analytics provides the tools to build reports specific to the types of investigations you frequently perform. \n\n\n\nWelcome to my world.\n\nCustom reports are quite simple to build: first, you determine the metric you want the report to cover (number of visitors, bounce rate, conversion rate, and so on), then choose a set of dimensions that you\u2019d like to segment the report by (say, the source of the traffic, and whether they were new or returning users). You can filter the report, including or excluding particular dimension values, and you can assign the report to any of the profiles you created earlier. \n\nIn the example below, I\u2019ve created a report that shows me visits and conversion rate for any Google traffic that landed directly only on a product page. I can then drill down on each product page to see the complete phrases use to search. 
I can use this information in two ways:\n\n\n\tI can see which products aren\u2019t converting, which shows me where I need to work harder on merchandising.\n\tI can give this information to my content team, showing them the actual phrases visitors used to reach our product content, helping them write better targeted product descriptions.\n\n\n\n\nThe possibilities here are nearly endless, but here are a few examples of reports I find useful:\n\n\n\tNon-brand inbound search\nBy creating a report that shows inbound search traffic which doesn\u2019t include your brand, you can see more clearly the behaviour of visitors most likely to be unfamiliar with your site and brand values, without having to rely on the clumsy new or returning demographic date.\n\tTraffic/conversion/sales by hour\nThis is pure stats porn, but actually more useful than real-time data. By seeing this data broken down at an hourly level, you can not only compare the current day to previous days, but also see the best performing times for email broadcasts and tweets.\n\tVisits, load time, conversion and sales by page and browser\nPage speed can often kill conversion rates, but it\u2019s difficult to prove the value of focusing on speed in monetary terms. Having this report to hand helps me drive Operation Greenbelt, our effort to get into the sub-1.5 second band in Google Webmaster Tools.\n\n\nUseful things you can\u2019t do in custom reporting\n\nIf you have a search function on your website, then Conversion Rate and Products Bought by Site Search Term is an incredibly useful report that allows you to measure the effectiveness of your site\u2019s search engine at returning products and content related to the search term used. By including the products actually bought by visitors who searched for each term, you can use this information to better searchandise these results, escalating high propensity and high value products to the top of the results.\n\nHowever, it\u2019s not possible to get this information out of new Google Analytics. \n\nTry it, select the following in the report builder:\n\n\n\tMetrics: total unique searches; e-commerce or goal conversion rate\n\tDimensions: search term; product\n\n\nYou\u2019ll see that the data returned is a little nonsensical, though a 2,000% conversion rate would be nice. However, you can get more accurate information using advanced segments. By creating individual segments to define users who have searched for a particular term, you can run the sales performance and product performance reports as normal. It\u2019s laborious, but it teaches a good lesson: data that seems inaccessible can normally be found another way!\n\nReporting infrastructure\n\nNow that you have a series of reports that you can refer to on a daily or weekly basis, it\u2019s time to put together a regular reporting infrastructure. \n\nEven if you\u2019re not reporting to someone, having a set of key performance indicators that you can use to see how your performance is improving over time allows you to set yourself business goals on a monthly and annual basis.\n\nFor my own reporting, I take some high-level metrics (such as visitors, conversion rate and average order value), and segment them by traffic source and, separately, landing page. 
These statistics I record weekly and report:\n\n\n\tcurrent week compared with previous week\n\tsame week previous year (if available)\n\t4 week average\n\t13 week average\n\t52 week average (if available)\n\n\nThis takes into account weekly, monthly, seasonal and annual trends, and gives you a much clearer view of your performance.\n\nGetting data in other ways\n\nIf you\u2019re using Google Analytics frequently, with any large site you\u2019ll come to a couple of conclusions:\n\n\n\tDoing any kind of practical comparative analysis is unwieldy.\n\tBoy, Google Analytics is slow!\n\n\nAs you work with bigger datasets and put together more complex queries, you\u2019ll see the loading graphic more than you\u2019ll see actual data. So when you reach that level, there are ways to completely bypass the Google Analytics interface altogether, and get data into your own spreadsheet application for manipulation.\n\nData Feed Query Explorer\n\nIf you just want to pull down some quick statistics but still use complex filters and exotic metric and dimension combinations, the Data Feed Query Explorer is the quickest way of doing so. Authenticate with your Google Analytics account, select a profile, and you can start selecting metrics and dimensions to be generated in a handy, selectable tabulated format.\n\nGoogle Analytics API\n\nIf you\u2019re feeling clever, you can bypass having to copy and paste data by pulling in directly into Excel, Google Docs or your own application using the Google Analytics API. There are several scripts and plugins available to do this. I use Automate Analytics Google Docs code (there\u2019s also a paid version that simplifies setup and creates some handy reports for you).\n\nNew shiny things\n\nWell, now that that\u2019s over, I can show you some cool stuff. Well, at least it\u2019s cool to me. Google Analytics is being constantly improved and new functionality is introduced nearly every month. Here are a couple of my favourites.\n\nMultichannel attribution\n\nNot every visitor converts on your site on the first visit. They may not even do so on the second visit, or third. If they convert on the fourth visit, but each time they visit they do so via a different channel (for example, Search PPC, Search Organic, Direct, Email), which channel do you attribute the conversion to? The last channel, or the first? Dilemma! \n\nGoogle now has a Multichannel Attribution report, available in the Conversion category, which shows how each channel assists in converting, the overlap between channels, and where in the process that channel was important. \n\n\n\nFor example, you may have analysed your blog traffic from Twitter and become disheartened that not many people were subscribing after visiting from Twitter links, but instead your high-value subscribers were coming from natural search. On the face of it, you\u2019d spend less time tweeting, but a multichannel report may tell you that visitors first arrived via a Twitter link and didn\u2019t subscribe, but then came back later after searching for your blog name on Google, after which they did. Don\u2019t pack Twitter in yet!\n\nVisitor and goal flow\n\nVisitor and goal flow are amazing reports that help you visualize the flow of traffic through your site and, ultimately, into your checkout funnel or similar goal path. Flow reports are perfect for understanding drop-off points in your process, as well as what the big draws are on each page. 
\n\n\n\nPreviously, if you wanted to visualize this data you had to set up several abstracted microgoals and chain them together in custom reports. Frankly, it was a pain in the arse and burned through your precious and limited goal allocation.\n\nVisitor flow bypasses all that and produces the report in an interactive flow diagram. While it doesn\u2019t show you the holy grail of conversion likelihood by each path, you can segment visitor flow so that you can see very specifically how different segments of your visitor base behave.\n\nGo play with it now!", "year": "2011", "author": "Matt Curry", "author_slug": "mattcurry", "published": "2011-12-18T00:00:00+00:00", "url": "https://24ways.org/2011/getting-the-most-out-of-google-analytics/", "topic": "business"} {"rowid": 265, "title": "Designing for Perfection", "contents": "Hello, 24 ways readers. I hope you\u2019re having a nice run up to Christmas. This holiday season I thought I\u2019d share a few things with you that have been particularly meaningful in my work over the last year or so. They may not make you wet your santa pants with new-idea-excitement, but in the context of 24 ways I think they may serve as a nice lesson and a useful seasonal reminder going into the New Year. Enjoy!\n\nStory\n\nDespite being a largely scruffy individual for most of my life, I had some interesting experiences regarding kitchen tidiness during my third year at university. \n\nAs a kid, my room had always been pretty tidy, and as a teenager I used to enjoy reordering my CDs regularly (by artist, label, colour of spine \u2013 you get the picture); but by the time I was twenty I\u2019d left most of these traits behind me, mainly due to a fear that I was turning into my mother. The one remaining anally retentive part of me that remained however, lived in the kitchen. For some reason, I couldn\u2019t let all the pots and crockery be strewn across the surfaces after cooking. I didn\u2019t care if they were washed up or not, I just needed them tidied. The surfaces needed to be continually free of grated cheese, breadcrumbs and ketchup spills. Also, the sink always needed to be clear. Always. Even a lone teabag, discarded casually into the sink hours previously, would give me what I used to refer to as \u201ckitchen rage\u201d.\n\nWhilst this behaviour didn\u2019t cause any direct conflicts, it did often create weirdness. We would be happily enjoying a few pre-night out beverages (Jack Daniels and Red Bull \u2013 nice) when I\u2019d notice the state of the kitchen following our round of customized 49p Tesco pizzas. Kitchen rage would ensue, and I\u2019d have to blitz the kitchen, which usually resulted in me having to catch everyone up at the bar afterwards.\n\nOne evening as we were just about to go out, I was stood there, in front of the shithole that was our kitchen with the intention of cleaning it all up, when a realization popped into my head. In hindsight, it was a pretty obvious one, but it went along the lines of \u201cWhat the fuck are you doing? Sort your life out\u201d. I sodded the washing up, rolled out with my friends, and had a badass evening of partying.\n\nAfter this point, whenever I got the urge to clean the kitchen, I repeated that same realization in my head. My tidy kitchen obsession strived for a level of perfection that my housemates just didn\u2019t share, so it was ultimately pointless. 
It didn\u2019t make me feel that good, either; it was like having a cigarette after months of restraint \u2013 initially joyous but soon slightly shameful.\n\nLesson\n\nNow, around seven years later, I\u2019m a designer on the web and my life is chaotic. It features no planning for significant events, no day-to-day routine or structure, no thought about anything remotely long-term, and I like to think I do precisely what I want. It seems my days at striving for something ordered and tidy, in most parts of my life, are long gone.\n\nFor much of my time as a designer, though, it\u2019s been a different story. I relished industry-standard terms such as \u2018pixel perfection\u2019 and \u2018polished PSDs\u2019, taking them into my stride as I strove to design everything that was put on my plate perfectly. Even down to grids and guidelines, all design elements would be painstakingly aligned to a five-pixel grid. There were no seven-pixel margins or gutters to be found in my design work, that\u2019s for sure. I put too much pride and, inadvertently, too much ego into my work. Things took too long to create, and because of the amount of effort put into the work, significant changes, based on client feedback for example, were more difficult to stomach.\n\nOver the last eighteen months I\u2019ve made a conscious effort to change the way I approach designing for the web. Working on applications has probably helped with this; they seem to have a more organic development than rigid content-based websites. Mostly though, a realization similar to my kitchen rage one came about when I had to make significant changes to a painstakingly crafted Photoshop document I had created. The changes shouldn\u2019t have been difficult or time-consuming to implement, but they were turning out to be. One day, frustrated with how long it was taking, the refrain \u201cWhat the fuck are you doing? Sort your life out\u201d again entered my head. I blazed the rest of the work, not rushing or doing scruffy work, but just not adhering to the insane levels of perfection I had previously set for myself. When the changes were presented, everything went down swimmingly. The client in this case (and I\u2019d argue most cases) cared more about the ideas than the perfect way in which they had been implemented. I had taken myself and my ego out of the creative side of the work, and it had been easier to succeed.\n\nArgument\n\nI know many other designers who work on the web share such aspirations to perfection. I think it\u2019s a common part of the designer DNA, but I\u2019m not sure it really has a place when designing for the web.\n\nFirst, there\u2019s the environment. The landscape in which we work is continually shifting and evolving. The inherent imperfection of the medium itself makes attempts to create perfect work for it redundant. Whether you consider it a positive or negative point, the products we make are never complete. They\u2019re always scaling and changing. \n\nLike many aspects of web design, this striving for perfection in our design work is a way of thinking borrowed from other design industries where it\u2019s more suited. A physical product cannot be as easily altered or developed after it has been manufactured, so the need to achieve perfection when designing is more apt.\n\nDesigners who can relate to anything I\u2019ve talked about can easily let go of that anal retentiveness if given the right reasons to do so. 
Striving for perfection isn\u2019t a bad thing, but I simply don\u2019t think it can be achieved in such a fast-moving, unique industry. I think design for the web works better when it begins with quick and simple, followed by iteration and polish over time. \n\nTo let go of ego and to publish something that you\u2019re not completely happy with is perhaps the most difficult part of the job for designers like us, but it\u2019s followed by a satisfaction of knowing your product is alive and breathing, whereas others (possibly even competitors) may still be sitting in Photoshop, agonizing over whether a margin should be twenty or forty pixels.\n\nI keep telling myself to stop sitting on those two hundred ideas that are all half-finished. Publish them, clean them up and iterate over time. I\u2019ve been telling myself this for months and, hopefully, writing this article will give me the kick in the arse I need. Hopefully, it will also give someone else the same kick.", "year": "2011", "author": "Greg Wood", "author_slug": "gregwood", "published": "2011-12-17T00:00:00+00:00", "url": "https://24ways.org/2011/designing-for-perfection/", "topic": "process"} {"rowid": 283, "title": "CSS3 Patterns, Explained", "contents": "Many of you have probably seen my CSS3 patterns gallery. It became very popular throughout the year and it showed many web developers how powerful CSS3 gradients really are. But how many really understand how these patterns are created? The biggest benefit of CSS-generated backgrounds is that they can be modified directly within the style sheet. This benefit is void if we are just copying and pasting CSS code we don\u2019t understand. We may as well use a data URI instead.\n\nImportant note\n\nIn all the examples that follow, I\u2019ll be using gradients without a vendor prefix, for readability and brevity. However, you should keep in mind that in reality you need to use all the vendor prefixes (-moz-, -ms-, -o-, -webkit-) as no browser currently implements them without a prefix. Alternatively, you could use -prefix-free and have the current vendor prefix prepended at runtime, only when needed.\n\nThe syntax described here is the one that browsers currently implement. The specification has since changed, but no browser implements the changes yet. If you are interested in what is coming, I suggest you take a look at the dev version of the spec.\n\nIf you are not yet familiar with CSS gradients, you can read these excellent tutorials by John Allsopp and return here later, as in the rest of the article I assume you already know the CSS gradient basics:\n\n\n\tCSS3 Linear Gradients\n\tCSS3 Radial Gradients\n\n\nThe main idea\n\nI\u2019m sure most of you can imagine the background this code generates:\n\nbackground: linear-gradient(left, white 20%, #8b0 80%);\n\nIt\u2019s a simple gradient from one color to another that looks like this:\n\n See this example live\n\nAs you probably know, in this case the first 20% of the container\u2019s width is solid white and the last 20% is solid green. The other 60% is a smooth gradient between these colors. 
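As the note at the start of the article warns, to actually see this today you would need to repeat the declaration with each vendor prefix (or lean on -prefix-free); something along these lines:

background: -moz-linear-gradient(left, white 20%, #8b0 80%);    /* Firefox */
background: -ms-linear-gradient(left, white 20%, #8b0 80%);     /* IE10 */
background: -o-linear-gradient(left, white 20%, #8b0 80%);      /* Opera */
background: -webkit-linear-gradient(left, white 20%, #8b0 80%); /* Safari, Chrome */
background: linear-gradient(left, white 20%, #8b0 80%);         /* unprefixed, for later */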
Let\u2019s try moving these color stops closer to each other:\n\nbackground: linear-gradient(left, white 30%, #8b0 70%);\n\n See this example live\n\nbackground: linear-gradient(left, white 40%, #8b0 60%);\n\n See this example live\n\nbackground: linear-gradient(left, white 50%, #8b0 50%);\n\n See this example live\n\nNotice how the gradient keeps shrinking and the solid color areas expanding, until there is no gradient any more in the last example. We can even adjust the position of these two color stops to control where each color abruptly changes into another:\n\nbackground: linear-gradient(left, white 30%, #8b0 30%);\n\n See this example live\n\nbackground: linear-gradient(left, white 90%, #8b0 90%);\n\n See this example live\n\nWhat you need to take away from these examples is that when two color stops are at the same position, there is no gradient, only solid colors. Even without going any further, this trick is useful for a number of different use cases like faux columns or the effect I wanted to achieve in my homepage or the -prefix-free page where the background is only shown on one side and hidden on the other:\n\n\n\nCombining with background-size\n\nWe can do wonders, however, if we combine this with the CSS3 background-size property:\n\nbackground: linear-gradient(left, white 50%, #8b0 50%);\nbackground-size: 100px 100px;\n\n See this example live\n\nAnd there it is. We just created the simplest of patterns: (vertical) stripes. We can remove the first parameter (left) or replace it with top and we\u2019ll get horizontal stripes. However, let\u2019s face it: Horizontal and vertical stripes are kinda boring. Most stripey backgrounds we see on the web are diagonal. So, let\u2019s try doing that.\n\nOur first attempt would be to change the angle of the gradient to something like 45deg. However, this results in an ugly pattern like this: \n\n See this example live\n\nBefore reading on, think for a second: why didn\u2019t this produce the desired result? Can you figure it out?\n\nThe reason is that the gradient angle rotates the gradient inside each tile, not the tiled background as a whole. However, didn\u2019t we have the same problem the first time we tried to create diagonal stripes with an image? And then we learned that every stripe has to be included twice, like so:\n\n\n\nSo, let\u2019s try to create that effect with CSS gradients. It\u2019s essentially what we tried before, but with more color stops:\n\nbackground: linear-gradient(45deg, white 25%,\n #8b0 25%, #8b0 50%, \n white 50%, white 75%, \n #8b0 75%);\nbackground-size:100px 100px;\n\n See this example live\n\nAnd there we have our stripes! An easy way to remember the order of the percentages and colors it is that you always have two of the same in succession, except the first and last color.\n\nNote: Firefox for Mac also needs an additional 100% color stop at the end of any pattern with more than two stops, like so: ..., white 75%, #8b0 75%, #8b0). The bug was reported in February 2011 and you can vote for it and track its progress at Bugzilla.\n\nUnfortunately, this is essentially a hack and we will realize that if we try to change the gradient angle to 60deg:\n\n See this example live\n\nNot that maintainable after all, eh? 
Luckily, CSS3 offers us another way of declaring such backgrounds, which not only helps this case but also results in much more concise code:\n\nbackground: repeating-linear-gradient(60deg, white, white 35px, #8b0 35px, #8b0 70px);\n\n See this example live\n\nIn this case, however, the size has to be declared in the color stop positions and not through background-size, since the gradient is supposed to cover the entire container. You might notice that the declared size is different from the one specified the previous way. This is because the size of the stripes is measured differently: in the first example we specify the dimensions of the tile itself; in the second, the width of the stripes (35px), which is measured diagonally.\n\nMultiple backgrounds\n\nUsing only one gradient you can create stripes and that\u2019s about it. There are a few more patterns you can create with just one gradient (linear or radial) but they are more or less boring and ugly. Almost every pattern in my gallery contains a number of different backgrounds. For example, let\u2019s create a polka dot pattern:\n\nbackground: radial-gradient(circle, white 10%, transparent 10%),\nradial-gradient(circle, white 10%, black 10%) 50px 50px;\nbackground-size:100px 100px;\n\n See this example live\n\nNotice that the two gradients are almost the same image, but positioned differently to create the polka dot effect. The only difference between them is that the first (topmost) gradient has transparent instead of black. If it didn\u2019t have transparent regions, it would effectively be the same as having a single gradient, as the topmost gradient would obscure everything beneath it.\n\nThere is an issue with this background. Can you spot it?\n\nThis background will be fine for browsers that support CSS gradients but, for browsers that don\u2019t, it will be transparent as the whole declaration is ignored. We have two ways to provide a fallback, each for different use cases. We have to either declare another background before the gradient, like so:\n\nbackground: black;\nbackground: radial-gradient(circle, white 10%, transparent 10%),\nradial-gradient(circle, white 10%, black 10%) 50px 50px;\nbackground-size:100px 100px;\n\nor declare each background property separately:\n\nbackground-color: black;\nbackground-image: radial-gradient(circle, white 10%, transparent 10%),\nradial-gradient(circle, white 10%, transparent 10%);\nbackground-size:100px 100px;\nbackground-position: 0 0, 50px 50px;\n\nThe vigilant among you will have noticed another change we made to our code in the last example: we altered the second gradient to have transparent regions as well. This way background-color serves a dual purpose: it sets both the fallback color and the background color of the polka dot pattern, so that we can change it with just one edit. Always strive to make code that can be modified with the least number of edits. 
You might think that it will never be changed in that way but, almost always, given enough time, you\u2019ll be proved wrong.\n\nWe can apply the exact same technique with linear gradients, in order to create checkerboard patterns out of right triangles:\n\nbackground-color: white;\nbackground-image: linear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%), \nlinear-gradient(45deg, black 25%, transparent 25%, transparent 75%, black 75%);\nbackground-size:100px 100px;\nbackground-position: 0 0, 50px 50px;\n\n See this example live\n\nUsing the right units\n\nDon\u2019t use pixels for the sizes without any thought. In some cases, ems make much more sense. For example, when you want to make a lined paper background, you want the lines to actually follow the text. If you use pixels, you have to change the size every time you change font-size. If you set the background-size in ems, it will naturally follow the text and you will only have to update it if you change line-height.\n\nIs it possible?\n\nThe shapes that can be achieved with only one gradient are:\n\n\n\tstripes\n\tright triangles\n\tcircles and ellipses\n\tsemicircles and other shapes formed from slicing ellipses horizontally or vertically\n\n\nYou can combine several of them to create squares and rectangles (two right triangles put together), rhombi and other parallelograms (four right triangles), curves formed from parts of ellipses, and other shapes.\n\nJust because you can doesn\u2019t mean you should\n\nTechnically, anything can be crafted with these techniques. However, not every pattern is suitable for it. The main advantages of this technique are:\n\n\n\tno extra HTTP requests\n\tshort code\n\thuman-readable code (unlike data URIs) that can be changed without even leaving the CSS file.\n\n\nComplex patterns that require a large number of gradients are probably better left to SVG or bitmap images, since they negate almost every advantage of this technique:\n\n\n\tthey are not shorter\n\tthey are not really comprehensible \u2013 changing them requires much more effort than using an image editor\n\n\nThey still save an HTTP request, but so does a data URI.\n\nI have included some very complex patterns in my gallery, because even though I think they shouldn\u2019t be used in production (except under very exceptional conditions), understanding how they work and coding them helps somebody understand the technology in much more depth.\n\nAnother rule of thumb is that if your pattern needs shapes to obscure parts of other shapes, like in the star pattern or the yin yang pattern, then you probably shouldn\u2019t use it. In these patterns, changing the background color requires you to also change the color of these shapes, making edits very tedious.\n\nIf a certain pattern is not practicable with a reasonable amount of CSS, that doesn\u2019t mean you should resort to bitmap images. SVG is a very good alternative and is supported by all modern browsers.\n\nBrowser support\n\nCSS gradients are supported by Firefox 3.6+, Chrome 10+, Safari 5.1+ and Opera 11.60+ (linear gradients since Opera 11.10). Support is also coming in Internet Explorer when IE10 is released. You can get gradients in older WebKit versions (including most mobile browsers) by using the proprietary -webkit-gradient(), if you really need them.\n\nEpilogue\n\nI hope you find these techniques useful for your own designs. 
If you come up with a pattern that\u2019s very different from the ones already included, especially if it demonstrates a cool new technique, feel free to send a pull request to the github repo of the patterns gallery. Also, I\u2019m always fascinated to see my techniques put in practice, so if you made something cool and used CSS patterns, I\u2019d love to know about it!\n\nHappy holidays!", "year": "2011", "author": "Lea Verou", "author_slug": "leaverou", "published": "2011-12-16T00:00:00+00:00", "url": "https://24ways.org/2011/css3-patterns-explained/", "topic": "code"} {"rowid": 287, "title": "Extracting the Content", "contents": "As we throw away our canvas in approaches and yearn for a content-out process, there remains a pain point: the Content. It is spoken of in the hushed tones usually reserved for Lord Voldemort. The-thing-that-someone-else-is-responsible-for-that-must-not-be-named.\n\nDesigners and developers have been burned before by not knowing what the Content is, how long it is, what style it is and when the hell it\u2019s actually going to be delivered, in internet eons past. Warily, they ask clients for it. But clients don\u2019t know what to make, or what is good, because no one taught them this in business school. Designers struggle to describe what they need and when, so the conversation gets put off until it\u2019s almost too late, and then everyone is relieved that they can take the cop-out of putting up a blog and maybe some product descriptions from the brochure.\n\nThe Content in content out.\n\nI\u2019m guessing, as a smart, sophisticated, and, may I say, nicely-scented reader of the honourable and venerable tradition of 24 ways, that you sense something better is out there. Bunches of boxes to fill in just don\u2019t cut it any more in a responsive web design world. The first question is, how are you going to design something to ensure users have the easiest access to the best Content, if you haven\u2019t defined at the beginning what that Content is? Of course, it\u2019s more than possible that your clients have done lots of user research before approaching you to start this project, and have a plethora of finely tuned Content for you to design with.\n\nHave you finished laughing yet? Alright then. Let\u2019s just assume that, for whatever reason of gross oversight, this hasn\u2019t happened. What next?\n\nBringing up Content for the first time with a client is like discussing contraception when you\u2019re in a new relationship. It might be awkward and either party would probably rather be doing something else, but it needs to be broached before any action happens (that, and it\u2019s disastrous to assume the other party has the matter in hand). If we can\u2019t talk about it, how can we expect people to be doing it right and not making stupid mistakes? That being the case, how do we talk about Content? Let\u2019s start by finding a way to talk about it without blushing and scuffing our shoes. And there\u2019s a reason I\u2019ve been treating Content as a Proper Noun. \n\nThe first step, and I mean really-first-step-way-back-at-the-beginning-of-the-project-while-you-are-still-scoping-out-what-the-hell-you-might-do-for-each-other-and-it\u2019s-still-all-a-bit-awkward-like-a-first-date, is for you to explain to the client how important it is that you, together, work out what is important to your users as part of the user experience design, so that your users get the best user experience. 
The trouble is that, in most cases, this would lead to blank stares, possibly followed by a light cough and a query about using Comic Sans because it seems friendly.\n\nLet\u2019s start by ensuring your clients understand the task ahead. You see, all the time we talk about the Content we do our clients a big disservice. Content is poorly defined. It looms over a project completion point like an unscalable (in the sense of a dozen stacked Kilimanjaros), seething, massive, singular entity. The Content.\n\nDefining the problem. \n\nWe should really be thinking of the Content as \u2018contents\u2019; as many parts that come together to form a mighty experience, like hit 90s kids\u2019 TV show Mighty Morphin Power Rangers*.\n\n*For those of you who might have missed the Power Rangers, they were five teenagers with attitude, each given crazy mad individual skillz and a coloured lycra suit from an alien overlord. In return, they had to fight a new monster of the week using their abilities and weaponry in sync (even if the audio was not) and then, finally, in thrilling combination as a Humongous Mechanoid Machine of Awesome. They literally joined their individual selves, accessories and vehicles into a big robot. It was a toy manufacturer\u2019s wet dream.\n\nSo, why do I say Content is like the Power Rangers? Because Content is not just a humongous mecha. It is a combination of well-crafted pieces of contents that come together to form a well-crafted humongous mecha. Of Content.\n\nThe Red Power Ranger was always the leader. You can imagine your text contents, found on about pages, product descriptions, blog articles, and so on, as being your Red Power Ranger.\n\nMaybe your pictures are your Yellow Power Ranger; video is Blue (not used as much as the others, but really impressive when given a good storyline); maybe Pink is your infographics (it\u2019s wrong to find it sexier than the other equally important Rangers, but you kind of do anyway). And so on. \n\nThese bits of content \u2013 Red Text Ranger, Yellow Picture Ranger and others \u2013 often join together on a page, like they are teaming up to fight the bad guy in an action scene, and when they all come together (your standard workaday huge mecha) in a launched site, that\u2019s when Content becomes an entity.\n\nWhile you might have a vision for the whole site, Content rarely works that way. Of course, you keep your eye on the bigger prize, the completion of your mega robot, but to get there you need to assemble your working parts, the cogs and springs of contents that will mesh together to finally create your Humongous Mecha of Content. You create parts and join them to form a whole. (It\u2019s rarely seamless; often we need to adjust as we go, but we can create our Mecha\u2019s blueprint by making sure we have all the requisite parts.)\n\nThe point here is the order these parts were created. No alien overlord plans a Humongous Mechanoid and then thinks, \u201cGee, how can I split this into smaller fighting units powered by teenagers in snazzy shiny suits?\u201d No toy manufacturer goes into production of a mega robot, made up of model mecha vehicles with detachable arsenal, without thinking how they will easily fit back together to form the \u2018Buy all five now to create the mega robot\u2019 set. No good contents are created as a singular entity and chunked up to be slotted in to place any which way, into the body of a site.\n\nThink contents, not the Content. Think of contents as smaller units, or as a plural. 
The Content is what you have at the end. The contents are what you are creating and they are easy to break down. You are no longer scaling the unscalable. You can draw the map and plot the path, page by page, section by section.\n\nThe page table is your friend\n\nTo do this, I use a page table. A page table is a simple table template you can create in the word processor of your choice, that you use to tell you everything about the contents of a page \u2013 everything except the contents itself. \n\nHere\u2019s a page table I created for an employee\u2019s guide to redundancy in the alpha.gov.uk website:\n\n \n\nGuide to redundancy for employees\n\n\n\tPage objective: Provide specific information for employees who are facing redundancy about the process, their options and next steps.\n\tSource content: directgov page on Redundancy.\n\tScope: In scope\n\n\n\n\t\t\n\t\t\tPage title \n\t\t\t An employee\u2019s guide to redundancy \n\t\t\n\t\t\n\t\t\tPriority content \n\t\t\t Message: You have rights as an employee facing redundancy\nMethod: A guide written in plain English, with links to appropriate additional content.\nA video guide (out of scope).\nCovers the stages of redundancy and rights for those in trade unions and not in trade unions. Glossary of unfamiliar terms.\nCall to action: Read full guide, act to explore redundancy actions, benefits or new employment.\nAssets: link to redundancy calculator. \n\t\t\n\t\t\n\t\t\tSecondary \n\t\t\t Related items, or popular additional links. \nAdditional tools, such as search and suggestions.\n\n\n\n\tlocation set v not set states\n\tmicrocopy encouraging location set where location may make a difference to the content \u2013 ie, Scotland/Northern Ireland.\n\n\t\t\n\t\t\n\t\t\tTertiary \n\t\t\t Footer and standard links. \n\t\t\n\n\n\n\tContent creation: Content exists but was created within the constraints of the previous CMS. Review, correct and edit where necessary.\n\tMaintenance: should be flagged for review upon advice from Department of Work and Pensions, and annually.\n\tTechnology/Publishing/Policy implications: Should be reviewed once the glossary styles have been decided. No video guide in scope at this time, so languages should be simple and screen reader friendly.\n\tReliance on third parties: None, all content and source exists in house.\n\tOutstanding questions: None.\n\n\n \n\nDownload a copy of this page table\n\nThis particular page table template owes a lot to Brain Traffic\u2019s version found in Kristina Halvorson\u2019s book Content Strategy for the Web. With smaller clients than, say, the government, I might use something a bit more casual. With clients who like timescales and deadlines, I might turn it into a covering sheet, with signatures and agreements from two departments who have to work together to get the piece done on time.\n\nI use page tables, and the process of working through them, to reassure clients that I understand the task they face and that I can help them break it down section by section, page stack to page, down to product descriptions and interaction copy. About 80% of my clients break into relieved smiles. Most clients want to work with you to produce something good, they just don\u2019t understand how, and they want you to show them the mountain path on the map. With page tables, clients can understand that with baby steps they can break down their content requirements and commission content they need in time for the designers to work with it (as opposed to around it). 
If I was Santa, these clients would be on my nice list for sure.\n\nMy own special brand of Voldemort-content-evilness comes in how I wield my page tables for the other 20%. Page tables are not always thrilling, I\u2019ll admit. Sometimes they get ignored in favour of other things, yet they are crucial to the continual growth and maintenance of a truly content-led site. For these naughty list clients who, even when given the gift of the page table, continually say \u201cOoh, yes. Content. Right\u201d, I have a special gift. I have a stack of recycled paper under my desk and a cheap black and white laser printer. And I print a blank page table for every conceivable page I can find on the planned redesign. If I\u2019m feeling extra nice, I hole punch them and put them in a fat binder. \n\nThere is nothing like saying, \u201cThis is all the contents you need to have in hand for launch\u201d, and the satisfying thud the binder makes as it hits the table top, to galvanize even the naughtiest clients to start working with you to create the content you need to really create in a content-out way.", "year": "2011", "author": "Relly Annett-Baker", "author_slug": "rellyannettbaker", "published": "2011-12-15T00:00:00+00:00", "url": "https://24ways.org/2011/extracting-the-content/", "topic": "content"} {"rowid": 279, "title": "Design the Invisible to Tell Better Stories on the Web", "contents": "For design to be meaningful we need to tell stories. We need to design the invisible, the cues, the messages and the extra detail hidden beneath the aesthetics. It\u2019s all about the story.\n\n\n\nFrom verbal exchanges around the campfire to books, the web and everything in between, storytelling allows us to share, organize and process information more efficiently. It helps us understand our surroundings and make emotional connections to people, places and experiences.\n\nWeb design lends itself perfectly to the conventions of storytelling, a universal process. However, the stories vary because they\u2019re defined by culture, society, politics and religion. All of which need considering if you are to design stories that are relevant to your target audience.\n\nThe benefits of approaching design with storytelling in mind from the very start of the project is that we are creating considered design that allows users to quickly gather meaning from the website. They do this by reading between the lines and drawing on the wealth of knowledge they have acquired about the associations between colours, typyefaces and signs.\n\nWith so much recognition and analysis happening subconsciously you have to consider how design communicates on this level. This invisible layer has a significant impact on what you say, how you say it and who you say it to.\n\nHow can we design something that\u2019s invisible?\n\nBy researching and making conscious decisions about exactly what you are communicating, you can make the invisible visible. As is often quoted, good design is like air, you only notice it when it\u2019s bad. So by designing the invisible the aim is to design stories that the audience receive subliminally, so that they go somewhat unnoticed, like good air.\n\nStorytelling strands\n\nTo share these stories through design, you can break it down into several strands. Each strand tells a story on its own, but when combined they may start to tell a different story altogether. These strands are colour, typefaces, branding, tone of voice and symbols. 
All are literal and visible but the invisible element is the meaning behind them \u2013 meaning that you can extract and share. In this article I want to focus on colour, typefaces and tone of voice and on how combining story strands can change the meaning.\n\nColour\n\nLet\u2019s start with colour. Red represents emotions such as love but can also signify war. Green is commonly used for all things environmental and purple is a colour that connotes wealth and royalty. These associations between colour and emotion or value have been learned over time and are continually reinforced through media and culture. \n\n\n\nWith this knowledge come expectations from your users. For example, they will expect Valentine\u2019s Day sites to be red and kids\u2019 sites to be bright and colourful. This is true in the same way audiences have expectations of certain genres of film or music. These conventions help savvy audiences decode texts and read between the lines or, rather, to draw meaning from the invisible. It\u2019s practically an innate skill. This is why you need to design the invisible: because users will quickly deduce meaning from your site and fill in the story\u2019s gaps, it\u2019s important to give them as much of that story to begin with. A story relevant to their culture.\n\n\n\nOf all the ways you can tell stories through web design, colour is the most fascinating and important. Not only does it evoke emotions in users but its meaning varies significantly between cultures. In the west, for example, white is a colour associated with weddings, and black is the colour of mourning. This is signified by the traditions of brides wearing white and those in mourning wearing black. In other cultures the meanings are reversed, as black is a colour that represents good luck and white is a colour that signifies mourning. If you assume the same values are true in all cultures then you risk offending the very people you are targeting.\n\nWhen colours combine, the story being told can change. If you design using red, white and blue then it\u2019s going to be difficult to shake off patriotic connotations because this colour combination is so ingrained as being American or British or French thanks largely to their flags. This extends to politics too. Each party has its own representative colour. In the UK, the Conservatives are blue and Labour is red so it would be inappropriate storytelling to design a Labour-related website in blue as there would be a conflict between the content and the design, a conflict that would result in a poor user experience.\n\nConflicts become more of an issue when you start to combine story strands. I once saw a No VAT advert use the symbol on the left:\n\n\n\nThere\u2019s a complete conflict in storytelling here between the sign and its colour. The prohibition sign was used over the word VAT to mean no VAT; that makes sense. But this is a symbol that is used to communicate to people that something is being prohibited or prevented, it mustn\u2019t continue. So to use green contradicts the message of the sign itself; green is used as a colour to say yes, go, proceed, enter. The same would be true if we had a tick in red and a cross in green. Bad design here means the story is flawed and the user experience is compromised.\n\nTypefaces\n\nTypefaces also tell stories. They are so much more than the words that are written with them because they connote different values. 
Here are a few:\n\n\n\nSerif fonts are more formal and are associated with tradition, sophistication and high-end values. Sans serif fonts, on the other hand, are synonymous with modernity, informality and friendliness. These perceptions are again reinforced through more traditional media such as newspaper mastheads, where the serious news-focused broadsheets have serif titles, and the showbiz and gossip-led tabloids have sans serif titles. This translates to the web as well. With these associations already familiar to users, they may see copy and focus on the words, but if the way that copy is displayed jars with the context then we are back to having conflicting stories like the No VAT sign earlier.\n\nLet\u2019s take official institutions, for example. The White House, the monarchy, 10 Downing Street and other government departments are formal, traditional and important organisations. If the copy on their websites were written in a typeface like Cooper Black, it would erase any authority and respect that they were due. They need people to take them seriously and trust them, and part of the way to do this is to have a typeface that represents those values.\n\nIt works both ways though. If Innocent, Threadless or other fun companies used traditional typefaces, they wouldn\u2019t be accurate reflections of their core values, brand and personality. They are better positioned to use friendly, informal and modern typefaces. But still never Comic Sans.\n\nTone of voice\n\nClosely tied to this is tone of voice, my absolute bugbear on the web. Tone of voice isn\u2019t what is said but, rather, how it is said. When we interact with others in person we don\u2019t just listen to the words they say, but we also draw meaning from their body language, and pitch and tone of voice. Just because the web removes that face-to-face interaction with your audience it doesn\u2019t mean you can\u2019t have a tone of voice. \n\n\n\nInnocent pioneered the informal chatty tone of voice that so many others have since emulated, but unless it is representative of your company, then it\u2019s not appropriate. It works for Innocent because the tone of voice is consistent across all the company\u2019s materials, both online and offline. Ben and Jerry\u2019s takes the same approach, as does Threadless, but maybe you need a more formal or corporate tone of voice. It really depends on what your business or service is and who it is for, and that is why I think LoveFilm has it all wrong. \n\nLoveFilm offers a film and game rental service, something fun for people in their downtime. While they aren\u2019t particularly stuffy, neither is their tone of voice very friendly or informal, which is what I would expect from a service like theirs. The reason they have it wrong is in the language they use and the way their sentences are constructed.\n\nThis is the first time we\u2019ve discussed language because, on the whole, designing the invisible isn\u2019t concerned with language at all. But that doesn\u2019t mean that these strands can\u2019t still elicit an emotional response in users. Jon Tan quoted Dr Mazviita Chirimuuta in his New Adventures in Web Design talk in January 2011:\n\n\n\tAlthough there is no absolute separation between language and emotion, there will still be countless instances where you have emotional response without verbal input or linguistic cognition. 
In general language is not necessary for emotion.\n\n\nThis is even more pertinent when the emotions evoked are connected to people\u2019s culture, surroundings and way of life. It makes design personal, something that audiences can connect with at more than just face value but, rather, on a subliminal or, indeed, invisible level. \n\nIt also means that when you are asked the inevitable question of why \u2013 why is blue the dominant colour? why have you used that typeface? why don\u2019t we sound like Innocent? \u2013 you will have a rationale behind each design decision that can explain what story you are telling, how you discovered the story and how it is targeted at the core audience.\n\nResearch\n\nThis is where research plays a vital role in the project cycle. If you don\u2019t know and understand your audience then you don\u2019t know what story to design. Every project lends itself to some level of research, but how in-depth and what methods are most appropriate will be dictated by project requirements and budget restrictions \u2013 but do your research. \n\nEven if you think you know your audience, it doesn\u2019t hurt to research and reaffirm that because cultures and society do change, albeit slowly, but they can change. So ask questions at the start of the project during the research phase:\n\n\n\tWhat do different colours mean for your audience\u2019s culture?\n\tDo the typeface and tone of voice appeal to the demographic?\n\tDoes the brand identity represent the values and personality of your service?\n\tAre there any social, political or religious significances associated with your audience that you need to take into consideration so you don\u2019t offend them?\n\n\nAsk questions, understand your audience, design your story based on these insights, and create better user experiences in context that have good, solid storytelling at their heart.\n\nMajor hat tip to Gareth Strange for the beautiful graphics used within this article.", "year": "2011", "author": "Robert Mills", "author_slug": "robertmills", "published": "2011-12-14T00:00:00+00:00", "url": "https://24ways.org/2011/design-the-invisible/", "topic": "design"} {"rowid": 276, "title": "Your jQuery: Now With 67% Less Suck", "contents": "Fun fact: more websites are now using jQuery than Flash.\n\njQuery is an amazing tool that\u2019s made JavaScript accessible to developers and designers of all levels of experience. However, as Spiderman taught us, \u201cwith great power comes great responsibility.\u201d The unfortunate downside to jQuery is that while it makes it easy to write JavaScript, it makes it easy to write really really f*&#ing bad JavaScript. Scripts that slow down page load, unresponsive user interfaces, and spaghetti code knotted so deep that it should come with a bottle of whiskey for the next sucker developer that has to work on it. \n\nThis becomes more important for those of us who have yet to move into the magical fairy wonderland where none of our clients or users view our pages in Internet Explorer. The IE JavaScript engine moves at the speed of an advancing glacier compared to more modern browsers, so optimizing our code for performance takes on an even higher level of urgency.\n\nThankfully, there are a few very simple things anyone can add into their jQuery workflow that can clear up a lot of basic problems. When undertaking code reviews, three of the areas where I consistently see the biggest problems are: inefficient selectors; poor event delegation; and clunky DOM manipulation. 
We\u2019ll tackle all three of these and hopefully you\u2019ll walk away with some new jQuery batarangs to toss around in your next project.\n\nSelector optimization\n\nSelector speed: fast or slow?\n\nSaying that the power behind jQuery comes from its ability to select DOM elements and act on them is like saying that Photoshop is a really good tool for selecting pixels on screen and making them change color \u2013 it\u2019s a bit of a gross oversimplification, but the fact remains that jQuery gives us a ton of ways to choose which element or elements in a page we want to work with. However, a surprising number of web developers are unaware that all selectors are not created equal; in fact, it\u2019s incredible just how drastic the performance difference can be between two selectors that, at first glance, appear nearly identical. For instance, consider these two ways of selecting all paragraph tags inside a <div> with an ID.\n\n$(\"#id p\");\n\n$(\"#id\").find(\"p\");\n\nWould it surprise you to learn that the second way can be more than twice as fast as the first? Knowing which selectors outperform others (and why) is a pretty key building block in making sure your code runs well and doesn\u2019t frustrate your users waiting for things to happen.\n\nThere are many different ways to select elements using jQuery, but the most common ways can be basically broken down into five different methods. In order, roughly, from fastest to slowest, these are:\n\n\n\t$(\"#id\"); \nThis is without a doubt the fastest selector jQuery provides because it maps directly to the native document.getElementbyId() JavaScript method. If possible, the selectors listed below should be prefaced with an ID selector in conjunction with jQuery\u2019s .find() method to limit the scope of the page that has to be searched (as in the $(\"#id\").find(\"p\") example shown above).\n\t$(\"p\");, $(\"input\");, $(\"form\"); and so on\nSelecting elements by tag name is also fast, since it maps directly to the native document.getElementsByTagname() method.\n\t$(\".class\"); \nSelecting by class name is a little trickier. While still performing very well in modern browsers, it can cause some pretty significant slowdowns in IE8 and below. Why? IE9 was the first IE version to support the native document.getElementsByClassName() JavaScript method. Older browsers have to resort to using much slower DOM-scraping methods that can really impact performance.\n\t$(\"[attribute=value]\");\nThere is no native JavaScript method for this selector to use, so the only way that jQuery can perform the search is by crawling the entire DOM looking for matches. Modern browsers that support the querySelectorAll() method will perform better in certain cases (Opera, especially, runs these searches much faster than any other browser) but, generally speaking, this type of selector is Slowey McSlowersons.\n\t$(\":hidden\");\nLike attribute selectors, there is no native JavaScript method for this one to use. Pseudo-selectors can be painfully slow since the selector has to be run against every element in your search space. Again, modern browsers with querySelectorAll() will perform slightly better here, but try to avoid these if at all possible. If you must use one, try to limit the search space to a specific portion of the page: $(\"#list\").find(\":hidden\");\n\n\nBut, hey, proof is in the performance testing, right? It just so happens that said proof is sitting right here. 
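If you\u2019d like a rough feel for the gap on your own pages before digging into those results, a quick \u2013 and admittedly unscientific \u2013 sketch along these lines will do the job (the #id and p selectors are stand-ins for whatever you\u2019re actually scoping to; the linked jsPerf test remains the more reliable measure):\n\nvar i;\n\nconsole.time(\"descendant selector\");\nfor (i = 0; i < 5000; i++) {\n $(\"#id p\"); // the full selector string is parsed and run on every pass\n}\nconsole.timeEnd(\"descendant selector\");\n\nconsole.time(\"#id then .find()\");\nfor (i = 0; i < 5000; i++) {\n $(\"#id\").find(\"p\"); // getElementById first, then a search scoped to that element\n}\nconsole.timeEnd(\"#id then .find()\");\n\n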
Be sure to notice the class selector numbers beside IE7 and 8 compared to other browsers and then wonder how the people on the IE team at Microsoft manage to sleep at night. Yikes.\n\nChaining\n\nAlmost all jQuery methods return a jQuery object. This means that when a method is run, its results are returned and you can continue executing more methods on them. Rather than writing out the same selector multiple times over, just making a selection once allows multiple actions to be run on it.\n\nWithout chaining\n\n$(\"#object\").addClass(\"active\");\n$(\"#object\").css(\"color\",\"#f0f\");\n$(\"#object\").height(300);\n\nWith chaining\n\n$(\"#object\").addClass(\"active\").css(\"color\", \"#f0f\").height(300);\n\nThis has the dual effect of making your code shorter and faster. Chained methods will be slightly faster than multiple methods made on a cached selector, and both ways will be much faster than multiple methods made on non-cached selectors. Wait\u2026 \u201ccached selector\u201d? What is this new devilry? \n\nCaching\n\nAnother easy way to speed up your code that seems to be a mystery to developers is the idea of caching your selectors. Think of how many times you end up writing the same selector over and over again in any project. Every $(\".element\") selector has to search the entire DOM each time, regardless of whether or not that selector had been previously run. Running the selection once and then storing the results in a variable means that the DOM only has to be searched once. Once the results of a selector have been cached, you can do anything with them.\n\nFirst, run your search (here we\u2019re selecting all of the <li> elements inside <ul id=\"blocks\">): \n\nvar blocks = $(\"#blocks\").find(\"li\");\n\nNow, you can use the blocks variable wherever you want without having to search the DOM every time.\n\n$(\"#hideBlocks\").click(function() {\n blocks.fadeOut();\n});\n$(\"#showBlocks\").click(function() {\n blocks.fadeIn();\n});\n\nMy advice? Any selector that gets run more than once should be cached. This jsperf test shows just how much faster a cached selector runs compared to a non-cached one (and even throws some chaining love in to boot).\n\nEvent delegation\n\nEvent listeners cost memory. In complex websites and apps it\u2019s not uncommon to have a lot of event listeners floating around, and thankfully jQuery provides some really easy methods for handling event listeners efficiently through delegation.\n\nIn a bit of an extreme example, imagine a situation where a 10\u00d710 cell table needs to have an event listener on each cell; let\u2019s say that clicking on a cell adds or removes a class that defines the cell\u2019s background color. A typical way that this might be written (and something I\u2019ve often seen during code reviews) is like so:\n\n$('table').find('td').click(function() {\n $(this).toggleClass('active');\n});\n\njQuery 1.7 has provided us with a new event listener method, .on(). It acts as a utility that wraps all of jQuery\u2019s previous event listeners into one convenient method, and the way you write it determines how it behaves. To rewrite the above .click() example using .on(), we\u2019d simply do the following:\n\n$('table').find('td').on('click',function() {\n $(this).toggleClass('active');\n});\n\nSimple enough, right? Sure, but the problem here is that we\u2019re still binding one hundred event listeners to our page, one to each individual table cell. 
A far better way to do things is to create one event listener on the table itself that listens for events inside it. Since the majority of events bubble up the DOM tree, we can bind a single event listener to one element (in this case, the <table>) and wait for events to bubble up from its children. The way to do this using the .on() method requires only one change from our code above:\n\n$('table').on('click','td',function() {\n $(this).toggleClass('active');\n});\n\nAll we\u2019ve done is moved the td selector to an argument inside the .on() method. Providing a selector to .on() switches it into delegation mode, and the event is only fired for descendants of the bound element (table) that match the selector (td). With that one simple change, we\u2019ve gone from having to bind one hundred event listeners to just one. You might think that the browser having to do one hundred times less work would be a good thing and you\u2019d be completely right. The difference between the two examples above is staggering.\n\n(Note that if your site is using a version of jQuery earlier than 1.7, you can accomplish the very same thing using the .delegate() method. The syntax of how you write the function differs slightly; if you\u2019ve never used it before, it\u2019s worth checking the API docs for that page to see how it works.)\n\nDOM manipulation\n\njQuery makes it very easy to manipulate the DOM. It\u2019s trivial to create new nodes, insert them, remove other ones, move things around, and so on. While the code to do this is simple to write, every time the DOM is manipulated, the browser has to repaint and reflow content which can be extremely costly. This is no more evident than in a long loop, whether it be a standard for() loop, while() loop, or jQuery $.each() loop.\n\nIn this case, let\u2019s say we\u2019ve just received an array full of image URLs from a database or Ajax call or wherever, and we want to put all of those images in an unordered list. Commonly, you\u2019ll see code like this to pull this off:\n\nvar arr = [reallyLongArrayOfImageURLs]; \n $.each(arr, function(count, item) {\n var newImg = '<li><img src=\"'+item+'\"></li>';\n $('#imgList').append(newImg);\n });\n\nThere are a couple of problems with this. For one (which you should have already noticed if you\u2019ve read the earlier part of this article), we\u2019re making the $(\"#imgList\") selection once for each iteration of our loop. The other problem here is that each time the loop iterates, it\u2019s adding a new <li> to the DOM. Each of those insertions is going to be costly, and if our array is quite large then this could lead to a massive slowdown or even the dreaded \u2018A script is causing this page to run slowly\u2019 warning.\n\nvar arr = [reallyLongArrayOfImageURLs],\n tmp = ''; \n$.each(arr, function(count, item) {\n tmp += '<li><img src=\"'+item+'\"></li>';\n});\n$('#imgList').append(tmp);\n\nAll we\u2019ve done here is create a tmp variable that each <li> is added to as it\u2019s created. Once our loop has finished iterating, that tmp variable will contain all of our list items in memory, and can be appended to our <ul> all in one go. Browsers work much faster when working with objects in memory rather than on screen, so this is a much faster, more CPU-cycle-friendly method of building a list.\n\nWrapping up\n\nThese are far from being the only ways to make your jQuery code run better, but they are among the simplest ones to implement. 
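By way of a quick recap, here\u2019s a rough sketch that pulls those habits together \u2013 a cached, scoped selector, chained methods, a single delegated listener and one DOM insertion. (The #gallery markup, the buildThumbs() function and the class names are all invented purely for illustration.)\n\nvar gallery = $(\"#gallery\"),\n thumbs = gallery.find(\"ul\"); // cache the scoped selections once\n\n// one delegated listener on the container, not one per thumbnail\ngallery.on(\"click\", \"li\", function() {\n $(this).toggleClass(\"selected\");\n});\n\nfunction buildThumbs(urls) {\n var html = \"\";\n $.each(urls, function(count, item) {\n html += '<li><img src=\"' + item + '\"></li>';\n });\n // build the markup in memory, then touch the DOM once, chaining as we go\n thumbs.append(html).addClass(\"loaded\");\n}\n\nNone of this is exotic; it\u2019s just the same handful of habits applied in one place.\n\n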
Though each individual change may only make a few milliseconds of difference, it doesn\u2019t take long for those milliseconds to add up. Studies have shown that the human eye can discern delays of as few as 100ms, so simply making a few changes sprinkled throughout your code can very easily have a noticeable effect on how well your website or app performs. Do you have other jQuery optimization tips to share? Leave them in the comments and help make us all better.\n\nNow go forth and make awesome!", "year": "2011", "author": "Scott Kosman", "author_slug": "scottkosman", "published": "2011-12-13T00:00:00+00:00", "url": "https://24ways.org/2011/your-jquery-now-with-less-suck/", "topic": "code"} {"rowid": 288, "title": "Displaying Icons with Fonts and Data-Attributes", "contents": "Traditionally, bitmap formats such as PNG have been the standard way of delivering iconography on websites. They\u2019re quick and easy, and it also ensures they\u2019re as pixel crisp as possible. Bitmaps have two drawbacks, however: multiple HTTP requests, affecting the page\u2019s loading performance; and a lack of scalability, noticeable when the page is zoomed or viewed on a screen with a high pixel density, such as the iPhone 4 and 4S.\n\nThe requests problem is normally solved by using CSS sprites, combining the icon set into one (physically) large image file and showing the relevant portion via background-position. While this works well, it can get a bit fiddly to specify all the positions. In particular, scalability is still an issue. A vector-based format such as SVG sounds ideal to solve this, but browser support is still patchy.\n\n\n\nThe rise and adoption of web fonts have given us another alternative. By their very nature, they\u2019re not only scalable, but resolution-independent too. No need to specify higher resolution graphics for high resolution screens! \n\nThat\u2019s not all though:\n\n\n\tBrowser support: Unlike a lot of new shiny techniques, they have been supported by Internet Explorer since version 4, and, of course, by all modern browsers. We do need several different formats, however!\n\tDesign on the fly: The font contains the basic graphic, which can then be coloured easily with CSS \u2013 changing colours for themes or :hover and :focus styles is done with one line of CSS, rather than requiring a new graphic. You can also use CSS3 properties such as text-shadow to add further effects. Using -webkit-background-clip: text;, it\u2019s possible to use gradient and inset shadow effects, although this creates a bitmap mask which spoils the scalability.\n\tSmall file size: specially designed icon fonts, such as Drew Wilson\u2019s Pictos font, can be as little as 12Kb for the .woff font. This is because they contain fewer characters than a fully fledged font. You can see Pictos being used in the wild on sites like Garrett Murray\u2019s Maniacal Rage.\n\n\nAs with all formats though, it\u2019s not without its disadvantages: \n\n\n\tIcons can only be rendered in monochrome or with a gradient fill in browsers that are capable of rendering CSS3 gradients. Specific parts of the icon can\u2019t be a different colour.\n\tIt\u2019s only appropriate when there is an accompanying text to provide meaning. This can be alleviated by wrapping the text label in a tag (I like to use <b> rather than <span>, due to the fact that it\u2019s smaller and isn\u2019t being used elsewhere) and then hiding it from view with text-indent:-999em.\n\tCreating an icon font can be a complex and time-consuming process. 
While font editors can carry out hinting automatically, the best results are achieved manually.\n\tUnless you\u2019re adept at creating your own fonts, you\u2019re restricted to what is available in the font. However, fonts like Pictos will cover the most common needs, and icons are most effective when they\u2019re using familiar conventions.\n\n\nThe main complaint about using fonts for icons is that it can mean adding a meaningless character to our markup. The good news is that we can overcome this by using one of two methods \u2013 CSS generated content or the data-icon attribute \u2013 in combination with the :before and :after pseudo-selectors, to keep our markup minimal and meaningful. \n\nOur simple markup looks like this:\n\n<a href=\"/basket\" class=\"icon basket\">View Basket</a>\n\nNote the multiple class attributes. Next, we\u2019ll import the Pictos font using the @font-face rule in CSS:\n\n@font-face {\n font-family: 'Pictos';\n src: url('pictos-web.eot');\n src: local('\u263a'), \n url('pictos-web.woff') format('woff'), \n url('pictos-web.ttf') format('truetype'),\n url('pictos-web.svg#webfontIyfZbseF') format('svg');\n}\n\nThis rather complicated looking set of rules is (at the time of writing) the most bulletproof way of ensuring as many browsers as possible load the font we want. We\u2019ll now use the content property applied to the :before pseudo-element to generate our icon. Once again, we\u2019ll use those multiple class attribute values to set common icon styles, then specific styles for .basket. This helps us avoid repeating styles:\n\n.icon {\n font-family: 'Pictos';\n font-size: 22px;\n}\n\n.basket:before {\n content: \"$\";\n}\n\nWhat does the :before pseudo-element do? It generates the dollar character in a browser, even when it\u2019s not present in the markup. Using the generated content approach means our markup stays simple, but we\u2019ll need a new line of CSS, defining what letter to apply to each class attribute for every icon we add.\n\ndata-icon is a new alternative approach that uses the HTML5 data- attribute in combination with CSS attribute selectors. This new attribute lets us add our own metadata to elements, as long as it\u2019s prefixed by data- and doesn\u2019t contain any uppercase letters. In this case, we want to use it to provide the letter value for the icon. Look closely at this markup and you\u2019ll see the data-icon attribute.\n\n<a href=\"/basket\" class=\"icon\" data-icon=\"$\">View Basket</a>\n\n\n\nWe could add others, in fact as many as we like.\n\n<a href=\"/\" class=\"icon\" data-icon=\"k\">Favourites</a>\n<a href=\"/\" class=\"icon\" data-icon=\"t\">History</a>\n<a href=\"/\" class=\"icon\" data-icon=\"@\">Location</a>\n\n\n\nThen, we need just one CSS attribute selector to style all our icons in one go:\n\n.icon:before {\n content: attr(data-icon);\n /* Insert your fancy colours here */\n }\n\nBy placing our custom attribute data-icon in the selector in this way, we can enable CSS to read the value of that attribute and display it before the element (in this case, the anchor tag). It saves writing a lot of CSS rules. I can imagine that some may not like the extra attribute, but it does keep it out of the actual content \u2013 generated or not.\n\n\n\n\n\nThis could be used for all manner of tasks, including a media player and large simple illustrations. See the demo for live examples. 
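To make the media player idea a little more concrete, a set of controls might be marked up something like this \u2013 bear in mind the data-icon letters below are purely illustrative, since the right characters depend entirely on the font you\u2019re using:\n\n<div class=\"player-controls\">\n <a href=\"#rewind\" class=\"icon\" data-icon=\"r\"><b>Rewind</b></a>\n <a href=\"#play\" class=\"icon\" data-icon=\"p\"><b>Play</b></a>\n <a href=\"#forward\" class=\"icon\" data-icon=\"f\"><b>Fast forward</b></a>\n</div>\n\nThe .icon:before rule above takes care of the glyphs, and the text labels can be tucked out of sight with the text-indent trick mentioned earlier, so the controls keep their meaning without any extra images.\n\n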
Go ahead and zoom the demo page, and the icons will be crisp, with the exception of the examples that use -webkit-background-clip: text as mentioned earlier.\n\nFinally, it\u2019s worth pointing out that with both generated content and the data-icon method, the letter will be announced to people using screen readers. For example, with the shopping basket icon above, the reader will say \u201cdollar sign view basket\u201d. As accessibility issues go, it\u2019s not exactly the worst, but could be confusing. You would need to decide whether this method is appropriate for the audience. Despite the disadvantages, icon fonts have huge potential.", "year": "2011", "author": "Jon Hicks", "author_slug": "jonhicks", "published": "2011-12-12T00:00:00+00:00", "url": "https://24ways.org/2011/displaying-icons-with-fonts-and-data-attributes/", "topic": "code"} {"rowid": 281, "title": "Nine Things I've Learned", "contents": "I\u2019ve been a professional graphic designer for fourteen years and for just under four of those a professional web designer. Like most designers I\u2019ve learned a lot in my time, both from a design point of view and in business as a freelance designer. A few of the things I\u2019ve learned stick out in my mind, so I thought I\u2019d share them with you. They\u2019re pretty random and in no particular order.\n\n1. Becoming the designer you want to be\n\nWhen I started out as a young graphic designer, I wanted to design posters and record sleeves, pretty much like every other young graphic designer. The problem is that the reality of the world means that when you get your first job you\u2019re designing the back of a paracetamol packet or something equally weird. I recently saw a tweet that went something like this: \u201cYou\u2019ll never become the designer you always dreamt of being by doing the work you never wanted to do\u201d. This is so true; to become the designer you want to be, you need to be designing the things you\u2019re passionate about designing. This probably means working in the evenings and weekends for little or no money, but it\u2019s time well spent. Doing this will build up your portfolio with the work that really shows what you can do! Soon, someone will ask you to design something based on having seen this work. From this point, you\u2019re carving your own path in the direction of becoming the designer you always wanted to be.\n\n2. Compete on your own terms\n\nAs well as all being friends, we are also competitors. In order to win new work we need a selling point, preferably a unique selling point. Web design is a combination of design disciplines \u2013 user experience design, user interface design, visual design, development, and so on. Some companies will sell themselves as UX specialists, which is fine, but everyone who designs a website from scratch does some sort of UX, so it\u2019s not really a unique selling point. Of course, some people do it better than others.\n\nOne area of web design that clients have a strong opinion on, and will judge you by, is visual design. It\u2019s an area in which it\u2019s definitely possible to have a unique selling point. Designing the visual aesthetic for a website is a combination of logical decision making and a certain amount of personal style. If you can create a unique visual style to your work, it can become a selling point that\u2019s unique to you.\n\n3. How much to charge and staying motivated\n\nWhen you\u2019re a freelance designer one of the hardest things to do is put a price on your work and skills. 
Finding the right amount to charge is a fine balance between supplying value to your customer and also charging enough to stay motivated to do a great job. It\u2019s always tempting to offer a low price to win work, but it\u2019s often not the best approach: not just for yourself but for the client as well.\n\nA client once asked me if I could reduce my fee by \u00a31,000 and still be motivated enough to do a good job. In this case the answer was yes, but it was the question that resonated with me. I realized I could use this as a gauge to help me price projects. Before I send out a quote I now always ask myself the question \u201cIs the amount I\u2019ve quoted enough to make me feel motivated to do my best on this project?\u201d I never send out a quote unless the answer is yes. In my mind there\u2019s no point in doing any project half-heartedly, as every project is an opportunity to build your reputation and expand your portfolio to show potential clients what you can do. Offering a client a good price but not being prepared to put everything you have into it, isn\u2019t value for money. \n\n4. Supplying the right design\n\nWhen I started out as a graphic designer it seemed to be the done thing to supply clients with a ton of options for their logo or brochure designs. In a talk given by Dan Rubin, he mentioned that this was a legacy of agencies competing with each other in a bid to create the illusion of offering more value for money. Over the years, I\u2019ve realized that offering more than one solution makes no sense. The reason a client comes to you as a designer is because you\u2019re the person than can get it right. If I were to supply three options, I\u2019d be knowingly offering my client at least two options that I didn\u2019t think worked.\n\nTo this day I still get asked how many homepage design options I\u2019ll supply for the quoted amount. The answer is one. Of course, I\u2019m more than happy to iterate upon the design to fine-tune it and, on the odd occasion, I do revisit a design concept if I just didn\u2019t nail the design first time around. Your time is much better spent refining the right design option than rushing out three substandard designs in the same amount of time.\n\n5. Colour is key\n\nThere are many contributing factors that go into making a good visual design, but one of the simplest ways to do this is through the use of colour. The colour palette used in a design can have such a profound effect on a visual design that it almost feels like you\u2019re cheating. It\u2019s easy to add more and more subtle shades of colour to add a sense of sophistication and complexity to a design, but it dilutes the overall visual impact. When I design, I almost have a rule that only allows me to use a very limited colour palette. I don\u2019t always stick to it, but it\u2019s always in mind and something I\u2019m constantly reviewing through my design process.\n\n6. Creative thinking is central to good or boundary-pushing web design\n\nWhen we think of creativity in web design we often link this to the visual design, as there is an obvious opportunity to be creative in this area if the brief allows it. Something that I\u2019ve learnt in my time as a web designer is that there\u2019s a massive need for creative thinking in the more technical aspects of web design. The tools we use for building websites are there to be manipulated and used in creative ways to design exciting and engaging user experiences. 
Great developers are constantly using their creativity to push the boundaries of what can be done with CSS, jQuery and JavaScript. \n\nBeing creative and creative thinking are things we should embrace as an industry and they are qualities that can be found in anyone, whether they be a visual designer or Rails developer.\n\n7. Creative block: don\u2019t be afraid to get things wrong\n\nCreative block can be a killer when designing. It\u2019s often applied to visual design, which is more subjective. I suffer from creative block on a regular basis. It\u2019s hugely frustrating and can screw up your schedule. Having thought about what creative block actually is, I\u2019ve come to the conclusion that it\u2019s actually more of a lack of direction than a lack of ideas. You have ideas and solutions in mind but don\u2019t feel committed to any of them. You\u2019re scared that whatever direction you take, it\u2019ll turn out to be wrong. I\u2019ve found that the best remedy for this is to work through this barrier. It\u2019s a bit like designing with a blindfold on \u2013 you don\u2019t really know where you\u2019re going. If you stick to your guns and keep pressing forward I find that, nine times out of ten, this process leads to a solution. As the page begins to fill, the direction you\u2019re looking for slowly begins to take shape.\n\n8. You get better at designing by designing\n\nI often get emails asking me what books someone can read to help them become a better designer. There are a lot of good books on subjects like HTML5, CSS, responsive web design and the like, that will really help improve anyone\u2019s web design skills. But, when it comes to visual design, the best way to get better is to design as much as possible. You can\u2019t follow instructions for these things because design isn\u2019t following instructions. A large part of web design is definitely applying a set of widely held conventions, but there\u2019s another part to it that is invention and the only way to get better at this is to do it as much as possible.\n\n9. Self-belief is overrated\n\nThroughout our lives we\u2019re told to have self-belief. Self-belief and confidence in what we do, whatever that may be. The problem is that some people find it easier than others to believe in themselves. I\u2019ve spent years trying to convince myself to believe in what I do but have always found it difficult to have complete confidence in my design skills. Self-doubt always creeps in.\n\nI\u2019ve realized that it\u2019s ok to doubt myself and I think it might even be a good thing! I\u2019ve realized that it\u2019s my self-doubt that propels me forward and makes me work harder to achieve the best results. The reason I\u2019m sharing this is because I know I\u2019m not the only designer that feels this way. You can spend a lot of time fighting self-doubt only to discover that it\u2019s your body\u2019s natural mechanism to help you do the best job possible.", "year": "2011", "author": "Mike Kus", "author_slug": "mikekus", "published": "2011-12-11T00:00:00+00:00", "url": "https://24ways.org/2011/nine-things-ive-learned/", "topic": "business"} {"rowid": 275, "title": "Context First: Web Strategy in Four Handy Ws", "contents": "Many, many years ago, before web design became my proper job, I trained and worked as a journalist. I studied publishing in London and spent three fun years learning how to take a few little nuggets of information and turn them into a story. 
I learned a bunch of stuff that has all been a huge help to my design career. Flatplanning, layout, typographic theory. All of these disciplines have since translated really well to web design, but without doubt the most useful thing I learned was how to ask difficult questions.\n\nPretty much from day one of journalism school they hammer into you the importance of the Five Ws. Five disarmingly simple lines of enquiry that eloquently manage to provide the meat of any decent story. And with alliteration thrown in too. For a young journo, it\u2019s almost too good to be true.\n\nWho? What? Where? When? Why? It seems so obvious to almost be trite but, fundamentally, any story that manages to answer those questions for the reader is doing a pretty good job. You\u2019ll probably have noticed feeling underwhelmed by certain news pieces in the past \u2013 disappointed, like something was missing. Some irritating oversight that really lets the story down. No doubt it was one of the Ws \u2013 those innocuous little suckers are generally only noticeable by their absence, but they sure get missed when they\u2019re not there. \n\nQuestion everything\n\nI\u2019ve always been curious. An inveterate tinkerer with things and asker of dopey questions, often to the point of abject annoyance for anyone unfortunate enough to have ended up in my line of fire. So, naturally, the Five Ws started drifting into other areas of my life. I\u2019d scrutinize everything, trying to justify or explain my rationale using these Ws, but I\u2019d also find myself ripping apart the stuff that clearly couldn\u2019t justify itself against the same criteria.\n\nSo when I started working as a designer I applied the same logic and, sure enough, the Ws pretty much mapped to the exact same needs we had for gathering requirements at the start of a project. It seemed so obvious, such a simple way to establish the purpose of a product. What was it for? Why we were making it? And, of course, who were we making it for? It forced clients to stop and think, when really what they wanted was to get going and see something shiny. Sometimes that was a tricky conversation to have, but it\u2019s no coincidence that those who got it also understood the value of strategy and went on to have good solid products, while those that didn\u2019t often ended up with arrogantly insular and very shiny but ultimately unsatisfying and expendable products. Empty vessels make the most noise and all that\u2026\n\nContent first\n\nI was both surprised and pleased when the whole content first idea started to rear its head a couple of years back. Pleased, because without doubt it\u2019s absolutely the right way to work. And surprised, because personally it\u2019s always been the way I\u2019ve done it \u2013 I wasn\u2019t aware there was even an alternative way. Content in some form or another is the whole reason we were making the things we were making. I can\u2019t even imagine how you\u2019d start figuring out what a site needs to do, how it should be structured, or how it should look without a really good idea of what that content might be. It baffles me still that this was somehow news to a lot of people. What on earth were they doing? 
Design without purpose is just folly, surely?\n\nIt\u2019s great to see the idea gaining momentum but, having watched it unfold, it occurred to me recently that although it\u2019s fantastic to see a tangible shift in thinking \u2013 away from those bleak times, where making things up was somehow deemed an appropriate way to do things \u2013 there\u2019s now a new bad guy in town.\n\nWith any buzzword solution of the moment, there\u2019s always a catch, and it seems like some have taken the content first approach a little too literally. By which I mean, it\u2019s literally the first thing they do. The project starts, there\u2019s a very cursory nod towards gathering requirements, and off they go, cranking content. Writing copy, making video, commissioning illustrations.\n\nAll that\u2019s happened is that the \u2018making stuff up\u2019 part has shifted along the line, away from layout and UI, back to the content. \n\nStarting is too easy\n\nI can\u2019t remember where I first heard that phrase, but it\u2019s a great sentiment which applies to so much of what we do on the web. The medium is so accessible and to an extent disposable; throwing things together quickly carries far less burden than in any other industry. We\u2019re used to tweaking as we go, changing bits, iterating things into shape. The ubiquitous beta tag has become the ultimate caveat, and has made the unfinished and unpolished acceptable. Of course, that can work brilliantly in some circumstances. Occasionally, a product offers such a paradigm shift it\u2019s beyond the level of deep planning and prelaunch finessing we\u2019d ideally like. But, in the main, for most client sites we work on, there really is no excuse not to do things properly. To ask the tricky questions, to challenge preconceptions and really understand the Ws behind the products we\u2019re making before we even start. \n\nThe four Ws\n\nFor product definition, only four of the five Ws really apply, although there\u2019s a lot of discussion around the idea of when being an influencing factor. For example, the context of a user\u2019s engagement with your product is something you can make a call on depending on the specifics of the project.\n\nSo, here\u2019s my take on the four essential Ws. I\u2019ll point out here that, of course, these are not intended to be autocratic dictums. Your needs may differ, your clients\u2019 needs may differ, but these four starting points will get you pretty close to where you need to be.\n\nWho \n\nIt\u2019s surprising just how many projects start without a real understanding of the intended audience. Many clients think they have an idea, but without really knowing \u2013 it\u2019s presumptive at best, and we all know what presumption is the mother of, right? Of course, we can\u2019t know our audiences in the same way a small shop owner might know their customers. But we can at least strive to find out what type of people are likely to be using the product. I\u2019m not talking about deep user research. That should come later.\n\nThese are the absolute basics. What\u2019s the context for their visit? How informed are they? What\u2019s their level of comprehension? Are they able to self-identify and relate to categories you have created? I could go on, and it changes on a per-project basis. You\u2019ll only find this out by speaking to them, if not in person, then indirectly through surveys, questionnaires or polls. 
The mechanism is less important than actually reaching out and engaging with them, because without that understanding it\u2019s impossible to start to design with any empathy.\n\nWhat\n\nOnce you become deeply involved directly with a product or service, it\u2019s notoriously difficult to see things as an outsider would. You learn the thing inside and out, you develop shortcuts and internal phraseology. Colloquialisms creep in. You become too close. So it\u2019s no surprise when clients sometimes struggle to explain what it is their product actually does in a way that others can understand.\n\nOften products are complex but, really, the core reasons behind someone wanting to use that product are very simple. There\u2019s a value proposition for the customer and, if they choose to engage with it, there\u2019s a value exchange. If that proposition or exchange isn\u2019t transparent, then people become confused and will likely go elsewhere. Make sure both your client and you really understand what that proposition is and, in turn, what the expected exchange should be. In a nutshell: what is the intended outcome of that engagement? Often the best way to do this is to strip everything back to nothing. Verbosity is rife on the web. Just because it\u2019s easy to create content, that shouldn\u2019t be a reason to do so. Figure out what the value proposition is and then reintroduce content elements that genuinely help explain or present that to a level that is appropriate for the audience. \n\nWhy \n\nIn advertising, they talk about the truths behind a product or service. Truths can be either tangible or abstract, but the most important part is the resonance those truths hit with a customer. In a digital product or service those truths are often exposed as benefits. Why is this what I need? Why will it work for me? Why should I trust you? The why is one of the more fluffy Ws, yet it\u2019s such an important one to nail. Clients can get prickly when you ask them to justify the why behind their product, but it\u2019s a fantastic way to make sure the value proposition is clear, realistic and meets with the expectations of both client and customer.\n\nIt\u2019s our job as designers to question things: we\u2019re not just a pair of hands for clients. Just recently I spoke to a potential client about a site for his business. I asked him why people would use his product and also why his product seemed so fractured in its direction. He couldn\u2019t answer that question so, instead of ploughing on regardless, he went back to his directors and is now re-evaluating that business. It was awkward but he thanked me and hopefully he\u2019ll have a better product as a result.\n\nWhere\n\nIn this instance, where is not so much a geographical thing, although in some cases that level of context may indeed become an influencing factor\u2026 The where we\u2019re talking about here is the position of the product in relation to others around it. By looking at competitors or similar services around the one you are designing, you can start to get a sense for many of the things that are otherwise hard to pin down or have yet to be defined. For example, in a collection of sites all selling cars, where does yours fit most closely? Where are the overlaps? How are they communicating to their customers? How is the product range presented or categorized?\n\nIt\u2019s good to look around and see how others are doing it. 
Not in a quest for homogeneity but more to reference or to avoid certain patterns that may or may not make sense for your own particular product. Clients often strive to be different for the sake of it. They feel they need to provide distinction by going against the flow a bit. We know different. We know users love convention. They embrace familiar mental models. They\u2019re comfortable with things that they\u2019ve experienced elsewhere. By showing your client that position is a vital part of their strategy, you can help shape their product into something great. \n\nTo conclude\n\nSo there we have it \u2013 the four Ws. Each part tells a different and vital part of the story you need to be able to make a really good product. It might sound like a lot of work, particularly when the client is breathing down your neck expecting to see things, but without those pieces in place, the story you\u2019re building your product on, and the content that you\u2019re creating to form that product can only ever fit into one genre. Fiction.", "year": "2011", "author": "Alex Morris", "author_slug": "alexmorris", "published": "2011-12-10T00:00:00+00:00", "url": "https://24ways.org/2011/context-first/", "topic": "content"} {"rowid": 285, "title": "Composing the New Canon: Music, Harmony, Proportion", "contents": "Ohne Musik w\u00e4re das Leben ein Irrtum\n\u2014Friedrich NIETZSCHE, G\u00f6tzen-D\u00e4mmerung, Spr\u00fcche und Pfeile 33, 1889\n\n\nSomehow, music is hardcoded in human beings. It is something we understand and respond to without prior knowledge. Music exercises the emotions and our imaginative reflex, not just our hearing. It behaves so much like our emotions that music can seem to symbolize them, to bear them from one person to another. Not surprisingly, it conjures memories: the word music derives from Greek \u03bc\u03bf\u03c5\u03c3\u03b9\u03ba\u03ae (mousike), art of the Muses, whose mythological mother was Mnemosyne, memory. But it can also summon up the blood, console the bereaved, inspire fanaticism, bolster governments and dissenters alike, help us learn, and make web designers dance. And what would Christmas be without music?\n\nMusic moves us, often in ways we can\u2019t explain. By some kind of alchemy, music frees us from the elaborate nuisance and inadequacy of words. Across the world and throughout recorded history \u2013 and no doubt well before that \u2013 people have listened and made (and made out to) music.\n\n\n\t[I]t appears probable that the progenitors of man, either the males or females or both sexes, before acquiring the power of expressing their mutual love in articulate language, endeavoured to charm each other with musical notes and rhythm.\n\u2014Charles DARWIN, The Descent of Man, and Selection in Relation to Sex, 1871\n\n\nIt\u2019s so integral to humankind, we\u2019ve sent it into space as a totem for who we are. (Who knows? It might be important.) Music is essential, a universal compulsion; as Nietzsche wrote, without music life would be a mistake.\n\nMusic, design and web design\n\nThere are some obvious and notable similarities between music and visual design. Both can convey mood and evoke emotion but, even under close scrutiny, how they do that remains to a great extent mysterious. 
Each has formal qualities or parts that can be abstracted, analysed and discussed, often using the same terminology: composition, harmony, rhythm, repetition, form, theme; even colour, texture and tone.\n\nA possible reason for these shared aspects is that both visual design and music are means to connect with people in deep and lasting ways. Furthermore, I believe the connections to be made can complement direct emotional appeal. Certain aesthetic qualities in music work on an unconscious and, it could be argued, universal level. Using musical principles in our designs, then, can help provide the connectedness between content, device and user that we now seek as web designers.\n\nYet, when we talk about music and web design, the conversation is almost always about the music designers listen to while working, a theme finding its apotheosis in Designers.MX. Sometimes, articles in that dreary list format seek inspiration from music industry websites. There\u2019s even a service offering pre-templated web designs for bands, and at least one book surveyed the landscape back in 2006. Occasionally, discussions broaden somewhat into whether and how different kinds of music can inspire and influence the design work we produce.\n\nSuch enquiries, it seems to me, are beside the point. Do I really design differently when I listen to Bach rather than Bacharach? Will the barely restrained energy of Count Basie\u2019s The Kid from Red Bank mean I choose a lively colour palette, and rural, autumnal shades when inspired by Fleet Foxes? Mahler means a thirteen-column layout? Gillian Welch leads to distressed black and white photography? While reflecting the importance we place in music and how it seems to help us in our work, surveys on musical taste and lists of favourite artists fail to recognize that some of the fundamental aesthetic characteristics of music can be adapted and incorporated into modern web design.\n\nAntiphonal geometry\n\nOver recent years, web designers have embraced grid systems as powerful tools to help create good-looking and intuitive user experiences. With the advent of responsive design, these grids and their contents must adapt to the different screen sizes and properties of all kinds of user devices. Finding and using grid values that can scale well and retain or enhance their proportions and relationships while making the user experience meaningful in several different contexts is more important than ever.\n\nIn print, this challenge has always started with the dimensions and proportions of the page. Content can thereby be made to belong inside the page and be bound to it. And music has been used for centuries to further this aim. As Robert Bringhurst says in The Elements of Typographical Style:\n\n\n\tIndeed, one of the simplest of all systems of page proportions is based on the familiar intervals of the diatonic scale. Pages that embody these basic musical proportions have been in common use in Europe for more than a thousand years.\n\n\nVery well. But while he goes on to list (from the full chromatic scale, rather than just diatonic) the proportions and the musical intervals they\u2019re based on, Bringhurst fails to mention what they\u2019re ratios of or their potential effects. Shame. In his favour, however, he later touches on how proportions in print might be considered to work:\n\n\n\tThe page is a piece of paper. It is also a visible and tangible proportion, silently sounding the thoroughbass of the book. On it lies the textblock, which must answer to the page. 
The two together \u2013 page and textblock \u2013 produce an antiphonal geometry. That geometry alone can bond the reader to the book. Or conversely, it can put the reader to sleep, or put the reader\u2019s nerves on edge, or drive the reader away.\n\n\nSo what does Bringhurst mean by antiphonal geometry, a phrase that marries the musical to the spatial? By stating that the textblock \u201cmust answer to the page\u201d, the implication is that the relationship between the proportions of the page and the shape of the textblock printed on it embodies a spatial (geometrical) call-and-response (antiphony) that can be appealing or not.\n\nBoulton\u2019s new canon\n\nBut, as Mark Boulton has pointed out, on the web \u201cthere are no edges. There are no \u2018pages\u2019. We\u2019ve made them up.\u201d So, what is to be done? In January 2011 at the New Adventures in Web Design conference, Boulton presented his vision of a new canon of web design, a set of principles to guide us as we design the web. There are three overlapping tenets:\n\n\n\tdesign from the content out\n\tcreate connectedness between the different content elements\n\tbind the content to the web device\n\n\nRather than design from the edges in, we need to design layout systems from the content out. To this end, Boulton asserts that grid system design should begin with a constraint, and he suggests we use the size of a fixed content element, such as an advertising unit or image, as a starting point for online grid calculations. Khoi Vinh advocates the same approach in his book, Ordering Disorder: Grid Principles for Web Design.\n\nBoulton\u2019s second and third tenets, however, are more complex and overlap significantly with each other. Connecting the different parts of the content and binding the content to the device share many characteristics and solutions:\n\n\n\tadopting ems and percentages as units of size for layout elements\n\taltering text size, line length and line height for different viewport dimensions\n\tproviding higher resolution images for devices with greater pixel densities\n\tfluid layout grids, flexible images and responsive design\n\n\nAll can help relate the presentation of the content to its delivery in a certain context.\n\nBut how do we determine the relationship between one element of a layout and another? How can we avoid making arbitrary decisions about the relative sizes of parts of our designs? What can we use to connect the parts of our design to one another, and how can we bind the presentation of the content to the user\u2019s device?\n\nTim Brown\u2019s application of modular typographic scales hints at an answer. In the very useful tool he created for calculating such scales, Brown includes two musical ratios: the perfect fifth (2:3); and the perfect fourth (3:4). Why? Where do they come from? And what do they mean?\n\nHarmonies musical and visual\n\nFundamental to music are rhythm and harmony.\n\nAs any drummer will tell you, without rhythm there is no music. Even when there\u2019s no regular beat, any tune follows a rhythm, however irregular, simply because a change of note is a point of change in the music. Although rhythm, timing and pacing are all relevant to interaction design, right now it\u2019s harmony we\u2019re interested in.\n\nSometimes harmony is called the vertical aspect of music, and melody the horizontal. But this conceit overlooks the fact that harmony is both vertical and horizontal. 
A single melodic line, as it is played, implies various sets of harmonies on which it is grounded, whether or not those harmonies are played. So, harmony doesn\u2019t just sit vertically beneath the horizontal melody, but moves horizontally as well, through harmonic progression.\n\nTo stretch this arrangement pixel-thin, we could argue that in onscreen design melody is the content, and the layout and arrangement of the content is the harmony. We sometimes say a design is harmonious when the interplay of different elements of a design is pleasing or balanced or in proportion, and the content (the melody) is set off or conveyed well by or appropriate to the design.\n\nWe seem to know instinctively whether a layout is harmonious\u2026\n\nIn the design of The Great Discontent, the relationships between different elements combine to form a balanced whole.\n\n\u2026or not.\n\nThere\u2019s no harmony in the Department for Education\u2019s website because the different parts of the content don\u2019t feel related to one another.\n\nWhat is it that makes one design harmonious and another dissonant? It\u2019s not just whether things line up, though that\u2019s a start. I believe there are much deeper aesthetic forces at work, forces we can tap into in our onscreen designs. Now, I\u2019m not going start a difficult discussion about aesthetics. For our purposes, we just need to know that it\u2019s the branch of philosophy dealing with the nature of beauty, and the creation and perception of beauty. And among the key components in the perception of beauty are harmony and proportion. These have been part of traditional western aesthetics since Plato (about 2,500 years).\n\nOne of the ways we appreciate the beauty of music is through the harmonic intervals we hear. A musical interval is a combination of two notes and it describes the distance between the two pitches. For example, the distance between C and the G above it (if we take C as the tonic or root) is called a perfect fifth.\n\nLeft: C to G, a perfect fifth. Right: C and G, not a perfect fifth.\n\n \n\nAnd, to get superficially scientific for a moment, each musical interval can be expressed as a ratio of the wavelength frequencies of the notes; for our perfect fifth, with every two wavelengths of C, there are three of G. And what is a ratio, if not a proportion of one thing to another?\n\nSo, simple musical harmony (using what\u2019s known as just intonation1) affords us a set of proportions, expressed as ratios. 
Where better to apply these ideas of harmony and proportion from music in web design, than grid systems?\n\nA digression: whither \u03c6?\n\nQuite often in our discussions of pure design and aesthetics, we mention the golden ratio and regurgitate the same justifications for its use: roots in antiquity; embodied in classical and Renaissance architecture and art; occurrence in nature; the New Twitter, and so forth (oh, really?).\n\nYet the ratios of musical intervals from just intonation are equally venerable and much more widespread: described by Pythagorus; employed in Palladian architecture, and printing, books and art from the Renaissance onwards; in modern times, film and television dimensions; standard international paper sizes (ISO 216, the A and B series); and, again and again, screen dimensions \u2013 chances are that screen you\u2019re probably looking at right now has the proportions 2:3 (iPhone and iPod Touch), 3:4 (iPad and Kindle), 3:5 (many smartphones), 5:8 or 16:9 (many widescreen monitors), all ratios of musical intervals.\n\nBack to our theme\u2026\n\nMusical interval ratios\n\nLet\u2019s take a look at most of the ratios within a couple of octaves and crunch some numbers to generate some percentages and other values that we can use in our designs. First, the intervals and their ratios in just intonation and expressed as ratios of one:\n\n\n\t\t\n\t\t\tName \n\t\t\tInterval in C \n\t\t\tRatio \n\t\t\tRatio (1:x) \n\t\t\n\t\t\n\t\t\t unison \n\t\t\t C\u2192C \n\t\t\t 1:1 \n\t\t\t 1:1 \n\t\t\n\t\t\n\t\t\t minor second \n\t\t\t C\u2192D\u266d \n\t\t\t 15:16 \n\t\t\t 1:1.067 \n\t\t\n\t\t\n\t\t\t major second \n\t\t\t C\u2192D \n\t\t\t 8:9 \n\t\t\t 1:1.125 \n\t\t\n\t\t\n\t\t\t minor third \n\t\t\t C\u2192E\u266d \n\t\t\t 5:6 \n\t\t\t 1:1.2 \n\t\t\n\t\t\n\t\t\t major third \n\t\t\t C\u2192E \n\t\t\t 4:5 \n\t\t\t 1:1.25 \n\t\t\n\t\t\n\t\t\t perfect fourth \n\t\t\t C\u2192F \n\t\t\t 3:4 \n\t\t\t 1:1.333 \n\t\t\n\t\t\n\t\t\t augmented fourth \nor diminished fifth \n\t\t\t C\u2192F\u266f/G\u266d \n\t\t\t 1:\u221a2 \n\t\t\t 1:1.414 \n\t\t\n\t\t\n\t\t\t perfect fifth \n\t\t\t C\u2192G \n\t\t\t 2:3 \n\t\t\t 1:1.5 \n\t\t\n\t\t\n\t\t\t minor sixth \n\t\t\t C\u2192A\u266d \n\t\t\t 5:8 \n\t\t\t 1:1.6 \n\t\t\n\t\t\n\t\t\t major sixth \n\t\t\t C\u2192A \n\t\t\t 3:5 \n\t\t\t 1:1.667 \n\t\t\n\t\t\n\t\t\t minor seventh \n\t\t\t C\u2192B\u266d \n\t\t\t 9:16 \n\t\t\t 1:1.778 \n\t\t\n\t\t\n\t\t\t major seventh \n\t\t\t C\u2192B \n\t\t\t 8:15 \n\t\t\t 1:1.875 \n\t\t\n\t\t\n\t\t\t octave \n\t\t\t C\u2192C\u2191 \n\t\t\t 1:2 \n\t\t\t 1:2 \n\t\t\n\t\t\n\t\t\t major tenth \n\t\t\t C\u2192E\u2191 \n\t\t\t 2:5 \n\t\t\t 1:2.5 \n\t\t\n\t\t\n\t\t\t major eleventh \n\t\t\t C\u2192F\u2191 \n\t\t\t 3:8 \n\t\t\t 1:2.667 \n\t\t\n\t\t\n\t\t\t major twelfth \n\t\t\t C\u2192G\u2191 \n\t\t\t 1:3 \n\t\t\t 1:3 \n\t\t\n\t\t\n\t\t\t double octave \n\t\t\t C\u2192C\u2191 \n\t\t\t 1:4 \n\t\t\t 1:4 \n\t\t\n\t\t\n\t\t\tName \n\t\t\tInterval in C \n\t\t\tRatio \n\t\t\tRatio (1:x) \n\t\t\n\n\nAnd now as percentages, of both the larger and smaller values in the ratios:\n\n\n\t\t\n\t\t\tName \n\t\t\tRatio \n\t\t\t% of larger value \n\t\t\t% of smaller value \n\t\t\n\t\t\n\t\t\t unison \n\t\t\t 1:1 \n\t\t\t 100% \n\t\t\t 100% \n\t\t\n\t\t\n\t\t\t minor second \n\t\t\t 15:16 \n\t\t\t 93.75% \n\t\t\t 106.667% \n\t\t\n\t\t\n\t\t\t major second \n\t\t\t 8:9 \n\t\t\t 88.889% \n\t\t\t 112.5% \n\t\t\n\t\t\n\t\t\t minor third \n\t\t\t 5:6 \n\t\t\t 83.333% \n\t\t\t 120% \n\t\t\n\t\t\n\t\t\t major third \n\t\t\t 4:5 \n\t\t\t 80% \n\t\t\t 
125% \n\t\t\n\t\t\n\t\t\t perfect fourth \n\t\t\t 3:4 \n\t\t\t 75% \n\t\t\t 133.333% \n\t\t\n\t\t\n\t\t\t augmented fourth \nor diminished fifth \n\t\t\t 1:\u221a2 \n\t\t\t 70.711% \n\t\t\t 141.421% \n\t\t\n\t\t\n\t\t\t perfect fifth \n\t\t\t 2:3 \n\t\t\t 66.667% \n\t\t\t 150% \n\t\t\n\t\t\n\t\t\t minor sixth \n\t\t\t 5:8 \n\t\t\t 62.5% \n\t\t\t 160% \n\t\t\n\t\t\n\t\t\t major sixth \n\t\t\t 3:5 \n\t\t\t 60% \n\t\t\t 166.667% \n\t\t\n\t\t\n\t\t\t minor seventh \n\t\t\t 9:16 \n\t\t\t 56.25% \n\t\t\t 177.778% \n\t\t\n\t\t\n\t\t\t major seventh \n\t\t\t 8:15 \n\t\t\t 53.333% \n\t\t\t 187.5% \n\t\t\n\t\t\n\t\t\t octave \n\t\t\t 1:2 \n\t\t\t 50% \n\t\t\t 200% \n\t\t\n\t\t\n\t\t\t major tenth \n\t\t\t 2:5 \n\t\t\t 40% \n\t\t\t 250% \n\t\t\n\t\t\n\t\t\t major eleventh \n\t\t\t 3:8 \n\t\t\t 37.5% \n\t\t\t 266.667% \n\t\t\n\t\t\n\t\t\t major twelfth \n\t\t\t 1:3 \n\t\t\t 33.333% \n\t\t\t 300% \n\t\t\n\t\t\n\t\t\t double octave \n\t\t\t 1:4 \n\t\t\t 25% \n\t\t\t 400% \n\t\t\n\t\t\n\t\t\tName \n\t\t\tRatio \n\t\t\t% of larger value \n\t\t\t% of smaller value \n\t\t\n\n\nAs you can see, the simple musical intervals are expressed as ratios of small whole numbers (integers). We can then normalize them as ratios of one, as well as derive percentage values, both in terms of the smaller value to the larger, and vice versa. These are the numbers we can incorporate into our designs. If you\u2019ve ever written something like body { font: 100%/1.5 \"Museo Sans\", Helvetica, sans-serif; } in your CSS, you\u2019re already using a musical ratio: the perfect fifth.\n\nModular scales allow us to generate a set of numbers based on a musical interval that can be used for all kinds of typographic and layout decisions to create harmony in a visual design for the web. As Tim Brown said at the 2010 Build conference:\n\n\n\tI think that from that most atomic unit \u2013 type \u2013 whole experiences can resonate, whole experiences can be harmonious. And whole experiences can have a purpose suited to our design intentions.\n\n\nOnce more, with feeling: connectedness\n\nAs well as modular scales, there are other methods of incorporating musical interval ratios into our work. Setting the ratio of font size to line height in CSS is one such example. We could also create a typographic hierarchy using the same principle and combining several ratios that we know harmonize well musically in a chord:\n\nbody { font-size: 75%; } /* =12px = base size or tonic */\n\nh1 { font-size: 32px; font-size: 2.667rem; }\n /* =32px = 3:8 = major eleventh (C\u2192F\u2191) */\n\nh2 { font-size: 24px; font-size: 2rem; }\n /* =24px = 1:2 = octave (C\u2192C\u2191) */\n\nh3 { font-size: 20px; font-size: 1.667rem; }\n /* =20px = 3:5 = major sixth (C\u2192A) */\n\nfigcaption, small { font-size: 9px; font-size : 0.75rem }\n /* =9px = 3:4 = perfect fourth (C\u2192F) */\n\nWhoa! Hold your reindeer, Santa! How can we know what interval combinations work well together to form chords? Well, I\u2019m a classically trained musician, so perhaps I have an advantage. 
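(If you\u2019re not, the arithmetic is straightforward: pick a base size and multiply it by the ratios of a combination that works. Taking the 12px base from above and the first combination in the list below, purely as an illustration:\n\n12 \u00d7 1 = 12px (unison \u2013 the tonic)\n12 \u00d7 1.25 = 15px (major third, 4:5)\n12 \u00d7 1.5 = 18px (perfect fifth, 2:3)\n12 \u00d7 2 = 24px (octave, 1:2)\n\nWhich combinations work is the next question.)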
To avoid a long, technically complex digression into musical harmony, here are a few basic combinations of intervals that are harmonious in one way or another:\n\n\n\tunison; major third; perfect fifth; octave\n\tunison; perfect fourth; major sixth; octave\n\tunison; minor third; minor sixth; octave\n\tunison; minor third; diminished fifth; major sixth; octave\n\n\nThis isn\u2019t to say that other combinations can\u2019t be used to interesting effect and particular purpose \u2013 they surely can \u2013 but I have to make sure there\u2019s something left for you to experiment with in the wee small hours over the holiday. Bear in mind, though, were I to play you two notes from the same scale to form a minor second, for example, you\u2019d probably say it was dissonant and maybe that quality of the 15:16 ratio would be translated to the design.\n\n \n\nIn the typographic hierarchy above, you\u2019ll notice I used an interval in the higher octave, which affords a broader range of ratios while retaining the harmony. Thus, a perfect fifth (2:3) becomes a major twelfth (1:3), or a major sixth (3:5) becomes a major thirteenth (3:10).\n\nThe harmonic ratios can obviously be used as proportions for layout as well, in several different ways:\n\n\n\timage width and height (for example, 450\u00d7800px = 9:16 = minor seventh)\n\tmain content to page width (67%:100% = 2:3 = perfect fifth)\n\tpage width to viewport width (80%:100% = 4:5 = major third)\n\n\nOne great benefit of using such ratios in web design work is that they can be applied in responsive web design. Proportional values, based on percentages or equivalent em units, will scale with changing viewports, so your layout and image proportions can be maintained or deliberately changed, as we\u2019re about to find out, across devices.\n\nSmall speakers, tall speakers: binding to the device\n\nThe musical interval ratios also provide an opportunity, not only to create connectedness between the parts of a layout, but to bind the content to a device \u2013 that elusive antiphonal geometry. Just as a textblock and page resonate together, so too can web content and the screen. Earlier, I mentioned that several common screen aspect ratios match musical interval ratios. It would seem, then, that we have a set of proportions that we can use in different ways to establish and retain a sense of harmony that can be based on and change with those contexts. Using musical interval ratios, we can bind the display of our content to the device it\u2019s displayed on.\n\nIf you haven\u2019t met already, let me introduce you to the device-aspect-ratio property of CSS media queries.\n\n@media only screen and (device-aspect-ratio: 3/4) { }\n@media only screen and (device-aspect-ratio: 480/640) { }\n@media only screen and (device-aspect-ratio: 600/800) { }\n@media only screen and (device-aspect-ratio: 768/1024) { }\n\nRegardless of the precise pixel values, each of these media queries would apply to devices whose display area has an aspect ratio of 3:4. It works by comparing the device-width with the device-height. (It\u2019s not to be confused with aspect-ratio, which is defined by the width and height of the browser within the device.) 
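A quick sketch of the difference, with illustrative values only:\n\n/* matches while the browser window itself is 4:3 \u2013 resize it and the match can change */\n@media only screen and (aspect-ratio: 4/3) { }\n\n/* matches when the device\u2019s screen is 4:3, whatever size the window happens to be */\n@media only screen and (device-aspect-ratio: 4/3) { }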
The values in the media query are always presented as width/height, with portrait being the default orientation for smartphones and tablets; that is, to match an iPhone screen, you\u2019d use device-aspect-ratio: 2/3, not 3/2, which won\u2019t work.\n\nHere\u2019s a table of the musical intervals with their corresponding screens.\n\n\n\t\t\n\t\t\tName \n\t\t\tdevice-aspect-ratio \n\t\t\tScreens \n\t\t\tCommon resolutions (pixels) \n\t\t\n\t\t\n\t\t\t major third \n\t\t\t 5/4 \n\t\t\t TFT LCD computer screens \n\t\t\t 1,280\u00d71,024 \n\t\t\n\t\t\n\t\t\t perfect fourth \n\t\t\t 3/4 or 4/3 \n\t\t\t iPad, Kindle and other tablets, PDAs \n\t\t\t 320\u00d7240, 768\u00d71,024 \n\t\t\n\t\t\n\t\t\t perfect fifth \n\t\t\t 2/3 \n\t\t\t iPhone, iPod Touch \n\t\t\t 320\u00d7480, 640\u00d7960 \n\t\t\n\t\t\n\t\t\t minor sixth \n\t\t\t 8/5 (16/10) \n\t\t\t Many widescreens \n\t\t\t 1,152\u00d7720, 1,440\u00d7900, 1,920\u00d71,200 \n\t\t\n\t\t\n\t\t\t major sixth \n\t\t\t 3/5 \n\t\t\t Many smartphones \n\t\t\t 240\u00d7400, 480\u00d7800 \n\t\t\n\t\t\n\t\t\t minor seventh \n\t\t\t 16/9 or 9/16 \n\t\t\t Many widescreens and some smartphones \n\t\t\t 720\u00d71,280, 1,366\u00d7768, 1,920\u00d71,080, 2,560\u00d71,440 \n\t\t\n\n\n[You might argue that I\u2019m playing fast and loose with the ratios. I suppose, mathematically speaking, 9:16 is not the same as 16:9: I\u2019m no expert. But let\u2019s not throw the baby out with the bath water, particularly at Christmas.]\n\nWith this in mind, we can begin to write media queries that will influence various typographic and layout values in line with the aspect ratios of specific screens and in combination with em-based min-width queries that work from smaller, mobile screens to larger, desktop screens.\n\nHere\u2019s a very simple demo page with only some text, an image with a caption and a little basic layout: no seasonal overindulgence here.\n\nDemo: Sample page using device-aspect-ratio media queries based on musical interval ratios\n\nOur initial styles for all devices are based on the perfect fifth, with the major third and octave rounding things out into a harmonious whole, whether or not media queries are supported. For example:\n\nhtml { font-size: 100%; line-height: 1.5; }\n /* font-size:line-height = 16:24 = 2:3 = perfect fifth */\n\nh1 { font-size: 32px; font-size: 2rem; line-height: 1.25; }\n /* font-size:line-height = 32:40 = 4:5 = major third\n body:h1 = 16:32 = 1:2 = octave */\n\nWhile we should really consider methods of delivering images appropriate to the screen size, let\u2019s just stick to a single image for all devices. But why don\u2019t we change its aspect ratio from 4:3 to 3:2, to fit with our harmonic scheme? 
It\u2019s easy enough to do with overflow:hidden on the <figure> element to hide a part of the image, and a negative margin fudge:\n\nfigure img { margin: -8.5% 0 0 0; width: 100%; max-width: 100%; }\n\nOur first break point targets devices 320 pixels wide with an aspect ratio of 2:3, namely the iPhone and iPod Touch:\n\n/* 320px = 20\u00d716 */\n@media only screen and (min-width: 20em) and (device-aspect-ratio: 2/3) { }\n\nWe\u2019re actually already there, of course, as the intervals we\u2019ve chosen resonate with this aspect ratio \u2013 the content is already bound to the device.\n\nOur next media query, then, will make some changes to match a different ratio, the major sixth (3:5), which is same as that of many smartphones:\n\n/* 480px = 30\u00d716 */\n@media only screen and (min-width: 30em) and (device-aspect-ratio: 3/5) { }\n\nA different aspect ratio might require a change in harmony. For devices with these proportions, we\u2019ll now use the perfect fourth (3:4) and the major sixth (3:5) along with the octave we already have to create a new resonating harmony. For instance, a slightly wider screen means we can increase the line-height to aid the legibility of longer lines:\n\nhtml { line-height: 1.667; }\n /* font-size:line-height = 16:26.672 = 3:5 = major sixth */\n\nh1 { font-size: 32px; font-size: 2rem; line-height: 1.667; }\n /* font-size:line-height = 32:53.333 = 3:5 = major sixth\n body:h1 = 16:32 = 1:2 = octave */\n\nand we can remove the negative margin to display our 4:3 image in its entirety.\n\nEach screen displays content styled using relationships related to its own proportions. On the left, an iPhone 4 (2:3); on the right, a Samsung Nexus S (3:5). Your mileage may vary.\n\nAnother device, another media query. At 768 pixels, screens are wide enough to add columns. The ratios we\u2019ve used for the 3:5 screens include the perfect fourth (3:4) so we don\u2019t need to change any of the font measurements, but we can base the proportions of the columns on the major sixth interval:\n\narticle { float: left; width: 56%; }\n /* width of main column 3:5 (60% of 100%, major sixth)\n incorporating gutter width */\n\naside { float : right; width : 36%; }\n\nOn devices with a 3:4 aspect ratio, this works even better in landscape orientation.\n\nWhile not every screen over 768 pixels wide will have 3:4 proportions, the range of intervals informing the design ensure harmonious relationships between the different parts of the layout.\n\nFor wide screens proper (break point at 1,280 pixels) we can employ a new set of harmonious intervals. Many laptop and desktop screens have a 16:10 aspect ratio, which boils down to 8:5, equivalent to the minor sixth (5:8). Combined with a minor third (5:6) and the octave (1:2), this creates a new harmony appropriate to these devices. 
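Following the pattern of the earlier break points, the media query for these screens might look something like this \u2013 a sketch, with values assumed rather than taken from the demo:\n\n/* 1,280px = 80\u00d716 */\n@media only screen and (min-width: 80em) and (device-aspect-ratio: 8/5) { }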
Let\u2019s increase the font size and change the image\u2019s aspect ratio to match:\n\nhtml { font-size: 120%; line-height: 1.6; }\n /* font-size increased for wider screens from 16px to 19.2px\n (5:6 = minor third)\n font-size:line-height = 19.2:30.72 = 5:8 = minor sixth */\n\nfigure img { margin: -12.5% 0 0 ; }\t\n /* using -ve margin combined with overflow:hidden\n on the figure element\n to crop the image from 4:3 to 8:5 = minor sixth */\n\nA wide screen with a 8:5 (16:10) aspect ratio and an image to match.\n\nWith more pixels at our disposal, we can also now use the musical interval ratios to determine the width of the layout, and change the column proportions as well:\n\nsection { margin: 0 auto; width: 83.333%; }\n /* content width:screen width = 5:6 = minor third */\n\narticle { width: 60%; }\n /* width of main column 5:8 (62.5% of 100%, minor sixth)\n incorporating gutter width */\n\naside { width: 35%; }\n\nWith some carefully targeted media queries, we can begin to reach towards fulfilling the second and third tenets of Boulton\u2019s new canon for web design: connecting the parts of content through relationships embodied in the layout design; and binding the content to the devices people use to access it.\n\nCoda\n\nMusical interval ratios and screen aspect ratios reveal more than convenient correspondence. These proportions work on a deep aesthetic level. Much is claimed for the golden ratio \u03c6, but none of the screens pervading our lives use it. Perhaps that\u2019s an accident of technology, but can making screens to \u03c6\u2019s proportions be more difficult or expensive than 2:3 or 3:4 or 16:10? Here, then, is not just one but a set of proportions with a uniquely human focus, originating in nature, recognized in antiquity, fundamental still.\n\nWe find music to be an art steeped with meaning, yet, unlike literary and representational arts, purely instrumental music has no obvious semantic content. It boasts an ability to express emotions while remaining an abstract art in some sense, which makes it very like design. These days, we\u2019re rightly encouraged to design for emotion, to make our users\u2019 experience meaningful, seductive, delightful. Using musical ideas and principles in our designs can help achieve those ends.\n\nLet\u2019s not be na\u00efve, of course; designing web pages is even less like composing music than it\u2019s like designing for print. In visual design, the eye will always be sovereign to the ear; following these principles will only get us so far. We cannot truly claim that a carefully composed web page layout will have the same qualities and effect as any musical patterns that inform it. In music, a set of intervals is always harmonious in relation to other sets of intervals: music rarely stands still. What aspect ratios will future screens take? Already today there is great variation in devices and support for media queries (and within that, support for device-aspect-ratio). And what of non-western musical traditions? Or rhythm, form, tempo and dynamics? What I\u2019ve demonstrated above is only a suggestion, a tentative exploration of one possible way forward.\n\nBut as our discipline matures and we become more articulate about what we do, we must look longer and deeper into areas of human endeavour already rich with value. 
Music is a fertile ground to explore and has the potential to yield up new approaches for web design.\n\nFootnotes\n\n \n Just intonation is a system of tuning that uses small integers to describe the musical intervals, based initially on the perfect fifth, that most consonant of intervals. Simple instruments such as vibrating strings and natural horns, as well as unaccompanied voices, tend to fall into just intonation naturally.", "year": "2011", "author": "Owen Gregory", "author_slug": "owengregory", "published": "2011-12-09T00:00:00+00:00", "url": "https://24ways.org/2011/composing-the-new-canon/", "topic": "design"} {"rowid": 269, "title": "Adaptive Images for Responsive Designs\u2026 Again", "contents": "When I was asked to write an article for 24 ways I jumped at the chance, as I\u2019d been wanting to write about some fun hacks for responsive images and related parsing behaviours. My heart sank a little when Matt Wilcox beat me to the subject, but it floated back up when I realized I disagreed with his method and still had something to write about.\n\nSo, Matt Wilcox, if that is your real name (and I\u2019m pretty sure it is), I disagree. I see your dirty server-based hack and raise you an even dirtier client-side hack. Evil laugh, etc., etc.\n\nYou guys can stomach yet another article about responsive design, right? Right?\n\nHalf the room gets up to leave\n\nWhoa, whoa\u2026 OK, I\u2019ll cut to the chase\u2026\n\nTL;DR\n\nIn a previous episode, we were introduced to Debbie and her responsive cat poetry page. Well, now she\u2019s added some reviews of cat videos and some images of cats. Check out her new page and have a play around with the browser window. At smaller widths, the images change and the design responds. The benefits of this method are:\n\n\n\tit\u2019s entirely client-side\n\timages are still shown to users without JavaScript\n\tyour media queries stay in your CSS file\n\tno repetition of image URLs\n\tno extra downloads per image\n\tit\u2019s fast enough to work on resize\n\tit\u2019s pure filth\n\n\nWhat\u2019s wrong with the server-side solution?\n\nResponsive design is a client-side issue; involving the server creates a boatload of problems.\n\n\n\tIt sets a cookie at the top of the page which is read in subsequent requests. However, the cookie is not guaranteed to be set in time for requests on the same page, so the server may see an old value or no value at all.\n\tServing images via server scripts is much slower than plain old static hosting.\n\tThe URL can only cache with vary: cookie, so the cache breaks when the cookie changes, even if the change is unrelated. Also, far-future caching is out for devices that can change width.\n\tIt depends on detecting screen width, which is rather messy on mobile devices.\n\tResponding to things other than screen width (such as DPI) means packing more information into the cookie, and a more complicated script at the top of each page.\n\n\nSo, why isn\u2019t this straightforward on the client?\n\nClient-side solutions to the problem involve JavaScript testing user agent properties (such as screen width), looping through some images and setting their URLs accordingly. However, by the time JavaScript has sprung into action, the original image source has already started downloading. If you change the source of an image via JavaScript, you\u2019re setting off yet another request.\n\nImages are downloaded as soon as their DOM node is created. 
They don\u2019t need to be visible, they don\u2019t need to be in the document.\n\nnew Image().src = url\n\nThe above will start an HTTP request for url. This is a handy trick for quick requests and preloading, but also shows the browser\u2019s eagerness to download images.\n\nHere\u2019s an example of that in action. Check out the network tab in Web Inspector (other non-WebKit development aids are available) to see the image requests.\n\nBecause of this, some client-side solutions look like this:\n\n<img src=\"t.gif\" data-src=\"real-image.jpg\" data-bigger-src=\"real-bigger-image.jpg\">\n\nwhere t.gif is a 1\u00d71px tiny transparent GIF.\n\nThis results in no images if JavaScript isn\u2019t available. Dealing with the absence of JavaScript is still important, even on mobile. I was recently asked to make a website work on an old Blackberry 9000. I was able to get most of the way there by preventing that OS parsing any JavaScript, and that was only possible because the site didn\u2019t depend on it.\n\nWe need to delay loading images for JavaScript users, but ensure they load for users without JavaScript. How can we conditionally parse markup depending on JavaScript support?\n\nOh yeah! <noscript>!\n\n<noscript>\n <img src=\"image.jpg\">\n</noscript>\n\nWhoa! First spacer GIFs and now <noscript>? This really is the future! The image above will only load for users without JavaScript support. Now all we need to do is send JavaScript in there to get the textContent of the <noscript> element, then we can alter the image source before handing it to the DOM for parsing.\n\nHere\u2019s an example of that working \u2026 unless you\u2019re using Internet Explorer.\n\nInternet Explorer doesn\u2019t retain the content of <noscript> elements. As soon as it\u2019s parsed, it considers it an empty element. FANKS INTERNET EXPLORER. This is why some solutions do this:\n\n<noscript data-src=\"image.jpg\">\n <img src=\"image.jpg\">\n</noscript>\n\nso JavaScript can still get at the URL via the data-src attribute. However, repeating stuff isn\u2019t great. Surely we can do better than that.\n\nA dirty, dirty hack\n\nThankfully, I managed to come up with a solution, and by me, I mean someone cleverer than me. Pornel\u2019s solution uses <noscript>, but surely we don\u2019t need that.\n\nNow, before we look at this, I can\u2019t stress how dirty it is. It\u2019s so dirty that if you\u2019ve seen it, schools will refuse to employ you.\n\n<script>document.write('<' + '!--')</script>\n<img src=\"image.jpg\">\n<!---->\n\nPhwoar! Dirty, isn\u2019t it? I\u2019ll stop for a moment, so you can go have a wash.\n\nDone? Excellent.\n\nWith this, the image is wrapped in a comment only for users with JavaScript. Without JavaScript, we get the image. Unlike the <noscript> example above, we can get the text content of the comment pretty easily.\n\nHurrah! But wait\u2026 Some browsers are sometimes downloading the image, even with JavaScript enabled. Notably Firefox. Huh?\n\nImages are downloaded in comments now? What?\n\nNo. What we\u2019re seeing here is the effect of speculative parsing. Here\u2019s what\u2019s happening:\n\n\n\nWhile the browser is parsing the script, it parses the rest of the document. This is usually a good thing, as it can download subsequent images and scripts without waiting for the script to complete. 
The problem here is we create an unbalanced tree.\n\n An unbalanced tree, yesterday.\n\nIn this case, the browser must throw away its speculative parsing and reparse from the end of the <script> element, taking our document.write into consideration. Unfortunately, by this stage it may have already discovered the image and sent an HTTP request for it.\n\nA dirty, dirty hack\u2026 that works\n\nPornel was right: we still need the <noscript> element to cater for browsers with speculative parsing.\n\n<script>document.write('<' + '!--')</script><noscript>\n <img src=\"image.jpg\">\n</noscript -->\n\nAnd there we have it. We can now prevent images loading for users with JavaScript, but we can still get at the markup.\n\nWe\u2019re still creating an unbalanced tree and there\u2019s a performance impact in that. However, the parser won\u2019t have got far by the time our script executes, so the impact is small. Unbalanced trees are more of a concern for external scripts; a lot of parsing can happen by the time the script has downloaded and parsed.\n\nUsing dirtiness to create responsive images\n\nNow all we need to do is give each of our dirty scripts a class name, then JavaScript can pick them up, grab the markup from the comment and decide what to do with the images.\n\nThis technique isn\u2019t exclusively useful for responsive images. It could also be used to delay images loading until they\u2019ve scrolled into view. But to do that you\u2019ll need a bulletproof way of detecting when elements are in view. This involves getting the height of the viewport, which is extremely unreliable on mobile devices.\n\nHere\u2019s a hastily thrown together example showing how it can be used for responsive images.\n\nI adjust the end of the image URLs conditionally depending on the result of media queries. This is done on page load, and on resize.\n\nI\u2019m using regular expressions to alter the URLs. Using regex to deal with HTML is usually a sign of insanity, but parsing it with the browser\u2019s DOM parser would trigger the download of images before we change the URLs. My implementation currently requires double-quoted image URLs, because I\u2019m lazy. Wanna fight about it?\n\nMedia querying via JavaScript\n\nJeremy Keith used document.documentElement.clientWidth in his example, which is great as a proof of concept, but unfortunately is rather unreliable across mobile devices.\n\nThankfully, standards are coming to the rescue with window.matchMedia, which lets us provide a media query string and get a boolean result. There\u2019s even a great polyfill for browsers that don\u2019t support it (as long as they support media queries in CSS).\n\nI didn\u2019t go with that for three reasons:\n\n\n\tI\u2019d like to keep media queries in the CSS file, if possible.\n\tI wanted something a little lighter to keep things speedy while resizing.\n\tIt\u2019s just not dirty enough yet.\n\n\nTo make things ultra-dirty, I add a test element to the page with a specific class, let\u2019s say media-test. Then, I control the width of it using media queries in my CSS file:\n\n@media all and (min-width: 640px) {\n .media-test {\n width: 1px;\n }\n}\n@media all and (min-width: 926px) {\n .media-test {\n width: 2px;\n }\n}\n\nThe JavaScript part changes the URL suffix depending on the width of media-test. I\u2019m using a min-width media query, but you can use others such as pixel-ratio to detect high DPI displays. Basically, it\u2019s a hacky way for CSS to set a value that can be picked up by JavaScript. 
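A minimal sketch of the reading side \u2013 not the script from the demo, and the suffix values are simply borrowed from the API example below:\n\nvar test = document.querySelector('.media-test');\nvar width = parseInt(window.getComputedStyle(test).width, 10);\n// 1px and 2px come from the media queries above; anything else falls back to mobile\nvar suffix = { '1': '-desktop.jpg', '2': '-large-desktop.jpg' }[width] || '-mobile.jpg';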
It means the bit that signals changes to the images sits with the rest of the responsive code, without duplication.\n\nAlso, phwoar, dirty!\n\nThe API\n\nI threw a script together to demonstrate the technique. I\u2019m not particularly attached to it, I\u2019m not even sure I like it, but here\u2019s the API:\n\nresponsiveGallery({\n // Class name of dirty script element(s) to target\n scriptClass: 'dirty-gallery-script',\n // Class name for our test element\n testClass: 'dirty-gallery-test',\n // The initial suffix of URLs, the bit that changes.\n initialSuffix: '-mobile.jpg',\n // A map of suffixes, for each width of 'dirty-gallery-test'\n suffixes: {\n '1': '-desktop.jpg',\n '2': '-large-desktop.jpg',\n '3': '-mobile-retina.jpg'\n }\n});\n\nThe API can cover individual images or multiple galleries at once. In the example I gave at the start of the article I make two calls to the API, one for both galleries, and one for the single image above the video reviews. They\u2019re separate calls because they respond slightly differently.\n\nThe future\n\nHopefully, we\u2019ll get a proper solution to this soon. My favourite suggestion is the <picture> element that Bruce Lawson covers.\n\n<picture alt=\"Angry pirate\">\n <source src=\"hires.png\" media=\"min-width:800px\">\n <source src=\"midres.png\" media=\"min-width:480px\">\n <source src=\"lores.png\">\n <!-- fallback for browsers without support -->\n <img src=\"midres.png\" alt=\"Angry pirate\">\n</picture>\n\nUnfortunately, we\u2019re nowhere near that yet, and I\u2019d still rather have my media queries stay in CSS. Perhaps the source elements could be skipped if they\u2019re display:none; then they could have class names and be controlled via CSS. Sigh.\n\nWell, I\u2019m tired of writing now and I\u2019m sure you\u2019re tired of reading. I realize what I\u2019ve presented is a yet another dirty hack to the responsive image problem (perhaps the dirtiest?) and may be completely unfeasible in professional situations. But isn\u2019t that the true spirit of Christmas?\n\nNo.", "year": "2011", "author": "Jake Archibald", "author_slug": "jakearchibald", "published": "2011-12-08T00:00:00+00:00", "url": "https://24ways.org/2011/adaptive-images-for-responsive-designs-again/", "topic": "ux"} {"rowid": 282, "title": "Front-end Style Guides", "contents": "We all know that feeling: some time after we launch a site, new designers and developers come in and make adjustments. They add styles that don\u2019t fit with the content, use typefaces that make us cringe, or chuck in bloated code. But if we didn\u2019t leave behind any documentation, we can\u2019t really blame them for messing up our hard work.\n\nTo counter this problem, graphic designers are often commissioned to produce style guides as part of a rebranding project. A style guide provides details such as how much white space should surround a logo, which typefaces and colours a brand uses, along with when and where it is appropriate to use them.\n\nDesign guidelines\n\nSome design guidelines focus on visual branding and identity. The UK National Health Service (NHS) refer to theirs as \u201cbrand guidelines\u201d. They help any designer create something such as a trustworthy leaflet for an NHS doctor\u2019s surgery. 
Similarly, Transport for London\u2019s \u201cdesign standards\u201d ensure the correct logos and typefaces are used in communications, and that they comply with the Disability Discrimination Act.\n\nSome guidelines go further, encompassing a whole experience, from the visual branding to the messaging, and the icon sets used. The BBC calls its guidelines a \u201cGlobal Experience Language\u201d or GEL. It\u2019s essential for maintaining coherence across multiple sites under the same BBC brand.\n\n\nThe BBC\u2019s Global Experience Language.\n\nDesign guidelines may be brief and loose to promote creativity, like Mozilla\u2019s \u201cbrand toolkit\u201d, or be precise and run to many pages to encourage greater conformity, such as Apple\u2019s \u201cHuman Interface Guidelines\u201d.\n\nWhatever name or form they\u2019re given, documenting reusable styles is invaluable when maintaining a brand identity over time, particularly when more than one person (who may not be a designer) is producing material.\n\nCode standards documents\n\nWe can make a similar argument for code. For example, in open source projects, where hundreds of developers are submitting code, it makes sense to set some standards. Drupal and Wordpress have written standards that make editing code less confusing for users, and more maintainable for contributors.\n\nEach community has nuances: Drupal requests that developers indent with two spaces, while Wordpress stipulates a tab. Whatever the rules, good code standards documents also explain why they make their recommendations.\n\nThe front-end developer\u2019s style guide\n\nDesign style guides and code standards documents have been a successful way of ensuring brand and code consistency, but in between the code and the design examples, web-based style guides are emerging. These are maintained by front-end developers, and are more dynamic than visual design guidelines, documenting every component and its code on the site in one place.\n\nHere are a few examples I\u2019ve seen in the wild:\n\nNatalie Downe\u2019s pattern portfolio\n\nNatalie created the pattern portfolio system while working at Clearleft. The phrase describes a single HTML page containing all the site\u2019s components and styles that can act as a deliverable.\n\n\nPattern portfolio by Natalie Downe for St Paul\u2019s School, kept up to date when new components are added. The entire page is about four times the length shown.\n\nEach different item within a pattern portfolio is a building block or module. The components are decoupled from the layout, and linearized so they can slot into anywhere on a page.\n\n\n\tThe pattern portfolio expresses every component and layout structure in the smallest number of documents. It sets out how the markup and CSS should be, and is used to illustrate the project\u2019s shared vocabulary.\n\n\tNatalie Downe\n\n\nBy developing a system, rather than individual pages, the result is flexible when the client wants to add more pages later on.\n\nPaul Lloyd\u2019s style guide\n\nPaul Lloyd has written an extremely comprehensive style guide for his site. Not only does it feature every plausible element, but it also explains in detail when it\u2019s appropriate to use each one.\n\n\nPaul\u2019s style guide is also great educational material for people learning to write code.\n\nOli Studholme\u2019s style guide\n\nEven though Oli\u2019s style guide is specific to his site, he\u2019s written it as though it\u2019s for someone else. 
It\u2019s exhaustive and gives justifications for all his decisions. In some places, he links to browser bug tickets and makes recommendations for cross-browser compatibility.\n\n\nOli has released his style guide under a Creative Commons Attribution Share-alike license, and encourages others to create their own versions.\n\nJeremy Keith\u2019s pattern primer\n\nFront-end style guides may have comments written in the code, annotations that appear on the page, or they may list components alongside their code, like Jeremy\u2019s pattern primer.\n\n\nYou can watch or fork Jeremy\u2019s pattern primer on Github.\n\n\n\nLinearizing components like this resembles a kind of mobile first approach to development, which Jeremy talks about on the 5by5 podcast: The Web Ahead 3.\n\nThe benefits of maintaining a front-end style guide\n\nIf you still need convincing that producing documentation like this for every project is worth the effort, here are a few nice side-effects to working this way:\n\nEasier to test\n\nA unified style guide makes it easier to spot where your design breaks. It\u2019s simple to check how components adapt to different screen widths, test for browser bugs and develop print style sheets when everything is on the same page. When I worked with Natalie, she\u2019d resize the browser window and bump the text size up and down during development to see if anything would break.\n\nBetter workflow\n\nThe approach also forces you to think how something works in relation to the whole site, rather than just a specific page, making it easier to add more pages later on. Starting development by creating a style guide makes a lot more sense than developing on a page-by-page basis.\n\nShared vocabulary\n\nNatalie\u2019s pattern portfolio in particular creates a shared vocabulary of names for components (teaser, global navigation, carousel\u2026), so a team can refer to different regions of the site and have a shared understanding of its meaning.\n\nUseful reference\n\nA combined style guide also helps designers and writers to see the elements that will be incorporated in the site and, therefore, which need to be designed or populated. A boilerplate list of components for every project can act as a reminder of things that may get missed in the design, such as button states or error messages. \n\nCreating your front-end style guide\n\nAs you\u2019ve seen, there are plenty of variations on the web style guide. Which method is best depends on your project and workflow. Let\u2019s say you want to show your content team how blockquotes and asides look, when it\u2019s appropriate to use them, and how to create them within the CMS. In this case, a combination of Jeremy\u2019s pattern primer and Paul\u2019s descriptive style guide \u2013 with the styled component alongside a code snippet and a description of when to use it \u2013 may be ideal. \n\nStart work on your style guide as soon as you can, preferably during the design stage:\n\n\n\tSimply presenting flat image comps is by no means enough\u2009-\u2009it\u2019s only the start\u2026 As layouts become more adaptable, flexible and context-specific, so individual components will become the focus of our design. It is therefore essential to get the foundational aspects of our designs right, and style guides allow us to do that.\n\n\tPaul Lloyd on Style guides for the Web \n\n\n\n\tPrint out the designs and label the unique elements and components you\u2019ll need to add to your style guide. Make a note of the purpose of each component. 
While you\u2019re doing this, identify the main colours used for things like links, headings and buttons.\nI draw over the print-outs on to tracing paper so I can make more annotations. Here, I\u2019ve started annotating the widths from the designer\u2019s mockup so I can translate these into percentages.\n\tStart developing your style guide with base styles that target core elements: headings, links, tables, blockquotes, ordered lists, unordered lists and forms. For these elements, you could maintain a standard document to reuse for every project.\n\tNext, add the components that override the base styles, like search boxes, breadcrumbs, feedback messages and blog comments. Include interaction styles, such as hover, focus and visited state on links, and hover, focus and active states on buttons.\n\tNow start adding layout and begin slotting the components into place. You may want to present each layout as a separate document, or you could have them all on the same page stacked beneath one another.\n\n\nDocument code practices\n\nCode can look messy when people use different conventions, so note down a standard approach alongside your style guide. For example, Paul Stanton has documented how he writes CSS.\n\nThe gift wrapping\n\nPresenting this documentation to your client may be a little overwhelming so, to be really helpful, create a simple page that links together all your files and explains what each of them do.\n\n\nThis is an example of a contents page that Clearleft produce for their clients. They\u2019ve added date stamps, subversion revision numbers and written notes for each file.\n\nEncourage participation\n\nThere\u2019s always a risk that the person you\u2019re writing the style guide for will ignore it completely, so make your documentation as user-friendly as possible. Justify why you do things a certain way to make it more approachable and encourage similar behaviour.\n\nAs always, good communication helps. Working with the designer to put together this document will improve the site. It\u2019s often not practical for designers to provide a style for everything, so drafting a web style guide and asking for feedback gives designers a chance to make sure any default styles fit in.\n\nIf you work in a team with other developers, documenting your code and development decisions will not only be useful as a deliverable, but will also force you to think about why you do things a certain way.\n\nFuture-friendly\n\nThe roles of designer and developer are increasingly blurred, yet all too often we work in isolation. Working side-by-side with designers on web style guides can vastly improve the quality of our work, and the collaborative approach can spark discussions like \u201chow would this work on a smaller screen?\u201d\n\nSometimes we can be so focused on getting the site ready and live, that we lose sight of what happens after it\u2019s launched, and how it\u2019s going to be maintained. A simple web style guide can make all the difference.\n\nIf you make your own style guide, I\u2019d love to add it to my stash of examples so please share a link to it in the comments.", "year": "2011", "author": "Anna Debenham", "author_slug": "annadebenham", "published": "2011-12-07T00:00:00+00:00", "url": "https://24ways.org/2011/front-end-style-guides/", "topic": "process"} {"rowid": 286, "title": "Defending the Perimeter Against Web Widgets", "contents": "On July 14, 1789, citizens of Paris stormed the Bastille, igniting a revolution that toppled the French monarchy. 
On July 14 of this year, there was a less dramatic (though more tweeted) takedown: The Deck network, which delivers advertising to some of the most popular web design and culture destinations, was down for about thirty minutes. During this period, most partner sites running ads from The Deck could not be viewed as result.\n\nA few partners were unaffected (aside from not having an ad to display). Fortunately, Dribbble, was one of them. In this article, I\u2019ll discuss outages like this and how to defend against them. But first, a few qualifiers: The Deck has been rock solid \u2013 this is the only downtime we\u2019ve witnessed since joining in June. More importantly, the issues in play are applicable to any web widget you might add to your site to display third-party content.\n\nDown and out\n\nYour defense is only as good as its weakest link. Web pages are filled with links, some of which threaten the ability of your page to load quickly and correctly. If you want your site to work when external resources fail, you need to identify the weak links on your site. In this article, we\u2019ll talk about web widgets as a point of failure and defensive JavaScript techniques for handling them.\n\nWidgets 101\n\nImagine a widget that prints out a Pun of the Day on your site. A simple technique for both widget provider and consumer is for the provider to expose a URL:\n\nhttp://widgetjonesdiary.com/punoftheday.js\n\nwhich returns a JavaScript file like this:\n\ndocument.write(\"<h2>The Pun of the Day</h2><p>Where do frogs go for beers after work? Hoppy hour!</p>\");\n\nThe call to document.write() injects the string passed into the document where it is called. So to display the widget on your page, simply add an external script tag where you want it to appear:\n\n<div class=\"punoftheday\">\n <script src=\"http://widgetjonesdiary.com/punoftheday.js\"></script>\n <!-- Content appears here as output of script above -->\n</div>\n\nThis approach is incredibly easy for both provider and consumer. But there are implications\u2026\n\ndocument.write()\u2026 or wrong?\n\nAs in the example above, scripts may perform a document.write() to inject HTML. Page rendering halts while a script is processed so any output can be inlined into the document. Therefore, page rendering speed depends on how fast the script returns the data. If an external JavaScript widget hangs, so does the page content that follows. It was this scenario that briefly stalled partner sites of The Deck last summer.\n\nThe elegant solution\n\nTo make our web widget more robust, calls to document.write() should be avoided. This can be achieved with a technique called JSONP (AKA JSON with padding). In our example, instead of writing inline with document.write(), a JSONP script passes content to a callback function:\n\npublishPun(\"<h2>Pun of the Day</h2><p>Where do frogs go for beers after work? Hoppy hour!</p>\");\n\nThen, it\u2019s up to the widget consumer to implement a callback function responsible for displaying the content. 
Here\u2019s a simple example where our callback uses jQuery to write the content into a target <div>:\n\n<!-- Where widget content should appear -->\n<div class=\"punoftheday\"></div>\n\n\u2026\n\n<script>\nfunction publishPun(content) {\n $('.punoftheday').html(content); // Writes content to its display location\n}\n</script>\n\nView Example 1\n\nEven if the widget content appears at the top of the page, our script can be included at the bottom so it\u2019s non-blocking: a slow response leaves page rendering unaffected. It simply invokes the callback which, in turn, writes the widget content to its display destination.\n\nThe hack\n\nBut what to do if your provider doesn\u2019t support JSONP? This was our case with The Deck. Returning to our example, I\u2019m reminded of computer scientist David Wheeler\u2019s statement, \u201cAll problems in computer science can be solved by another level of indirection\u2026 Except for the problem of too many layers of indirection.\u201d\n\nIn our case, the indirection is to move the widget content into position after writing it to the page. This allows us to place the widget <script> tag at the bottom of the page so rendering won\u2019t be blocked, but still display the widget in the target. The strategy:\n\n\n\tLoad widget content into a hidden <div> at the bottom of the page.\n\tMove the loaded content from the hidden <div> to its display location.\n\n\nand the code:\n\n<!-- Where widget content should appear -->\n<div class=\"punoftheday\"></div>\n\n\u2026\n\n<div class=\"loading-dock hidden\">\n <script src=\"http://widgetjonesdiary.com/punoftheday.js\"></script>\n</div>\n\n<script>\n$('.punoftheday').append($('.loading-dock').children(':gt(0)'));\n</script>\n\nView Example 2\n\nAfter the external punoftheday.js script has processed, the rendered HTML will look as follows:\n\n<div class=\"loading-dock hidden\">\n <script src=\"http://widgetjonesdiary.com/punoftheday.js\"></script>\n <h2>Pun of the Day</h2>\n <p>Where do frogs go for beers after work? Hoppy hour!</p>\n</div>\n\nThe \u2018loading-dock\u2019 <div> now includes the widget content, albeit hidden from view (if we\u2019ve styled the \u2018hidden\u2019 class with display: none). There\u2019s just one more step: move the content to its display destination. This line of jQuery (from above) does the trick:\n\n$('.punoftheday').append($('.loading-dock').children(':gt(0)'));\n\nThis selects all child elements in the \u2018loading-dock\u2019 <div> except the first \u2013 the widget <script> tag which generated it \u2013 and moves it to the display destination. Worth noting is the :gt(0) jQuery selector extension, which allows us to exclude the first (in a 0-based array) child element \u2013 the widget <script> tag \u2013 from selection.\n\nSince all of this happens at the bottom of the page, just before the </body> tag, no rendering has to wait on the external widget script. The only thing that fails if our widget hangs is\u2026 the widget itself. Our weakest link has been strengthened and so has our site. 
DE-FENSE!", "year": "2011", "author": "Rich Thornett", "author_slug": "richthornett", "published": "2011-12-06T00:00:00+00:00", "url": "https://24ways.org/2011/defending-the-perimeter-against-web-widgets/", "topic": "process"} {"rowid": 266, "title": "Collaborative Development for a Responsively Designed Web", "contents": "In responsive web design we\u2019ve found a technique that allows us to design for the web as a medium in its own right: one that presents a fluid, adaptable and ever changing canvas.\n\nUntil this point, we gave little thought to the environment in which users will experience our work, caring more about the aggregate than the individual. The applications we use encourage rigid layouts, whilst linear processes focus on clients signing off paintings of websites that have little regard for behaviour and interactions. The handover of pristine, pixel-perfect creations to developers isn\u2019t dissimilar to farting before exiting a crowded lift, leaving front-end developers scratching their heads as they fill in the inevitable gaps. If you haven\u2019t already, I recommend reading Drew\u2019s checklist of things to consider before handing over a design.\n\nSomehow, this broken methodology has survived for the last fifteen years or so. Even the advent of web standards has had little impact. Now, as we face an onslaught of different devices, the true universality of the web can no longer be ignored.\n\nResponsive web design is just the thin end of the wedge. Largely concerned with layout, its underlying philosophy could ignite a trend towards interfaces that adapt to any number of different variables: input methods, bandwidth availability, user preference \u2013 you name it!\n\nWith such adaptability, a collaborative and iterative process is required. Ethan Marcotte, who worked with the team behind the responsive redesign of the Boston Globe website, talked about such an approach in his book:\n\n\n\tThe responsive projects I\u2019ve worked on have had a lot of success combining design and development into one hybrid phase, bringing the two teams into one highly collaborative group.\n\n\nWhilst their process still involved the creation of desktop-centric mock-ups, these were presented to the entire team early on, where questions about how pages might adapt and behave at different sizes were asked. Mock-ups were quickly converted into HTML prototypes, meaning further decisions could be based on usage rather than guesswork (and endless hours spent in Photoshop).\n\nRegardless of the exact process, it\u2019s clear that the relationship between our two disciplines is more crucial than ever. Yet, historically, it seems a wedge has been driven between us \u2013 perhaps a result of segregation and waterfall-style processes \u2013 resulting in animosity.\n\nSo how can we improve this relationship? Ultimately, we\u2019ll need to adapt, but even within existing workflows we can start to overlap. Simply adjusting our attitude can effect change, and bring design and development teams closer together.\n\n\n\tGood design is constant contact.\n\n\tMark Otto\n\n\nThe way we work needs to be more open and inclusive. For example, ensuring members of the development team attend initial kick-off meetings and design workshops will not only ensure technical concerns are raised, but mean that those implementing our designs better understand the problems we\u2019re trying to solve.\n\nIt can also be useful at this stage to explain how you work and the sort of deliverables you expect to produce. 
This will give developers a chance to make recommendations on how these can be optimized for their own needs.\n\nYou may even find opportunities to share the load. On a recent project I worked on, our development partners offered to produce the interactive prototypes needed for user testing. This allowed us to concentrate on refining the experience, whilst they were able to get a head start on building the product.\n\nWhile developers should be involved at the beginning of projects, it\u2019s also important that designers are able to review and contribute to a product as it\u2019s being built. Any handover should be done in person, and ideally you\u2019ll have a day set aside to do so. Having additional budget available for follow-up design reviews is also recommended. Learning how to use version control tools like Subversion or Git will allow you to work within the same environment as developers, and allow you to contribute code or graphic assets directly to a project if needed.\n\nDon\u2019t underestimate the benefits of designer and developer sitting next to each other. Subtle nuances can be explored far more easily than if they were conducted over email or phone. As Ethan writes, \u201c\u2018Design\u2019 is the means, not merely the end; the path we walk over the course of a project, the choices we make\u201d.\n\nIt\u2019s from collaboration like this that I\u2019ve become fond of producing visual style guides. These demonstrate typographic treatments for common markup and patterns (blockquotes, lists, pagination, basic form controls and so on). Thinking in terms of components rather than individual pages not only fits in better with how a developer will implement a site, but can also ensure your design works as a coherent whole.\n\nDespite the amount of research and design produced, when it comes to the crunch, there will always be a need for compromise. As the old saying goes, \u2018fast, cheap and good \u2013 pick two.\u2019 It\u2019s important that you know which pieces are crucial to a design and which areas can allow for movement. Pick your battles wisely. Having an agreed set of design principles can be useful when making such decisions, as they help everyone focus on the goals of the project.\n\n\n\tThe best compromises are reached when both sides understand the issues of the other.\n\n\tRichard Rutter\n\n\nUltimately, better collaboration comes through a shared understanding of the different competencies required to build a website. Instead of viewing ourselves in terms of discrete roles, we should instead look to emphasize our range of abilities, and work with others whose skills are complementary.\n\nPerhaps somebody who actively seeks to broaden their knowledge is the mark of a professional. Seek these people out.\n\nThe best developers I\u2019ve worked with have a respect for design, probably having attempted to do some themselves! Having wrangled with a few MySQL databases myself, I certainly believe the obverse is true. While knowing HTML won\u2019t necessarily make you a better designer, it will help you understand the issues being faced by a front-end developer and, more importantly, allow you to offer solutions or alternative approaches.\n\nSo take a moment to think about how you work with developers and how you could improve your relationship with them. 
What are you doing to ease the path towards our collaborative future?", "year": "2011", "author": "Paul Lloyd", "author_slug": "paulrobertlloyd", "published": "2011-12-05T00:00:00+00:00", "url": "https://24ways.org/2011/collaborative-development-for-a-responsively-designed-web/", "topic": "business"} {"rowid": 274, "title": "Adaptive Images for Responsive Designs", "contents": "So you\u2019ve been building some responsive designs and you\u2019ve been working through your checklist of things to do:\n\n\n\tYou started with the content and designed around it, with mobile in mind first.\n\tYou\u2019ve gone liquid and there\u2019s nary a px value in sight; % is your weapon of choice now.\n\tYou\u2019ve baked in a few media queries to adapt your layout and tweak your design at different window widths.\n\tYou\u2019ve made your images scale to the container width using the fluid Image technique.\n\tYou\u2019ve even done the same for your videos using a nifty bit of JavaScript.\n\n\nYou\u2019ve done a good job so pat yourself on the back. But there\u2019s still a problem and it\u2019s as tricky as it is important: image resolutions.\n\nHTML has an <img> problem\n\nCSS is great at adapting a website design to different window sizes \u2013 it allows you not only to tweak layout but also to send rescaled versions of the design\u2019s images. And you want to do that because, after all, a smartphone does not need a 1,900-pixel background image1.\n\nHTML is less great. In the same way that you don\u2019t want CSS background images to be larger than required, you don\u2019t want that happening with <img>s either. A smartphone only needs a small image but desktop users need a large one. Unfortunately <img>s can\u2019t adapt like CSS, so what do we do?\n\nWell, you could just use a high resolution image and the fluid image technique would scale it down to fit the viewport; but that\u2019s sending an image five or six times the file size that\u2019s really needed, which makes it slow to download and unpleasant to use. Smartphones are pretty impressive devices \u2013 my ancient iPhone 3G is more powerful in every way than my first proper computer \u2013 but they\u2019re still terribly slow in comparison to today\u2019s desktop machines. Sending a massive image means it has to be manipulated in memory and redrawn as you scroll. You\u2019ll find phones rapidly run out of RAM and slow to a crawl.\n\nWell, OK. You went mobile first with everything else so why not put in mobile resolution <img>s too? Because even though mobile devices are rapidly gaining share in your analytics stats, they\u2019re still not likely to be the major share of your user base. I don\u2019t think desktop users would be happy with pokey little mobile resolution images, do you? What we need are adaptive images.\n\nAdaptive image techniques\n\nThere are a number of possible solutions, each with pros and cons, and it\u2019s not as simple to find a graceful solution as you might expect.\n\nYour first thought might be to use JavaScript to trawl through the markup and rewrite the source attribute. That\u2019ll get you the right end result, but it\u2019ll have done it in a way you absolutely don\u2019t want. That\u2019s because of the way browsers load resources. It starts to load the HTML and builds the page on-the-fly; as soon as it finds an <img> element it immediately asks the server for that image. After the HTML has finished loading, the JavaScript will run, change the src attribute, and then the browser will request that new image too. 
Not instead of, but as well as. Not good: that\u2019s added more bloat instead of cutting it.\n\nPlain JavaScript is out then, which is a problem, because what other tools do we have to work with as web designers? Let\u2019s ignore that for now and I\u2019ll outline another issue with the concept of serving different resolution images for different window widths: a basic file management problem. To request a different image, that image has to exist on the server. How\u2019s it going to get there? That\u2019s not a trivial problem, especially if you have non-technical users that update content through a CMS. Let\u2019s say you solve that \u2013 do you plan on a simple binary switch: big image|little image? Is that really efficient or future-proof? What happens if you have an archive of existing content that needs to behave this way? Can you apply such a solution to existing content or markup?\n\nThere\u2019s a detailed round-up of potential techniques for solving the adaptive images problem over at the Cloud Four blog if you fancy a dig around exploring all the options currently available. But I\u2019m here to show you what I think is the most flexible and easy to implement solution, so here we are.\n\nAdaptive Images\n\nAdaptive Images aims to mitigate most of the issues surrounding the problems of bringing the venerable <img> tag into the 21st century. And it works by leaving that tag completely alone \u2013 just add that desktop resolution image into the markup as you\u2019ve been doing for years now. We\u2019ll fix it using secret magic techniques and bottled pixie dreams. Well, fine: with one .htaccess file, one small PHP file and one line of JavaScript. But you\u2019re killing the mystique with that kind of talk.\n\nSo, what does this solution do?\n\n\n\tIt allows <img>s to adapt to the same break points you use in your media queries, giving granular control in the same way you get with your CSS.\n\tIt installs on your server in five minutes or less and after that is automatic and you don\u2019t need to do anything.\n\tIt generates its own rescaled images on the server and doesn\u2019t require markup changes, so you can apply it to existing web content.\n\tIf you wish, it will make all of your images go mobile-first (just in case that\u2019s what you want if JavaScript and cookies aren\u2019t available).\n\n\nSound good? I hope so. Here\u2019s what you do.\n\nSetting up and rolling out\n\nI\u2019ll assume you have some basic server knowledge along with that wealth of front-end wisdom exploding out of your head: that you know not to overwrite any existing .htaccess file for example, and how to set file permissions on your server. Feeling up to it? Excellent.\n\n\n\tDownload the latest version of Adaptive Images either from the website or from the GitHub repository.\n\tUpload the included .htaccess and adaptive-images.php files into the root folder of your website.\n\tCreate a directory called ai-cache and make sure the server can write to it (CHMOD 755 should do it).\n\tAdd the following line of JavaScript into the <head> of your site:\n\n\n<script>document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/\u2018;</script>\n\nThat\u2019s it, unless you want to tweak the default settings. 
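To be clear about what that line does: on a 1,920\u00d71,200 monitor, for example,\n\ndocument.cookie = 'resolution=' + Math.max(screen.width, screen.height) + '; path=/';\n\nleaves the server a cookie of resolution=1920 to read on every subsequent image request \u2013 nothing more is needed unless you want to change the defaults.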
You likely do, but essentially you\u2019re already up and running.\n\nHow it works\n\nAdaptive Images does a number of things depending on the scenario the script has to handle, but here\u2019s a basic overview of what it does when you load a page running it:\n\n\n\tA session cookie is written with the value of the visitor\u2019s screen size in pixels.\n\tThe HTML encounters an <img> tag and sends a request to the server for that image. It also sends the cookie, because that\u2019s how browsers work.\n\tApache sits on the server and receives the request for the image. Apache then has a look in the .htaccess file to see if there are any special instructions for files in the requested URL.\n\tThere are! The .htaccess says \u201cHey, server! Any request you get for a JPG, GIF or PNG file just send to the adaptive-images.php file instead.\u201d\n\tThe PHP file then does some intelligent thinking which can cover a number of scenarios, but I\u2019ll illustrate one path that can happen:\n\n\n\t\n\t\tThe PHP file looks for the cookie and finds out that the user has a maximum screen width of 480px.\n\t\tThe PHP has a look at the available media query sizes that were configured and decides which one matches the user\u2019s device.\n\t\tIt then has a look inside the /ai-cache/480/ folder to see if a rescaled image already exists there.\n\t\tWe\u2019ll pretend it doesn\u2019t \u2013 the PHP then goes to the actual requested URI and finds that the original file does exist.\n\t\tIt has a look to see how wide that image is. If it\u2019s already smaller than the user\u2019s screen width it sends it along and stops there. But, let\u2019s pretend the image is 1,000px wide.\n\t\tThe PHP then resizes the image and saves it into the /ai-cache/480 folder ready for the next time someone needs it.\n\t\n\nIt also does a few other things when needs arise, for example:\n\n\n\tIt sends images with a cache header field that tells proxies not to cache the image, while telling browsers they should. This avoids problems with proxy servers and network caching systems grabbing the wrong image and storing it.\n\tIt handles cases where there isn\u2019t a cookie set, and you can choose whether to then send the mobile version or the largest configured media query size.\n\tIt compares timestamps between the source image and the generated cache image \u2013 to ensure that if the source image gets updated, the old cached file won\u2019t be sent.\n\n\nCustomizing\n\nThere are a few options you can customize if you don\u2019t like the default values. By looking in the PHP\u2019s configuration section at the top of the file, you can:\n\n\n\tSet the resolution breakpoints to match your media query break points.\n\tChange the name and location of the ai-cache folder.\n\tChange the quality level any generated JPG images are saved at.\n\tHave it perform a subtle sharpen on rescaled images to help keep detail.\n\tToggle whether you want it to compare the files in the cache folder with the source ones or not.\n\tSet how long the browser should cache the images for.\n\tSwitch between a mobile-first or desktop-first approach when a cookie isn\u2019t found.\n\n\nMore importantly, you probably want to omit a few folders from the AI behaviour. You don\u2019t need or want it resizing the images you\u2019re using in your CSS, for example. That\u2019s fine \u2013 just open up the .htaccess file and follow the instructions to list any directories you want AI to ignore. 
Or, if you\u2019re a dab hand at RewriteRules you can remove the exclamation mark at the start of the rule and it\u2019ll only apply AI behaviour to a given list of folders.\n\nCaveats\n\nAs I mentioned, I think this is one of the most flexible, future-proof, retrofittable and easy to use solutions available today. But, there are problems with this approach as there are with all of the ones I\u2019ve seen so far.\n\nThis is a PHP solution\n\nI wish I was smarter and knew some fancy modern languages the cool kids discuss at parties, but I don\u2019t. So, you need PHP on your server. That said, Adaptive Images has a Creative Commons licence2 and I would welcome anyone to contribute a port of the code3. \n\nContent delivery networks\n\nAdaptive Images relies on the server being able to: intercept requests for images; do some logic; and send one of a given number of responses. Content delivery networks are generally dumb caches, and they won\u2019t allow that to happen. Adaptive Images will not work if you\u2019re using a CDN to deliver your website.\n\nA minor but interesting cookie issue.\n\nAs Yoav Weiss pointed out in his article Preloaders, cookies and race conditions, there is no way to guarantee that a cookie will be set before images are requested \u2013 even though the JavaScript that sets the cookie is loaded by the browser before it finds any <img> tags. That could mean images being requested without a cookie being available. Adaptive Images has a two-fold mechanism to avoid this being a problem:\n\n\n\tThe $mobile_first toggle allows you to choose what to send to a browser if a cookie isn\u2019t set. If FALSE then it will send the highest configured resolution; if TRUE it will send the lowest.\n\tEven if set to TRUE, Adaptive Images checks the User Agent String. If it discovers the user is on a desktop environment, it will override $mobile_first and set it to FALSE.\n\n\nThis means that if $mobile_first is set to TRUE and the user was unlucky (their browser didn\u2019t write the cookie fast enough), mobile devices will be supplied with the smallest image, and desktop devices will get the largest.\n\nThe best way to get a cookie written is to use JavaScript as I\u2019ve explained above, because it\u2019s the fastest way. However, for those that want it, there is a JavaScript-free method which uses CSS and a bogus PHP \u2018image\u2019 to set the cookie. A word of caution: because it requests an external file, this method is slower than the JavaScript one, and it is very likely that the cookie won\u2019t be set until after images have been requested.\n\nThe future\n\nFor today, this is a pretty good solution. It works, and as it doesn\u2019t interfere with your markup or source material in any way, the process is non-destructive. If a future solution is superior, you can just remove the Adaptive Images files and you\u2019re good to go \u2013 you\u2019d never know AI had been there.\n\nHowever, this isn\u2019t really a long-term solution, not least because of the intermittent problem of the cookie and image request race condition. What we really need are a number of standardized ways to handle this in the future.\n\nFirst, we could do with browsers sending far more information about the user\u2019s environment along with each HTTP request (device size, connection speed, pixel density, etc.), because the way things work now is no longer fit for purpose. 
The web now is a much broader entity used on far more diverse devices than when these technologies were dreamed up, and we absolutely require the server to have better knowledge about device capabilities than is currently possible. Relying on cookies to do this job doesn\u2019t cut it, and the User Agent String is a complete mess incapable of fulfilling the various purposes we are forced to hijack it for.\n\nSecondly, we need a W3C-backed markup level solution to supply semantically different content at different resolutions, not just rescaled versions of the same content as Adaptive Images does.\n\nI hope you\u2019ve found this interesting and will find Adaptive Images useful.\n\nFootnotes\n\n1 While I\u2019m talking about preventing smartphones from downloading resources they don\u2019t need: you should be careful of your media query construction if you want to stop WebKit downloading all the images in all of the CSS files.\n\n2 Adaptive Images has a very broad Creative Commons licence and I warmly welcome feedback and community contributions via the GitHub repository. \n\n3 There is a ColdFusion port of an older version of Adaptive Images. I do not have anything to do with ported versions of Adaptive Images.", "year": "2011", "author": "Matt Wilcox", "author_slug": "mattwilcox", "published": "2011-12-04T00:00:00+00:00", "url": "https://24ways.org/2011/adaptive-images-for-responsive-designs/", "topic": "ux"} {"rowid": 284, "title": "Subliminal User Experience", "contents": "The term \u2018user experience\u2019 is often used vaguely to quantify common elements of the interaction design process: wireframing, sitemapping and so on. UX undoubtedly involves all of these principles to some degree, but there really is a lot more to it than that.\n\nGood UX is characterized by providing the user with constant feedback as they step through your interface. It means thinking about and providing fallbacks and error resolutions in even the rarest of scenarios. It\u2019s about omitting clutter to make way for the necessary, and using the most fundamental of design tools to influence a user\u2019s path. It means making no assumptions, designing right down to the most distinct details and going one step further every single time. In many cases, good UX is completely subliminal.\n\nThere are simple tools and subtleties we can build into our products to enhance the overall experience but, in order to do so, we really have to step beyond where we usually draw the line on what to design.\n\nThe purpose of this article is not to provide technical how-tos, as the functionality is, in most cases, quite simple and could be implemented in a myriad of ways. Rather, it will present a handful of ideas for enhancing the experience of an interface at a deeper level of design without relying on the container.\n\nWe\u2019ll cover three elements that should get you thinking in the right mindset:\n\n\n\tprogress activity and post-active states\n\tpseudo-class preloading\n\tbuttons and their (mis)behaviour\n\n\nProgress activity and the post-active state\n\nWe\u2019ve long established that we can\u2019t control the devices our products are viewed on, which browser they\u2019ll run in or what connection speed will be used to access them. We accept this all as factual, so why is it so often left to the browser to provide feedback to the user when an event is triggered or an error encountered? The browser isn\u2019t part of the interface \u2014 it\u2019s merely a container. 
A simple, visual recognition of your users\u2019 activity may be all it takes to make or break the product.\n\nLet\u2019s begin with a commonly overlooked case: progress activity.\n\nA user moves their cursor over a hyperlink or button, which is clearly defined as one by the visual language of your content. Upon doing so, they trigger the :hover state to confirm this element is indeed interactive. So far, so good. What happens next is where it starts to fall apart: the user hits this link, presumably triggering an :active state, which is then returned to the normal state upon release. And then what?\n\n\n\nFrom this point on, your user is in limbo. The link has fallen back to either its regular or :visited state. You\u2019ve effectively abandoned them and are relying entirely on the browser they\u2019re using to communicate that something is happening. This poses quite a few problems:\n\n\n\tThe user may lose focus of what they were doing.\n\tThere is little consistency between progress indication in browsers.\n\tThe user may not even notice that their action has been acknowledged.\n\n\nHow many times have one or more of these events happened to you due to a lack of communication from the interface?\n\nThink about the differences between Safari and Chrome in this area \u2014 two browsers that, when compared to each other, are relatively similar in nature, though this basic feature differs in execution.\n\n\n\nLike all aspects of designing the user experience, there is no one true way to fix this problem, but we can introduce details that many users will unconsciously appreciate.\n\nConsider the basic loading indicator. It\u2019s nothing new \u2014 in fact, some would argue it\u2019s quite a clich\u00e9. However, whether using a spinning wheel or a progress bar, a gif or JavaScript, or something more sophisticated, these simple tools create an illusion of movement, progress and activity. Depending on the implementation, progress indication graphics can significantly increase a user\u2019s perception of the speed in which an event is taking place. Combine this with a cursor change and a lock over the element to prevent double-clicking or reloading, and your chances of keeping your user\u2019s valuable attention have significantly increased.\n\nDemo: Progress activity and the post-active state\n\nThis same logic applies to all aspects of defaulting in a browser, from micro-elements like this up to something as simple as a 404 page. The difference in a user\u2019s reaction to hitting the default Apache 404 and a hand-crafted, branded page are phenomenal and there are no prizes for guessing which one they\u2019re more likely to exit from.\n\nPseudo-class preloading\n\nAnother detail that it pays well to look after is the use and abuse of the :hover element and, more importantly, the content revealed by it. Chances are you\u2019re using the :hover pseudo-class somewhere in almost every screen you create. If content is being revealed on :hover and that content takes some time to load, there will inevitably be a delay the first time it is initiated. It appears tacky and half-finished when a tooltip or drop-down loads instantly, only to have its background or supporting elements follow through a second or two later. So, let\u2019s preload the elements we know we\u2019ll need.\n\nA very simple application of this would be to load each file into the default state of a visible element and offset them by a large number. 
This ensures our elements have loaded and are ready if and when they need to be displayed.\n\nelement {\n background: url(path/to/image.jpg) -9999em -9999em no-repeat;\n }\nelement .tooltip {\n display: none;\n }\nelement:hover .tooltip {\n display: block;\n background: url(path/to/image.jpg) 0 0;\n }\n\nBackground images are just one example. Of course, the same logic can apply to any form of revealed content. Using a sprite graphic can also be a clever \u2014 albeit tedious \u2014 method for achieving the same goal, so if you\u2019re using a sprite, preloading in this way may not be necessary\n\nThe differences between preloading and not can only be visualized properly with an actual demonstration.\n\nDemo: Preloading revealed content\n\nButtons and their (mis)behaviour\n\nAlmost all of the time, a button serves just one purpose: to be clicked (or tapped). When a button\u2019s pressed, therefore, if anything other than triggering the desired event occurs, a user naturally becomes frustrated. I often get funny looks when talking about this, but designing the details of a button is something I consider essential.\n\nIt goes without saying that a button should always visually recognise :hover and :active states. We can take that one step further and disable some actions that get in the way of pressing the button.\n\nIt\u2019s rare that a user would ever want to select and use the text on a button, so let\u2019s cleanly disable it:\n\nelement {\n -moz-user-select: -moz-none;\n -webkit-user-select: none;\n user-select: none;\n }\n\nIf the button is image-based or contains an image, we could also disable user dragging to make sure the image element stays locked to the button:\n\nelement {\n -moz-user-drag: -moz-none;\n -webkit-user-drag: none;\n user-drag: none;\n }\n\nDemo: A more usable button\n\nDisabling global features like this should be done with utmost caution as it\u2019s very easy to cross the line between enhancement and friction. Cases where this is acceptable are very rare, but it\u2019s a good trick to keep in mind nevertheless. Both Apple\u2019s iCloud and Metalab\u2019s Flow applications use these tools appropriately and to great extent.\n\nYou could argue that the visual feedback of having the text selected or image dragged when a user mis-hits the button is actually a positive effect, informing the user that their desired action did not work. However, covering for human error should be a designer\u2019s job, not that of our users. We can (almost) ensure it does work for them by accommodating for errors like this in most cases.\n\nFinal thoughts\n\nDesigning to this level of detail can seem obsessive, but as a designer and user of many interfaces and applications, I believe it can be the difference between a good user experience and a great one.\n\nThe samples you\u2019ve just seen are only a fraction of the detail we can design for. Keep in mind that the demonstrations, code and methods above outline just one way to do this. 
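By way of one more illustration, here is a minimal sketch of the post-active idea from earlier: a hypothetical submit button that acknowledges the click straight away, locks itself against double submission and signals that work is in progress. The element ID, the class name and the saveThings stand-in are assumptions made for this example rather than part of the demos above.

var button = document.getElementById('save');

// Stand-in for whatever asynchronous work the button actually triggers.
function saveThings(done) {
    setTimeout(done, 2000);
}

button.onclick = function () {
    // Acknowledge the click immediately so the user is never left in limbo.
    button.disabled = true;             // lock against double-clicking
    button.className += ' is-working';  // hook for a spinner or progress style
    button.style.cursor = 'progress';   // echo the activity in the cursor

    saveThings(function () {
        // Restore the resting state once the work has finished.
        button.disabled = false;
        button.className = button.className.replace(' is-working', '');
        button.style.cursor = '';
    });
};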
You may not agree with all of these processes or have the time and desire to consider them, but one fact remains: it\u2019s not the technology, or the way it\u2019s done that\u2019s important \u2014 it\u2019s the logic and the concept of designing everything.", "year": "2011", "author": "Chris Sealey", "author_slug": "chrissealey", "published": "2011-12-03T00:00:00+00:00", "url": "https://24ways.org/2011/subliminal-user-experience/", "topic": "ux"} {"rowid": 280, "title": "Conditional Loading for Responsive Designs", "contents": "On the eighteenth day of last year\u2019s 24 ways, Paul Hammond wrote a great article called Speed Up Your Site with Delayed Content. He outlined a technique for loading some content \u2014 like profile avatars \u2014 after the initial page load. This gives you a nice performance boost.\n\nThere\u2019s another situation where this kind of delayed loading could be really handy: mobile-first responsive design.\n\nResponsive design combines three techniques:\n\n\n\ta fluid grid\n\tflexible images\n\tmedia queries\n\n\nAt first, responsive design was applied to existing desktop-centric websites to allow the layout to adapt to smaller screen sizes. But more recently it has been combined with another innovative approach called mobile first.\n\nRather then starting with the big, bloated desktop site and then scaling down for smaller devices, it makes more sense to start with the constraints of the small screen and then scale up for larger viewports. Using this approach, your layout grid, your large images and your media queries are applied on top of the pre-existing small-screen design. It\u2019s taking progressive enhancement to the next level.\n\nOne of the great advantages of the mobile-first approach is that it forces you to really focus on the core content of your page. It might be more accurate to think of this as a content-first approach. You don\u2019t have the luxury of sidebars or multiple columns to fill up with content that\u2019s just nice to have rather than essential.\n\nBut what happens when you apply your media queries for larger viewports and you do have sidebars and multiple columns? Well, you can load in that nice-to-have content using the same kind of Ajax functionality that Paul described in his article last year. The difference is that you first run a quick test to see if the viewport is wide enough to accommodate the subsidiary content. This is conditional delayed loading.\n\nConsider this situation: I\u2019ve published an article about cats and I\u2019d like to include relevant cat-related news items in the sidebar \u2026but only if there\u2019s enough room on the screen for a sidebar.\n\nI\u2019m going to use Google\u2019s News API to return the search results. 
This is the ideal time to use delayed loading: I don\u2019t want a third-party service slowing down the rendering of my page so I\u2019m going to fire off the request after my document has loaded.\n\nHere\u2019s an example of the kind of Ajax function that I would write:\n\nvar searchNews = function(searchterm) {\n\tvar elem = document.createElement('script');\n\telem.src = 'http://ajax.googleapis.com/ajax/services/search/news?v=1.0&q='+searchterm+'&callback=displayNews';\n\tdocument.getElementsByTagName('head')[0].appendChild(elem);\n};\n\nI\u2019ve provided a callback function called displayNews that takes the JSON result of that Ajax request and adds it an element on the page with the ID newsresults:\n\nvar displayNews = function(news) {\n\tvar html = '',\n\titems = news.responseData.results,\n\ttotal = items.length;\n\tif (total>0) {\n\t\tfor (var i=0; i<total; i++) {\n\t\t\tvar item = items[i];\n\t\t\thtml+= '<article>';\n\t\t\thtml+= '<a href=\"'+item.unescapedUrl+'\">';\n\t\t\thtml+= '<h3>'+item.titleNoFormatting+'</h3>';\n\t\t\thtml+= '</a>';\n\t\t\thtml+= '<p>';\n\t\t\thtml+= item.content;\n\t\t\thtml+= '</p>';\n\t\t\thtml+= '</article>';\n\t\t}\n\t\tdocument.getElementById('newsresults').innerHTML = html;\n\t}\n};\n\nNow, I can call that function at the bottom of my document:\n\n<script>\n searchNews('cats');\n</script>\n\nIf I only want to run that search when there\u2019s room for a sidebar, I can wrap it in an if statement:\n\n<script>\nif (document.documentElement.clientWidth > 640) {\n searchNews('cats');\n}\n</script>\n\nIf the browser is wider than 640 pixels, that will fire off a search for news stories about cats and put the results into the newsresults element in my markup:\n\n<div id=\"newsresults\">\n <!-- search results go here -->\n</div>\n\nThis works pretty well but I\u2019m making an assumption that people with small-screen devices wouldn\u2019t be interested in seeing that nice-to-have content. You know what they say about assumptions: they make an ass out of you and umptions. I should really try to give everyone at least the option to get to that extra content:\n\n<div id=\"newsresults\">\n <a href=\"http://www.google.com/search?q=cats&tbm=nws\">Search Google News</a>\n</div>\n\nSee the result\n\nVisitors with small-screen devices will see that link to the search results; visitors with larger screens will get the search results directly.\n\nI\u2019ve been concentrating on HTML and JavaScript, but this technique has consequences for content strategy and information architecture. Instead of thinking about possible page content in a binary way as either \u2018on the page\u2019 or \u2018not on the page\u2019, conditional loading introduces a third \u2018it\u2019s complicated\u2019 option.\n\nThis was just a simple example but I hope it illustrates that conditional loading could become an important part of the content-first responsive design approach.", "year": "2011", "author": "Jeremy Keith", "author_slug": "jeremykeith", "published": "2011-12-02T00:00:00+00:00", "url": "https://24ways.org/2011/conditional-loading-for-responsive-designs/", "topic": "ux"} {"rowid": 271, "title": "Creating Custom Font Stacks with Unicode-Range", "contents": "Any web designer or front-end developer worth their salt will be familiar with the CSS @font-face rule used for embedding fonts in a web page. 
We\u2019ve all used it \u2014 either directly in our code ourselves, or via one of the web font services like Fontdeck, Typekit or Google Fonts.\n\nIf you\u2019re like me, however, you\u2019ll be used to just copying and pasting in a specific incantation of lines designed to get different formats of fonts working in different browsers, and may not have really explored all the capabilities of @font-face properties as defined by the spec.\n\nOne such property \u2014 the unicode-range descriptor \u2014 sounds pretty dull and is easily overlooked. It does, however, have some fairly interesting possibilities when put to use in creative ways.\n\nUnicode-range\n\nThe unicode-range descriptor is designed to help when using fonts that don\u2019t have full coverage of the characters used in a page. By adding a unicode-range property to a @font-face rule it is possible to specify the range of characters the font covers. \n\n@font-face {\n font-family: BBCBengali;\n src: url(fonts/BBCBengali.ttf) format(\"opentype\");\n unicode-range: U+00-FF;\n}\n\nIn this example, the font is to be used for characters in the range of U+00 to U+FF which runs from the unexciting control characters at the start of the Unicode table (symbols like the exclamation mark start at U+21) right through to \u00ff at U+FF \u2013 the extent of the Basic Latin character range.\n\nBy adding multiple @font-face rules for the same family but with different ranges, you can build up complete coverage of the characters your page uses by using different fonts.\n\nWhen I say that it\u2019s possible to specify the range of characters the font covers, that\u2019s true, but what you\u2019re really doing with the unicode-range property is declaring which characters the font should be used for. This becomes interesting, because instead of merely working with the technical constraints of available characters in a given font, we can start picking and choosing characters to use and selectively mix fonts together.\n\nThe best available ampersand\n\nA few years back, Dan Cederholm wrote a post encouraging designers to use the best available ampersand. Dan went on to outline how this can be achieved by wrapping our ampersands in a <span> element with a class applied:\n\n<span class=\"amp\">&</span>\n\nA CSS rule can then be written to select the <span> and apply a different font:\n\nspan.amp {\n font-family: Baskerville, Palatino, \"Book Antiqua\", serif;\n}\n\nThat\u2019s a perfectly serviceable technique, but the drawbacks are clear \u2014 you have to add extra markup which is borderline presentational, and you also have to be able to add that markup, which isn\u2019t always possible when working with a CMS.\n\nPerhaps we could do this with unicode-range.\n\nA better best available ampersand\n\nThe Unicode code point for an ampersand is U+26, so the ampersand font stack above can be created like so:\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n\nWhat we\u2019ve done here is specify a new family called Ampersand and created a font stack for it with the user\u2019s locally installed copies of Baskerville, Palatino or Book Antiqua. We\u2019ve then limited it to a single character range \u2014 the ampersand. Of course, those don\u2019t need to be local fonts \u2014 they could be web font files, too. 
If you have a font with a really snazzy ampersand, go for your life.\n\nWe can then use that new family in a regular font stack.\n\nh1 {\n font-family: Ampersand, Arial, sans-serif;\n}\n\nWith this in place, any <h1> elements in our page will use the Ampersand family (Baskerville, Palatino or Book Antiqua) for ampersands, and Arial for all other characters. If the user doesn\u2019t have any of the Ampersand family fonts available, the ampersand will fall back to the next item in the font stack, Arial.\n\nYou didn\u2019t think it was that easy, did you?\n\nOh, if only it were so. The problem comes, as ever, with the issue of browser support. The unicode-range property has good support in WebKit browsers (like Safari and Chrome, and the browsers on most popular smartphone platforms) and in recent versions of Internet Explorer. The big stumbling block comes in the form of Firefox, which has no support at all.\n\nIf you\u2019re familiar with how CSS works when it comes to unsupported properties, you\u2019ll know that if a browser encounters a property it doesn\u2019t implement, it just skips that declaration and moves on to the next. That works perfectly for things like border-radius \u2014 if the browser can\u2019t round off the corners, the declaration is skipped and the user sees square corners instead. Perfect.\n\nLess perfect when it comes to unicode-range, because if no range is specified then the default is that the font is applied for all characters \u2014 the whole range. If you\u2019re using a fancy font for flamboyant ampersands, you probably don\u2019t want that applied to all your text if unicode-range isn\u2019t supported. That would be bad. Really bad.\n\nEnsuring good fallbacks\n\nAs ever, the trick is to make sure that there\u2019s a sensible fallback in place if a browser doesn\u2019t have support for whatever technology you\u2019re trying to use. This is where being a super nerd about understanding the spec you\u2019re working with really pays off.\n\nWe can make use of the rules of the CSS cascade to make sure that if unicode-range isn\u2019t supported we get a sensible fallback font. What would be ideal is if we were able to follow up the @font-face rule with a second rule to override it if Unicode ranges aren\u2019t implemented.\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n@font-face {\n font-family: 'Ampersand';\n src: local('Arial');\n}\n\nIn theory, this code should make sense for all browsers. For those that support unicode-range the two rules become cumulative. They specify different ranges for the same family, and in WebKit browsers this has the expected result of using Arial for most characters, but Baskerville and friends for the ampersand. For browsers that don\u2019t have support, the second rule should just supersede the first, setting the font to Arial. \n\nUnfortunately, this code causes current versions of Firefox to freak out and use the first rule, applying Baskerville to the entire range. That\u2019s both unexpected and unfortunate. Bad Firefox. On your rug.\n\nIf that doesn\u2019t work, what can we do? Well, we know that if given a unicode-range Firefox will ignore the range and apply the font to all characters. That\u2019s really what we\u2019re trying to achieve. So what if we specified a range for the fallback font, but made sure it only covers some obscure high-value Unicode character we\u2019re never going to use in our page? 
Then it wouldn\u2019t affect the outcome for browsers that do support ranges.\n\n@font-face {\n font-family: 'Ampersand';\n src: local('Baskerville'), local('Palatino'), local('Book Antiqua');\n unicode-range: U+26;\n}\n@font-face {\n /* Ampersand fallback font */\n font-family: 'Ampersand';\n src: local('Arial');\n unicode-range: U+270C;\n}\n\nBy specifying a range on the fallback font, Firefox appears to correctly override the first based on the cascade sort order. Browsers that do support ranges take the second rule in addition, and apply Arial for that obscure character we\u2019re not using in any of our pages \u2014 U+270C.\n\nSo we get our nice ampersands in browsers that support unicode-range and, thanks to our styling of an obscure Unicode character, the font falls back to a perfectly acceptable Arial in browsers that do not offer support. Perfect!\n\nThat obscure character, my friends, is what Unicode defines as the VICTORY HAND.\n\n\u270c\n\nSo, how can we use this?\n\nAmpersands are a neat trick, and it works well in browsers that support ranges, but that\u2019s not really the point of all this. Styling ampersands is fun, but they\u2019re only really scratching the surface. Consider more involved examples, such as substituting a different font for numerals, or symbols, or even caps. Things certainly begin to get a bit more interesting.\n\nHow do you know what the codes are for different characters? Richard Ishida has a handy online conversion tool available where you can type in the characters and get the Unicode code points out the other end.\n\nOf course, the fact remains that browser support for unicode-range is currently limited, so any application needs to have fallbacks that you\u2019re still happy for a significant proportion of your visitors to see. In some cases, such as dedicated pages for mobile devices in an HTML-based phone app, this is immediately useful as support in WebKit browsers is already very good. In other cases, you\u2019ll have to use your own best judgement based on your needs and audience.\n\nOne thing to keep in mind is that if you\u2019re using web fonts, the entire font will be downloaded even if only one character is used. That said, the font shouldn\u2019t be downloaded if none of the characters within the Unicode range are present in a given page.\n\nAs ever, there are pros and cons to using unicode-range as well as varied but increasing support in browsers. It remains a useful tool to understand and have in your toolkit for when the right moment comes along.", "year": "2011", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2011-12-01T00:00:00+00:00", "url": "https://24ways.org/2011/creating-custom-font-stacks-with-unicode-range/", "topic": "code"} {"rowid": 223, "title": "Calculating Color Contrast", "contents": "Some websites and services allow you to customize your profile by uploading pictures, changing the background color or other aspects of the design. As a customer, this personalization turns a web app into your little nest where you store your data. As a designer, letting your customers have free rein over the layout and design is a scary prospect. So what happens to all the stock text and images that are designed to work on nice white backgrounds? Even the Mac only lets you choose between two colors for the OS, blue or graphite! 
Opening up the ability to customize your site\u2019s color scheme can be a recipe for disaster unless you are flexible and understand how to find maximum color contrasts.\n\nIn this article I will walk you through two simple equations to determine if you should be using white or black text depending on the color of the background. The equations are both easy to implement and produce similar results. It isn\u2019t a matter of which is better, but more the fact that you are using one at all! That way, even with the craziest of Geocities color schemes that your customers choose, at least your text will still be readable.\n\nLet\u2019s have a look at a range of various possible colors. Maybe these are pre-made color schemes, corporate colors, or plucked from an image.\n\n\n\nNow that we have these potential background colors and their hex values, we need to find out whether the corresponding text should be in white or black, based on which has a higher contrast, therefore affording the best readability. This can be done at runtime with JavaScript or in the back-end before the HTML is served up.\n\nThere are two functions I want to compare. The first, I call \u201950%\u2019. It takes the hex value and compares it to the value halfway between pure black and pure white. If the hex value is less than half, meaning it is on the darker side of the spectrum, it returns white as the text color. If the result is greater than half, it\u2019s on the lighter side of the spectrum and returns black as the text value.\n\nIn PHP:\n\nfunction getContrast50($hexcolor){\n return (hexdec($hexcolor) > 0xffffff/2) ? 'black':'white';\n}\n\nIn JavaScript:\n\nfunction getContrast50(hexcolor){\n return (parseInt(hexcolor, 16) > 0xffffff/2) ? 'black':'white';\n}\n\nIt doesn\u2019t get much simpler than that! The function converts the six-character hex color into an integer and compares that to one half the integer value of pure white. The function is easy to remember, but is naive when it comes to understanding how we perceive parts of the spectrum. Different wavelengths have greater or lesser impact on the contrast.\n\nThe second equation is called \u2018YIQ\u2019 because it converts the RGB color space into YIQ, which takes into account the different impacts of its constituent parts. Again, the equation returns white or black and it\u2019s also very easy to implement.\n\nIn PHP:\n\nfunction getContrastYIQ($hexcolor){\n\t$r = hexdec(substr($hexcolor,0,2));\n\t$g = hexdec(substr($hexcolor,2,2));\n\t$b = hexdec(substr($hexcolor,4,2));\n\t$yiq = (($r*299)+($g*587)+($b*114))/1000;\n\treturn ($yiq >= 128) ? 'black' : 'white';\n}\n\nIn JavaScript:\n\nfunction getContrastYIQ(hexcolor){\n\tvar r = parseInt(hexcolor.substr(0,2),16);\n\tvar g = parseInt(hexcolor.substr(2,2),16);\n\tvar b = parseInt(hexcolor.substr(4,2),16);\n\tvar yiq = ((r*299)+(g*587)+(b*114))/1000;\n\treturn (yiq >= 128) ? 'black' : 'white';\n}\n\nYou\u2019ll notice first that we have broken down the hex value into separate RGB values. This is important because each of these channels is scaled in accordance to its visual impact. Once everything is scaled and normalized, it will be in a range between zero and 255. Much like the previous \u201950%\u2019 function, we now need to check if the input is above or below halfway. 
Depending on where that value is, we\u2019ll return the corresponding highest contrasting color.\n\nThat\u2019s it: two simple contrast equations which work really well to determine the best readability.\n\nIf you are interested in learning more, the W3C has a few documents about color contrast and how to determine if there is enough contrast between any two colors. This is important for accessibility to make sure there is enough contrast between your text and link colors and the background.\n\nThere is also a great article by Kevin Hale on Particletree about his experience with choosing light or dark themes. To round it out, Jonathan Snook created a color contrast picker which allows you to play with RGB sliders to get values for YIQ, contrast and others. That way you can quickly fiddle with the knobs to find the right balance.\n\nComparing results\n\nLet\u2019s revisit our color schemes and see which text color is recommended for maximum contrast based on these two equations.\n\n\n\nIf we use the simple \u201950%\u2019 contrast function, we can see that it recommends black against all the colors except the dark green and purple on the second row. In general, the equation feels the colors are light and that black is a better choice for the text.\n\n\n\nThe more complex \u2018YIQ\u2019 function, with its weighted colors, has slightly different suggestions. White text is still recommended for the very dark colors, but there are some surprises. The red and pink values show white text rather than black. This equation takes into account the weight of the red value and determines that the hue is dark enough for white text to show the most contrast.\n\nAs you can see, the two contrast algorithms agree most of the time. There are some instances where they conflict, but overall you can use the equation that you prefer. I don\u2019t think it is a major issue if some edge-case colors get one contrast over another, they are still very readable.\n\nNow let\u2019s look at some common colors and then see how the two functions compare. You can quickly see that they do pretty well across the whole spectrum.\n\n\n\nIn the first few shades of grey, the white and black contrasts make sense, but as we test other colors in the spectrum, we do get some unexpected deviation. Pure red #FF0000 has a flip-flop. This is due to how the \u2018YIQ\u2019 function weights the RGB parts. While you might have a personal preference for one style over another, both are justifiable.\n\n\n\nIn this second round of colors, we go deeper into the spectrum, off the beaten track. Again, most of the time the contrasting algorithms are in sync, but every once in a while they disagree. You can select which you prefer, neither of which is unreadable.\n\nConclusion\n\nContrast in color is important, especially if you cede all control and take a hands-off approach to the design. It is important to select smart defaults by making the contrast between colors as high as possible. This makes it easier for your customers to read, increases accessibility and is generally just easier on the eyes. \n\nSure, there are plenty of other equations out there to determine contrast; what is most important is that you pick one and implement it into your system.\n\nSo, go ahead and experiment with color in your design. 
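If you want to go a step further than a white-or-black decision and measure any two colours against the W3C guidelines mentioned above, the WCAG 2.0 contrast ratio can be calculated directly. Here is a sketch of that calculation in JavaScript, using the same six-character hex strings as the functions above; the 4.5:1 figure in the comments is the WCAG 2.0 AA threshold for normal-sized text.

// Convert one hex pair to a linearised channel value, as WCAG 2.0 defines it.
function channel(hexpair) {
    var c = parseInt(hexpair, 16) / 255;
    return (c <= 0.03928) ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of a six-character hex colour.
function luminance(hexcolor) {
    var r = channel(hexcolor.substr(0, 2));
    var g = channel(hexcolor.substr(2, 2));
    var b = channel(hexcolor.substr(4, 2));
    return (0.2126 * r) + (0.7152 * g) + (0.0722 * b);
}

// Contrast ratio between two colours: 1 means no contrast, 21 is black on white.
function getContrastRatio(hexA, hexB) {
    var lighter = Math.max(luminance(hexA), luminance(hexB));
    var darker = Math.min(luminance(hexA), luminance(hexB));
    return (lighter + 0.05) / (darker + 0.05);
}

getContrastRatio('000000', 'ffffff'); // 21
getContrastRatio('777777', 'ffffff'); // roughly 4.5, the AA minimum for body text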
You now know how easy it is to guarantee that your text will be the most readable in any circumstance.", "year": "2010", "author": "Brian Suda", "author_slug": "briansuda", "published": "2010-12-24T00:00:00+00:00", "url": "https://24ways.org/2010/calculating-color-contrast/", "topic": "code"} {"rowid": 237, "title": "Circles of Confusion", "contents": "Long before I worked on the web, I specialised in training photographers how to use large format, 5\u00d74\u2033 and 10\u00d78\u2033 view cameras \u2013 film cameras with swing and tilt movements, bellows and upside down, back to front images viewed on dim, ground glass screens. It\u2019s been fifteen years since I clicked a shutter on a view camera, but some things have stayed with me from those years.\n\nIn photography, even the best lenses don\u2019t focus light onto a point (infinitely small in size) but onto \u2018spots\u2019 or circles in the \u2018film/image plane\u2019. These circles of light have dimensions, despite being microscopically small. They\u2019re known as \u2018circles of confusion\u2019.\n\nAs circles of light become larger, the more unsharp parts of a photograph appear. On the flip side, when circles are smaller, an image looks sharper and more in focus. This is the basis for photographic depth of field and with that comes the knowledge that no photograph can be perfectly focused, never truly sharp. Instead, photographs can only be \u2018acceptably unsharp\u2019. \n\nAcceptable unsharpness is now a concept that\u2019s relevant to the work we make for the web, because often \u2013 unless we compromise \u2013 websites cannot look or be experienced exactly the same across browsers, devices or platforms. Accepting that fact, and learning to look upon these natural differences as creative opportunities instead of imperfections, can be tough. Deciding which aspects of a design must remain consistent and, therefore, possibly require more time, effort or compromises can be tougher. Circles of confusion can help us, our bosses and our customers make better, more informed decisions.\n\nAcceptable unsharpness\n\nMany clients still demand that every aspect of a design should be \u2018sharp\u2019 \u2013 that every user must see rounded boxes, gradients and shadows \u2013 without regard for the implications. I believe that this stems largely from the fact that they have previously been shown designs \u2013 and asked for sign-off \u2013 using static images.\n\nIt\u2019s also true that in the past, organisations have invested heavily in style guides which, while maybe still useful in offline media, have a strictness that often fails to allow for the flexibility that we need to create experiences that are appropriate to a user\u2019s browser or device capabilities.\n\nWe live in an era where web browsers and devices have wide-ranging capabilities, and websites can rarely look or be experienced exactly the same across them. Is a particular typeface vital to a user\u2019s experience of a brand? How important are gradients or shadows? Are rounded corners really that necessary? These decisions determine how \u2018sharp\u2019 an element should be across browsers with different capabilities and, therefore, how much time, effort or extra code and images we devote to achieving consistency between them. 
To help our clients make those decisions, we can use circles of confusion.\n\nCircles of confusion\n\nUsing circles of confusion involves plotting aspects of a visual design into a series of concentric circles, starting at the centre with elements that demand the most consistency. Then, work outwards, placing elements in order of their priority so that they become progressively \u2018softer\u2019, more defocused as they\u2019re plotted into outer rings.\n\nIf layout and typography must remain consistent, place them in the centre circle as they\u2019re aspects of a design that must remain \u2018sharp\u2019.\n\nWhen gradients are important \u2013 but not vital \u2013 to a user\u2019s experience of a brand, plot them close to, but not in the centre. This makes everyone aware that to achieve consistency, you\u2019ll need to carve out extra images for browsers that don\u2019t support CSS gradients.\n\nIf achieving rounded corners or shadows in all browsers isn\u2019t important, place them into outer circles, allowing you to save time by not creating images or employing JavaScript workarounds.\n\nI\u2019ve found plotting aspects of a visual design into circles of confusion is a useful technique when explaining the natural differences between browsers to clients. It sets more realistic expectations and creates an environment for more meaningful discussions about progressive and emerging technologies. Best of all, it enables everyone to make better and more informed decisions about design implementation priorities.\n\nInvolving clients allows the implications of the decisions they make more transparent. For me, this has sometimes meant shifting deadlines or it has allowed me to more easily justify an increase in fees. Most important of all, circles of confusion have helped the people that I work with move beyond yesterday\u2019s one-size-fits-all thinking about visual design, towards accepting the rich diversity of today\u2019s web.", "year": "2010", "author": "Andy Clarke", "author_slug": "andyclarke", "published": "2010-12-23T00:00:00+00:00", "url": "https://24ways.org/2010/circles-of-confusion/", "topic": "process"} {"rowid": 238, "title": "Everything You Wanted To Know About Gradients (And a Few Things You Didn\u2019t)", "contents": "Hello. I am here to discuss CSS3 gradients. Because, let\u2019s face it, what the web really needed was more gradients.\n\nStill, despite their widespread use (or is it overuse?), the smartly applied gradient can be a valuable contributor to a designer\u2019s vocabulary. There\u2019s always been a tension between the inherently two-dimensional nature of our medium, and our desire for more intensity, more depth in our designs. And a gradient can evoke so much: the splay of light across your desk, the slow decrease in volume toward the end of your favorite song, the sunset after a long day. When properly applied, graded colors bring a much needed softness to our work.\n\nOf course, that whole \u2018proper application\u2019 thing is the tricky bit.\n\nBut given their place in our toolkit and their prominence online, it really is heartening to see we can create gradients directly with CSS. They\u2019re part of the draft images module, and implemented in two of the major rendering engines.\n\nStill, I\u2019ve always found CSS gradients to be one of the more confusing aspects of CSS3. 
So if you\u2019ll indulge me, let\u2019s take a quick look at how to create CSS gradients\u2014hopefully we can make them seem a bit more accessible, and bring a bit more art into the browser.\n\nGradient theory 101 (I hope that\u2019s not really a thing)\n\nRight. So before we dive into the code, let\u2019s cover a few basics. Every gradient, no matter how complex, shares a few common characteristics. Here\u2019s a straightforward one:\n\n I spent seconds hours designing this gradient. I hope you like it.\n\nAt either end of our image, we have a final color value, or color stop: on the left, our stop is white; on the right, black. And more color-rich gradients are no different:\n\n (Don\u2019t ever really do this. Please. I beg you.)\n\nIt\u2019s visually more intricate, sure. But at the heart of it, we have just seven color stops (red, orange, yellow, and so on), making for a fantastic gradient all the way.\n\nNow, color stops alone do not a gradient make. Between each is a transition point, the fail-over point between the two stops. Now, the transition point doesn\u2019t need to fall exactly between stops: it can be brought closer to one stop or the other, influencing the overall shape of the gradient.\n\nA tale of two syntaxes\n\nArmed with our new vocabulary, let\u2019s look at a CSS gradient in the wild. Behold, the simple input button:\n\n\n\nThere\u2019s a simple linear gradient applied vertically across the button, moving from a bright sunflowerish hue (#FAA51A, for you hex nuts in the audience) to a much richer orange (#F47A20). And here\u2019s the CSS that makes it happen:\n\ninput[type=submit] {\n\tbackground-color: #F47A20;\n\tbackground-image: -moz-linear-gradient(\n\t\t#FAA51A,\n\t\t#F47A20\n\t\t);\n\tbackground-image: -webkit-gradient(linear, 0 0, 0 100%,\n\t\tcolor-stop(0, #FAA51A),\n\t\tcolor-stop(1, #F47A20)\n\t\t);\n}\n\nI\u2019ve borrowed David DeSandro\u2019s most excellent formatting suggestions for gradients to make this snippet a bit more legible but, still, the code above might have turned your stomach a bit. And that\u2019s perfectly understandable\u2014heck, it sort of turned mine. But let\u2019s step through the CSS slowly, and see if we can\u2019t make it a little less terrifying.\n\nVerbose WebKit is verbose\n\nHere\u2019s the syntax for our little gradient on WebKit:\n\nbackground-image: -webkit-gradient(linear, 0 0, 0 100%,\n\tcolor-stop(0, #FAA51A),\n\tcolor-stop(1, #F47A20)\n\t);\n\nWoof. Quite a mouthful, no? Well, here\u2019s what we\u2019re looking at:\n\n\n\tWebKit has a single -webkit-gradient property, which can be used to create either linear or radial gradients.\n\tThe next two values are the starting and ending positions for our gradient (0 0 and 0 100%, respectively). 
Linear gradients are simply drawn along the path between those two points, which allows us to change the direction of our gradient simply by altering its start and end points.\n\tAfterward, we specify our color stops with the oh-so-aptly named color-stop parameter, which takes the stop\u2019s position on the gradient (0 being the beginning, and 100% or 1 being the end) and the color itself.\n\n\nFor a simple two-color gradient like this, -webkit-gradient has a bit of shorthand notation to offer us:\n\nbackground-image: -webkit-gradient(linear, 0 0, 0 100%,\n\tfrom(#FAA51A),\n\tto(#FAA51A)\n\t);\n\nfrom(#FAA51A) is equivalent to writing color-stop(0, #FAA51A), and to(#FAA51A) is the same as color-stop(1, #FAA51A) or color-stop(100%, #FAA51A)\u2014in both cases, we\u2019re simply declaring the first and last color stops in our gradient.\n\nTerse Gecko is terse\n\nWebKit proposed its syntax back in 2008, heavily inspired by the way gradients are drawn in the canvas specification. However, a different, leaner syntax came to the fore, eventually appearing in a draft module specification in CSS3.\n\nNaturally, because nothing on the web was meant to be easy, this is the one that Mozilla has implemented.\n\nHere\u2019s how we get gradient-y in Gecko:\n\nbackground-image: -moz-linear-gradient(\n\t#FAA51A,\n\t#F47A20\n\t);\n\nWait, what? Done already? That\u2019s right. By default, -moz-linear-gradient assumes you\u2019re trying to create a vertical gradient, starting from the top of your element and moving to the bottom. And, if that\u2019s the case, then you simply need to specify your color stops, delimited with a few commas.\n\nI know: that was almost\u2026 painless. But the W3C/Mozilla syntax also affords us a fair amount of flexibility and control, by introducing features as we need them.\n\nWe can specify an origin point for our gradient:\n\nbackground-image: -moz-linear-gradient(50% 100%,\n\t#FAA51A,\n\t#F47A20\n\t);\n\nAs well as an angle, to give it a direction:\n\nbackground-image: -moz-linear-gradient(50% 100%, 45deg,\n\t#FAA51A,\n\t#F47A20\n\t);\n\nAnd we can specify multiple stops, simply by adding to our comma-delimited list:\n\nbackground-image: -moz-linear-gradient(50% 100%, 45deg,\n\t#FAA51A,\n\t#FCC,\n\t#F47A20\n\t);\n\nBy adding a percentage after a given color value, we can determine its position along the gradient path:\n\nbackground-image: -moz-linear-gradient(50% 100%, 45deg,\n\t#FAA51A,\n\t#FCC 20%,\n\t#F47A20\n\t);\n\nSo that\u2019s some of the flexibility implicit in the W3C/Mozilla-style syntax.\n\nNow, I should note that both syntaxes have their respective fans. I will say that the W3C/Mozilla-style syntax makes much more sense to me, and lines up with how I think about creating gradients. But I can totally understand why some might prefer WebKit\u2019s more verbose approach to the, well, looseness behind the -moz syntax. \u00c0 chacun son gradient syntax.\n\nStill, as the language gets refined by the W3C, I really hope some consensus is reached by the browser vendors. 
And with Opera signaling that it will support the W3C syntax, I suppose it falls on WebKit to do the same.\n\nReusing color stops for fun and profit\n\nBut CSS gradients aren\u2019t all simple colors and shapes and whatnot: by getting inventive with individual color stops, you can create some really complex, compelling effects.\n\nTim Van Damme, whose brain, I believe, should be posthumously donated to science, has a particularly clever application of gradients on The Box, a site dedicated to his occasional podcast series. Now, there are a fair number of gradients applied throughout the UI, but it\u2019s the feature image that really catches the eye.\n\nYou see, there\u2019s nothing that says you can\u2019t reuse color stops. And Tim\u2019s exploited that perfectly.\n\nHe\u2019s created a linear gradient, angled at forty-five degrees from the top left corner of the photo, starting with a fully transparent white (rgba(255, 255, 255, 0)). At the halfway mark, he\u2019s established another color stop at an only slightly more opaque white (rgba(255, 255, 255, 0.1)), making for that incredibly gradual brightening toward the middle of the photo.\n\n\n\nBut then he has set another color stop immediately on top of it, bringing it back down to rgba(255, 255, 255, 0) again. This creates that fantastically hard edge that diagonally bisects the photo, giving the image that subtle gloss.\n\n\n\nAnd his final color stop ends at the same fully transparent white, completing the effect. Hot? I do believe so.\n\nRocking the radials\n\nWe\u2019ve been looking at linear gradients pretty exclusively. But I\u2019d be remiss if I didn\u2019t at least mention radial gradients as a viable option, including a modest one as a link accent on a navigation bar:\n\n\n\nAnd here\u2019s the relevant CSS:\n\nbackground: -moz-radial-gradient(50% 100%, farthest-side,\n\trgb(204, 255, 255) 1%,\n\trgb(85, 85, 85) 15%,\n\trgba(85, 85, 85, 0)\n\t);\nbackground: -webkit-gradient(radial, 50% 100%, 0, 50% 100%, 15,\n\tfrom(rgb(204, 255, 255)),\n\tto(rgba(85, 85, 85, 0))\n\t);\n\nNow, the syntax builds on what we\u2019ve already learned about linear gradients, so much of it might be familiar to you, picking out color stops and transition points, as well as the two syntaxes\u2019 reliance on either a separate property (-moz-radial-gradient) or parameter (-webkit-gradient(radial, \u2026)) to shift into circular mode.\n\nMozilla introduces another stand-alone property (-moz-radial-gradient), and accepts a starting point (50% 100%) from which the circle radiates. There\u2019s also a size constant defined (farthest-side), which determines the reach and shape of our gradient.\n\nWebKit is again the more verbose of the two syntaxes, requiring both starting and ending points (50% 100% in both cases). Each also accepts a radius in pixels, allowing you to control the skew and breadth of the circle.\n\nAgain, this is a fairly modest little radial gradient. Time and article length (and, let\u2019s be honest, your author\u2019s completely inadequate grasp of geometry) prevent me from covering radial gradients in much more detail, because they are incredibly powerful. For those interested in learning more, I can\u2019t recommend the references at Mozilla and Apple strongly enough.\n\nLeave no browser behind\n\nBut no matter the kind of gradients you\u2019re working with, there is a large swathe of browsers that simply don\u2019t support gradients. 
Thankfully, it\u2019s fairly easy to declare a sensible fallback\u2014it just depends on the kind of fallback you\u2019d like. Essentially, gradient-blind browsers will disregard any properties containing references to either -moz-linear-gradient, -moz-radial-gradient, or -webkit-gradient, so you simply need to keep your fallback isolated from those properties.\n\nFor example: if you\u2019d like to fall back to a flat color, simply declare a separate background-color:\n\n.nav {\n\tbackground-color: #000;\n\tbackground-image: -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));\n\tbackground-image: -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));\n}\n\nOr perhaps just create three separate background properties.\n\n.nav {\n\tbackground: #000;\n\tbackground: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));\n\tbackground: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));\n}\n\nWe can even build on this to fall back to a non-gradient image:\n\n.nav {\n\tbackground: #000 <strong>url(\"faux-gradient-lol.png\") repeat-x</strong>;\n\tbackground: #000 -moz-linear-gradient(rgba(0, 0, 0, 0), rgba(255, 255, 255, 0.45));\n\tbackground: #000 -webkit-gradient(linear, 0 0, 0 100%, from(rgba(0, 0, 0, 0)), to(rgba(255, 255, 255, 0.45)));\n}\n\nNo matter the approach you feel most appropriate to your design, it\u2019s really just a matter of keeping your fallback design quarantined from its CSS3-ified siblings.\n\n(If you\u2019re feeling especially masochistic, there\u2019s even a way to get simple linear gradients working in IE via Microsoft\u2019s proprietary filters. Of course, those come with considerable performance penalties that even Microsoft is quick to point out, so I\u2019d recommend avoiding those.\n\nAnd don\u2019t tell Andy Clarke I told you, or he\u2019ll probably unload his Derringer at me. Or something.)\n\nGo forth and, um, gradientify!\n\nIt\u2019s entirely possible your head\u2019s spinning. Heck, mine is, but that might be the effects of the \u2019nog. But maybe you\u2019re wondering why you should care about CSS gradients. After all, images are here right now, and work just fine. \n\nWell, there are some quick benefits that spring to mind: fewer HTTP requests are needed; CSS3 gradients are easily made scalable, making them ideal for variable widths and heights; and finally, they\u2019re easily modifiable by tweaking a few CSS properties. Because, let\u2019s face it, less time spent yelling at Photoshop is a very, very good thing.\n\nOf course, CSS-generated gradients are not without their drawbacks. The syntax can be confusing, and it\u2019s still under development at the W3C. As we\u2019ve seen, browser support is still very much in flux. And it\u2019s possible that gradients themselves have some real performance drawbacks\u2014so test thoroughly, and gradient carefully.\n\nBut still, as syntaxes converge, and support improves, I think generated gradients can make a compelling tool in our collective belts. 
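And if you would rather know up front whether a browser understands either syntax, instead of leaning on the cascade alone, a small script test is one option. This is only a sketch of the kind of check feature-detection libraries perform against the two prefixed syntaxes discussed above; it is not something the gradient proposals themselves provide.

// Apply a test value and see whether the browser keeps it.
function supportsGradient(value) {
    var el = document.createElement('div');
    el.style.cssText = 'background-image:' + value + ';';
    return el.style.backgroundImage.indexOf('gradient') > -1;
}

var hasGradients =
    supportsGradient('-moz-linear-gradient(#000, #fff)') ||
    supportsGradient('-webkit-gradient(linear, 0 0, 0 100%, from(#000), to(#fff))');

// The result could drive a class on the root element,
// so any fallback styling can be scoped to it.
if (!hasGradients) {
    document.documentElement.className += ' no-gradients';
}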
The tasteful design is, of course, entirely up to you.\n\nSo have fun, and get gradientin\u2019.", "year": "2010", "author": "Ethan Marcotte", "author_slug": "ethanmarcotte", "published": "2010-12-22T00:00:00+00:00", "url": "https://24ways.org/2010/everything-you-wanted-to-know-about-gradients/", "topic": "code"} {"rowid": 227, "title": "A Contentmas Epiphany", "contents": "The twelve days of Christmas fall between 25 December, Christmas Day, and 6 January, the Epiphany of the Kings. Traditionally, these have been holidays and a lot of us still take a good proportion of these days off. Equally, a lot of us have a got a personal site kicking around somewhere that we sigh over and think, \u201cOne day I\u2019ll sort you out!\u201d Why not take this downtime to give it a big ol\u2019 refresh? I know, good idea, huh?\n\nHEY WAIT! WOAH! NO-ONE\u2019S TOUCHING PHOTOSHOP OR DOING ANY CSS FANCYWORK UNTIL I\u2019M DONE WITH YOU!\n\nBe honest, did you immediately think of a sketch or mockup you have tucked away? Or some clever little piece of code you want to fiddle with? Now ask yourself, why would you start designing the container if you haven\u2019t worked out what you need to put inside?\n\nAnyway, forget the content strategy lecture; I haven\u2019t given you your gifts yet.\nI present The Twelve Days of Contentmas!\n\nThis is a simple little plan to make sure that your personal site, blog or portfolio is not just looking good at the end of these twelve days, but is also a really useful repository of really useful content.\n\nWARNING KLAXON: There are twelve parts, one for each day of Christmas, so this is a lengthy article. I\u2019m not expecting anyone to absorb this in one go. Add to Instapaper. There is no TL;DR for this because it\u2019s a multipart process, m\u2019kay? Even so, this plan of mine cuts corners on a proper applied strategy for content. You might find some aspects take longer than the arbitrary day I\u2019ve assigned. And if you apply this to your company-wide intranet, I won\u2019t be held responsible for the mess.\n\nThat said, I encourage you to play along and sample some of the practical aspects of organising existing content and planning new content because it is, honestly, an inspiring and liberating process. For one thing, you get to review all the stuff you have put out for the world to look at and see what you could do next. This always leaves me full of ideas on how to plug the gaps I\u2019ve found, so I hope you are similarly motivated come day twelve.\n\nLet\u2019s get to it then, shall we?\n\nOn the first day of Contentmas, Relly gave to me:\n\n1. A (partial) content inventory\n\nI\u2019m afraid being a site owner isn\u2019t without its chores. With great power comes great responsibility and all that. There are the domain renewing, hosting helpline calls and, of course, keeping on top of all the content that you have published.\n\nIf you just frowned a little and thought, \u201cWell, there\u2019s articles and images and\u2026 stuff\u201d, then I\u2019d like to introduce you to the idea of a content inventory. \n\nA content inventory is a list of all your content, in a simple spreadsheet, that allows you to see at a glance what is currently on your site: articles; about me page; contact form, and so on.\n\nYou add the full URL so that you can click directly to any page listed. You add a brief description of what it is and what tags it has. In fact, I\u2019ll show you. I\u2019ve made a Google Docs template for you. 
Sorry, it isn\u2019t wrapped.\n\nDoes it seem like a mammoth task? Don\u2019t feel you have to do this all in one day. But do do it. For one thing, looking back at all the stuff you\u2019ve pushed out into the world gives you a warm fuzzy feeling which keeps the heating bill down.\n\nGrab a glass of mulled cider and try going month-by-month through your blog archives, or project-by-project through your portfolio. Do a little bit each day for the next twelve days and you\u2019ll have done something awesome. The best bit is that this exploration of your current content helps you with the next day\u2019s task.\n\nBonus gift: for more on content auditing and inventory, check out Jeff Veen\u2019s article on just this topic, which is also suitable for bigger business sites too.\n\nOn the second day of Contentmas, Relly gave to me:\n\n2. Website loves\n\nRemember when you were a kid, you\u2019d write to Santa with a wish list that would make your parents squirm, because your biggest hope for your stocking would be either impossible or impossibly expensive. Do you ever get the same thing now as a grown-up where you think, \u201cWouldn\u2019t it be great if I could make a video blog every week\u201d, or \u201cI could podcast once a month about this\u201d, and then you push it to the back of your mind, assuming that you won\u2019t have time or you wouldn\u2019t know what to talk about anyway?\n\nTrue fact: content doesn\u2019t just have to be produced when we are so incensed that we absolutely must blog about a topic. Neither does it have to be a drain to a demanding schedule. You can plan for it. In fact, you\u2019re about to.\n\nSo, today, get a pen and a notebook. Move away from your computer. My gift to you is to grab a quiet ten minutes between turkey sandwiches and relatives visiting and give your site some of the attention it deserves for 2011.\n\nWhat would you do with your site if you could? I don\u2019t mean what would you do purely visually \u2013 although by all means note those things down too \u2013 but to your site as a whole. Here are some jumping off points:\n\n\n\tWould you like to individually illustrate and design some of your articles?\n\tWhat about a monthly exploration of your favourite topic through video or audio?\n\tWho would you like to collaborate with?\n\tWhat do you want your site to be like for a user?\n\tWhat tone of voice would you like to use?\n\tHow could you use imagery and typography to support your content?\n\tWhat would you like to create content about in the new year?\n\n\nIt\u2019s okay if you can\u2019t do these things yet. It\u2019s okay to scrub out anything where you think, \u201cNah, never gonna happen.\u201d But do give some thought to what you might want to do next. The best inspiration for this comes from what you\u2019ve already done, so keep on with that inventory.\n\nBonus gift: a Think Vitamin article on podcasting using Skype, so you can rope in a few friends to join in, too.\n\nOn the third day of Contentmas, Relly gave to me:\n\n3. Red pens\n\nShock news, just in: the web is not print!\n\nOne of the hardest things as a writer is to reach the point where you say, \u201cYeah, okay, that\u2019s it. I\u2019m done\u201d and send off your beloved manuscript or article to print. I\u2019m convinced that if deadlines didn\u2019t exist, nothing would get finished. Why? Well, at the point you hand it over to the publishing presses, you can make no more changes. At best, you can print an erratum or produce an updated second edition at a later date. 
And writers love to \u2013 no, they live to \u2013 tweak their creations, so handing them over is quite a struggle. Just one more comma and\u2026\n\nOnline, we have no such constraints. We can edit, correct, test, tweak, twiddle until we\u2019re blooming sick of it. Our red pens never run out of ink. It is time for you to run a more critical eye over your content, especially the stuff already published. Relish in the opportunity to change stuff on the fly. I am not so concerned by blog articles and such (although feel free to apply this concept to those, too), but mainly by your more concrete content: about pages; contact pages; home page navigation; portfolio pages; 404 pages.\n\nNow, don\u2019t go running amok with the cut function yet. First, put all these evergreen pages into your inventory. In the notes section, write a quick analysis of how useful this copy is. Example questions:\n\n\n\tIs your contact page up-to-date?\n\tDoes your about page link to the right places?\n\tIs your portfolio current?\n\tDoes your 404 page give people a way to find what they were looking for?\n\n\nWe\u2019ll come back to this in a few days once we have a clearer idea of how to improve our content.\n\nBonus gift: the audio and slides of a talk I gave on microcopy and 404 pages at @media WebDirections last year.\n\nOn the fourth day of Contentmas, Relly gave to me:\n\n4. Stalling nerds\n\nActually, I guess more accurately this is something I get given a lot. Designers and developers particularly can find a million ways to extract themselves from the content of a site but, as the site owner, and this being your personal playground and all, you mustn\u2019t. You actually can\u2019t, sorry. \n\nBut I do understand that at this point, \u2018sorting out your site\u2019 suddenly seems a lot less exciting, especially if you are a visually-minded person and words and lists aren\u2019t really your thing. So far, there has been a lot of not-very-exciting exercises in planning, and there\u2019s probably a nice pile of DVDs and video games that you got from Santa worth investigating. \n\nStay strong my friend. By now, you have probably hit upon an idea of some sort you are itching to start on, so for every half-hour you spend doing inventory, gift yourself another thirty minutes to play with that idea.\n\nBonus gift: the Pomodoro Technique. Take one kitchen timer and a to-do list and see how far you can go.\n\nOn the fifth day of Contentmas, Relly gave to me:\n\n5. Golden rules\n\nHere are some guidelines for writing online:\n\n\n\tMake headlines for tutorials and similar content useful and descriptive; use a subheading for any terrible pun you want to work in.\n\n\n\n\tCreate a broad opening paragraph that addresses what your article is about. Part of the creative skill in writing is to do this in a way that both informs the reader and captures their attention. If you struggle with this, consider a boxout giving a summary of the article.\n\n\n\n\tUse headings to break up chunks of text and allow people to scan. Most people will have a scoot about an article before starting at the beginning to give it a proper read. These headings should be equal parts informative and enticing. Try them out as questions that might be posed by the reader too.\n\n\n\n\tFinish articles by asking your reader to take an affirmative action: subscribe to your RSS feed; leave a comment (if comments are your thing \u2013 more on that later); follow you on Twitter; link you to somewhere they have used your tutorial or code. 
The web is about getting excited, making things and sharing with others, so give your readers the chance to do that.\n\n\n\n\tFor portfolio sites, this call to action is extra important as you want to pick up new business. Encourage people to e-mail you or call you \u2013 don\u2019t just rely on a number in the footer or an e-mail link at the top. Think up some consistent calls-to-action you can use and test them out.\n\n\nSo, my gift to you today is a simplified page table for planning out your content to make it as useful as possible.\n\nFeel free to write a new article or tutorial, or work on that great idea from yesterday and try out these guidelines for yourself. \n\nIt\u2019s a simple framework \u2013 good headline; broad opening; headings to break up volume; strong call to action \u2013 but it will help you recognise if what you\u2019ve written is in good shape to face the world. It doesn\u2019t tell you anything about how to create it \u2013 that\u2019s your endeavour \u2013 but it does give you a start. No more staring at a blank page.\n\nBonus gift: okay, you have to buy yourself this one, but it is the gift that keeps on giving: Ginny Reddish\u2019s Letting Go of the Words \u2013 the hands down best guide to web writing there is, with a ton of illustrative examples.\n\nOn the sixth day of Contentmas, Relly gave to me:\n\n6. Foundation-a-laying\n\nYesterday, we played with a page table for articles. Today, we are going to set the foundations for your new, spangly, spruced up, relaunched site (for when you\u2019re ready, of course). We\u2019ve checked out what we\u2019ve got, we\u2019ve thought about what we\u2019d like, we have a wish list for the future. Now is the time for a small reality check. \n\nBe realistic with yourself. Can you really give your site some attention every day? Record a short snippet of audio once a week? A photo diary post once a month? Look back at the wish list you made.\n\n\n\tWhat can you do?\n\tWhat can you aim for?\n\tWhat just isn\u2019t possible right now?\n\n\nAs much as we\u2019d all love to be producing a slick video podcast and screencast three times a week, it\u2019s better to set realistic expectations and work your way up.\n\nWhere does your site sit in your online world?\n\n\n\tDo you want it to be the hub of all your social interactions, a lifestream, a considered place of publication or a free for all?\n\tDo you want to have comments (do you have the personal resource to monitor comments?) or would you prefer conversation to happen via Twitter, Facebook or not at all?\n\tDoes this apply to all pages, posts and content types or just some?\n\tGet these things straight in your head and it\u2019s easier to know what sort of environment you want to create and what content you\u2019ll need to sustain it.\n\n\nGet your notebook again and think about specific topics you\u2019d like to cover, or aspects of a project you want to go into more, and how you can go ahead and do just that. A good motivator is to think what you\u2019ll get out of doing it, even if that is \u201cAnd I\u2019ll finally show the poxy $whatever_community that my $chosen_format is better than their $other_format.\u201d\n\nWhat topics have you really wanted to get off your chest? Look through your inventory again. What gaps are there in your content just begging to be filled?\n\nToday, you\u2019re going to give everyone the gift of your opinion. Find one of those things where someone on the internet is wrong and create a short but snappy piece to set them straight. 
Doesn\u2019t that feel good? Soon you\u2019ll be able to do this in a timely manner every time someone is wrong on the internet!\n\nBonus gift: we\u2019re halfway through, so I think something fun is in order. How about a man sledding naked down a hill in Brighton on a tea tray? Sometimes, even with a whole ton of content planning, it\u2019s the spontaneous stuff that is still the most fun to share.\n\nOn the seventh day of Contentmas, Relly gave to me:\n\n7. Styles-a-guiding\n\nNot colour style guides or brand style guides or code style guides. Content style guides. You could go completely to town and write yourself a full document defining every aspect of your site\u2019s voice and personality, plus declaring your view on contracted phrases and the Oxford comma, but this does seem a tad excessive. Unless you\u2019re writing an entire site as a fictional character, you probably know your own voice and vocabulary better than anyone. It\u2019s in your head, after all.\n\nInstead, equip yourself with a good global style guide (I like the Chicago Manual of Style because I can access it fully online, but the Associated Press (AP) Stylebook has a nifty iPhone app and, if I\u2019m entirely honest, I\u2019ve found a copy of Eats, Shoots and Leaves has set me right on all but the most technical aspects of punctuation). Next, pick a good dictionary and bookmark thesaurus.com. Then have a go at Kristina Halvorson\u2019s \u2018Voice and Tone\u2019 exercise from her book Content Strategy for the Web, to nail down what you\u2019d like your future content to be like:\n\nTo introduce the voice and tone qualities you\u2019re [looking to create], a good approach is to offer contrasting values. For example:\n\n\n\tProfessional, not academic.\n\tConfident, not arrogant.\n\tClever, not cutesy.\n\tSavvy, not hipster.\n\tExpert, not preachy.\n\n\n\nTake a look around some of your favourite sites and examine the writing and stylistic handling of content. What do you like? What do you want to emulate? What matches your values list?\n\nToday\u2019s gift to you is an idea. Create a \u2018swipe file\u2019 through Evernote or Delicious and save all the stuff you come across that, regardless of topic, makes you think, \u201cThat\u2019s really cool.\u201d This isn\u2019t the same as an Instapaper list you\u2019d like to read. This is stuff you have read or have seen that is worth looking at in closer detail.\n\n\n\tWhy is it so good?\n\tWhat is the language and style like?\n\tWhat impact does the typography have?\n\tHow does the imagery work to enhance the message?\n\n\nThis isn\u2019t about creating a personal brand or any such piffle. It\u2019s about learning to recognise how good content works and how to create something awesome yourself. Obviously, your ideas are brilliant, so take the time to understand how best to spring them on the unsuspecting public for easier world domination.\n\nBonus gift: a nifty style guide is a must when you do have to share content creation duties with others. Here is Leeds University\u2019s publicly available PDF version for you to take a gander at. I especially like the Rationale sections for chopping off dissenters at the knees. \n\nOn the eighth day of Contentmas, Relly gave to me:\n\n8. Times-a-making\n\nYou have an actual, real plan for what you\u2019d like to do with your site and how it is going to sound (and probably some ideas on how it\u2019s going to look, too). I hope you are full of enthusiasm and Getting Excited To Make Things. 
Just before we get going and do exactly that, we are going to make sure we have made time for this creative outpouring.\n\nHave you tried to blog once a week before and found yourself losing traction after a month or two? Are there a couple of podcasts lurking neglected in your archives? Whereas half of the act of running is showing up for training, half of creating is making time rather than waiting for it to become urgent. It\u2019s okay to write something and set a date to come back to it (which isn\u2019t the same as leaving it to decompose in your drafts folder).\n\nPutting a date in your calendar to do something for your site means that you have a forewarning to think of a topic to write about, and space in your schedule to actually do it. Crucially, you\u2019ve actually made some time for this content lark.\n\nTo do this, you need to think about how long it takes to get something out of the door/shipped/published/whatever you want to call it. It might take you just thirty minutes to record a podcast, but also a further hour to research the topic beforehand and another hour to edit and upload the clips. Suddenly, doing a thirty minute podcast every day seems a bit unlikely. But, on the flipside, it is easy to see how you could schedule that in three chunks weekly. \n\nPut it in your calendar. Do it, publish it, book yourself in for the next week. Keep turning up.\n\nToday my gift to you is the gift of time. Set up your own small content calendar, using your favourite calendar system, and schedule time to play with new ways of creating content, time to get it finished and time to get it on your site. Don\u2019t let good stuff go to your drafts folder to die of neglect.\n\nBonus gift: lots of writers swear by the concept of \u2018daily pages\u2019. That is, churning out whatever is in your head to see if there is anything worth building upon, or just to lose the grocery list getting in the way. 750words.com is a site built around this concept. Go have a play.\n\nOn the ninth day of Contentmas, Relly gave to me:\n\n9. Copy enhancing\n\nAn incredibly radical idea for day number nine. We are going to look at that list of permanent pages you made back on day three and rewrite the words first, before even looking at a colour palette or picking a font! Crazy as it sounds, doing it this way round could influence your design. It could shape the imagery you use. It could affect your choice of typography. IMAGINE THE POSSIBILITIES!\n\nLook at the page table from day five. Print out one for each of your homepage, about page, contact page, portfolio, archive, 404 page or whatever else you have. Use these as a place to brainstorm your ideas and what you\u2019d like each page to do for your site. Doodle in the margin, choose words you think sound fun to say, daydream about pictures you\u2019d like to use and colours you think would work, but absolutely, completely and utterly fill in those page tables to understand how much (or how little) content you\u2019re playing with and what you need to do to get to \u2018launch\u2019.\n\nThen, use them for guidance as you start to write. Don\u2019t skimp. Don\u2019t think that a fancy icon of an envelope encourages people to e-mail you. Use your words.\n\nPeople get antsy at this bit. Writing can be hard work and it\u2019s easy for me to say, \u201cGo on and write it then!\u201d I know this. I mean, you should see the faces I pull when I have to do anything related to coding. 
The closest equivalent would be when scientists have to stick their hands in big gloves attached to a glass box to do dangerous experiments.\n\nHere\u2019s today\u2019s gift, a little something about writing that I hope brings you comfort: \n\n\n\tTo write something fantastic you almost always have to write a rubbish draft first.\n\n\nNow, you might get lucky and write a \u2018good enough\u2019 draft first time and that\u2019s fab \u2013 you\u2019ve cut some time getting to \u2018fantastic\u2019. If, however, you\u2019ve always looked at your first attempt to write more than the bare minimum and sighed in despair, and resigned yourself to adding just a title, date and a screenshot, be cheered because you have taken the first step to being able to communicate with clarity, wit and panache.\n\nKeep going. Look at writing you admire and emulate it. Think about how you will lovingly design those words when they are done. Know that you can go back and change them. Check back with your page table to keep you on track. Do that first draft.\n\nBonus gift: becoming a better writer helps you to explain design concepts to clients.\n\nOn the tenth day of Contentmas, Relly gave to me:\n\n10. Ideas for keeping\n\nHurrah! You have something down on paper, ready to start evolving your site around it. Here\u2019s where the words and visuals and interaction start to come together. Because you have a plan, you can think ahead and do things you wouldn\u2019t be able to pull together otherwise.\n\n\n\tHow about finding a fresh-faced stellar illustrator on Dribbble to create you something perfect to pep up your contact page or visualize your witty statement on statements of work. A List Apart has been doing it for years and it hasn\u2019t worked out too badly for them, has it?\n\n\n\n\tWhat about spending this month creating a series of introductory tutorials on a topic, complete with screencasts and audio and give them a special home on your site?\n\n\n\n\tHow about putting in some hours creating a glorious about me page, with a biography, nice picture, and where you spend your time online?\n\n\n\n\tYou could even do the web equivalent of getting up in the attic and sorting out your site\u2019s search to make it easier to find things in your archives. Maybe even do some manual recommendations for relevant content and add them as calls to action.\n\n\n\n\tHow about writing a few awesome case studies with individual screenshots of your favourite work, and creating a portfolio that plays to your strengths? Don\u2019t just rely on the pretty pictures; use your words. Otherwise no-one understands why things are the way they are on that screenshot and BAM! you\u2019ll be judged on someone else\u2019s tastes. (Elliot has a head start on you for this, so get to it!)\n\n\n\n\tDo you have a serious archive of content? What\u2019s it like being a first-time visitor to your site? Could you write them a guide to introduce yourself and some of the most popular stuff on your site? Ali Edwards is a massively popular crafter and every day she gets new visitors who have found her multiple papercraft projects on Flickr, Vimeo and elsewhere, so she created a welcome guide just for them.\n\n\n\n\tWhat about your microcopy? Can you improve on your blogging platform\u2019s defaults for search, comment submission and labels? I\u2019ll bet you can.\n\n\n\n\tMaybe you could plan a collaboration with other like-minded souls. A week of posts about the more advanced wonders of HTML5 video. 
A month-long baton-passing exercise in extolling the virtues of IE (shut up, it could happen!). Just spare me any more online advent calendars.\n\n\n\n\tWatch David McCandless\u2019s TED talk on his jawdropping infographic work and make something as awesome as the Billion Dollar O Gram. I dare you.\n\n\nBonus gift: Grab a copy of Brian Suda\u2019s Designing with Data, in print or PDF if Santa didn\u2019t put one in your stocking, and make that awesome something with some expert guidance.\n\nOn the eleventh day of Contentmas, Relly gave to me:\n\n11. Pixels pushing\n\nOh, go on then. Make a gorgeous bespoke velvet-lined container for all that lovely content. It\u2019s proper informed design now, not just decoration. Mr. Zeldman says so.\n\nBonus gift: I made you a movie! If books were designed like websites.\n\nOn the twelfth day of Contentmas, Relly gave to me:\n\n12. Delighters delighting\n\nThe Epiphany is upon us; your site is now well on its way to being a beautiful, sustainable hub of content and you have a date in your calendar to help you keep that resolution of blogging more. What now?\n\n\n\tKeep on top of your inventory. One day it will save your butt, I promise.\n\tKeep making a little bit of time regularly to create something new: an article; an opinion piece; a small curation of related links; a photo diary; a new case study. That\u2019s easier than an annual content bootcamp for sure.\n\tAnd today\u2019s gift: look for ways to play with that content and make something a bit special. Stretch yourself a little. It\u2019ll be worth it.\n\n\nBonus gift: Paul Annett\u2019s presentation on Ooh, that\u2019s clever: Delighters in design from SxSW 09.\n\nAll my favourite designers and developers have their own unique styles and touches. It\u2019s what sets them apart. My very, very favourites have an eloquence and expression that they bring to their sites and to their projects. I absolutely love to explore a well-crafted, well-written site \u2013 don\u2019t we all? I know the time it takes. I appreciate the time it takes. But the end results are delicious. Do please share your spangly, refreshed sites with me in the comments.\n\nCatch me on Twitter, I\u2019m @RellyAB, and I\u2019ve been your host for these Twelve Days of Contentmas.", "year": "2010", "author": "Relly Annett-Baker", "author_slug": "rellyannettbaker", "published": "2010-12-21T00:00:00+00:00", "url": "https://24ways.org/2010/a-contentmas-epiphany/", "topic": "content"} {"rowid": 218, "title": "Put Yourself in a Corner", "contents": "Some backstory, and a shameful confession\n\nFor the first couple years of high school I was one of those jerks who made only the minimal required effort in school. Strangely enough, how badly I behaved in a class was always in direct proportion to how skilled I was in the subject matter. In the subjects where I was confident that I could pass without trying too hard, I would give myself added freedom to goof off in class.\n\nBecause I was a closeted lit-nerd, I was most skilled in English class. I\u2019d devour and annotate required reading over the weekend, I knew my biblical and mythological allusions up and down, and I could give you a postmodern interpretation of a text like nobody\u2019s business. But in class, I\u2019d sit in the back and gossip with my friends, nap, or scribble patterns in the margins of my textbooks. I was nonchalant during discussion, I pretended not to listen during lectures. I secretly knew my stuff, so I did well enough on tests, quizzes, and essays. 
But I acted like an ass, and wasn\u2019t getting the most I could out of my education.\n\nThe day of humiliation, but also epiphany\n\nOne day in Ms. Kaney\u2019s AP English Lit class, I was sitting in the back doodling. An earbud was dangling under my sweater hood, attached to the CD player (remember those?) sitting in my desk. Because of this auditory distraction, the first time Ms. Kaney called my name, I barely noticed. I definitely heard her the second time, when she didn\u2019t call my name so much as roar it. I can still remember her five feet frame stomping across the room and grabbing an empty desk. It screamed across the worn tile as she slammed it next to hers. She said, \u201cThis is where you sit now.\u201d My face gets hot just thinking about it.\n\nI gathered my things, including the CD player (which was now impossible to conceal), and made my way up to the newly appointed Seat of Shame. There I sat, with my back to the class, eye-to-eye with Ms. Kaney. From my new vantage point I couldn\u2019t see my friends, or the clock, or the window. All I saw were Ms. Kaney\u2019s eyes, peering at me over her reading glasses while I worked. In addition to this punishment, I was told that from now on, not only would I participate in class discussions, but I would serve detention with her once a week until an undetermined point in the future.\n\nDuring these detentions, Ms. Kaney would give me new books to read, outside the curriculum, and added on to my normal homework. They ranged from classics to modern novels, and she read over my notes on each book. We\u2019d discuss them at length after class, and I grew to value not only our private discussions, but the ones in class as well. After a few weeks, there wasn\u2019t even a question of this being punishment. It was heaven, and I was more productive than ever.\n\nTo the point\n\nPlease excuse this sentimental story. It\u2019s not just about honoring a teacher who cared enough to change my life, it\u2019s really about sharing a lesson. The most valuable education Ms. Kaney gave me had nothing to do with literature. She taught me that I (and perhaps other people who share my special brand of crazy) need to be put in a corner to flourish. When we have physical and mental constraints applied, we accomplish our best work.\n\nFor those of you still reading, now seems like a good time to insert a pre-emptive word of mediation. Many of you, maybe all of you, are self-disciplined enough that you don\u2019t require the rigorous restrictions I use to maximize productivity. Also, I know many people who operate best in a stimulating and open environment. I would advise everyone to seek and execute techniques that work best for them. But, for those of you who share my inclination towards daydreams and digressions, perhaps you\u2019ll find something useful in the advice to follow.\n\nIn which I pretend to be Special Agent Olivia Dunham\n\nNow that I\u2019m an adult, and no longer have Ms. Kaney to reign me in, I have to find ways to put myself in the corner. By rejecting distraction and shaping an environment designed for intense focus, I\u2019m able to achieve improved productivity.\n\nLately I\u2019ve been obsessed with the TV show Fringe, a sci-fi series about an FBI agent and her team of genius scientists who save the world (no, YOU\u2019RE a nerd). There\u2019s a scene in the show where the primary character has to delve into her subconscious to do extraordinary things, and she accomplishes this by immersing herself in a sensory deprivation tank. 
The premise is this: when enclosed in a space devoid of sound, smell, or light, she will enter a new plane of consciousness wherein she can tap into new levels of perception.\n\nThis might sound a little nuts, but to me this premise has some real-world application. When I am isolated from distraction, and limited to only the task at hand, I\u2019m able to be productive on a whole new level. Since I can\u2019t actually work in an airtight iron enclosure devoid of input, I find practical ways to create an interruption-free environment.\n\nSince I work from home, many of my methods for coping with distractions wouldn\u2019t be necessary for my office-bound counterpart. However for some of you 9-to-5-ers, the principles will still apply.\n\nConsider your visual input\n\nFirst, I have to limit my scope to the world I can (and need to) affect. In the largest sense, this means closing my curtains to the chaotic scene of traffic, birds, the post office, a convenience store, and generally lovely weather that waits outside my window. When the curtains are drawn and I\u2019m no longer surrounded by this view, my sphere is reduced to my desk, my TV, and my cat. Sometimes this step alone is enough to allow me to focus. \n\nBut, my visual input can be whittled down further still. For example, the desk where I usually keep my laptop is littered with twelve owl figurines, a globe, four books, a three-pound weight, and various nerdy paraphernalia (hard drives, Wacom tablets, unnecessary bluetooth accessories, and so on). It\u2019s not so much a desk as a dumping ground for wacky flea market finds and impulse technology buys. Therefore, in addition to this Official Desk, I have an adult version of Ms. Kaney\u2019s Seat of Shame. It\u2019s a rusty old student\u2019s desk I picked up at the Salvation Army, almost an exact replica of the model Ms. Kaney dragged across the classroom all those years ago. This tiny reproduction Seat of Shame is literally in a corner, where my only view is a blank wall. When I truly need to focus, this is where I take refuge, with only a notebook and a pencil (and occasionally an iPad).\n\nFind out what works for your ears\n\nEven from my limited sample size of two people, I know there are lots of different ways to cope with auditory distraction. I prefer silence when focused on independent work, and usually employ some form of a white noise generator. I\u2019ve yet to opt for the fancy \u2018real\u2019 white noise machines; instead, I use a desktop fan or our allergy filter machine. This is usually sufficient to block out the sounds of the dishwasher and the cat, which allows me to think only about the task of hand.\n\nMy boyfriend, the other half of my extensive survey, swears by another method. He calls it The Wall of Sound, and it\u2019s basically an intense blast of raucous music streamed directly into his head. The outcome of his technique is really the same as mine; he\u2019s blocking out unexpected auditory input. If you can handle the grating sounds of noisy music while working, I suggest you give The Wall of Sound a try.\n\nDon\u2019t count the minutes\n\nWhen I sat in the original Seat of Shame in lit class, I could no longer see the big classroom clock slowly ticking away the seconds until lunch. Without the marker of time, the class period often flew by. The same is true now when I work; the less aware of time I am, the less it feels like time is passing too quickly or slowly, and the more I can focus on the task (not how long it takes). 
\n\nNowadays, to assist in my effort to forget the passing of time, I sometimes put a sticky note over the clock on my monitor. If I\u2019m writing, I\u2019ll use an app like WriteRoom, which blocks out everything but a simple text editor. \n\nThere are situations when it\u2019s not advisable to completely lose track of time. If I\u2019m working on a project with an hourly rate and a tight scope, or if I need to be on time to a meeting or call, I don\u2019t want to lose myself in the expanse of the day. In these cases, I\u2019ll set an alarm that lets me know it\u2019s time to reign myself back in (or on some days, take a shower).\n\nPut yourself in a mental corner, too\n\nWhen Ms. Kaney took action and forced me to step up my game, she had the insight to not just change things physically, but to challenge me mentally as well. She assigned me reading material outside the normal coursework, then upped the pressure by requiring detailed reports of the material. While this additional stress was sometimes uncomfortable, it pushed me to work harder than I would have had there been less of a demand. Just as there can be freedom in the limitations of a distraction-free environment, I\u2019d argue there is liberty in added mental constraints as well.\n\nDeadlines as a constraint\n\nMuch has been written about the role of deadlines in the creative process, and they seem to serve different functions in different cases. I find that deadlines usually act as an important constraint and, without them, it would be nearly impossible for me to ever consider a project finished. There are usually limitless ways to improve upon the work I do and, if there\u2019s no imperative for me to be done at a certain point, I will revise ad infinitum. (Hence, the personal site redesign that will never end \u2013 Coming Soon, Forever!). But if I have a clear deadline in mind, there\u2019s a point when the obsessive tweaking has to stop. I reach a stage where I have to gather up the nerve to launch the thing.\n\nPutting the pro in procrastination\n\nSometimes I\u2019ve found that my tendency to procrastinate can help my productivity. (Ducks, as half the internet throws things at her.) I understand the reasons why procrastination can be harmful, and why it\u2019s usually a good idea to work diligently and evenly towards a goal. I try to divide my projects up in a practical way, and sometimes I even pull it off. But for those tasks where you work aimlessly and no focus comes, or you find that every other to-do item is more appealing, sometimes you\u2019re forced to bring it together at the last moment. And sometimes, this environment of stress is a formula for magic. Often when I\u2019m down to the wire and have no choice but to produce, my mind shifts towards a new level of clarity. There\u2019s no time to endlessly browse for inspiration, or experiment with convoluted solutions that lead nowhere.\n\nObviously a life lived perpetually on the edge of a deadline would be a rather stressful one, so it\u2019s not a state of being I\u2019d advocate for everyone, all the time. But every now and then, the work done when I\u2019m down to the wire is my best.\n\nKeep one toe outside your comfort zone\n\nWhen I\u2019m choosing new projects to take on, I often seek out work that involves an element of challenge. Whether it\u2019s a design problem that will require some creative thinking, or a coding project that lends itself to using new technology like HTML5, I find a manageable level of difficulty to be an added bonus. 
The tension that comes from learning a new skill or rethinking an old standby is a useful constraint, as it keeps the work interesting, and ensures that I continue learning.\n\nThere you have it\n\nWell, I think I\u2019ve spilled most of my crazy secrets for forcing my easily distracted brain to focus. As with everything we web workers do, there are an infinite number of ways to encourage productivity. I hope you\u2019ve found a few of these to be helpful, and please share your personal techniques in the comments. Have a happy and productive new year!", "year": "2010", "author": "Meagan Fisher", "author_slug": "meaganfisher", "published": "2010-12-20T00:00:00+00:00", "url": "https://24ways.org/2010/put-yourself-in-a-corner/", "topic": "process"} {"rowid": 229, "title": "Sketching to Communicate", "contents": "As a web designer I\u2019ve always felt that I\u2019d somehow cheated the system, having been absent on the day God handed out the ability to draw. I didn\u2019t study fine art, I don\u2019t have a natural talent to effortlessly knock out a realistic bowl of fruit beside a water jug, and yet somehow I\u2019ve still managed to blag my way this far. I\u2019m sure many of you may feel the same.\n\nI had no intention of becoming an artist, but to have enough skill to convey an idea in a drawing would be useful. Instead, my inadequate instrument would doodle drunkenly across the page leaving a web of unintelligible paths instead of the refined illustration I\u2019d seen in my mind\u2019s eye. This \u2013 and the natural scrawl of my handwriting \u2013 is fine (if somewhat frustrating) when it\u2019s for my eyes only but, when sketching to communicate a concept to a client, such amateur art would be offered with a sense of embarrassment. So when I had the opportunity to take part in some sketching classes whilst at Clearleft I jumped at the chance.\n\nWhy sketch?\n\nIn UX workshops early on in a project\u2019s life, sketching is a useful and efficient way to convey and record ideas. It\u2019s disposable and inexpensive, but needn\u2019t look amateur. A picture may be worth a thousand words, but a well executed sketch of how you\u2019ll combine funny YouTube videos with elephants to make Lolephants.com could be worth millions in venture capital. Actually, that\u2019s not bad\u2026 ;-)\n\nAlthough (as you will see) the basics of sketching are easy to master, the kudos you will receive from clients for being a \u2018proper designer\u2019 makes it worthwhile!\n\nWhere to begin?\n\nStart by not buying yourself a sketch pad. If you were the type of child who ripped the first page out of a school exercise book and started again if you made even a tiny mistake (you\u2019re not alone!), Wreck This Journal may offer a helping hand. Practicing on plain A4 paper instead of any \u2018special\u2019 notepad will make the process a whole lot easier, no matter how deliciously edible those Moleskines look.\n\nDo buy yourself a black fine-liner pen and a set of grey Pro Markers for shading. These pens are unlike any you will have used before, and look like blended watercolours once the ink is dry. 
Although multiple strokes won\u2019t create unsightly blotches of heavy ink on the page, they will go right through your top sheet so always remember to keep a rough sheet in the second position as an ink blotter.\n\n photo by Tom Harrison\n\nDon\u2019t buy pencils to sketch with, as they lack the confidence afforded by the heavy black ink strokes of marker pens and fine-liners.\n\nIf you\u2019re going to be sketching with clients then invest in some black markers and larger sheets of paper. At the risk of sounding like a stationery brand whore, Sharpies are ideal, and these comedy-sized Post-Its do the job far better than cheaper, less sticky alternatives. Although they\u2019re thicker than most standard paper, be sure to double-layer them if you\u2019re writing on them on a wall, unless you fancy a weekend redecorating your client\u2019s swanky boardroom.\n\nThe best way to build confidence and improve your sketching technique is, obviously, to practise. Reading this article will be of no help unless you repeat the following examples several times each. Go grab a pen and some paper now, and notice how you improve within even a short period of time.\n\nSketching web UI\n\nMost elements of any website can be drawn as a combination of geometric shapes.\n\n photo by Nathanael Boehm\n\nCircles\n\nTo draw a circle, get in position and start by resting your hand on the page and making the circular motion a few times without putting pen to paper. As you lower your pen whilst continuing the motion, you should notice the resulting shape is more regular than it otherwise would have been.\n\nSquares and rectangles\n\nDraw one pair of parallel lines first, followed by the others to complete the shapes. Slightly overlap the ends of the lines to make corners feel more solid than if you were to leave gaps. If you\u2019re drawing a container, always draw the contents first, that way it won\u2019t be a squash to fit them in. If you\u2019re drawing a grid (of thumbnails, for instance), draw all parallel lines first as a series of long dashes to help keep line lengths and angles consistent.\n\n\n\nShadows\n\nTo lift elements from the page for emphasis, add a subtle shadow with a grey marker. For the most convincing look, assume the light source to be at the top left of the page \u2013 the shadow should simply be a thick grey line along the bottom and up the right edge of your shape. If the shape is irregular, the shadow should follow its outline. This is a good way to emphasise featured items, speech bubbles, form buttons, and so on.\n\n\n\nSketching ideas\n\nArrows\n\nUse arrows to show steps in a process or direction of movement. Giving shadows a 3-D feel, or adding a single colour, will help separate them from the rest of the sketch.\n\n\n\nFaces\n\nStart by drawing the circle. The direction of the nose (merely a point) indicates the direction of the person\u2019s gaze. The eyes and mouth show emotion: more open and curvy for happy thoughts; more closed and jagged for angry thoughts. Try out a few shapes and see what emotions they convey.\n\n\n\nPeople\n\nRemember, we\u2019re aiming for communication rather than realism here. A stick man would be fine. Give him a solid body, as shown in this example, and it becomes easier to pose him.\n\n\n\nI know you think hands are hard, but they\u2019re quite important to convey some ideas, and for our purposes we don\u2019t need to draw hands with any detail. An oval with a stick does the job of a pointing hand. 
Close-ups might need more fingers showing, but still don\u2019t require any degree of realism.\n\nSignage\n\nDon\u2019t be afraid to use words. We\u2019re sketching to communicate, so if the easiest way to show an office block is a building with a big \u2018office\u2019 sign on the roof, that\u2019s fine!\n\n\n\nLabels\n\nLikewise, feel free to label interactions. Use upper-case letters for legibility and slightly angle the horizontal bars upwards to create a more positive feel.\n\nClich\u00e9s\n\nClich\u00e9s are your friend! Someone\u2019s having an idea? Light bulb above the head. Computer\u2019s crashed? Cloud of smoke with \u201c$\u00a3%*!\u201d\n\n\n\n\n\nIt\u2019s good to practise regularly. Try applying these principles to still life, too. Look around you now and draw the cup on the table, or the books on the shelf. Think of it as a combination of shapes and aim for symbolism rather than realism, and it\u2019s not as hard as you\u2019d think.\n\nI hope this has given you the confidence to give it a shot, and the ability to at least not be mortified with the results!\n\nTip: If you\u2019re involving clients in design games like Leisa Reichelt\u2019s \u2018Design Consequences\u2019 it may be wise to tone down the quality of your drawings at that point so they don\u2019t feel intimidated. Remember, it\u2019s important for them to feel at ease with the idea of wireframing in front of you and their colleagues, no matter how bad their line work.\n\nFor more information see davegrayinfo.com \u2013 Dave Gray taught me everything I know :-)", "year": "2010", "author": "Paul Annett", "author_slug": "paulannett", "published": "2010-12-19T00:00:00+00:00", "url": "https://24ways.org/2010/sketching-to-communicate/", "topic": "business"} {"rowid": 219, "title": "Speed Up Your Site with Delayed Content", "contents": "Speed remains one of the most important factors influencing the success of any website, and the first rule of performance (according to Yahoo!) is reducing the number of HTTP requests. Over the last few years we\u2019ve seen techniques like sprites and combo CSS/JavaScript files used to reduce the number of HTTP requests. But there\u2019s one area where large numbers of HTTP requests are still a fact of life: the small avatars attached to the comments on articles like this one.\n\nAvatars\n\nMany sites like 24 ways use a fantastic service called Gravatar to provide user images. As a user, you can sign up to Gravatar, give them your e-mail address, and upload an image to represent you. Sites can then include your image by generating a one way hash of your e-mail address and using that to build an image URL. For example, the markup for the comments on this page looks something like this:\n\n<div>\n\t<h4><a href=\"http://allinthehead.com/\">\n\t\t<img src=\"http://www.gravatar.com/avatar.php?gravatar_id=13734b0cb20708f79e730809c29c3c48&size=100\" class=\"gravatar\" alt=\"\" height=\"100\" width=\"100\" />\n Drew McLellan\n\t</a></h4>\n\t<p>This is a great article!</p>\n</div>\n\nThe Gravatar URL contains two parts. 100 is the size in pixels of the image we want. 13734b0cb20708f79e730809c29c3c48 is an MD5 digest of Drew\u2019s e-mail address. 
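\n\nIncidentally, if you\u2019re wondering how that digest might be produced, here\u2019s a rough sketch. Browsers don\u2019t ship with an MD5 function, so this assumes an md5() helper from a small JavaScript MD5 library, plus jQuery\u2019s $.trim(); the e-mail address shown is just a stand-in. Gravatar hashes the trimmed, lower-cased address:\n\nfunction gravatarUrl(email, size) {\n\t// Gravatar expects the address trimmed and lower-cased before hashing\n\tvar hash = md5($.trim(email).toLowerCase())\n\treturn 'http://www.gravatar.com/avatar.php?gravatar_id=' + hash + '&size=' + size\n}\n\ngravatarUrl('someone@example.com', 100)\n// gives a URL in the same format as the example above\n\n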
Using MD5 means we can request an image for a user without sharing their e-mail address with anyone who views the source of the page.\n\nSo what\u2019s wrong with avatars?\n\nThe problem is that a popular article can easily get hundreds of comments, and every one of them means another image has to be individually requested from Gravatar\u2019s servers. Each request is small and the Gravatar servers are fast but, when you add them up, it can easily add seconds to the rendering time of a page. Worse, they can delay the loading of more important assets like the CSS required to render the main content of the page.\n\nThese images aren\u2019t critical to the page, and don\u2019t need to be loaded up front. Let\u2019s see if we can delay loading them until everything else is done. That way we can give the impression that our site has loaded quickly even if some requests are still happening in the background.\n\nDelaying image loading\n\nThe first problem we find is that there\u2019s no way to prevent Internet Explorer, Chrome or Safari from loading an image without removing it from the HTML itself. Tricks like removing the images on the fly with JavaScript don\u2019t work, as the browser has usually started requesting the images before we get a chance to stop it.\n\nRemoving the images from the HTML means that people without JavaScript enabled in their browser won\u2019t see avatars. As Drew mentioned at the start of the month, this can affect a large number of people, and we can\u2019t completely ignore them. But most sites already have a textual name attached to each comment and the avatars are just a visual enhancement. In most cases it\u2019s OK if some of our users don\u2019t see them, especially if it speeds up the experience for the other 98%.\n\nRemoving the images from the source of our page also means we\u2019ll need to put them back at some point, so we need to keep a record of which images need to be requested. All Gravatar images have the same URL format; the only thing that changes is the e-mail hash. Storing this is a great use of HTML5 data attributes.\n\nHTML5 data what?\n\nData attributes are a new feature in HTML5. The latest version of the spec says:\n\n\n\tA custom data attribute is an attribute in no namespace whose name starts with the string \u201cdata-\u201d, has at least one character after the hyphen, is XML-compatible, and contains no characters in the range U+0041 to U+005A (LATIN CAPITAL LETTER A to LATIN CAPITAL LETTER Z).\n[\u2026]\nCustom data attributes are intended to store custom data private to the page or application, for which there are no more appropriate attributes or elements. These attributes are not intended for use by software that is independent of the site that uses the attributes.\n\n\nIn other words, they\u2019re attributes of an HTML element that start with \u201cdata-\u201d which you can use to share data with scripts running on your site. 
They\u2019re great for adding small bits of metadata that don\u2019t fit into an existing markup pattern the way microformats do.\n\nLet\u2019s see this in action\n\nTake a look at the markup for comments again:\n\n<div>\n\t<h4><a href=\"http://allinthehead.com/\">\n\t\t<img src=\"http://www.gravatar.com/avatar.php?gravatar_id=13734b0cb20708f79e730809c29c3c48&size=100\" class=\"gravatar\" alt=\"\" height=\"100\" width=\"100\" />\n Drew McLellan\n\t</a></h4>\n\t<p>This is a great article!</p>\n</div>\n\nLet\u2019s replace the <img> element with a data-gravatar-hash attribute on the <a> element:\n\n<div>\n\t<h4><a href=\"http://allinthehead.com/\" data-gravatar-hash=\"13734b0cb20708f79e730809c29c3c48\">\n Drew McLellan\n\t</a></h4>\n\t<p>This is a great article!</p>\n</div>\n\nOnce we\u2019ve done this, we\u2019ll need a small bit of JavaScript to find all these attributes, and replace them with images after the page has loaded. Here\u2019s an example using jQuery:\n\n$(window).load(function() {\n\t$('a[data-gravatar-hash]').prepend(function(index){\n\t\tvar hash = $(this).attr('data-gravatar-hash')\n\t\treturn '<img width=\"100\" height=\"100\" alt=\"\" src=\"http://www.gravatar.com/avatar.php?size=100&gravatar_id=' + hash + '\">'\n\t})\n})\n\nThis code waits until everything on the page is loaded, then uses jQuery.prepend to insert an image into every link containing a data-gravatar-hash attribute. It\u2019s short and relatively simple, but in tests it reduced the rendering time of a sample page from over three seconds to well under one.\n\nFinishing touches\n\nWe still need to consider the appearance of the page before the avatars have loaded. When our script adds extra content to the page it will cause a browser reflow, which is visually annoying. We can avoid this by using CSS to reserve some space for each image before it\u2019s inserted into the HTML:\n\n#comments div {\n\tpadding-left: 110px;\n\tmin-height: 100px;\n\tposition: relative;\n}\n#comments div h4 img {\n\tdisplay: block;\n\tposition: absolute;\n\ttop: 0;\n\tleft: 0;\n}\n\nIn a real world example, we\u2019ll also find that the HTML for a comment is more varied as many users don\u2019t provide a web page link. We can make small changes to our JavaScript and CSS to handle this case.\n\nPut this all together and you get this example.\n\nTaking this idea further\n\nThere\u2019s no reason to limit this technique to sites using Gravatar; we can use similar code to delay loading any images that don\u2019t need to be present immediately. For example, this year\u2019s redesigned Flickr photo page uses a \u201cdata-defer-src\u201d attribute to describe any image that doesn\u2019t need to be loaded straight away, including avatars and map tiles.\n\nYou also don\u2019t have to limit yourself to loading the extra resources once the page loads. You can get further bandwidth savings by waiting until the user takes an action before downloading extra assets. 
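\n\nAs a rough sketch of that idea, the avatar code from earlier could be wrapped in a click handler so the images are only requested when a reader asks for them (the \u201cshow avatars\u201d link and its id are hypothetical, not something in the markup above):\n\n// bind this once the DOM is ready, e.g. inside $(document).ready()\n$('#show-avatars').click(function(e) {\n\te.preventDefault()\n\t$('a[data-gravatar-hash]').prepend(function(index){\n\t\tvar hash = $(this).attr('data-gravatar-hash')\n\t\treturn '<img width=\"100\" height=\"100\" alt=\"\" src=\"http://www.gravatar.com/avatar.php?size=100&gravatar_id=' + hash + '\">'\n\t})\n})\n\n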
Amazon has taken this tactic to the extreme on its product pages \u2013 extra content is loaded as you scroll down the page.\n\nSo next time you\u2019re building a page, take a few minutes to think about which elements are peripheral and could be delayed to allow more important content to appear as quickly as possible.", "year": "2010", "author": "Paul Hammond", "author_slug": "paulhammond", "published": "2010-12-18T00:00:00+00:00", "url": "https://24ways.org/2010/speed-up-your-site-with-delayed-content/", "topic": "ux"} {"rowid": 231, "title": "Designing for iOS: Life Beyond Media Queries", "contents": "Although not a new phenomenon, media queries seem to be getting a lot attention online recently and for the right reasons too \u2013 it\u2019s great to be able to adapt a design with just a few lines of CSS \u2013 but many people are relying only on them to create an iPhone-specific version of their website. \n\nI was pleased to hear at FOWD NYC a few weeks ago that both myself and Aral Balkan share the same views on why media queries aren\u2019t always going to be the best solution for mobile. Both of us specialise in iPhone design ourselves and we opt for a different approach to media queries. The trouble is, regardless of what you have carefully selected to be display:none; in your CSS, the iPhone still loads everything in the background; all that large imagery for your full scale website also takes up valuable mobile bandwidth and time.\n\nYou can greatly increase the speed of your website by creating a specific site tailored to mobile users with just a few handy pointers \u2013 media queries, in some instances, might be perfectly suitable but, in others, here\u2019s what you can do.\n\nRedirect your iPhone/iPod Touch users\n\nTo detect whether someone is viewing your site on an iPhone or iPod Touch, you can either use JavaScript or PHP. \n\nThe JavaScript \n\nif((navigator.userAgent.match(/iPhone/i)) || (navigator.userAgent.match(/iPod/i))) { \n if (document.cookie.indexOf(\"iphone_redirect=false\") == -1) window.location = \"http://mobile.yoursitehere.com\"; \n}\n\nThe PHP\n\nif(strstr($_SERVER['HTTP_USER_AGENT'],'iPhone') || strstr($_SERVER['HTTP_USER_AGENT'],'iPod')) \n{\n header('Location: http://mobile.yoursitehere.com');\n exit();\n}\n\nBoth of these methods redirect the user to a site that you have made specifically for the iPhone. At this point, be sure to provide a link to the full version of the website, in case the user wishes to view this and not be thrown into an experience they didn\u2019t want, with no way back.\n\nTailoring your site\n\nSo, now you\u2019ve got 320\u2009\u00d7\u2009480 pixels of screen to play with \u2013 and to create a style sheet for, just as you would for any other site you build. There are a few other bits and pieces that you can add to your code to create a site that feels more like a fully immersive iPhone app rather than a website.\n\nRetina display\n\nWhen building your website specifically tailored to the iPhone, you might like to go one step further and create a specific style sheet for iPhone 4\u2019s Retina display. Because there are four times as many pixels on the iPhone 4 (640\u2009\u00d7\u2009960 pixels), you\u2019ll find specifics such as text shadows and borders will have to be increased. 
\n\n<link rel=\"stylesheet\" \n media=\"only screen and (-webkit-min-device-pixel-ratio: 2)\" \n type=\"text/css\" href=\"../iphone4.css\" />\n\n(Credit to Thomas Maier)\n\nPrevent user scaling\n\nThis declaration, added into the <head>, stops the user from being able to pinch-zoom in and out of your design, which is perfect if you are designing to the exact pixel measurements of the iPhone screen. \n\n<meta name=\"viewport\" \n content=\"width=device-width, initial-scale=1.0, maximum-scale=1.0\">\n\nDesigning for orientation \n\nAs iPhones aren\u2019t static devices, you\u2019ll also need to provide a style sheet for horizontal orientation. We can do this by giving the style sheet\u2019s <link> element an id of orient_css and inserting some JavaScript into the <head> as follows: \n\n<script type=\"text/javascript\">\nfunction orient() {\n switch(window.orientation) {\n case 0: \n document.getElementById(\"orient_css\").href = \"css/iphone_portrait.css\";\n break;\n case -90: \n document.getElementById(\"orient_css\").href = \"css/iphone_landscape.css\";\n break;\n case 90: \n document.getElementById(\"orient_css\").href = \"css/iphone_landscape.css\";\n break;\n }\n}\n// run once the page has loaded, and again whenever the device is rotated\nwindow.onload = orient;\nwindow.onorientationchange = orient;\n</script>\n\nYou can also specify orientation styles using media queries. This is absolutely fine, as by this point you\u2019ll already be working with mobile-specific graphics and have little need to set a lot of things to display:none;\n\n<link rel=\"stylesheet\" \n media=\"only screen and (max-device-width: 480px)\" href=\"/iphone.css\">\n<link rel=\"stylesheet\" \n media=\"only screen and (orientation: portrait)\" href=\"/portrait.css\">\n<link rel=\"stylesheet\" \n media=\"only screen and (orientation: landscape)\" href=\"/landscape.css\">\n\nRemove the address and status bars, top and bottom\n\nTo give you more room on-screen and to make your site feel more like an immersive web app, you can place the following declaration into the <head> of your document\u2019s code to remove the address and status bars at the top and bottom of the screen. \n\n<meta name=\"apple-mobile-web-app-capable\" content=\"yes\" />\n\nMaking the most of inbuilt functions\n\nSimilar to mailto: e-mail links, the iPhone also supports two other handy URI schemes which are great for enhancing contact details. When tapped, the following links will automatically bring up the appropriate call or text interface:\n\n<a href=\"tel:01234567890\">Call us</a>\n<a href=\"sms:01234567890\">Text us</a>\n\niPhone-specific Web Clip icon\n\nAlthough I believe them to be fundamentally flawed, since they rely on the user bookmarking your site, iPhone Web Clip icons are still a nice touch. You need just two declarations, again in the <head> of your document:\n\n<link rel=\"apple-touch-icon\" href=\"icons/regular_icon.png\" />\n<link rel=\"apple-touch-icon\" sizes=\"114x114\" href=\"icons/retina_icon.png\" />\n\nFor iPhone 4 you\u2019ll need to create a 114\u2009\u00d7\u2009114 pixel icon; for a non-Retina display, a 57\u2009\u00d7\u200957 pixel icon will do the trick.\n\nPrecomposed \n\nApple adds its standard gloss \u2018moon\u2019 over the top of any icon. If you feel this might be too much for your particular icon and would prefer a matte finish, you can add precomposed to the end of the apple-touch-icon declaration to remove the standard gloss. \n\n<link rel=\"apple-touch-icon-precomposed\" href=\"/images/touch-icon.png\" />\n\nWrapping up\n\nMedia queries definitely have their uses. They make it easy to build a custom experience for your visitor, regardless of their browser\u2019s size.
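For simple cases, a single rule in your existing style sheet may be all it takes (the selector and breakpoint here are only an illustration):\n\n/* #sidebar and the 480px breakpoint are placeholders */\n@media only screen and (max-device-width: 480px) {\n\t#sidebar {\n\t\tdisplay: none;\n\t}\n}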
For more complex sites, however, or where you have lots of imagery and other content that isn\u2019t necessary on the mobile version, you can now use these other methods to help you out. Remember, media queries are purely for presentation, not optimisation; for busy people on the go, optimisation and faster-running mobile experiences can only be a good thing. \n\nHave a wonderful Christmas, fellow Webbies!", "year": "2010", "author": "Sarah Parmenter", "author_slug": "sarahparmenter", "published": "2010-12-17T00:00:00+00:00", "url": "https://24ways.org/2010/life-beyond-media-queries/", "topic": "code"}