{"rowid": 8, "title": "Coding Towards Accessibility", "contents": "\u201cCan we make it AAA-compliant?\u201d \u2013 does this question strike fear into your heart? Maybe for no other reason than because you will soon have to wade through the impenetrable WCAG documentation once again, to find out exactly what AAA-compliant means?\n\nI\u2019m not here to talk about that.\n\nThe Web Content Accessibility Guidelines are a comprehensive and peer-reviewed resource which we\u2019re lucky to have at our fingertips. But they are also a pig to read, and they may have contributed to the sense of mystery and dread with which some developers associate the word accessibility.\n\nThis Christmas, I want to share with you some thoughts and some practical tips for building accessible interfaces which you can start using today, without having to do a ton of reading or changing your tools and workflow.\n\nBut first, let\u2019s clear up a couple of misconceptions.\n\nDreary, flat experiences\n\nI recently built a front-end framework for the Post Office. This was a great gig for a developer, but when I found out about my client\u2019s stringent accessibility requirements I was concerned that I\u2019d have to scale back what was quite a complex set of visual designs.\n\nSites like Jakob Neilsen\u2019s old workhorse useit.com and even the pioneering GOV.UK may have to shoulder some of the blame for this. They put a premium on usability and accessibility over visual flourish. (Although, in fairness to Mr Neilsen, his new site nngroup.com is really quite a snazzy affair, comparatively.)\n\nOf course, there are other reasons for these sites\u2019 aesthetics \u2014 and it\u2019s not because of the limitations of the form. You can make an accessible site look as glossy or as plain as you want it to look. It\u2019s always our own ingenuity and attention to detail that are going to be the limiting factors.\n\nSynecdoche\n\nWe must always guard against the tendency to assume that catering to screen readers means we have the whole accessibility ballgame covered. \n\nThere\u2019s so much more to accessibility than assistive technology, as you know. And within the field of assistive technology there are plenty of other devices for us to consider.\n\nPlanning to accommodate all these users and devices can be daunting. When I first started working in this field I thought that the breadth of technology was prohibitive. I didn\u2019t even know what a screen reader looked like. (I assumed they were big and heavy, perhaps like an old typewriter, and certainly they would be expensive and difficult to fathom.) This is nonsense, of course. Screen reader emulators are readily available as browser extensions and can be activated in seconds. Chromevox and Fangs are both excellent and you should download one or the other right now.\n\nBut the really good news is that you can emulate many other types of assistive technology without downloading a byte. And this is where we move from misconceptions into some (hopefully) useful advice.\n\nThe mouse trap\n\nThe simplest and most effective way to improve your abilities as a developer of accessible interfaces is to unplug your mouse.\n\nKeyboard operation has its own WCAG chapter, because most users of assistive technology are navigating the web using only their keyboards. You can go some way towards putting yourself into their shoes so easily \u2014 just by ditching a peripheral.\n\nLearning this was a lightbulb moment for me. 
When I build interfaces I am constantly flicking between code and the browser, testing or viewing the changes I have made. Now, instead of checking a new element once, I check it twice: once with my mouse and then again without.\n\nDon\u2019t just :hover\n\nThe reality is that when you first start doing this you can find your site becomes unusable straightaway. It\u2019s easy to lose track of which element is in focus as you hit the tab key repeatedly.\n\nOne of the easiest changes you can make to your coding practice is to add :focus and :active pseudo-classes to every hover state that you write. I\u2019m still amazed at how many sites fail to provide a decent focus state for links (and despite previous 24 ways authors in 2007 and 2009 writing on this same issue!).\n\nYou may find that in some cases it makes sense to have something other than, or in addition to, the hover state on focus, but start with the hover state that your designer has taken the time to provide you with. It\u2019s a tiny change and there is no downside. So instead of this:\n\n.my-cool-link:hover {\n\tbackground-color: MistyRose;\n}\n\n\u2026try writing this:\n\n.my-cool-link:hover,\n.my-cool-link:focus,\n.my-cool-link:active {\n\tbackground-color: MistyRose;\n}\n\nI\u2019ve toyed with the idea of making a Sass mixin to take care of this for me, but I haven\u2019t yet. I worry that people reading my code won\u2019t see that I\u2019m explicitly defining my focus and active states so I take the hit and write my hover rules out longhand.\n\nJavaScript can play, too\n\nThis was another revelation for me. Keyboard-only navigation doesn\u2019t necessitate a JavaScript-free experience, and up-to-date screen readers can execute JavaScript. So we\u2019re able to create complex JavaScript-driven interfaces which all users can interact with.\n\nSome of the hard work has already been done for us. First, there are already conventions around keyboard-driven interfaces. Think about the last time you viewed a photo album on Facebook. You can use the arrow keys to switch between photos, and the escape key closes whichever lightbox-y UI thing Facebook is showing its photos in this week. Arrow keys (up/down as well as left/right) for progression through content; Escape to back out of something; Enter or space bar to indicate a positive intention \u2014 these are established keyboard conventions which we can apply to our interfaces to improve their accessibility.\n\n
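Here\u2019s a minimal sketch of that idea, assuming jQuery and a Facebook-style photo viewer; showPreviousPhoto and showNextPhoto are hypothetical functions standing in for whatever your own interface does.\n\n$(document).keyup(function(e){\n\tswitch(e.keyCode) {\n\t\tcase 37: // left arrow\n\t\t\tshowPreviousPhoto();\n\t\t\tbreak;\n\t\tcase 39: // right arrow\n\t\t\tshowNextPhoto();\n\t\t\tbreak;\n\t}\n});\n\n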
Of course, by doing so we are improving our interfaces in general, giving all users the option to switch between keyboard and mouse actions as and when it suits them.\n\nSecond, this guy wants to help you out. Hans Hillen is a developer who has done a great deal of work around accessibility and JavaScript-powered interfaces. Along with The Paciello Group he has created a version of the jQuery UI library which has been fully optimised for keyboard navigation and screen reader use. It\u2019s a fantastic reference which I revisit all the time.\n\nI\u2019m not a huge fan of the jQuery UI library. It\u2019s a pain to style and the code is a bit bloated. So I\u2019ve not used this demo as a code resource to copy wholesale. I use it by playing with the various components and seeing how they react to keyboard controls. Each component is also fully marked up with the relevant ARIA roles to improve screen reader announcement where possible (more on this below).\n\nCoding for accessibility promotes good habits\n\nThis is another observation around accessibility and JavaScript. I noticed an improvement in the structure and abstraction of my code when I started adding keyboard controls to my interface elements. \n\nYour code has to become more modular and event-driven, because any number of events could trigger the same interaction. A mouse-click, the Enter key and the space bar could all conceivably trigger the same open function on a collapsed accordion element. (And you want to keep things DRY, don\u2019t you?) \n\nIf you aren\u2019t already in the habit of separating out your interface functionality into discrete functions, you will be soon.\n\nvar doSomethingCool = function(){\n\t// Do something cool here.\n}\n\n// Bind function to a button click - pretty vanilla\n$('.myCoolButton').on('click', function(){\n\tdoSomethingCool();\n\treturn false;\n});\n\n// Bind the same function to a range of keypresses\n$(document).keyup(function(e){\n\tswitch(e.keyCode) {\n\t\tcase 13: // enter\n\t\tcase 32: // spacebar\n\t\t\tdoSomethingCool();\n\t\t\tbreak;\n\t\tcase 27: // escape\n\t\t\tdoSomethingElse();\n\t\t\tbreak;\n\t}\n});\n\nTo be honest, if you\u2019re doing complex UI stuff with JavaScript these days, or if you\u2019ve been building any responsive interfaces which rely on JavaScript, then you are most likely working with an application framework such as Backbone, Angular or Ember, so an abstracted and event-driven application structure will be familiar to you. It should be super easy for you to start helping out your keyboard-only users if you aren\u2019t already \u2014 just add a few more event bindings into your UI layer!\n\nManipulating the tab order\n\nSo, you\u2019ve adjusted your mindset and now you test every change to your codebase using a keyboard as well as a mouse. You\u2019ve applied all your hover states to :focus and :active so you can see where you\u2019re tabbing on the page, and your interactive components react seamlessly to a mixture of mouse and keyboard commands. Feels good, right?\n\nThere\u2019s another level of optimisation to consider: manipulating the tab order. Certain DOM elements are naturally part of the tab order, and others are excluded. Links and input elements are the main elements included in the tab order, and static elements like paragraphs and headings are excluded. What if you want to make a static element \u2018tabbable\u2019? \n\nA good example would be in an expandable accordion component. Each section of the accordion should be separated by a heading, and there\u2019s no reason to make that heading into a link simply because it\u2019s interactive.\n\n
<div class=\"accordion-widget\">\n\t<h3>Tyrannosaurus</h3>\n\t<p>Tyrannosaurus; meaning \"tyrant lizard\"...</p>\n\n\t<h3>Utahraptor</h3>\n\t<p>Utahraptor is a genus of theropod dinosaurs...</p>\n\n\t<h3>Dromiceiomimus</h3>\n\t<p>Ornithomimus is a genus of ornithomimid dinosaurs...</p>\n</div>
\n\nAdding the heading elements to the tab order is trivial. We just set their tabindex attribute to zero. You could do this on the server or the client. I prefer to do it with JavaScript as part of the accordion setup and initialisation process.\n\n$('.accordion-widget h3').attr('tabindex', '0');\n\nYou can apply this trick in reverse and take elements out of the tab order by setting their tabindex attribute to \u22121, or change the tab order completely by using other integers. This should be done with great care, if at all. You have to be sure that the markup you remove from the tab order comes out because it genuinely improves the keyboard interaction experience. This is hard to validate without user testing. The danger is that developers will try to sweep complicated parts of the UI under the carpet by taking them out of the tab order. This would be considered a dark pattern \u2014 at least on my team!\n\nA farewell ARIA\n\nThis is where things can get complex, and I\u2019m no expert on the ARIA specification: I feel like I\u2019ve only dipped my toe into this aspect of coding for accessibility. But, as with WCAG, I\u2019d like to demystify things a little bit to encourage you to look into this area further yourself.\n\nARIA roles are of most benefit to screen reader users, because they modify and augment screen reader announcements. \n\nLet\u2019s take our dinosaur accordion from the previous section. The markup is semantic, so a screen reader that can\u2019t handle JavaScript will announce all the content within the accordion, no problem.\n\nBut modern screen readers can deal with JavaScript, and this means that all the lovely dino information beneath each heading has probably been hidden on document.ready, when the accordion initialised. It might have been hidden using display:none, which prevents a screen reader from announcing content. If that\u2019s as far as you have gone, then you\u2019ve committed an accessibility sin by hiding content from screen readers. Your user will hear a set of headings being announced, with no content in between. It would sound something like this if you were using Chromevox:\n\n> Tyrannosaurus. Heading Three.\n> Utahraptor. Heading Three.\n> Dromiceiomimus. Heading Three.\n\nWe can add some ARIA magic to the markup to improve this, using the tablist role. Start by adding a role of tablist to the widget, and roles of tab and tabpanel to the headings and paragraphs respectively. Set boolean values for aria-selected, aria-hidden and aria-expanded. The markup could end up looking something like this.\n\n
<div class=\"accordion-widget\" role=\"tablist\">\n\t<!-- Tyrannosaurus heading and panel, as before -->\n\t<h3 tabindex=\"0\" role=\"tab\" aria-selected=\"false\">Utahraptor</h3>\n\t<p role=\"tabpanel\" aria-hidden=\"true\" aria-expanded=\"false\">Utahraptor is a genus of theropod dinosaurs...</p>\n\t<!-- Dromiceiomimus heading and panel, as before -->\n</div>
\n\nNow, if a screen reader user encounters this markup they will hear the following:\n\n> Tyrannosaurus. Tab not selected; one of three.\n> Utahraptor. Tab not selected; two of three.\n> Dromiceiomimus. Tab not selected; three of three.\n\nYou could add arrow key events to help the user browse up and down the tab list items until they find one they like. \n\nYour accordion open() function should update the ARIA boolean values as well as adding whatever classes and animations you have built in as standard. 
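A rough sketch of what that might look like, assuming jQuery, the markup above and one open panel at a time (openTab is a made-up name, and this is an illustration rather than Hans Hillen\u2019s implementation):\n\nvar openTab = function($tab) {\n\t// Deselect every tab and hide every panel.\n\t$('[role=\"tab\"]').attr('aria-selected', 'false');\n\t$('[role=\"tabpanel\"]').attr('aria-hidden', 'true').attr('aria-expanded', 'false').hide();\n\n\t// Select the chosen tab and reveal its panel.\n\t$tab.attr('aria-selected', 'true');\n\t$tab.next('[role=\"tabpanel\"]').attr('aria-hidden', 'false').attr('aria-expanded', 'true').show();\n};\n\n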
Your users know that unselected tabs are meant to be interacted with, so if a user triggers the open function (say, by hitting Enter or the space bar on the second item) they will hear this:\n\n> Utahraptor. Selected; two of three.\n\nThe paragraph element for the expanded item will not be hidden by your CSS, which means it will be announced as normal by the screen reader.\n\nThis kind of thing makes so much more sense when you have a working example to play with. Again, I refer you to the fantastic resource that Hans Hillen has put together: this is his take on an accessible accordion, on which much of my example is based.\n\nConclusion\n\nGetting complex interfaces right for all of your users can be difficult \u2014 there\u2019s no point pretending otherwise. And there\u2019s no substitute for user testing with real users who navigate the web using assistive technology every day. This kind of testing can be time-consuming to recruit for and to conduct. On top of this, we now have accessibility on mobile devices to contend with. That\u2019s a huge area in itself, and it\u2019s one which I have not yet had a chance to research properly.\n\nSo, there\u2019s lots to learn, and there\u2019s lots to do to get it right. But don\u2019t be disheartened. If you have read this far then I\u2019ll leave you with one final piece of advice: don\u2019t wait.\n\nDon\u2019t wait until you\u2019re building a site which mandates AAA-compliance to try this stuff out. Don\u2019t wait for a client with the will or the budget to conduct the full spectrum of user testing to come along. Unplug your mouse, and start playing with your interfaces in a new way. You\u2019ll be surprised at the things that you learn and the issues you uncover. \n\nAnd the next time a true accessibility project comes along, you will be way ahead of the game.", "year": "2013", "author": "Charlie Perrins", "author_slug": "charlieperrins", "published": "2013-12-03T00:00:00+00:00", "url": "https://24ways.org/2013/coding-towards-accessibility/", "topic": "code"} {"rowid": 11, "title": "JavaScript: Taking Off the Training Wheels", "contents": "JavaScript is the third pillar of front-end web development. Of those pillars, it is both the most powerful and the most complex, so it\u2019s understandable that when 24 ways asked, \u201cWhat one thing do you wish you had more time to learn about?\u201d, a number of you answered \u201cJavaScript!\u201d\n\nThis article aims to help you feel happy writing JavaScript, and maybe even without libraries like jQuery. I can\u2019t comprehensively explain JavaScript itself without writing a book, but I hope this serves as a springboard from which you can jump to other great resources.\n\nWhy learn JavaScript?\n\nSo what\u2019s in it for you? Why take the next step and learn the fundamentals?\n\nConfidence with jQuery\n\nIf nothing else, learning JavaScript will improve your jQuery code; you\u2019ll be comfortable writing jQuery from scratch and feel happy bending others\u2019 code to your own purposes. Writing efficient, fast and bug-free jQuery is also made much easier when you have a good appreciation of JavaScript, because you can look at what jQuery is really doing. When you need to leave the beaten track, you can do so with confidence.\n\nIn fact, you could say that jQuery\u2019s ultimate goal is not to exist: it was invented at a time when web APIs were very inconsistent and hard to work with. That\u2019s slowly changing as new APIs are introduced, and hopefully there will come a time when jQuery isn\u2019t needed.\n\nAn example of one such change is the introduction of the very useful document.querySelectorAll. Like jQuery, it converts a CSS selector into a list of matching elements. Here\u2019s a comparison of some jQuery code and the equivalent without.\n\n$('.counter').each(function (index) {\n $(this).text(index + 1);\n});\n\nvar counters = document.querySelectorAll('.counter');\n[].slice.call(counters).forEach(function (elem, index) {\n elem.textContent = index + 1;\n});\n\nSolving problems no one else has!\n\nWhen you have to go to the internet to solve a problem, you\u2019re forever stuck reusing code other people wrote to solve a slightly different problem to your own. Learning JavaScript will allow you to solve problems in your own way, and begin to do things nobody else ever has.\n\nNode.js\n\nNode.js is a non-browser environment for running JavaScript, and it can do just about anything! But if that sounds daunting, don\u2019t worry: the Node community is thriving, very friendly and willing to help.\n\nI think Node is incredibly exciting. It enables you, with one language, to build complete websites with complex and feature-filled front- and back-ends. Projects that let users log in or need a database are within your grasp, and Node has a great ecosystem of library authors to help you build incredible things. Exciting!\n\nHere\u2019s an example web server written with Node. http is a module that allows you to create servers and, like jQuery\u2019s $.ajax, make requests. It\u2019s a small amount of code to do something complex and, while working with Node is different from writing front-end code, it\u2019s certainly not out of your reach.\n\nvar http = require('http');\nhttp.createServer(function (req, res) {\n res.writeHead(200, {'Content-Type': 'text/plain'});\n res.end('Hello World');\n}).listen(1337);\nconsole.log('Server running at http://localhost:1337/');\n\nGrunt and other website tools\n\nNode has brought in something of a renaissance in tools that run in the command line, like Yeoman and Grunt. Both of these rely heavily on Node, and I\u2019ll talk a little bit about Grunt here.\n\nGrunt is a task runner, and many people use it for compiling Sass or compressing their site\u2019s JavaScript and images. It\u2019s pretty cool. You configure Grunt via the gruntfile.js, so JavaScript skills will come in handy, and since Grunt supports plug-ins built with JavaScript, knowing it unlocks the bucketloads of power Grunt has to offer.\n\nWays to improve your skills\n\nSo you know you want to learn JavaScript, but what are some good ways to learn and improve? I think the answer to that is different for different people, but here are some ideas.\n\nRebuild a jQuery app\n\nConverting a jQuery project to non-jQuery code is a great way to explore how you modify elements on the page and make requests to the server for data. My advice is to focus on making it work in one modern browser initially, and then go cross-browser if you\u2019re feeling adventurous. 
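As a hedged taste of what that conversion involves (the /photos.json endpoint here is invented), compare a jQuery request with its plain JavaScript equivalent in a modern browser:\n\n// jQuery\n$.ajax({\n\turl: '/photos.json',\n\tsuccess: function (data) {\n\t\tconsole.log(data);\n\t}\n});\n\n// Plain JavaScript, using XMLHttpRequest directly\nvar xhr = new XMLHttpRequest();\nxhr.open('GET', '/photos.json');\nxhr.onload = function () {\n\tconsole.log(JSON.parse(xhr.responseText));\n};\nxhr.send();\n\n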
There are many resources for directly comparing jQuery and non-jQuery code, like Jeffrey Way\u2019s jQuery to JavaScript article.\n\nFind a mentor\n\nIf you think you\u2019d work better on a one-to-one basis then finding yourself a mentor could be a brilliant way to learn. The JavaScript community is very friendly and many people will be more than happy to give you their time. I\u2019d look out for someone who\u2019s active and friendly on Twitter, and does the kind of work you\u2019d like to do. Introduce yourself over Twitter or send them an email. I wouldn\u2019t expect a full tutoring course (although that is another option!) but they\u2019ll be very glad to answer a question and any follow-ups every now and then.\n\nGo to a workshop\n\nMany conferences and local meet-ups run workshops, hosted by experts in a particular field. See if there\u2019s one in your area. Workshops are great because you can ask direct questions, and you\u2019re in an environment where others are learning just like you are \u2014 no need to learn alone!\n\nSet yourself challenges\n\nThis is one way I like to learn new things. I have a new thing that I\u2019m not very good at, so I pick something that I think is just out of my reach and I try to build it. It\u2019s learning by doing and, even if you fail, it can be enormously valuable.\n\nWhere to start?\n\nIf you\u2019ve decided learning JavaScript is an important step for you, your next question may well be where to go from here.\n\nI\u2019ve collected some links to resources I know of or use, with some discussion about why you might want to check a particular site out. I hope this serves as a springboard for you to go out and learn as much as you want.\n\nBeginner\n\nIf you\u2019re just getting started with JavaScript, I\u2019d recommend heading to one of these places. They cover the basics and, in some cases, a little more advanced stuff. They\u2019re all reputable sources (although, I\u2019ve included something I wrote \u2014 you can decide about that one!) and will not lead you astray.\n\n\n\tjQuery\u2019s JavaScript 101 is a great first resource for JavaScript that will give you everything you need to work with jQuery like a pro.\n\tCodecademy\u2019s JavaScript Track is a small but useful JavaScript course. If you like learning interactively, this could be one for you.\n\tHTMLDog\u2019s JavaScript Tutorials take you right through from the basics of code to a brief introduction to newer technology like Node and Angular. [Disclaimer: I wrote this stuff, so it comes with a hazard warning!]\n\tThe tuts+ jQuery to JavaScript article mentioned earlier is great for seeing how jQuery code looks when converted to pure JavaScript.\n\n\nGetting in-depth\n\nFor more comprehensive documentation and help I\u2019d recommend adding these places to your list of go-tos.\n\n\n\tMDN: the Mozilla Developer Network is the first place I go for many JavaScript questions. I mostly find myself there via a search, but it\u2019s a great place to just go and browse.\n\tAxel Rauschmayer\u2019s 2ality is a stunning collection of articles that will take you deep into JavaScript. 
It\u2019s certainly worth looking at.\n\tAddy Osmani\u2019s JavaScript Design Patterns is a comprehensive collection of patterns for writing high quality JavaScript, particularly as you (I hope) start to write bigger and more complex applications.\n\n\nAnd finally\u2026\n\nI think the key to learning anything is curiosity and perseverance. If you have a question, go out and search for the answer, even if you have no idea where to start. Keep going and going and eventually you\u2019ll get there. I bet you\u2019ll learn a whole lot along the way. Good luck!\n\nMany thanks to the people who gave me their time when I was working on this article: Tom Oakley, Jack Franklin, Ben Howdle and Laura Kalbag.", "year": "2013", "author": "Tom Ashworth", "author_slug": "tomashworth", "published": "2013-12-05T00:00:00+00:00", "url": "https://24ways.org/2013/javascript-taking-off-the-training-wheels/", "topic": "code"} {"rowid": 15, "title": "Git for Grown-ups", "contents": "You are a clever and talented person. You create beautiful designs, or perhaps you have architected a system that even my cat could use. Your peers adore you. Your clients love you. But, until now, you haven\u2019t *&^#^! been able to make Git work. It makes you angry inside that you have to ask your co-worker, again, for that *&^#^! command to upload your work.\n\nIt\u2019s not you. It\u2019s Git. Promise.\n\nYes, this is an article about the popular version control system, Git. But unlike just about every other article written about Git, I\u2019m not going to give you the top five commands that you need to memorize; and I\u2019m not going to tell you all your problems would be solved if only you were using this GUI wrapper or that particular workflow. You see, I\u2019ve come to a grand realization: when we teach Git, we\u2019re doing it wrong.\n\nLet me back up for a second and tell you a little bit about the field of adult education. (Bear with me, it gets good and will leave you feeling both empowered and righteous.) Andragogy, unlike pedagogy, is a learner-driven educational experience. There are six main tenets to adult education: \n\n\n\tAdults prefer to know why they are learning something.\n\tThe foundation of the learning activities should include experience.\n\tAdults prefer to be able to plan and evaluate their own instruction.\n\tAdults are more interested in learning things which directly impact their daily activities.\n\tAdults prefer learning to be oriented not towards content, but towards problems.\n\tAdults relate more to their own motivators than to external ones.\n\n\nNowhere in this list does it include \u201cmemorize the five most popular Git commands\u201d. And yet this is how we teach version control: init, add, commit, branch, push. You\u2019re an expert! Sound familiar? In the hierarchy of learning, memorizing commands is the lowest, or most basic, form of learning. At the peak of learning you are able to not just analyze and evaluate a problem space, but create your own understanding in relation to your existing body of knowledge.\n\n\u201cFine,\u201d I can hear you saying to yourself. \u201cBut I\u2019m here to learn about version control.\u201d Right you are! So how can we use this knowledge to master Git? First of all: I give you permission to use Git as a tool. A tool which you control and which you assign tasks to. A tool like a hammer, or a saw. Yes, your mastery of your tools will shape the kinds of interactions you have with your work, and your peers. But it\u2019s yours to control. 
Git was written by kernel developers for kernel development. The web world has adopted Git, but it is not a tool designed for us and by us. It\u2019s no Sass, y\u2019know? Git wasn\u2019t developed out of our frustration with managing CSS files in an increasingly complex ecosystem of components and atomic design. So, as you work through the next part of this article, give yourself a bit of a break. We\u2019re in this together, and it\u2019s going to be OK.\n\nWe\u2019re going to do a little activity. We\u2019re going to create your perfect Git cheatsheet.\n\nI want you to start by writing down a list of all the people on your code team. This list may include:\n\n\n\tdevelopers\n\tdesigners\n\tproject managers\n\tclients\n\n\nNext, I want you to write down a list of all the ways you interact with your team. Maybe you\u2019re a solo developer and you do all the tasks. Maybe you only do a few things. But I want you to write down a list of all the tasks you\u2019re actually responsible for. For example, my list looks like this:\n\n\n\twriting code\n\treviewing code\n\tpublishing tested code to your server(s)\n\ttroubleshooting broken code\n\n\nThe next list will end up being a series of boxes in a diagram. But to start, I want you to write down a list of your tools and constraints. This list potentially has a lot of noun-like items and verb-like items:\n\n\n\tcode hosting system (Bitbucket? GitHub? Unfuddle? self-hosted?)\n\tserver ecosystem (dev/staging/live)\n\tautomated testing systems or review gates\n\tautomated build systems (that Jenkins dude people keep referring to)\n\n\nBrilliant! Now you\u2019ve got your actors and your actions, it\u2019s time to shuffle them into a diagram. There are many popular workflow patterns. None are inherently right or wrong; rather, some are more or less appropriate for what you are trying to accomplish.\n\nCentralized workflow\n\nEveryone saves to a single place. This workflow may mean no version control, or a very rudimentary version control system which only ever has a single copy of the work available to the team at any point in time.\n\n \n\nBranching workflow\n\nEveryone works from a copy of the same place, merging their changes into the main copy as their work is completed. Think of the branches as a motorcycle sidecar: they\u2019re along for the ride and probably cannot exist in isolation of the main project for long without serious danger coming to the either the driver or sidecar passenger. Branches are a fundamental concept in version control \u2014 they allow you to work on new features, bug fixes, and experimental changes within a single repository, but without forcing the changes onto others working from the same branch.\n\n \n\nForking workflow\n\nEveryone works from their own, independent repository. A fork is an exact duplicate of a repository that a developer can make their own changes to. It can be kept up to date with additional changes made in other repositories, but it cannot force its changes onto another\u2019s repository. A fork is a complete repository which can use its own workflow strategies. If developers wish to merge their work with the main project, they must make a request of some kind (submit a patch, or a pull request) which the project collaborators may choose to adopt or reject. 
This workflow is popular for open source projects as it enforces a review process.\n\n \n\nGitflow workflow\n\nA specific workflow convention which includes five streams of parallel coding efforts: master, development, feature branches, release branches, and hot fixes. This workflow is often simplified down to a few elements by web teams, but may be used wholesale by software product teams. The original article describing this workflow was written by Vincent Driessen back in January 2010.\n\n \n\nBut these workflows aren\u2019t about you yet, are they? So let\u2019s make the connections.\n\nFrom the list of people on your team you identified earlier, draw a little circle. Give each of these circles some eyes and a smile. Now I want you to draw arrows between each of these people in the direction that code (ideally) flows. Does your designer create responsive prototypes which are pushed to the developer? Draw an arrow to represent this.\n\nChances are high that you don\u2019t just have people on your team, but you also have some kind of infrastructure. Hopefully you wrote about it earlier. For each of the servers and code repositories in your infrastructure, draw a square. Now, add to your diagram the relationships between the people and each of the machines in the infrastructure. Who can deploy code to the live server? How does it really get there? I bet it goes through some kind of code hosting system, such as GitHub. Draw in those arrows.\n\nBut wait!\n\nThe code that\u2019s on your development machine isn\u2019t the same as the live code. This is where we introduce the concept of a branch in version control. In Git, a repository contains all of the code (sort of). A branch is a fragment of the code that has been worked on in isolation to the other branches within a repository. Often branches will have elements in common. When we compare two (or more) branches, we are asking about the difference (or diff) between these two slivers. Often the master branch is used on production, and the development branch is used on our dev server. The difference between these two branches is the untested code that is not yet deployed.\n\nOn your diagram, see if you can colour-code according to the branch names at each of the locations within your infrastructure. You might find it useful to make a few different copies of the diagram to isolate each of the tasks you need to perform. For example: our team has a peer review process that each branch must go through before it is merged into the shared development branch.\n\nFinally, we are ready to add the Git commands necessary to make sense of the arrows in our diagram. If we are bringing code to our own workstation we will issue one of the following commands: clone (the first time we bring code to our workstation) or pull. Remembering that a repository contains all branches, we will issue the command checkout to switch from one branch to another within our own workstation. If we want to share a particular branch with one of our team mates, we will push this branch back to the place we retrieved it from (the origin). Along each of the arrows in your diagram, write the name of the command you are going to use when you perform that particular task.\n\n \n\n
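For example, and this is a purely hypothetical sketch with invented repository and branch names, the labels along those arrows might translate into commands like these:\n\n# Get your own copy of the repository for the first time\ngit clone git@example.com:team/project.git\n\n# Switch to the branch you need on your workstation\ngit checkout development\n\n# Bring down the latest changes from the origin\ngit pull origin development\n\n# Share your work back to the origin\ngit push origin development\n\n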
From here, it\u2019s up to you to be selfish. Before asking Git what command it would like you to use, sketch the diagram of what you want. Git is your tool, you are not Git\u2019s tool. Draw the diagram. Communicate your tasks with your team as explicitly as you can. Insist on being a selfish adult learner \u2014 demand that others explain to you, in ways that are relevant to you, how to do the things you need to do today.", "year": "2013", "author": "Emma Jane Westby", "author_slug": "emmajanewestby", "published": "2013-12-04T00:00:00+00:00", "url": "https://24ways.org/2013/git-for-grownups/", "topic": "code"} {"rowid": 16, "title": "URL Rewriting for the Fearful", "contents": "I think it was Marilyn Monroe who said, \u201cIf you can\u2019t handle me at my worst, please just fix these rewrite rules, I\u2019m getting an internal server error.\u201d Even the blonde bombshell hated configuring URL rewrites on her website, and I think most of us know where she was coming from.\n\nThe majority of website projects I work on require some amount of URL rewriting, and I find it mildly enjoyable \u2014 I quite like a good rewrite rule. I suspect you may not share my glee, so in this article we\u2019re going to go back to basics to try to make the whole rigmarole more understandable.\n\nWhen we think about URL rewriting, usually that means adding some rules to an .htaccess file for an Apache web server. As that\u2019s the most common case, that\u2019s what I\u2019ll be sticking to here. If you work with a different server, there\u2019s often documentation specifically for translating from Apache\u2019s mod_rewrite rules. I even found an automatic converter for nginx.\n\nThis isn\u2019t going to be a comprehensive guide to every URL rewriting problem you might ever have. That would take us until Christmas. If you consider yourself a trial-and-error dabbler in the HTTP 500-infested waters of URL rewriting, then hopefully this will provide a little bit more of a basis to help you figure out what you\u2019re doing. If you\u2019ve ever found yourself staring at the white screen of death after screwing up your .htaccess file, don\u2019t worry. As Michael Jackson once insipidly whined, you are not alone.\n\nThe basics\n\nRewrite rules form part of the Apache web server\u2019s configuration for a website, and can be placed in a number of different locations as part of your virtual host configuration. By far the simplest and most portable option is to use an .htaccess file in your website root. Provided your server has mod_rewrite available, all you need to do to kick things off in your .htaccess file is:\n\nRewriteEngine on\n\nThe general formula for a rewrite rule is:\n\nRewriteRule URL/to/match URL/to/use/if/it/matches [options]\n\nWhen we talk about URL rewriting, we\u2019re normally talking about one of two things: redirecting the browser to a different URL; or rewriting the URL internally to use a particular file. We\u2019ll look at those in turn.\n\nRedirects\n\nRedirects match an incoming URL, and then redirect the user\u2019s browser to a different address. These can be useful for maintaining legacy URLs if content changes location as part of a site redesign. Redirecting the old URL to the new location makes sure that any incoming links, such as those from search engines, continue to work. \n\nIn 1998, Sir Tim Berners-Lee wrote that Cool URIs don\u2019t change, encouraging us all to go the extra mile to make sure links keep working forever. I think that sometimes it\u2019s fine to move things around \u2014 especially to correct bad URL design choices of the past \u2014 provided that you can do so while keeping those old URLs working. 
That\u2019s where redirects can help.\n\nA redirect might look like this\n\nRewriteRule ^article/used/to/be/here.php$ /article/now/lives/here/ [R=301,L]\n\nRewriting\n\nBy default, web servers closely map page URLs to the files in your site. On receiving a request for http://example.com/about/history.html the server goes to the configured folder for the example.com website, and then goes into the about folder and returns the history.html file.\n\nA rewrite rule changes that process by breaking the direct relationship between the URL and the file system. \u201cWhen there\u2019s a request for /about/history.html\u201d a rewrite rule might say, \u201cuse the file /about_section.php instead.\u201d\n\nThis opens up lots of possibilities for creative ways to map URLs to the files that know how to serve up the page. Most MVC frameworks will have a single rule to rewrite all page URLs to one single file. That file will be a script which kicks off the framework to figure out what to do to serve the page.\n\nRewriteRule ^for/this/url/$ /use/this/file.php [L] \n\nMatching patterns\n\nBy now you\u2019ll have noted the weird ^ and $ characters wrapped around the URL we\u2019re trying to match. That\u2019s because what we\u2019re actually using here is a pattern. Technically, it is what\u2019s called a Perl Compatible Regular Expression (PCRE) or simply a regex or regexp. We\u2019ll call it a pattern because we\u2019re not animals.\n\nWhat are these patterns? If I asked you to enter your credit card expiry date as MM/YY then chances are you\u2019d wonder what I wanted your credit card details for, but you\u2019d know that I wanted a two-digit month, a slash, and a two-digit year. That\u2019s not a regular expression, but it\u2019s the same idea: using some placeholder characters to define the pattern of the input you\u2019re trying to match.\n\nWe\u2019ve already met two regexp characters.\n\n\n\t^\n\tMatches the beginning of a string\n\t$\n\tMatches the end of a string\n\n\nWhen a pattern starts with ^ and ends with $ it\u2019s to make sure we match the complete URL start to finish, not just part of it. There are lots of other ways to match, too:\n\n\n\t[0-9]\n\tMatches a number, 0\u20139. [2-4] would match numbers 2 to 4 inclusive.\n\t[a-z]\n\tMatches lowercase letters a\u2013z\n\t[A-Z]\n\tMatches uppercase letters A\u2013Z\n\t[a-z0-9]\n\tCombining some of these, this matches letters a\u2013z and numbers 0\u20139\n\n\nThese are what we call character groups. The square brackets basically tell the server to match from the selection of characters within them. You can put any specific characters you\u2019re looking for within the brackets, as well as the ranges shown above. \n\nHowever, all these just match one single character. [0-9] would match 8 but not 84 \u2014 to match 84 we\u2019d need to use [0-9] twice.\n\n[0-9][0-9]\n\nSo, if we wanted to match 1984 we could to do this:\n\n[0-9][0-9][0-9][0-9] \n\n\u2026but that\u2019s getting silly. Instead, we can do this:\n\n[0-9]{4}\n\nThat means any character between 0 and 9, four times. If we wanted to match a number, but didn\u2019t know how long it might be (for example, a database ID in the URL) we could use the + symbol, which means one or more.\n\n[0-9]+\n\nThis now matches 1, 123 and 1234567.\n\nPutting it into practice\n\nLet\u2019s say we need to write a rule to match article URLs for this website, and to rewrite them to use /article.php under the hood. 
The articles all have URLs like this:\n\n2013/article-title/\n\nThey start with a year (from 2005 up to 2013, currently), a slash, and then have a URL-safe version of the article title (a slug), ending in a slash. We\u2019d match it like this:\n\n^[0-9]{4}/[a-z0-9-]+/$\n\nIf that looks frightening, don\u2019t worry. Breaking it down, from the start of the URL (^) we\u2019re looking for four numbers ([0-9]{4}). Then a slash \u2014 that\u2019s just literal \u2014 and then anything lowercase a\u2013z or 0\u20139 or a dash ([a-z0-9-]) one or more times (+), ending in a slash (/$).\n\nPutting that into a rewrite rule, we end up with this:\n\nRewriteRule ^[0-9]{4}/[a-z0-9-]+/$ /article.php\n\nWe\u2019re getting close now. We can match the article URLs and rewrite them to use article.php. Now we just need to make sure that article.php knows which article it\u2019s supposed to display.\n\nCapturing groups, and replacements\n\nWhen rewriting URLs you\u2019ll often want to take important parts of the URL you\u2019re matching and pass them along to the script that handles the request. That\u2019s usually done by adding those parts of the URL on as query string arguments. For our example, we want to make sure that article.php knows the year and the article title we\u2019re looking for. That means we need to call it like this:\n\n/article.php?year=2013&slug=article-title\n\nTo do this, we need to mark which parts of the pattern we want to reuse in the destination. We do this with round brackets or parentheses. By placing parentheses around parts of the pattern we want to reuse, we create what\u2019s called a capturing group. To capture an important part of the source URL to use in the destination, surround it in parentheses.\n\nOur pattern now looks like this, with parentheses around the parts that match the year and slug, but ignoring the slashes:\n\n^([0-9]{4})/([a-z0-9-]+)/$ \n\nTo use the capturing groups in the destination URL, we use the dollar sign and the number of the group we want to use. So, the first capturing group is $1, the second is $2 and so on. (The $ is unrelated to the end-of-pattern $ we used before.)\n\nRewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 \n\nThe value of the year capturing group gets used as $1 and the article title slug is $2. Had there been a third group, that would be $3 and so on. In regexp parlance, these are called back-references as they refer back to the pattern.\n\n
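A handy way to check a pattern before it goes anywhere near your .htaccess file is to try it out in your browser\u2019s JavaScript console. A hedged aside: PCRE and JavaScript regular expressions do differ in places, but not for simple patterns like ours.\n\nvar pattern = new RegExp('^([0-9]{4})/([a-z0-9-]+)/$');\nvar match = '2013/url-rewriting-for-the-fearful/'.match(pattern);\n// match[1] is '2013' and match[2] is 'url-rewriting-for-the-fearful'\n\n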
Options\n\nSeveral brain-taxing minutes ago, I mentioned some options as the final part of a rewrite rule. There are lots of options (or flags) you can set to change how the rule is processed. The most useful (to my mind) are:\n\n\n\tR=301\n\tPerform an HTTP 301 redirect to send the user\u2019s browser to the new URL. A status of 301 means a resource has moved permanently and so it\u2019s a good way of both redirecting the user to the new URL, and letting search engines know to update their indexes.\n\tL\n\tLast. If this rule matches, don\u2019t bother processing the following rules.\n\n\nOptions are set in square brackets at the end of the rule. You can set multiple options by separating them with commas:\n\nRewriteRule ^([0-9]{4})/([a-z0-9-]+)/$ /article.php?year=$1&slug=$2 [L]\n\nor\n\nRewriteRule ^about/([a-z0-9-]+).jsp/$ /about/$1/ [R=301,L] \n\nCommon pitfalls\n\nOnce you\u2019ve built up a few rewrite rules, things can start to go wrong. You may have been there: a rule which looks perfectly good is somehow not matching. One common reason for this is hidden behind that [L] flag. \n\nL for Last is a useful option to tell the rewrite engine to stop once the rule has been matched. This is what it does \u2014 the remaining rules in the .htaccess file are then ignored. However, once a URL has been rewritten, the entire set of rules are then run again on the new URL. If the new URL matches any of the rules, that too will be rewritten and on it goes. \n\nOne way to avoid this problem is to keep your \u2018real\u2019 pages under a folder path that will never match one of your rules, or that you can exclude from the rewrite rules.\n\nUseful snippets\n\nI find myself reusing the same few rules over and over again, just with minor changes. Here are some useful examples to refer back to.\n\nExcluding a directory\n\nAs mentioned above, if you\u2019re rewriting lots of fancy URLs to a collection of real files it can be helpful to put those files in a folder and exclude it from rewrite rules. This helps solve the issue of rewrite rules reapplying to your newly rewritten URL. To exclude a directory, put a rule like this at the top of your file, before your other rules. Our files are in a folder called _source, the dash in the rule means do nothing, and the L flag means the following rules won\u2019t be applied.\n\nRewriteRule ^_source - [L]\n\nThis is also useful for excluding things like CMS folders from your website\u2019s rewrite rules\n\nRewriteRule ^perch - [L] \n\nAdding or removing www from the domain\n\nSome folk like to use a www and others don\u2019t. Usually, it\u2019s best to pick one and go with it, and redirect the one you don\u2019t want. On this site, we don\u2019t use www.24ways.org so we redirect those requests to 24ways.org.\n\nThis uses a RewriteCond which is like an if for a rewrite rule: \u201cIf this condition matches, then apply the following rule.\u201d In this case, it\u2019s if the HTTP HOST (or domain name, basically) matches this pattern, then redirect everything:\n\nRewriteCond %{HTTP_HOST} ^www\\.24ways\\.org$ [NC]\nRewriteRule ^(.*)$ http://24ways.org/$1 [R=301,L]\n\nThe [NC] flag means \u2018no case\u2019 \u2014 the match is case-insensitive. The dots in the domain are escaped with a backslash, as a dot is a regular expression character which means match anything, so we escape it because we literally mean a dot in this instance.\n\nRemoving file extensions\n\nSometimes all you need to do to tidy up a URL is strip off the technology-specific file extension, so that /about/history.php becomes /about/history. This is easily achieved with the help of some more rewrite conditions.\n\nRewriteCond %{REQUEST_FILENAME} !-f\nRewriteCond %{REQUEST_FILENAME} !-d\nRewriteCond %{REQUEST_FILENAME}.php -f\nRewriteRule ^(.+)$ $1.php [L,QSA]\n\nThis says if the file being asked for isn\u2019t a file (!-f) and if it isn\u2019t a directory (!-d) and if the file name plus .php is an actual file (-f) then rewrite by adding .php on the end. The QSA flag means \u2018query string append\u2019: append the existing query string onto the rewritten URL.\n\nIt\u2019s these sorts of more generic catch-all rules that you need to watch out for when your .htaccess gets rerun after a successful match. Without care they can easily rematch the newly rewritten URL.\n\nLogging for when it all goes wrong\n\nAlthough not possible within your .htaccess file, if you have access to your Apache configuration files you can enable rewrite logging. 
This can be useful to track down where a rule is going wrong, if it\u2019s matching incorrectly or failing to match. It also gives you an overview of the amount of work being done by the rewrite engine, enabling you to rearrange your rules and maximise performance.\n\nRewriteEngine On\nRewriteLog \"/full/system/path/to/rewrite.log\"\nRewriteLogLevel 5\n\nTo be doubly clear: this will not work from an .htaccess file \u2014 it needs to be added to the main Apache configuration files. (I sometimes work using MAMP PRO locally on my Mac, and this can be pasted into the snappily named Customized virtual host general settings box in the Advanced tab for your site.)\n\nThe white screen of death\n\nOne of the most frustrating things when working with rewrite rules is that when you make a mistake it can result in the server returning an HTTP 500 Internal Server Error. This in itself isn\u2019t an error message, of course. It\u2019s more of a notification that an error has occurred. The real error message can usually be found in your Apache error log.\n\nIf you have access to your server logs, check the Apache error log and you\u2019ll usually find a much more descriptive error message, pointing you towards your mistake. (Again, if using MAMP PRO, go to Server, Apache and the View Log button.)\n\nIn conclusion\n\nRewriting URLs can be a bear, but the advantages are clear. Keeping a tidy URL structure, disconnected from the technology or file structure of your site can result in URLs that are easier to use and easier to maintain into the future.\n\nIf you\u2019re redesigning a site, remember that cool URIs don\u2019t change, so budget some time to make sure that any content you move has a rewrite rule associated with it to keep any links working.\n\nFurther reading\n\nTo find out more about URL rewriting and perhaps even learn more about regular expressions, I can recommend the following resources.\n\n\n\tFrom the horse\u2019s mouth, the Apache mod_rewrite documentation\n\tParticularly useful with that documentation is the RewriteRule Flags listing\n\tYou may wish to don sunglasses to follow the otherwise comprehensive Regular-Expressions.info tutorial\n\tFriend of 24 ways, Neil Crosby has a mod_rewrite Beginner\u2019s Guide which I\u2019ve found handy over the years.\n\n\nAs noted at the start, this isn\u2019t a fully comprehensive guide, but I hope it\u2019s useful in finding your feet with a powerful but sometimes annoying technology. Do you have useful snippets you often use on projects? Feel free to share them in the comments.", "year": "2013", "author": "Drew McLellan", "author_slug": "drewmclellan", "published": "2013-12-01T00:00:00+00:00", "url": "https://24ways.org/2013/url-rewriting-for-the-fearful/", "topic": "code"} {"rowid": 18, "title": "Grunt for People Who Think Things Like Grunt are Weird and Hard", "contents": "Front-end developers are often told to do certain things:\n\n\n\tWork in as small chunks of CSS and JavaScript as makes sense to you, then concatenate them together for the production website.\n\tCompress your CSS and minify your JavaScript to make their file sizes as small as possible for your production website.\n\tOptimize your images to reduce their file size without affecting quality.\n\tUse Sass for CSS authoring because of all the useful abstraction it allows.\n\n\nThat\u2019s not a comprehensive list of course, but those are the kind of things we need to do. You might call them tasks.\n\nI bet you\u2019ve heard of Grunt. Well, Grunt is a task runner. 
Grunt can do all of those things for you. Once you\u2019ve got it set up, which isn\u2019t particularly difficult, those things can happen automatically without you having to think about them again.\n\nBut let\u2019s face it: Grunt is one of those fancy newfangled things that all the cool kids seem to be using but at first glance feels strange and intimidating. I hear you. This article is for you.\n\nLet\u2019s nip some misconceptions in the bud right away\n\nPerhaps you\u2019ve heard of Grunt, but haven\u2019t done anything with it. I\u2019m sure that applies to many of you. Maybe one of the following hang-ups applies to you.\n\nI don\u2019t need the things Grunt does\n\nYou probably do, actually. Check out that list up top. Those things aren\u2019t nice-to-haves. They are pretty vital parts of website development these days. If you already do all of them, that\u2019s awesome. Perhaps you use a variety of different tools to accomplish them. Grunt can help bring them under one roof, so to speak. If you don\u2019t already do all of them, you probably should and Grunt can help. Then, once you are doing those, you can keep using Grunt to do more for you, which will basically make you better at doing your job.\n\nGrunt runs on Node.js \u2014 I don\u2019t know Node\n\nYou don\u2019t have to know Node. Just like you don\u2019t have to know Ruby to use Sass. Or PHP to use WordPress. Or C++ to use Microsoft Word.\n\nI have other ways to do the things Grunt could do for me\n\nAre they all organized in one place, configured to run automatically when needed, and shared among every single person working on that project? Unlikely, I\u2019d venture.\n\nGrunt is a command line tool \u2014 I\u2019m just a designer\n\nI\u2019m a designer too. I prefer native apps with graphical interfaces when I can get them. But I don\u2019t think that\u2019s going to happen with Grunt.\n\nThe extent to which you need to use the command line is:\n\n\n\tNavigate to your project\u2019s directory.\n\tType grunt and press Return.\n\n\nAfter set-up, that is, which again isn\u2019t particularly difficult.\n\nOK. Let\u2019s get Grunt installed\n\nNode is indeed a prerequisite for Grunt. If you don\u2019t have Node installed, don\u2019t worry, it\u2019s very easy. You literally download an installer and run it. Click the big Install button on the Node website.\n\nYou install Grunt on a per-project basis. Go to your project\u2019s folder. It needs a file there named package.json at the root level. You can just create one and put it there.\n\n package.json at root\n\nThe contents of that file should be this:\n\n{\n \"name\": \"example-project\",\n \"version\": \"0.1.0\",\n \"devDependencies\": {\n \"grunt\": \"~0.4.1\"\n }\n}\n\nFeel free to change the name of the project and the version, but the devDependencies thing needs to be in there just like that.\n\nThis is how Node does dependencies. Node has a package manager called NPM (Node packaged modules) for managing Node dependencies (like a gem for Ruby if you\u2019re familiar with that). You could even think of it a bit like a plug-in for WordPress.\n\nOnce that package.json file is in place, go to the terminal and navigate to your folder. 
Terminal rubes like me do it like this:\n\n Terminal rube changing directories\n\nThen run the command:\n\nnpm install\n\nAfter you\u2019ve run that command, a new folder called node_modules will show up in your project.\n\n Example of node_modules folder\n\nThe other files you see there, README.md and LICENSE are there because I\u2019m going to put this project on GitHub and that\u2019s just standard fare there.\n\nThe last installation step is to install the Grunt CLI (command line interface). That\u2019s what makes the grunt command in the terminal work. Without it, typing grunt will net you a \u201cCommand Not Found\u201d-style error. It is a separate installation for efficiency reasons. Otherwise, if you had ten projects you\u2019d have ten copies of Grunt CLI.\n\nThis is a one-liner again. Just run this command in the terminal:\n\nnpm install -g grunt-cli\n\nYou should close and reopen the terminal as well. That\u2019s a generic good practice to make sure things are working right. Kinda like how restarting your computer after installing a new application was standard practice in the olden days.\n\nLet\u2019s make Grunt concatenate some files\n\nPerhaps in our project there are three separate JavaScript files:\n\n\n\tjquery.js \u2013 The library we are using.\n\tcarousel.js \u2013 A jQuery plug-in we are using.\n\tglobal.js \u2013 Our authored JavaScript file where we configure and call the plug-in.\n\n\nIn production, we would concatenate all those files together for performance reasons (one request is better than three). We need to tell Grunt to do this for us.\n\nBut wait. Grunt actually doesn\u2019t do anything all by itself. Remember Grunt is a task runner. The tasks themselves we will need to add. We actually haven\u2019t set up Grunt to do anything yet, so let\u2019s do that.\n\nThe official Grunt plug-in for concatenating files is grunt-contrib-concat. You can read about it on GitHub if you want, but all you have to do to use it on your project is to run this command from the terminal (it will henceforth go without saying that you need to run the given commands from your project\u2019s root folder):\n\nnpm install grunt-contrib-concat --save-dev\n\nA neat thing about doing it this way: your package.json file will automatically be updated to include this new dependency. Open it up and check it out. You\u2019ll see a new line:\n\n\"grunt-contrib-concat\": \"~0.3.0\"\n\nNow we\u2019re ready to use it. To use it we need to start configuring Grunt and telling it what to do.\n\nYou tell Grunt what to do via a configuration file named Gruntfile.js.\n\nJust like our package.json file, our Gruntfile.js has a very special format that must be just right. I wouldn\u2019t worry about what every word of this means. Just check out the format:\n\nmodule.exports = function(grunt) {\n\n // 1. All configuration goes here \n grunt.initConfig({\n pkg: grunt.file.readJSON('package.json'),\n\n concat: {\n // 2. Configuration for concatenating files goes here.\n }\n\n });\n\n // 3. Where we tell Grunt we plan to use this plug-in.\n grunt.loadNpmTasks('grunt-contrib-concat');\n\n // 4. Where we tell Grunt what to do when we type \"grunt\" into the terminal.\n grunt.registerTask('default', ['concat']);\n\n};\n\nNow we need to create that configuration. The documentation can be overwhelming. Let\u2019s focus just on the very simple usage example.\n\nRemember, we have three JavaScript files we\u2019re trying to concatenate. 
We\u2019ll list the paths to those files under src, in an array of quoted strings, and then we\u2019ll list a destination file as dest. The destination file doesn\u2019t have to exist yet. It will be created when this task runs and squishes all the files together.\n\nBoth our jquery.js and carousel.js files are libraries. We most likely won\u2019t be touching them. So, for organization, we\u2019ll keep them in a /js/libs/ folder. Our global.js file is where we write our own code, so that will be right in the /js/ folder. Now let\u2019s tell Grunt to find all those files and squish them together into a single file named production.js, named that way to indicate it is for use on our real live website.\n\nconcat: { \n dist: {\n src: [\n 'js/libs/*.js', // All JS in the libs folder\n 'js/global.js' // This specific file\n ],\n dest: 'js/build/production.js',\n }\n}\n\nNote: throughout this article there will be little chunks of configuration code like the one above. The intention is to focus in on the important bits, but it can be confusing at first to see how a particular chunk fits into the larger file. If you ever get confused and need more context, refer to the complete file.\n\nWith that concat configuration in place, head over to the terminal, run the command:\n\ngrunt\n\nand watch it happen! production.js will be created and will be a perfect concatenation of our three files. This was a big aha! moment for me. Feel the power course through your veins. Let\u2019s do more things!\n\nLet\u2019s make Grunt minify that JavaScript\n\nWith so much prep work done now, adding new tasks for Grunt to run is relatively easy. We just need to:\n\n\n\tFind a Grunt plug-in to do what we want\n\tLearn the configuration style of that plug-in\n\tWrite that configuration to work with our project\n\n\nThe official plug-in for minifying code is grunt-contrib-uglify. Just like last time, we run an NPM command to install it:\n\nnpm install grunt-contrib-uglify --save-dev\n\nThen we alter our Gruntfile.js to load the plug-in:\n\ngrunt.loadNpmTasks('grunt-contrib-uglify');\n\nThen we configure it:\n\nuglify: {\n build: {\n src: 'js/build/production.js',\n dest: 'js/build/production.min.js'\n }\n}\n\nLet\u2019s update that default task to also run minification:\n\ngrunt.registerTask('default', ['concat', 'uglify']);\n\nSuper-similar to the concatenation set-up, right?\n\nRun grunt at the terminal and you\u2019ll get some deliciously minified JavaScript:\n\n Minified JavaScript\n\nThat production.min.js file is what we would load up for use in our index.html file.\n\nLet\u2019s make Grunt optimize our images\n\nWe\u2019ve got this down pat now. Let\u2019s just go through the motions. The official image minification plug-in for Grunt is grunt-contrib-imagemin. Install it:\n\nnpm install grunt-contrib-imagemin --save-dev\n\nRegister it in the Gruntfile.js:\n\ngrunt.loadNpmTasks('grunt-contrib-imagemin');\n\nConfigure it:\n\nimagemin: {\n dynamic: {\n files: [{\n expand: true,\n cwd: 'images/',\n src: ['**/*.{png,jpg,gif}'],\n dest: 'images/build/'\n }]\n }\n}\n\nMake sure it runs:\n\ngrunt.registerTask('default', ['concat', 'uglify', 'imagemin']);\n\nRun grunt and watch that gorgeous squishification happen:\n\n Squished images\n\nGotta love performance increases for nearly zero effort.\n\nLet\u2019s get a little bit smarter and automate\n\nWhat we\u2019ve done so far is awesome and incredibly useful.
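\n\nBefore moving on, it can help to see those chunks in context. Pieced together from the snippets above (nothing new here, just assembled in one place), the whole Gruntfile.js would look something like this:\n\nmodule.exports = function(grunt) {\n\n grunt.initConfig({\n pkg: grunt.file.readJSON('package.json'),\n\n concat: {\n dist: {\n src: ['js/libs/*.js', 'js/global.js'],\n dest: 'js/build/production.js'\n }\n },\n\n uglify: {\n build: {\n src: 'js/build/production.js',\n dest: 'js/build/production.min.js'\n }\n },\n\n imagemin: {\n dynamic: {\n files: [{\n expand: true,\n cwd: 'images/',\n src: ['**/*.{png,jpg,gif}'],\n dest: 'images/build/'\n }]\n }\n }\n\n });\n\n grunt.loadNpmTasks('grunt-contrib-concat');\n grunt.loadNpmTasks('grunt-contrib-uglify');\n grunt.loadNpmTasks('grunt-contrib-imagemin');\n\n grunt.registerTask('default', ['concat', 'uglify', 'imagemin']);\n\n};\n\n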
But there are a couple of things we can get smarter on and make things easier on ourselves, as well as Grunt:\n\n\n\tRun these tasks automatically when they should\n\tRun only the tasks needed at the time\n\n\nFor instance:\n\n\n\tConcatenate and minify JavaScript when JavaScript changes\n\tOptimize images when a new image is added or an existing one changes\n\n\nWe can do this by watching files. We can tell Grunt to keep an eye out for changes to specific places and, when changes happen in those places, run specific tasks. Watching happens through the official grunt-contrib-watch plug-in.\n\nI\u2019ll let you install it. It is exactly the same process as the last few plug-ins we installed. We configure it by giving watch specific files (or folders, or both) to watch. By watch, I mean monitor for file changes, file deletions or file additions. Then we tell it what tasks we want to run when it detects a change.\n\nWe want to run our concatenation and minification when anything in the /js/ folder changes. When it does, we should run the JavaScript-related tasks. And when things happen elsewhere, we should not run the JavaScript-related tasks, because that would be irrelevant. So:\n\nwatch: {\n scripts: {\n files: ['js/*.js'],\n tasks: ['concat', 'uglify'],\n options: {\n spawn: false,\n },\n } \n}\n\nFeels pretty comfortable at this point, hey? The only weird bit there is the spawn thing. And you know what? I don\u2019t even really know what that does. From what I understand from the documentation it is the smart default. That\u2019s real-world development. Just leave it alone if it\u2019s working and if it\u2019s not, learn more.\n\nNote: Isn\u2019t it frustrating when something that looks so easy in a tutorial doesn\u2019t seem to work for you? If you can\u2019t get Grunt to run after making a change, it\u2019s very likely to be a syntax error in your Gruntfile.js. That might look like this in the terminal:\n\n Errors running Grunt\n\nUsually Grunt is pretty good about letting you know what happened, so be sure to read the error message. In this case, a syntax error in the form of a missing comma foiled me. Adding the comma allowed it to run.\n\nLet\u2019s make Grunt do our preprocessing\n\nThe last thing on our list from the top of the article is using Sass \u2014 yet another task Grunt is well-suited to run for us. But wait: isn\u2019t Sass written in Ruby? Indeed it is. There is a version of Sass that will run in Node and thus not add an additional dependency to our project, but it\u2019s not quite up to snuff with the main Ruby project. So, we\u2019ll use the official grunt-contrib-sass plug-in, which just assumes you have Sass installed on your machine. If you don\u2019t, follow the command line instructions.\n\nWhat\u2019s neat about Sass is that it can do concatenation and minification all by itself. So for our little project we can just have it compile our main global.scss file:\n\nsass: {\n dist: {\n options: {\n style: 'compressed'\n },\n files: {\n 'css/build/global.css': 'css/global.scss'\n }\n } \n}\n\nWe wouldn\u2019t want to run this task manually. We already have the watch plug-in installed, so let\u2019s use it! Within the watch configuration, we\u2019ll add another subtask:\n\ncss: {\n files: ['css/*.scss'],\n tasks: ['sass'],\n options: {\n spawn: false,\n }\n}\n\nThat\u2019ll do it. Now, every time we change any of our Sass files, the CSS will automatically be updated.\n\nLet\u2019s take this one step further (it\u2019s absolutely worth it) and add LiveReload.
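\n\nBefore we do, here\u2019s the complete watch block with both subtasks in place (again, nothing new, just the two pieces from above together):\n\nwatch: {\n scripts: {\n files: ['js/*.js'],\n tasks: ['concat', 'uglify'],\n options: {\n spawn: false,\n },\n },\n css: {\n files: ['css/*.scss'],\n tasks: ['sass'],\n options: {\n spawn: false,\n }\n }\n}\n\n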
With LiveReload, you won\u2019t have to go back to your browser and refresh the page. Page refreshes happen automatically and, in the case of CSS, new styles are injected without a page refresh (handy for heavily state-based websites).\n\nIt\u2019s very easy to set up, since the LiveReload ability is built into the watch plug-in. We just need to:\n\n\n\tInstall the browser plug-in.\n\tAdd to the top of the watch configuration:\n\nwatch: {\n options: {\n livereload: true,\n },\n scripts: { \n /* etc */\n\n\tRestart the browser and click the LiveReload icon to activate it.\n\tUpdate some Sass and watch it change the page automatically.\n\n\n Live reloading browser\n\nYum.\n\nPrefer a video?\n\nIf you\u2019re the type that likes to learn by watching, I\u2019ve made a screencast to accompany this article that I\u2019ve published over on CSS-Tricks: First Moments with Grunt\n\nLeveling up\n\nAs you might imagine, there is a lot of leveling up you can do with your build process. It surely could be a full-time job in some organizations.\n\nSome hardcore devops nerds might scoff at the simplistic setup we have going here. But I\u2019d advise them to slow their roll. Even what we have done so far is tremendously valuable. And don\u2019t forget this is all free and open source, which is amazing.\n\nYou might level up by adding more useful tasks:\n\n\n\tRunning your CSS through Autoprefixer (A+, would recommend) instead of preprocessor add-ons.\n\tWriting and running JavaScript unit tests (example: Jasmine).\n\tBuilding your image sprites and SVG icons automatically (example: Grunticon).\n\tStarting a server, so you can link to assets with proper file paths and use services that require a real URL like TypeKit and such, as well as remove the need for other tools that do this, like MAMP.\n\tChecking for code problems with HTML-Inspector, CSS Lint, or JS Hint.\n\tHaving new CSS automatically injected into the browser whenever it changes.\n\tHelping you commit or push to a version control repository like GitHub.\n\tAdding version numbers to your assets (cache busting).\n\tHelping you deploy to a staging or production environment (example: DPLOY).\n\n\nYou might level up by simply understanding more about Grunt itself:\n\n\n\tRead Grunt Boilerplate by Mark McDonnell.\n\tRead Grunt Tips and Tricks by Nicolas Bevacqua.\n\tOrganize your Gruntfile.js by splitting it up into smaller files.\n\tCheck out other people\u2019s and projects\u2019 Gruntfile.js.\n\tLearn more about Grunt by digging into its source and learning about its API.\n\n\nLet\u2019s share\n\nI think some group sharing would be a nice way to wrap this up. If you are installing Grunt for the first time (or remember doing that), be especially mindful of little frustrating things you experience(d) but work(ed) through. Those are the things we should share in the comments here. That way we have this safe place and useful resource for working through those confusing moments without the embarrassment. We\u2019re all in this thing together!\n\n \n\n1 Maybe someday someone will make a beautiful Grunt app for your operating system of choice. But I\u2019m not sure that day will come. The configuration of the plug-ins is the important part of using Grunt. Each plug-in is a bit different, depending on what it does. That means a uniquely considered UI for every single plug-in, which is a long shot.\n\nPerhaps a decent middle ground is this Grunt DevTools Chrome add-on.\n\n2 Gruntfile.js is often referred to as Gruntfile in documentation and examples.
Don\u2019t literally name it Gruntfile \u2014 it won\u2019t work.", "year": "2013", "author": "Chris Coyier", "author_slug": "chriscoyier", "published": "2013-12-11T00:00:00+00:00", "url": "https://24ways.org/2013/grunt-is-not-weird-and-hard/", "topic": "code"} {"rowid": 20, "title": "Make Your Browser Dance", "contents": "It was a crisp winter\u2019s evening when I pulled up alongside the pier. I stepped out of my car and the bitterly cold sea air hit my face. I walked around to the boot, opened it and heaved out a heavy flight case. I slammed the boot shut, locked the car and started walking towards the venue.\n\nThis was it. My first gig. I thought about all those weeks of preparation: editing video clips, creating 3-D objects, making coloured patterns, then importing them all into software and configuring effects to change as the music did; targeting frequency, beat, velocity, modifying size, colour, starting point; creating playlists of these\u2026 and working out ways to mix them as the music played.\n\nThis was it. This was me VJing.\n\nThis was all a lifetime (well, a decade!) ago.\n\nWhen I started web designing, VJing took a back seat. I was more interested in interactive layouts, semantic accessible HTML, learning all the IE bugs and mastering the quirks that CSS has to offer. More recently, I have been excited by background gradients, 3-D transforms, the @keyframes directive, as well as new APIs such as getUserMedia, indexedDB and the Web Audio API.\n\nBut wait, have I just come full circle? Could it be possible, with these wonderful new things in technologies I am already familiar with, that I could VJ again, right here, in a browser?\n\nWell, there\u2019s only one thing to do: let\u2019s try it!\n\nLet\u2019s take to the dance floor\n\nOver the past couple of years working in The Lab I have learned to take a much more iterative approach to projects than before. One of my new favourite methods of working is to create a proof of concept to make sure my theory is feasible, before going on to create a full-blown product. So let\u2019s take the same approach here.\n\nThe main VJing functionality I want to recreate is manipulating visuals in relation to sound. So for my POC I need to create a visual, with parameters that can be changed, then get some sound and see if I can analyse that sound to detect some data, which I can then use to manipulate the visual parameters. Easy, right?\n\nSo, let\u2019s start at the beginning: creating a simple visual. For this I\u2019m going to create a CSS animation. It\u2019s just a funky i element with the opacity being changed to make it flash.\n\n See the Pen Creating a light by Rumyra (@Rumyra) on CodePen\n\nA note about prefixes: I\u2019ve left them out of the code examples in this post to make them easier to read. Please be aware that you may need them. A great resource for finding out if you do is caniuse.com. You can also check out all the code for the examples in this article.\n\nStart the music\n\nWell, that\u2019s pretty easy so far. Next up: loading in some sound. For this we\u2019ll use the Web Audio API. The Web Audio API is based around the concept of nodes. You have a source node: the sound you are loading in; a destination node: usually the device\u2019s speakers; and any number of processing nodes in between.
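\n\nIn code, that chain is expressed by connecting nodes to one another, along these lines (just a sketch; someProcessingNode stands in for whichever processing node you create, and audioContext is the context we\u2019re about to set up):\n\nsource.connect(someProcessingNode);\nsomeProcessingNode.connect(audioContext.destination);\n\n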
All this processing that goes on with the audio is sandboxed within the AudioContext.\n\nSo, let\u2019s start by initialising our audio context.\n\nvar contextClass = window.AudioContext;\nif (contextClass) {\n //web audio api available.\n var audioContext = new contextClass();\n} else {\n //web audio api unavailable\n //warn user to upgrade/change browser\n}\n\nNow let\u2019s load our sound file into the new context we created with an XMLHttpRequest.\n\nvar audioBuffer; //the decoded sound will live here\n\nfunction loadSound() {\n\t//set audio file url\n\tvar audioFileUrl = '/octave.ogg';\n\t//create new request\n\tvar request = new XMLHttpRequest();\n\trequest.open(\"GET\", audioFileUrl, true);\n\trequest.responseType = \"arraybuffer\";\n\n\trequest.onload = function() {\n\t\t//take the response and decode it into an audio buffer\n\t\taudioContext.decodeAudioData(request.response, function(buffer) {\n\t \taudioBuffer = buffer;\n\t });\n\t};\n\trequest.send();\n}\n\nPhew! Now we\u2019ve loaded in some sound! There are plenty of things we can do with the Web Audio API: increase volume; add filters; spatialisation. If you want to dig deeper, the O\u2019Reilly Web Audio API book by Boris Smus is available to read online free.\n\nAll we really want to do for this proof of concept, however, is analyse the sound data. To do this we really need to know what data we have.\n\nLearning the steps\n\nLet\u2019s take a minute to step back and remember our school days and science class. I\u2019m sure if I drew a picture of a sound wave, we would all start nodding our heads.\n\nThe sound you hear is caused by pressure differences in the particles in the air. Sound pushes these particles together, causing vibrations. Amplitude is basically the strength of that pressure. A simple example of a change of amplitude is when you increase the volume on your stereo and the output wave increases in size.\n\nThis is great when everything is analogue, but the waveform varies continuously and it\u2019s not suitable for digital processing: there\u2019s an infinite set of values. For digital processing, we need discrete numbers.\n\nWe have to sample the waveform at set time intervals, and record data such as amplitude and frequency. Luckily for us, just the fact we have a digital sound file means all this hard work is done for us. What we\u2019re doing in the code above is piping that data into the audio context. All we need to do now is access it.\n\nWe can do this with the Web Audio API\u2019s analysing functionality. Just pop in an analysing node before we connect the source to its destination node.\n\nfunction createAnalyser(source) {\n\t//create analyser node (deliberately global so update() can use it)\n\tanalyser = audioContext.createAnalyser();\n\t//connect the source to it\n\tsource.connect(analyser);\n\t//pipe to speakers\n\tanalyser.connect(audioContext.destination);\n}\n\nThe data I\u2019m really interested in here is frequency. Later we could look into amplitude or time, but for now I\u2019m going to stick with frequency.\n\nThe analyser node gives us frequency data via the getByteFrequencyData method.\n\nDon\u2019t forget to count!\n\nTo collect the data from the getByteFrequencyData method, we need to pass in an empty array (a JavaScript typed array is ideal). But how do we know how many items the array will need when we create it?\n\nThis is really up to us and how high the resolution of frequencies we want to analyse is. Remember we talked about sampling the waveform; this happens at a certain rate (sample rate) which you can find out via the audio context\u2019s sampleRate attribute.
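\n\nAs an aside, none of the snippets so far actually starts the sound playing. Here\u2019s a minimal sketch of that, assuming the audioBuffer variable from loadSound() and the createAnalyser() function above:\n\nfunction playSound() {\n\t//create a source node from the decoded buffer\n\tvar source = audioContext.createBufferSource();\n\tsource.buffer = audioBuffer;\n\t//wire it through the analyser on its way to the speakers\n\tcreateAnalyser(source);\n\t//start playback from the beginning\n\tsource.start(0);\n}\n\nCall playSound() once the buffer has been decoded, and the analyser will start seeing data.\n\n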
That sample rate is good to bear in mind when you\u2019re thinking about your resolution of frequencies.\n\nvar sampleRate = audioContext.sampleRate;\n\nLet\u2019s say your file sample rate is 48,000, making the maximum frequency in the file 24,000Hz (thanks to a wonderful theorem from Dr Harry Nyquist, the maximum frequency in the file is always half the sample rate). The analyser array we\u2019re creating will contain frequencies up to this point. This is ideal, as the human ear hears the range 0\u201320,000Hz.\n\nSo, if we create an array which has 2,400 items, each frequency recorded will be 10Hz apart. However, we are going to create an array which is half the size of the FFT (fast Fourier transform), which in this case is 2,048, the default. You can set it via the fftSize property.\n\n//set our FFT size\nanalyser.fftSize = 2048;\n//create an empty array with 1024 items\nvar frequencyData = new Uint8Array(1024);\n\nSo, with an array of 1,024 items, and a frequency range of 24,000Hz, we know each item is 24,000 \u00f7 1,024 = 23.44Hz apart.\n\nThe thing is, we also want that array to be updated constantly. We could use the setInterval or setTimeout methods for this; however, I prefer the new and shiny requestAnimationFrame.\n\nfunction update() {\n \t//constantly getting feedback from data\n \trequestAnimationFrame(update);\n \tanalyser.getByteFrequencyData(frequencyData);\n}\n\nPutting it all together\n\nSweet sticks! Now we have an array of frequencies from the sound we loaded, updating as the sound plays. Now we want that data to trigger our animation from earlier.\n\nWe can easily pause and run our CSS animation from JavaScript:\n\nelement.style.webkitAnimationPlayState = \"paused\";\nelement.style.webkitAnimationPlayState = \"running\";\n\nUnfortunately, this may not be ideal as our animation might be a whole heap longer than just a flashing light. We may want to target specific points within that animation to have it stop and start in a visually pleasing way and perhaps not smack bang in the middle.\n\nThere is no really easy way to do this at the moment as Zach Saucier explains in this wonderful article. It takes some jiggery pokery with setInterval to try to ascertain how far through the CSS animation you are in percentage terms.\n\nThis seems a bit much for our proof of concept, so let\u2019s backtrack a little. We know by the animation we\u2019ve created which CSS properties we want to change. This is pretty easy to do directly with JavaScript.\n\nelement.style.opacity = \"1\";\nelement.style.opacity = \"0.2\";\n\nSo let\u2019s start putting it all together. For this example I want to trigger each light as a different frequency plays. For this, I\u2019ll loop through the HTML elements and change the opacity style if the frequency gain goes over a certain threshold.\n\n//get light elements\nvar lights = document.getElementsByTagName('i');\nvar totalLights = lights.length;\n\nfor (var i = 0; i < totalLights; i++) {\n if (frequencyData[i] > 160) {\n //start animation on element\n lights[i].style.opacity = \"1\";\n } else {\n lights[i].style.opacity = \"0.2\";\n }\n}\n\nSee all the code in action here. I suggest viewing in a modern browser :)\n\nAwesome! It is true \u2014 we can VJ in our browser!\n\nLet\u2019s dance!\n\nSo, let\u2019s start to expand this simple example. First, I feel the need to make lots of lights, rather than just a few (a quick way to generate them is sketched below).
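\n\nRather than hand-writing dozens of i elements, a loop can stamp them out (a sketch; 100 is an arbitrary number):\n\n//generate 100 lights to flash\nfor (var i = 0; i < 100; i++) {\n\tdocument.body.appendChild(document.createElement('i'));\n}\n\n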
Also, maybe we should try a sound file more suited to gigs or clubs.\n\nCheck it out!\n\nI don\u2019t know about you, but I\u2019m pretty excited \u2014 that\u2019s just a bit of HTML, CSS and JavaScript!\n\nThe other thing to think about, of course, is the sound that you would get at a venue. We don\u2019t want to load sound from a file, but rather pick up on what is playing in real time. The easiest way to do this, I\u2019ve found, is to capture what my laptop\u2019s mic is picking up and pipe that back into the audio context. We can do this by using getUserMedia.\n\nLet\u2019s include this in this demo. If you make some noise while viewing the demo, the lights will start to flash.\n\nAnd relax :)\n\nThere you have it. Sit back, play some music and enjoy the Winamp-like experience in front of you.\n\nSo, where do we go from here? I already have a wealth of ideas. We haven\u2019t even started with canvas, SVG or the 3-D features of CSS. There are other things we can detect from the audio as well. And yes, OK, it\u2019s questionable whether the browser is the best environment for this. For one, I\u2019m using a whole bunch of nonsensical HTML elements (maybe each animation could be held within a web component in the future). But hey, it\u2019s fun, and it looks cool and sometimes I think it\u2019s OK to just dance.", "year": "2013", "author": "Ruth John", "author_slug": "ruthjohn", "published": "2013-12-02T00:00:00+00:00", "url": "https://24ways.org/2013/make-your-browser-dance/", "topic": "code"} {"rowid": 21, "title": "Keeping Parts of Your Codebase Private on GitHub", "contents": "Open source is brilliant, there\u2019s no denying that, and GitHub has been instrumental in open source\u2019s recent success. I\u2019m a keen open-sourcerer myself, and I have a number of projects on GitHub. However, as great as sharing code is, we often want to keep some projects to ourselves. To this end, GitHub created private repositories, which act like any other Git repository, only, well, private!\n\nA slightly less common issue, and one I\u2019ve come up against myself, is the desire to keep only certain parts of a codebase private. A great example would be my site, CSS Wizardry; I want the code to be open source so that people can poke through and learn from it, but I want to keep any draft blog posts private until they are ready to go live. Thankfully, there is a very simple solution to this particular problem: using multiple remotes.\n\nBefore we begin, it\u2019s worth noting that you can actually build a GitHub Pages site from a private repo. You can keep the entire source private, but still have GitHub build and display a full Pages/Jekyll site. I do this with csswizardry.net. This post will deal with the more specific problem of keeping only certain parts of the codebase (branches) private, and exposing parts of it as either an open source project or a built GitHub Pages site.\n\nN.B. This post requires some basic Git knowledge.\n\nAdding your public remote\n\nLet\u2019s assume you\u2019re starting from scratch and you currently have no repos set up for your project. (If you do already have your public repo set up, skip to the \u201cAdding your private remote\u201d section.)\n\nSo, we have a clean slate: nothing has been set up yet, we\u2019re doing all of that now. On GitHub, create two repositories. For the sake of this article we shall call them site.com and private.site.com.
Make the site.com repo public, and the private.site.com repo private (you will need a paid GitHub account).\n\nOn your machine, create the site.com directory, in which your project will live. Do your initial work in there, commit some stuff \u2014 whatever you need to do. Now we need to link this local Git repo on your machine with the public repo (remote) on GitHub. We should all be used to this:\n\n$ git remote add origin git@github.com:[user]/site.com.git\n\nHere we are simply telling Git to add a remote called origin which lives at git@github.com:[user]/site.com.git. Simple stuff. Now we need to push our current branch (which will be master, unless you\u2019ve explicitly changed it) to that remote:\n\n$ git push -u origin master\n\nHere we are telling Git to push our master branch to a corresponding master branch on the remote called origin, which we just added. The -u sets upstream tracking, which basically tells Git to always shuttle code on this branch between the local master branch and the master branch on the origin remote. Without upstream tracking, you would have to tell Git where to push code to (and pull it from) every time you ran the push or pull commands. This sets up a permanent bond, if you like.\n\nThis is really simple stuff, stuff that you will probably have done a hundred times before as a Git user. Now to set up our private remote.\n\nAdding your private remote\n\nWe\u2019ve set up our public, open source repository on GitHub, and linked that to the repository on our machine. All of this code will be publicly viewable on GitHub.com. (Remember, GitHub is just a host of regular Git repositories, which also puts a nice GUI around it all.) We want to add the ability to keep certain parts of the codebase private. What we do now is add another remote repository to the same local repository. We have two repos on GitHub (site.com and private.site.com), but only one repository (and, therefore, one directory) on our machine. Two GitHub repos, and one local one.\n\nIn your local repo, check out a new branch. For the sake of this article we shall call the branch dev. This branch might contain work in progress, or draft blog posts, or anything you don\u2019t want to be made publicly viewable on GitHub.com. The contents of this branch will, in a moment, live in our private repository.\n\n$ git checkout -b dev\n\nWe have now made a new branch called dev off the branch we were on last (master, unless you renamed it).\n\nNow we need to add our private remote (private.site.com) so that, in a second, we can send this branch to that remote:\n\n$ git remote add private git@github.com:[user]/private.site.com.git\n\nLike before, we are just telling Git to add a new remote to this repo, only this time we\u2019ve called it private and it lives at git@github.com:[user]/private.site.com.git. We now have one local repo on our machine which has two remote repositories associated with it.\n\nNow we need to tell our dev branch to push to our private remote:\n\n$ git push -u private dev\n\nHere, as before, we are pushing some code to a repo. We are saying that we want to push the dev branch to the private remote, and, once again, we\u2019ve set up upstream tracking. 
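\n\nAs a quick sanity check, git remote -v lists the remotes attached to a repo; with the set-up above you\u2019d see something like this:\n\n$ git remote -v\norigin\tgit@github.com:[user]/site.com.git (fetch)\norigin\tgit@github.com:[user]/site.com.git (push)\nprivate\tgit@github.com:[user]/private.site.com.git (fetch)\nprivate\tgit@github.com:[user]/private.site.com.git (push)\n\n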
That upstream tracking means that, by default, the dev branch will only push and pull to and from the private remote (unless you explicitly state otherwise).\n\nNow you have two branches, master and dev, that push to two remotes, origin and private, which are public and private respectively.\n\nAny work we do on the master branch will push and pull to and from our publicly viewable remote, and any code on the dev branch will push and pull from our private, hidden remote.\n\nAdding more branches\n\nSo far we\u2019ve only looked at two branches pushing to two remotes, but this workflow can grow as much or as little as you\u2019d like. Of course, you\u2019d never do all your work in only two branches, so you might want to push any number of them to either your public or private remotes. Let\u2019s imagine we want to create a branch to try something out real quickly:\n\n$ git checkout -b test\n\nNow, when we come to push this branch, we can choose which remote we send it to:\n\n$ git push -u private test\n\nThis pushes the new test branch to our private remote (again, setting the persistent tracking with -u).\n\nYou can have as many or as few remotes or branches as you like.\n\nCombining the two\n\nLet\u2019s say you\u2019ve been working on a new feature in private for a few days, and you\u2019ve kept that on the private remote. You\u2019ve now finalised the addition and want to move it into your public repo. This is just a simple merge. Check out your master branch:\n\n$ git checkout master\n\nThen merge in the branch that contained the feature:\n\n$ git merge dev\n\nNow master contains the commits that were made on dev and, once you\u2019ve pushed master to its remote, those commits will be viewable publicly on GitHub:\n\n$ git push\n\nNote that we can just run $ git push on the master branch as we\u2019d previously set up our upstream tracking (-u).\n\nMultiple machines\n\nSo far this has covered working on just one machine; we had two GitHub remotes and one local repository. Let\u2019s say you\u2019ve got yourself a new Mac (yay!) and you want to clone an existing project:\n\n$ git clone git@github.com:[user]/site.com.git\n\nCloning will set up the origin remote for you, but it will not carry over the private remote you had set up on the previous machine. Here you have a fresh clone of the public project and you will need to add the private remote to it again, as above.\n\nDone!\n\nIf you\u2019d like to see me blitz through all that in one go, check the showterm recording.\n\nThe beauty of this is that we can still share our code, but we don\u2019t have to develop quite so openly all of the time. Building a framework with a killer new feature? Keep it in a private branch until it\u2019s ready for merge. Have a blog post in a Jekyll site that you\u2019re not ready to make live? Keep it in a private drafts branch. Working on a new feature for your personal site? Tuck it away until it\u2019s finished. Need a staging area for a Pages-powered site? Make a staging remote with its own custom domain.\n\nWhat all this boils down to, really, is the fact that you can bring multiple remotes together into one local codebase on your machine. What you do with them is entirely up to you!", "year": "2013", "author": "Harry Roberts", "author_slug": "harryroberts", "published": "2013-12-09T00:00:00+00:00", "url": "https://24ways.org/2013/keeping-parts-of-your-codebase-private-on-github/", "topic": "code"}