{"rowid": 249, "title": "Fast Autocomplete Search for Your Website", "contents": "Every website deserves a great search engine - but building a search engine can be a lot of work, and hosting it can quickly get expensive.\nI\u2019m going to build a search engine for 24 ways that\u2019s fast enough to support autocomplete (a.k.a. typeahead) search queries and can be hosted for free. I\u2019ll be using wget, Python, SQLite, Jupyter, sqlite-utils and my open source Datasette tool to build the API backend, and a few dozen lines of modern vanilla JavaScript to build the interface.\n\nTry it out here, then read on to see how I built it.\nFirst step: crawling the data\nThe first step in building a search engine is to grab a copy of the data that you plan to make searchable.\nThere are plenty of potential ways to do this: you might be able to pull it directly from a database, or extract it using an API. If you don\u2019t have access to the raw data, you can imitate Google and write a crawler to extract the data that you need.\nI\u2019m going to do exactly that against 24 ways: I\u2019ll build a simple crawler using wget, a command-line tool that features a powerful \u201crecursive\u201d mode that\u2019s ideal for scraping websites.\nWe\u2019ll start at the https://24ways.org/archives/ page, which links to an archived index for every year that 24 ways has been running.\nThen we\u2019ll tell wget to recursively crawl the website, using the --recursive flag.\nWe don\u2019t want to fetch every single page on the site - we\u2019re only interested in the actual articles. Luckily, 24 ways has nicely designed URLs, so we can tell wget that we only care about pages that start with one of the years it has been running, using the -I argument like this: -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017\nWe want to be polite, so let\u2019s wait for 2 seconds between each request rather than hammering the site as fast as we can: --wait 2\nThe first time I ran this, I accidentally downloaded the comments pages as well. We don\u2019t want those, so let\u2019s exclude them from the crawl using -X \"/*/*/comments\".\nFinally, it\u2019s useful to be able to run the command multiple times without downloading pages that we have already fetched. We can use the --no-clobber option for this.\nTie all of those options together and we get this command:\nwget --recursive --wait 2 --no-clobber \n -I /2005,/2006,/2007,/2008,/2009,/2010,/2011,/2012,/2013,/2014,/2015,/2016,/2017 \n -X \"/*/*/comments\" \n https://24ways.org/archives/ \nIf you leave this running for a few minutes, you\u2019ll end up with a folder structure something like this:\n$ find 24ways.org\n24ways.org\n24ways.org/2013\n24ways.org/2013/why-bother-with-accessibility\n24ways.org/2013/why-bother-with-accessibility/index.html\n24ways.org/2013/levelling-up\n24ways.org/2013/levelling-up/index.html\n24ways.org/2013/project-hubs\n24ways.org/2013/project-hubs/index.html\n24ways.org/2013/credits-and-recognition\n24ways.org/2013/credits-and-recognition/index.html\n...\nAs a quick sanity check, let\u2019s count the number of HTML pages we have retrieved:\n$ find 24ways.org | grep index.html | wc -l\n328\nThere\u2019s one last step! We got everything up to 2017, but we need to fetch the articles for 2018 (so far) as well. 
They aren\u2019t linked in the /archives/ yet, so we need to point our crawler at the site\u2019s front page instead:\nwget --recursive --wait 2 --no-clobber \n -I /2018 \n -X \"/*/*/comments\" \n https://24ways.org/\nThanks to --no-clobber, this is safe to run every day in December to pick up any new content.\nWe now have a folder on our computer containing an HTML file for every article that has ever been published on the site! Let\u2019s use them to build ourselves a search index.\nBuilding a search index using SQLite\nThere are many tools out there that can be used to build a search engine. You can use an open-source search server like Elasticsearch or Solr, a hosted option like Algolia or Amazon CloudSearch, or you can tap into the built-in search features of relational databases like MySQL or PostgreSQL.\nI\u2019m going to use something that\u2019s less commonly used for web applications but makes for a powerful and extremely inexpensive alternative: SQLite.\nSQLite is the world\u2019s most widely deployed database, even though many people have never even heard of it. That\u2019s because it\u2019s designed to be used as an embedded database: it\u2019s commonly used by native mobile applications and even runs as part of the default set of apps on the Apple Watch!\nSQLite has one major limitation: unlike databases like MySQL and PostgreSQL, it isn\u2019t really designed to handle large numbers of concurrent writes. For this reason, most people avoid it for building web applications.\nThis doesn\u2019t matter nearly so much if you are building a search engine for infrequently updated content - say one for a site that only publishes new content on 24 days every year.\nIt turns out SQLite has very powerful full-text search functionality built into the core database - the FTS5 extension.\nI\u2019ve been doing a lot of work with SQLite recently, and as part of that, I\u2019ve been building a Python utility library to make building new SQLite databases as easy as possible, called sqlite-utils. It\u2019s designed to be used within a Jupyter notebook - an enormously productive way of interacting with Python code that\u2019s similar to the Observable notebooks Natalie described on 24 ways yesterday.\nIf you haven\u2019t used Jupyter before, here\u2019s the fastest way to get up and running with it - assuming you have Python 3 installed on your machine. We can use a Python virtual environment to ensure the software we are installing doesn\u2019t clash with any other installed packages:\n$ python3 -m venv ./jupyter-venv\n$ ./jupyter-venv/bin/pip install jupyter\n# ... lots of installer output\n# Now let's install some extra packages we will need later\n$ ./jupyter-venv/bin/pip install beautifulsoup4 sqlite-utils html5lib\n# And start the notebook web application\n$ ./jupyter-venv/bin/jupyter-notebook\n# This will open your browser to Jupyter at http://localhost:8888/\nYou should now be in the Jupyter web application. Click New -> Python 3 to start a new notebook.\nA neat thing about Jupyter notebooks is that if you publish them to GitHub (either in a regular repository or as a Gist), it will render them as HTML. This makes them a very powerful way to share annotated code. I\u2019ve published the notebook I used to build the search index on my GitHub account.
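\nBefore we start scraping, it\u2019s worth getting a feel for what that FTS5 extension actually gives us. Here\u2019s a tiny, self-contained sketch you can paste into a notebook cell - it builds a throwaway in-memory database with an FTS5 virtual table and runs a couple of match queries. The table and column names are made up for the demo, and it assumes your Python\u2019s bundled SQLite was compiled with FTS5 support (most recent builds are):\nimport sqlite3\n\n# Throwaway FTS5 demo in an in-memory database.\n# Assumes this SQLite build includes the FTS5 extension.\nconn = sqlite3.connect(':memory:')\nconn.execute('create virtual table demo using fts5(title, body)')\nconn.executemany('insert into demo (title, body) values (?, ?)', [\n    ('Levelling Up', 'I sell property insurance to web designers'),\n    ('Why Bother with Accessibility?', 'Web accessibility benefits everyone'),\n])\n# match supports whole words and prefix queries like acces*\nfor query in ['insurance', 'acces*']:\n    rows = conn.execute(\n        'select title from demo where demo match ?', [query]\n    ).fetchall()\n    print(query, rows)\nThe same match syntax is what we\u2019ll be using against the real index later on.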
\nHere\u2019s the Python code I used to scrape the relevant data from the downloaded HTML files. Check out the notebook for a line-by-line explanation of what\u2019s going on.\nfrom pathlib import Path\nfrom bs4 import BeautifulSoup as Soup\nbase = Path(\"/Users/simonw/Dropbox/Development/24ways-search\")\narticles = list(base.glob(\"*/*/*/*.html\"))\n# articles is now a list of paths that look like this:\n# PosixPath('...24ways-search/24ways.org/2013/why-bother-with-accessibility/index.html')\ndocs = []\nfor path in articles:\n    year = str(path.relative_to(base)).split(\"/\")[1]\n    url = 'https://' + str(path.relative_to(base).parent) + '/'\n    soup = Soup(path.open().read(), \"html5lib\")\n    # Author name and slug both come from the \"More information about\" link\n    author = soup.select_one(\".c-continue\")[\"title\"].split(\n        \"More information about\"\n    )[1].strip()\n    author_slug = soup.select_one(\".c-continue\")[\"href\"].split(\n        \"/authors/\"\n    )[1].split(\"/\")[0]\n    published = soup.select_one(\".c-meta time\")[\"datetime\"]\n    contents = soup.select_one(\".e-content\").text.strip()\n    title = soup.find(\"title\").text.split(\" \u25c6\")[0]\n    # Not every article has been assigned a topic\n    try:\n        topic = soup.select_one(\n            '.c-meta a[href^=\"/topics/\"]'\n        )[\"href\"].split(\"/topics/\")[1].split(\"/\")[0]\n    except TypeError:\n        topic = None\n    docs.append({\n        \"title\": title,\n        \"contents\": contents,\n        \"year\": year,\n        \"author\": author,\n        \"author_slug\": author_slug,\n        \"published\": published,\n        \"url\": url,\n        \"topic\": topic,\n    })\nAfter running this code, I have a list of Python dictionaries representing each of the documents that I want to add to the index. The list looks something like this:\n[\n    {\n        \"title\": \"Why Bother with Accessibility?\",\n        \"contents\": \"Web accessibility (known in other fields as inclus...\",\n        \"year\": \"2013\",\n        \"author\": \"Laura Kalbag\",\n        \"author_slug\": \"laurakalbag\",\n        \"published\": \"2013-12-10T00:00:00+00:00\",\n        \"url\": \"https://24ways.org/2013/why-bother-with-accessibility/\",\n        \"topic\": \"design\"\n    },\n    {\n        \"title\": \"Levelling Up\",\n        \"contents\": \"Hello, 24 ways. I\u2019m Ashley and I sell property ins...\",\n        \"year\": \"2013\",\n        \"author\": \"Ashley Baxter\",\n        \"author_slug\": \"ashleybaxter\",\n        \"published\": \"2013-12-06T00:00:00+00:00\",\n        \"url\": \"https://24ways.org/2013/levelling-up/\",\n        \"topic\": \"business\"\n    },\n    ...\nMy sqlite-utils library has the ability to take a list of objects like this and automatically create a SQLite database table with the right schema to store the data. Here\u2019s how to do that using this list of dictionaries:\nimport sqlite_utils\ndb = sqlite_utils.Database(\"/tmp/24ways.db\")\ndb[\"articles\"].insert_all(docs)\nThat\u2019s all there is to it! The library will create a new database and add a table to it called articles with the necessary columns, then insert all of the documents into that table.\n(I put the database in /tmp/ for the moment - you can move it to a more sensible location later on.)\nYou can inspect the table using the sqlite3 command-line utility (which comes with OS X) like this:\n$ sqlite3 /tmp/24ways.db\nsqlite> .headers on\nsqlite> .mode column\nsqlite> select title, author, year from articles;\ntitle                           author        year\n------------------------------  ------------  ----------\nWhy Bother with Accessibility?  Laura Kalbag  2013\nLevelling Up                    Ashley Baxte  2013\nProject Hubs: A Home Base for   Brad Frost    2013\nCredits and Recognition         Geri Coady    2013\nManaging a Mind                 Christopher   2013\nRun Ragged                      Mark Boulton  2013\nGet Started With GitHub Pages   Anna Debenha  2013\nCoding Towards Accessibility    Charlie Perr  2013\n...
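\nWhile we\u2019re in the notebook, sqlite-utils can also show us what it just created. Here\u2019s a minimal sketch, assuming the db object from the cell above is still live - check the sqlite-utils documentation if these helpers differ in the version you have installed:\n# Inspect the table that insert_all() created for us.\n# Assumes `db` is the sqlite_utils.Database object from the previous cell.\nprint(db.table_names())       # e.g. ['articles']\nprint(db[\"articles\"].count)   # how many documents were inserted\nprint(db[\"articles\"].schema)  # the generated CREATE TABLE statement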
\nThere\u2019s one last step to take in our notebook. We know we want to use SQLite\u2019s full-text search feature, and sqlite-utils has a simple convenience method for enabling it for a specified set of columns in a table. We want to be able to search by the title, author and contents fields, so we call the enable_fts() method like this:\ndb[\"articles\"].enable_fts([\"title\", \"author\", \"contents\"])\nIntroducing Datasette\nDatasette is the open-source tool I\u2019ve been building that makes it easy to both explore SQLite databases and publish them to the internet.\nWe\u2019ve been exploring our new SQLite database using the sqlite3 command-line tool. Wouldn\u2019t it be nice if we could use a more human-friendly interface for that?\nIf you don\u2019t want to install Datasette right now, you can visit https://search-24ways.herokuapp.com/ to try it out against the 24 ways search index data. I\u2019ll show you how to deploy Datasette to Heroku like this later in the article.\nIf you want to install Datasette locally, you can reuse the virtual environment we created to play with Jupyter:\n./jupyter-venv/bin/pip install datasette\nThis will install Datasette in the ./jupyter-venv/bin/ folder. You can also install it system-wide using regular pip install datasette.\nNow you can run Datasette against the 24ways.db file we created earlier like so:\n./jupyter-venv/bin/datasette /tmp/24ways.db\nThis will start a local webserver running. Visit http://localhost:8001/ to start interacting with the Datasette web application.\nIf you want to try out Datasette without creating your own 24ways.db file, you can download the one I created directly from https://search-24ways.herokuapp.com/24ways-ae60295.db\nPublishing the database to the internet\nOne of the goals of the Datasette project is to make deploying data-backed APIs to the internet as easy as possible. Datasette has a built-in command for this, datasette publish. If you have an account with Heroku or Zeit Now, you can deploy a database to the internet with a single command. Here\u2019s how I deployed https://search-24ways.herokuapp.com/ (running on Heroku\u2019s free tier) using datasette publish:\n$ ./jupyter-venv/bin/datasette publish heroku /tmp/24ways.db --name search-24ways\n-----> Python app detected\n-----> Installing requirements with pip\n\n-----> Running post-compile hook\n-----> Discovering process types\n       Procfile declares types -> web\n\n-----> Compressing...\n       Done: 47.1M\n-----> Launching...\n       Released v8\n       https://search-24ways.herokuapp.com/ deployed to Heroku\nIf you try this out, you\u2019ll need to pick a different --name, since I\u2019ve already taken search-24ways.\nYou can run this command as many times as you like to deploy updated versions of the underlying database.\nSearching and faceting\nDatasette can detect tables with SQLite full-text search configured, and will add a search box directly to the page. Take a look at http://search-24ways.herokuapp.com/24ways-b607e21/articles to see this in action.\nSQLite search supports wildcards, so if you want autocomplete-style search where you don\u2019t need to enter full words to start getting results, you can add a * to the end of your search term. Here\u2019s a search for acces*, which returns articles on accessibility:\nhttp://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=acces%2A
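\nThat wildcard behaviour comes straight from SQLite: in FTS5 query syntax a trailing * turns the final token into a prefix match. Here\u2019s a minimal sketch of the same thing run directly against our database with Python\u2019s built-in sqlite3 module - it assumes /tmp/24ways.db exists and that enable_fts() named the search table articles_fts, which is the sqlite-utils convention:\nimport sqlite3\n\n# Run an FTS prefix query directly against the index we built.\n# Assumes /tmp/24ways.db exists and its FTS table is called articles_fts.\nconn = sqlite3.connect('/tmp/24ways.db')\nsql = '''\n    select articles.title, articles.year\n    from articles\n        join articles_fts on articles.rowid = articles_fts.rowid\n    where articles_fts match ?\n    limit 5\n'''\n# 'acces*' matches accessibility, accessible, access and so on\nfor title, year in conn.execute(sql, ['acces*']):\n    print(year, title)\nThis is essentially what Datasette is doing for us when we add a * to the end of a search term.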
\nA neat feature of Datasette is the ability to calculate facets against your data. Here\u2019s a page showing search results for svg with facet counts calculated against both the year and the topic columns:\nhttp://search-24ways.herokuapp.com/24ways-ae60295/articles?_search=svg&_facet=year&_facet=topic\nEvery page visible via Datasette has a corresponding JSON API, which can be accessed using the JSON link on the page - or by adding a .json extension to the URL:\nhttp://search-24ways.herokuapp.com/24ways-ae60295/articles.json?_search=acces%2A\nBetter search using custom SQL\nThe search results we get back from ../articles?_search=svg are OK, but the order they are returned in is not ideal - they\u2019re actually being returned in the order they were inserted into the database! You can see why this is happening by clicking the View and edit SQL link on that search results page.\nThis exposes the underlying SQL query, which looks like this:\nselect rowid, * from articles where rowid in (\n    select rowid from articles_fts where articles_fts match :search\n) order by rowid limit 101\nWe can do better than this by constructing a custom SQL query. Here\u2019s the query we will use instead:\nselect\n    snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\n    articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nfrom articles\n    join articles_fts on articles.rowid = articles_fts.rowid\nwhere articles_fts match :search || \"*\"\n    order by rank limit 10;\nYou can try this query out directly - since Datasette opens the underlying SQLite database in read-only mode and enforces a one second time limit on queries, it\u2019s safe to allow users to provide arbitrary SQL select queries for Datasette to execute.\nThere\u2019s a lot going on here! Let\u2019s break the SQL down line-by-line:\nselect\n    snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\nWe\u2019re using snippet(), a built-in SQLite function, to generate a snippet highlighting the words that matched the query. We use two unique strings that I made up to mark the beginning and end of each match - you\u2019ll see why in the JavaScript later on.\n    articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nThese are the other fields we need back - most of them are from the articles table but we retrieve the rank (representing the strength of the search match) from the magical articles_fts table.\nfrom articles\n    join articles_fts on articles.rowid = articles_fts.rowid\narticles is the table containing our data. articles_fts is a magic SQLite virtual table which implements full-text search - we need to join against it to be able to query it.\nwhere articles_fts match :search || \"*\"\n    order by rank limit 10;\n:search || \"*\" takes the ?search= argument from the page querystring and adds a * to the end of it, giving us the wildcard search that we want for autocomplete. We then match that against the articles_fts table using the match operator. Finally, we order by rank so that the best matching results are returned at the top - and limit to the first 10 results.\nHow do we turn this into an API? As before, the secret is to add the .json extension. Datasette actually supports multiple shapes of JSON - we\u2019re going to use ?_shape=array to get back a plain array of objects:\nJSON API call to search for articles matching SVG
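\nBefore wiring up the front end, it is worth seeing what that array looks like from a client. Here\u2019s a minimal Python sketch that makes the same API call the JavaScript below will make, using only the standard library - the database name in the URL (24ways-866073b) is the one used later in this article, so substitute the name of your own deployment:\nimport json\nimport urllib.parse\nimport urllib.request\n\n# Call the Datasette JSON API with our custom SQL and _shape=array.\nsql = '''select\n    snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\n    articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nfrom articles\n    join articles_fts on articles.rowid = articles_fts.rowid\nwhere articles_fts match :search || \"*\"\n    order by rank limit 10'''\n\nparams = urllib.parse.urlencode({\n    'sql': sql,\n    'search': 'svg',\n    '_shape': 'array',\n})\nurl = 'https://search-24ways.herokuapp.com/24ways-866073b.json?' + params\n\nwith urllib.request.urlopen(url) as response:\n    rows = json.load(response)\n\n# _shape=array gives us a plain list of objects, one per matching article\nfor row in rows:\n    print(row['rank'], row['title'], row['url'])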
\nThe HTML version of that page shows the time taken to execute the SQL in the footer. Hitting refresh a few times, I get response times between 2 and 5ms - easily fast enough to power a responsive autocomplete feature.\nA simple JavaScript autocomplete search interface\nI considered building this using React or Svelte or another of the myriad of JavaScript framework options available today, but then I remembered that vanilla JavaScript in 2018 is a very productive environment all on its own.\nWe need a few small utility functions: first, a classic debounce function adapted from this one by David Walsh:\nfunction debounce(func, wait, immediate) {\n    let timeout;\n    return function() {\n        let context = this, args = arguments;\n        let later = () => {\n            timeout = null;\n            if (!immediate) func.apply(context, args);\n        };\n        let callNow = immediate && !timeout;\n        clearTimeout(timeout);\n        timeout = setTimeout(later, wait);\n        if (callNow) func.apply(context, args);\n    };\n};\nWe\u2019ll use this to only send fetch() requests a maximum of once every 100ms while the user is typing.\nSince we\u2019re rendering data that might include HTML tags (24 ways is a site about web development after all), we need an HTML escaping function. I\u2019m amazed that browsers still don\u2019t bundle a default one of these:\nconst htmlEscape = (s) => s.replace(\n    /&/g, '&amp;'\n).replace(\n    />/g, '&gt;'\n).replace(\n    /</g, '&lt;'\n).replace(\n    /\"/g, '&quot;'\n).replace(\n    /'/g, '&#039;'\n);\nThe interface itself needs only a tiny amount of markup: a heading, a search input with an id of searchbox and an empty div with an id of results for the script to render into:\n<h1>Autocomplete search</h1>\n<input id=\"searchbox\" type=\"search\">\n<div id=\"results\"></div>
\nAnd now the autocomplete implementation itself, as a glorious, messy stream-of-consciousness of JavaScript:\n// Embed the SQL query in a multi-line backtick string:\nconst sql = `select\n    snippet(articles_fts, -1, 'b4de2a49c8', '8c94a2ed4b', '...', 100) as snippet,\n    articles_fts.rank, articles.title, articles.url, articles.author, articles.year\nfrom articles\n    join articles_fts on articles.rowid = articles_fts.rowid\nwhere articles_fts match :search || \"*\"\n    order by rank limit 10`;\n\n// Grab a reference to the search box <input> element\nconst searchbox = document.getElementById(\"searchbox\");\n\n// Used to avoid race-conditions:\nlet requestInFlight = null;\n\nsearchbox.onkeyup = debounce(() => {\n    const q = searchbox.value;\n    // Construct the API URL, using encodeURIComponent() for the parameters\n    const url = (\n        \"https://search-24ways.herokuapp.com/24ways-866073b.json?sql=\" +\n        encodeURIComponent(sql) +\n        `&search=${encodeURIComponent(q)}&_shape=array`\n    );\n    // Unique object used just for race-condition comparison\n    let currentRequest = {};\n    requestInFlight = currentRequest;\n    fetch(url).then(r => r.json()).then(d => {\n        if (requestInFlight !== currentRequest) {\n            // Avoid race conditions where a slow request returns\n            // after a faster one.\n            return;\n        }\n        let results = d.map(r => `\n            <div>\n                <h3><a href=\"${r.url}\">${htmlEscape(r.title)}</a></h3>\n                <p>${htmlEscape(r.author)} - ${r.year}</p>\n                <p>${highlight(r.snippet)}</p>\n            </div>
\n        `).join(\"\");\n        document.getElementById(\"results\").innerHTML = results;\n    });\n}, 100); // debounce every 100ms\nThere\u2019s just one more utility function, used to help construct the HTML results:\nconst highlight = (s) => htmlEscape(s).replace(\n    /b4de2a49c8/g, '<b>'\n).replace(\n    /8c94a2ed4b/g, '</b>'\n);\nThis is what those unique strings passed to the snippet() function were for.\nAvoiding race conditions in autocomplete\nOne trick in this code that you may not have seen before is the way race-conditions are handled. Any time you build an autocomplete feature, you have to consider the following case:\n\nUser types acces\nBrowser sends request A - querying documents matching acces*\nUser continues to type accessibility\nBrowser sends request B - querying documents matching accessibility*\nRequest B returns. It was fast, because there are fewer documents matching the full term\nThe results interface updates with the documents from request B, matching accessibility*\nRequest A returns results (this was the slower of the two requests)\nThe results interface updates with the documents from request A - results matching acces*\n\nThis is a terrible user experience: the user saw their desired results for a brief second, and then had them snatched away and replaced with those results from earlier on.\nThankfully there\u2019s an easy way to avoid this. I set up a variable in the outer scope called requestInFlight, initially set to null.\nAny time I start a new fetch() request, I create a new currentRequest = {} object and assign it to the outer requestInFlight as well.\nWhen the fetch() completes, I use requestInFlight !== currentRequest to sanity check that the currentRequest object is strictly identical to the one that was in flight. If a new request has been triggered since we started the current request, we can detect that and avoid updating the results.\nIt\u2019s not a lot of code, really\nAnd that\u2019s the whole thing! The code is pretty ugly, but when the entire implementation clocks in at fewer than 70 lines of JavaScript, I honestly don\u2019t think it matters. You\u2019re welcome to refactor it as much as you like.\nHow good is this search implementation? I\u2019ve been building search engines for a long time using a wide variety of technologies and I\u2019m happy to report that using SQLite in this way is genuinely a really solid option. It scales happily up to hundreds of MBs (or even GBs) of data, and the fact that it\u2019s based on SQL makes it easy and flexible to work with.\nA surprisingly large number of desktop and mobile applications you use every day implement their search feature on top of SQLite.\nMore importantly though, I hope that this demonstrates that using Datasette for an API means you can build relatively sophisticated API-backed applications with very little backend programming effort. If you\u2019re working with a small-to-medium amount of data that changes infrequently, you may not need a more expensive database. Datasette-powered applications easily fit within the free tier of both Heroku and Zeit Now.\nFor more of my writing on Datasette, check out the datasette tag on my blog. And if you do build something fun with it, please let me know on Twitter.", "year": "2018", "author": "Simon Willison", "author_slug": "simonwillison", "published": "2018-12-19T00:00:00+00:00", "url": "https://24ways.org/2018/fast-autocomplete-search-for-your-website/", "topic": "code"}