Extending The Offline Web

Introduction

In this talk Glenn Jones – designer, coder and co-founder of Madgex – presents a number of experiments on the crossovers of remote and client built user experiences. The aim, to find the edges of what features can be commonly provided across these two environments.

This talk was filmed at Asyncjs London in June.

[00:00:06] Today I'm going to be talking about the offline web. When I'm trying to think about design problems, I need to start with some sort of conceptual model. This is my conceptual model for the offline web: the persistence of experience from distant sources of content. Now, like a lot of people, I think very three-dimensionally when I'm thinking about problems. When I'm on the web, I consider content to have distance from me. There's a little facet, a surface that's in front of me, the device I'm using, either a laptop or a mobile phone. Then there are these bits of content that live some distance away from me. I don't know where, but they are at a distance from me. The offline experience is the persistence of that experience close to me. I think the word "persistence" is really interesting and really sums it up for me when trying to construct the offline. If you look at the definition of persistence, it's continuing in a course of action in spite of difficulty or opposition, and holding something offline definitely needs some sort of continual force or action to actually keep it there. It degrades otherwise. You have to keep updating the information, you have to sync it, you need to make sure it's up to date all the time. It's like a persistent effort that's needed. That word is quite important.

[00:01:28] A lot of what I'm going to be talking about is me using this type of model. I particularly chose these words because they give me a very wide scope as well. I didn't want to narrow down how I perceive the offline web to some single-page application or some other kind of mental model. This really helps me to keep referring back to it. End of the philosophy bit. From now on, we'll just move into the next bit, which is to look at some of the recent design patterns. Although, I was thinking about this about 14 months ago, when I actually built the library. This is the Washington Post's new progressive web app, hence the /pwa directory. It's only been out a few weeks. If you use a mobile device and you go to this URL, you'll find that they've built a completely separate UI, away from their main site. It's not the desktop site; it's a completely greenfield build that lives on its own in this little directory. We'll come on to why I think that's not a particularly good idea. It is actually really fast, and if you go on there, it persists. It's offline; they've got sub-100-millisecond page loads for some of their articles, which is just unbelievable. On the downside, if you hit it with a non-mobile browser, you get this page on this side, which is basically like the old splash screens you used to get saying, "This site doesn't work on IE6" or "This site doesn't work with this technology." It's a bit of a step backwards.

[00:03:05] In actual fact, Andrew Betts, and if you don't know the name, this is the person who led the Financial Times' FT Labs. They've got a lot of background in this area. They've been building offline experiences using slightly older technology, the AppCache technology, for quite a long time. They've done a lot of work in this area. He wrote quite a passionate piece. I have to be quite careful here because both the Washington Post and the FT are clients of Madgex, so my words will be chosen carefully. He wasn't overly happy about the Washington Post piece and he wrote this article. It's well worth having a read of it; I've got the URL listed for you at the end. Towards the end of the piece, after he'd talked about why he wasn't happy, he did a more positive thing and listed a whole series of bullet points about what the community, in general, could do to improve the position. One of those bullet points was this one here, where he said, basically, we should focus on adding these new offline features to mature sites. This was definitely my thinking a while back, and I knew this problem was going to come and raise its ugly head at some point, and now is the time. Now is the time we're having this debate.

[00:04:24] For me, when I was thinking about offline experiences, I moved back to some of the websites I helped design, well, five years ago. I did the first two iterations of the design work on the Guardian Jobs site, in conjunction with the product team at the Guardian, and I think it was Flow Interactive who did a lot of the user research for us. The reason why I'm showing you this, and I'm going to show you a few examples of this site, is that it represents one of these mature sites to me. It's got some of the features and it's been around for a while; it's got millions of users, it's a heavily used site. It's been through multiple iterations to get to the point that it's at, at the moment. Like most modern sites, it's got a responsive layout, and typically of most responsive layouts today, as the screen real estate changes there is some reduction in the features. Also some paring back of the material that's on there, but generally it reorganises itself to best show the features that are there, without degrading the user experience, if possible. It's pretty classic, and I think we're all very familiar with that type of thing. There I was, thinking about this and thinking, "Okay, if I took my mature site, my Guardian site, what would it look like if I built an offline experience? Particularly, what would it look like if I built a progressive offline experience, one that was iterative and started with relatively small things and built on the site as it was, not like the Washington Post did, building a brand new site and sticking it in another domain."

[00:06:01] Before I show you this little mock-up, I've just got to say caveat, a big caveat. This is not what Madgex is doing for the Guardian, or what the Guardian even asked for. This is purely me mocking it up to see what I would think about. If I was doing that type of thing, I would probably come up with a design that's a bit like this. Now, Guardian Jobs has, at any one time, between 12 and 15 thousand jobs on it. Those jobs rotate every 28 days. That means a job changes roughly every three minutes. It's actually a very high-churn site in terms of data. I don't think we could dump all of that data from the main site into the client, because it's just not very practical. The amount of updates we'd have to send to the phone would be really substantial. But there are lots of things on that site that are really useful to the user, that we could possibly translate over. I've just picked a few of them out which I think would be quite useful. One of the things that users do is shortlist jobs, so they go on and basically star the ones that they're interested in. We could store those and show them. That would definitely be of interest to the user. We could do something like, okay, as the user visits the site, we could track them through and keep the last 20 jobs that they looked at. If in the morning they're looking at some jobs and then they get on the train and they're halfway through, the classic "you're in the tunnel" type of situation, they could go there and actually look at the offline version and just reread that job that they were interested in, and maybe consider the jobs that are there. I think that would be very useful, just to keep that little, tiny history for them. Maybe it would be 20 jobs, maybe it would be 30, I don't know. Whatever. A sub-set of the jobs that they looked at.

[00:07:38] Then, obviously, if they're applying for a job, we could keep that information there because that's really useful: what state the applications are in, when they made the application, all of that type of information. I mocked this up to show you what I think a small step into an offline progressive web app would look like for the Guardian. Then I thought, "Maybe you'll just think that's me, because I know these particular sites very well." So I went a step further and took two other web experiences that I've been involved in, in the last few weeks, and thought, "What might they look like as well?", purely mocking them up on my own. One: I went over to Düsseldorf a little while ago, flew on British Airways. Wouldn't it be great if the British Airways mobile site offlined both the boarding passes and my booking, so that I have the pertinent information, just that stuff? I'm not expecting to be able to search for flights; I know I can't do that when I'm offline. But at least I should have that piece of information. Really funny, I actually did these mock-ups about three weeks ago. Then at the progressive web apps conference, just this week, an airline from Berlin has done exactly this. Which is great, so that proves my point. I'm really happy about that because it was just a rough idea. Then equally, Amazon. Like all of us, I use Amazon for different stuff; these are the books I've got on order at the moment. Amazon could provide me just my current orders offline: which ones I've got and when they're going to be delivered. That would be really useful. Again, I don't expect to be able to search Amazon offline, that would be stupid. They have hundreds of thousands of products. I'm not going to be able to do that, but they could provide me this.

[00:09:18] At the end of that point that Andrew made, he actually said this bit, which I think is a good statement as well: some of this stuff can be done easily, if not elegantly, and we don't have to fork the whole thing, build a completely brand new UI and go down that route. We can just enhance what we've already got. Those mock-ups were not me adding new features to the sites; we're just paring back the menu to those features that I think should be offlined, because they are already part of those sites. Okay, there's also this debate that's going on at the moment about App Shell, and actually I think that this debate about App Shell is just another version of what I've just talked about. If you haven't come across the concept of App Shell... have people heard about App Shell? Okay. At the moment, the Chrome Dev group are doing a really good job at trying to explain all the different steps that are involved in trying to offline a website. To do so, they've come up with phraseology and terminology to encapsulate the different steps, and one of these steps is referred to as building the App Shell. What that means is, when you want to offline something, you take all of what we would have called in old terms "the site furniture or the page furniture", the header, the footer, the menu, all of those types of things, and you build a specific cache that saves them. You can actually quite easily offline those. You can build a cache that just pulls those items from within the browser's memory, basically, and reuses them without having to go off to the network.

[00:11:04] This is just one phase of how you would offline. There are lots of different phases, or different recipes as they're also referred to, for offlining. I think, unfortunately, the App Shell thing also implies some other stuff. It implies that you should build a single-page app or a singular app, much like the Washington Post one, by the very word "app" in it. Also, it's not particularly helped by the fact that most of the examples use Material Design, which is very related to apps as well. I think it's just another version of the problem that I mentioned before. Everybody is really aware of this discussion, absolutely. It was very nice at the summit at the beginning of the week, some of the comments that were coming out, and this one particularly, from Jake Archibald: we need to change the perception that progressive web apps must be single-page rendered apps. They can be either. You can take and enhance a current site and just install its rendered web pages into memory, or you can build a single-page app using something like Angular or Ember. The choice is yours. The technology is not prescriptive. You can actually use both methodologies, and we need to be careful that the terminology doesn't force people down one way or another, which was a problem with the AppCache thing. The examples I showed you in my little mock-ups, I would refer to them as more "site furniture caches" than the App Shell. I know it's just a little bit of tweaking on language, but I think it's a very important tweak on language.

[00:12:50] Back to the Guardian and 14 months ago when I was looking at this. As I've said, I regard this as a mature site. There's something else that's also very different about this site compared to some of the modern messaging sites that we see, like Twitter or any of those types of streaming navigations with notifications. I class these types of sites as using document search. I couldn't really find a phraseology that encapsulates it well, but let me try and explain what I mean by document search. If you look at the interface here, it's pretty much driven by free-text search. On the far side there, people type the job title in. Behind the scenes, what we actually do is store all of our jobs in a big SQL cluster, a classic relational database system. To the side of that we use Elasticsearch, which I think a lot of companies do now, a proper free-text search system, and that's what drives this box here. It pairs the text search with geo search as well, so you can say, "I'm looking for JavaScript jobs in Brighton or JavaScript jobs in London" and it finds those things for you. Here we have a classic free-text search over documents that are stored, in this case, in Elasticsearch. Then, secondary to that, is a classic facet navigation system. Facet navigation is best explained by what's on Amazon. When you go down the column on the left-hand side, you click on one item. Say I want to look at books, and then I want to look at computer books, and slowly you work your way through the navigation and it drills you down. Each time it tells you what the subsections are for each of them, so you drill down through it. The facets are generated automatically from the data that's stored. Strangely, we find that some people like one type of search and some people like the other. We've never got to a point where one has dominated the other. To be honest, they're often found together. That's why something like Solr, which is another variant on Elasticsearch (they're all based on the same underlying technology, the Lucene engine), also includes facets as part of the core of its system.

[00:15:09] I was looking at this and saying, "Mature sites, I'm interested in offlining. I can do the bits that I showed you in my mock-ups, but how could I extend that further? What could I do to move this type of functionality into the offline world?" I'm really interested in this document-type searching and I want to bring that into my offline experience. Maybe not as the first step of enhancing a site; I'd probably do what I showed you as the first step. But what would the second step be, and could I produce a piece of technology that would allow me to do that? That's where this project came in. That's why I started this project: not actually to produce a library for people to use, but for me to experiment with that concept and find out where the limits are. Mani is a document search tool written in JavaScript. It's actually up online; I have put it up on a GitHub repository. It has a great big label on the top of it saying "Unstable API", because it is an experiment and I don't want people to think otherwise. I have fully documented it so that people can play with it and use it. You can play with it as well. I'll show you it in a second. This tweet came out and I think it's absolutely great: yes, friends should stop friends from building databases. Absolutely. Databases are complex things, so you need a lot of hardcore domain knowledge to really build a proper one and get it performant, and maybe I shouldn't be trying to build my own database that runs offline. I think there's another thing that comes with this statement as well, which is that sometimes it's really great to lift the bonnet and tinker with an engine, because in doing so you find the limits of a technology, and that gives you ideas about possibilities that you wouldn't have thought about otherwise. I often do this: play with technologies just to find out where the borders are. This project is definitely that.

[00:17:09] Before I get into doing the demo, I'm only going to show you a tiny bit of code and then I'm going to show you how it works, just so you know how easy it is to set something like this up. For free-text search, you need to create an index. There's no real value in indexing everything. There are certain things, like boolean information, that wouldn't be good. You're really interested in specific pieces of text, so you need to decide which pieces of text you want to index. All the options do is say, "These fields", in this case the title field and the text field, "I want you to index those. On the title field, can you give it a boost of 20?" A boost basically tells the index that that text is more important. If you find the term in there, that's more important than if you find it in the main body of the article, in this case. Once you've created the options, it's really simple to set Mani up. You basically instantiate an instance of it, giving it the options, and it then provides you with an index. You can produce multiple indexes if you want to. In this case, we just produce the one, and once you've got the index you add your article, your record, your item to it. It's added as a JSON object with a series of properties. For the demo that I'm about to show you, what I've done is gone onto the Async site and taken all of the events that have ever happened at Async and turned them into JSON objects. We're actually running in a browser here, so I've loaded up Mani and I've loaded up the JSON so that we can play with it. Just doing this here.
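
To make that concrete, here's a minimal sketch of the setup just described. It assumes an API shaped like the one in the talk; the exact option names (text.fields, boost) are illustrative guesses, not Mani's documented interface.

```javascript
// A sketch of the setup described above; option names are assumptions.
var mani = require('mani');

var options = {
  text: {
    fields: [
      { name: 'title', boost: 20 }, // matches in the title count far more
      { name: 'text' }              // main body text, default weight
    ]
  }
};

// Instantiating Mani with the options gives you back an index
var index = mani(options);

// Each record is added as a plain JSON object with its properties
index.add({
  title: 'Promises in JavaScript',
  text: 'An evening of talks about promises and async patterns...',
  tags: ['promises', 'async']
});
```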

[00:18:52] Doing the search is pretty simple as well: basically, you call index.search, and then I'm going to search on text, and the text I'm searching for is "promises". The next few screens are going to be set up like this, and I'll sit down and show you how they work. On this side, on the left-hand side, you've got a text area with some JSON in it, which is going to be the search terms that I type in. On the other side, it's just going to print out whatever results come back from our data set, which is all of the events from Async. At the moment it's fired up with "promises", and these are the four events it's found that have the word "promises" in them. If I change that to CSS, you can see it pretty much straight away changes over to CSS. I can do CSS3 if I want to, then back again. We can do things like "modules"; you can see that it's picking up all of the stuff with modules. I think at this point I should tell you that what I've actually done is use a whole series of underlying libraries to create this. Obviously, creating it from scratch would be a bit of a silly thing to do; I just wanted to see where the technology went. The underlying library that runs this is a thing called Lunr.js, which some of you may have heard of before, which is basically a free-text search engine. It's originally based on the concept of Solr, which is one of these Lucene engines. What it does, very quickly, is take a piece of text and tokenise it, and by tokenising, all I mean is it breaks it up into words. Then you've got a whole load of words and it does this thing called "stemming", which sounds really heavy duty, but all that is, is taking things like "started" and turning them into "start", and anything like "starts" would go to "start" as well. Basically, it compresses each word down to its smallest term. This particular one uses the Porter stemmer, by Martin Porter, if any of you are interested in that type of stuff. That's generally the one that's used for English.
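
As a sketch, the search call from the demo looks something like this. The `text` property matches what was typed into the demo's JSON box; the result shape (an `items` array with a `score` injected into each record) is an assumption based on the description that follows.

```javascript
// Free-text search: pass a JSON object of search terms, get back the
// matching documents with a relevance score injected into each record.
var results = index.search({ text: 'promises' });

results.items.forEach(function (item) {  // `items` is an assumed name
  console.log(item.score, item.title);   // bigger score = better match
});
```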

[00:21:02] It's pretty good, and when it searches through, what it actually does is provide a ranking. When the results come back to me, it injects a score into the JSON record, and the score is just a number. The bigger the number, the more it matches, and then I just order on score. You can see I've also got in there, on top of that, my own paging. I can say, okay, page me down to just two items, and then, if I want to, start at item two. That way I can create pages through the whole thing, literally by just modifying a start number; the limit is basically the page size. I set Mani up so that the results come back without paging information if you don't give it a start or a limit. As soon as you give it a start or a limit, it also injects the paging information, so you can quickly produce a UI for paging if you want. It also has a sort. As I said, it actually sorts, though I don't think there's much practical use here. All I'm doing, hidden behind the scenes, is defaulting text searches to sort by the score, in reverse order so the best match is at the top. That's basically the free-text stuff. Facets. I didn't really use a library for facets; I just built it myself. It's about 80 lines of code. I think I could even compress it more than that. Facets are actually quite powerful. At the moment this is the tags; you can see that we've got a hundred-odd tags there. What else can we look at? Okay, so we can do, you'll have to excuse my typing, there you go. There. We're nearly at the point where Middle Street is just about to take over from the Skiff as the venue of choice. You can see history moving on a bit like that. Or we could do, more interesting maybe, speakers. That's actually an array. I built into this a whole JSON pathing thing. When I'm doing the path there, it's basically pathing into the JSON and saying, "Give me the object speakers. That's an array, so give me the first item of it, and then give me the name of the speaker." There you can see Dan has done the most talks. That's interesting. I'm down there though, at number three.
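
A hedged sketch of the paging and facet queries typed into the demo; the property names (`start`, `limit`, `facets`) and the JSON-path syntax are my guesses at the shape from what was on screen.

```javascript
// Paging: limit is the page size, start is the offset. Because both
// are given, paging information is injected into the result as well.
var page = index.search({
  text: 'css',
  limit: 2,   // two items per page
  start: 2    // begin at the third item
});

// Facets: counts of documents grouped by a JSON path. `speakers` is
// an array, so the path drills into the first entry's name.
var byTag = index.search({
  facets: [{ path: 'tags' }]
});
var bySpeaker = index.search({
  facets: [{ path: 'speakers[0].name' }]
});
```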

[00:23:46] We can start to add a few extra bits to that. Again, I can put a limit in. If I just want four of them, I can facet-limit it. One of the things that's quite interesting when you're dealing with tags is that they can often come in mixed case. I quickly put in a lower-casing system, because I find that quite useful when you're dealing with them. We've got those there. The next bit is that you can combine the facets with the free-text search. If you see down at the bottom there, it's got text equals CSS. What we've got here is a facet structure related to the speakers that have talked about CSS and the counts of talks they've done. It's starting to combine the two different things together. You have to remember, this is all operating in this browser. None of this is going back to the server. Obviously, then, I've got my free-text search and I've got my facets, but I probably still want to do structured queries. Structured queries are what you classically do with a database. There are lots of ways to query; at the time I was doing this, I was doing a lot of MongoDB work, so I've basically used a cut-down MongoDB querying system. Here we've got sponsors equals null. It's basically saying, "Give me all of the talks that have never been sponsored", and that's the list there. We can do much more complicated operators. This is like "give me everything from a certain date". I can go, okay, from '15, so that's now cutting down in size. I can go into the month if I want to, and cut it down that way. Then you can start to add them together. You can also do, here you can see on the date, two bits in it as well: I can do "from" and "to" as a date range. Down at the bottom, I've got something like the sponsors being null. Or I could switch that around if I wanted to and do something like, if I can remember the syntax, I think it's going to be "not equal". Yes, those are the ones.
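
The structured queries, sketched with the cut-down MongoDB-style operators just mentioned. The operators Mani actually supports are listed on the site; treat the names here as illustrative, borrowed from MongoDB's syntax.

```javascript
// Talks that have never been sponsored: a null equality check
var unsponsored = index.search({
  query: { sponsors: null }
});

// A date range plus a not-equal check, combined with free-text search.
// ISO date strings make the comparisons work as plain text.
var sponsoredCss = index.search({
  text: 'css',
  query: {
    date: { $gte: '2015-01-01', $lte: '2015-06-30' },
    sponsors: { $ne: null }   // only talks that *were* sponsored
  }
});
```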

[00:26:17] Which way is it? That's it. There we go. I've got them listed on the site. Being dyslexic doesn't help when you're trying to work out which way around the letters go. Anyway, you can see it does structured queries. It's not the full list of what you get on MongoDB, but it's a reasonable list that allows you to do structured queries, which you can also join together. That's all well and good. The next thing I want is geo, and because there are lots of ways of searching on geo, like searching within a bounded area, let me say the one that I really like is the nearby type of search. I took the Geolib library, and if you've not used it before, it's a fantastic library. Really great. It can do stuff like finding what's nearest to you. It does calculations on longitude and latitude, takes into account the curvature of the earth and stuff like that. Really clever. Again, if you're going to do geo stuff, you need to create an index, because you need to tell it where the geo information is. I'm going to swap data sets now. I have a list of places that I visit; it's got about 190 entries in it at the moment. I just keep it as part of some of the work that I'm doing. I'm going to use that data set. Within there, each place has a location object with the longitude and latitude in it. You just add that to the options and it will create the geo index. I'm also creating a text index at the same time from the same data. Going again on the geo, you can see here, basically, how the search is kicked off: you just say "nearby" and give it the longitude and latitude. That longitude and latitude is for this room, I believe, or somewhere near this room. If I start to delete out the specificity of it, you can see that it drops away and goes to a slightly wider position. You can do an offset. If I set that to zero, you can see Middle Street comes up first, because that's the nearest. That's within one metre of where I am, which is about right. Clearleft is 14 metres away. I don't know why; that's probably because they were a bit further over that way when I took the reading.
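
A sketch of the geo setup and nearby search just described. The `geo.path` option name and `nearby` search term follow the talk's description rather than documented API, and the coordinates are just an example point in Brighton.

```javascript
var mani = require('mani');

// Tell the index where the geo information lives so it can build a
// geo index alongside a text index, from the same data.
var places = mani({
  text: { fields: [{ name: 'name', boost: 10 }] },
  geo: { path: 'location' }   // each record has location.latitude/longitude
});

places.add({
  name: 'The Skiff',
  location: { latitude: 50.8226, longitude: -0.1388 }, // example coords
  tags: ['coworking']
});

// Nearby search: results come back ordered by distance, nearest first
var nearest = places.search({
  nearby: { latitude: 50.8229, longitude: -0.1363 }
});
```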

[00:28:37] Literally, the number that goes in offset deletes from the top of the list. That's really useful if you're doing apps where you want what's nearest to where I am at the moment, but you don't want to include where I actually am. Also, you can just keep going if you want to. For a bit of a laugh, if you noticed, look at the distances: I've added the concept of British distance, which we don't have any single measurement for. For me, anything that's nearby is measured in metres. If we just keep knocking it out, after that it goes to minutes walked. Part of the geo library I've made available is a conversion from metres to miles and kilometres and stuff like that. All I've done is convert the miles into minutes walked, at about 22 minutes a mile, and that gives me how far I go. Then obviously, if you go out a little bit further, as soon as you get over a mile I start talking about miles. That's obviously British measurements, which I've managed to include in there. As with the other stuff, this becomes most powerful when you start to add it together. You can add nearby searches, and then I can do a structured query on the tags to say "pubs". Very important for after we've finished, so we know exactly where the nearest pubs are. Of course, the Hop Poles is nearest, at 52 metres. There we go. I think that works quite well. Okay. That's the end of the demos. Just to explain a little bit. I'm not going to go into masses of detail about the architecture, but just so you know how it works. Basically, there's a document store that holds the JSON. For the time being, I'm actually holding it in memory, just because it was easy to do while I was building it. You add, update and remove from the document store. I then added event emitters. I actually built this as a Node project and Browserified it, so it's using the Node event emitter. Every time a document is changed in any way, whether it's added, removed or updated, it fires events. The two indexes on either side, if you create them, listen for those events. Then they take that document and create what they need, so that they can do their own searches.
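
A rough sketch of that event-driven wiring, assuming Node's EventEmitter as described. The class names, event names and document shape here are illustrative, not Mani's actual internals.

```javascript
var EventEmitter = require('events').EventEmitter;
var util = require('util');

// The document store holds the raw JSON (in memory for now) and
// fires an event every time a document changes in any way.
function DocumentStore() {
  EventEmitter.call(this);
  this.docs = {};
}
util.inherits(DocumentStore, EventEmitter);

DocumentStore.prototype.add = function (doc) {
  this.docs[doc.id] = doc;
  this.emit('add', doc);      // the indexes listen for this
};

// An index only needs to subscribe and build the structure it needs;
// the geo one, for instance, keeps just the ID and the location.
function GeoIndex(store) {
  var entries = this.entries = [];
  store.on('add', function (doc) {
    if (doc.location) {
      entries.push({ id: doc.id, location: doc.location });
    }
  });
}
```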

[00:30:55] For the text one, that's a really complex structure which stores how to do the searches. For the geo one, it's really simple: it's just the ID and the location. Basically, the linkage between them is the single ID. Then, when I execute a search, there's a piece of logic at the top that looks at the JSON that you've passed in (you saw the JSON as I built it) and works out how to fire each of these in sequence, in the right sequence. They do their search and return their IDs. That creates a subset of IDs, which is then passed to the next bit that does its work and filters using those IDs, and that's how I can integrate one type of search with another to provide a unified output at the end. That's how it works. As long as your document store supports events, so if you could do something with IndexedDB or Dexie or some of the other offline storage things, like PouchDB and stuff, as long as it's got events in, I can swap that central bit out and make this work. That's what I would plan to do if I moved it forward, but for now it was just done in memory. There are a few other extra bits to it which you really need if you're going to do this type of thing, which is to serialise your data. Basically, you can see there, when I create an index and add documents to it, there are two extra functions. There's one which is index-to-JSON, which takes the whole thing, the compressed indexes plus the document store, and builds a JSON file. That's important because building the index, especially the text index, can be quite expensive. It takes quite a lot of CPU, so it's good to offload it and persist it if you want to. That's often done with Lunr.js for static builds. If you want to use this type of technology when you build a blog, where each time you do a post you build it statically, you can create the index at build time, deploy it, and just load it straight up and use it that way, rather than building the index on load.
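
A sketch of that serialise-and-load cycle. The `toJSON` function matches the one named in the talk (assumed here to return a plain object); the load-side `fromJSON` function is hypothetical, standing in for the parse function mentioned next.

```javascript
// At build time (e.g. a static blog build): do the CPU-heavy indexing
// once, then write out the compressed indexes plus the document store.
var fs = require('fs');
var serialised = index.toJSON();   // assumed to return a plain object
fs.writeFileSync('search-index.json', JSON.stringify(serialised));

// In the browser: hydrate from the prebuilt JSON instead of
// re-indexing on load. `fromJSON` is a hypothetical parse function.
var loaded = mani.fromJSON(prebuiltJson);
```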

[00:33:09] Again, obviously, I need a parse function to pull it back in again. Once you've got that, you've got the next stage: of course, as soon as you can serialise something, you can persist it. In this case, again, I just did the simplest thing because I just wanted to see how it would work. I took localForage and forced it to use IndexedDB as the store, mainly because it's bigger than any of the other browser storage mechanisms. It dumps the whole thing in as a single file. It literally just dumps it in. I'm just using that because I want the larger storage. The persistence library is pretty small. All it does is listen for those events. As soon as anything changes in the document store, it asks for a new version of the JSON and persists it back into the store, so it keeps it up to date all the time. I don't have to do any other calls. Once you put this piece of code in, it automatically persists to the browser, and at start-up it will lift and load as well. As I said, it was built on top of other people's shoulders, so I'd better give them the kudos they deserve. The free-text was done on top of Lunr; the query engine I pinched from NeDB, and I'm not quite sure how to pronounce that. Geolib was the nearby search, and localForage did the persistence. There is still quite a lot missing from this to make it into something that maybe I would use. It depends on what type of project you're doing. The things you might want to think about: sync. How do you sync between the server and your device or your browser? Obviously, things like PouchDB have got sync built in, and that's one of their strong bits. There are some things I could use, and maybe by using something else to do the data store I'd get sync, so maybe I could get this on top of something else like PouchDB or one of the other ones. Real-time updates: the type of stuff that you get with Pusher or with Firebase. When you're building streaming-type applications you might want real-time updates. It depends on what type of application yours is which of these ways of getting data in you want. I've got some examples with web workers. You definitely want to use web workers to offload this type of work. Mani is fine; I've run it through a web worker and it absolutely works fine that way.
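
The persistence layer, sketched from that description. localForage's `setDriver`, `setItem` and `getItem` are its real API; the `store` events, `toJSON` and `fromJSON` calls carry over the assumed names from the earlier sketches.

```javascript
var localforage = require('localforage');

// Force IndexedDB as the backing store for the larger storage quota
localforage.setDriver(localforage.INDEXEDDB);

// On any change in the document store, re-serialise the whole thing
// and dump it back in as a single value, so it stays up to date.
['add', 'update', 'remove'].forEach(function (event) {
  store.on(event, function () {
    localforage.setItem('mani-index', index.toJSON());
  });
});

// At start-up, lift and load any previously persisted copy
localforage.getItem('mani-index').then(function (saved) {
  if (saved) {
    index = mani.fromJSON(saved); // hypothetical load function, as above
  }
});
```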

[00:35:31] There is a methodology for actually baking it into the actual files so people don't have to do it themselves; I haven't done that yet. I could build it onto IndexedDB rather than memory storage; I want to try that out. The interface is really simple at the moment; I think I should move it over to promises. I think that's a no-brainer. The last thing, which is actually more difficult and where I used a bit of sleight of hand, is that I'm not typing the data at the moment. You know that bit where I was showing you the dates and stuff? That was actually a text comparison, but because I'm using ISO dates, it actually works. Once you start moving to things like time zones and stuff like that, then you need to actually type your information. Which means that when we're creating the index, I need to go through and say, "This is the data you're going to get and these are the types it's going to be. It's going to be a number, it's going to be a string, or a date", so that the comparators can kick in. I have actually got the comparators built into the system; I just haven't put the schema in place yet to do that data typing. It is just an experiment; there is loads of other stuff out there. Maybe you don't need to reinvent the wheel. You should look at some of this other stuff. Elasticlunr, which is a variant of the Lunr one that I'm using, uses a different scoring mechanism. Lunr is more akin to the scoring mechanism that you'll find in Solr; obviously, Elasticlunr is more akin to the Elasticsearch one. That's important if you want to use server-side search and then replicate it locally, because otherwise you'll get different listings and that will be a bit uncanny valley for some people.
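
A tiny illustration of why the untyped comparison gets away with it: ISO 8601 date strings sort lexically in the same order as they do chronologically, which breaks down once mixed time-zone offsets appear.

```javascript
// Lexical string comparison agrees with chronological order for ISO dates
console.log('2014-06-12' < '2015-01-03');                     // true
console.log('2015-06-18T18:30:00Z' < '2015-06-18T19:00:00Z'); // true

// ...but not once time-zone offsets differ; typed dates are needed then
console.log('2015-06-18T19:00:00+02:00' < '2015-06-18T18:30:00Z');
// false as strings, even though 19:00+02:00 is 17:00Z, i.e. earlier
```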

[00:37:16] IndexedDB Promised, a great little library by Jake Archibald, basically wraps some of the nastiness of IndexedDB up into a promise library. It's only about 3k. It doesn't completely take away the nastiness of IndexedDB. If you really want to take away all of that stuff, a much higher-level API is Dexie. Really good. Again, it really does abstract you away from all the mess that is IndexedDB. PouchDB has obviously got sync, fantastic. You could possibly use that. Hoodie, which builds on top of PouchDB and CouchDB, is another project you should maybe look at. Pusher and Firebase, basically, for the real-time messaging. I think it depends on your application design.

[00:37:58] Wrapping it all up. What did I learn? It was an experiment, so there must be some learning out of it; that was the whole point of it. I think the first thing, from the design side, is that it is okay to build asymmetric offline experiences. It's a valid design decision. What do I mean by that? Probably the opposite of saying it's isomorphic, that horrible word for when it has to be exactly the same on both the server and the client. I hope I've shown you some nice designs, or nice design ideas, which are not completely isomorphic. It doesn't mean one way or another; it's a bit like the whole question of whether you build a single-page app or not, but I think you need to think about it and make a positive decision, not just follow everybody else and follow the crowd. There are some really good examples of that. I think the other thing to say is that you should probably follow the other tenets that have come with responsive design, which is: don't create a completely different experience; try to pare down the one that you've got and produce something that's very similar, or rather the same as what you've got. It's like a pared-down version. I think that's the best way. Occasionally, that's not possible. A good example of when it's not possible is the Android version of Maps, where they can't allow you to view the whole of the world when you're offline, because that's just ridiculous. The database they'd have to store on the phone would be huge. They are allowing you now to capture certain areas that then go offline, and that's provided as a separate piece of functionality. That's quite interesting. I think that was one of the big learnings for me.

[00:39:43] Okay. This type of document search, the free-text search, the geo, mixing that with structured queries and facets and stuff: you can provide it. I think I've just demonstrated that to you. The caveat is that it works within very tight constraints. It's only certain sizes of data that you can deal with; you have to be sensible. I've shown this to a few people, and the first thing they've asked me is, "How big? How big can I go?" The answer is, it depends. It depends on the size of your indexes. How much text are you indexing? If you're just indexing titles, then you can do quite a lot, but if you want to take the whole body text from blog articles and index that, you'll end up with a very big index. It depends on your design as well. A lot of it comes down to some of the other constraints that we were talking about. It's not obvious straight away, but things like the frequency of updates are really important. If you have something like a 10MB database that you've stored, well, that's fine. You can probably deal with that. You could probably download that over time. But if that database has to refresh every three or four days, what's going to happen to someone's data plan as you continually keep pumping data at them? When do you get to do that? When do you get to actually update it? When they open up the webpage? Are you going to start downloading a few MB of data straight away? It depends on the profile of what you're trying to achieve. Within certain profiles, and you have to look at it, you can get away with it.

[00:41:15] The best way to explain is maybe by example. One example that I've just used it for, which I found really useful, was a little app for me and my friends for The Great Escape. You know The Great Escape? I think it's on here. It's 450-odd bands, 30 venues, and they play twice each. Perfect. It's too much for you to go through and scroll through; 400-odd bands playing twice, that's ridiculous. They've all got things like genre tags added to them. I can say: on Tuesday, in the afternoon, what bands are playing electronic music? Better than that, if I get to one of the venues, which I often did, and it's full: okay, I'm here, what are the nearest venues to me so I can go to another one? Perfect. Small data set. Too big for you to scroll through, but just perfect for that type of searching. You could probably stretch from hundreds to thousands. If you get over the five thousand, ten thousand mark, it will really start to melt. The issue will not actually be performance in terms of searching; it will be about downloading and keeping the data up to date. I think I've proved that the techniques and approaches work, but I will say that this is not production code. Don't take it and try to use it as it stands. It's just there to prove a point. If I'm brutally honest, I've got it 90% of the way there and I'm not fixing it up to 100%, because I know people will start using it and then I'll have to start supporting it. I already spend a month a year looking after the hapi-swagger stuff and a lot of the hapi modules. That's quite enough as it is. Maybe I might do it. I purposefully nobbled it, if I'm honest. That's it.