GOPHER 2.0 - THE CLIENT

When I first sat down to write this final part, I ended up with
5,500 words. Oh no, too much! I've spent the rest of the week
chopping it mercilessly down to a fraction of that.

The three perspectives
=================================================================

Once again, I find it helpful to consider a client through the
eyes of: developers, content creators, and end-users.

1. Developers

As a developer, all I really care about is simplicity and lack of
ambiguity. I want to be able to write a minimal client in 20
lines of code. The only thing that worries me about building this
client so far is the cryptography element.

2. Content creators

Okay, this is the interesting one for me. I want this client to
not just be a way to *read* content, I want it to make it easy to
*create* content.

How cool would this be: point your client at your "local site" -
a directory of content - and view it exactly as if it were on a
server. *Then* when you want to edit a page, hit a command and
now you're editing the content (in either an in-client editor or
an external editor of your choice). When you save and exit the
editor, you're back in the client, looking at the new content.
Want to publish to a server? Just rsync the site up, as-is.

Now *that* is what I call simple.

3. End-users

As a client end-user, I want a great reading experience on my
desktop/terminal/phone/vr headset/neural jack. I also want
bookmarking, tabbed browsing (and maybe something neat like a
navigable history tree), and I want tons of keyboard
customization, etc.

My developer and end-user selves don't get along very well. :-)

That's it
=================================================================

In a high-level way, that completely covers my wishlist. Feel
free to stop right here and fill in the details with your
imagination. What follows are additional details, thoughts, and
ideas.

So how would that "local site" thing work?
=================================================================

It's a classic problem: to have a usable Web "site" which works
locally without a server, you have to use all relative links
(like "../../foo.html").

There are two problems with relative paths: First, it's really
easy to get them wrong. Second (and this problem is much worse),
if I *move* a piece of content up or down a level, now all of my
relative links to *and* from that content are wrong and I have to
go change them.

(I remember when desktop website building tools tried to keep
track of relative paths - do *you* remember HotDog, FrontPage,
Dreamweaver? - but they always ended up screwing them up anyway.)

The alternative to relative paths is absolute paths. But how do
we make absolute paths that are portable? I think we already have
a very widespread example in VCS software like Mercurial and Git:
they traverse up directories until they hit a sentinel file or
directory. Let's say, for the sake of argument, that we specify
the root of a site with a file called "site_root". Local clients
simply search up directories until they see the "site_root" file.

Maybe then there could be different syntax to specify "site root"
paths with "/" and "server root" paths with "//" (I'm just
shooting from the hip here). These two links might point to the
same place:

    link:/diary/02-beans
    link://users/ratfactor/diary/02-beans

In the first link, we're looking for the site root, as defined by
the first "up" directory with a "site_root" file. In the second
link, we're going straight to the root of the server.
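Here's a sketch of that sentinel search, mostly to show how
little code it takes. (Go is just my sketching language here, and
the findSiteRoot helper is hypothetical - only the "site_root"
file name comes from the scheme above.)

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // findSiteRoot walks up from dir until it finds a directory
    // containing the "site_root" sentinel file (the same way Git
    // walks up looking for its .git directory).
    func findSiteRoot(dir string) (string, error) {
        for {
            if _, err := os.Stat(filepath.Join(dir, "site_root")); err == nil {
                return dir, nil
            }
            parent := filepath.Dir(dir)
            if parent == dir { // hit the filesystem root; give up
                return "", fmt.Errorf("no site_root above %s", dir)
            }
            dir = parent
        }
    }

    func main() {
        cwd, _ := os.Getwd()
        root, err := findSiteRoot(cwd)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // "link:/diary/02-beans" resolves against the sentinel
        // directory, no matter where the linking file lives.
        fmt.Println(filepath.Join(root, "diary", "02-beans"))
    }

Note that the "//" server-root form wouldn't need any of this on
the client side - the server would resolve those against its own
document root.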
I guess we also need a way to link to other domains. :-)

I feel like I'm getting off into the weeds on this one, but there
is one other benefit to this that has occurred to me...

robots.txt
=================================================================

I think the concept of a "site root" also provides an answer to a
problem I saw come up on the Gopher mailing list. [0] The problem
is: how do you control crawlers on a host with lots of user
content? For example, how do we SDF users tell Gopher crawlers
that they should not crawl a selector? Well, right now we can't.
But if you could specify this information in the "site_root"
file, then you'd have that ability.

Directory-aware client path traversal
=================================================================

On a related subject, I think it might be really neat to have the
concept of "up" in a client: as in "up a directory".

What if sites were guaranteed to serve "something" for every
directory? It could be auto-generated by the server, or it could
be static content. That might be a neat way to provide navigation
in a ubiquitous way.

Search
=================================================================

What if the client and server had search built in?

Gopher has the concept of search, but doesn't provide any
guarantees about search being available on a site. The Web has
search, but it's completely ad hoc...and largely in the hands of
a single giant company whose name ends in "oogle".

What if site searching worked locally (provided by the client)
and remotely (negotiated between the client and server) and was
guaranteed to be available? You could have a personal wiki
created in this system which was searchable and editable locally
and searchable on a server. That would be cool!

Etc.
=================================================================

I have lots of other half-formed ideas. But beyond a certain
point, obviously, you have too many features and the "simple"
content delivery/authoring platform is lost. Also, I'm aiming for
under 1,000 words for this post, so I *have* to stop. :-)

Thanks for reading and putting up with all of this. I'll read
whatever feedback I spot on the Gophersphere. I suspect I might
write a few little follow-ups...but for the most part, I'm going
to try to post some non-technical content for a while.

Good night, Gophers! <3

[0] https://lists.debian.org/gopher-project/2019/05/threads.html
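P.S. Since I claimed a minimal client could be written in 20
lines, here's roughly what I had in mind: a sketch that speaks
plain old Gopher over TCP (send a selector, read the reply until
the server hangs up). The cryptography element I'm worried about
is exactly the part that wouldn't fit in a sketch like this.

    package main

    import (
        "bufio"
        "fmt"
        "net"
        "os"
    )

    func main() {
        if len(os.Args) < 3 {
            fmt.Fprintln(os.Stderr, "usage: fetch host:port selector")
            os.Exit(1)
        }
        conn, err := net.Dial("tcp", os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer conn.Close()
        // Gopher in a nutshell: send the selector, CRLF-terminated,
        // then print whatever comes back until EOF.
        fmt.Fprintf(conn, "%s\r\n", os.Args[2])
        scanner := bufio.NewScanner(conn)
        for scanner.Scan() {
            fmt.Println(scanner.Text())
        }
    }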