Offline computing: Miscellaneous
On my right, through the window, the orange sun sends me its evening
rays. In a cloudy blue sky, it slowly descends towards the horizon,
little by little letting the night take its watch; and I, mine. It is
in this quiet weekend atmosphere that I begin the first of my three
night shifts. And I take the opportunity to continue my series of
entries around offline computing--this one probably being the last.
It was with pleasure that I read canfood's post about my offline
computing (gopher://sdf.org/0/users/canfood/phlog/2022-08-13). He
explains his own way of doing things, which comes from his own
experience and meets his specific needs. As I said before, I am always
very interested in this kind of feedback on people's personal
practices, which I find really enriching.
Mainly he focuses on his use of RSS to follow the sites he frequents,
as well as youtube-dl to download content from streaming sites. I've
already covered my use of RSS, but I'd like to talk a bit about how I
handle all my multimedia content offline.
About books and comics
I have various books in Epub (DRM-free) and PDF format (PDF when I
can't find the Epub version). I was using Calibre to help me organize
my virtual bookcase, but it pulls in a hell of a lot of dependencies
(Qt5 and a bunch of Python) when you try to keep a simple and small
system. I love Calibre, really, but I ended up managing my books by
hand, sorted in directories with a "fr/n/name, surname/" structure
where "fr" is the language (French here) and "n" is the first letter
of the author's last name. The filenames have a fixed structure as
well: "Surname Name - Series Title 01 - Book Title (Year) [CATEGORY]".
The categories are made of two letters according to the type of the
book. I keep a list of them in a separate text file to help me
remember them (it's in French):
---8<--- categories.txt
AV : aventure
BI : biographie/auto-biographie
CL : classique
DI : dictionnaire
FA : fantastique/fantasy
HI : histoire/historique
HO : horreur
JE : jeunesse
NA : N/A
PH : philosophie
PO : policier/enquete
RO : romance/romantique
SC : sciences
SF : science-fiction
SO : societe/politique
TH : thriller
XX : erotique/pornographique
--->8---
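For the curious, the shelf path above can be sketched as a tiny shell
function. This is only an illustration of the naming scheme, not a
tool I actually use; shelf_dir is a made-up name, and I assume "name"
in the structure means the author's last name:

```shell
#!/usr/bin/env bash

# shelf_dir: a made-up helper that builds the "fr/n/name, surname/"
# shelf path from a language code and an author's last and first names
shelf_dir() {
    local lang=$1 lastname=$2 firstname=$3
    # "n" is the first letter of the author's last name, lowercased
    local initial
    initial=$(printf '%s' "${lastname:0:1}" | tr '[:upper:]' '[:lower:]')
    printf '%s/%s/%s, %s/\n' "$lang" "$initial" "$lastname" "$firstname"
}

shelf_dir fr Verne Jules   # -> fr/v/Verne, Jules/
```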
And what about the metadata of the Epub files? Well, since Epub files
are just ZIP files, I edit the content.opf inside them transparently
with emacs(1). If you don't know it, Emacs opens ZIP files, lets you
edit their contents, and regenerates the ZIP on the fly when the
edited file is saved. Very handy. I would like to have a little
program to edit the metadata of Epub files (something like the
id3v2(1) command line tool for MP3) but I haven't found one yet. So,
for now, Emacs does the job.
To help me with the task, I keep a list of the XML entries I always
edit, based on what Calibre does (yes, I am quite picky about the
books I have, I like everything to be neat, even in the metadata):
---8<--- opfref.txt
<dc:title>Book title</dc:title>
<dc:creator opf:file-as="Surname, Name" opf:role="aut">Surname Name</dc:creator>
<dc:date>2000-01-01T23:00:00+00:00</dc:date>
<dc:publisher>Publisher</dc:publisher>
<dc:subject>Category</dc:subject>
<dc:language>en</dc:language>
--->8---
I read PDFs with zathura(1) (poppler plugin) and Epub files on the
computer with the CLI ebook reader epy (https://github.com/wustho/epy).
But lately, since Amazon decided in March 2022 to allow Epub on their
readers, I borrow my wife's Kindle Paperwhite (so much easier on the
eyes).
For the comics, I use the CBZ format. It's convenient since CBZ files
are just ZIP files. And when comics come in PDF format, I just extract
the pictures with pdfimages(1) (option "-j" to get JPEG) and create a
CBZ from them thanks to 7zip ("7z a -tzip Great-comic.cbz
Great-comic/*"). I read them with zathura(1) as well, thanks to its
comic book archive plugin.
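Sketched as shell functions (pdf2cbz and cbz_name are illustrative
names, not existing tools; they rely on pdfimages(1) from poppler and
7z as described above), the PDF-to-CBZ conversion looks like:

```shell
#!/usr/bin/env bash

# cbz_name: derive the archive name from the PDF name
# ("Great-comic.pdf" -> "Great-comic.cbz")
cbz_name() {
    printf '%s.cbz\n' "${1%.pdf}"
}

# pdf2cbz: dump the pictures of a PDF comic as JPEG files, then zip
# them up into a comic book archive readable by zathura(1)
pdf2cbz() {
    local pdf=$1 dir=${1%.pdf}
    mkdir -p "$dir"
    pdfimages -j "$pdf" "$dir/page"
    7z a -tzip "$(cbz_name "$pdf")" "$dir"/*
}
```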
About music and podcasts
This will be quick. All my music is in MP3 format and I use the
console audio player moc(1) to listen to it, and id3v2(1) to edit the
tags when needed.
For podcasts, I use podboat(1), a helper program for newsboat(1) which
queues podcast downloads into a file. I play them with mpg123(1).
On the go, I transfer music and podcasts to my phone, which has VLC
installed.
About television
When it comes to TV content (including what YouTube offers), I
obviously use the very popular youtube-dl. I keep a list of URLs of
videos to download, and when I have the opportunity I feed youtube-dl
with it. Since I keep video URLs in the same file as textual content
URLs, I precede them with a key character in order to tell them apart
easily afterwards with a script. Simply put, for "normal" links (i.e.
textual content) I use "l:", while for videos I use "v:".
The script I use is named videosync.sh; it's in bash and has two basic
functions and a case statement:
---8<--- $HOME/bin/videosync.sh
#!/usr/bin/bash

LIST=$HOME/x/files/linkbox.txt
YTDL="youtube-dl --config-location $HOME/.youtube-dlrc"

# to prevent hairy quoting and escaping in the messages later
bq='`'
eq="'"

# a function to display a title message as a full-width separator
function msg
{
    MESSAGE="--< $TITLE >"
    echo -n "$MESSAGE"
    printf "%$(( $(tput cols) - ${#MESSAGE} ))s" '' | tr " " "-"
    echo
}

# the function "check" determines whether or not a video URL (a link
# preceded by "v:") is present in the linkbox text file
function check
{
    TITLE="Checking presence of videos to download" msg
    if grep -q "v:" "$LIST"; then
        echo "Linkbox has valid urls: run ${bq}videosync.sh dl${eq} when ready."
    else
        echo "Nothing to download."
    fi
}

# the function "dl" will just download the videos to $DEST
function dl
{
    if ! grep -q "v:" "$LIST"; then
        echo "error: nothing to download."
        exit 1
    fi
    DEST=$HOME/tmp/
    TITLE="Downloading video list to $DEST" msg
    cd "$DEST" || exit 1
    for i in $(grep "v:" "$LIST" | sed 's/v://'); do
        $YTDL "$i"
    done
    echo
    echo "To empty the list queue, enter the command:"
    echo "sed -i '/v:htt.*$/d' $LIST"
}

# parse command line arguments
case "$1" in
    check)
        check
        ;;
    dl)
        dl
        ;;
    *)
        echo "error: bad or missing command; must be 'check' or 'dl'"
        exit 1
        ;;
esac
exit 0
--->8---
I launch "videosync.sh check" to verify the presence of valid URLs and
"videosync.sh dl" to download them to a predefined folder. The script
is quite straightforward, but it might be interesting to note that I
prefer to check for video URLs in the text file and start the download
manually (I like to review the list beforehand and edit it if needed).
Similarly, I added a hint to delete the entries from the text file
with sed(1) after the download: I prefer to do this manually as well,
having made sure that everything went correctly.
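To make that cleanup step concrete, here is the suggested sed(1)
one-liner run against a throwaway linkbox file (the file contents here
are made up): it deletes the "v:" entries in place and leaves the "l:"
text links alone.

```shell
#!/usr/bin/env bash

# build a throwaway linkbox with one text link and one video link
linkbox=$(mktemp)
printf 'l:https://example.com/article\nv:https://example.com/watch\n' > "$linkbox"

# the cleanup hint printed by videosync.sh: drop the video entries
sed -i '/v:htt.*$/d' "$linkbox"

remaining=$(cat "$linkbox")
echo "$remaining"   # only the l: line is left
rm "$linkbox"
```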
Others may prefer a more automatic solution, but I like to keep this
friction between these three actions (check, download, delete), even
if it means losing some time checking every step.
Just for the curious, here is my youtube-dl configuration, the most
important part probably being the preferred format (480p at most):
---8<--- $HOME/.youtube-dlrc
--no-part
--no-color
--external-downloader aria2c
--output '%(uploader)s-%(upload_date)s-%(title)s-%(id)s.%(ext)s'
--restrict-filenames
--format '[height<=480][ext=mp4]'
--->8---
About video games
I don't play much, but I have various video games that run well on my
Thinkpad X270, including:
- Celeste
- Duke Nukem 3D (eduke32 port)
- Doom (chocolate-doom and lzdoom ports)
- Enter The Gungeon
- NetHack
- OpenLara (open source port of Tomb Raider)
- Starbound
- Dosbox (various DOS games)
- Scummvm (various point'n click games)
Conclusion
To conclude, I think I've covered a wide range of my offline computing
usage. I have to say that, while I spend most of my time without
Internet, I don't feel disconnected at all. Of course I lack
synchronous communications, but I still manage to keep valuable
digital contacts. And when I can, I'm always very happy to log in to
SDF or tilde.town and spawn my IRC client for more direct and
instantaneous interactions.
I hope you enjoyed this little series of articles. I'd love to hear
about your own experiences, so please feel free to put them into words
and share them! I, and certainly many other people, will be delighted
to read them. In the meantime, take good care of yourself and your
loved ones, and be well.