<?xml version="1.0" encoding="utf-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Aaron Manning - Articles</title><link>https://aaronmanning.net</link><description></description><managingEditor>contact@aaronmanning.net</managingEditor><item><title>Unifying Sync and Backup - Part 1</title><link>https://aaronmanning.net//blog/unifying sync and backup - part 1.html</link><author>Aaron Manning</author><pubDate>Sun, 02 Feb 2025 05:00:00 UTC</pubDate><content:encoded><![CDATA[<h1>Unifying Sync and Backup</h1>
<p>For a little over three years I have been using a single Subversion repository as the way I keep all my files synced across multiple computers, much like how someone would use Dropbox or Google Drive. This article is a bit of the story of how I got here, why it works, and why it's not as crazy an idea as it probably sounds.</p>
<h2>Background</h2>
<p>The story starts when I was still a Windows user, and relying on Google Drive's desktop client to keep files backed up and synced between my laptop and desktop computer. This was adequate for some time, but it was rather rough around the edges, leading to occasional frustrations.</p>
<p>For instance, if I shut down my desktop computer too soon after making a change, it wouldn't finish uploading in time, and the change would therefore be missing on my laptop. The worst part is that because every individual edit is atomic, rather than grouped into logical units, I would sometimes not notice that a few files on my laptop were out of date until after making many conflicting changes. These small textual differences, in the context of computer programming, were often significant semantic changes. Because tools like Google Drive make the idealistic assumption that everything will always stay in sync, when such small synchronisation errors do happen there are no good facilities for handling them, which can make the problem much worse, with the stale version from my desktop computer overwriting the changes on my laptop entirely.</p>
<p>There is also no good way to tell Google Drive to exclude certain files which come and go frequently, such as compilation artefacts. The most extreme example occurred when programming in OCaml, where I noticed some extremely strange inconsistencies in my compiled files. Sometimes I would compile the code and get a binary which simply would not run, and on other occasions I would make a change to the code, recompile, and find that the change was not reflected in the binary I ran. Because at the time I had only been programming in OCaml for a few months, and was working with unusual compilation settings while trying to write my own standard library, I assumed I had done something wrong. On the contrary, it turned out that, because my build system created and deleted compilation artefacts so quickly, Google Drive was overwriting the latest artefacts with stale copies immediately after they were created. This was most stark when I cleared all the compilation artefacts with my makefile, only to watch them reappear in the file explorer within a second, restored by Google Drive, which didn't recognise the deletion as intentional.</p>
<p>Soon after this experience, I switched to Linux for reasons entirely unrelated to the above. When I did so, there were only a small handful of applications I used on Windows which weren't available, and Google Drive's syncing client was one of them. So I did the natural thing and tried a third party syncing client called <a href="https://www.insynchq.com/">insync</a>, hoping it would not be prone to the same problems as the official one. I was wrong. Insync, very frustratingly, treats file system renames as deletions and recreations. After renaming a large number of folders in bulk, I ended up with a huge number of inconsistencies, with duplicate files scattered across similarly named folders. That experience does not exactly instil confidence that one's data is safe from loss.</p>
<h2>The Solution</h2>
<p>So then I had a crazy idea: why not just use version control? It handles syncing gracefully because conflict resolution is explicit; differing local copies can coexist until they are reconciled when pushing to the server.</p>
<p>I had imagined a workflow like the following: when logging in to a computer, download all the changes from the server, then work on whatever I was doing. When done, push the changes to the server, ready to be pulled from another computer. If I ever forgot to push, then although the latest changes would be missing elsewhere, I could still safely work on the affected files, since Subversion would explicitly notify me about any conflicts when I later tried to reconcile them.</p>
<p>An obvious, although unintended, benefit of using version control is that I now have a full history of all of my files. If I want to see what my computer's filesystem looked like on 10 August 2022, I can just do that, without any difficulty or complication. I am also no longer afraid to delete something I am keeping "just in case", which keeps my filesystem much better organised and less prone to digital hoarding. I can just delete it; if in six months I realise that I needed it, it can be pulled from history.</p>
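<p>Concretely, Subversion makes this a one-liner, either by date or by revision. A sketch, with the server address, repository name, file name, and revision number as placeholders:</p>
<pre><code class="language-bash"># Show what changed around a given date
svn log -r {2022-08-10}:{2022-08-11} http://&lt;server&gt;/svn/repo

# Check out the whole tree exactly as it was on that date
svn checkout -r {2022-08-10} http://&lt;server&gt;/svn/repo repo-2022-08-10

# Resurrect a single deleted file from an old revision
svn copy http://&lt;server&gt;/svn/repo/notes.txt@123 notes.txt
</code></pre>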
<p>This also implicitly allows for a very natural solution to the problem of backup. I have my Subversion server running at home (previously on a <a href="https://www.raspberrypi.com/">Raspberry Pi</a>, now on an old <a href="https://frame.work/au/en">Framework</a> laptop mainboard <a href="https://frame.work/au/en/products/cooler-master-mainboard-case">running on its own</a>) which periodically, via a cron job, runs a full backup to <a href="https://www.backblaze.com/cloud-storage">Backblaze B2</a>. This means I have two local copies of the current version of my files, a full history of my files on a server at home, and a full backup of that history in the cloud. This solution is also less expensive than standard online file storage services: all of the software involved is free, the only ongoing cost of a Raspberry Pi is power, which is very low (and it is a cheap computer to start with), and my Backblaze bill is consistently only a few dollars a month, with hundreds of gigabytes stored.</p>
<h2>Why Subversion</h2>
<p>One matter I have yet to address is why I chose Subversion over another version control system, such as Git, Mercurial, or CVS. There are three reasons for this.</p>
<ol>
<li>
<p>Because I intended to run the server locally on a Raspberry Pi, I needed something that was easy to set up and for which I could easily find help online if something went wrong. Git and Subversion are the most popular version control systems, and likewise are far ahead of any alternatives in this respect. Setting up an Apache server for a Subversion repository is an afternoon project at worst, and now that I have written instructions for myself covering the process, it takes half an hour at most. I also ran into zero issues on my first dry run to see how it all works.</p>
</li>
<li>
<p>Given that most of my computer's files would live within the version control repository, I needed a system which could handle a huge number of files, some of them very large, without lagging on commits, updates, checkouts, and so on. Subversion is simply far better than Git at storing and compressing large binary files, which is why it is often chosen by big game development studios that need to store large assets. A 200 gigabyte Subversion repository is much closer to Subversion's typical use case than a 200 gigabyte Git repository is to Git's.</p>
</li>
<li>
<p>I use Git for tracking many of my projects, and I want the <code>.git</code> folder to be treated like any other folder from the perspective of this main file repository. This is an important one as my main repository will contain other Git repositories nested within it. Git submodules are simply an unacceptable solution for this; if you don't know why then just try using Git submodules...</p>
</li>
</ol>
<h2>The Downsides</h2>
<p>Have you ever read a blog post where the author describes some technology or program they've used for over a year and has only positive things to say about it? Whenever I do, I become extremely skeptical. Everything has a downside, and using anything for long enough will expose it. This is not one of those blog posts, and I will not be a blind cheerleader. I have two main words of caution about this approach to sync and backup for anyone wishing to try it.</p>
<ol>
<li>
<p>If you want to be able to commit and download files from anywhere, and want to host your server at home rather than paying for an online out-of-the-box solution, the setup time and ongoing maintenance will be higher. You will need to set up dynamic DNS or pay for a static IP address. You also have to worry about security and logins, rather than just piggybacking off your home network's security. Not wanting to fiddle with any of this, the trade-off I live with is that if I forget to download the files from my desktop onto my laptop before leaving for university or work for the day, I have to go without them that day.</p>
</li>
<li>
<p>Generally I update the files before working, and commit them when I'm done, on each computer. If I ever forget to do this, Subversion notifies me of the conflict, I resolve it, and everything is fine. Except for the one time I made a commit in a Git repository which fell under my main Subversion repository, and then made a separate commit on my other computer, without syncing the Subversion repositories. Subversion then identified conflicts within the <code>.git</code> folder, and as you may imagine, conflicts in this situation are nothing short of painful. Ultimately what I ended up doing was temporarily setting up a remote for the Git repository, pushing from one computer, and then dealing with the conflict when pulling from the other, which let me sync the two copies through the remote. I then deleted the repository locally on the first computer and recloned it from the web, and on my other computer deleted it entirely and recovered the first computer's copy through Subversion.</p>
</li>
</ol>
<hr />
<p>If you are interested in the technical details of how such a Subversion server is set up, continue reading <a href="/blog/unifying%20sync%20and%20backup%20-%20part%202.html">part two</a>.</p>
]]></content:encoded></item><item><title>Unifying Sync and Backup - Part 2</title><link>https://aaronmanning.net//blog/unifying sync and backup - part 2.html</link><author>Aaron Manning</author><pubDate>Sun, 02 Feb 2025 05:00:00 UTC</pubDate><content:encoded><![CDATA[<h1>Unifying Sync and Backup - Part 2</h1>
<p>This is a follow up to <a href="/blog/unifying%20sync%20and%20backup%20-%20part%201.html">the first part</a>, which outlines some of the motivation and justification behind using a Subversion monorepo to sync all files across different computers. As a companion, I thought I would go through some of the technical details of setting it up. This is not a tutorial, but with appropriate changes to the parts which are specific to my set-up, such as file paths, it could be followed as though it were one.</p>
<hr />
<p>For my current setup I am running <a href="https://ubuntu.com/download/server">Ubuntu Server</a>, hence the use of <code>apt</code> below and other Ubuntu-specific details. These steps can be completed on a fresh install.</p>
<p>The first step was to install Subversion and Apache.</p>
<pre><code class="language-bash">sudo apt-get update
sudo apt-get install subversion apache2 libapache2-mod-svn
</code></pre>
<p>Then I created a folder to hold the repositories.</p>
<pre><code class="language-bash">mkdir /home/aaron/subversion
</code></pre>
<p>The configuration file for Apache which will contain the Subversion server's settings can be found at <code>/etc/apache2/mods-available/dav_svn.conf</code>, and for my setup, I added the following to it.</p>
<pre><code class="language-xml">&lt;Location /svn&gt;
    DAV svn
    SVNParentPath /home/aaron/subversion
    AuthType Basic
    AuthName "svn-repository"
    AuthUserFile /etc/apache2/dav_svn.passwd
    Require valid-user
    LimitXMLRequestBody 8000000
    LimitRequestBody 0
&lt;/Location&gt;
</code></pre>
<p>Then I restarted the server by running</p>
<pre><code class="language-bash">sudo /etc/init.d/apache2 restart
</code></pre>
<p>and set the appropriate permissions with</p>
<pre><code class="language-bash">sudo chown -R www-data:www-data /home/aaron/subversion
</code></pre>
<p>Finally the user can be created, here by the name of <code>aaron</code>, by running</p>
<pre><code class="language-bash">sudo htpasswd -c /etc/apache2/dav_svn.passwd aaron
</code></pre>
<p>which gets stored in the <code>AuthUserFile</code> specified within the configuration file from earlier.</p>
<hr />
<p>With the server set up, a new repository named <code>repo</code> can be created by first creating its folder</p>
<pre><code class="language-bash">mkdir /home/aaron/subversion/repo
</code></pre>
<p>and then telling Subversion to treat this as a repository, and setting the appropriate permissions</p>
<pre><code class="language-bash">svnadmin create /home/aaron/subversion/repo
sudo chown -R www-data:www-data /home/aaron/subversion/repo
</code></pre>
<p>Before moving to the client computer, it is also important to grab the server's IP address with</p>
<pre><code class="language-bash">hostname -I
</code></pre>
<hr />
<p>Now, with the IP address in hand, checking out the repository is as simple as running</p>
<pre><code class="language-bash">svn checkout http://&lt;ip&gt;/svn/repo
</code></pre>
<hr />
<p>For my setup, I also do a sync to Backblaze by running</p>
<pre><code class="language-bash">b2 sync --delete --replace-newer /home/aaron/subversion/ b2://&lt;backblaze-bucket&gt;/
</code></pre>
<p>where <a href="https://github.com/Backblaze/B2_Command_Line_Tool">b2</a> is the Backblaze command line tool, and <code>&lt;backblaze-bucket&gt;</code> is replaced with the unique name of the bucket I am sending to. This does a one-way sync, meaning it forces the bucket to exactly match the local files.</p>
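<p>In my case this command runs periodically from cron, as mentioned in part one. The schedule below (daily at 3 am) is just an example entry for the server's crontab:</p>
<pre><code class="language-bash"># m h dom mon dow  command
0 3 * * * b2 sync --delete --replace-newer /home/aaron/subversion/ b2://&lt;backblaze-bucket&gt;/
</code></pre>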
]]></content:encoded></item><item><title>Rust is a Half-Baked Language</title><link>https://aaronmanning.net//blog/rust is a half-baked language.html</link><author>Aaron Manning</author><pubDate>Sun, 12 Jan 2025 06:00:00 UTC</pubDate><content:encoded><![CDATA[<h1>Rust is a Half-Baked Language</h1>
<p>Rust has been my programming language of choice for almost all of my projects for about three years. This, however, doesn't mean I like it. So, while I wait for the situation regarding many of the new systems languages (such as <a href="https://odin-lang.org/">Odin</a>, <a href="https://ziglang.org/">Zig</a>, or <a href="https://www.youtube.com/watch?v=TH9VCN6UkyQ">Jai</a>) to settle before deciding where I want to invest my time and energy long term, I decided to take the opportunity to vent about a particular style of problem that comes up frequently in Rust: many of the language's features feel half-baked, incomplete, or poorly thought out.</p>
<p>There are many open design questions or incomplete features that feel like the kind of thing one would expect in a pre <code>1.0.0</code> language which is still finding its feet.</p>
<p>To be clear, these are not, in my opinion, the biggest issues with Rust as a language, and my explanation of them will give no indication as to why I choose to use Rust anyway. However, I think drawing more attention to them can help the designers of future languages learn from the mistakes of a post <code>1.0.0</code> language that still feels strangely unfinished.</p>
<p>As I write this, I am just getting out some of the things that have frustrated me most recently, but I will add more sections to this article as new frustrations arise in my day-to-day work.</p>
<h2>Coarse-Grained Lifetime Management</h2>
<p>This is a bit of an overarching category of issues, where Rust is simply not keeping track of enough information with respect to lifetime management.</p>
<h3>Tracking References to Individual Fields</h3>
<p>Consider the following Rust program.</p>
<pre><code class="language-rust">struct Foo {
    a: i8,
    b: u8,
}

impl Foo {
    fn get_a_mut(&amp;mut self) -&gt; &amp;mut i8 {
        &amp;mut self.a
    }
}

fn main() {
    let mut foo = Foo { a: 0, b: 0 };
    let a = foo.get_a_mut();
    let b = &amp;mut foo.b;
    println!("{}", a);
    println!("{}", b);
}
</code></pre>
<p>This code fails to compile, with the following error.</p>
<pre><code>error[E0499]: cannot borrow `foo.b` as mutable more than once at a time
  --&gt; src/main.rs:15:13
   |
14 |     let a = foo.get_a_mut();
   |             --- first mutable borrow occurs here
15 |     let b = &amp;mut foo.b;
   |             ^^^^^^^^^^ second mutable borrow occurs here
16 |     println!("{}", a);
   |                    - first borrow later used here
</code></pre>
<p>If we remove the call to <code>get_a_mut</code> and replace that line with what the function actually does (i.e. manually inline the function), we get a working program.</p>
<pre><code class="language-rust">struct Foo {
    a: i8,
    b: u8,
}

impl Foo {
    fn get_a_mut(&amp;mut self) -&gt; &amp;mut i8 {
        &amp;mut self.a
    }
}

fn main() {
    let mut foo = Foo { a: 0, b: 0 };
    let a = &amp;mut foo.a;
    let b = &amp;mut foo.b;
    println!("{}", a);
    println!("{}", b);
}
</code></pre>
<p>In the fixed example, the following two lines of code</p>
<pre><code class="language-rust">let a = &amp;mut foo.a;
let b = &amp;mut foo.b;
</code></pre>
<p>are able to be identified as referring to different fields of the struct, and thus the mutable references do not overlap, much like the behaviour of</p>
<pre><code class="language-rust">let Foo {
  a, b,
} = &amp;mut foo;
</code></pre>
<p>So what is going on here? It would appear that once the operation happens inside a function, this extra lifetime information is lost. That is, the lifetime information given by the type signature of the function</p>
<pre><code class="language-rust">fn get_a_mut(&amp;mut self) -&gt; &amp;mut i8;
</code></pre>
<p>does not include within it the fact that the returned reference only refers to the field <code>a</code> within <code>self</code>.</p>
<p>While this example is somewhat contrived, a real world scenario where I have run into this problem is a field of type <code>Option&lt;T&gt;</code> from which I wish to return a value of type <code>&amp;mut T</code>, performing a form of unwrapping with specific behaviour for the <code>None</code> case. For this reason it actually makes sense to give the operation its own function rather than perform it inline.</p>
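<p>A minimal sketch of that scenario, with hypothetical names: a struct with an <code>Option</code> field and a helper which unwraps it, inserting a default in the <code>None</code> case. Uncommenting the marked line reproduces the same E0499 error as above, because the returned reference is treated as borrowing all of <code>self</code>.</p>
<pre><code class="language-rust">struct Cache {
    value: Option&lt;String&gt;,
    hits: u32,
}

impl Cache {
    // Unwrap `value`, with specific behaviour for the `None` case:
    // insert an empty string and return a reference to it.
    fn value_mut(&amp;mut self) -&gt; &amp;mut String {
        self.value.get_or_insert_with(String::new)
    }
}

fn main() {
    let mut cache = Cache { value: None, hits: 0 };
    let value = cache.value_mut();
    // let hits = &amp;mut cache.hits; // error[E0499], despite borrowing a different field
    value.push_str("hello");
    println!("{value}");
    println!("{} hits", cache.hits); // fine: `value` is no longer in use
}
</code></pre>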
<h3>Internal Mutation Before Returning Immutable Reference</h3>
<p>Keeping the example similar to the previous one, both fields now have type <code>u8</code>, and the function we are examining, <code>get_a_and_update_b</code>, returns an immutable reference to <code>a</code> after copying <code>a</code>'s value into <code>b</code>.</p>
<pre><code class="language-rust">struct Foo {
    a: u8,
    b: u8,
}

impl Foo {
    fn get_a_and_update_b(&amp;mut self) -&gt; &amp;u8 {
        self.b = self.a;
        &amp;self.a
    }
}

fn main() {
    let mut foo = Foo { a: 0, b: 0 };
    let a = foo.get_a_and_update_b();
    let b = &amp;foo.b;
    println!("{}", a);
    println!("{}", b);
}
</code></pre>
<p>Once again, this code does not compile. This is because, even though <code>get_a_and_update_b</code> is done mutating <code>foo</code>, and there is no mutable reference left to be found, the compiler still considers <code>a</code> to be the first mutable borrow which interferes with the immutable borrow of <code>b</code>. The particular error given by the compiler is shown below.</p>
<pre><code>error[E0502]: cannot borrow `foo.b` as immutable because it is also borrowed as mutable
  --&gt; src/main.rs:16:13
   |
15 |     let a = foo.get_a_and_update_b();
   |             --- mutable borrow occurs here
16 |     let b = &amp;foo.b;
   |             ^^^^^^ immutable borrow occurs here
17 |     println!("{}", a);
   |                    - mutable borrow later used here
</code></pre>
<p>The compiler plainly describes the use of <code>a</code> as a <code>mutable borrow</code>, even though <code>a</code> has type <code>&amp;u8</code>.</p>
<p>Most recently this occurred for me when working on an abstraction over a markup file, which looked somewhat like the following.</p>
<pre><code class="language-rust">struct File {
  ast: Ast,
  content: String,
  path: std::path::PathBuf,
}
</code></pre>
<p>I wanted a method on <code>File</code> with a signature as follows.</p>
<pre><code class="language-rust">fn update(&amp;mut self, new: String) -&gt; &amp;Ast;
</code></pre>
<p>This function should replace <code>content</code> with <code>new</code>, reparse the file and store the result in <code>ast</code>, and then return a reference to the updated <code>Ast</code>. However, the returned reference to <code>ast</code> is treated like a mutable borrow of the whole struct, even though it isn't one, meaning I cannot read <code>path</code> while still holding my reference to the <code>ast</code>.</p>
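<p>A compiling sketch of that situation, with <code>Ast</code> reduced to a stub: uncommenting the marked line produces the same E0502 error as before, even though <code>update</code> has finished mutating by the time it returns.</p>
<pre><code class="language-rust">struct Ast; // stand-in for the real parsed representation

struct File {
    ast: Ast,
    content: String,
    path: std::path::PathBuf,
}

impl File {
    fn update(&amp;mut self, new: String) -&gt; &amp;Ast {
        self.content = new;
        self.ast = Ast; // reparsing would happen here
        &amp;self.ast
    }
}

fn main() {
    let mut file = File {
        ast: Ast,
        content: String::new(),
        path: "notes.md".into(),
    };
    let ast = file.update(String::from("# Hello"));
    // let path = &amp;file.path; // error[E0502]: `ast` keeps the `&amp;mut self` borrow alive
    let _ = ast;
}
</code></pre>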
<h3>Self References Which Aren't</h3>
<p>This is partly a symptom of the first problem, and of the way the bottom-level abstractions over memory allocation work. Consider the following simple function.</p>
<pre><code class="language-rust">fn reference(array: Vec&lt;u8&gt;) -&gt; (Vec&lt;u8&gt;, &amp;u8) {
    let first = &amp;array[0];
    (array, first)
}
</code></pre>
<p>This code will not compile, and gives the following error (among others, although this one is the most insightful).</p>
<pre><code>error[E0515]: cannot return value referencing function parameter `array`
 --&gt; src/main.rs:4:5
  |
3 |     let first = &amp;array[0];
  |                  ----- `array` is borrowed here
4 |     (array, first)
  |     ^^^^^^^^^^^^^^ returns a value referencing data owned by the current function
</code></pre>
<p>Of course, Rust is trying to prevent dangling references; we cannot reference something owned by the local function, because that thing will be dropped at the end of scope and the reference will be invalid. In cases where the value is not dropped but rather moved, like the above, moving still results in the new data being in a different memory location, as the return value will be copied to a higher stack frame after returning.</p>
<p>However, in this case the reference is not to the data on the stack held by the struct <code>Vec</code>, it is data on the heap that most certainly does not move when returning.</p>
<p>Again, a contrived example, however I don't think it's a stretch to imagine that, as in the previous example, one may want to store references to substrings of a file's content on the AST, and doing so makes it impossible to store the two in a struct together.</p>
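<p>The usual way around this limitation is to store byte ranges into the content instead of references, and resolve them on demand. A hypothetical sketch:</p>
<pre><code class="language-rust">// Store indices rather than `&amp;str` slices, so the struct owns all of its
// data and there is no self-reference for the borrow checker to reject.
struct Span {
    start: usize,
    end: usize,
}

struct Parsed {
    content: String,
    tokens: Vec&lt;Span&gt;,
}

impl Parsed {
    fn token_text(&amp;self, i: usize) -&gt; &amp;str {
        let span = &amp;self.tokens[i];
        &amp;self.content[span.start..span.end]
    }
}

fn main() {
    let parsed = Parsed {
        content: String::from("let x = 1;"),
        tokens: vec![Span { start: 0, end: 3 }, Span { start: 4, end: 5 }],
    };
    println!("{} {}", parsed.token_text(0), parsed.token_text(1));
}
</code></pre>
<p>The cost is that the indices are unchecked by the type system: nothing stops a <code>Span</code> from outliving an edit to <code>content</code> that invalidates it.</p>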
<h2>Const Associated Items in Traits</h2>
<p>In a trait, you can create an associated type as a way to associate a particular type to each implementation, which the trait is not generic over. This is used for the output type in the <code>Index</code> trait for instance.</p>
<p>With the introduction of <code>const</code>, one would hope to do the same with a <code>const</code> item, such as a <code>usize</code>. Declaring one is fine, but that value cannot then be referred to in the type signatures of the trait's functions. As such, the following does not compile.</p>
<pre><code class="language-rust">trait Codec {
    const N: usize;

    fn encode(self) -&gt; [u8; Self::N];
    fn decode(encoded: [u8; Self::N]) -&gt; Self;
}
</code></pre>
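<p>The closest workaround I am aware of is to make the trait itself generic over a <code>const</code> parameter, which may appear in array types. The trade-off is that <code>N</code> is no longer determined by the implementation alone, and generic code must carry it around explicitly. A sketch:</p>
<pre><code class="language-rust">trait Codec&lt;const N: usize&gt; {
    fn encode(self) -&gt; [u8; N];
    fn decode(encoded: [u8; N]) -&gt; Self;
}

// A `u16` round-trips through its two big-endian bytes.
impl Codec&lt;2&gt; for u16 {
    fn encode(self) -&gt; [u8; 2] {
        self.to_be_bytes()
    }

    fn decode(encoded: [u8; 2]) -&gt; Self {
        u16::from_be_bytes(encoded)
    }
}

fn main() {
    let bytes = 0x1234u16.encode();
    assert_eq!(u16::decode(bytes), 0x1234);
}
</code></pre>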
<h2>For Loops (and similar) in Const</h2>
<p><code>const</code> is an extremely half-baked feature in general. For instance, <code>for</code> loops cannot be used within <code>const</code> environments, so the following will not compile.</p>
<pre><code class="language-rust">const TOTAL: u32 = {
    let mut total = 0;
    for i in 0..10 {
        total += 1;
    }
    total
};
</code></pre>
<p>This is despite the fact that the <code>while</code> loop equivalent does.</p>
<pre><code class="language-rust">const TOTAL: u32 = {
    let mut total = 0;
    let mut i = 0;
    while i &lt; 10 {
        total += 1;
        i += 1;
    }
    total
};
</code></pre>
<p>This is a consequence of the for loop's use of an iterator, and the fact that the iterator's functions are not <code>const</code>. However, one cannot mark a function in a trait as <code>const</code> under any circumstances, either in the implementation or the definition. With other kinds of function colouring, such as <code>async</code> and <code>unsafe</code>, if the trait definition has the modifier, so too must the implementation. The colouring is effectively part of the type signature, and thinking about it this way, normal functions are a subtype of unsafe functions. This works for <code>unsafe</code> in general, with the following compiling just fine.</p>
<pre><code class="language-rust">fn main() {
    a(x)
}

fn x() { }

fn a(_: unsafe fn() -&gt; ()) { }
</code></pre>
<p>But none of this works properly for <code>const</code>.</p>
<h2><code>if let _ and _</code></h2>
<p><code>if let</code> is a half-baked feature due to the lack of support for any additional boolean conditions. This makes the use of the keyword <code>if</code> really confusing. For instance, the following code does not compile.</p>
<pre><code class="language-rust">let mut map = std::collections::HashMap::from(
    [('a', 0), ('b', 1), ('c', 2),]
);

if let Some(value) = map.get(&amp;'a') &amp;&amp; map.len() == 2 {
    println!("Hello");
}
</code></pre>
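<p>At the time of writing, the usual workaround is to nest a plain <code>if</code> inside the <code>if let</code>. The sketch below adjusts the length check to <code>3</code> so that the branch actually runs against the three-entry map:</p>
<pre><code class="language-rust">use std::collections::HashMap;

fn main() {
    let map = HashMap::from([('a', 0), ('b', 1), ('c', 2)]);

    // What `if let ... &amp;&amp; ...` would express, written as two nested checks.
    if let Some(value) = map.get(&amp;'a') {
        if map.len() == 3 {
            println!("Hello, {value}");
        }
    }
}
</code></pre>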
<h2>Field Access in <code>println</code> (Edit: 2025/03/08)</h2>
<p>Previously, when using <code>println</code>, one would use <code>{}</code> to specify the location of arguments to be printed, and then provide those arguments after the format string, such as in the following example.</p>
<pre><code class="language-rust">fn main() {
    struct Point {
        x: f32,
        y: f32,
    }

    let point = Point { x: 10.0, y: 5.0 };

    println!("({}, {})", point.x, point.y);
}
</code></pre>
<p>Then support was added for putting the variable directly in the braces, so that a variable <code>x</code> could be printed with the following.</p>
<pre><code class="language-rust">println!("{x}");
</code></pre>
<p>However, this doesn't work with field access; that is, this code</p>
<pre><code class="language-rust">fn main() {
    struct Point {
        x: f32,
        y: f32,
    }

    let point = Point { x: 10.0, y: 5.0 };

    println!("({point.x}, {point.y})");
}
</code></pre>
<p>produces the error below.</p>
<pre><code>error: invalid format string: field access isn't supported
 --&gt; src/main.rs:9:17
  |
9 |     println!("({point.x}, {point.y})");
  |                 ^^^^^^^ not supported in format string
  |
help: consider using a positional formatting argument instead
  |
9 |     println!("({0}, {point.y})", point.x);
  |                 ~              +++++++++
</code></pre>
]]></content:encoded></item><item><title>The Broken Promise of Semantic Versioning</title><link>https://aaronmanning.net//blog/the broken promise of semantic versioning.html</link><author>Aaron Manning</author><pubDate>Sat, 07 Dec 2024 03:00:00 UTC</pubDate><content:encoded><![CDATA[<h1>The Broken Promise of Semantic Versioning</h1>
<p>There are many ideas in software engineering that <em>sound like good ideas</em> when you first hear them, but when held up to scrutiny, or used in a non-trivial setting, show themselves to be severely lacking. Usually this is through a kind of idealism which ignores the practical scenarios which arise in actual real world problems. I believe semantic versioning is such an idea.</p>
<p>For the uninitiated, semantic versioning is a versioning scheme defined by <a href="https://semver.org/">a standard</a> in which the version number specifies a formally defined difference in the code, rather than just expressing an intent. One of the defining features of semantic versioning that I will focus on here is the fact that <a href="https://semver.org/#spec-item-8">major version numbers are the only ones which may contain breaking changes after version <code>1.0.0</code></a>. That is, an upgrade from <code>1.*.*</code> to any other <code>1.*.*</code> must not contain a breakage to the public API of the program or library.</p>
<p>My problem with semantic versioning can be summarised briefly as follows: a significant amount of software does not correctly follow the semantic versioning specification, and so the claim or assumption that it does results in more bugs and frustration than if we did away with semantic versioning entirely and forced developers to treat upgrades with greater care.</p>
<hr />
<p>Let's start with the first major problem; semantic versioning violations are pervasive. Some research from last year, summarised in <a href="https://predr.ag/blog/semver-violations-are-common-better-tooling-is-the-answer/">this article</a>, found that across more than 14,000 releases of the top 1000 most downloaded packages on <a href="https://crates.io">crates.io</a>, the Rust package repository, around 1 in 31 releases and more than 1 in 6 packages had at least one semantic versioning violation. As written within the aforelinked post,</p>
<blockquote>
<p>Demanding perfection from maintainers would be naive, unreasonable, and unfair. Whenever hardworking, conscientious, well-intentioned people make a mistake, the failure is not with the people but in the system.</p>
</blockquote>
<p>The conclusion they provide though, which involves better tooling to detect such violations, is, to my eye, questionable. The ideal of semantic versioning is that when minor security updates, performance improvements, or bug fixes occur, the consumer of the software does not need to think or worry about breakages, and can just update without further consideration. Tools like Cargo will perform such an update automatically, despite there being no formal specification of what the "public API" of a Rust package is considered to be. From <a href="https://doc.rust-lang.org/cargo/reference/semver.html">the documentation</a>,</p>
<blockquote>
<p>These are only guidelines, and not necessarily hard-and-fast rules that all projects will obey... Almost every change carries some risk that it will negatively affect the runtime behaviour, and for those cases it is usually a judgement call by the project maintainers whether or not it is a SemVer-incompatible change.</p>
</blockquote>
<p>In fact, the guidelines focus on the API from the perspective of the compiler, not in terms of the actual behaviour of the code.</p>
<p>The semantic versioning specification says that all <a href="https://semver.org/#spec-item-1">"software using semantic versioning MUST declare a public API"</a>, yet the package manager performs updates automatically while leaving the definition of that public API up to the individual package developer. Cargo can therefore update a package in a way that breaks the behaviour of a program even when the developer has a well defined public API.</p>
<p>Moreover, it is well within my rights to turn a <code>max</code> function on an array into one which calculates the minimum without releasing a major version, so long as I document that the behaviour is not part of the public API. Cargo can then go right ahead and push my update to all of my unwitting users.</p>
<p>This shortcoming is fully admitted in the specification, which states</p>
<blockquote>
<p>This is not a new or revolutionary idea. In fact, you probably do something close to this already. The problem is that "close" isn’t good enough. Without compliance to some sort of formal specification, version numbers are essentially useless for dependency management.</p>
</blockquote>
<p>Well, guess what? People don't adhere to this specification, and there is no way they can be guaranteed to. Yet we assume that everything follows semantic versioning, each of us judging by our own idea of what counts as a breaking change, leading to a problem worse than the one the specification set out to solve.</p>
<hr />
<p>The question of what constitutes a bug, or what should be part of the public API of a program or library, is one which is very much open for debate. This ought not be a problem when developers properly document what is considered part of the public API of their code. However, here we come up against idealism again. Anyone who has spent enough time in software will have come across examples of what is known as <a href="https://www.hyrumslaw.com/">Hyrum's law</a>, even if not recognising it by that name. It states that</p>
<blockquote>
<p>With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviours of your system will be depended on by somebody.</p>
</blockquote>
<p>This has been thoroughly demonstrated throughout software, most extremely with bugs that have been depended on by enough users that they became features, including the undo button in Gmail, or dotfiles being hidden on Unix.</p>
<p>Therefore, familiarising oneself with this public API, and making sure that the only parts of the program which are depended on are specified in this API, is a necessary part of ensuring safe upgrades of your software. However, if one has to carefully check this to verify if an update is safe, why not just read the changelog? "Changelogs can be wrong or incomplete" I hear you cry. Thank you for making my point for me... just like breaking changes can make their way into minor versions. This is why a responsible software developer should actually run and test their software after performing dependency updates to verify that the behaviour is as intended.</p>
<hr />
<p>It is a serious indictment of the failure of semantic versioning that many projects have adopted the <a href="https://0ver.org">zero versioning</a> philosophy, where software sticks with a <code>0.x.x</code> version number for its entire lifespan, such that <a href="https://semver.org/#spec-item-4">nothing is considered stable</a> and there is complete freedom to make breaking changes. While I do agree that allowing breaking changes is often important to produce the highest quality software, if the solution is to always stay within the version number range in which, according to the spec, all bets are off, then semantic versioning serves absolutely no purpose.</p>
<p>The intended solution here seems to be to go <code>1.0.0</code> and create new major versions as breaking changes occur; however, ever since the Python 2 to Python 3 debacle, most developers seem too afraid to do so. There is so much fear around making breaking changes after <code>1.0.0</code>, with languages like Rust living with their mistakes indefinitely, that many projects stay <code>0.*.*</code> for as long as possible, or forever, despite <a href="https://semver.org/#how-do-i-know-when-to-release-100">the recommendation</a> on the semantic versioning website being</p>
<blockquote>
<p>If your software is being used in production, it should probably already be 1.0.0.</p>
</blockquote>
<p>And I don't think it's difficult to see why this fear exists. If I have a library which is <code>1.0.0</code>, under the semantic versioning scheme and any sensible notion of a public API, I can't remove a single function used by 0.1% of my users without updating to <code>2.0.0</code>. Some would say this is fine, just do the update, but now the version numbers of <code>major.minor.patch</code> express confusingly different intent from what they are supposed to, where a clearly minor update results in a major version number change.</p>
<p>This whole situation puts us in a weird middle-ground, where people like Andrew Kelley, who runs the <a href="https://ziglang.org">Zig</a> project, for a long time said that his software should not be used in production, despite getting much of its funding from companies using it in production... Now he says that people should use it in production, but it's not <code>1.0.0</code>, and he has also expressed complete willingness to make breaking changes. It's almost as though three little numbers don't actually express very much about the actual state of the software.</p>
<p>Most damaging though is this strange obsession over version numbers. Many software developers now treat <code>1.0.0</code> as though it is the end, not the beginning, because now they feel like they are locked in forever. This can't be good for the world of software. A project that actually gets this right is <a href="https://www.swift.org/">Swift</a>, which has made breaking changes part of the culture of the language, and provides <a href="https://www.swift.org/migration/documentation/migrationguide/">detailed guides</a> on how to upgrade when breaking changes occur.</p>
<hr />
<p>So what is the upshot of this?</p>
<p>For releasers of software, the solution is to go back to a simple <code>major.minor.patch</code> format of versions which expresses the <em>intent</em> of each version, rather than a formally specified description of how the changes affect users. Or even just use a <code>YYYY.MM.DD</code> date-based scheme. The point is to stop fetishising version numbers, and instead provide good documentation about the goals of the project, and good changelogs where breakages occur.</p>
<p>For consumers of software, and by extension developers depending on the code of others, the solution is to be extremely cautious of semantic versioning. Depend on one version of a package only, and treat updates with care and caution. Read the release notes of any new version to check for known breaking changes, and test your software on the new version to find any undocumented ones.</p>
]]></content:encoded></item><item><title>Zettelkasten</title><link>https://aaronmanning.net//blog/zettelkasten.html</link><author>Aaron Manning</author><pubDate>Fri, 29 Nov 2024 00:00:00 UTC</pubDate><content:encoded><![CDATA[<h1>Zettelkasten</h1>
<p>Among the seemingly thousands of blog posts, YouTube videos, and books on the Zettelkasten note taking system, there seems to be very little clear advice on what Zettelkasten actually is and why it would be useful. Instead any internet search about the topic is clogged up with descriptions of how "life changing" a given note taking app is, with people putting calendars, maps, and even databases in their Obsidian and Notion notes.</p>
<p>From the perspective of the reader, this eventually just descends into a pernicious form of <a href="https://calebschoepp.com/blog/2022/productivity-porn/">productivity porn</a>, where simply reading about the often insane things people are doing with their "plain text" notes gives the feeling of being productive, whilst spending little to no time focussing on the content of the notes, or the thing to be studied or created.</p>
<p>However, like the majority of ideas about how to be productive that have created an industry of charlatans, the useful core of the idea can be described in no more than a short blog post. As such, this post is my extremely brief description of the what, why, and how of Zettelkasten, without all the cruft.</p>
<h2>What?</h2>
<p>Zettelkasten is a note taking system with (usually) two defining features,</p>
<ol>
<li>notes are small, and typically focus on a single idea,</li>
<li>when a note needs to refer to another, it contains a link or reference to said note.</li>
</ol>
<p>In its original form, a Zettelkasten was a collection of index cards, each with a unique ID. Links between notes were just references to other notes by their ID. Nowadays, there are apps designed around making the management of links easier, and keeping notes stored digitally.</p>
<p>The digital incarnation of Zettelkasten can be thought of as a wiki: each note is a page, and keywords are linked to other pages.</p>
<h2>Why?</h2>
<p>As I see it, there are two reasons why this idea is so useful.</p>
<h3>Discoverability</h3>
<p>Suppose I am writing a note about ElGamal encryption. Such a note will surely refer to other ideas on which the method is based, such as the discrete logarithm, the Diffie-Hellman key exchange protocol, and even more foundational ideas like groups. Everywhere these ideas are referenced they are linked to their corresponding notes. This is particularly valuable in proofs, where even extremely minor asides, which would usually not warrant a reference to another theorem in a textbook but are too long to include in place, can just be a link within the text that can be skipped entirely if the claim is believed.</p>
<p>By creating links to all of these earlier ideas, other notes continue to receive attention, leading to updates and improvements.</p>
<p>When I am looking for something directly, I have a folder full of text files to search by name or content in whatever software I find is most appropriate.</p>
<p>My Zettelkasten for my mathematics notes currently contains 284,491 words over 1613 notes. It covers a variety of different subject areas, and was written over two and a half years. This way of taking notes, which allows new ideas to be collected in the same bucket, and forces the revisiting of previous work, is the only reason I can still find things after all this time.</p>
<h3>Reusability</h3>
<p>When you write notes digitally in a filesystem, the natural thing to do is to organise them into folders according to some notion of categorisation. Doing this in a collection of notes which you anticipate will be useful many years in the future is destined to fail, with many notes not fitting into clear mutually exclusive categories, and ideas of what are the "right" categories changing like the weather.</p>
<p>If I study from one book about algebraic number theory, and then from another about algebraic geometry, inevitably there are going to be points of overlap. The solution to this in Zettelkasten is to think not in terms of folders but instead in terms of contents pages. That is, there is a note about algebraic number theory which itself is just a list of links to other notes. This means folders can overlap by containing common notes.</p>
<p>This also makes them cheap to create. I can have a contents page for each book that I read, each University subject I have studied, and pages for different topics or paths through learning particular material. Discoverability of contents pages is then solved by a contents page of contents pages, which acts like a home page.</p>
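<p>As a sketch, such a contents page is itself nothing more than a note full of links, which is why creating one is so cheap (the note titles below are invented for illustration):</p>
<pre><code class="language-markdown"># Algebraic Number Theory

- [[Ring of Integers]]
- [[Dedekind Domains]]
- [[Ideal Class Group]]
- [[Minkowski's Theorem]]
</code></pre>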
<p>Another benefit of doing away with folders is that all of your notes can sit in one folder in the filesystem, thus allowing the filesystem to enforce uniqueness of the names of your notes, which helps with linking.</p>
<h2>How?</h2>
<p>For me, Obsidian was my entry point into creating a Zettelkasten and is still my recommendation for doing so, even though I no longer use it myself. It's flexible, whilst reliably doing the core things a Zettelkasten needs to do, and it allows you to keep all of your notes as plain text files which can be searched or manipulated with other software.</p>
<p>There are many other choices, <a href="https://zk-org.github.io/zk/">zk</a>, <a href="https://logseq.com/">Logseq</a>, <a href="https://www.zettlr.com/">Zettlr</a>, and <a href="https://www.notion.com/">Notion</a>, just to name a few.</p>
<p>As for me, I am currently in the process of migrating my notes over to a new format and my own app, designed around my particular desires. Yes, I do appreciate the irony of me now mentioning that I am building my own app, given my criticism of people focusing on the system of their notes rather than the content. I do however mention this for completeness and transparency given that my recommended app above is an app that I no longer use.</p>
<p>Now that you have read this blog post, it is time to stop sitting around thinking about how you will be productive by consuming the work of others, and actually start writing some notes. Embrace imperfection, and let the content drive the process.</p>
]]></content:encoded></item><item><title>Why Chalk?</title><link>https://aaronmanning.net//blog/why chalk.html</link><author>Aaron Manning</author><pubDate>Sat, 11 May 2024 05:00:00 UTC</pubDate><content:encoded><![CDATA[<h1>Why Chalk?</h1>
<p>As schools work to phase out dusty chalkboards en masse in favour of shiny new whiteboards, interactive whiteboards or projectors, there is one group of people clinging on to the old-fashioned way of doing things for dear life: mathematicians.</p>
<p>So why chalkboards? What is it about chalkboards as a teaching tool that justifies using them in this modern day with modern tools? I've tried to condense here a few key ideas as to why chalk is, in my opinion, the best tool for teaching and learning mathematics.</p>
<hr />
<p>Presenting an effective lecture is about telling a story. Developing the machinery required for a complex proof in a lecture can be a long process, which requires many steps including some motivation, setup and many auxiliary results. When presented with an empty blackboard and a piece of chalk, a lecturer has to engage and think deeply about the material they are teaching; presenting new ideas in the way that makes the most sense to the audience and leaving out no details. On the other hand, pre-prepared slides encourage the speaker to be lazy and not engage with the material themself while teaching, which results in less clarity within explanations. They do not show the order and process of thought. When drawing a diagram in front of an audience, the layers of the image can be built up piece by piece, showing how one element follows from the other. This is why viewing the final state of a blackboard after a lecture misses so much important detail. On a blackboard a lecturer can leave gaps and fill them in after the fact, returning to earlier content and adding details when they fit the overall narrative.</p>
<p>Writing new things on the blackboard potentially not in a top-to-bottom left-to-right order shows the material being introduced in the way it is thought about. Anyone who has taken introductory analysis will know that sometimes the way a final proof is written can be backwards to the way it is reasoned through by the author.</p>
<p>In this sense, a lecturer forcing themself to write everything down on a blackboard slows them down. This effect is very real, and very important. When compared to teaching from slides or pre-prepared notes, there is no question: teachers who use pre-prepared notes and talk over them, no matter how hard they may try, move through the material faster than many students can keep up. Forcing the pace of teaching to be no faster than the pace of writing is crucial to delivering material at a pace that students can follow.</p>
<p>Many large lecture theatres will have many blackboards on sliders which can be moved up and out of the way, but still visible to students. This allows students to spend additional time looking over and thinking about the material from earlier in the lecture if they need it. Very often teachers who depend on slides will leave students behind if they're still looking at material from a previous slide after the lecturer moves on.</p>
<p>Okay, so far this case has been built on a comparison with teaching from slides, but what about whiteboards? Indeed, I also think whiteboards are a far inferior teaching tool.</p>
<p>A piece of chalk and an erasing cloth is all you need to write on a chalkboard. This minimal setup allows a lecturer to begin teaching immediately, free of interruptions and startup cost, a benefit which is certainly not provided by overhead projectors, document cameras, or slide shows. Even when working with a whiteboard, the constant drying out of markers results in a non-trivial amount of time spent working through old markers which leave impossibly faint lines.</p>
<p>Chalkboards have a pleasing resistance that prevents the sloppiness that comes with a slippery whiteboard. In my experience almost everyone's handwriting is naturally neater on a chalkboard, rather than a whiteboard.</p>
<p>Whiteboards are glossy, and the glare they produce, combined with the stains from poorly erased previous uses, makes whiteboards difficult to read, especially at a distance in a large lecture theatre. Blackboards also consistently show up better on video, a feature which is especially important in the post-COVID era where everything gets recorded for the benefit of distance learning students. Much of the disagreement I have heard with respect to this point comes from people who don't know how to clean a blackboard properly, so as a PSA: a quick wipe with a slightly damp cloth will get a chalkboard completely clean, far more effectively than cleaning solution and a lot of elbow grease will get a whiteboard clean once it has been left for more than a couple of hours.</p>
<p>So if chalkboards really are so much better than the alternatives, why are fewer and fewer people using them? Well for one, good quality chalkboards are more expensive, and unlike whiteboards, it is harder to get away with the cheapest option. However, more importantly, I suspect some readers right now may be recalling traumatic memories of their time in primary school with dusty and dirty chalkboards, and the horrible sound of someone scraping a piece of metal across them (it makes me wince thinking about it too). It is at this point that I need to inform you that porcelain enamel chalkboards written on with that buttery smooth Hagoromo chalk and cleaned with a damp microfiber cloth make for a dust-free, clean chalkboard experience, with extremely clear and readable lines. This is a chalkboard experience most people have unfortunately been deprived of. As such, in practice, my experience working with a chalkboard looks much more like this.</p>
<p><img src="/blog-assets/chalkboard.jpeg" alt="picture of beautiful chalkboard showing the proof of lagrange&#39;s theorem" /></p>
<p>So please, if you are a university administrator, or wield any decision making power with respect to teaching equipment: while I realise chalkboards are not for everyone, there are many of us who love chalk and can't imagine having to use anything else, and the experience of using chalk is not always as it is made out to be.</p>
<h3>Other Reading</h3>
<p>If you are interested in some other articles and videos about the value of a great chalkboard (and the lectures given on them), I'd recommend the following:</p>
<ul>
<li><a href="https://alumni.berkeley.edu/california-magazine/online/chalk-market-where-mathematicians-go-get-good-stuff/">The Chalk Market: Where Mathematicians Go to Get the Good Stuff</a></li>
<li><a href="https://www.youtube.com/watch?v=PhNUjg9X4g8">Why the World’s Best Mathematicians Are Hoarding Chalk</a></li>
<li><a href="https://slate.com/human-interest/2014/10/a-history-of-the-blackboard-how-the-blackboard-became-an-effective-and-ubiquitous-teaching-tool.html">The Simple Genius of the Blackboard</a></li>
</ul>
<h3>Equipment</h3>
<p>For anyone wondering, my recommended blackboard equipment is <a href="https://hagoromo.shop/">Hagoromo</a> for chalk and <a href="https://theragcompany.com/products/eagle-edgeless-500">microfiber detailing cloths</a> for dusters. If you are interested in getting a chalkboard, the thing you are probably looking for is a <em>porcelain enamel steel backed chalkboard</em>. Unfortunately I don't have any particular recommendations, as good chalkboards are increasingly difficult to get for individuals, but if you know of a university near you with a sufficiently large mathematics or physics department they might be able to let you know their suppliers. If you live in central Europe, I hear <a href="https://www.schultafel.de/">schultafel.de</a> is a great brand which sells to individuals (although I have personally never tried one). I got my board from the university at which I am a student, since some renovations meant they would have otherwise been disposed of.</p>
]]></content:encoded></item><item><title>Minimal YouTube and the Locality of Behaviour Stack</title><link>https://aaronmanning.net//blog/minimal youtube and the locality of behaviour stack.html</link><author>Aaron Manning</author><pubDate>Thu, 02 May 2024 09:00:00 UTC</pubDate><content:encoded><![CDATA[<h1>Minimal YouTube and the Locality of Behaviour Stack</h1>
<p>For the past few weeks I have been on and off working on a web app called <a href="https://minimal-youtube.com/">minimal-youtube</a>. It presents an interface to YouTube which allows you to get a reverse chronological order list of videos from channels you are interested in without the distractions of recommendations, comments, and video stats. It will even generate an RSS feed with embedded videos, if desired.</p>
<p><img src="/blog-assets/minimal-youtube.png" alt="screenshot of the minimal youtube page with some sample videos displayed" /></p>
<p>This post is not about the app though, it's about the technologies used to build the app.</p>
<p>I hate web development. I find JavaScript to be unintelligible nonsense and CSS to be messy and ugly. Up until recently, this has meant that any time I had an idea for a web app (that is, a website which has any functionality beyond serving web pages), I quickly gave up on it and realised that while I may have wanted the functionality of the app, I didn't want the pain of actually building it. The only exception to this was my <a href="https://notes.aaronmanning.net">notes website</a> which does have a small amount of JavaScript, but only for basic functionality or glue code to get the WASM module up and running.</p>
<p>This all changed with the discovery of a few libraries, two in particular, that have empowered me to do web development in a sane, sensible, and dare I say it pleasant way. They are, <a href="https://htmx.org">htmx</a> and <a href="https://github.com/gnat/css-scope-inline">css-scope-inline</a>. The former is less than 20 kilobytes of dependency free JavaScript, while the latter is barely more than a snippet, at 16 lines unminified.</p>
<p>In short, these tools allow all of the front end code of my website to be written into the HTML itself, at the point where it applies (what the htmx creator calls <a href="https://htmx.org/essays/locality-of-behaviour/">locality of behaviour</a>) without any JavaScript, complicated build system, or external CSS files or libraries.</p>
<p>Any front end web developers are now probably thinking "oh great, another back end dev who thinks they can do it all but refuses to write JavaScript." You'd be right, but I claim that hypermedia based applications (which is what htmx is) are suitable in the vast majority of use cases, and when they are suitable they are faster, simpler, easier to reason about, and they empower people to build apps without having to learn how to use a giant framework. Web development doesn't have to be complicated, but our industry has made it so. As such, I think front end developers should also know what their competition is here.</p>
<p>I am not claiming that this app is a hugely complicated piece of software, it's not. However the fact that I was able to create this as someone with little to no experience in front end web development and end up with a codebase of less than 200 lines of Rust and one simple HTML page (including all the CSS and JavaScript) should make clear the threat that is this way of doing web development.</p>
<p>So, with that very long introduction out of the way, the goal of this blog post is to, in a short example, demonstrate the power of these two tools that I have chosen in the hopes of empowering others like me, who hate front end web development, to build something useful that they can be proud of.</p>
<hr />
<p>The core part of <em>minimal-youtube</em> is a form which takes as input a list of channel IDs and playlist IDs, and upon clicking the submit button, updates the page to show a list of YouTube videos. Let's walk through what building something like this would look like from the start. I will skip over many of the particular details, and just try to illustrate how on a high level this "locality of behaviour" mindset is applied to web development.</p>
<p>Suppose we have the following HTML form for handling the input of playlists and channels.</p>
<pre><code class="language-html">&lt;form&gt;
  &lt;input
    type="text"
    name="channels"
    value="{{channels}}"
  &gt;
  &lt;input
    type="text"
    name="playlists"
    value="{{playlists}}"
  &gt;
  &lt;input
    id="generate-feed-button" 
    type="submit"
    value="Generate Feed"
  /&gt;
&lt;/form&gt;
</code></pre>
<p>Notice that the <code>value</code> fields have double braces because that is the syntax used by the templating engine I am using. That is, when the home page is requested, the server replaces these values with the URL query parameters before serving the page.</p>
<p>Here is where we introduce some htmx. In order to determine what happens when we click the submit button, we add some fields to our form, along with a new location for the list of videos to end up.</p>
<pre><code class="language-html">&lt;form
  hx-swap="innerHTML"
  hx-target="#videos"
  hx-get="/api/videos-list"
&gt;
  &lt;input
    type="text"
    name="channels"
    value="{{channels}}"
  &gt;
  &lt;input
    type="text"
    name="playlists"
    value="{{playlists}}"
  &gt;
  &lt;input
    id="generate-feed-button" 
    type="submit"
    value="Generate Feed"
  /&gt;
&lt;/form&gt;
&lt;div id="videos"&gt;&lt;!-- HTMX Replaced --&gt;&lt;/div&gt;
</code></pre>
<p>Now, all we have to do is have our server respond with the HTML for the video list whenever it receives a request at <code>/api/videos-list</code> and the <code>div</code> tag at the bottom will automatically be filled. Yes, it is really that simple. The <code>hx-swap</code> attribute tells htmx to swap the inner HTML of the <code>div</code>, rather than the whole <code>div</code> itself, <code>hx-target</code> tells htmx which <code>div</code> we are sending the response HTML to, and the <code>hx-get</code> attribute tells htmx where to fetch the new HTML from, and that it should make a GET request. All the form fields will automatically become query parameters of the request that htmx makes.</p>
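<p>For a sense of what the other half looks like, the server's job at <code>/api/videos-list</code> is just to render an HTML fragment, not a full page. Below is a hedged sketch in plain Rust (the function and its inputs are my own invention, and all routing and query-parsing plumbing is omitted); whatever string it produces is what htmx places inside the <code>#videos</code> div.</p>
<pre><code class="language-rust">// Hypothetical renderer for the /api/videos-list response: takes
// (title, url) pairs and emits the partial that htmx swaps in.
fn render_videos_fragment(videos: &amp;[(&amp;str, &amp;str)]) -&gt; String {
    let mut html = String::new();
    for (title, url) in videos {
        // One div per video; no &lt;html&gt; or &lt;body&gt; wrapper is needed,
        // since this is only ever a fragment swapped into the page.
        html.push_str(&amp;format!(
            "&lt;div class=\"video\"&gt;&lt;a href=\"{url}\"&gt;{title}&lt;/a&gt;&lt;/div&gt;\n"
        ));
    }
    html
}
</code></pre>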
<p>What if we want to change the trigger for the videos loading? Well in my case, I want the videos to load both on clicking the submit button, and on the initial load of the page. This is as simple as specifying the <code>hx-trigger</code> attribute.</p>
<pre><code class="language-html">&lt;form
  hx-swap="innerHTML"
  hx-target="#videos"
  hx-get="/api/videos-list"
  hx-trigger="load,submit"
&gt;
  &lt;input
    type="text"
    name="channels"
    value="{{channels}}"
  &gt;
  &lt;input
    type="text"
    name="playlists"
    value="{{playlists}}"
  &gt;
  &lt;input
    id="generate-feed-button" 
    type="submit"
    value="Generate Feed"
  /&gt;
&lt;/form&gt;
&lt;div id="videos"&gt;&lt;!-- HTMX Replaced --&gt;&lt;/div&gt;
</code></pre>
<p>How about indicating to the user that the page is loading? This is built in to htmx as well with the <code>hx-indicator</code> attribute.</p>
<pre><code class="language-html">&lt;form
  hx-swap="innerHTML"
  hx-target="#videos"
  hx-get="/api/videos-list"
  hx-trigger="load,submit"
  hx-indicator="#videos-indicator"
&gt;
  &lt;input
    type="text"
    name="channels"
    value="{{channels}}"
  &gt;
  &lt;input
    type="text"
    name="playlists"
    value="{{playlists}}"
  &gt;
  &lt;input
    id="generate-feed-button" 
    type="submit"
    value="Generate Feed"
  /&gt;
&lt;/form&gt;
&lt;div id="videos-indicator" class="htmx-indicator" &gt;
  Loading...
&lt;/div&gt;
&lt;div id="videos"&gt;&lt;!-- HTMX Replaced --&gt;&lt;/div&gt;
</code></pre>
<p>What htmx does here is add the class <code>htmx-request</code> to our indicator <code>div</code> element while the request is being processed. We can then use different CSS in that case to display the loading text accordingly.</p>
<p>This is the point at which we get to play around with the other library that this post is about: css-scope-inline.</p>
<p>The problem with inline CSS is that it doesn't have all the features of full CSS, like <code>:hover</code> or <code>@media</code>, and it often results in a high level of duplication when similar elements are put together. The problem with a separate stylesheet is that it becomes very difficult to discern what the website is supposed to look like from where the content of the webpage is. It also results in annoying problems with CSS affecting elements in unexpected ways because of this separation. The problem with tools like <a href="https://tailwindcss.com/">tailwindcss</a> is that they result in extremely messy HTML, require the import of a huge library, and anything which is not easy to do is much harder to Google than the default CSS way.</p>
<p>css-scope-inline is a very small library that solves this problem by allowing a feature that should have been built into HTML from the start: the ability to put <code>style</code> tags anywhere in your document. For instance, to style our loading text differently when loading is occurring we do the following.</p>
<pre><code class="language-html">&lt;form
  hx-swap="innerHTML"
  hx-target="#videos"
  hx-get="/api/videos-list"
  hx-trigger="load,submit"
  hx-indicator="#videos-indicator"
&gt;
  &lt;input
    type="text"
    name="channels"
    value="{{channels}}"
  &gt;
  &lt;input
    type="text"
    name="playlists"
    value="{{playlists}}"
  &gt;
  &lt;input
    id="generate-feed-button" 
    type="submit"
    value="Generate Feed"
  /&gt;
&lt;/form&gt;
&lt;div id="videos-indicator" class="htmx-indicator" &gt;
  &lt;style&gt;
    me {
      opacity: 0;
      text-align: center;
    }
    me.htmx-request {
      opacity: 1;
      transition: opacity 100ms ease-in;
    }
  &lt;/style&gt;
  Loading...
&lt;/div&gt;
&lt;div id="videos"&gt;&lt;!-- HTMX Replaced --&gt;&lt;/div&gt;
</code></pre>
<p><code>me</code> is a keyword that specifies the element in which the <code>style</code> tag appears.</p>
<p>One other highly desirable feature of our website would be to handle the error when the input doesn't conform to some rules. We could always just return some error text in the HTML output, however it helps to keep the error list separate for our use case. To do this, in the error case, we have the server return the <code>HX-Retarget</code> header with a value of <code>#errors</code> and then create a <code>div</code> with an ID of <code>errors</code>. htmx will then use this as the target for swapping the HTML instead of the one we originally specified. However, it is desirable to clear the error output each time, so that when the request succeeds the error is no longer displayed. For this we just use one of the <code>hx-on</code> attributes. Our resulting HTML will look like this.</p>
<pre><code class="language-html">&lt;form
  hx-swap="innerHTML"
  hx-target="#videos"
  hx-get="/api/videos-list"
  hx-trigger="load,submit"
  hx-indicator="#videos-indicator"
  hx-on::before-request="document.getElementById('errors').innerHTML = ''"
&gt;
  &lt;input
    type="text"
    name="channels"
    value="{{channels}}"
  &gt;
  &lt;input
    type="text"
    name="playlists"
    value="{{playlists}}"
  &gt;
  &lt;input
    id="generate-feed-button" 
    type="submit"
    value="Generate Feed"
  /&gt;
&lt;/form&gt;
&lt;div id="videos-indicator" class="htmx-indicator" &gt;
  &lt;style&gt;
    me {
      opacity: 0;
      text-align: center;
    }
    me.htmx-request {
      opacity: 1;
      transition: opacity 100ms ease-in;
    }
  &lt;/style&gt;
  Loading...
&lt;/div&gt;
&lt;div id="errors"&gt;&lt;!-- HTMX Replaced --&gt;&lt;/div&gt;
&lt;div id="videos"&gt;&lt;!-- HTMX Replaced --&gt;&lt;/div&gt;
</code></pre>
<p>There is one final really useful feature of htmx being used within this form. The response includes an <code>HX-Push-Url</code> header, which specifies the same URL as the original page but with some query parameters added. htmx then changes the URL and updates the browser history accordingly. This allows the page, with the given inputs, to be bookmarked and easily returned to (remember we started by pre-populating the form based on the URL parameters).</p>
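<p>The server side of these two headers can be sketched without committing to any particular framework. The article doesn't show its backend, so everything here is an assumption for illustration: the function name <code>handle_videos_list</code>, the error message, and the URL shape are all made up. The point is only the pattern: return <code>HX-Retarget</code> in the error case, and <code>HX-Push-Url</code> on success.</p>
<pre><code class="language-python"># Hypothetical backend sketch for /api/videos-list. The handler returns a
# pair of (HTML fragment, extra response headers) that a web framework
# would then send back to htmx.
from urllib.parse import urlencode

def handle_videos_list(channels: str, playlists: str) -&gt; tuple[str, dict]:
    if not channels and not playlists:
        # Error case: tell htmx to swap this fragment into #errors
        # instead of the #videos target the form specified.
        return ("&lt;p&gt;Please enter at least one channel or playlist.&lt;/p&gt;",
                {"HX-Retarget": "#errors"})
    # Success case: push a bookmarkable URL carrying the form inputs,
    # so reloading it pre-populates the form again.
    query = urlencode({"channels": channels, "playlists": playlists})
    return ("&lt;ul&gt;&lt;li&gt;...video list...&lt;/li&gt;&lt;/ul&gt;",
            {"HX-Push-Url": f"/?{query}"})
</code></pre>
<p>Because htmx reads these as ordinary HTTP response headers, the same pattern works in any server language or framework.</p>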
<p>To be honest, that makes up most of the core functionality of the website, and this shows how easy building something like this can be.</p>
<p>This example is only one simple possible use case for htmx. The official webpage has <a href="https://htmx.org/examples/">many great examples</a> including <a href="https://htmx.org/examples/click-to-edit/">click to edit</a>, <a href="https://htmx.org/examples/infinite-scroll/">infinite scrolling</a> and <a href="https://htmx.org/examples/value-select/">hierarchical picklists</a>. There is no question that some web apps simply cannot be built using a hypermedia-only approach; Google Maps, for example. However, I contend that the vast majority of web apps built on overcomplicated JavaScript frameworks, including apps like Twitter, Twitch, Jira, YouTube and Microsoft Teams, could be built in a simpler, faster, and more reliable way using a hypermedia-based approach.</p>
<p>As for css-scope-inline, it is the only way I styled the website. That is, the main HTML page describes its functionality through htmx attributes and its look through <code>style</code> tags littered throughout, right next to the elements they style. This should now more fully illustrate what is meant by the term "locality of behaviour": as much as possible of how the website looks and behaves is kept together, in the same place.</p>
<hr />
<p>For me this is just the start of experimenting with these libraries and this way of building websites, but as a proof of concept, building this app has made me feel empowered to do all sorts of things with web development without the need to write JavaScript, and that leaves me more optimistic about the state of web development than I have ever been.</p>
]]></content:encoded></item></channel></rss>