This page contains updates to the engine powering The Lion's Rear, The Arcology Garden, and Ryan's other sites. You can subscribe to it via https://engine.arcology.garden/updates.xml in your feed reader of choice. It's also mirrored to my Fediverse instance.
Per-page Custom Templates
The Arcology Project can now render some pages with per-page overrides of Django's page template.
There are currently two:
the pre-existing default Arcology Page HTML Template provides a metadata sidebar, like this page has;
Arcology Page Wide Format Template provides a single-column design with the metadata above and below the content. The home-pages use this now, as do "index" Topic Files like my Recipes page that only exists to list backlinks.
I'll add more templates later on, I guess. I probably will change how the headers render in the Wide Format before too long.
We're speeding up the Arcology's Org Mode Hypermedia document generation by tying it to a local API server
One of the downsides of moving the Arcology's parsing and database management out of Emacs Lisp, where I used emacsql, and in to Django's ORM is that my Emacs process can no longer quickly access the database with a trustworthy schema. The first attempt to build out The Arroyo Generators worked well enough relying on django_manage (aka python manage.py), but it was slow since it had to spin up a whole-ass Django environment every time I wanted to regenerate a document.
It was even worse when, for stability's sake, I replaced the implicit use of direnv, etc., with a nix run command in a shell org babel source block which would copy all of my server's source code in to /nix/store every time I made changes to any of these documents, blocking Emacs in the process.
I've since done some work to build out a minimal "Localhost API for the Arcology ", a set of endpoints in the Django application which can answer the questions which the Arcology's document meta-programming systems want to know and provide an Emacs Lisp API to do so:
(arcology-fetch-localapi-bearer-token) fetches the bearer token file shared with the localapi deployment.
(arcology-localapi-call method path) is an API helper which, given an API path, will fetch the data with authorization and return a deserialized JSON structure.
These interactive commands will fetch a URL, putting it on your kill ring or clipboard if you call them with M-x or equivalent:
- (arcology-key-to-url page-key &optional heading-id) will take an ARCOLOGY_KEY and return a URL.
- (arcology-file-to-url file-path &optional heading-id) will do the same with a file path from (buffer-file-name) or so.
- (arcology-url-at-point) gets the org-id of the heading your cursor is in, if it has one, and makes a URL that links directly to that.
- (arcology-read-url) pops up a list of all the org-roam headings, and returns a URL to the one you choose.
(arcology-api-generator) calls in to The Arroyo Generators and returns the string of files they generate.
With these it's possible to quickly generate NixOS configurations, Emacs init configurations, or other declarative configuration formats from within org-mode, and there are a few interactive user functions to generate URLs to public documents. The Local API deployment includes manifests for generating a home-manager configuration which sets up authorization tokens and starts a background process which will keep the local Arcology database up to date with the Arcology watchsync Command and a small instance of The Arcology's Web Server.
The nice thing about having this run locally is that it's really easy to secure this API with a shared bearer token. Since it's possible to dump information from un-published pages with this API, it needs to not be generally accessible, and for that it requires a simple Bearer Token which is presented in an Authorization HTTP header and stored in an environment file that can be loaded either directly by systemd or in a shell environment, like arcology-fetch-localapi-bearer-token does.
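As a sketch of what that check amounts to (the token value and function name here are illustrative, not the Arcology's actual code), verifying the Authorization header comes down to a constant-time string comparison:

```python
import secrets

# Normally read from the environment file shared with the localapi
# deployment; hard-coded here purely for illustration.
SHARED_TOKEN = "s3cret-token"

def is_authorized(headers: dict) -> bool:
    """Check an `Authorization: Bearer <token>` header against the shared token."""
    value = headers.get("Authorization", "")
    if not value.startswith("Bearer "):
        return False
    presented = value[len("Bearer "):]
    # constant-time comparison avoids leaking the token via timing
    return secrets.compare_digest(presented, SHARED_TOKEN)

is_authorized({"Authorization": "Bearer s3cret-token"})  # True
is_authorized({"Authorization": "Bearer wrong"})         # False
```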
This environment file could/should be automatically generated, pulling in the other necessary secret in the process: a Syncthing API key from the local config.xml. This would be fun and straightforward to do some day soon, but for now it just takes a second to populate yourself.
But for now, we're close to having a system which others could bootstrap!
Integrated Django's caching framework in to the site render pipeline
Today I implemented robust caching support via a Decorator function and the built-in Django cache framework. It's a slick little piece of code that is probably generally useful for Django developers so here you go.
Prior versions of the Arcology used Python's built-in functools.lru_cache, which is simple, effective, and basically fool-proof. You decorate a function, make sure you provide an argument which can be used to bust the cache (a file hash, for example), call it once for each object, and voilà, your endpoints respond more quickly at marginal resident-memory cost.
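For illustration, here is roughly how that cache-busting pattern works with functools.lru_cache; the function and hash values are made up:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def render_page(page_key: str, file_hash: str) -> str:
    # file_hash is only here to bust the cache: when the source file
    # changes, its hash changes, so the stale entry is never reused.
    return f"<article>{page_key}</article>"

render_page("arcology/updates", "abc123")
render_page("arcology/updates", "abc123")  # second call is a cache hit
render_page("arcology/updates", "def456")  # new hash -> miss, re-renders
```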
Those prior versions were based on FastAPI and deployed on uvicorn, which uses a libuv event loop to scale up traffic within a single process; I considered designing the Django system to be deployed in the same fashion but ultimately decided to deploy multiple processes served by gunicorn. This meant that the in-memory caches would be duplicated for each process and there would be far more cache misses.
That was not ideal.
I started by just spit-balling a simple file cache that only worked with strings: it calculated a cache key based on the args and kwargs, used that as a filename, and wrote the string to it. That worked out well enough to start productionizing. When I went to the Arcology Project Configuration to add an environment variable to control the cache path, I saw that Django was already trying to provide a CACHES configuration section, which led me back to the Django Cache Framework documentation, a page I'd read maybe a half-dozen times and disregarded because I thought lru_cache was good enough and it didn't have the exact memoization decorator I wanted. Ha ha ha.
That framework has some genuinely useful facilities for caching objects, template partials, entire view results, and the like, which I probably could have used to implement this, but I had a bunch of HTML strings spat out by a Rust module which I wanted to be able to partially invalidate (for example inside of FeedEntry.to_html in The Arcology's Data Models). So I re-purposed the string file-caching decorator I'd written, replaced its innards with calls in to the Django cache framework, and deployed it.
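The decorator ends up looking something like this sketch. A plain dict stands in here for django.core.cache.cache (which exposes the same get/set surface, so the decorator works unchanged against it), and the names are hypothetical rather than the Arcology's actual code:

```python
import hashlib
from functools import wraps

# Stand-in for django.core.cache.cache; the real object has the same
# get/set surface, so the decorator body would be identical.
class DictCache:
    def __init__(self):
        self.store = {}
    def get(self, key, default=None):
        return self.store.get(key, default)
    def set(self, key, value, timeout=None):
        self.store[key] = value

cache = DictCache()

def cached_string(prefix):
    """Memoize a string-returning function, keyed on its arguments.

    Passing a content hash as an argument busts the cache naturally:
    a changed hash produces a different cache key."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            raw = repr((args, tuple(sorted(kwargs.items()))))
            key = f"{prefix}:{hashlib.sha256(raw.encode()).hexdigest()}"
            hit = cache.get(key)
            if hit is not None:
                return hit
            result = fn(*args, **kwargs)
            cache.set(key, result)
            return result
        return wrapper
    return decorator

calls = []

@cached_string("feed-entry")
def to_html(entry_id, content_hash):
    calls.append(entry_id)  # track how often we actually render
    return f"<li>{entry_id}</li>"

to_html(1, "aaa")
to_html(1, "aaa")          # served from cache, no render
html = to_html(1, "bbb")   # hash changed -> re-rendered
```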
You simply love to see it!
This makes the new Site Maps work much better, too. It used to take upwards of 20 seconds to evaluate every page and link in the database to generate the JSON entities for them; now the individual entities and the combined JSON are both cached, and the sitemap loads in milliseconds, maybe a second at most if I change a page and the JSON cache is invalidated. The page/edge caches for every page but the one which was changed will still be valid and greatly speed up the re-rendering of the JSON cache entity.
I had to fix some bugs after deployment, of course, and along the way I fixed metrics publishing which was subject to the same issue until I configured PROMETHEUS_MULTIPROC_DIR, though I probably should move that to a tmpfiles.d entry. I promise those histogram quantiles were and are accurate 😉
Arcology has a Sitemap and a Tag Index
I've re-implemented the support for the Map of the Arcology, a graph of nodes and edges which gives way to space. Here is what I mean:
This image has looked the same for years, and I make sure to use the same placement algorithm each time. So it's a navigable map where pages which link to each other are drawn towards each other. If I draw it, the map responds. It's probably navigable only by myself, but you can see how the four sites in the Arcology interact on their edges. You can see the Arcology Engine's systems clustered to the east and flowing in to the CCE's systems, bordering the edges of The Arcology Garden, which starts to show content which is more personal, less tied to my Emacs and NixOS development work. And in the north-west you find The Lion's Rear, where my Tea, Gardening, and other works are less and less mediated by computers.
You could imagine seeing an island from above, where there is an industrial block driving the heart and soul of the Arcology on the east coast, and wildlands and farm lands to the west, with production and creative works spreading across the middle of the land.
A true Arcology would have the services block and the living block co-located so that commutes from the center are similar for all inhabitants but this is no true Arcology. You could imagine such a network model forming as other peoples' thinking systems join the Arcology.
That's all a little too high-concept for you, perhaps.
Maybe you want to see a list of tags, and if you click them it shows you the pages which use that tag.
In that case, you might enjoy the Tag Index. It shows you all the pages across all of the sites which are tagged. The main tag index is "lazy": it won't give you a singular page with everything on it, because the ORM relationships I set up are just a little bit too limited still. It uses HTMX to load a "partial" in when you click on the count of headings for each tag, letting you click through and see the Pages which use the tag. You can also link to individual tags' index pages, or click the tags in the sidebar of each page.
I might add some breadcrumb thingy soon/next too.
But for now this thing is probably good enough! I wish the JSON generation endpoint was quicker, it could stand to be optimized, but if you hit it with a warm cache it loads quite nicely.
I've also been working on the shape/design/script of the Rebuild of The Complete Computer some more, I'll share some plans for that soon.
Soft-launching the new version of the Arcology ... or not.
I've been working hard the last few weeks on the Django Edition of my site engine.
Arroyo has come together nicely, the configurations for my systems are generated using the new framework, and all that has been really nice.
I instrumented the process with django-prometheus and some of my own instrumentation.
I added robots.txt and feeds.json, and got the gunicorn workers shipping The Arcology's Web Server endpoints.
I started writing unit-tests for the models and the ingester code.
I wrote NixOS deployment manifests which front the sites with nginx, which serves the static assets, and which safely load changes to the notebooks using Syncthing, though this doesn't seem to work on the server yet.
In the process of doing all of this, I broke the Arcology FastAPI Arroyo integration so that those sites no longer update. And then I deleted the arcology.db because I couldn't figure out why it wouldn't update. Whoops.
So all my sites now run on this new codebase. It probably won't work perfectly but it's nearly feature complete!
Took another swing at Arcology's Atom feed generation
A few weeks ago I wrote an ArroyoAtomHandler which would go over each element to generate a page, sort of like the built-in org exporters or my Arroyo HTML exporter. It was bodgy as hell, and I probably could have done a better job at it, but it was just too easy to get confused working between the Atom and the HTML exporter, and I just didn't like it.
So yesterday I took The Arroyo HTML exporter and gave it the ability to only export a few headings, based on an allowlist of IDs passed in to it.
I also extended the system to cache the contents of Headings' PROPERTIES drawers, in both The Parser Heading Type and the Arcology Org-Roam Caching Models, which can drive other behaviors. I've given the Arcology the same metadata-query powers for Headings which it already has for Pages. This allows me to, at minimum, capture the publish date of any Heading in my system and then query those. The Arcology captures these dates and converts them in to DateTime objects which can then be used to group and order posts, do time-bound queries, and the like, and this can be extended to other metadata stored on the Headings.
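As a rough illustration of that date handling (the property name, timestamp format, and data shapes here are hypothetical, not the Arcology's actual schema), parsing drawer timestamps into datetime objects makes feed ordering trivial:

```python
from datetime import datetime

# Hypothetical heading rows; the real models store much more metadata.
headings = [
    {"id": "a", "props": {"DATE_PUBLISHED": "20231223T231032"}},
    {"id": "b", "props": {"DATE_PUBLISHED": "20230101T120000"}},
    {"id": "c", "props": {}},  # no publish date: stays out of feeds
]

def published_at(heading):
    """Parse a PROPERTIES-drawer timestamp into a datetime, or None."""
    raw = heading["props"].get("DATE_PUBLISHED")
    return datetime.strptime(raw, "%Y%m%dT%H%M%S") if raw else None

# newest-first feed ordering, skipping undated headings
feed = sorted(
    (h for h in headings if published_at(h)),
    key=published_at,
    reverse=True,
)
[h["id"] for h in feed]  # ["a", "b"]
```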
Combining these two things together lets the Arcology populate feeds from anywhere in my notebooks as in the Atom Feed Handler . This code design is way more reasonable to deal with and more flexible than the ArroyoAtomHandler was.
So soon I can microblog from my org-mode journal and knowledge system by generating private, topic-specific Atom feeds which automatically cross-post to my Fediverse profiles via Feediverse or a built-in implementation. I can confidently generate a private journal feed that doesn't have an HTML-page counterpart; it's just an atom feed of a selection of my org-mode system work (though it's possible that this won't work, since backlinks and whatnot may not be properly cached and generated).
I'm pretty close to having this in a position where it can be deployed. I think largely the behavior and features won't change except for some small design cues, but it's not quite feature complete yet. I laid out a Rough Timeline and Task List on the Arcology's repo which will spell out a "1.0".
I've been planning in earnest to ship a "1.0" version of the Arcology publishing platform and the Complete Computer dynamic declarative environment this year. Part of that is re-structuring all these components so that the Arcology and the Complete Computer can be more piece-meal assemblies that others could adopt without pulling in all of my frankly inane software preferences. As part of that, I want to produce a series of video tutorials as a set of documentation, which I have been calling Rebuild of The Complete Computer . I don't have much to share yet on that front, but a plan is coming together.
The Rebuild of the Complete Computer series will be a semi-scripted stream series where I sell and document the org-mode publishing, computing, and productivity suite I’ve developed for myself and my community.
The Arcology Project: Django Edition now renders a new site design
The current design of these sites is fine, but the way I differentiate the sites using certain emoji for each domain makes them a bit too busy for my taste. I will not stop using Vulf Fonts, and this design carries that forward. What happens now is that each cross-domain link is tinted with the background color of the page it'll link to, so if you browse the sites enough, you'll identify where you are based on the coloring at the top. Each site is stored in the DB along with a link color that is set by the Arcology Seed Command, and a Django view renders a dynamically generated CSS file. And because I've been more and more of an org-babel sicko, those CSS files, paired with one for the current page's site, are generated dynamically using org-mode tables. 😈
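The per-site CSS generation can be sketched like this; the domains, colors, and selector are invented for illustration, and the real view pulls them from the site rows in the DB:

```python
# Hypothetical site rows; the real colors are set by the Arcology Seed Command.
sites = {
    "engine.arcology.garden": "#e0f0ff",
    "arcology.garden": "#f0ffe0",
}

def site_css(current_domain: str) -> str:
    """Render per-site CSS: one link-tint rule per cross-domain target."""
    rules = [
        f'a[href*="{domain}"] {{ background-color: {color}; }}'
        for domain, color in sites.items()
        if domain != current_domain  # same-site links stay untinted
    ]
    return "\n".join(rules)

print(site_css("arcology.garden"))
```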
I also set up a flexbox layout that will show the backlinks and page metadata on the side on a wide enough display, or float them toward the bottom if you're on a narrow display or browsing on mobile. I'm still investigating what changes I need to make to get Tufte-CSS-style sidenotes to work, but this is a good start.
I need to tackle feed generation next, and I am not looking forward to re-doing this to not rely on Pandoc like the current version does. Once I've done that, it can be self-hosting though, which is exciting stuff.
Checking in on the new version of The Arcology Project
This fall I have been spending more time with the experimental Rust rewrite of my Arroyo Systems Management libraries; the early, promising progress was written about below in I am starting to experiment with a rust rewrite of the Arroyo Arcology Generator. Since then, quite a lot of progress has been made.
I spent most of this summer building a few prototypes to try out different ORMs and compare them to what exists in the current Arcology FastAPI application, which uses a library called SQLModel which promises the best of Pydantic for validation and marshaling and SQLAlchemy for query and persistence. It's fine, but a bit overkill for what I ended up building. I spent some time prototyping the Arcology's data model with a few different Rust and Python ORMs before landing on a surprising best choice:
The next version of the Arcology will be built with Django: Meet The Arcology Project: Django Edition
I spent some time over the last year building a small application for the Data Rights Protocol in Django with a coworker who had prior experience with it. I had used it a bit back in maybe 2012 but never really gave it a fair shake. While there was a lot I didn't and don't like about the codebase we built for testing DRP, Django is really decent to work with once you internalize a project structure and start to work with the ORM. While I haven't had to optimize the ORM's behavior or individual queries in anger yet, it's transparent, it's robust, and it was really easy to get the data model I wanted slotted in to place. Compared to most of the Rust ORMs, which involved code generation or a large amount of boilerplate entity structures or trait implementations, this felt more akin to writing Rails or SQLModel. Only Elixir/Phoenix's Ecto came close, and while I thought hard about shipping this in Elixir, I ended up deciding not to based on the agility I have had in the Django prototype.
Ultimately, I'm coming to appreciate Django a lot more than I did a year ago. It's a good kit and I could become quite productive with it. The latest versions have some async python3 support which I tried to use for a bit but ultimately want to rip out since the ORM has two incompatible query paths between sync/async and you can't mix them, so you end up with multiple implementations of the same getter/setter with different colored functions that don't compose. But that's python. Based on historical traffic patterns, Arcology can handle running inside a handful of gunicorn processes. Hell it could run inside of 1 most of the time!
So Django handles page routing, templating, statics, and the data model that is used to serve pages. The data model and the actual HTML generation are being handled by the Rust codebase. pyo3 and its maturin build tool are hot shit. that thing fucking slaps. it goes so hard. etc. It makes it really easy to expose Rust code to Python and the reverse, too.
In The arroyo_rs Native Org Parser the Orgize parser turns a page in to a struct of keywords, headings, links, etc., and these are shepherded in to Python and persisted in to the DB by Django.
This allowed me to do something cool very quickly: The Complete Computing Environment configurations are now generated by the Django codebase instead of Emacs Lisp, and it's faster and more robust. I can nix run a flake to generate the DB from a directory of org-mode files and run it again to tangle them in to an init.el. Soon enough you could, too. It even runs in t184256/nix-on-droid so I can make config changes to my server or emacs environment and rebuild it on the go [not that i should.... 😛 it's mostly just fun.]
The arroyo_rs Native Org Parser also has a fairly basic Org to HTML exporter built in to it, extending functionality in Orgize to provide an HTML exporter that, in a single pass, does all the work which the FastAPI process had to do in three steps. And it does so without invoking Pandoc.
The most complicated part as always is the URL rewriting from internal org-roam IDs to external URLs like the one you're reading right now.
Consider a link to this very post, [[id:20231223T231032.979299]]: find the file this heading is stored in; see if that file has a page entity, which will have an ARCOLOGY_KEY keyword publishing it to an arbitrary domain/path in my system, in this case arcology/updates; arcology maps to a domain, engine.arcology.garden, which is substituted in to generate a URL to this page; and then the ID is added as an anchor to jump the viewer to this post's headline. It's complicated, but it keeps me from having to maintain file system hierarchies and lets me just lean on org-roam's linking facilities in a flat file layout. And as of today The Arcology Project: Django Edition supports rendering these URLs by generating a dictionary mapping from the internal ID to the public URL and passing that dict in to the native exporter, where it'll be replaced when the HTML anchor is generated:
#+CAPTION: a screenshot of Firefox for Android showing a context menu for a link within a page on a localhost:8000 site with a fully qualified URL in it.
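The steps above can be sketched as a lookup; the function name and table shapes here are hypothetical stand-ins, not the Arcology's actual code:

```python
# Assumed shapes: the real mappings are derived from the database.
SITES = {"arcology": "engine.arcology.garden"}
PAGES = {
    # org-roam id -> (ARCOLOGY_KEY of its file, heading anchor)
    "20231223T231032.979299": ("arcology/updates", "20231223T231032.979299"),
}

def id_to_url(org_id: str) -> str:
    """Resolve an internal org-roam ID to a public URL."""
    key, anchor = PAGES[org_id]
    # an ARCOLOGY_KEY is "<site>/<path>"; the site half maps to a domain
    site, _, path = key.partition("/")
    domain = SITES[site]
    return f"https://{domain}/{path}#{anchor}"

id_to_url("20231223T231032.979299")
# "https://engine.arcology.garden/updates#20231223T231032.979299"
```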
There is still a laundry-list of feature work and basic functionality to be done and this system is a long way still from being able to replace the existing system but the new codebase already reveals some very nice synergies.
Speaking of synergies, I have one last thing to share: the new system will ingest files on-demand using Syncthing rather than inotify, as I hinted at in the last post. This, along with much, much faster ingestion, means that the sites will update within seconds of a change being made on an endpoint. Syncthing basically provides a very robust HTTP long-poll wrapper around inotify, and I would love to not deal with temporary files and POSIX semantics and shitty state machines like I did in the Arcology Automated Database Builder for the FastAPI site.
I am starting to experiment with a rust rewrite of the Arroyo Arcology Generator
See: arcology-rust-extractor for the prototype work
It's time to start learning Rust in earnest. The biggest wart on the side of The Arcology Project has been the Arcology Arroyo System Database Generator just being a big old "inotify watches for files and shells out to Emacs to build the database" process. I've really wanted to rewrite this for a while but basically only fell short on the lack of a featureful org-mode parser in a language I'm willing to write.
My most recent investigation led me to Orgize, a Rust parser which supports all the things I need. So I'm going to start experimenting with building out a thing that generates the database schema in an incremental fashion. Orgize also has some support for customized HTML generation, which is the other big bugbear in the system, which currently relies on Pandoc to generate base HTML and then modifies that big string with regular expression matches. This whole system could be replaced with a customized Orgize HtmlHandler, and this excites me.
In Faster Arcology DB Generation I work through how I would re-architect this system, but I think I am pretty happy with just moving forward on a thing that parses all the org-mode files and spits out a sqlite DB with rusqlite.
One idea I have been having, maybe a cursed one, is to move away from using inotify and to tightly couple the design of the Arcology backend to Syncthing. I think Syncthing really is the bees' knees, and I've felt for a while that it wouldn't be such a bad idea to have a hard dependency on Syncthing's API: GET /rest/events/disk gets me all the file monitoring I could desire with none of the jankiness of dealing with invalid file descriptors from the whole POSIX thing.
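A sketch of what consuming that event stream looks like; the endpoint and event types follow Syncthing's REST API, but the handler shape and sample payload are invented for illustration:

```python
import json

# The long-poll loop itself (not run here) would look roughly like:
#
#   since = 0
#   while True:
#       events = http_get(f"http://localhost:8384/rest/events/disk?since={since}",
#                         headers={"X-API-Key": api_key})
#       for path in changed_paths(events):
#           reingest(path)
#       if events:
#           since = events[-1]["id"]

def changed_paths(events):
    """Pull the modified file paths out of a batch of disk events."""
    return [e["data"]["path"] for e in events
            if e["type"] in ("LocalChangeDetected", "RemoteChangeDetected")]

# A fabricated sample batch in the shape the disk-events endpoint returns.
batch = json.loads("""[
  {"id": 1, "type": "RemoteChangeDetected",
   "data": {"path": "garden/updates.org", "action": "modified"}},
  {"id": 2, "type": "LocalIndexUpdated", "data": {}}
]""")
changed_paths(batch)  # ["garden/updates.org"]
```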
This model leaves room too to set up Rustler to make the eventual Arcology Elixir even smoother, where the Rust process is tightly integrated in to the Elixir system passing data structures between the two processes using serde_rustler and constraining all database logic within the Elixir process.
At a certain point I have to start wondering whether the whole site should just be a Rust process, though...
Added some simple JavaScript to my sites' pages to disable some of the CSS
Some folks react really viscerally to Vulf Mono Italic -- I think it's pretty funny. This is the font I look at every day in my Emacs environment, so having it on the web feels natural. I think it's a fun font and pretty good for prose with the benefit of also being monospace for showing code or doing weird typographic tricks in My Poetry .
Anyways, I added a checkbox to the bottom of the pages which will disable a bunch of the CSS and persist that choice to each of the Arcology Sites' browser Local Storage. You'll have to set it on each site, but chances are folks won't be exploring my sites if their first reaction is "god i cannot stand this person's taste".
have a good day!
Improved Arcology's feed generator and set up feed2toot to post updates to the Fediverse
This week I have been making some in-roads on bridging my sites to the Fediverse by using feed2toot, a little Python application which posts from an RSS or Atom feed to a Mastodon Server API (which +Akkoma+ Pleroma supports). In the first pass it works well enough to just hand-code the feed lists, but I wanted per-feed visibility, multiple accounts, etc etc, so I built some abstraction that I had wanted to put in some time ago: an arcology.arroyo.Feed table for the arroyo-db which makes these much easier to query and group.
In b400ddcfd5c22724969ff0fb31e32c04beaecaf9 I modified the kludgy Atom Pandoc Filter + Template to add page FILETAGS, keywords, and per-heading tags to the entries' <category> metadata. These will be added to the feed2toot posts in theory, though I may have to update the post templates... In 9693b4871ee948026f0d38a3febe36e2c4a79af7 and ad96066e52912d8370b0893c7f8df44f4fda1881 I added the arcology.arroyo.Feed modules. This is used by feed2toot in f629ba2e0ef64f774775e1d5944150ad88ee3cc0. I still need to re-wire the Arcology Domain Router to use these tables.
In theory this will automatically publish to the Fediverse once I finish writing this. That's nice...
Next I want to modify feed2toot to publish the HTML in the feeds instead of running it in to plaintext through beautifulsoup4.
Arcology now ships with a NixOS module
I spent some time tonight getting The Arcology Project to run on The Wobserver. In Deploying Arcology to NixOS I ship a NixOS module with Arroyo NixOS metadata included. It took more work than I expected to get this to run on NixOS, mostly fiddly little things like PATH, but the end result is a lot simpler and more idiomatic.
It's kind of close to the point where someone else could theoretically run this thing if they were so motivated, even if they can't run the whole Arroyo stack. Getting the Emacs package set up is the hardest part.
Some minor improvements to (file write on laptop)->(site content update on server) performance
Spent some time tonight trying to cut down on the time it takes for the arcology site engine to go from (file write on laptop)->(site content update on server); still a fair bit of work to do...
In Arroyo System Cache I add a function which caches the files' hashes, arroyo-db--record-file-hash, and arroyo-db-file-updated-p, which compares the stored hash to the hash on-disk. If they match it will skip processing the file. This is low-hanging fruit I have wanted to pick for a while but was too lazy to deal with the schema migration crap.
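The same hash-skip idea, sketched in Python rather than the Emacs Lisp it actually lives in (function names and storage are stand-ins for the arroyo-db schema):

```python
import hashlib

# Stand-in for the arroyo-db hash table.
stored_hashes = {}

def file_hash(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def needs_update(path: str, content: bytes) -> bool:
    """True if the file changed since we last processed it
    (analogue of arroyo-db-file-updated-p)."""
    return stored_hashes.get(path) != file_hash(content)

def record(path: str, content: bytes) -> None:
    """Analogue of arroyo-db--record-file-hash."""
    stored_hashes[path] = file_hash(content)

record("updates.org", b"* Heading")
needs_update("updates.org", b"* Heading")         # False: skip re-parse
needs_update("updates.org", b"* Heading, edited") # True: re-process
```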
It still takes ~3 minutes to update the database though? I need to profile this tomorrow or Monday.
I think I need to add a hash-check to Arcology Arroyo System Database Generator's arroyo-arcology-update-file function, but this is ... not so easy as I thought, since by the time the arroyo-arcology-update-db function runs, the DB's hashes will have already been updated! Need to think about this tomorrow...
i'm gonna be sad if i end up needing to change the update function registry from a list in to a topo/dependency graph...
I love programming like a fucking neanderthal goblin who learned jazz guitar since they were in high school but it's slow work.
Upcoming: fixing some "final" performance and scale bottle-necks
I'm going to change the arcology inotify-watcher to not clobber the live DB before writing a new one.
Unfortunately, engine.arcology.garden currently works by truncating the DB before inserting new links, rather than maintaining enough state to track deletions between indexing runs. This means that any time my indexer runs (any time I edit any file in my notes, with a five minute cooldown), my site briefly becomes inaccessible, serving up 503s.
Since the sqlite3 file it uses for routing etc. is on an XFS filesystem, I feel like I should be able to use a COW mode to set up the new database: the Python equivalent of cp --reflink-ing the file to a new location, populating that, and then moving it on top of the "old" database once the new generation is in place. I'll probably try to do that this week; I'm getting really sick of posting a link and then four minutes later having it 503 because the site rebuilt.
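That swap could be sketched like so; this uses a plain copy plus an atomic os.replace rather than an actual reflink, and the schema and function name are invented:

```python
import os
import shutil
import sqlite3
import tempfile

def rebuild_and_swap(live_path: str) -> None:
    """Build the new database beside the live one, then atomically
    replace it, so readers never observe a truncated database."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(live_path) or ".",
                                    suffix=".db")
    os.close(fd)
    # (an XFS reflink copy would go here; a plain copy is portable)
    if os.path.exists(live_path):
        shutil.copyfile(live_path, tmp_path)
    con = sqlite3.connect(tmp_path)
    with con:
        # stand-in for the real indexing run
        con.execute("CREATE TABLE IF NOT EXISTS pages (key TEXT PRIMARY KEY)")
        con.execute("INSERT OR REPLACE INTO pages VALUES ('arcology/updates')")
    con.close()
    os.replace(tmp_path, live_path)  # atomic rename on POSIX

rebuild_and_swap("routing.db")
```

Readers holding the old file descriptor keep seeing a consistent old generation until they reopen the database, which is exactly the behavior that avoids the 503 window.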
Once I've done that, I am also going to look at making the site "more static" -- the FastAPI and sqlite models will still be used for routing and page metadata, but the HTML generation itself can be moved to a little Content Addressable Store on the file-system. arcology.arroyo.Page already stores the org document's SHA sum, after all; the DB indexer could be extended to just generate modified pages before they're served, at the cost of disk usage. I really don't wanna build a job-queue but might have to so that such a thing works reasonably.
Most of the performance issues in Arcology could be solved by these two fairly small changes, I think, and once that is done, this thing should be remarkably stable and long-lived.
I'm allowing bots to index parts of my sites again
When I brought Arcology FastAPI online, it served a "Disallow: /" robots.txt rule to prevent search engines from indexing the sites. I'm happy to publish anything as long as random people aren't being drawn towards it. This was a fine trade-off: URLs are public, but rely on social networking or human navigation to be discovered. Recently, I noticed that the Google search results for my name were mostly non-existent on the first page, having been overtaken by more-online youths coming of age.
I was mostly satisfied by that, yet at the same time there are things which I would like to be discoverable. It should be possible to search for my notes, archive summaries, technical output in the CCE, the Arcology engine itself, etc...
So I wrote some support to add another metadata keyword: ARCOLOGY_ALLOW_CRAWL. If this is specified in an Arcology document, then an Allow rule is added to the robots.txt so the page can be discovered. Some pages you've just got to find: explore, take a gander at the Arcology Sitemap... The pages themselves will also have meta tags added telling the bots not to follow links or index the pages; neither will they cache or create image-search indices of any page that is crawled. I don't need that shit.
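Generating the robots.txt from those keywords can be sketched like this; the page data and function are illustrative, not the actual view code:

```python
# Hypothetical rows; the real generator reads ARCOLOGY_ALLOW_CRAWL
# from the page keywords stored in the database.
pages = [
    {"path": "/cce", "allow_crawl": True},
    {"path": "/diary", "allow_crawl": False},
]

def robots_txt(pages) -> str:
    """Disallow everything, then carve out Allow rules for opted-in pages."""
    lines = ["User-agent: *"]
    lines += [f"Allow: {p['path']}" for p in pages if p["allow_crawl"]]
    lines.append("Disallow: /")
    return "\n".join(lines) + "\n"

print(robots_txt(pages))
```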
Anyways, hopefully this is a worthwhile trade-off and doesn't result in 95% of my already low traffic to become crawlers. Maybe search traffic to the CCE will lead to more folks finding and getting interested in things like The Arcology Project and Arroyo Systems Management .
I thought about putting the metadata in a tag but the data model doesn't support it right now -- I should consider the next part of my development to be to integrate org-roam tags in to the Arcology Arroyo System Database Generator ... It'll let me generate tag indexes or cross-correlate them with Topic Files .
Made some small changes to HTML generation
In b3d53e6bd61f2179eab936c0be5ef73a3164d84b I modified the Arroyo Arcology Generator to put the immediate parent-node's title in to the link model as the "source title" instead of the page. This will make e.g. the Topic Index a lot more clear.
Perhaps it should include both, actually, because I have a lot of links that are inside of Japanese Study SRS cards and now those are confusing. I made that change in 962911ce5a77284529350f49e66bc806eed0cb79.
In 67c179faa21c347bcef38a44503f2d5a35a6d192 I refactored the code which does all the HTML post-processing (making links clickable, etc.) in to a set of classes. I also implemented a simple rewrite for org-fc's "cloze" flashcard syntax so that clozes render inside of spans instead of like {{front}{hint}@1}. I also wrote some CSS to make the rest of the org-fc stuff a bit better, like hiding the drawers with the review data in them, in 482f383a4e46433d1bdc74f94d29235d3e99ca6d, though I would prefer to modify the source org document to not need to render those tables; I'm sure it's not a "cheap" process to render them just to then ask the browser to hide them.
Arcology Feed Generator now exposes each site's feeds in <head>
In bc20ab278e9a7303dcca72dc80e5054550b67673, I added functionality to Arcology Feed Generator to fetch a list of all the feeds from the database and render them into each page's <head> element. This allows you to discover each site's feeds from anywhere in the system.
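The rendered tags are the standard Atom auto-discovery links; a minimal sketch, assuming a `Feed` shape that is not the real schema:

```python
# Illustrative only: "Feed" and its fields are assumed names, not the
# actual Arroyo Generator models.
from dataclasses import dataclass

@dataclass
class Feed:
    title: str
    url: str  # e.g. "https://engine.arcology.garden/updates.xml"

def feed_link_tags(feeds):
    """Render the <link rel="alternate"> tags that go into each page's <head>."""
    return "\n".join(
        f'<link rel="alternate" type="application/atom+xml" '
        f'title="{feed.title}" href="{feed.url}">'
        for feed in feeds
    )
```

Feed readers and browser add-ons look for exactly these `rel="alternate"` links when you paste in a page URL.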
It's exposing some weakness in my data model, so the next step with this feature will be to add a new table to the Arroyo Arcology Generator containing the feeds' metadata, to keep from having to reach directly into the K/V/F interface.
Firefox users can add a little button to their browser with the open source AwesomeRSS add-on.
Implemented a Sitemap for Arcology using SigmaJS
The Arroyo Arcology Generator provides a set of nodes and edges, and SigmaJS renders them, allowing someone to browse the entirety of The Arcology Project 's network: Check it out.
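The payload handed to the front-end is roughly the node/edge shape sigma expects; here's a hedged sketch in which the view, field names, and per-site palette are all assumptions rather than the real implementation:

```python
import json

def site_color(site):
    # Assumed per-site palette; each Arcology site has its own color.
    return {"garden": "#7cb342", "lionsrear": "#8e24aa"}.get(site, "#999999")

def sitemap_graph(pages, links):
    """pages: iterable of (route, title, site); links: iterable of (src, dst).

    Returns JSON in roughly the graph shape SigmaJS consumes. The initial
    x/y positions are throwaway; ForceAtlas2 refines the layout client-side.
    """
    nodes = [
        {"id": route, "label": title, "color": site_color(site),
         "x": i, "y": (i * 7) % 13, "size": 1}
        for i, (route, title, site) in enumerate(pages)
    ]
    edges = [
        {"id": f"{src}->{dst}", "source": src, "target": dst}
        for src, dst in links
    ]
    return json.dumps({"nodes": nodes, "edges": edges})
```

Coloring nodes by site is what produces the same-color "land-masses" described below.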
It was interesting to load this up; my Arcology Elixir implementation had a similar feature, but this one was much less awkward to code thanks to the development I have done on the Arroyo System Cache data model. The first time it loaded in the Python site it "just looked right" -- the site has a geographic, geologic feel, and ForceAtlas2 happily converges it into a landmass that feels, somehow, familiar. Land-masses of same-color nodes flow together by nature of their natural Connectivity , but at the edges things are fuzzy and sites flow together... and there are nodes which drift to the "edge" of the mass in much the same way they did before, like my little cluster of notes around Neoliberalism: A Very Short Introduction , one of the few books I have taken decent notes on and published, or my Tea notes to the "north" of the graph.
Over time we will see the landmass grow, though it is perhaps sad that I am not publishing enough of my stuff to really notice a change in the land-mass. Of course much of it is dominated by heavy nodes like the Archive and Topic Index and the CCE drawing many nodes toward them, but the shape of the final graph still feels uncanny to me.
It's weird to find a homeland I didn't realize I had.
Moved Arcology's Feed Generator to its own module
I had initially stuffed it into the Arcology Public Router 's utility module -- that's not a reasonable place for it. Arcology Feed Generator now has its own page and its own module arcology.feeds in 529958c361.
In 3dad26d I modified the Arcology Feed Generator 's Lua filter to make each entry's title a plain old string. In the original implementation of this code, I mis-read the specification and believed that I could just cram HTML in there, but the HTML has to be properly escaped. Well, I spent some time trying to figure out how to invoke Pandoc's renderer on a sub-document, gave up, and used pandoc.utils.stringify instead. In theory this entry should render properly in feed readers!
In d68d209a I modified the FastAPI Public Router to support feed rendering in localhost. In truth, this router handles basically all of the routing itself and is the main entry point for HTML and Atom rendering on localhost and on the production domains. A terrible misnomer!
Some small tweaks to the page template
I updated the copyright footer to 02022 finally.
Arcology Page Template has been modified slightly to include links to the Arcology Sites along with their sitemoji -- I do hope this will make the noisiness of the pages themselves a little more clear. I must admit I am not suuppper happy with the ::before rules littering the text, and I'm still considering whether there is a cleaner solution; however, this will also serve to let folks jump between the sites easily.
I learned that CSS attribute selectors like the [href*=] rules in Arcology Site CSS all carry the same specificity, so the last matching rule is the one chosen. So engine.arcology.garden isn't "more specific" than arcology.garden; the latter is the one used until I reorganize the file so that the "most specific" rules load last. Buh.
When I tried to deploy it the thing crashed!
So, I made some other small tweaks to the build system, including fixing a bug in arcology/docker.nix now that nixpkgs-unstable uses Python 3.10 as python3.
I changed the import of Arroyo Emacs , but only half as well as I would like. I need to make the import of the Arroyo Emacs derivation use the <arroyo> NIX_PATH reference. It needs some sort of default.nix in <arroyo> even though I import a package relative to it. Hm.
I have some rough ideas for what I want to add to the site next, I am still sort of letting it percolate. Mostly I want to work on organizing the content and updating some pages, and maybe reintroducing a "sitemap" web like the Sigma JS one I added to the Arcology Elixir prototype.
First Post on the dev feed!
Tonight I hacked together this Atom feed generator in Arcology Atom Generator . It works better than I expected it to, but like all the things in The Arcology Project it's mostly duct tape and bubble gum. I must say that I am fairly proud of it, in spite of it being a bit oogly.
Standing up the pandoc support for rendering the Atom feed was the hardest part, and it scratches something off the list of "brainwork" that I have been fnord ing off, namely evaluating the Lua APIs for Pandoc. I really don't want to re-invent the wheel, but I am at a point where the process of Rewriting and Hydrating the Pandoc HTML would be better accomplished as a set of filters inside the pandoc process. To do that I'd have to get the Arcology SQLModel Database Bindings re-written against a Lua sqlite engine, which frankly I'd rather not do right now. Eventually I may do a full rewrite of the Arcology as a Lua or Fennel web process wrapping a pandoc with a bunch of Lua or Fennel embedded inside it to do the transformation. Ghastly. Maybe I'll stick with regular expressions for now...
But now I can dev-log into here, and I can create all sorts of different feeds -- any heading on a page with an ARCOLOGY_FEED property should be published to the Atom feed named in that property. I'm sure it won't be perfect, but the Atom validator only reports 2 errors and 2 warnings.
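The heading-to-feed mapping can be sketched as below; `Heading` and its fields are illustrative stand-ins for the real database models, not the Arcology's actual schema.

```python
# Sketch of grouping headings into feeds by their ARCOLOGY_FEED property.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Heading:
    title: str
    # org-mode property drawer, e.g. {"ARCOLOGY_FEED": "engine"}
    properties: dict = field(default_factory=dict)

def collect_feed_entries(headings):
    """Map each feed name to the list of headings published into it."""
    feeds = defaultdict(list)
    for heading in headings:
        feed = heading.properties.get("ARCOLOGY_FEED")
        if feed:
            feeds[feed].append(heading)
    return feeds
```

One nice property of keying feeds off a heading property is that a single page can publish entries into several different feeds.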
Night night...