• Editing Bookmark / Reply Titles with HTMX

    This post is a follow-up to Allow editing of replies/titles (#265) and describes some of the changes made to enable this functionality, as well as my thoughts after my first steps with HTMX.

    Unable to change bookmark / reply titles ☹️

    Above is the before state: a static view of the title/url of a bookmark/reply. Below is a gif of what I built and the finished state. There's a new change button, which you can click to reveal a form that lets you edit the record. The form allows you to save or cancel. Save updates the record and switches the screen back to the "read only" view, while cancel simply reloads the current "read only" view. Zero custom Javascript.

    Editing a bookmark url / title in Tanzawa

    Powering this are two new, simple views in the admin site. One loads the entry's bookmark/reply and returns the read-only view. The other loads the same record, displays a form on GET, and updates it on POST.
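
    Here's a minimal sketch of what those two views could look like. The view names (bookmark_detail, bookmark_edit), the BookmarkForm, and the template paths are assumptions for illustration, not the actual Tanzawa code:

    # Hypothetical sketch of the two admin views; names, form, and template
    # paths are assumptions, not the real Tanzawa implementation.
    from django.shortcuts import get_object_or_404, render

    from . import forms, models


    def bookmark_detail(request, pk: int):
        # Return the "read only" fragment that htmx swaps into the page.
        bookmark = get_object_or_404(models.Bookmark, pk=pk)
        return render(request, "entry/_bookmark_detail.html", {"bookmark": bookmark})


    def bookmark_edit(request, pk: int):
        # GET renders the edit form; POST saves it and returns the read-only fragment.
        bookmark = get_object_or_404(models.Bookmark, pk=pk)
        form = forms.BookmarkForm(request.POST or None, instance=bookmark)
        if request.method == "POST" and form.is_valid():
            form.save()
            return render(request, "entry/_bookmark_detail.html", {"bookmark": bookmark})
        return render(request, "entry/_bookmark_edit.html", {"form": form, "bookmark": bookmark})

    The change button in the read-only fragment is simply an element with hx-get pointing at the edit view and hx-target pointing at the fragment's container, so htmx swaps one fragment for the other.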

    The htmx and these two small views were the simple part of this feature. The heavier lift was decoupling the Tanzawa micropub endpoint from the admin views/forms, and then updating the admin views to no longer expect IndieWeb extension data (reply/bookmark url/titles etc...) on update (it's still required on create).

    Thankfully I had functional tests for my micropub endpoint, so I could be confident that my refactoring didn't break existing functionality. Those tests allowed me to extract the logic from the admin forms and put it into application functions. These new application functions are reused by all interfaces that create or update entries.
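
    To make the shape of that refactor concrete, here's a rough, hypothetical sketch of such an application function; the name, arguments, and fields are illustrative only, not the actual Tanzawa API:

    # Hypothetical application-layer function shared by the micropub endpoint
    # and the admin views; names and fields are illustrative only.
    from typing import Optional

    from django.db import transaction

    from . import models


    @transaction.atomic
    def update_entry(entry: models.Entry, title: str, content: str, bookmark_url: Optional[str] = None) -> models.Entry:
        # IndieWeb extension data (e.g. the bookmark url) is optional on update.
        entry.title = title
        entry.content = content
        if bookmark_url is not None:
            entry.bookmark.url = bookmark_url
            entry.bookmark.save()
        entry.save()
        return entry

    Both the micropub endpoint and the admin views call into the same function, so the behaviour only needs to be tested once.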

    The next phase was adding tests to the admin views and then updating them to use the common application functions. This work was mostly a slog because I needed to add tests and better factories for my tests before doing the actual refactor. There's still more improvement that can be made in the test factories to make them a bit more DRY, but they're good enough for now.

    Working with HTMX has been a dream. Rather than using Javascript to tweak the DOM, I can simply make calls to the backend to fetch HTML fragments by adding just a couple of attributes to my template's DOM. All of this using the same Django templates.

    My next steps with HTMX are to look into integrating django-components, so I can bundle the Javascript I do need with their templates/CSS and do some more refactoring. For example, right now replies/bookmarks have effectively the same templates/views duplicated. Using components, I should be able to have a single logical component power both.

    If the components strategy works, I will look at breaking out various parts of Tanzawa into htmx/django-components for easier maintenance e.g. the location selector or location view on the bottom of checkins.
  • Running Tanzawa with Fly.io

    Taking a page out of Simon Willison's Coping strategies for the serial project hoarder presentation, I'm going to write a blog post about what I've done on my projects as part of the "unit of work".

    One of the largest hurdles to running Tanzawa is one that plagues any Django app: getting it set up properly on a server. This usually involves connecting to a server, setting up a gunicorn or uWSGI server to run the app, editing nginx configurations, and fiddling with systemd, at a minimum.

    Each of these is a large barrier to entry. All of them combined means only the most dedicated users would even attempt to use it. And the reality is that nobody will use it.

    Making Tanzawa easier to install and run has long been a goal of mine. For a while my approach was basically to automate my own setup on Digital Ocean. I attempted this with two Puppet scripts: one created an Ubuntu server that automatically applied security patches and installed Docker, and the second built a Tanzawa image to run on the server. Using Puppet would also give people the flexibility to host wherever they wanted.

    Ultimately this approach was flawed because you'd still end up needing to maintain a server, even if it updates itself.

    Getting Tanzawa to run on a fully managed platform like Fly.io would lower the barrier to entry quite a bit, as it would remove the need to maintain a server and fiddle with configuration files. After migrating my blog from Digital Ocean to Fly.io, I documented how others can do the same.

    Hosting with Fly.io is now the recommended way to use Tanzawa.
  • I've been thinking about adding the weather of Yokohama (or a user-defined location) to the top of my blog and to each post when published. For checkins it would be the location of the checkin. WeatherKit from Apple is pretty interesting to me, but I'm not sure I want to join the Apple Developer program again.

    Either way, my buddy Paul just posted a good example of How to use WeatherKit from Python. Something for me to stew on.... Thanks Paul!
  • How to Resolve Overlapping DjangoObjectTypes with Graphene and Graphene Relay

    If your schema has multiple types defined using DjangoObjectType for the same model in a Union, selections that work on the Union won't necessarily work as-is on Relay node queries.

    For example, suppose we had a model schema like this:

    from django.db import models


    class Record(models.Model):
        record_type = models.CharField(max_length=12)

        @property
        def is_disco(self) -> bool:
            return self.record_type == "DISCO"

        @property
        def is_punk(self) -> bool:
            return self.record_type == "PUNK"


    class Disco(models.Model):
        record = models.OneToOneField(Record, on_delete=models.CASCADE, related_name="disco_record")
        bpm = models.IntegerField()


    class Punk(models.Model):
        record = models.OneToOneField(Record, on_delete=models.CASCADE, related_name="punk_record")
        max_chords = models.IntegerField()

    Our application cares about Records and, depending on the record_type, the kind of meta information we want to manage changes. As such, we create a new model with a OneToOneField to our Record for each type we plan on managing.

    When we query our records we want to only worry about Records, so we define our GraphQL types accordingly.

    from typing import Type, Union

    import graphene
    import graphene_django

    from . import models


    class DiscoRecord(graphene_django.DjangoObjectType):
        class Meta:
            model = models.Record
            # Implementing the Node interface lets this type be fetched via Relay node queries.
            interfaces = (graphene.relay.Node,)

        bpm = graphene.Int(required=True)

        @classmethod
        def get_node(cls, info, id) -> models.Record:
            # Allow our object to be fetchable as a Relay Node
            return models.Record.objects.get(pk=id)

        def resolve_bpm(record: models.Record, info) -> int:
            return record.disco_record.bpm


    class PunkRecord(graphene_django.DjangoObjectType):
        class Meta:
            model = models.Record
            interfaces = (graphene.relay.Node,)

        max_chords = graphene.Int(required=True)

        @classmethod
        def get_node(cls, info, id) -> models.Record:
            # Allow our object to be fetchable as a Relay Node
            return models.Record.objects.get(pk=id)

        def resolve_max_chords(record: models.Record, info) -> int:
            return record.punk_record.max_chords


    class Record(graphene.Union):
        class Meta:
            types = (DiscoRecord, PunkRecord)

        @classmethod
        def resolve_type(
            cls, instance: models.Record, info
        ) -> Union[Type[DiscoRecord], Type[PunkRecord]]:
            # Graphene is unable to accurately determine which type it should resolve without help
            # because the unioned types are all DjangoObjectTypes for the same Record class.
            if instance.is_disco:
                return DiscoRecord
            elif instance.is_punk:
                return PunkRecord
            raise ValueError("Unknown record type")

    Because we have the resolve_type @classmethod defined in our Union, Graphene can correctly determine the record type. Without that we'd get an error any time we tried to resolve values that only exist on the PunkRecord or DiscoRecord type.

    So if we had a records query that returned our Record Union, we could query it as follows without any issues.

    query {
        records {
            ... on DiscoRecord {
                bpm
            }
            ... on PunkRecord {
                maxChords
            }
        }
    }
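
    For context, a hypothetical Query type backing this might look like the following. The wiring is an assumption for illustration (it isn't shown in the original) and presumes it lives in the same module as the types above:

    # Hypothetical schema wiring; field names are illustrative only.
    import graphene

    from . import models


    class Query(graphene.ObjectType):
        records = graphene.List(Record, required=True)
        # The Relay node field that powers the node(id: ...) query shown next.
        node = graphene.relay.Node.Field()

        def resolve_records(root, info):
            return models.Record.objects.all()


    schema = graphene.Schema(query=Query)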

    But what about the Relay node query? The query looks quite similar to our records query.

    query {
        node(id: "fuga") {
            ... on DiscoRecord {
                bpm
            }
            ... on PunkRecord {
                maxChords
            }
        }
    }

    However, and this is the key difference, node does not return our Union type, but rather the individual DiscoRecord / PunkRecord type. And since both of those types are technically Record types (because they share the same Django model in their Meta class), any PunkRecord will be resolved as a DiscoRecord and return an error when we try to resolve Punk-only fields.

    In order for node to be able to differentiate between Punk and Disco at the type level, we need one more classmethod, is_type_of, defined on our types.

    class DiscoRecord(graphene_django.DjangoObjectType):
        ...
        @classmethod
        def is_type_of(cls, root, info) -> bool:
            # When a DiscoRecord is resolved as a node it does not use the Union type
            # to determine the object's type.
            # Only allow this type to be used with Disco records.
            if isinstance(root, models.Record):
                return root.is_disco
            return False
    
    class PunkRecord(graphene_django.DjangoObjectType):
        ...
        @classmethod
        def is_type_of(cls, root, info) -> bool:
            # When a PunkRecord is resolved as a node it does not use the Union type
            # to determine the object's type.
            # Only allow this type to be used with Punk records.
            if isinstance(root, models.Record):
                return root.is_punk
            return False

    This way, when Graphene is looping through all of our types trying to determine which type to use for a given Record, we can inspect the actual record and prevent an erroneous match.

    This is obvious in retrospect. Although our GraphQL query selections are exactly the same, the root type is different and as such requires a bit more instruction to resolve the appropriate type.
  • TIL: How to change the Docker ENTRYPOINT with Packer

    I've been working on automating setup and deployment for Tanzawa. This necessitates setting up a Python 3 environment with all of the requisite dependencies and then starting a webserver.

    Initially I tried to set the run_command, but that's executed when you build the image, not when you run the image. The command used when running the image is controlled by the ENTRYPOINT, which is Docker specific.

    You can change your ENTRYPOINT by adding it to the "changes" section of the Docker source block in your Packer .pkr.hcl configuration file.

    source "docker" "ubuntu" {
      image  = "python:3"
      commit = true
      changes = [
        "ENTRYPOINT [\"uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data\"]"
      ]
    }
  • indieweb-utils 0.2.0 was just released. I've been having fun collaborating with James on this project. Really looking forward to dogfooding it by integrating it into Tanzawa.

    Update: Read more about it in James' release blog post.
  • Made my first small PRs to indieweb-utils to pin requirements and introduce pytest. There's a few more I'd like to do e.g. black / flake8 / mypy, but all in due time.
  • πŸ”— Can Matt Mullenweg save the internet?

    He's turning Automattic into a different kind of tech giant. But can he take on the trillion-dollar walled gardens and give the internet back to the people?
    While I agree with Matt that decentralization and individual ownership are central to a Web3, the crypto/blockchain aspect of it is a technological farce.

    Following the principles of IndieWeb on your own domain will allow you, today, to own all of your data and to interact with other people absent of any intermediary service and without melting the arctic.

    A major motivator for building Tanzawa was individual ownership. It's not enough to have your data if it's stuck in a serialized blob in a WordPress plugin data column somewhere; that's too difficult and cumbersome to reuse. It must be in a proper relational schema. So far the fruits of my indieweb journey have allowed me to not only own my data, but to actually use it and build upon it. Both trips and maps wouldn't have been possible without Tanzawa.
    Tagged with: blogging, internet, indieweb
  • Response to Announcing indieweb-utils

    After some thought, I decided to build indieweb-utils, a Python library with building blocks that will assist developers in building IndieWeb applications.
    indieweb-utils looks like a lovely library to help with some of the faff of html parsing for the IndieWeb.

    I originally planned to do something similar using Tanzawa's IndieWeb module for Django/IndieWeb stuff, but now I'm less convinced that'd be useful outside of the Tanzawa context.

    I'd love to see the Python/IndieWeb community "consolidate" a bit on a single library so we aren't duplicating effort. I'll have to open some PRs. Great work, James!
  • How to Split Commits

    Sometimes, when developing in a rush, I'll commit two distinct changes in a single commit. From a code perspective this isn't an issue because the code works. But from a systems perspective you can no longer separate change A from change B. They're forever married.

    Splitting those changes into two commits will allow us to keep a better history of the system and allow our pull request to "tell a better story".

    We can fix combined commits with an interactive rebase. I use PyCharm for part of this in my regular workflow at work, so rather than providing a concrete example, I'll instead summarize the procedure.

    • git rebase -i origin/main (or whatever branch you rebase onto) to start an interactive rebase.
    • Find the commit you want to split and mark it as "edit"
    • git reset HEAD~1
    • Add the files / changes for change A, commit
    • Add the files / changes for change B, commit
    • git rebase --continue

    The "secret" is that when you edit stops the rebase after the combined commit. By resetting HEAD~1, we effectively undo that commit. But since it's a soft reset, the changes are not rolled back, just the commit. This allows us to tweak and commit individual parts separately as desired before continuing to the next commit in our branch.