-
How to Resolve Overlapping DjangoObjectTypes with Graphene and Graphene Relay
If your schema has multiple types defined using DjangoObjectType for the same model in a Union, selections that work on the Union won't necessarily work as-is on Relay node queries.
For example, suppose we had a model schema like this:

```python
from django.db import models


class Record(models.Model):
    record_type = models.CharField(max_length=12)

    @property
    def is_disco(self) -> bool:
        return self.record_type == "DISCO"

    @property
    def is_punk(self) -> bool:
        return self.record_type == "PUNK"


class Disco(models.Model):
    record = models.OneToOneField(Record, on_delete=models.CASCADE, related_name="disco_record")
    bpm = models.IntegerField()


class Punk(models.Model):
    record = models.OneToOneField(Record, on_delete=models.CASCADE, related_name="punk_record")
    max_chords = models.IntegerField()
```
Our application cares about Records and, depending on the record_type, the kind of meta information we want to manage changes. As such we create a new model with a OneToOneField to our Record for each type we plan on managing.
When we query our records we want to only worry about Records, so we define our GraphQL types accordingly.

```python
from typing import Type, Union

import graphene
import graphene_django

from . import models


class DiscoRecord(graphene_django.DjangoObjectType):
    class Meta:
        model = models.Record
        interfaces = (graphene.relay.Node,)

    bpm = graphene.Int(required=True)

    @classmethod
    def get_node(cls, info, id) -> models.Record:
        # Allow our object to be fetchable as a Relay Node
        return models.Record.objects.get(pk=id)

    def resolve_bpm(record: models.Record, info, **kwargs) -> int:
        return record.disco_record.bpm


class PunkRecord(graphene_django.DjangoObjectType):
    class Meta:
        model = models.Record
        interfaces = (graphene.relay.Node,)

    max_chords = graphene.Int(required=True)

    @classmethod
    def get_node(cls, info, id) -> models.Record:
        # Allow our object to be fetchable as a Relay Node
        return models.Record.objects.get(pk=id)

    def resolve_max_chords(record: models.Record, info, **kwargs) -> int:
        return record.punk_record.max_chords


class Record(graphene.Union):
    class Meta:
        types = (DiscoRecord, PunkRecord)

    @classmethod
    def resolve_type(cls, instance: models.Record, info) -> Union[Type[DiscoRecord], Type[PunkRecord]]:
        # Graphene is unable to accurately determine which type it should
        # resolve without help, because the unioned types are all
        # DjangoObjectTypes for the same Record model.
        if instance.is_disco:
            return DiscoRecord
        elif instance.is_punk:
            return PunkRecord
        raise ValueError("Unknown record type")
```
Because we have the resolve_type @classmethod defined in our Union, Graphene can correctly determine the record type. Without that we'd get an error any time we tried to resolve values that only exist on the PunkRecord or DiscoRecord type.
So if we had a records query that returned our Record Union, we could query it as follows without any issues.

```graphql
query {
  records {
    ... on DiscoRecord {
      bpm
    }
    ... on PunkRecord {
      maxChords
    }
  }
}
```
But what about the Relay node query? The query looks quite similar to our records query.

```graphql
query {
  node(id: "fuga") {
    ... on DiscoRecord {
      bpm
    }
    ... on PunkRecord {
      maxChords
    }
  }
}
```
However, and this is the key difference, node does not return our Union type, but rather our individual DiscoRecord / PunkRecord type. And since both of those types are technically Record types (because they share the same Django model in their Meta class), any PunkRecords will be resolved as DiscoRecords and return an error when we try to resolve Punk-only fields.
In order for node to be able to differentiate between Punk and Disco at the type level, we need one more is_type_of classmethod defined on our types.

```python
class DiscoRecord(graphene_django.DjangoObjectType):
    ...

    @classmethod
    def is_type_of(cls, root, info) -> bool:
        # When a DiscoRecord is resolved as a node it does not use the Union
        # type to determine the object's type.
        # Only allow this type to be used with Disco records.
        if isinstance(root, models.Record):
            return root.is_disco
        return False


class PunkRecord(graphene_django.DjangoObjectType):
    ...

    @classmethod
    def is_type_of(cls, root, info) -> bool:
        # When a PunkRecord is resolved as a node it does not use the Union
        # type to determine the object's type.
        # Only allow this type to be used with Punk records.
        if isinstance(root, models.Record):
            return root.is_punk
        return False
```
This way, when Graphene is looping through all of our types trying to determine which type to use for a given Record, we can inspect the actual record and prevent an erroneous match.
This is obvious in retrospect. Although our GraphQL query selections are exactly the same, the root type is different and as such requires a bit more instruction to resolve the appropriate type.

-
TIL: How to change the Docker ENTRYPOINT with Packer
I've been working on automating setup and deployment for Tanzawa. This necessitates setting up a Python 3 environment with all of the requisite dependencies and then starting a webserver.
Initially I tried to set the run_command, but that's executed when you build the image, not when you run the image. The command used when running the image is controlled by the ENTRYPOINT, which is Docker-specific.
You can change your ENTRYPOINT by adding it to the "changes" section of your Docker configuration in your Packer .pkr.hcl configuration file.

```hcl
source "docker" "ubuntu" {
  image  = "python:3"
  commit = true
  changes = [
    "ENTRYPOINT [\"uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data\"]"
  ]
}
```
-
How to Split Commits
Sometimes, in a rush while developing, I'll commit two distinct changes in a single commit. From a code perspective, this isn't an issue because the code works. But from a systems perspective you can no longer split changes from A and B. They're forever married.
Splitting those changes into two commits will allow us to keep a better history of the system and allow our pull request to "tell a better story".
We can fix combined commits with an interactive rebase. I use PyCharm for part of this in my regular workflow at work, so rather than providing a concrete example, I'll instead summarize the procedure.

- git rebase -i origin/main (or whatever branch you rebase onto) to start an interactive rebase.
- Find the commit you want to split and mark it as "edit"
- git reset HEAD~1
- Add the files / changes for change A, commit
- Add the files / changes for change B, commit
- git rebase --continue
The "secret" is that when you edit stops the rebase after the combined commit. By resetting HEAD~1, we effectively undo that commit. But since it's a soft reset, the changes are not rolled back, just the commit. This allows us to tweak and commit individual parts separately as desired before continuing to the next commit in our branch. -
How to Gracefully Restart A Parent Process
When enabling or disabling plugins in Tanzawa, for URLs to register correctly across all sub-processes, you must restart all processes, not just fiddle with the process that made the request.
The complete changes are in PR #121, but the line of interest is below.

```python
import os
import signal

os.kill(os.getppid(), signal.SIGHUP)
```
getppid returns the process id of the parent process (gunicorn, uwsgi, etc.), and we send that process a HUP signal.
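As a minimal sketch (the helper name is mine, not Tanzawa's actual API), you could wrap this so any plugin-toggle code path can trigger a graceful reload:

```python
import os
import signal


def reload_parent(sig: int = signal.SIGHUP) -> None:
    """Signal the parent process (e.g. a gunicorn or uwsgi master) so it
    gracefully restarts all of its worker processes."""
    os.kill(os.getppid(), sig)
```

-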
How to Handle Pluralization and Internationalization in Django Templates
This is written in the docs, but it was a first for me to handle. Your templates can start to get very verbose when you really start supporting i18n.
For strings directly in your templates you can use the blocktrans tag with its plural block. (Note this changes a bit with Django 3.2: blocktrans becomes blocktranslate.)

```django
{% load i18n %}
{% blocktrans count counter=object_list|length %}
{{ counter }}件
{% plural %}
{{ counter }}件
{% endblocktrans %}
```
For master data that has a dedicated DB column per language, you can use the get_current_language tag from the i18n template library.

```django
{% get_current_language as LANGUAGE_CODE %}
{% if LANGUAGE_CODE != "en" %}
  {{ my_model.foo }}
{% else %}
  {{ my_model.foo_en }}
{% endif %}
```
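The same pluralization logic is also available in Python code. As a minimal sketch (the helper function is mine, not from the docs), django.utils.translation.ngettext picks the singular or plural translation string based on a count:

```python
from django.utils.translation import ngettext


def item_count_label(count: int) -> str:
    # ngettext selects the singular or plural translation based on count,
    # mirroring {% blocktrans %} ... {% plural %} in templates.
    return ngettext("%(count)d item", "%(count)d items", count) % {"count": count}
```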
-
TIL: English and GEOS Reference Points Opposite of Each Other
This post is less a TIL and more of an "I knew that and I don't want to forget it again", and stems from a bugfix in Tanzawa.
In English when we refer to a geo-coordinate we usually say it in latitude, longitude order. The reason we say coordinates in this order is that we could measure latitude accurately (via astronomical measurements) before we could measure longitude. Frontend mapping libraries like leaflet.js keep this familiar ordering, i.e. plotting points on a map takes a latitude/longitude array, and events have a latlng property for referencing points.
GEOS, the open source geometry library used in most GIS applications (including GeoDjango), doesn't think of points in those terms, but as x,y coordinates on a plane: x is longitude and y is latitude. As such, when you're working with data across these boundaries, it's important not to mix up your ordering.
When instantiating a Point it's tempting to just pass in floats directly. But if you do that it's easy to mix up the ordering, so I've started making sure I always use the keyword argument names to reduce mistakes.

```python
from django.contrib.gis.geos import Point

# Keep our familiar lat/lon ordering without messing up the data point.
point = Point(y=35.31593281000502, x=139.4700015160363)
```
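Going the other direction, when handing a stored point back to a frontend mapping library, the same care applies. A small sketch of flipping back into the lat/lng order leaflet.js expects:

```python
from django.contrib.gis.geos import Point

point = Point(x=139.4700015160363, y=35.31593281000502)

# GEOS stores x as longitude and y as latitude, so swap back into the
# latitude/longitude order that leaflet.js expects.
latlng = [point.y, point.x]  # [35.31593281000502, 139.4700015160363]
```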
-
How to Pipe Python stdout with xargs
When writing instructions for getting started with Tanzawa, users needed a way to set a unique SECRET_KEY in their environment variable configuration file. Initially I had a secret key entry in the sample file with some instructions to "just modify it". But that felt like I was just passing the buck.
What I wanted to do was to generate a unique SECRET_KEY and output it to the .env file. Outputting just the secret key is simple: you can use >> to append output to an existing file. But I wanted to use my Python secret key output as an argument to another command.
I did it as follows:

```bash
python3 -c "import secrets; print(secrets.token_urlsafe())" | xargs -I{} -n1 echo SECRET_KEY={} >> .env
```
1. Use the Python secrets module to generate a secure token.
2. Pipe the output to xargs.
3. -I is "replace string" and "{}" is the string we want xargs to replace. -n1 limits us to a single argument.
4. xargs executes and takes our Python output as an argument and replaces the {} with it, giving us our desired string.
Writing this now, I probably could have just used Python to include the SECRET_KEY= bit and forgone using xargs, but it was good practice anyways.
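For what it's worth, a minimal sketch of that simpler approach (my wording of it, not a command from Tanzawa's docs): have Python print the whole line and let the shell append it directly.

```bash
python3 -c "import secrets; print(f'SECRET_KEY={secrets.token_urlsafe()}')" >> .env
```

-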
How to Process Tailwind / PostCSS with Webpack
Usually when I work with webpack, another tool I'm using generates the appropriate config for me, and I can remain blissfully unaware of how it all works.
With Tanzawa I've only been adding tooling when absolutely necessary. The other day I configured PostCSS and Tailwind with Webpack. It still required a bunch of searching and piecing together blog posts to get something that worked for me.
Below is a list of facts that helped me figure out how to think about processing CSS with webpack.

- As wrong as it feels, your entry point for processing your CSS should be a Javascript file.
- Webpack by default does not output a separate stylesheet file. In order to output a plain CSS file, you must use the MiniCssExtractPlugin.
- Despite wanting to output only CSS, and specifying the filename in the options (style.css), Webpack will create an empty Javascript file regardless. There isn't a way to prevent this unless you add another plugin. I'm adding it to .gitignore.
- The "use" plugins have the following roles
- MiniCssExtractPlugin: Extract built CSS to a dedicated CSS file.
- css-loader: Allows you to import CSS from your entrypoint Javascript file.
- postcss-loader: Runs PostCSS with your postcss.config.js.
```js
// webpack.config.js
const path = require('path');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

const tailwindConfig = {
  entry: "./css/index.js",
  output: {
    path: path.resolve(__dirname, "../static/tailwind/"),
  },
  plugins: [
    new MiniCssExtractPlugin({ filename: "style.css" }),
  ],
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          MiniCssExtractPlugin.loader,
          { loader: "css-loader", options: { importLoaders: 1 } },
          "postcss-loader",
        ],
      },
    ],
  },
};

module.exports = [tailwindConfig];
```
In order for PostCSS to play well with Tailwind and webpack, I needed to update my config to pass the Tailwind plugin the path to tailwind.config.js. It simply requires the Tailwind plugin and immediately passes it the config path.

```js
// postcss.config.js
module.exports = {
  plugins: [
    require("tailwindcss")("./tailwind.config.js"),
    require("autoprefixer"),
  ],
};
```
Finally, to run this all for production, I execute via webpack as follows. I still need to figure out how to get the NODE_ENV environment variable set by the webpack --mode=production flag, so for now it's a bit redundant.

```json
// package.json
{
  ...
  "scripts": {
    "build": "NODE_ENV=production webpack --mode=production"
  }
  ...
}
```