Government services, NGOs, and more are getting blocked on Facebook so it doesn’t have to pay journalists for their work. Another example of why you shouldn’t rely on free platforms. You need a site that you own, and to encourage people to use it, or else your work just becomes “collateral damage”.
The sooner the world stops using Mark’s site as an internet portal the better.
Yet websites are treated as these embarrassing, ugly, ad-riddled things, whilst newsletters have established some kind of prestige for themselves somehow.
I saw this article about newsletters and how people like them more than websites despite the web having all these pretty fonts and layout capabilities.
That misses the point. Good writing is good writing irrespective of fonts and typography.
Newsletters are usually read in a mail client. This means those articles already use beautiful (licensed) system fonts. They’re already on your system, so they load instantly. They’re hand-curated by you. They’re in your inbox. No infinite scrolling or comments or faff.
People like newsletters more than the web because they don’t use RSS and mail clients stopped including RSS readers.
I gave a talk yesterday about personal data warehouses for GitHub’s OCTO Speaker Series, focusing on my Datasette and Dogsheep projects. The video of the talk is now available, and …
So many good ideas in this talk.
- I love this idea of standardizing all of your data to sqlite databases so you can freely explore it. I also love this idea of shipping static datasets inside a sqlite db inside a Docker image so you can “scale to zero”.
One thing I’ve been wanting to do for a while is add some kind of public dashboard for my Airbot data. Using something like Datasette I could export subsets (or all of it) to sqlite and allow you to slice and dice the data at will.
Also really like the idea of having automated cron/lambda jobs set up to pull your personal data off the web automatically. Right now I’m only importing my Swarm checkins and interactions with my syndicated tweets. Having some automated cron jobs to collect the data into sqlite would let me explore my data much more easily.
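A cron job like that can be sketched with nothing but the standard library. This is a hypothetical minimal version, assuming a `fetch_checkins()` placeholder in place of the real API call (Swarm or otherwise); the table name and columns are illustrative, not from any actual Dogsheep tool:

```python
import sqlite3

def fetch_checkins():
    # Placeholder for the real API request (e.g. a Swarm export).
    # Returns (timestamp, venue) tuples; canned data here for illustration.
    return [
        ("2021-01-01T10:00:00Z", "Coffee Shop"),
        ("2021-01-02T09:30:00Z", "Library"),
    ]

def import_checkins(db_path="personal.db"):
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS checkins (
               created_at TEXT PRIMARY KEY,
               venue TEXT
           )"""
    )
    # INSERT OR IGNORE keys on created_at, so re-running the cron job
    # won't duplicate rows that were already collected.
    conn.executemany(
        "INSERT OR IGNORE INTO checkins (created_at, venue) VALUES (?, ?)",
        fetch_checkins(),
    )
    conn.commit()
    count = conn.execute("SELECT COUNT(*) FROM checkins").fetchone()[0]
    conn.close()
    return count
```

Run it on a schedule and the database only ever grows with new rows; the resulting sqlite file is exactly what a tool like Datasette can then publish for the slicing and dicing described above.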
There seems to be a recurring theme (maybe it’s the holy grail) of nerds wanting to build their own search engines/portals for all of their data. In one sense it’s a “solved” problem with Spotlight and other such tools. On the other hand, Spotlight and these tools don’t provide you context.
There was a tool under development in the early Mono days (written by Nat?) that did this, at least partially. If you were chatting in Gaim it’d show you a window with your recent emails, their contact info, maybe their latest RSS feeds. I’ve always thought a tool like this would be killer – but with so much data living on servers and hidden behind APIs and proprietary services these days, it seems increasingly difficult.
This is spot on. I recognize that the purpose of the service from Apple is to prevent malware from running on your Mac, but it doesn’t sit right with me. Especially since the data is sent unencrypted over the net. Combined with not being able to run unsigned code on the M1, I’m wondering if my mid-2014 Mac might not be my last.
The major difference between the feeds in The Social Dilemma and your RSS feed is that one is algorithmically designed to addict and manipulate, while the other is just a chronological list of stuff you’re interested in reading. What I do in these cases is give a quick glance at the unreads for anything that really interests me, then mark everything as read.
My feed list is only about 10–15 feeds at the moment, but it’s quickly expanding as I find interesting people on the IndieWeb, so this may not be as viable a strategy for a larger number of feeds.