
Weeknotes 2020/52 - WebRTC, SwiftUI and personal data archiving

My days have been starting to look worryingly similar over the past few weeks, so I wanted to make this Christmas week stand out at least a little. After working at a hectic pace for a long time, I'm finding it surprisingly difficult to put down the developer tools and wind down.

I decided to take at least a couple of weeks off from normal work routines and instead do something different, with no schedules or release targets. I still need to do a couple of small tasks for Slipmat, but after that I'm going to drop all that work for a while.

To that end, I spent the past week mostly studying and learning new things like SwiftUI and WebRTC.

WebRTC

WebRTC is a Web technology that I've been interested in and following for several years. It's no surprise that browser developers have put significantly more resources into WebRTC this year, as all kinds of conferencing technologies have seen usage grow by hundreds of percent due to the pandemic.

I started working on a demo project that would allow me to build an audio-only chat for the Slipmat live page to complement the text chat. This is purely an experimental project, but it's very interesting as it combines lots of new technologies and tools that I haven't personally used in production yet. I want to get the demo running during this short break from other work.
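To give a rough idea of the direction, here is a minimal sketch of what the caller side of an audio-only WebRTC connection looks like in the browser. The signaling helpers (sendSignal / onSignal) are placeholders for whatever channel the peers use to exchange messages, for example the same WebSocket the text chat already uses; nothing here is actual Slipmat code.

```ts
// Sketch: caller side of an audio-only WebRTC connection.
// sendSignal / onSignal are assumed stand-ins for a signaling channel.
async function startAudioCall(
  sendSignal: (msg: object) => void,
  onSignal: (handler: (msg: any) => void) => void,
): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Capture the microphone only, no video tracks.
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  mic.getTracks().forEach((track) => pc.addTrack(track, mic));

  // Play the remote participant's audio when it arrives.
  pc.ontrack = (event) => {
    const audio = new Audio();
    audio.srcObject = event.streams[0];
    audio.play();
  };

  // Trickle ICE candidates to the other peer as they are gathered.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendSignal({ candidate: event.candidate });
  };

  // Apply the answer and remote candidates coming back from the other peer.
  onSignal(async (msg) => {
    if (msg.sdp) await pc.setRemoteDescription(msg.sdp);
    if (msg.candidate) await pc.addIceCandidate(msg.candidate);
  });

  // Create and send the offer that starts the negotiation.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendSignal({ sdp: offer });

  return pc;
}
```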

Personal Data Archiving

I've been collecting and archiving all my data for over 20 years already, but most of it has been stuck in a SQL database or on various network disks, not in a really usable form. Encouraged by Simon's Dogsheep project, I started taking small steps towards automating this data collection with GitLab CI into a portable form.

As most of the data is natively handled as JSON, I chose JSON as the base format for the archives as well. It's really easy to work with, both machine and human readable, and easy to import into databases or use with static site tools like Hugo or Gridsome.
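As a purely hypothetical illustration of the idea, a collector could normalize each source's records into a common envelope and write one pretty-printed JSON file per source; the field names below are illustrative, not the real schema.

```ts
// Hypothetical sketch of a collector writing a per-source JSON archive file.
import { writeFile } from "node:fs/promises";

interface ArchiveEntry {
  source: "twitter" | "wakatime" | "garmin"; // illustrative source names
  id: string;
  timestamp: string; // ISO 8601
  data: Record<string, unknown>; // raw payload from the source API
}

async function writeArchive(source: string, entries: ArchiveEntry[]) {
  // Pretty-printed JSON keeps the archive machine readable and still
  // easy to eyeball or feed to static site tools like Hugo or Gridsome.
  await writeFile(`archive/${source}.json`, JSON.stringify(entries, null, 2));
}
```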

This week I added a Garmin Connect data collector (garmin-connect-to-json on NPM) to the toolbelt. Writing these collectors is an endless process, but it feels good to have started.

I put together a new private repository on GitLab with a scheduled CI pipeline that uses all these collectors (currently fetching Tweets, Wakatime, and Garmin data) and archives the data into JSON files. The next step will be to write some kind of frontend that can be used to browse and search it. I'm not planning to use Datasette for the final site but might still use it as a quick and easy temporary solution.

The primary end goal of this project is to get all my data into a state where, if a Web site (that I've poured data into and that I find important in some sense) shuts down or turns evil like FB, I can just stop using it without losing any of that data. (That said, I'm not sure I want to touch my FB data at all.) A secondary benefit of this kind of personal data collection is easy access to and reuse of all the data. Having a uniform and easy-to-use API to a big collection of data is a pretty nice thing to have.

Misc