Synology – Storage Pool Restore Woes

After roughly 44,000 hours in service, I decided to slowly replace my trusty Western Digital Red 4TB drives with some spanking new Western Digital Red Pro 8TB ones. Partly because I am running out of storage and partly because some of the older drives started giving me elusive but terrifying dmesg outputs.

With Synology, this is usually a pretty pain-free process. You remove one old drive at a time, run an extended SMART check on the new drive and then repair/restore the storage pool with the replaced drive.

This causes three repair/scrub runs of your drives/pools – one for each md software RAID device (md2, md3 and md4).

With the number of drives in my ageing DS2413+, this takes about 14-ish hours per run. Except when it does not.

Sunday is SMART day. All drives run an automated extended SMART test at precisely midnight. This also takes about 14 hours. Except when it does not.

Turns out that running the SMART test during a pool restore is a mighty bad idea and slows both processes down to a crawl. I am not talking about a minor performance hit, I am talking full-blown “this takes 3 times as long as it should” madness.

After about 38 hours I finally cancelled the SMART tests, and the restore process reached acceptable speeds again.

TSE – Redone Icon

As I have alluded to in the past, I am a huge fan of the SemWare Editor, or TSE Pro for short. The icon I used on the linked post was a homemade upscale of the original icon you can find on the SemWare website.

Since it probably benefits the community more to have the source file for the icon, you can find that for download here.

It is an Affinity Designer file that contains the vector-based redo. Perhaps someone has some practical use for it (e.g. shortcut icons, website banners or whatnot).

Some thoughts on Kagi search

Kagi, unrelated to the prior (and now defunct) shareware payment provider of the same name and domain, is a new search engine that has received a bit of attention over the past few weeks.

The company promises to respect the user’s privacy while still delivering a set of compelling features and high-quality, relevant, user-tunable search results. This sounds awesome, especially since other sites like Brave, DuckDuckGo, Ecosia and Startpage had their share of negative press over the last few years. SearX is a nice idea – but often does not deliver relevant results. So one could say the market is ripe for a new competitor.

Kagi offers two tiers of service: a free tier limited to 50 search queries per month and a paid $10/month tier for unlimited queries.

The company is US-based but seemingly employs an international team working remotely.

Kagi’s landing page after logging in

The Good

First off, Kagi ticks all the right boxes for me. It integrates relevant additional data as well as quick access to archived copies of a site on archive.org. This does not sound like much of a feature, but I do a lot of research and this saves me some clicks.

The ability to rate the relevance of certain domains is also absolutely stellar.

As for the quality of the search results, I have no complaints. The ability to use specific “lenses” to skew results towards a certain subset (e.g. programming-related content or PDFs) is great.

The Bad

There is no way to sugarcoat this: $10/month for a search engine is too much of an ask. I’d happily pay $5/month for a service like this.

However, even that would not work because Kagi uses Stripe and only accepts credit cards. No Google Pay, no PayPal, no nothing – only credit cards. This is a typical issue with US-based services that do not realize credit cards are not the primary payment method in the rest of the world.

Kagi states on Hacker News that they anticipate a low search volume for their regular users (citation/link missing). I heavily disagree here. When I am using a search engine, I do not send just one query. I usually do some research by starting with one term, running some variations on it and issuing new queries based on the information I have learned from previous results.

I use my search engine of choice far more than 1.7 times a day – which is roughly what the free tier’s 50 queries per month boil down to – so the free tier of Kagi would be unusable for me. And if I cannot use the search engine, there is no way I will fully commit to switching to it.

Unfortunately, the long-term sustainability and growth of the service are murky topics and something that warrants further analysis in 2-3 years – assuming we will get any kind of published data from Kagi. Will the company be able to convert enough users into customers to be sustainable and/or profitable?

The company is US-based. For many, this might seem like a great selling point; from a privacy perspective, however, the US is a terrible haven. The fact that the government can order the silent exfiltration of data via gag orders is worrisome. Kagi assures us they do not log or collect data – but so did many VPN providers over the past decade that turned out to be logging and handed data over to the feds. An independent audit of the infrastructure, configuration and software – similar to how Mullvad operates – would go a long way towards verifying the claims and building trust.

Lastly, Kagi has a worrying number of products in the pipeline. Their Orion browser is in beta and they have already announced an e-mail service on their FAQ. On the one hand, it is a good strategy to branch out and offer many different products in various categories. On the other hand, you might be spreading yourself a little thin here, Kagi.

The Bottom Line

Despite sounding pretty negative, I do like what Kagi offers. However, the price and available payment methods (and I am not alone in this) are a big turn-off right now. A price of $10/month is just too high for me when NewsBlur charges $36/year (which comes down to $3/month). If Kagi magically manages to knock the price down to $5–6/month, I’d immediately subscribe.

The free tier is virtually useless for me and acts as a nice gimmick to show off how Kagi works, what features are present and what kind of results you can expect.

Ironically, this is similar to the methods employed by the shareware payment processor Kagi (fully functional but limited to x uses). We will see if Kagi search will last as long as the company whose name and domain it is using – or whether the party will come to a sobering end much earlier.

And even if this bitter end should come to pass, I think that having a service like Kagi is important. It shows that an increasing number of users are growing sick of being the product. And Kagi might be able to more easily innovate/refine in a similar fashion as XenForo managed to one-up vBulletin back in the day.

If you want a simple, quotable takeaway from this post, then here you go: While I currently would not pay for Kagi, I highly recommend you try it out yourself. It is an elegant search engine that has not failed me on my queries yet.

Affinity Photo: Creating a Microbutton or Antipixel

I wanted to create some new microbuttons for some of the cool stuff in the link roll.

I could, of course, use the online generator. But that would be boring – plus there is no option to add images/logos – so I would have to do some retouching anyway.

Instead I opted to create and share a simple template for Affinity Photo. You will need to install the Silkscreen font first.

It is easy to override pretty much every colour (border, text and background) and by smartly using the hierarchies in Affinity Photo, we can get some neat auto-masking.

The hardest part for me was to understand the way Affinity Photo tries to smooth elements. This is quite different from what I was used to in Photoshop.

Building a video soundboard in OBS

Despite the fact that I do not stream as much anymore, I continue to tinker with OBS on a daily basis. For a while now I wanted to have what I would call a “video soundboard”, a simple mechanism that allows me to quickly play short clips during my stream.

Now, this doesn’t really sound too hard. Create many scenes, add media sources, slap a StreamDeck button on top of that – boom, you are done. However, this is a crappy solution because it requires a ton of additional work to add new videos into the mix.

I wanted to have a mechanism that relies entirely on one single scene with one single media source for all the content. Control over what gets played is wholly set on the StreamDeck and the StreamDeck only, meaning that adding new content is as easy as adding a single new command to a new button on the StreamDeck.

Sound interesting? Here is how it works.

Prerequisites

Before we start, you will need the following things ready: an Elgato StreamDeck with the Text File Tools plugin, OBS with the Advanced Scene Switcher plugin and the VideoFileFromText.lua script.

The Basic Setup

Create a new button on your StreamDeck with the Text File Tools plugin.

Specify a target filename and set the Input Text to the location of your desired media file (e.g. C:/Temp/heheboi.mp4). Leave the “Append” checkbox unchecked so the entire file gets rewritten.
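
If you want to test the mechanism without a StreamDeck at hand, you can mimic what the button does from PowerShell. This is just a minimal sketch – the trigger file and media paths are examples, not fixed names:

# Overwrite the trigger file with the path of the clip to play.
# Every write bumps the file's modification date, which the macro we set up later reacts to.
Set-Content -Path "C:\Temp\soundboard.txt" -Value "C:\Temp\heheboi.mp4" -NoNewline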

This is all you will need to do to add new media to the scene. We can now set up OBS.

Within OBS, create a new scene (e.g. “Scene – Memes”) and add a media source to that scene (e.g. “Media Source – Memes”). Make sure this is a media source and not a VLC source!

Open the properties of the media source and be sure to check the “Local File” checkbox, “Restart playback when source becomes active”, “Use hardware decoding when available” and “Show nothing when playback ends”.

Hide the media source by clicking on the “eye” icon in the Sources list.

Setting up Advanced Scene Switcher macros

Now open the Advanced Scene Switcher plugin via Tools – Advanced Scene Switcher, navigate to the Macro tab and add a new Macro called “Meme Scene (Start)”.

Check the “Perform actions only on condition change” checkbox and add the following condition:

[ If ] [ File ]
Content of [ local file ] [ <PATH ON STREAMDECK BUTTON> ] matches:
.*

(Yes, that is dot asterisk – no space or anything else before, after or in between)
Check the “use regular expressions” checkbox and the “if modification date changed” checkbox, and leave “if content changed” unchecked.

Now add the following actions:

[ Switch scene ]
Switch to scene [ Scene - Memes ]
Check the "Wait until transition to target scene is complete" checkbox.

[ Scene item visibility ]
On [ Scene - Memes ] [ Show ] [ Source ] [ Media Source - Memes ]

This takes care of actually playing the video when a change in the file is detected. But we also want to switch back to the previous scene when playback has finished, so we must add another macro.

Add the second macro “Meme Scene (End)”, check the “Perform actions only on condition change” checkbox and add the following conditions:

[ If ] [ Scene ]
[ Current scene is ] [ Scene - Memes ]

[ And ] [ Scene item visibility ] (Click the clock) [ For at least ] [ 1.00 ] [ seconds ]
On [ Scene - Memes ] [ Media Source - Memes ] is [ Shown ]

[ And ] [ Media ]
[ Media Source - Memes ] state is [ Ended ]

Add the following actions to the second macro:

[ Switch scene ]
Switch to scene [ Previous Scene ]
(Check the "Wait until transition to target scene is complete" checkbox)

[ Scene item visibility ]
On [ Scene - Memes ] [ Hide ] [ Source ] [ Media Source - Memes ]

Now we should be good, right? Well, almost. While we react to changes in the file thanks to the macro and switch between the scenes, we still do not set the media file on the source. This is handled by the Lua script which we must set up as a final step.

Setting up the Lua script

Open the Scripts window via Tools – Scripts and add the VideoFileFromText.lua script.

You should see some options on the right side of the window.

Set the interval to 50ms, browse to select the same text file you used on the Elgato StreamDeck button for the Video File List, and select the “Scene – Memes” scene for the Scene, as well as the “Media Source – Memes” for the Media Source. Finally, check the “Enable script” button and you are done.

Tying it all together

Be sure that the Advanced Scene Switcher is active and press the button on the StreamDeck. The scene should switch to your Meme scene, play the video and then switch back. Add another button on the StreamDeck that writes a different video file path to the same text file.

Now press the new button, and the second video file should play.

This makes adding short clips really simple and pain-free. No need to manually create multiple scenes or deal with multi-action steps on the StreamDeck. Adding a new video is as quick as adding a new button, setting the path to the desired media file and giving it a nice button image.

Of course, this is just the solution that I came up with, so your mileage may vary.

However, I do think that the inherent simplicity makes it an ideal solution. What do you think?

Friendship ended with WebDrive

Now RaiDrive is my best friend.

After more than a decade I have finally migrated away from WebDrive. It is not that I am particularly unhappy with the product, so South River should not feel bad here. My use case for the software simply changed over the years and WebDrive did not cater to that.

Back in the 2000s I primarily used WebDrive to keep a connection to an FTP or SFTP system to easily manage and edit files. A perfect fit, a very reliable tool. Especially in a landscape where many applications only knew how to work with local (as in: on a local drive) files. With the advent of rich media content online, however, WebDrive’s approach to file access no longer fits my requirements.

These days, I manage pools of video and audio on remote systems. I do not want to download an entire file to “peek” into it. It is problematic for collaboration. And more importantly: It is slow and inefficient.

Enter RaiDrive, a Korean-made piece of software that does this very well (on supported protocols).

I read some buzz online about RaiDrive not being reliable; however, I cannot mirror that sentiment. The software has been reliable for me during two months of daily use.

Being the old fogey that I am, I make no use of any of the hip “cloud” integrations both products offer, so I cannot speak to the quality of those. However, RaiDrive uses EldoS/Callback’s reliable components – the same ones also used in my favourite sync tool, SyncBack Pro.

Content-based file search with Powershell and FileLocator

I love Powershell. Unfortunately, as soon as we cross into the realm of trying to grep for a specific string in gigabytes worth of large files, Powershell becomes a bit of a slowpoke.

Thankfully I also use the incredible FileLocator Pro, a highly optimized tool for searching file contents – no matter the size. The search is blazingly fast – and you can easily utilize FileLocator’s magic within Powershell!

For the sake of clarity: I will be using Powershell 7.1.3 for the following example.

# Add the required assembly
Add-Type -Path "C:\Program Files\Mythicsoft\FileLocator Pro\Mythicsoft.Search.Core.dll"

# Prepare the base search engine and criteria
$searchEngine                      = New-Object Mythicsoft.Search.Core.SearchEngine
$searchCriteria                    = New-Object Mythicsoft.Search.Core.SearchFileSystemCriteria

$searchCriteria.FileName           = "*.log"
$searchCriteria.FileNameExprType   = [Mythicsoft.Search.Core.ExpressionType]::Boolean

$searchCriteria.LookIn             = "C:\Temp\LogData"
$searchCriteria.LookInExprType     = [Mythicsoft.Search.Core.ExpressionType]::Boolean

$searchCriteria.SearchSubDirectory = $true

$searchCriteria.ContainingText     = ".*The device cannot perform the requested procedure.*"
$searchCriteria.ContentsExprType   = [Mythicsoft.Search.Core.ExpressionType]::RegExp

# Actually perform the search, $false executes it on the same thread as the Powershell session (as in: it's blocking)
$searchEngine.Start($searchCriteria, $false)

foreach($result in $searchEngine.SearchResultItems)
{
   # SearchResultItems are on a per-file basis.
   foreach($line in $result.FoundLines)
   {
      "Match in $($result.FileName) on line $($line.LineNumber): $($line.Value)"
   }
}

Wowzers, that’s pretty easy! In fact, a lot easier (and quicker, to boot!) than playing around with Get-Content, StreamReaders and the like.

One thing of note here: instead of running this in a loop for every single file in a directory, it is quicker to let FileLocator process an entire tree of folders/files in one go. The larger the dataset, the larger the gains from invoking FileLocator.
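
If you find yourself doing this a lot, it might be worth wrapping the snippet above into a small reusable function. The following is just a sketch using the exact same calls as above – the function name and parameters are my own invention:

function Search-FileContent
{
    param(
        [string] $Path,
        [string] $FilePattern,
        [string] $RegexPattern
    )

    # Load the FileLocator Pro assembly (harmless if it is already loaded)
    Add-Type -Path "C:\Program Files\Mythicsoft\FileLocator Pro\Mythicsoft.Search.Core.dll"

    $criteria                     = New-Object Mythicsoft.Search.Core.SearchFileSystemCriteria
    $criteria.FileName            = $FilePattern
    $criteria.FileNameExprType    = [Mythicsoft.Search.Core.ExpressionType]::Boolean
    $criteria.LookIn              = $Path
    $criteria.LookInExprType      = [Mythicsoft.Search.Core.ExpressionType]::Boolean
    $criteria.SearchSubDirectory  = $true
    $criteria.ContainingText      = $RegexPattern
    $criteria.ContentsExprType    = [Mythicsoft.Search.Core.ExpressionType]::RegExp

    # Blocking search, then hand back the raw result items
    $engine = New-Object Mythicsoft.Search.Core.SearchEngine
    $engine.Start($criteria, $false)
    $engine.SearchResultItems
}

# Example call: search all logs below C:\Temp\LogData for a regular expression
Search-FileContent -Path "C:\Temp\LogData" -FilePattern "*.log" -RegexPattern ".*The device cannot perform the requested procedure.*"

This hands back the raw SearchResultItems, so you can keep iterating over FoundLines just like in the example above.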

And yeah, you can use FileLocator on the command line through flpsearch.exe – however, the results are not as easily digestible as the IEnumerables you get through the assembly.

The SemWare Editor is now available for free

There are two things you cannot ever have enough of: Good text editors and good file managers.

Arguably one of the best commercial console-based editors for Windows, with a history going back all the way to the 1980s, is now available for free: The SemWare Editor.

If you have never heard of or tried TSE Pro, imagine a mix between the simple and intuitive CUI of EDIT.COM and the rich feature set of vi, allowing you to extend and alter how the editor works by adding or modifying the included macros. The editor comes in two flavours: a true console application and a Windows-only pseudo console that has a few more bells and whistles. Of course, the purebred console version works great via SSH/telnet.

Now is a great time to give TSE a try, as the following announcement came on the mailing list:

Yes, this and future versions will be free.
The good Lord Willing, (ref: James 4:13-15),
I plan to continue working on TSE.

Sammy Mitchell

I cannot praise the editor enough and will vouch that it is worth every penny of its previous license cost.

You can grab the setup on Carlo’s TSE page.

Corsair K95 Lockups

I am still rocking my beloved Corsair K95 RGB, the original one with the 18 G-keys. I still think there is no keyboard to date that is as great for multiboxing as this one.

Since migrating to my new machine a few months ago, the keyboard would occasionally do quirky things. Letting iCue run for a while caused gamepad detection to “lock up” and take quite a while. What is worse: neither up- nor downgrading iCue made a difference here.

Things like the joy.cpl, GeForce Now, Inner Space or games supporting gamepads would frequently take minutes to load. Disconnecting and reconnecting the keyboard fixed the issue temporarily until something went bonkers again.

The solution was to force a reinstallation of the keyboard’s firmware through iCue. I have no idea why this was necessary, but flashing the firmware again solved the issue permanently.

Windows 11!!one

I have been playing around with Windows 11 in a virtual machine. My thoughts can best be summed up with “a bouquet of unremarkable things nobody wanted”. Windows 11 already made the rounds on the internet over its strict “no old hardware allowed” policy and the back-and-forth over DirectStorage, which seemed like nothing more than marketing bullshit.

Personally, I have an entirely different pet peeve with Windows 11: It looks revolting. It looks ugly. It looks disgusting. Windows 11 looks more and more like a failed attempt at skinning Wine to make it hip, fresh and cool. Or like the aftermath of a broken UXThemePatcher run. Or what happens when WindowBlinds crashes.

“What is this?” – “…Unique”

Please remember: People, (presumably) actual living people, got paid to do this.

People who know – or should know – that a majority of old applications will look butt-ugly with a half-assed mix of design elements from Windows 2000 (console contents and some colours), Windows 8/10 (the window controls that were meant for rectangular themes) and the lunacy that is Windows 11.

The colours do not match. The icon language does not match. The margins do not match. Nothing matches.

Synergy 1 Pro on Linux clients – Automatically start before logon with LightDM

This is just a very quick and dirty how-to for getting Synergy 1 Pro to run on LightDM before logging in. All the other instructions I have found have not really worked out for me, so let this be my best attempt…

Step 1: Setting up

Before we can set up LightDM’s configuration, we first need to create a PEM certificate and configuration with the root user, as that is what my LightDM process is running as.

Log into a normal interactive X session. Start the graphical Synergy 1 Pro client via “sudo synergy”, generate a certificate and set the client up so that it can actually connect and is approved by the server.

Step 2: Adjust LightDM’s configuration

I am on Arch, so my configuration sits in /etc/lightdm/lightdm.conf. Open the configuration and add the following block:

[SeatDefaults]
greeter-setup-script=/usr/bin/synergyc --daemon --name <CLIENT_NAME> --enable-crypto --tls-cert /root/.synergy/SSL/Synergy.pem <SERVER_IP/HOST>:24800

Step 3

There is no step 3.

Whenever a session gets terminated, the synergy client will also briefly be killed and respawned for the LightDM greeter. I have found no reason to set up anything other than the greeter-setup-script.

VMware Workstation – Containers with vctl

I never understood why people think Docker is a big thing. To me, it always seemed to solve a problem that does not exist by adding layers of complexity which inevitably always introduce new problems and bugs.

If you wanted to isolate processes, why not use jails or zones? “But Tsukasa”, people sneered at me with mild amusement in the past, “you don’t understand. It’s about the ease of replacing software!”. Yeah, you can do that without Docker, it’s called package management.

Somewhere along the line, the OCI was founded and at least there was some kind of standardized way of handling containers.

Enter VMware Workstation in the middle of 2020. Coming to us courtesy of a technical preview, VMware shipped the new vctl container CLI it plucked from VMware Fusion. And I really wanted to love it, because the idea behind it is good – but…

A Promising Disappointment

I am a VMware guy. After more than a decade with VMware Workstation, I really dig the features. Yes, you can probably achieve similar results with other virtualization solutions – but none make it as easy as VMware. Yes, call me indolent and a fanboy, if you must. So imagine my joy when VMware announced their container CLI.

No more need to install the Hyper-V role, no more fiddling with some wonky plugins – just a clean, supported product that does what Docker does, but with VMware’s hypervisor in the back. One product to be the definitive all-out solution for my desktop (x86) virtualization needs.

VMware creates a new virtual machine in the background that acts as a host for the containers. This machine does not show up on your usual list of running VMs. Instead, it will show you the active containers. Don’t click on them though, the Workstation UI does not really know what to do with containers and you will end up with a botched tab of nothingness.

Since vctl is using VMware’s hypervisor, all the good stuff is already in place and familiar to me. Network configuration is dead simple and I have all the tools to explore/manage the container VM.

The performance is also top-notch, so what could I possibly complain about?

The integration and the polish. vctl creates an alias to docker, so you can issue either a vctl ps or a docker ps and get the same result. Unfortunately, vctl does not shim all the commands and parameters Docker has, meaning that a lot of tooling and cool integrations simply do not work. Want to use VS Code’s Remote Containers extension with VMware? Bad luck, the command does not reply in the expected fashion because it does not understand all the parameters.

This is incredibly disappointing because the container feature in Workstation is so close to being a fantastic proposition in a time where VMware sunsets some long-standing features (cough, Shared VMs, cough, Unity on Linux, cough).

It does what it says – and nothing more… yet!

Please don’t misunderstand: The feature does what it advertises to do. I can easily author a Dockerfile and build it with vctl – without having to install Docker. This by itself is already a godsend because it reduces the amount of software I need to install on my workstation.
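
For the sake of illustration, a typical round trip from a PowerShell prompt looks roughly like this – image and container names are made up, and the exact flags may differ between preview builds, so double-check with vctl --help:

# Start the container runtime (the background VM that hosts the containers)
vctl system start

# Build an image from the Dockerfile in the current directory
vctl build -t demo-image .

# Run a container from that image and list what is running
vctl run --name demo-container demo-image
vctl ps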

But I cannot help but wonder how cool it would be to have a (parameter-compatible) drop-in replacement for Docker from VMware as part of the software I use for full virtualization anyway. And give me a docker-compose, while you are at it. Thanks.

Synology Diskstation – Two Things

I do not get to write neat posts nearly as often as I would like to. But this one does not violate any NDAs and is relevant to an OG post on this blog.

So today I want to talk about two things regarding my beloved DS2413+ that other people might find useful in some capacity. Or at least entertaining.

Be Cool, Be Quiet – Live the Noctua Lifestyle!

I replaced the two Y.S. Tech stock fans in my DS2413+ with two Noctua NF-P12 redux-1300. Technically you can pop in any 3-pin 120mm fan you want; however, due to the way Synology drives the fans, they might not move enough air, stop spinning altogether or cause DSM to complain about fan failure.

I originally intended to replace the fans with the official replacement parts; however, it seems that I got stiffed, so procuring the parts on short notice was not an option. After a bit of research, I settled on the NF-P12 because other folks around the internet had positive experiences with the swap. I used this rare chance to clean the interior of the NAS, routed the cables nicely and thought I was done – I was wrong. I learned that lesson when the unit started beeping in the middle of the night.

You do want to set the fan speed to “Cool Mode” in your power settings; otherwise, one of the fans will randomly stop spinning after a few hours and DSM will start issuing alarm beeps.

There are some other hacky ways to edit the fan profiles manually via the console, however, this operation apparently needs to be repeated after each DSM update. I’m way too lazy for that.

As for my cool Noctua lifestyle: The temperatures are virtually identical and the fans are quiet (as you would expect from the mighty Austrian owl!).

If you want to live the dream, please be sure to check the web for other people’s reports of your specific unit. Depending on the model the fan size, pin type and compatible fans will vary.

The big question, though: Is it worth the hassle?

Honestly speaking there is very little difference between the Y.S. Tech and Noctua fan in terms of cooling performance and noise level – at least when used on “Cool mode”. But you want that Noctua lifestyle, don’t you?

Addendum 2021-09-13: After upgrading to DSM 7, something about the way the fans are being addressed seems to have changed. I ran into several instances where DSM would report the fan as “faulty” and turned it off completely. Changing the fan settings around does not seem to make a difference here. I have popped in a new set of Y.S. Tech fans (original Synology replacement parts) for the time being…

Data Scrubbing – Or: Dude, Where is my Data?

I run my data scrubbing tasks regularly. After a recent power outage, the system complained about possible write cache issues, successfully completed a scrub and asked whether I wanted to reboot now or later. It also asked whether I wanted to remap the disk.

“Sure”, I thought to myself, “I like maps!”. I toggled the option and hit “Reboot now”. DSM rebooted… and that was about it.

Blinking status LEDs but no DSM web interface, no SMB and no NFS shares. Slightly nervous I tried to connect to the NAS via SSH. dmesg and the system messages did not show anything of particular interest, so I started poking around the internet.

Google spewed pages and pages of horror stories at me that made my skin crawl: bad superblocks, broken filesystems, complete loss of data, cats and dogs living together – the whole nine yards to make me break into a cold sweat and fear the worst.

In this case, though, a simple “top” explained the situation: DSM was performing an e2fsck check of my filesystem.

This obviously caused the logical device to be busy or unavailable and explains why all lvs, pvs and vgs commands listed everything as being in order and mdadm was reporting proper operation. This also explains why the shares were not available, as the logical volume was not mounted.

Personally, I find the design decision to not initialize the web interface a bit weird, as it is truly unsettling to see all your data in limbo, with your only indication that something is or could be happening being the blinking lights on the front of the unit (not the drive indicators).

I hope that DSM 7 might improve on that end. It would be much more transparent if the web interface came up and indicated that a volume is currently unavailable due to running filesystem checks.

Closing Thoughts

The DS2413+ is still an awesome unit and I very much appreciate its stability and ease of use. Synology is doing a great job at being very user-friendly, so it really hits hard when something like the e2fsck situation comes up.

Gopher

Good news, everyone! This blog is now also available via Gopher.

I will be working on making the blog look better (as in: remove all the pesky HTML and replace it with proper plaintext) over the coming weeks.

It is honestly great to see that taz.de is still available through Gopher and I hope to join those elitist ranks with a proper and deserving presentation. But until then… please excuse the HTML.

Streamlining your OBS workflow

Building a stream layout is a lot of work. Design elements like colours, fonts and layouts have to be consistent. In the past, I used to design things in Photoshop or Affinity Photo, cut the assets up into smaller pieces and then either use them in OBS directly or run them through DaVinci Resolve for some basic animation. This approach works fine on a rather static layout.

Now I’ve been toying around with the idea of what I call “After Dark” streams that have their own, slightly different style. The fonts and layouts stay the same, however, all the colours change. With my old workflow I would either need to re-export and edit all the assets… or find another way.

For a while now, I have been building my layouts as HTML documents. Using CSS animations and jQuery as a base for dynamic data processing, I can easily switch things around.

Since I am on Windows, reading/writing the contents of a JSON file is really easy with Powershell. So I can map some Stream Deck keys to perform value toggles in the JSON, causing my layout to dynamically adjust.
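
To give you an idea, one of those Stream Deck-triggered toggles boils down to something like this – a minimal sketch, with the file path and the afterDark property being stand-ins for whatever your layout actually reads:

# Read the layout configuration, flip the "After Dark" switch and write it back.
# The browser source in OBS watches this file and adjusts the CSS accordingly.
$configPath = "C:\Stream\layout.json"

$config           = Get-Content -Path $configPath -Raw | ConvertFrom-Json
$config.afterDark = -not $config.afterDark
$config | ConvertTo-Json -Depth 10 | Set-Content -Path $configPath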

The same goes for the “Now Playing on Pretzel” widget: it processes the JSON file generated by Pretzel’s desktop client, dynamically resizes itself and even fades out once the music stops playing.

HTML stream layout comparison

The overall advantage is obvious: If I ever choose to edit the colour scheme, it is one edit within one CSS file. New font? A couple of changes. Changing the stream title, metadata et al is also just a simple set of nodes in a JSON file – the rest of the layout dynamically adjusts. And it is all easily accessible through one press on my Stream Deck.

Additionally, this approach reduces the number of required scenes/elements drastically. Whereas you would either need to toggle the visibility of sources or duplicate scenes on a more traditional setup, everything runs in proper code here. I have no dedicated intermission scene… the title card simply transforms into it, keeping all elements coherent within the scene.

“But Tsukasa, the performance impact”, people will yell. I dare say that any blur effect on a fullscreen video in OBS probably has a heavier impact on performance than a reusable browser source. The entire title card sits at around 10% CPU usage, with a good portion of that going towards the VLC video source.

Dynamic changes to the layout

So I feel it is high time people stop using video-based layouts and migrate to proper HTML-based ones.