I like Ardour. Unfa on YouTube made a great tutorial on how to use it.
It isn’t misusing metric; it simply isn’t metric at all.
Sounds like you want a proper backup solution. Take a look at Borg backup, a tool that supports encrypted, deduplicated, compressed, incremental backups. You can even save directly to remote storage over SSH, or to cloud services like S3 with additional tooling.
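For reference, a minimal Borg session looks roughly like this (the repo URL, paths, and retention numbers below are placeholders, adjust them to your setup):

```shell
# Hypothetical remote repository reached over SSH; adjust host and path.
REPO=ssh://user@backup-host/./backups/borg

# One-time: create an encrypted repository.
borg init --encryption=repokey "$REPO"

# Each run creates a new archive; unchanged data is deduplicated against
# previous archives, so this is effectively an incremental backup.
borg create --compression zstd --stats \
    "$REPO::{hostname}-{now}" ~/Documents ~/Pictures

# Thin out old archives according to a retention policy.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$REPO"
```

Put the create/prune pair in a cron job or systemd timer and it mostly takes care of itself.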
No, it is the customer’s, since there will only be one customer left at that point.
single master text file
Sounds like something you are using to manage your packages to me…
Stop giving them ideas!
IANAL, but it looks like they are violating the Apache 2.0 license, as they are supposed to retain the license notice and mark any changes.
I wonder how this interacts with tiling window managers…
Try installing nvidia-dkms. It is better integrated into the kernel, so you may have better luck with it. Also make sure to read the xorg page on the arch wiki if you are going to stick with arch.
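If it helps, the swap is a single package change (package names as on current Arch; the headers package must match your kernel):

```shell
# Replace the prebuilt driver with the DKMS variant. DKMS rebuilds the
# module automatically on kernel updates, which requires the kernel headers.
sudo pacman -S nvidia-dkms linux-headers
```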
Can’t include any proprietary code, so using the google sdk would invalidate it I believe.
Sure. If you are using an NVIDIA Optimus laptop, you should also add __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia at the start of the last line when running in hybrid mode, so mpv runs on the dGPU. You should have a file at ~/.wallpaperrc that contains:

wallpaper_playlist: /path/to/mpv/playlist

You may want to add this script to your startup sequence via your wm/de.
#!/bin/sh
# Extract the playlist path from ~/.wallpaperrc, skipping comment lines.
WALLPAPER_PLAYLIST=$(grep -v '^[[:space:]]*#' ~/.wallpaperrc | sed -n 's/^wallpaper_playlist: //p')
# Draw mpv onto a borderless fullscreen window; xwinwrap substitutes the
# literal WID with the id of the window it creates.
xwinwrap -g 1920x1080 -ov -- mpv -wid WID --no-osc --no-audio --loop-playlist --shuffle --playlist="$WALLPAPER_PLAYLIST"
Hope this helps!
I set mpv as the root window which worked well. I stopped using it a while back, but if you are interested, I could dig up the simple script for you (literally one or two lines iirc).
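Not necessarily the exact script from the comment above, but the usual form of that trick on X11 is a one-liner (iirc mpv treats --wid=0 as the root window; the wallpaper path is a placeholder):

```shell
# Play videos directly on the X11 root window as a live wallpaper.
mpv --wid=0 --no-audio --no-osc --loop-playlist --shuffle /path/to/wallpapers/*
```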
Wow, CUPS is way better than I previously thought, and I already thought it was amazing!
You already have a plethora of great suggestions for improvements to make, so I won’t leave any more, but rather offer some advice. It can be daunting to go all in and sacrifice the conveniences you currently enjoy. This is why I recommend you change your behaviour and software in a piecemeal fashion. Change only a few (or even one) things at a time and get used to it. Once you are comfortable with where you are at, then introduce more improvements. This approach will help prevent you from getting overloaded or burnt out, resulting in you going back and compromising your privacy. Good luck!
If I’m being honest, it is fairly slow. It takes a good few seconds to respond on a 6800 XT using the medium VRAM option. But that is the price you pay for running AI locally. Of course, a cluster should drastically improve the model’s speed.
You can run LLMs such as OpenLLaMA and GPT-2 on text-generation-webui. It is very similar to the Stable Diffusion web UI.
Oh no! Anyway…
It is just how I prefer to do my computing. I tend to live on the command line and pipe programs together to get complex behavior. If you don’t like that, then my approach is not for you and that’s fine. As for your analogy, I see it more as “instead of driving down the road in a car, I like to put my own car together using prefabs”.
Same, I thought it was used commonly too.