You mean deeper than Lviv, which they have been striking from day 1 of the invasion? How much deeper can Russia still strike?
Luxury! My homeserver has an i5 3470 with 6GB of RAM (yes, it’s a cursed 4+2 setup)! </badMontyPythonReference>
Interesting, I also run Nextcloud and pihole, and vaultwarden, jellyfin, paperless-ngx, gitea, vscode-server and a minecraft server (every now and then).
You’re right that such a system really does show its age, but only when doing multiple intensive tasks at the same time. I try not to back up my photos to Nextcloud while running minecraft, for example, as the image identification task pins my CPU at 100%. So yes, I agree, you’re probably not doing anything out of the ordinary on your setup.
The point I was trying to make still stands though, as that pi 2B could run more than I would’ve expected beforehand. I believe it once even ran jellyfin, a simple file server, samba, and a webserver with a simple HTML website. Jellyfin worked just fine, as long as the pi didn’t have to transcode (never got hardware transcoding to work).
It is funny that you should run out of memory, seeing as everything fits (albeit just barely) on my machine in 1/5 the memory. Would the overhead of running VMs account for such a large difference?
Coming from someone who started selfhosting on a pi 2B (similar-ish specs), you’d be surprised. If you don’t need anything fast or fancy, that 1GB will go a long way, and plenty of selfhosted apps require very little CPU. The only real problem I faced was that all HTTPS-related network tasks were limited to ~3MB/s, as that is how fast my pi could encrypt the data (presumably; I just saw my webserver utilising the entire CPU and figured this was the most likely explanation).
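If you ever want to sanity-check that theory on your own hardware, openssl has a built-in benchmark that gives a rough idea of how fast the CPU can push AES (the exact cipher your webserver negotiates will differ, so treat the numbers as a ballpark):
openssl speed -evp aes-256-gcm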
I’ve had good experiences with whisper.cpp (should be in the AUR). I used the large model on my GPU (3060), and it filled 11.5 out of the 12GB of vram, so you might have to settle for a lower tier model. The speed was pretty much real time on my GPU, so it might be quite a bit slower on your CPU, unless the lower tier models are also a lot faster (never tested them due to lack of necessity).
The large model had pretty much perfect accuracy (only 5 or so mistakes in ~40 pages of transcriptions), and that was with Dutch audio recorded on a smartphone. If it can handle my pretty horrible conditions, your audio should (hopefully) be no problem to transcribe.
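If it helps, a typical whisper.cpp run looks roughly like this (the binary may be called main or whisper-cli depending on your version, and the model file name depends on which model you downloaded, so adjust accordingly; whisper.cpp wants 16kHz WAV input, hence the ffmpeg step):
ffmpeg -i recording.m4a -ar 16000 -ac 1 recording.wav
./main -m models/ggml-large-v3.bin -l nl -f recording.wav -otxt
The -otxt flag writes the transcription to a .txt file next to the audio, and -l sets the spoken language (nl in my Dutch case).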
It depends what you’re optimising for. If you want a single (relatively small) download to be available on your HDD as fast as possible, then your current setup might be better (optimising for lower latency). However, if you want to be maxing out your internet speeds at all times and increase your HDD speeds by making the copy sequential (optimising for throughput), then the setup with the catch drive will be better. Keep in mind that an HDD’s sequential write performance is significantly higher than its random write performance, so copying a large file in one go will be faster than copying a whole bunch of random chunks in a random order (like torrents do). You can check the difference for yourself by doing a disk benchmark and comparing the sequential vs random writes of your drive.
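If you want to put actual numbers on that, fio can compare both write patterns directly (the file path and sizes here are just examples; point it at the HDD you want to test, make sure there’s enough free space, and delete the test file afterwards):
fio --name=seq --filename=/mnt/hdd/fio-test --size=4G --rw=write --bs=1M --direct=1
fio --name=rand --filename=/mnt/hdd/fio-test --size=4G --rw=randwrite --bs=16k --direct=1
The 16k block size for the random test is meant to loosely mimic torrent-sized chunks, so tweak it as you see fit.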
qBittorrent has exactly the option you’re looking for; I believe it’s called “incomplete download path” in the settings, letting you store incomplete downloads at a temporary path and move them to their regular location when the download finishes. Aside from the download speed improvement, this will also lead to less fragmentation on your HDD (which might be part of the reason why it is so slow when downloading directly to it). Pre-allocating space could have the same effect, but I would recommend only using one of these two solutions at once (pre-allocating space on your SSD would only waste space).
It’s possible for a certain hardware/software setup not to support a certain codec. For example, my jellyfin client (Finamp) uses the iOS native decoders (afaik), which means opus files are practically broken. My music library (8000+ songs) contained exactly 1 lossy file, which just so happened to be an opus file. I decided to spend the extra ~20MB to standardise my entire library to flac files, ensuring I could play every song on all my devices.
Edit cause I posted too soon: you are generally correct; only in very specific circumstances will you encounter compatibility issues like this one in the modern world. This is 100% apple being apple, and you can expect pretty much every other (reasonably modern) device to support all codecs you might encounter in the wild.
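For anyone wanting to do the same conversion, ffmpeg handles it in one line (file names are just examples, and note this doesn’t bring back any quality the lossy encode already threw away, it just repackages the audio as something every device can decode):
ffmpeg -i song.opus song.flac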
To add to the audio compression: it isn’t possible to further compress an mp3 file without losing any quality. You can either:
If you’re willing to spend some extra time learning about audio compression, you can download lossless files and compress those directly to whatever format and bitrate you want. The quality will be better than option 1 above, as the audio is only lossily compressed once instead of twice.
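To make that concrete: going from a lossless file to whatever lossy target you want is a single ffmpeg call (file names, codecs and bitrates below are just examples, pick whatever suits your library):
ffmpeg -i track.flac -c:a libmp3lame -b:a 192k track.mp3
ffmpeg -i track.flac -c:a libopus -b:a 128k track.opus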
I have about 0 experience with openssl, I just looked at the man page (openssl-enc). It looks like this command doesn’t take a positional argument. I believe the etcBackup.key file isn’t being read, as that command simply doesn’t attempt to read any files without a flag like -in or -out. I could be wrong though, see previously stated inexperience.
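Going purely by that man page, I’d expect the file arguments to end up looking something like this (cipher choice and file names are just guesses at what you’re trying to do, and -pbkdf2 needs a reasonably recent openssl):
openssl enc -aes-256-cbc -pbkdf2 -in etcBackup.tar.gz -out etcBackup.tar.gz.enc -pass file:etcBackup.key
openssl enc -d -aes-256-cbc -pbkdf2 -in etcBackup.tar.gz.enc -out etcBackup.tar.gz -pass file:etcBackup.key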
Dutch media are reporting the same thing: https://nos.nl/l/2529468 (liveblog) https://nos.nl/l/2529464 (Normal article)
That seems like a good edit, and fair enough. Good to know that there is also room for people who want to use their computer in a non-fanatical way, simply minding our own business.
I don’t fit in any of these teams, and neither do any of the Linux users I know. Should we have identity crises, or could this be a giant oversimplification?
Which compression level are you using? My old server is able to compress FLACs at the highest (and therefore “slowest”) compression level at >50x speed, so bumping the level up shouldn’t be too hard on your CPU.
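For reference, re-encoding an existing file at the maximum level is just (the -f lets flac overwrite the original, so maybe try it on a copy first; --verify double-checks that the result decodes back to the same audio):
flac -8 -f --verify song.flac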
I’ve been running some external drives on my server for about a year now. In my experience, hard drives with an external power supply suffer less from random disconnects. The specific PC also makes quite a large difference in reliability. My server is just a regular desktop and has very little problem staying connected and powering my 3 external drives. My seedbox is an old laptop, and has been having almost constant problems with random disconnects and power issues. Maybe test how well your framework does with some external drives before committing to the plan?
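One easy way to run that test: leave the kernel log open while the drives are under load, and any disconnect/reconnect will show up immediately. Either of these should do the trick:
sudo dmesg -wT
journalctl -kf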
To change the ownership of the files, you should only have to run sudo chown -R user:group directory. The -R makes chown run recursively, so it will modify the directory and all subdirectories and files. Do note that changing the ownership to plex:plex or something similar would leave your user unable to normally modify the files. My solution to this was to add both my regular user and the plex (in my case jellyfin) user to the same group. That way both users can easily see and modify the files, as long as the group has read/write permissions (the 2nd column of rwx in ls -Al). If necessary, you can add group permissions with sudo chmod -R g+rw directory.
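In case it’s useful, the whole group setup boils down to something like this (I’m using “media” as the group name and “youruser”/“jellyfin” as the two users purely as placeholders, and you’ll have to log out and back in before the new group membership applies):
sudo groupadd media
sudo usermod -aG media youruser
sudo usermod -aG media jellyfin
sudo chown -R youruser:media /path/to/your/library
sudo chmod -R g+rw /path/to/your/library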
On a side note: have you considered using jellyfin? It’s a completely free alternative to plex, and it recently received a truly massive update with tons of new features. Some people prefer plex’s overall experience, but I’ve been running jellyfin with almost no complaints.
Small disclaimer: I’m writing from mobile, so the commands might not be 100% correct. Run at your own risk, and NEVER POINT A CHMOD/CHOWN COMMAND AT SYSTEM DIRECTORIES LIKE / OR /USR. That’s one of the easiest ways to completely break your system.
Have you tried the official guide from the jellyfin website?
As for the guide this AI generated: it bothers me that they instruct you to use chocolatey for the *arrs, but still advise you to install docker, qbittorrent and jellyfin manually (all of which have chocolatey packages). I disagree with the comment that external storage would be recommended, as internal storage is generally more reliable (depending on a lot of factors of course). Also, I believe the “adding a library” section of the jellyfin setup is a bit too short to be of any use, and would recommend referring to the jellyfin docs instead.
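If you do go the chocolatey route for everything, I’d expect something along these lines to work, though I haven’t double-checked the exact package names, so verify them on the chocolatey repository first:
choco install qbittorrent jellyfin docker-desktop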
This guide also doesn’t explain how to make jellyfin accessible outside of your LAN. Once again, I’d recommend referring to the jellyfin docs if you want to do this.
I personally have only set up qbittorrent, jellyfin and docker (not the *arr suite), so I can’t comment on the completeness of the guide, but I wouldn’t trust it too much (seeing the previous oversights).
And finally, as someone who started their selfhosted server journey on windows: don’t. There is a reason why almost all guides are written for linux, as it is (in my humble opinion) vastly superior for server usage once you get used to it.
didn’t know that was a part of bisexuality
I should probably flee before I get eaten by an army of blahåjar (apparently that’s the correct plural?)
Oh I don’t mind the nitpicking, thanks for the explanation! I (apparently erroneously) thought “demake” and “decompile” were synonyms. Guess I’m one of today’s 10000.
In that case the (now taken down, but forked a gazillion times) portal64 project would be a correct example of a demake, right?
interested in females
Username checks out, though I’m assuming you meant “demakes”?
Anyways, the demake I’m most familiar with is the in-progress Lego Island one. The YouTuber behind it documented part of the process in vlogs (linked on the GitHub page), so that might be an interesting starting point.
Source: Gapminder, which is also cited as the source by the graph above
Funny how much the graph changes when you have more than 1 data point per decade. Almost makes me wonder whether the creator of the above graph was trying to paint a certain picture instead of presenting raw data in a way that makes it easier to grasp, without bias.
Notice the inflection point where Mao implements the “Great Leap Forward”. Also notice other countries’ similar rates of increasing life expectancy in the graph below, just without the same ravine around 1960.
I’m sorry, but I have to disagree with (what I think to be) your implicit claim that Mao somehow single-handedly raised China’s life expectancy through the power of communism or whatever. Please do correct me if this wasn’t your implicit claim and you were either 1) yourself misled by the graph you shared, or 2) making some other claim entirely that is somehow supported by said graph.