To be honest I’m more concerned by language-humor.
Like not even saying what kind of humour, just any type of humour at all.
Jokes are for adults only!
Heads up for anyone (like me) who isn’t already familiar with SimpleX: unfortunately its name makes it nearly impossible to search for unless you already know what it is. I was only able to track it down after a couple of frustrating minutes, once I added “linux” to the search on a lark.
Reminds me a little of the old Jonathan Shapiro research OSes (Coyotos, EROS, CapROS), though toned down a little bit. The EROS family was about eliminating the filesystem entirely at the OS level, since you can simulate files with capabilities anyway. Serenum seems to be toning that down and effectively having file- or directory-level capabilities, which I think is sensible if you’re going to have a capability-based OS, since files and directories are more user-visible concepts.
He’s got the same problem every research OS has: zero software. He’s probably smart to ditch the idea of broad hardware support entirely and just fix on one hardware platform.
I wish him luck selling his computer systems, but I doubt he’s going to do very well. What would a customer do with one of these? Edit files? And then…edit them again? I guess you could show off how inconvenient its security makes editing things.
I just mean it’s a bit optimistic to try and fund this by selling it. I understand he doesn’t have a research grant, but it’s clearly just a research OS.
I feel like this should be required reading for a lot of Linux users. That article is a couple years old now, but I think it’s even more true now than it was when it was written. Having a middleman (package maintainer) between the user and the software developer is a tremendous benefit. Maintainers enforce quality, and if you bypass them, you’re going to end up with Linux as the Google Play Store (doubly so if you try and fool yourself into thinking it won’t happen because “Linux is different”).
The search term is censored by DuckDuckGo in Korea. Even robots apparently think it’s going to be an IoT buttplug.
That’s Saturday night in North American time zones. Just a heads up in case you’re planning a boys’ night out a couple hundred billion years in advance, maybe move it to Friday night in case the world ends Saturday night.
It’s not. He was very explicitly not talking about his murder there.
In a certain light, you could argue that Linus doesn’t really have any control at all. He doesn’t write any code for Linux (hasn’t in many years), doesn’t do any real planning or commanding or managing. “All” he does is coordinate merges and maintain his own personal git branch. (And he’s not alone in that: a lot of people maintain their own Linux branches). He has literally no formal authority at all in Linux development.
It just so happens that, by a very large margin, his own personal git branch is the most popular and trusted in the world. People trust his judgment for what goes in and doesn’t go in.
It’s not like Linux development is stopped because Linus goes offline (or goes on vacation or whatever). People keep writing code and discussing and testing and whatnot. It’s just that without Linus’s discerning eye casting judgment on their work, it doesn’t enter the mainstream.
Nothing will really get slowed down. Whether something officially gets labelled by Linus as “6.8” or “6.whatever” doesn’t really matter in the big picture of Linux development.
Ah thanks for that! You can tell how long it’s been since I’ve used Mac OS.
Isn’t it Mac OS X 14? I.e., Mac OS 10.14?
The `stat` command is using statx, which gives you a slightly different struct.
statx is the cool new Linux-only system call for stat-ing.
Not every filesystem will support the new btime field.
(And, as you correctly say, many of those time fields are wrong, anyway)
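For the curious, here’s a minimal sketch (mine, not from the thread) of asking statx directly for the btime field; the path handling and output are just illustrative:

```c
/* Minimal sketch of statx(2) usage; needs glibc >= 2.28 and Linux >= 4.11. */
#define _GNU_SOURCE
#include <fcntl.h>     /* AT_FDCWD */
#include <stdio.h>
#include <sys/stat.h>  /* statx(), struct statx, STATX_BTIME */

int main(int argc, char *argv[])
{
    const char *path = argc > 1 ? argv[1] : ".";
    struct statx stx;

    /* Ask for the creation time; flags = 0 gives the default sync behaviour. */
    if (statx(AT_FDCWD, path, 0, STATX_BTIME, &stx) != 0) {
        perror("statx");
        return 1;
    }

    /* stx_mask reports which fields the filesystem actually filled in. */
    if (stx.stx_mask & STATX_BTIME)
        printf("btime: %lld\n", (long long)stx.stx_btime.tv_sec);
    else
        printf("btime not supported on this filesystem\n");
    return 0;
}
```

The important bit is checking stx_mask on the way out: the mask you pass in is a request, not a promise, so a filesystem without creation-time support just leaves the bit unset rather than failing.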
I used to run a TFTP server on my router that held the decryption keys. As soon as a machine got far enough in the boot sequence to get network access, it would pull the decryption keys from the router. That way a thief would have to steal the router along with the computer, and have the router running when booting up the computer. It works wirelessly, too!
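For anyone curious what the fetch side of that looks like, here’s a rough sketch; the server address and key filename are made up, and a real setup would hand the key to cryptsetup from the initramfs rather than print it:

```c
/* Hypothetical sketch: fetch a small key file from a TFTP server (RFC 1350).
 * The server IP (192.168.1.1) and filename (luks.key) are made up. */
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in srv = { .sin_family = AF_INET, .sin_port = htons(69) };
    inet_pton(AF_INET, "192.168.1.1", &srv.sin_addr);

    /* Read request (RRQ): opcode 1, filename, NUL, mode, NUL. */
    char rrq[] = "\0\1luks.key\0octet";
    sendto(sock, rrq, sizeof(rrq), 0, (struct sockaddr *)&srv, sizeof(srv));

    char buf[516]; /* 4-byte header + up to 512 bytes of data per block */
    struct sockaddr_in from;
    for (;;) {
        socklen_t fromlen = sizeof(from);
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                             (struct sockaddr *)&from, &fromlen);
        if (n < 4 || buf[1] != 3) /* not a DATA packet (e.g. an ERROR) */
            break;
        fwrite(buf + 4, 1, n - 4, stdout);      /* the key material */
        char ack[4] = { 0, 4, buf[2], buf[3] }; /* ACK the block number */
        sendto(sock, ack, 4, 0, (struct sockaddr *)&from, fromlen);
        if (n < 516) /* a short block means it was the last one */
            break;
    }
    close(sock);
    return 0;
}
```

Worth noting that TFTP is unauthenticated and unencrypted, so this scheme only defends against offline theft of the machine, not an attacker already on your network.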
I’m going to reframe the question as “Are computers good for someone tech illiterate?”
I think the answer is “yes, if you have someone that can help you”.
The problem with proprietary systems like Windows or OS X is that that “someone” is a large corporation. And, in fairness, they generally do a good job of looking after tech illiterate people. They ensure that their users don’t have to worry about how to do updates, or figure out what browser they should be using, or what have you.
But (and it’s a big but) they don’t actually care about you. Their interest in making sure you have a good experience ends at a dollar sign. If they think what’s best for you is to show you ads and spy on you, that’s what they’ll do. And you’re in a tricky position with them because you kind of have to trust them.
So with Linux you don’t have a corporation looking after you. You do have a community (like this one) to some degree, but there’s a limit to how much we can help you. We’re not there on your computer with you (thankfully, for your privacy’s sake), so to a large degree, you are kind of on your own.
But Linux actually works very well if you have a trusted friend/partner/child/sibling/whoever who can help you out now and then. With someone like that to lean on, Linux can work very, very well for tech illiterate people. The general experience of browsing around, editing documents, editing photos, etc., works very much the same way as it does on Windows or OS X. You will probably be able to do all that without help.
But you might not know which software is best for editing photos. Or you might need help with a specific task (like getting a printer set up), and having someone to fall back on will give you a much better experience.
I think most people won’t care either way.
Some people do legitimately occasionally need to poke around in GRUB before loading the kernel. Setting up certain kernel parameters or looking for something on the filesystem or something like that. For those people, booting directly into the kernel means your ability to “poke around” is now limited by how nice your motherboard’s firmware is. But even for those people, they should always at least have the option of setting up a 2-stage boot.
The principled “old” way of adding fancy features to your filesystem was through block-level technologies, like LVM and LUKS. Both of those are filesystem-agnostic, meaning you can use them with any filesystem. They just act as block devices, and you can put any filesystem on top of them.
You want to be able to dynamically grow and shrink partitions without moving them around? LVM has you covered! You want to do RAID? mdadm has you covered! You want to do encryption? LUKS has you covered! You want snapshotting? Uh, well…technically LVM can do that…it’s kind of awkward to manage, though.
Anyway, the point is, all of them can be mixed and matched in any configuration you want. You want a RAID6 where one device is encrypted split up into an ext4 and two XFS partitions where one of the XFS partitions is in RAID10 with another drive for some stupid reason? Do it up, man. Nothing stopping you.
For some reason (I’m actually not sure of the reason), this stagnated. Red Hat’s Stratis project has tried to continue pushing in this direction, kind of, but in general, I guess developers just didn’t find this kind of work that sexy. I mentioned LVM can do snapshotting, kind of awkwardly; nobody has made it as sexy and easy as the cool new COW filesystems do.
So, ZFS was an absolute bombshell when it landed in the mid-2000s. It did everything LVM did, but way way way better. It did everything mdadm did, but way way way better. It did everything XFS did, but way way way better. Okay, it didn’t do LUKS stuff (yet), but that was promised to be coming. It was copy-on-write, with B-trees everywhere. It did almost everything that every other block-level tool and filesystem before it had ever done, but better. It was just…the best. And it shit all over that block-layer stuff.
But…well…it needed a lot of RAM, and it was licensed in a way that kept it out of Linux for a long time, and when Linux did get ZFS support, it wasn’t the native, in-the-kernel kind of support people were used to.
But it was so good that it inspired other people to copy it. They looked at ZFS and said “hey why don’t we throw away all this block-level layered stuff? Why don’t we just do every possible thing in one filesystem?”.
And so BtrFS was born. (I don’t know why it’s pronounced “butter” either).
And now we have bcachefs, too.
What’s the difference between them all? Honestly, mostly licensing, developer energy, and maturity. ZFS has been around for ages and is the most mature. bcachefs is brand spanking new. BtrFS is in the middle. Technically speaking, each of them either has the others’ features or has them on its TODO list. LUKS in particular is still very commonly used because native encryption is still missing or immature in some of them, but that will presumably get sorted out eventually.
YouTube titles, too :(
Yes, it is. ed25519 depends on the (elliptic-curve) discrete log problem for its security, which Shor’s algorithm can efficiently solve (theoretically, of course; not like it’s ever been done at any meaningful scale).
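To make that concrete, here’s a toy sketch of the underlying problem (numbers made up, and a multiplicative group mod p rather than ed25519’s actual elliptic curve): recovering the exponent is easy at this size, but the classical search scales exponentially with key length, while Shor’s algorithm would scale polynomially.

```c
/* Toy discrete log: given g, h, p, find x with g^x = h (mod p).
 * The group and the "key" here are made up and laughably small. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t p = 1000003, g = 2; /* tiny prime modulus, base */
    const uint64_t secret = 424242;    /* the "private key" */

    /* Public value: h = g^secret mod p. */
    uint64_t h = 1;
    for (uint64_t i = 0; i < secret; i++)
        h = (h * g) % p;

    /* The attacker sees only (g, h, p) and must search for the exponent.
     * At real key sizes this loop is hopeless classically; Shor's
     * algorithm solves it efficiently on a quantum computer. */
    for (uint64_t x = 0, acc = 1; x < p; x++, acc = (acc * g) % p) {
        if (acc == h) {
            printf("recovered x = %llu\n", (unsigned long long)x);
            break;
        }
    }
    return 0;
}
```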
Post-quantum algorithms are an area of active research right now. I don’t blame anyone for avoiding them at least until we have quantum computers big enough to solve baby toy elliptic curves.
To the best of my knowledge, this “drives from the same batch fail at around the same time” folk wisdom has never been demonstrated in statistical studies. But, I mean, mixing drive models is certainly not going to do any harm.
Those instructions are from the official docs, and install.sh comes from the source repo. It’s an annoying script (it basically runs apt, npm, and make on your behalf…thanks, I can do that myself), but if you’re trusting the repo source to begin with, I don’t think it’s any less secure.
This may be super-nitpicky (and I love LocalSend and use it a lot), but there is one difference between LocalSend and AirDrop: LocalSend requires network connectivity (and requires the devices to be on the same network), whereas AirDrop can work without any network connection (using Bluetooth).