I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s about a 20-minute read, so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
Short answer: It’s because of binary.
Computers are very good at calculating with powers of two, and because of that a lot of computer concepts use powers of two to make calculations easier.
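To make the "powers of two land near powers of ten" point concrete, here's a minimal sketch (my own illustration, not from the post) showing how each group of ten doublings lines up with a decimal prefix boundary, and how the gap grows:

```python
# Powers of two fall close to powers of ten every 10 doublings,
# which is why 2^10 = 1024 was a tempting stand-in for "kilo".
for exp in (10, 20, 30):
    binary = 2 ** exp                 # KiB, MiB, GiB boundaries
    decimal = 10 ** (3 * exp // 10)   # kB, MB, GB boundaries
    print(f"2^{exp} = {binary} vs 10^{3 * exp // 10} = {decimal} "
          f"(ratio {binary / decimal:.3f})")
```

Note how the ratio drifts from 1.024 at the kilo level to about 1.074 at the giga level, so the "close enough" approximation gets worse with every prefix.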
Edit: Oops… It’s 2^10, not 2^7
Sorry y’all… 😅
FTFY
Yeah, I deserve that. I’m just gonna leave my typo. Thanks for the laugh!
I’m confused, why this quotation? 1024 is 2^10, not 2^7
So the problem is that our decimal number system just sucks. Should have gone with hexadecimal 😎
/Joking, if it isn’t obvious. Thank you for the explanation.
Or seximal!
Not that 1024 would be any better, as it’s 4424 in base 6.
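A quick way to check that base-6 claim is a generic base-conversion helper (a small sketch of my own, not anything from the post):

```python
def to_base(n: int, base: int) -> str:
    """Convert a non-negative integer to its digit string in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, remainder = divmod(n, base)   # peel off the lowest digit
        digits.append(str(remainder))
    return "".join(reversed(digits))     # digits come out lowest-first

print(to_base(1024, 6))  # prints "4424"
```

Reading "4424" back: 4·216 + 4·36 + 2·6 + 4 = 864 + 144 + 12 + 4 = 1024, so the number really is no rounder in seximal.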
Long answer
Just to add, I would argue that by the definition of the prefix, kilo- means exactly 1000.
However, there is another term for this case: kibibyte (kilo binary byte, KiB instead of just KB). That way you are being clear about what you actually mean, which makes a big difference with modern storage and file sizes in particular.
EDIT: Of course the link in the post goes over this; I admit my brain initially glossed over that and I thought this was a question thread.