Microsoft righted an age-old “wrong” (at least for those who geek out about disk formatting) earlier this week. With its latest Windows 11 Insider Canary Preview Build, the company increased the maximum FAT32 partition size limit from 32GB to 2TB when using the command line.
NTFS also has a 255 limit, but it’s UTF-16, so for Unicode you get more out of it. That’s a high price to pay for UTF-16, though. Windows is basically shuffling strings between UTF-16 and ASCII all the time: most apps are ASCII, but Windows is natively UTF-16. Every other currently maintained OS does UTF-8, which “won” Unicode.
The fact that all major Unix (not just Linux) filesystems are limited to 255 bytes says it’s not a feature in demand.
I’d much rather have the COW subvolume snapshotting and incremental backups of btrfs or ZFS. Plus all the other things Linux has over Windows, of course.
NTFS also has a 255 limit, but it’s UTF-16, so for Unicode you get more out of it.
I think this is a biased way of putting it. The NTFS way is easy to understand and therefore to manage. More importantly, ASCII basically means English only. I’ve seen enough of this kind of “discrimination” (stuff breaking, etc.) based on the language you use in software and technology, and it should end for good.
Every other currently maintained OS does UTF-8, which “won” Unicode.
UTF-8 is Unicode. UTF-8 characters can take more than one byte.
Plus all the other things Linux has over Windows, of course.
There are also encryption methods that slash the maximum length of each filename even further.
Of course UTF-8 is Unicode. The cool thing about UTF-8 is that it is ASCII, until it isn’t. It covers all of Unicode, but doesn’t need any extra bytes if you’re only writing Latin characters. Plus, UTF-8 passes seamlessly through old ASCII-era code: tools that understand it display it correctly, and the rest just show patches of gibberish but keep working otherwise. It’s a much better approach, with better legacy handling and more efficient packing for Latin-script languages, which is why it “won” out. UTF-16 pretty much only survives in Windows because of legacy, and it will be hard for Windows to escape it.
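To make that concrete, here’s a minimal Python sketch (the sample filenames are made up for the example) comparing how many bytes the same name costs in each encoding:

```python
# Compare how many bytes the same short filename costs in UTF-8 vs UTF-16.
samples = {
    "english": "report",
    "russian": "отчёт",
    "japanese": "報告書",
}

for label, name in samples.items():
    utf8 = name.encode("utf-8")
    utf16 = name.encode("utf-16-le")  # little-endian, no BOM, as NTFS stores names
    print(f"{label}: {len(name)} chars -> {len(utf8)} bytes UTF-8, {len(utf16)} bytes UTF-16")

# The ASCII-compatibility point: pure-ASCII text is byte-for-byte valid UTF-8.
assert "report".encode("ascii") == "report".encode("utf-8")
```

Latin text stays at one byte per character in UTF-8 (and is already valid ASCII), Cyrillic costs two bytes either way, and CJK is where UTF-16 pulls ahead.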
LUKS is by far the most common encryption setup on Linux. It’s done at the block layer, and the filesystem doesn’t know about it. No effect on filename length, or anything else.
None of that helps or discards anything I’ve said above. But it does allow you to say that the NTFS limit can be basically 1024 bytes. Just because you like what UTF-8 offers doesn’t solve the hurdles with Linux’s limits.
LUKS is commonly used, but it’s not the only one.
Linux’s VFS is where the 255-byte limit is hard. Some Linux filesystems, like ReiserFS, go way beyond it on disk. If it were a big deal, it would have been patched and widely adopted. The magic of Linux is that you can try it yourself, run your own fork, and submit patches.
LUKS is the one to talk about, as the others aren’t as good an approach in general. LUKS is the recommended approach.
Edit: oh, and NTFS is 510 bytes. UTF-16 = 16 bits = 2 bytes; 255 × 2 = 510.
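For reference, here’s a small sketch of what those limits mean in practice. It assumes a Unix system (os.pathconf isn’t available on Windows) and basic-plane characters only (anything needing UTF-16 surrogate pairs consumes two of NTFS’s 255 code units):

```python
import os

# Linux's VFS caps a single name at NAME_MAX = 255 *bytes*, whatever the encoding.
print(os.pathconf("/", "PC_NAME_MAX"))  # typically prints 255

def max_name_chars(sample_char: str, limit_bytes: int, encoding: str) -> int:
    """How many copies of this character fit in one filename under the limit."""
    return limit_bytes // len(sample_char.encode(encoding))

for script, ch in [("latin", "a"), ("cyrillic", "ф"), ("cjk", "語")]:
    on_linux = max_name_chars(ch, 255, "utf-8")     # 255 bytes of UTF-8
    on_ntfs = max_name_chars(ch, 510, "utf-16-le")  # 255 UTF-16 code units = 510 bytes
    print(f"{script}: {on_linux} chars on ext4/XFS, {on_ntfs} chars on NTFS")
```

So a name can run to 255 characters in any of those scripts on NTFS, but only 127 Cyrillic or 85 CJK characters on a 255-byte Unix filesystem.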
The magic of Linux is that you can try it yourself, run your own fork, and submit patches.
Well, it should probably go further and offer more of a different kind of magic: stuff working the way the user expects it to.
As for submitting patches, it sounds like you’re suggesting people play around with core parts responsible for filesystem operations. That advice is not going to work for everyone. Open source software is not ideal. It can be ideal in theory, but that’s it.
LUKS is the one to talk about, as the others aren’t as good an approach in general. LUKS is the recommended approach.
It looks like there are enough use cases where some people would rather not use LUKS.
I have lived quite happily on pretty much only open source for over 12 years now, professionally and at home (longer at home). I put Debian alongside Wikipedia as an example of what humans can be.
There are no gatekeepers on who can do what where, only on who will accept the patches. Projects fork for all kinds of reasons, though even Google failed to fork the Linux kernel. If there is a good patch to extend the filename limit, it will get in. With enough pressure, maybe the core team of that subsystem will do it themselves.
Open source already won, I’m afraid. Most of the internet, from IoT to supercomputers, runs open source, and it has been that way for a while. If you use Windows, fine, but it’s just a consumer end-node OS for muggles. 😉
If you set up a new install and say you want encryption, LUKS is what you get.
Does it look like I’m advocating for Windows? Nah.
Open source is great when it works. “If there is a good patch…” and “enough pressure and maybe…” are the sad reality of it. Why should people need to apply pressure for Linux to start supporting features long available in the filesystems it already supports? Why should I, specifically, spend time on it? Does Linux want to become an OS for everyone, or only for people experimenting with dangerous stuff that sometimes loses them data?
Don’t get me wrong, Linux is good even now. But there’s no need to actively deny points of possible improvement. When someone asks how great XFS is compared to the alternatives, you shouldn’t throw the word “exbibytes” at them; you should first think about what problems people might actually hit with it, especially if they want to switch from Windows.
If you set up a new install and say you want encryption, LUKS is what you get.
And if I only want to encrypt some files? I need to create a volume specifically for that, right? Or I could just use something else.
Open source clearly works, given the scale and breadth of its use. That’s the modern world, and its use is only increasing. This is a good thing for multiple reasons.
Unicode filename length clearly isn’t as big an issue as you feel it is, or it would have been fixed. There is some BIG money that could be spent fixing this by the countries and companies that need Unicode.
How you encrypt depends on your aim. If your aim is to limit the characters available in filenames, there are ways. If it’s read-only, you make a GPG tarball. LUKS if you want a live system. You can just create a file, LUKS-format it, set it up, close it, and reopen it whenever you need it.
Basically the same as systemd-homed does for you: https://wiki.archlinux.org/title/Systemd-homed
But there are many ways; a good few filesystems offer folder/file encryption natively, though I’d argue that’s less secure.
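For anyone wanting to try the create-a-file-and-LUKS-format-it workflow described above, here’s a rough sketch driven from Python. It needs root and the cryptsetup tools installed; vault.img, the vault mapper name, and /mnt/vault are placeholders for this example, and luksFormat will destroy whatever is in the file and prompt for a passphrase:

```python
import subprocess

def run(cmd: list[str]) -> None:
    """Echo and run one command, stopping on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One-time setup: backing file, LUKS header, filesystem inside.
run(["truncate", "-s", "512M", "vault.img"])       # create a 512 MB sparse file
run(["cryptsetup", "luksFormat", "vault.img"])     # format it as a LUKS container
run(["cryptsetup", "open", "vault.img", "vault"])  # map it to /dev/mapper/vault
run(["mkfs.ext4", "/dev/mapper/vault"])            # put a normal filesystem inside

# Everyday use: open, mount, work, close.
run(["mkdir", "-p", "/mnt/vault"])
run(["mount", "/dev/mapper/vault", "/mnt/vault"])
# ... read and write files under /mnt/vault ...
run(["umount", "/mnt/vault"])
run(["cryptsetup", "close", "vault"])              # locked again until reopened
```

After the one-time steps, the daily cycle is just cryptsetup open, mount, umount, cryptsetup close.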