It's removing a layer of complexity from your storage: as the article explains (does anyone read the articles? Anyone?), there's currently a lot of overhead in SSDs spent mapping their underlying flash model into pretending to be a hard drive with 512-byte or 4K sectors, just like a 40-year-old spinning platter.
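To make that overhead concrete, here's a toy Python model of what the flash translation layer inside the SSD has to do to keep up the pretence. The sizes and structure are illustrative only, not any vendor's actual firmware:

    # Flash can't be overwritten in place and is erased in large blocks, so the
    # FTL keeps a remapping table to fake in-place 4K "sector" writes.
    SECTOR = 4096          # what the drive pretends to expose
    mapping = {}           # logical sector number -> physical flash page
    stale = set()          # superseded pages that garbage collection must reclaim
    next_page = 0          # naive bump allocator for fresh flash pages

    def write_sector(lba, data):
        """Emulate an in-place 4K overwrite on flash that can't overwrite in place."""
        global next_page
        assert len(data) == SECTOR
        if lba in mapping:               # old copy becomes garbage to erase later
            stale.add(mapping[lba])
        mapping[lba] = next_page         # the "overwrite" is write-elsewhere + remap
        next_page += 1

    write_sector(7, b"\0" * SECTOR)
    write_sector(7, b"\1" * SECTOR)      # one logical overwrite...
    print(len(stale))                    # ...leaves one stale page to clean up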
Your filesystem then layers a bunch of work on top of that emulation to map the things you care about - files - into fragments and metadata packed into those 4K chunks. You would gain the ability to do things like:
1. Throw away all the spinning disk emulation code in the SSD.
2. Align the FS-level primitives with the storage: if you've set your RAID chunk size to 64K per array member (for example), store a 64K object in one write instead of breaking it into 16 x 4K blocks. If your ZFS filesystem uses a 1 MB recordsize, write 1 MB objects to disk, not 256 separate 4K chunks.
3. Variable-sized objects mean your filesystem could simply dispatch whole files as objects: if the FS knows your photo is a 20 MB file and your source code file is 1K, it no longer has to break the photo into thousands of blocks or waste a whole 4K block on the 1K file; it writes a 20 MB object and a 1K object (there's a rough sketch of this after the list).
4. Applications could access the storage even more directly where it makes sense: Postgres, for example, stores large records via the TOAST mechanism, where a very large column value in a row is stored separately from the rest of the table (so as not to blow out the table files). You could extend that special case to address the storage directly and not bother with filesystem overhead at all (also sketched below).
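For points 2 and 3, here's a rough sketch of what a variable-sized object interface could look like from the filesystem's side. ObjectDevice is a made-up stand-in in the spirit of a key-value style command set, not a real driver API:

    # Hypothetical device that accepts whole objects of any size, so the FS can
    # hand over an entire file or an entire ZFS-sized record in one operation.
    class ObjectDevice:
        def __init__(self):
            self.objects = {}             # key -> bytes, placement is the device's problem

        def put(self, key, payload):
            self.objects[key] = payload   # one object, whatever its size

        def get(self, key):
            return self.objects[key]

    dev = ObjectDevice()
    # The 20 MB photo is one object, the tiny source file is another: no 4K
    # padding for the small file, no thousands of block writes for the big one.
    dev.put("photos/IMG_1234.jpg", b"\xff" * (20 * 1024 * 1024))
    dev.put("src/main.c", b"int main(void) { return 0; }")

    print(20 * 1024 * 1024 // 4096)   # 5120: the 4K blocks the FS tracks for that photo today
    print(1024 * 1024 // 4096)        # 256: the 4K blocks hiding inside one 1 MB ZFS record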
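And for point 4, pure speculation about the shape a direct path could take: the oversized-value handling below plays the role TOAST plays in Postgres, but the key scheme and storage path are invented for illustration, not how Postgres works today:

    # The database hands the huge column value straight to the object store and
    # keeps only a small reference in the row itself.
    object_store = {}                               # stand-in for the device's object API

    def store_wide_row(table, row_id, small_cols, big_value):
        key = f"{table}/oversize/{row_id}"          # made-up key scheme
        object_store[key] = big_value               # large value bypasses the filesystem
        return {**small_cols, "id": row_id, "big_value_ref": key}

    row = store_wide_row("documents", 42, {"title": "report"}, b"x" * (50 * 1024 * 1024))
    print(row["big_value_ref"])                     # the row stays small; the blob lives on the device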