
Does anyone have any recommendations for a simple S3 wrapper around a standard directory? I've got a few apps/services that can send data to S3 (or S3-compatible services) that I want to point to a local server I have, but they don't support SFTP or any of the more "primitive" solutions. I did use a Python local-S3 thing, but it was... not good.
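For reference, the kind of endpoint swap I'm after is just this (port, bucket, and keys are placeholders, not anything real):

    # Hypothetical sketch: point a stock S3 client at a local gateway
    # instead of AWS. Endpoint, credentials, and bucket are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",  # wherever the local gateway listens
        aws_access_key_id="localkey",
        aws_secret_access_key="localsecret",
    )
    s3.create_bucket(Bucket="scratch")
    s3.put_object(Bucket="scratch", Key="hello.txt", Body=b"hi")
    print(s3.get_object(Bucket="scratch", Key="hello.txt")["Body"].read())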




Versity Gateway looks like a reasonable option here. I haven't personally used it, but I know some folks who say it performs quite well as a "ZFS-backed S3" setup.

https://github.com/versity/versitygw

Unlike other options like Garage or MinIO, it doesn't do any clustering, replication, erasure coding, etc.

Your S3 objects are just files on disk, and Versity exposes them over S3. I gather it exists to provide an S3 interface on top of their other project (ScoutFS), but it seems like it should work on any old filesystem.
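A rough sketch of what that ends up looking like (the port, credentials, and root directory here are my assumptions, not versitygw's documented defaults):

    # Sketch: write an object through a directory-backed S3 gateway, then
    # read the same bytes straight off the filesystem. Endpoint, keys, and
    # the root directory are assumptions, not versitygw's actual defaults.
    import boto3, pathlib

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:7070",
        aws_access_key_id="gwkey",
        aws_secret_access_key="gwsecret",
    )
    s3.create_bucket(Bucket="photos")
    s3.put_object(Bucket="photos", Key="2024/cat.jpg", Body=b"not really a jpeg")

    # Because objects are plain files, this should be the same data:
    print(pathlib.Path("/srv/s3root/photos/2024/cat.jpg").read_bytes())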


Versity is really promising. I got a chance to meet with Ben recently at the Supercomputing conference in St. Louis and he was super chill about stuff. Big shout out to him.

He also mentioned that the MinIO-to-Versity migration is a straightforward process. Apparently, you just read the metadata from MinIO's shadow filesystem and set it as an extended attribute on your file.
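If I understood him right, the mechanics are roughly this (the xattr name and the MinIO metadata path are my guesses, not the actual migration tooling):

    # Very rough sketch of the migration idea as described: take the
    # per-object metadata MinIO kept alongside the data and re-attach it
    # to the plain file as an extended attribute. The xattr key and the
    # metadata path are hypothetical placeholders, not what versitygw uses.
    import os

    obj_path = "/srv/s3root/photos/2024/cat.jpg"                 # object as a plain file
    minio_meta = "/old-minio/.minio.sys/wherever-minio-kept-it"  # placeholder path

    with open(minio_meta, "rb") as f:
        meta = f.read()

    # Linux-only: attach the metadata blob to the file itself.
    os.setxattr(obj_path, "user.objmeta", meta)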


I really like what I've (just now) read about Versity. I like that they are thinking about large scale deployments with tape as the explicit cold-storage option. It really makes sense to me coming from an HPC background.

Thanks for posting this, as it's the first I've come across their work.


Garage also decided not to implement erasure coding.

You could perhaps check out https://garagehq.deuxfleurs.fr/

I've done some preliminary testing with Garage and was pleasantly surprised. It worked as expected and I didn't run into any gotchas.

Garage is really good for core S3; the only thing I ran into was that it didn't support object tagging. That's arguably a more esoteric corner of the S3 API, but MinIO does support it. If you're just standing up a test endpoint, object tagging is most likely an unneeded feature anyway.

It's a "Misc" endpoint in the Garage docs here: https://garagehq.deuxfleurs.fr/documentation/reference-manua...


"didn't support object tagging"

Thanks for pointing that out.


Do you want to serve already-existing files from a directory, or just have the backend be a directory on your server?

If the answer is the latter, seaweedfs is an option:

https://github.com/seaweedfs/seaweedfs?tab=readme-ov-file#qu...


s3proxy has a filesystem backend [0].

Possibly of interest: s3gw[1] is a modified version of Ceph's radosgw that allows it to run standalone. It's geared towards Kubernetes (notably part of Rancher's storage solution), but should work as a standalone container.

[0] https://github.com/gaul/s3proxy [1] https://github.com/s3gw-tech/s3gw


Check out aistore from NVIDIA: https://github.com/NVIDIA/aistore

It's not a fully featured S3-compatible service like MinIO, but we used it to great success as a local on-prem S3 read/write cache with AWS as the backing S3 store. This avoided expensive network egress charges, since we wanted to process data both in AWS and in a non-AWS GPU cluster (i.e. a neocloud).


That is not easily possible. In S3, "foo" and "foo/bar" are valid and distinct object names that cannot be directly mapped to a POSIX directory. As soon as you create one of those objects, you cannot create the other.
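You can see the conflict with nothing but a local filesystem:

    # Why S3 keys don't map 1:1 onto a POSIX directory: once "foo" exists
    # as a regular file, "foo/bar" can't be created, even though both are
    # valid, distinct S3 object keys.
    import os, tempfile

    root = tempfile.mkdtemp()
    with open(os.path.join(root, "foo"), "w") as f:
        f.write("object 'foo'")

    try:
        os.makedirs(os.path.join(root, "foo"), exist_ok=True)  # needed for foo/bar
        with open(os.path.join(root, "foo", "bar"), "w") as f:
            f.write("object 'foo/bar'")
    except (FileExistsError, NotADirectoryError) as e:
        print("can't have both:", e)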

rclone serve s3 could be an option.
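Run it against a local directory, then it's the usual endpoint swap on the client side (the address and auth below are placeholders; check the rclone docs for the real flags):

    # Sketch assuming something like "rclone serve s3 /srv/data" is already
    # running locally. The address, access key, and secret are placeholders
    # (check the rclone docs for the actual --addr and auth flags).
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://127.0.0.1:8080",
        aws_access_key_id="rclonekey",
        aws_secret_access_key="rclonesecret",
    )
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])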

I just learned about the rclone serve subcommand the other day. Rclone is not exactly niche, but it feels like such an underrated piece of software.

This is the winner


