Interesting, so why do that? Is it just to simplify your client code? Instead of using the S3 API, you just save files in a standard (virtual) file system? Any other benefits or reliability/performance drawbacks?
I use it, for example, for Gitea storage. My host doesn't have a lot of storage, but with rclone it all goes directly onto Google Drive (unlimited paid storage).
Gitea also has a private docker container registry built in, which quickly grows to several hundred gigabytes. It all works perfectly well with rclone.
This makes my host stateless: just run the Gitea docker image with Google-Drive-backed storage. It works great because both git repositories and docker images are backed by files that are essentially immutable.
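For illustration, a minimal sketch of that setup, assuming a Google Drive remote named "gdrive" has already been set up with `rclone config` (the remote name, paths, and ports here are placeholders, not my exact config):

```
# Mount the Google Drive remote where Gitea will keep its data.
# --vfs-cache-mode writes buffers uploads on local disk; --daemon backgrounds the mount.
rclone mount gdrive:gitea /srv/gitea-data \
  --vfs-cache-mode writes \
  --daemon

# Run the official Gitea image with the mounted directory as its /data volume.
docker run -d --name gitea \
  -p 3000:3000 -p 2222:22 \
  -v /srv/gitea-data:/data \
  gitea/gitea:latest
```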
An example that would not work well is a container whose storage is a SQLite file that updates often; trying to sync that SQLite file to Google Drive with rclone would be a bad idea.
The cloud drive approach considerably simplifies the code: you can use shell scripts, Unix tools, and third-party apps, and combine them freely. It gives you a lot of freedom and power. It is just Unix, but with a cloud.
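To make that concrete (paths and remote name are made up for the example), once the remote is mounted, ordinary Unix tooling works on it like any other directory:

```
# Assume gdrive:backups is already mounted at /mnt/gdrive via rclone mount.
du -sh /mnt/gdrive/*                      # check how much each cloud folder uses
grep -r "ERROR" /mnt/gdrive/logs/         # search logs stored in the cloud
tar czf /mnt/gdrive/etc-backup.tgz /etc   # archive straight onto Google Drive
```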
This approach also protects you from vendor lock-in: you can use whatever cloud storage you like.
Performance drawbacks are evident: if a file is not yet in the local cache, it takes some time to fetch it. But that does not really matter for most apps, because the initial lag is relatively short.
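If that lag does matter, rclone's VFS cache can be tuned. A rough sketch (the size and age limits are arbitrary placeholders, not recommendations):

```
# Full caching keeps recently read files on local disk, bounded by size and age,
# so repeated reads skip the download entirely.
rclone mount gdrive:gitea /srv/gitea-data \
  --vfs-cache-mode full \
  --vfs-cache-max-size 20G \
  --vfs-cache-max-age 72h \
  --daemon
```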