The key is: don't experiment or develop against a giant database. Make a small subset of the database to test your views on. Against a small dataset, view generation is plenty fast.
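One way to carve out such a subset is CouchDB's replication API, which can copy an explicit list of documents into a fresh database. This is a sketch, not a recipe: the database names and doc IDs below are placeholders you'd substitute with your own.

```json
{
  "source": "http://localhost:5984/bigdb",
  "target": "http://localhost:5984/bigdb_dev",
  "create_target": true,
  "doc_ids": ["doc-001", "doc-002", "doc-003"]
}
```

POST that body to the server's `/_replicate` endpoint and you get a small `bigdb_dev` database containing only those documents, against which view iteration is nearly instant. For larger or rule-based subsets, a filtered replication (`"filter": "designdoc/filtername"` pointing at a filter function in a design document) works the same way.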
As far as generation speed, what matters is that the indexer can keep up with the insert rate. Unless you are doing a big import from an existing dataset, you'll have to have A LOT of user activity to generate so much data that you are outrunning the indexer. In that case you probably have a big enough project that it makes sense to use a cluster solution like CouchDB-Lounge (which will divide the generation time by roughly the # of hosts you bring into the cluster).
Someday soon I hope we'll have an integrated clustering solution (Cloudant is working to contribute theirs back to the project) so you can just deploy to a set of nodes and get these benefits without much additional operational complexity.