
Amazing product. We have been using VictoriaMetrics for quite a while, and for logs we previously ran Loki and a custom ClickHouse/Vector pipeline; we have since switched to VictoriaLogs. It is much better and faster than Loki, and the same goes for the custom ClickHouse/Vector setup we had. Kudos to the team. We are waiting for VictoriaTraces so we can switch our Tempo instance over to it for the OpenTelemetry side.
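(For anyone curious what the ingestion side can look like after such a switch: a minimal sketch of pushing newline-delimited JSON into VictoriaLogs over its jsonline endpoint. It assumes a single-node instance on the default port 9428; the service and field names are illustrative, not taken from the poster's setup.)

    # Push a batch of log entries to VictoriaLogs' JSON-line ingestion API.
    # Assumes a single-node instance at localhost:9428; labels are illustrative.
    import json
    import time
    import requests

    def push_logs(entries):
        # One JSON object per line; _msg/_time are the default message and
        # timestamp fields, and _stream_fields names the stream-identifying labels.
        body = "\n".join(json.dumps(e) for e in entries) + "\n"
        resp = requests.post(
            "http://localhost:9428/insert/jsonline",
            params={"_stream_fields": "service,env"},
            data=body.encode("utf-8"),
            timeout=5,
        )
        resp.raise_for_status()

    push_logs([{
        "_time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "_msg": "payment processed",
        "service": "billing",
        "env": "prod",
    }])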


Can you talk a little bit about your VictoriaLogs setup? Roughly how many logs are you ingesting, and what kind of sizing do you have?


Sure thing!

Ingested logs (24h): 428 million. Ingested bytes (24h): 625 GB. Insert req/s: 6k.

8 vCPU, 16 GB memory. Running a standard-rwo PVC on GCP.

We have a couple of projects like this with similar usage and similar machine sizing.

Still running vmlogs-single, and we will keep doing so until we see a need to move to the vmlogs-cluster version.
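Back-of-envelope arithmetic on those figures (just the numbers quoted above, nothing measured separately):

    # Rough averages implied by ~428M logs and ~625 GB ingested per day.
    logs_per_day = 428e6
    bytes_per_day = 625e9
    seconds_per_day = 24 * 3600

    print(logs_per_day / seconds_per_day)         # ~4,950 logs/s on average
    print(bytes_per_day / seconds_per_day / 1e6)  # ~7.2 MB/s of ingestion
    print(bytes_per_day / logs_per_day)           # ~1,460 bytes per log entry

So that works out to roughly 5k logs/s and ~7 MB/s averaged over the day.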


That sounds like a lot of resources provisioned for 6k/s. 625GB/24hr is a small footprint.


I would have said it sounded pretty good. What technologies are you comparing it against, out of curiosity?


ClickHouse


Can you share any additional details? What kind of ingestion volume do you have, and what size of ClickHouse cluster?

I'm also curious how it handles structured vs unstructured logs.

Thanks!


That seems pretty good. Do you have any sort of HA solution?


Personally, a Hetzner SX295 with 14x 22 TB drives in a ZFS setup.

It ingests 70k lines per second without breaking a sweat.

Reads are just as fast.
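(On the read side, a minimal sketch of hitting the VictoriaLogs LogsQL query endpoint; it assumes a single-node instance on the default port 9428, and the query itself is illustrative only.)

    # Query VictoriaLogs over HTTP with LogsQL and print matching entries.
    # Assumes a single-node instance at localhost:9428; the query is illustrative.
    import json
    import requests

    resp = requests.get(
        "http://localhost:9428/select/logsql/query",
        params={"query": "error AND _time:5m", "limit": 10},
        timeout=10,
    )
    resp.raise_for_status()

    # The endpoint streams newline-delimited JSON, one log entry per line.
    for line in resp.text.splitlines():
        if line:
            entry = json.loads(line)
            print(entry.get("_time"), entry.get("_msg"))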



