Load Test GlassFlow for ClickHouse: Real-Time Dedup at Scale (glassflow.dev)
19 points by super_ar 3 days ago | 9 comments
super_ar 3 days ago [-]
Hi HN! A few weeks ago, we shared GlassFlow, an open-source streaming ETL that dedups and joins streams from Kafka for ClickHouse (https://news.ycombinator.com/item?id=43953722).

One of the top questions we received was: “How well does it perform at high throughput?”

We ran a load test and would like to share some results with you.

Summary of the test:

- Tested on 20m records

- Kafka produced 55,000 records/sec

- Processing rate of GlassFlow (deduplication): 9,000+ records/sec

- Measured on a MacBook Pro (M3 Max)

- End-to-end latency: <0.12 ms per request

Here is the blog post with the full test results, including runs with different parameters (rps, # of publishers, etc.): https://www.glassflow.dev/blog/load-test-glass-flow-for-clic...

It was important to us to set up the testing in a way that anyone can reproduce. Here are the docs: https://docs.glassflow.dev/load-test/setup
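For a rough feel of the load-generation side, here is a minimal sketch of a duplicate-heavy Kafka producer in Go (using segmentio/kafka-go; the topic name, key scheme, and ~10% duplicate ratio are illustrative assumptions, not the actual harness from the docs):

    package main

    import (
        "context"
        "fmt"
        "math/rand"

        "github.com/segmentio/kafka-go"
    )

    func main() {
        // Writer pointed at a local broker; topic name is an assumption.
        w := &kafka.Writer{
            Addr:     kafka.TCP("localhost:9092"),
            Topic:    "events",
            Balancer: &kafka.Hash{}, // key-based partitioning keeps duplicates together
        }
        defer w.Close()

        ctx := context.Background()
        for i := 0; i < 1_000_000; i++ {
            // Reuse ~10% of IDs so the stream actually contains duplicates to dedup.
            id := i
            if i > 0 && rand.Intn(10) == 0 {
                id = rand.Intn(i)
            }
            err := w.WriteMessages(ctx, kafka.Message{
                Key:   []byte(fmt.Sprintf("%d", id)),
                Value: []byte(fmt.Sprintf(`{"event_id": %d}`, id)),
            })
            if err != nil {
                panic(err)
            }
        }
    }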

We would love to get feedback, especially from folks ingesting high-throughput data into ClickHouse.

Thanks for reading!

Ashish and Armend (founders)

secondcoming 2 hours ago [-]
> - Measured on a MacBook Pro (M3 Max)

Everything was running on the same machine?

super_ar 2 hours ago [-]
Yes, same machine.
kI3RO 3 hours ago [-]
That site has no scrollbars so I can't read it. Any alternative?
super_ar 3 hours ago [-]
There is another test that we published on our docs page. You can check it out here:

Setup: https://docs.glassflow.dev/load-test/setup

Results: https://docs.glassflow.dev/load-test/results

api 6 hours ago [-]
Unless I’m missing some big numbers somewhere, you could do that locally on a Pi 5 with efficient code. Nothing heroic required, just a decently fast language like Go.

My laptop can run 70B LLMs at usable speeds.

I know. Doesn’t scale. No redundancy. No auto redeploy on failures. This is what I mean.

Do we really have to sacrifice this much efficiency for those things, or are we doing it wrong? Does the ability to redeploy on failures, cluster, and scale really require order-of-magnitude performance penalties across the whole stack?

super_ar 5 hours ago [-]
Totally fair point. For stable, known workloads, you can get really far with something lightweight on a single machine. The challenge comes when you need fault tolerance, scaling, and delivery guarantees without constantly jumping in to fix things; we often hear from data teams about traffic peaks they can't easily predict. But yes, a lot of existing tools make you pay a high efficiency cost for those guarantees. At GlassFlow we are trying to hit the sweet spot: efficient but still resilient.
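To make that trade-off concrete: the lightweight single-machine version of dedup is essentially a seen-set keyed by event ID. A rough sketch in Go (in-memory and unbounded, which is exactly what gives up the fault tolerance and scaling discussed above):

    // Single-machine dedup: a seen-set keyed by event ID.
    // State lives in one process's memory, so a crash loses it;
    // that is the trade-off against distributed, fault-tolerant setups.
    package dedup

    type Deduper struct {
        seen map[string]struct{}
    }

    func New() *Deduper {
        return &Deduper{seen: make(map[string]struct{})}
    }

    // Insert reports whether id is new (true) or a duplicate (false).
    func (d *Deduper) Insert(id string) bool {
        if _, dup := d.seen[id]; dup {
            return false
        }
        d.seen[id] = struct{}{}
        return true
    }

A map like this comfortably handles millions of lookups per second on commodity hardware, which is api's point; most of the cost in real pipelines goes to surviving failures and growth, not the membership check itself.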
CaveTech 3 hours ago [-]
I think your benchmark may miss the mark a bit if this is your angle.

20m records and 9k/sec isn’t very impressive. I would imagine most prospective customers have larger workloads, as you could throw this behind Postgres and call it a day. FWIW I was interested, but your metrics made me second-guess and wonder what was wrong.

super_ar 3 hours ago [-]
Fair point, and thanks for calling it out! To clarify, we’re focused on a specific use case: Kafka-to-ClickHouse pipelines with exactly-once guarantees. Kafka can’t provide exactly-once out of the box when writing to external systems like ClickHouse. You could use something like Flink, but there’s no native Flink-to-ClickHouse connector, and Flink demands real ops effort from teams.

Our goal was to give users a very easy-to-reproduce load test to validate the results. As a next step, we’re actively working on a Kubernetes-ready version that scales horizontally, and we plan to share those higher-throughput results with the HN community soon.
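To illustrate the failure mode being described, here is a sketch of a plain consumer loop in Go (segmentio/kafka-go; insertIntoClickHouse is a hypothetical stand-in for a real sink write). The crash window between the insert and the offset commit is what makes this at-least-once rather than exactly-once:

    package main

    import (
        "context"
        "log"

        "github.com/segmentio/kafka-go"
    )

    // insertIntoClickHouse is a hypothetical stand-in for a real sink write.
    func insertIntoClickHouse(row []byte) { /* ... */ }

    func main() {
        ctx := context.Background()
        r := kafka.NewReader(kafka.ReaderConfig{
            Brokers: []string{"localhost:9092"},
            GroupID: "clickhouse-sink",
            Topic:   "events",
        })
        defer r.Close()

        for {
            m, err := r.FetchMessage(ctx) // fetch without committing the offset
            if err != nil {
                log.Fatal(err)
            }
            insertIntoClickHouse(m.Value)
            // A crash here loses the commit but not the insert, so the same
            // message is fetched and inserted again after restart: at-least-once.
            if err := r.CommitMessages(ctx, m); err != nil {
                log.Fatal(err)
            }
        }
    }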