Posts

New top story on Hacker News: PlanetScale for Postgres

PlanetScale for Postgres | 59 points by adocomplete | 7 comments on Hacker News.

New top story on Hacker News: A CarFax for Used PCs; Hewlett Packard wants to give old laptops new life

A CarFax for Used PCs; Hewlett Packard wants to give old laptops new life | 24 points by rubenbe | 16 comments on Hacker News.

New top story on Hacker News: Asynchronous Error Handling Is Hard

Asynchronous Error Handling Is Hard | 11 points by hedgehog | 1 comment on Hacker News.

New top story on Hacker News: Loss of key US satellite data could send hurricane forecasting back 'decades'

Loss of key US satellite data could send hurricane forecasting back 'decades' | 21 points by trauco | 5 comments on Hacker News.

New top story on Hacker News: Europe's First Exascale Supercomputer Powers Up

Europe's First Exascale Supercomputer Powers Up | 8 points by Brajeshwar | 1 comment on Hacker News.

New top story on Hacker News: Show HN: A tool to benchmark LLM APIs (OpenAI, Claude, local/self-hosted)

Show HN: A tool to benchmark LLM APIs (OpenAI, Claude, local/self-hosted) | 3 points by mrqjr | 1 comment on Hacker News.

I recently built a small open-source tool to benchmark different LLM API endpoints, including OpenAI, Claude, and self-hosted models (like llama.cpp). It runs a configurable number of test requests and reports two key metrics:

• First-token latency (ms): how long it takes for the first token to appear
• Output speed (tokens/sec): overall output fluency

Demo: https://llmapitest.com/
Code: https://ift.tt/A5w0Fk3

The goal is to provide a simple, visual, and reproducible way to evaluate performance across different LLM providers, including the growing number of third-party "proxy" or "cheap LLM API" services. It supports:

• OpenAI-compatible APIs (official + proxies)
• Claude (via Anthropic)
• Local endpoints (custom/self-hosted)

You can also self-host it with docker-compose. Config is clean; adding a new provider only requires a simple plugin-style addition. Would love fee...
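The two metrics the tool reports can be sketched in a few lines. This is not the tool's actual code, just a minimal illustration of how first-token latency and tokens/sec fall out of timestamping a streaming response; `fake_stream` is a hypothetical stand-in for a provider's token stream.

```python
import time

def benchmark_stream(token_iter):
    """Consume a token stream and compute the two metrics:
    first-token latency (ms) and output speed (tokens/sec)."""
    start = time.perf_counter()
    first_token_at = None
    count = 0
    for _ in token_iter:
        if first_token_at is None:
            first_token_at = time.perf_counter()
        count += 1
    end = time.perf_counter()
    if first_token_at is None:
        return {"first_token_latency_ms": None, "tokens_per_sec": 0.0}
    gen_time = end - first_token_at
    return {
        "first_token_latency_ms": (first_token_at - start) * 1000.0,
        "tokens_per_sec": count / gen_time if gen_time > 0 else float("inf"),
    }

def fake_stream(n=20, first_delay=0.05, per_token=0.01):
    # Simulated provider: a pause before the first token, then steady output.
    time.sleep(first_delay)
    for i in range(n):
        yield f"tok{i}"
        time.sleep(per_token)

print(benchmark_stream(fake_stream()))
```

In a real run the iterator would be a streaming HTTP response from an OpenAI-compatible or Anthropic endpoint; the measurement logic stays the same.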