groq
| Slug | groq |
|---|---|
| Updated | 3 hours ago |
| Website | https://groq.com/ |
Groq is an AI inference company, founded in 2016, that developed the LPU (Language Processing Unit), which it describes as the first chip purpose-built for AI inference. Its proprietary LPU inference engine delivers ultra-low-latency inference: published benchmarks show Llama 2 70B running at roughly 300 tokens per second, reportedly 10x faster than NVIDIA H100 clusters and up to 18x faster on Anyscale's LLMPerf Leaderboard. Groq focuses on making AI inference fast and affordable at scale, offering both cloud services and on-premises deployment options, with a mission to enable real-time AI applications that were previously impractical due to latency constraints.
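Groq's cloud service exposes an OpenAI-compatible HTTP API. As a minimal sketch of what a call looks like, the snippet below builds a chat-completion request using only the Python standard library; the endpoint URL and model name are assumptions based on Groq's public documentation and may change, so check current docs before use.

```python
import json
import os
import urllib.request

# Assumed endpoint and model name for Groq's OpenAI-compatible API;
# verify both against Groq's current documentation.
API_URL = "https://api.groq.com/openai/v1/chat/completions"
MODEL = "llama-3.1-8b-instant"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Construct (but do not send) a chat-completion request."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request(
        "Why does low-latency inference matter?",
        os.environ.get("GROQ_API_KEY", "sk-demo"),
    )
    # urllib.request.urlopen(req) would send the call; it is omitted here
    # so the sketch runs without network access or a real API key.
    print(req.full_url)
```

Because the API follows the OpenAI wire format, existing OpenAI client libraries can typically be pointed at Groq by overriding the base URL.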