🧪 LLM Inference
Local AI Chat with MediaPipe & WebGPU
Using Gemma 3 270M (runs locally in your browser)
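A rough sketch of how a page like this might initialize local inference with MediaPipe's Tasks GenAI API. The package name `@mediapipe/tasks-genai`, the CDN WASM path, the `.task` model filename, and the generation parameters are assumptions for illustration, not taken from this page:

```javascript
// Pure helper: build the LlmInference options object.
// maxTokens / temperature values are illustrative assumptions.
function buildLlmOptions(modelPath) {
  return {
    baseOptions: { modelAssetPath: modelPath },
    maxTokens: 512,
    temperature: 0.8,
  };
}

// Browser-only: requires WebGPU support and the @mediapipe/tasks-genai
// package (assumed); loads the WASM runtime, then the Gemma model.
async function initLocalChat() {
  const { FilesetResolver, LlmInference } = await import(
    "@mediapipe/tasks-genai"
  );
  const genai = await FilesetResolver.forGenAiTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm"
  );
  return LlmInference.createFromOptions(
    genai,
    buildLlmOptions("/models/gemma-3-270m-it.task") // hypothetical path
  );
}

// On "Send": stream partial responses into the chat window.
async function onSend(llm, prompt, appendToChat) {
  await llm.generateResponse(prompt, (partialText, done) => {
    appendToChat(partialText, done);
  });
}
```

Because the model file is fetched once and cached, everything after the initial load runs on-device; no prompt text leaves the browser.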