Google has introduced FunctionGemma, a specialized variant of the Gemma 3 270M model designed for function calling and optimized for edge workloads. FunctionGemma retains the Gemma 3 architecture but focuses on translating natural language into executable API actions rather than general chat. It uses a structured conversation format with control tokens to delimit tool definitions and function calls, making tool use reliable enough for production. Trained on 6 trillion tokens, the model uses a 256K-token vocabulary tuned for JSON and multilingual text, which improves token efficiency. Its primary deployment target is edge devices such as phones and laptops, where its compact size and quantization support enable low-latency, low-memory inference. Demonstrations such as Mobile Actions and Tiny Garden showcase complex tasks performed entirely on-device, without server calls, reaching up to 85% accuracy after fine-tuning. This marks a step toward efficient, localized AI that operates independently of cloud infrastructure, which is crucial for privacy and real-time applications.
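To make the idea concrete, here is a minimal sketch of the application-side loop such a model implies: the app declares a tool schema, the model emits a JSON function call, and a local dispatcher executes it on-device. The tool name, schema, and call format below are illustrative assumptions, not FunctionGemma's documented API or control-token syntax.

```python
import json

# Hypothetical tool definition an app might hand to the model
# (JSON-schema style); the "set_alarm" tool is an invented example.
SET_ALARM_TOOL = {
    "name": "set_alarm",
    "description": "Set an alarm on the device.",
    "parameters": {
        "type": "object",
        "properties": {
            "time": {"type": "string", "description": "24h time, e.g. 07:30"},
            "label": {"type": "string"},
        },
        "required": ["time"],
    },
}

# Local handlers keyed by tool name: execution stays on-device,
# with no server round-trip.
def set_alarm(time: str, label: str = "") -> str:
    return f"Alarm set for {time}" + (f" ({label})" if label else "")

HANDLERS = {"set_alarm": set_alarm}

def dispatch(model_output: str) -> str:
    """Parse a JSON function call emitted by the model and run it."""
    call = json.loads(model_output)
    handler = HANDLERS[call["name"]]
    return handler(**call.get("arguments", {}))

# For a request like "wake me at 7:30 for the gym", the model's turn
# might be a JSON call such as:
model_output = '{"name": "set_alarm", "arguments": {"time": "07:30", "label": "gym"}}'
print(dispatch(model_output))  # -> Alarm set for 07:30 (gym)
```

The point of the structured format is exactly this separation: the model only has to produce well-formed JSON matching a declared schema, while the app keeps full control over what actually executes.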
Read Full Article: Google’s FunctionGemma: AI for Edge Function Calling