(Daily Point) — Google has taken a significant stride in democratizing access to advanced AI technologies with the introduction of Gemma, a family of “open models” designed for building AI software.
This move allows developers and businesses to harness AI capabilities in their projects without the barrier of high costs, as Google provides free access to the model weights and accompanying technical data. Gemma is optimized for Google Cloud, and Google is sweetening the proposition with a $300 credit for first-time cloud customers, a move that could drive revenue growth in the competitive cloud computing market.
While not fully open source, Gemma strikes a middle ground between proprietary control and open collaboration: Google releases key technical data but retains some control over terms of use and ownership to guard against potential abuses, while still fostering widespread contributions and benefits within the AI community.
In contrast to Meta’s Llama 2 family, whose models range from seven billion to 70 billion parameters, Gemma comes in two sizes: two billion and seven billion parameters. While Google has not disclosed the size of its largest Gemini models, Gemma targets developers who want robust AI capabilities without needing the scale of larger models.
Additionally, Google’s collaboration with Nvidia ensures that Gemma models run smoothly on Nvidia’s chips, broadening the range of hardware on which they can be deployed. Nvidia’s commitment to making its chatbot software compatible with Gemma further enhances the models’ accessibility and usability across various computing environments.