Google says that starting today, it will make an "efficient model available in terms of size and capabilities" and that it will add other models and sizes soon. Why Google is not using a more generic name than "PaLM" for this API is anyone's guess, but the company is indeed making the PaLM model available through this API for multi-turn conversations and for single-turn general-purpose use cases like text summarization and classification.

A company spokesperson told me that Google chose PaLM for this first release "as it works particularly well for chat and text use cases." Likely candidates for additional models are LaMDA and MUM.

For developers who don't want to delve into the API, Google is launching the low-code MakerSuite service. This service, too, will only be available to Trusted Testers and will make two models available to these developers: PaLM chat-bison-001 and PaLM text-bison-001. PaLM chat will be the tool for building chat-style, multi-turn applications, while PaLM text is meant for single-turn input/output scenarios. The idea here is to let developers give a number of examples to the tool to teach it what kinds of results they are looking for, and then test these and make them available as code. Google, however, provided very few details about how exactly this service will work in practice.

The Generative AI App Builder is an entirely new service. It will allow developers to build AI-powered chat interfaces and digital assistants based on their own data. "Generative AI Application Builder is a fast application development environment designed to allow business users - not necessarily just developers - but to allow business users to work in concert with developers to leverage the power of search, conversation experiences and foundation models, while respecting enterprise controls," Kurian explained.

To do this, Google combined its foundation models with its enterprise search capabilities and its conversation AI for building single- and multi-turn conversations. Kurian noted that this could be used to retrieve information, but also, with the right hooks into a company's APIs, to transact. He stressed that users will get control over the generative flow here. They can opt to give the large language model control of this flow or use a more deterministic flow (maybe in a customer service scenario), where there is no risk of the model going off-piste.

Throughout its announcements, Google stressed that a company's training data will always be kept private and not used to train the broader model. The focus here is also clearly on business users who want to augment the model with their own data and/or tune it for their use cases.

The one thing we didn't see today was the public release of LaMDA, Google's best-known model. It's interesting that Google went with the PaLM model as the foundation for these services. With its 540 billion parameters, it's a significantly larger model than OpenAI's GPT-3 with its 175 billion parameters. A year ago, Google said PaLM typically outperformed GPT-3 in math questions, something large language models aren't necessarily best at. At the time, the company noted that the Google Research team responsible for the model was looking to build a model that could "generalize across domains and tasks while being highly efficient." What this means in practice, especially with GPT-3.5 now in the market and GPT-4 likely launching soon, remains to be seen.
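Since Google shared so few implementation details at announcement time, here is a purely hypothetical Python sketch of the two usage patterns described above: assembling MakerSuite-style few-shot examples into a single-turn prompt (the text-bison-001 use case, e.g. classification) versus keeping client-side history for a multi-turn session (the chat-bison-001 use case). All helper names here are invented for illustration; only the two model names come from the announcement.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only -- these helpers are invented for
# illustration and are not Google's published SDK.

def build_few_shot_prompt(examples, query):
    """MakerSuite-style idea: show the model a handful of
    input/output examples so it learns the desired result
    format, then append the new query as a single-turn prompt."""
    blocks = [f"Text: {text}\nLabel: {label}" for text, label in examples]
    blocks.append(f"Text: {query}\nLabel:")
    return "\n\n".join(blocks)

@dataclass
class ChatSession:
    """chat-bison-001-style multi-turn use: the client keeps the
    conversation history and would resend it with every turn."""
    model: str = "chat-bison-001"  # model name from the announcement
    history: list = field(default_factory=list)

    def add_turn(self, role, content):
        self.history.append({"role": role, "content": content})

# Single-turn classification prompt (text-bison-001-style use case):
prompt = build_few_shot_prompt(
    [("The battery died after a week.", "negative"),
     ("Setup took two minutes and it just works.", "positive")],
    "Great screen, terrible speakers.",
)

# Multi-turn session: each user turn and model reply is accumulated.
chat = ChatSession()
chat.add_turn("user", "Summarize our returns policy.")
chat.add_turn("model", "(model reply would be appended here)")
```

A real single-turn call would submit `prompt` once with no history, while a chat call would send the accumulated `chat.history` on every turn; both patterns are assumptions, since Google has not yet documented the API surface.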