AI Hybrid Inference: extract expected inputs from prompt #8989
base: firebase-ai-hybridinference
Conversation
  // Triggers out-of-band download so model will eventually become available.
- const availability = await this.downloadIfAvailable();
+ const availability = await this.downloadIfAvailable(mergedOptions);
I now see the same error in Edge that we used to have in Chrome when we migrated to the new type: the LanguageModel.prompt method exists, but throws an unsupported-input error with the new rich type. Edge Canary works. I'll take a look at detecting the Edge version in isAvailable.
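A minimal sketch of what such a check could look like, assuming detection via User-Agent Client Hints; the helper name and the version threshold are placeholders, not the PR's actual implementation:

```ts
interface UABrand {
  brand: string;
  version: string;
}

// Returns true when running on an Edge release older than the assumed
// minimum version that accepts the new rich prompt input type.
function isEdgeWithoutRichInputSupport(minSupportedMajorVersion: number): boolean {
  // navigator.userAgentData is Chromium-only and not yet in the standard DOM typings.
  const uaData = (navigator as { userAgentData?: { brands: UABrand[] } }).userAgentData;
  const edgeBrand = uaData?.brands.find(b => b.brand === 'Microsoft Edge');
  if (!edgeBrand) {
    // Not Edge (or no UA data available): assume rich inputs behave as in Chrome.
    return false;
  }
  return parseInt(edgeBrand.version, 10) < minSupportedMajorVersion;
}
```

isAvailable could consult a helper like this and report the model as unavailable on affected Edge versions instead of letting LanguageModel.prompt throw later.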
Problem Statement
Vertex doesn't require callers to pre-specify expected input types. Could we make the hybrid API do the same?
Now that Edge supports text inputs but not image inputs, we need to remove the default image input type; this further motivates the change.
Solution
Since the AI SDK request object contains the intended input types, we can extract those and reformat them into the format expected by Chrome.
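A minimal sketch of that mapping, assuming the request looks like a Firebase `GenerateContentRequest` (an array of `contents` whose `parts` carry either text or `inlineData` with a MIME type) and that Chrome's Prompt API accepts an `expectedInputs` array of `{ type }` entries; the helper name and exact type shapes are illustrative, not the PR's actual code:

```ts
type ExpectedInput = { type: 'text' | 'image' | 'audio' };

interface PartLike {
  text?: string;
  inlineData?: { mimeType: string; data: string };
}

// Walk the request parts and collect the distinct input types they use.
function extractExpectedInputs(contents: Array<{ parts: PartLike[] }>): ExpectedInput[] {
  const types = new Set<ExpectedInput['type']>();
  for (const content of contents) {
    for (const part of content.parts) {
      if (part.text !== undefined) {
        types.add('text');
      } else if (part.inlineData) {
        const { mimeType } = part.inlineData;
        if (mimeType.startsWith('image/')) {
          types.add('image');
        } else if (mimeType.startsWith('audio/')) {
          types.add('audio');
        }
      }
    }
  }
  return Array.from(types, type => ({ type }));
}
```

The result could then be merged into the options handed to the on-device model (for example, the `mergedOptions` passed to `downloadIfAvailable` in the diff above), so the model is created with exactly the input types the prompt actually uses.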