Revert "Reduce gemini partial batch noise"
This reverts commit 3be797131a.
@@ -655,22 +655,6 @@
- If behavior in the browser does not match the latest backend/frontend code, the first assumption should be stale frontend assets until proven otherwise
## Recent Change Log
- Date: `2026-03-17`
- What changed:
- Added adaptive Gemini Vision output-token sizing so smaller candidate batches, especially single-candidate sequential recovery calls, now request much shorter responses.
- Added a dedicated shorter single-candidate Gemini Vision instruction path for sequential recovery after batch failure.
- Stopped counting a batch as a strong user-facing partial failure when sequential recovery still salvages recommendations from that batch.
- Added unit coverage for the adaptive Gemini Vision token budget helper.
- Why it changed:
- The user-provided log `ai-media-hub-2026-03-17T07-55-17-127Z.log` still showed `gemini vision partially failed on 4 of 6 batches`.
- The same log also showed `sequentialRetried: 0`, which means the fallback single-candidate reevaluation path was still not recovering those truncated JSON batches well enough.
- How it was verified:
- `pwsh -NoProfile -File scripts/selftest.ps1`
- Added Go tests for adaptive Gemini Vision token sizing.
- What is still risky or incomplete:
- This reduces partial-failure pressure further, but extremely short or malformed Gemini outputs can still fail before one complete recommendation object is emitted.
- Smaller recovery responses improve reliability, but repeated sequential recovery can still add latency on difficult searches.
- Date: `2026-03-17`
- What changed:
- Reduced Gemini Vision batch size from `6` to `4` so each model response carries fewer recommendation objects and is less likely to be truncated mid-JSON.