@@ -34,6 +34,7 @@
- Card summaries are now also translated to Korean lazily, and a negative Gemini assessment now drives stronger follow-up search behavior than before.
- Search preview delivery is moving away from persistent on-disk preview caching toward live proxying / live transcoding, with Google Video preview reuse added to result cards and modal playback.
- Search breadth and the Gemini review budget were widened again because the latest user feedback still reported a thin visible result count even after smarter filtering.
- The codebase is back on the broader-search / modal-fitting direction associated with `5ca7aef`, with an added Gemini deadline guard to reduce reverse-proxy `504` risk.

## Current Architecture

- `backend/main.go`

@@ -233,6 +234,7 @@

- The result modal should now stay within viewport height, but this still needs real browser confirmation on multiple short-height displays because CSS-only constraints were the source of the latest user-visible regression.
- Artgrid preview playback now has a server-side ffmpeg transcode path for `.m3u8`-style preview URLs, but this trades storage savings for runtime CPU cost.
- The provided Artgrid HTML sample still does not expose a direct preview `m3u8` or `mp4` URL by itself, and a `yt-dlp` probe of the sample clip URL returned `Unsupported URL`, so fully reliable Artgrid playback still depends on live-page/runtime preview discovery succeeding elsewhere in the pipeline.
- The Docker CLI is not installed in this environment, so container-image builds still cannot be verified locally from this machine.
- The local self-test script is better than before, but it is still a smoke test, not full integration coverage.

## Current Risks Around Search Quality

@@ -554,6 +556,7 @@

## Highest-Value Next Steps

- [ ] Reduce `/api/search` latency further without collapsing result count
- [ ] Rebuild the reverted search-expansion work from the previous stable baseline, but only after measuring where candidate quality collapses between the ranked pool and the final merge
- [ ] Validate the reopened `5ca7aef` search-breadth direction against real proxy timeouts and visible result counts before widening it any further
- [ ] Build a repeatable repo-local bootstrap script or documented setup command set for non-root machines, so fresh PC setup does not depend on shell history
- [ ] Improve Envato / Artgrid preview acquisition reliability so Gemini Vision sees real frames more often
- [ ] Browser-verify the new result modal at multiple viewport heights, and confirm translated Source Summary readability on real long descriptions

@@ -625,6 +628,23 @@

- If behavior in the browser does not match the latest backend/frontend code, the first assumption should be stale frontend assets until proven otherwise

## Recent Change Log is below.
## Recent Change Log

- Date: `2026-03-17`
  - What changed:
    - Reapplied the broader-search / modal-fitting codepath from `5ca7aef`, as requested by the user.
    - Added a targeted 504 mitigation: Gemini batch recovery now stops at the request deadline instead of continuing sequential single-candidate retries indefinitely.
    - Kept an explicit Gemini vision JSON output token cap to reduce the chance of truncated model responses during larger structured batches.
  - Why it changed:
    - The user explicitly asked to return to the `5ca7aef` direction and then fix the live `504 Gateway Time-out`.
    - The provided log `ai-media-hub-2026-03-17T04-59-24-566Z.log` showed backend search collection finishing in about `22s`, then a Gemini batch failure plus sequential recovery continuing until the reverse proxy returned `504`.
    - The direct cause was deadline-unaware Gemini recovery work, not the initial search collection itself.
  - How it was verified:
    - `go test ./...`
    - `bash scripts/selftest.sh`
    - Attempted `docker build -t ai-media-hub:test .`, but `docker` is not installed in this environment.
  - What is still risky or incomplete:
    - This closes one concrete `504` path, but it does not guarantee that all reverse-proxy timeout cases are eliminated under worse upstream conditions.
    - Container-image build verification still needs to happen on a machine that actually has Docker available.

- Date: `2026-03-17`
  - What changed:
    - Raised search breadth again by widening per-source collector caps, increasing the number of base queries considered, raising per-collector query budgets, and expanding the Gemini candidate review budget.

@@ -306,6 +306,7 @@ User query: ` + query,
 		},
 		"generationConfig": map[string]any{
 			"responseMimeType": "application/json",
+			"maxOutputTokens":  1400,
 		},
 	}

@@ -105,7 +105,7 @@ func EvaluateAllCandidatesWithGemini(service *GeminiService, query string, ranke
 }
 
 func EvaluateAllCandidatesWithGeminiWithDeadline(service *GeminiService, query string, ranked []SearchResult, deadline time.Time) ([]AIRecommendation, GeminiBatchStats, error) {
-	const chunkSize = 8
+	const chunkSize = 6
 	const maxConcurrentBatches = 2
 	if service == nil {
 		return nil, GeminiBatchStats{}, fmt.Errorf("gemini service is not configured")

@@ -186,7 +186,7 @@ func EvaluateAllCandidatesWithGeminiWithDeadline(service *GeminiService, query s
 				"error": batch.err.Error(),
 			})
 		}
-		recovered, recoveredErrs := recoverGeminiBatchSequentially(service, query, ranked, batch.index*chunkSize)
+		recovered, recoveredErrs := recoverGeminiBatchSequentially(service, query, ranked, batch.index*chunkSize, chunkSize, deadline)
 		if len(recovered) > 0 {
 			stats.SequentialRetried++
 			stats.Succeeded++

@@ -345,11 +345,17 @@ func MergeGeminiBatchStats(base, extra GeminiBatchStats) GeminiBatchStats {
 	return merged
 }
 
-func recoverGeminiBatchSequentially(service *GeminiService, query string, ranked []SearchResult, startIndex int) ([]AIRecommendation, []string) {
-	recovered := make([]AIRecommendation, 0, 8)
+func recoverGeminiBatchSequentially(service *GeminiService, query string, ranked []SearchResult, startIndex, chunkSize int, deadline time.Time) ([]AIRecommendation, []string) {
+	recovered := make([]AIRecommendation, 0, chunkSize)
 	errs := make([]string, 0, 4)
-	endIndex := min(startIndex+8, len(ranked))
+	endIndex := min(startIndex+chunkSize, len(ranked))
 	for idx := startIndex; idx < endIndex; idx++ {
+		if !deadline.IsZero() && time.Now().After(deadline) {
+			if len(errs) < 4 {
+				errs = append(errs, "sequential gemini recovery stopped at deadline")
+			}
+			break
+		}
 		recs, err := service.Recommend(query, []SearchResult{ranked[idx]})
 		if err != nil {
 			if len(errs) < 4 {
