Add view counter configuration knobs

thePR0M3TH3AN
2025-09-30 22:10:05 -04:00
parent b7f24c1070
commit 65198c3abe
3 changed files with 46 additions and 0 deletions


@@ -113,3 +113,34 @@ export const WATCH_HISTORY_FETCH_EVENT_LIMIT = 12;
* retention guarantees.
*/
export const WATCH_HISTORY_CACHE_TTL_MS = 24 * 60 * 60 * 1000;
/**
* How long the view counter should treat repeat plays as duplicates.
*
* BitVid de-duplicates view events that occur within this rolling window so
* quick refreshes or stalled replays do not inflate the totals. The default of
* 24 hours mirrors common analytics tooling, but you can tighten or relax the
* window depending on how aggressively you want to filter repeat traffic.
*/
export const VIEW_COUNT_DEDUPE_WINDOW_SECONDS = 86_400;
/**
* How far back the view counter should hydrate historical events during
* backfills.
*
* When a new analytics worker starts up, it can walk older Nostr events to
* reconstruct totals. Limiting the backfill horizon keeps catch-up jobs
* bounded—90 days covers recent trends without hammering relays for year-old
* history. Increase the number if you need deeper analytics or shrink it for
* lighter start-up workloads.
*/
export const VIEW_COUNT_BACKFILL_MAX_DAYS = 90;
/**
* How long clients can trust cached view totals before re-fetching.
*
* Cached results smooth out traffic spikes and reduce relay load. Five minutes
* strikes a balance between responsiveness and efficiency; lower the TTL for
* fresher dashboards or raise it if your analytics traffic is heavy.
*/
export const VIEW_COUNT_CACHE_TTL_MS = 5 * 60 * 1000;
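The dedupe window above amounts to a rolling-window check per viewer. A minimal sketch (the helper and map names here are illustrative, not BitVid's actual implementation):

```javascript
// Rolling-window de-duplication sketch using the configured window.
const VIEW_COUNT_DEDUPE_WINDOW_SECONDS = 86_400;

// lastSeen maps a viewer key to the Unix timestamp (seconds) of their
// last counted play. Returns true only when the play should count.
function shouldCountView(lastSeen, viewerKey, nowSeconds) {
  const previous = lastSeen.get(viewerKey);
  if (
    previous !== undefined &&
    nowSeconds - previous < VIEW_COUNT_DEDUPE_WINDOW_SECONDS
  ) {
    return false; // repeat play inside the window: treat as duplicate
  }
  lastSeen.set(viewerKey, nowSeconds);
  return true;
}

const lastSeen = new Map();
console.log(shouldCountView(lastSeen, "npub1example", 1_000_000)); // true: first play counts
console.log(shouldCountView(lastSeen, "npub1example", 1_000_100)); // false: inside 24h window
console.log(shouldCountView(lastSeen, "npub1example", 1_100_000)); // true: window has elapsed
```

Tightening the constant simply shrinks the interval in which the second call returns `false`.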

docs/nostr-analytics.md Normal file

@@ -0,0 +1,9 @@
# Nostr Analytics Knobs

BitVid's view counter emits Nostr events so operators can track engagement without duplicating storage. Tune the following exports in `config/instance-config.js` to match your retention and performance goals:

- `VIEW_COUNT_DEDUPE_WINDOW_SECONDS` (default: `86_400`): repeat plays from the same viewer inside this window are treated as duplicates so stalled reloads do not inflate totals. Shorten the window to count repeat plays sooner, or extend it if you want more conservative numbers.
- `VIEW_COUNT_BACKFILL_MAX_DAYS` (default: `90`): controls how far back hydrators should walk history when a new analytics worker boots. Longer windows deliver deeper trend lines at the cost of heavier relay scans.
- `VIEW_COUNT_CACHE_TTL_MS` (default: `5 * 60 * 1000`): defines how long cached aggregates remain trustworthy before clients refresh them. Lower values surface spikes faster, while higher ones smooth traffic for relay-friendly dashboards.

Most operators can ship with the defaults—24-hour deduplication, a 90-day backfill horizon, and five-minute cache TTLs match what we run in production. Deviate only if your relays have unusual load constraints or your reporting needs stricter fidelity.
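A minimal sketch of how the other two knobs might be consumed (the helper names are hypothetical, not part of BitVid's API): the backfill horizon translates into a `since` cutoff for a Nostr subscription filter, and the cache TTL gates when clients re-fetch aggregates.

```javascript
// Hypothetical helpers showing how the backfill and cache knobs might be applied.
const VIEW_COUNT_BACKFILL_MAX_DAYS = 90;
const VIEW_COUNT_CACHE_TTL_MS = 5 * 60 * 1000;

// Oldest Unix timestamp (seconds) a backfill job should request from relays,
// e.g. as the `since` field of a Nostr filter.
function backfillSince(nowSeconds) {
  return nowSeconds - VIEW_COUNT_BACKFILL_MAX_DAYS * 24 * 60 * 60;
}

// A cached aggregate is trustworthy until its age reaches the TTL.
function isCacheFresh(cachedAtMs, nowMs) {
  return nowMs - cachedAtMs < VIEW_COUNT_CACHE_TTL_MS;
}

const now = Math.floor(Date.now() / 1000);
console.log(backfillSince(now) === now - 7_776_000); // true: 90 days is 7,776,000 seconds
console.log(isCacheFresh(Date.now() - 2 * 60 * 1000, Date.now())); // true: a two-minute-old cache is fresh
```

Raising `VIEW_COUNT_BACKFILL_MAX_DAYS` widens the relay scan linearly, so keep the horizon as short as your reporting allows.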


@@ -11,6 +11,9 @@ import {
WATCH_HISTORY_PAYLOAD_MAX_BYTES,
WATCH_HISTORY_FETCH_EVENT_LIMIT,
WATCH_HISTORY_CACHE_TTL_MS,
VIEW_COUNT_DEDUPE_WINDOW_SECONDS,
VIEW_COUNT_BACKFILL_MAX_DAYS,
VIEW_COUNT_CACHE_TTL_MS,
} from "../config/instance-config.js";
export const isDevMode = true; // Set to false for production
@@ -61,3 +64,6 @@ export { WATCH_HISTORY_BATCH_RESOLVE };
export { WATCH_HISTORY_PAYLOAD_MAX_BYTES };
export { WATCH_HISTORY_FETCH_EVENT_LIMIT };
export { WATCH_HISTORY_CACHE_TTL_MS };
export { VIEW_COUNT_DEDUPE_WINDOW_SECONDS };
export { VIEW_COUNT_BACKFILL_MAX_DAYS };
export { VIEW_COUNT_CACHE_TTL_MS };