Mirror of https://github.com/bnair123/MusicAnalyser.git (synced 2026-02-25 11:46:07 +00:00)
PROJECT KNOWLEDGE BASE
Generated: 2025-12-30 Branch: main
OVERVIEW
A personal music analytics dashboard that polls Spotify 24/7. Core stack: Python (FastAPI, SQLAlchemy, SQLite) + React (Vite, Tailwind, AntD). Integrates AI (Gemini) to generate listening narratives.
STRUCTURE
.
├── backend/ # FastAPI API & Spotify polling worker
│ ├── app/ # Core logic (services, models, schemas)
│ ├── alembic/ # DB migrations
│ └── tests/ # Pytest suite
├── frontend/ # React application
│ └── src/ # Components & application logic
├── docs/ # Technical & architecture documentation
└── docker-compose.yml # Production orchestration
WHERE TO LOOK
| Task | Location | Notes |
|---|---|---|
| Modify API endpoints | backend/app/main.py | FastAPI routes |
| Update DB models | backend/app/models.py | SQLAlchemy ORM |
| Change polling logic | backend/app/ingest.py | Worker & ingestion logic |
| Add analysis features | backend/app/services/stats_service.py | Core metric computation |
| Update UI components | frontend/src/components/ | React/AntD components |
| Adjust AI prompts | backend/app/services/narrative_service.py | LLM integration |
CODE MAP (KEY SYMBOLS)
| Symbol | Type | Location | Role |
|---|---|---|---|
| SpotifyClient | Class | backend/app/services/spotify_client.py | API wrapper & token management |
| StatsService | Class | backend/app/services/stats_service.py | Metric computation & report generation |
| NarrativeService | Class | backend/app/services/narrative_service.py | LLM (Gemini/OpenAI) integration |
| ingest_recently_played | Function | backend/app/ingest.py | Primary data ingestion entry |
| Track | Model | backend/app/models.py | Central track entity with metadata |
| PlayHistory | Model | backend/app/models.py | Immutable log of listening events |
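The two model rows above can be sketched as plain dataclasses to show their relationship. This is an illustrative shape only: the real columns are SQLAlchemy models in backend/app/models.py, and the field names here are assumptions for the sketch, not the project's actual schema.

```python
from dataclasses import dataclass

# Illustrative shapes of the two central entities. Field names are assumed;
# the authoritative definitions are the SQLAlchemy models in models.py.
@dataclass(frozen=True)
class Track:
    id: str        # Spotify track ID
    name: str
    artist: str    # simplified; the real model carries richer metadata

@dataclass(frozen=True)
class PlayHistory:
    track_id: str  # references Track.id
    played_at: str # ISO-8601 timestamp; (track_id, played_at) is unique
```

Track holds mutable metadata about a song, while PlayHistory is an append-only event log, which is why the two are kept separate.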
Module Dependencies
[run_worker.py] ───> [ingest.py] ───> [spotify_client.py]
                          └─────────> [reccobeats_client.py]
[main.py] ─────────> [services/] ───> [models.py]
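The worker flow on the left of the diagram can be sketched in miniature, with the Spotify client stubbed out as an injected callable. Function names and signatures here are assumptions for illustration; the real entry point is backend/run_worker.py calling ingest_recently_played in backend/app/ingest.py.

```python
import time

# Hypothetical sketch of the run_worker.py -> ingest.py flow. The client is
# injected as a callable so the loop stays decoupled from spotify_client.py.
def ingest_once(fetch_recent):
    """One ingestion pass: pull recent plays from the injected client."""
    events = fetch_recent()
    # The real code would dedup on (track_id, played_at) and insert into the DB.
    return list(events)

def run_worker(fetch_recent, cycles, interval=0.0):
    """Poll on a fixed interval, collecting everything ingested."""
    ingested = []
    for _ in range(cycles):
        ingested.extend(ingest_once(fetch_recent))
        time.sleep(interval)
    return ingested
```

Injecting the fetch callable also makes the loop trivially testable with a fake client, which matches the project's pytest suite layout.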
CONVENTIONS
- Single Container Multi-Process: `backend/entrypoint.sh` starts worker + API (Docker anti-pattern, project-specific).
- SQLite Persistence: Production uses SQLite (`music.db`) via Docker volumes.
- Deduplication: Ingestion checks the `(track_id, played_at)` unique constraint before insert.
- Frontend State: Minimal global state; primarily local component state and API fetching.
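The deduplication convention can be sketched with stdlib sqlite3. Table and column names here are illustrative; the real schema lives in backend/app/models.py and is managed by Alembic.

```python
import sqlite3

# Minimal sketch of the ingestion dedup convention: a UNIQUE constraint on
# (track_id, played_at) plus INSERT OR IGNORE makes re-polling idempotent.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE play_history (
        track_id  TEXT NOT NULL,
        played_at TEXT NOT NULL,
        UNIQUE (track_id, played_at)
    )
""")

def record_play(conn, track_id, played_at):
    """Insert a listening event, skipping exact (track_id, played_at) repeats."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO play_history (track_id, played_at) VALUES (?, ?)",
        (track_id, played_at),
    )
    return cur.rowcount == 1  # True only when a new row was actually inserted
```

Because Spotify's recently-played endpoint returns overlapping windows between polls, this constraint is what keeps PlayHistory an immutable, duplicate-free log.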
ANTI-PATTERNS (THIS PROJECT)
- Manual DB Edits: Always use Alembic migrations for schema changes.
- Sync in Async: Avoid blocking I/O in FastAPI routes (GeniusClient is currently synchronous).
- Hardcoded IDs: Avoid hardcoding Spotify/playlist IDs; use `.env` configuration.
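The `.env` convention can be sketched with a small required-setting helper. The variable name used below is illustrative, not taken from this repo; the point is only that IDs come from the environment, never from source.

```python
import os

# Hedged sketch of the .env convention. SPOTIFY_PLAYLIST_ID is a made-up
# example name; the project's actual setting names may differ.
def get_required_env(name: str) -> str:
    """Fetch a required setting, failing loudly if .env did not provide it."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value
```

Failing at startup with a clear message beats a hardcoded ID silently pointing prod at the wrong playlist.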
COMMANDS
```shell
# Backend
cd backend && uvicorn app.main:app --reload
python backend/run_worker.py

# Frontend
cd frontend && npm run dev

# Tests
cd backend && pytest tests/
```
NOTES
- Multi-arch Docker builds (`amd64`, `arm64`) automated via GHA.
- `ReccoBeats` service used for supplemental audio features (energy, valence).
- Genius API used as fallback for lyrics and artist images.