8 Commits

Author SHA1 Message Date
google-labs-jules[bot]
f4432154b6 Implement Phase 3 Music Analysis and LLM Engine
- Refactor Database: Add `Artist` model, M2M relationship, and `AnalysisSnapshot` model.
- Backend Services: Implement `StatsService` for computable metrics and `NarrativeService` for Gemini LLM integration.
- Fix Ingestion: Correctly handle multiple artists per track and backfill existing data.
- Testing: Add unit tests for statistics logic and live verification scripts.
- Documentation: Add `PHASE_4_FRONTEND_GUIDE.md`.
2025-12-24 23:16:32 +00:00
bnair123
ab47dd62ca Merge pull request #3 from bnair123/frontend-phase2-ant-design-2702419047852121330
Implement Phase 2 Frontend (Ant Design) & Fix Data Enrichment
2025-12-25 02:52:58 +04:00
google-labs-jules[bot]
6e80e97960 Implement Phase 2 Frontend with Ant Design and verify Data Ingestion
- Created `frontend/` React+Vite app using Ant Design (Dark Theme).
- Implemented `App.jsx` to display listening history and calculated "Vibes".
- Updated `backend/app/ingest.py` to fix ReccoBeats ID parsing.
- Updated `backend/app/schemas.py` to expose audio features to the API.
- Updated `README.md` with detailed Docker hosting instructions.
- Added `TODO.md` for Phase 3 roadmap.
- Cleaned up test scripts.
2025-12-24 22:51:53 +00:00
bnair123
f034b3eb43 Merge pull request #2 from bnair123/phase2-frontend-enrichment-14969504762303104643
Phase 2: Frontend & Enrichment Implementation
2025-12-25 01:51:12 +04:00
google-labs-jules[bot]
0ca9893c68 Implement Phase 2 Frontend and Phase 3 Data Enrichment
- Initialize React+Vite Frontend with Ant Design Dashboard.
- Implement Data Enrichment: ReccoBeats (Audio Features) and Spotify (Genres).
- Update Database Schema via Alembic Migrations.
- Add Docker support (Dockerfile, docker-compose.yml).
- Update README with hosting instructions.
2025-12-24 21:34:36 +00:00
bnair123
3a424d15a5 Add project context and documentation for Music Analyser
This document outlines the vision, technical decisions, current architecture, and future roadmap for the Music Analyser project. It serves as a guide for future AI agents or developers.
2025-12-24 22:03:18 +04:00
bnair123
4ca4c7befd Enhance Docker publish workflow with metadata and caching
Added environment variables for registry and image name. Updated Docker build and push steps to include metadata extraction and caching.
2025-12-24 21:54:04 +04:00
bnair123
b502e95652 Merge pull request #1 from bnair123/setup-initial-backend-8149240771439055261
Initial Backend Setup
2025-12-24 21:30:32 +04:00
41 changed files with 6201 additions and 59 deletions

PHASE_4_FRONTEND_GUIDE.md (new file, 84 lines)

@@ -0,0 +1,84 @@
# Phase 4 Frontend Implementation Guide
This guide details how to consume the data generated by the Phase 3 Backend (Analysis & LLM Engine) and how to display it in the frontend.
## 1. Data Source
The backend now produces **Analysis Snapshots**. You should create an API endpoint (e.g., `GET /api/analysis/latest`) that returns the most recent snapshot.
### JSON Payload Structure
The response object contains two main keys: `metrics_payload` (calculated numbers) and `narrative_report` (LLM text).
```json
{
"id": 1,
"date": "2024-12-25T12:00:00Z",
"period_label": "last_30_days",
"metrics_payload": {
"volume": { ... },
"time_habits": { ... },
"sessions": { ... },
"vibe": { ... },
"era": { ... },
"skips": { ... }
},
"narrative_report": {
"vibe_check": "...",
"patterns": ["..."],
"persona": "...",
"roast": "..."
}
}
```
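
A minimal FastAPI sketch of such an endpoint, reusing the backend's `SessionLocal` and `AnalysisSnapshot`; the route path and router wiring are suggestions, not existing code:
```python
# Sketch only: serves the most recent AnalysisSnapshot as described above.
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session

from app.database import SessionLocal
from app.models import AnalysisSnapshot

router = APIRouter()

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@router.get("/api/analysis/latest")
def get_latest_analysis(db: Session = Depends(get_db)):
    snapshot = (
        db.query(AnalysisSnapshot)
        .order_by(AnalysisSnapshot.date.desc())
        .first()
    )
    if snapshot is None:
        raise HTTPException(status_code=404, detail="No analysis snapshot yet")
    # JSON columns come back as plain dicts/lists, so this serializes directly.
    return {
        "id": snapshot.id,
        "date": snapshot.date,
        "period_label": snapshot.period_label,
        "metrics_payload": snapshot.metrics_payload,
        "narrative_report": snapshot.narrative_report,
    }
```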
---
## 2. UI Components & Display Strategy
### A. Hero Section ("The Vibe Check")
**Data Source:** `narrative_report`
- **Headline:** Display `narrative_report.persona` as a large badge/title (e.g., "The Focused Fanatic").
- **Narrative:** Display `narrative_report.vibe_check` as the main text.
- **Roast:** Add a small, dismissible "Roast Me" alert box containing `narrative_report.roast`.
### B. "The Vibe" Radar Chart
**Data Source:** `metrics_payload.vibe`
- Use a **Radar Chart** (Spider Chart) with the following axes (0.0 - 1.0):
- Energy (`avg_energy`)
- Valence (`avg_valence`)
- Danceability (`avg_danceability`)
- Acousticness (`avg_acousticness`)
- Instrumentalness (`avg_instrumentalness`)
- **Tooltip:** Show the exact value.
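
To sanity-check these values outside the frontend, a quick matplotlib radar sketch is enough (the numbers below are made up; in the app they come from `metrics_payload.vibe`):
```python
# Sketch: plot the five vibe axes on a 0-1 radar/spider chart.
import numpy as np
import matplotlib.pyplot as plt

vibe = {"avg_energy": 0.72, "avg_valence": 0.55, "avg_danceability": 0.63,
        "avg_acousticness": 0.21, "avg_instrumentalness": 0.08}  # sample data

labels = ["Energy", "Valence", "Danceability", "Acousticness", "Instrumentalness"]
keys = ["avg_energy", "avg_valence", "avg_danceability",
        "avg_acousticness", "avg_instrumentalness"]
values = [vibe[k] for k in keys]

# Repeat the first point so the polygon closes.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values += values[:1]
angles += angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 1)
plt.show()
```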
### C. Listening Habits (Time & Sessions)
**Data Source:** `metrics_payload.time_habits` & `metrics_payload.sessions`
- **Hourly Distribution:** Use a bar chart for `metrics_payload.time_habits.hourly_distribution` (hours 0-23), highlighting the `peak_hour`.
- **Session Stats:** Display "Average Session" stats:
- `sessions.avg_minutes` (mins)
- `sessions.avg_tracks` (tracks)
- `sessions.count` (total sessions)
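
For reference, the highlighted `peak_hour` is just the argmax of the 24-bucket array (sketch with made-up data):
```python
# Sketch: hourly_distribution is 24 integers, index = hour of day (0-23).
hourly = [0, 0, 0, 0, 0, 0, 0, 3, 5, 2, 1, 4, 6, 2, 3, 5, 9, 12, 8, 4, 2, 1, 0, 0]
peak_hour = max(range(24), key=lambda h: hourly[h])
print(f"Peak hour: {peak_hour}:00 ({hourly[peak_hour]} plays)")  # Peak hour: 17:00 (12 plays)
```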
### D. Top Favorites
**Data Source:** `metrics_payload.volume`
- **Lists:** Display Top 5 Tracks, Artists, and Genres.
- **Images:** Fetch Artist/Track images from the Spotify API using their IDs. The Phase 3 snapshot stores only names and counts for simplicity, so for Phase 4 expand the serializer to include the IDs (they are available in the backend) and have the API endpoint enrich these entries with Spotify image URLs.
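
A rough sketch of that enrichment, assuming the serializer is expanded so each `top_artists` entry carries its Spotify `id` (hypothetical helper; `SpotifyClient.get_artists` already exists in the backend, and Spotify artist objects include an `images` list):
```python
# Sketch: attach image URLs to the snapshot's top_artists entries.
async def attach_artist_images(spotify_client, top_artists: list[dict]) -> list[dict]:
    ids = [entry["id"] for entry in top_artists if entry.get("id")]
    artists = await spotify_client.get_artists(ids)  # up to 50 IDs per call
    # Spotify returns images sorted largest-first; take the first URL.
    image_map = {a["id"]: a["images"][0]["url"] for a in artists if a.get("images")}
    for entry in top_artists:
        entry["image_url"] = image_map.get(entry.get("id"))
    return top_artists
```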
### E. Era Analysis
**Data Source:** `metrics_payload.era`
- **Musical Age:** Display `musical_age` (e.g., "1998") prominently.
- **Distribution:** Pie chart for `decade_distribution`.
### F. Attention Span (Skips)
**Data Source:** `metrics_payload.skips`
- **Metric:** Display "Skip Rate" (`skip_rate`) as a percentage.
- **Insight:** "You skipped X tracks this month."
---
## 3. Integration Tips
- **Caching:** The backend stores snapshots. You do NOT need to trigger a calculation on page load. Just fetch the latest snapshot.
- **Theme:** The app uses Ant Design Dark Mode. Stick to Spotify colors (Black/Green/White) but add accent colors based on the "Vibe" (e.g., High Energy = Red/Orange, Low Energy = Blue/Purple).
- **Expansion:** Future snapshots allow for "Trend" views. You can graph `metrics_payload.volume.total_plays` over the last 6 snapshots to show activity trends.
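
For example, a plays-over-time series can be read straight off the snapshot table (sketch, using the backend's `SessionLocal` and `AnalysisSnapshot`):
```python
# Sketch: build a trend series from the last 6 snapshots.
from app.database import SessionLocal
from app.models import AnalysisSnapshot

db = SessionLocal()
recent = (
    db.query(AnalysisSnapshot)
    .order_by(AnalysisSnapshot.date.desc())
    .limit(6)
    .all()
)
# Reverse so the series reads oldest-to-newest on a chart.
trend = [
    {"date": s.date.isoformat(),
     "total_plays": s.metrics_payload["volume"]["total_plays"]}
    for s in reversed(recent)
]
print(trend)
db.close()
```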

README.md (modified)

@@ -1,27 +1,27 @@
 # Music Analyser
-A personal analytics dashboard for your music listening habits, powered by Python, FastAPI, and Google Gemini AI.
+A personal analytics dashboard for your music listening habits, powered by Python, FastAPI, React, and Google Gemini AI.
-## Project Structure
+## Features
-- `backend/`: FastAPI backend for data ingestion and API.
-  - `app/ingest.py`: Background worker that polls Spotify.
-  - `app/services/`: Logic for Spotify and Gemini APIs.
-  - `app/models.py`: Database schema (Tracks, PlayHistory).
-- `frontend/`: (Coming Soon) React/Vite frontend.
+- **Continuous Ingestion**: Polls Spotify every 60 seconds to record your listening history.
+- **Data Enrichment**: Automatically fetches **Genres** (via Spotify) and **Audio Features** (Energy, BPM, Mood via ReccoBeats).
+- **Dashboard**: A responsive UI (Ant Design) to view your history, stats, and "Vibes".
+- **AI Ready**: Database schema and environment prepared for Gemini AI integration.
-## Getting Started
+## Hosting Guide
-### Prerequisites
+You can run this application using Docker Compose. You have two options: using the pre-built image from GitHub Container Registry or building from source.
-- Docker & Docker Compose (optional, for containerization)
-- Python 3.11+ (for local dev)
-- A Spotify Developer App (Client ID & Secret)
-- A Google Gemini API Key
+### 1. Prerequisites
+- Docker & Docker Compose installed.
+- **Spotify Developer Credentials** (Client ID & Secret).
+- **Spotify Refresh Token** (Run `backend/scripts/get_refresh_token.py` locally to generate this).
+- **Google Gemini API Key**.
-### 1. Setup Environment Variables
+### 2. Configuration (`.env`)
-Create a `.env` file in the `backend/` directory:
+Create a `.env` file in the root directory (same level as `docker-compose.yml`). This file is used by Docker Compose to populate environment variables.
 ```bash
 SPOTIFY_CLIENT_ID="your_client_id"
@@ -30,43 +30,57 @@ SPOTIFY_REFRESH_TOKEN="your_refresh_token"
 GEMINI_API_KEY="your_gemini_key"
 ```
-To get the `SPOTIFY_REFRESH_TOKEN`, run the helper script:
+### 3. Run with Docker Compose
-```bash
-python backend/scripts/get_refresh_token.py
-```
+#### Option A: Build from Source (Recommended for Dev/Modifications)
-### 2. Run Locally
+Use this if you want to modify the code or ensure you are running the exact local version.
-Install dependencies:
+1. Clone the repository.
+2. Ensure your `.env` file is set up.
+3. Run:
+```bash
+docker-compose up -d --build
+```
-```bash
-cd backend
-pip install -r requirements.txt
-```
+#### Option B: Use Pre-built Image
-Run the server:
+Use this if you just want to run the app without building locally.
-```bash
-uvicorn app.main:app --reload
-```
+1. Open `docker-compose.yml`.
+2. Ensure the `backend` service uses the image: `ghcr.io/bnair123/musicanalyser:latest`.
+   * *Note: If you want to force usage of the image and ignore local build context, you can comment out `build: context: ./backend` in the yaml, though Compose usually prefers build context if present.*
+3. Ensure your `.env` file is set up.
+4. Run:
+```bash
+docker pull ghcr.io/bnair123/musicanalyser:latest
+docker-compose up -d
+```
-The API will be available at `http://localhost:8000`.
+### 4. Access the Dashboard
-### 3. Run Ingestion (Manually)
+Open your browser to:
+`http://localhost:8991`
-You can trigger the ingestion process via the API:
+### 5. Data Persistence
-```bash
-curl -X POST http://localhost:8000/trigger-ingest
-```
+- **Database**: Stored in a named volume or host path mapped to `/app/music.db`.
+- **Migrations**: The backend uses Alembic. Schema changes are applied automatically on startup.
-Or run the ingestion logic directly via python shell (see `app/ingest.py`).
+## Local Development (Non-Docker)
-### 4. Docker Build
+1. **Backend**:
+```bash
+cd backend
+pip install -r requirements.txt
+python run_worker.py # Starts ingestion
+uvicorn app.main:app --reload # Starts API
+```
-To build the image locally:
-```bash
-docker build -t music-analyser-backend ./backend
-```
+2. **Frontend**:
+```bash
+cd frontend
+npm install
+npm run dev
+```
+Access at `http://localhost:5173`.
TODO.md (new file, 37 lines)

@@ -0,0 +1,37 @@
# Future Roadmap & TODOs
## Phase 3: AI Analysis & Insights
### 1. Data Analysis Enhancements
- [ ] **Timeframe Selection**:
- [ ] Update Backend API to accept timeframe parameters (e.g., `?range=30d`, `?range=year`, `?range=all`; see the sketch after this list).
- [ ] Update Frontend to include a dropdown/toggle for these timeframes.
- [ ] **Advanced Stats**:
- [ ] Top Artists / Tracks calculation for the selected period.
- [ ] Genre distribution charts (Pie/Bar chart).
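
A possible shape for the timeframe parameter handling above (sketch only; the route path and range labels are placeholders):
```python
# Sketch: map a ?range= query parameter onto a (start, end) period.
from datetime import datetime, timedelta
from fastapi import APIRouter, HTTPException, Query

router = APIRouter()
RANGE_DAYS = {"30d": 30, "90d": 90, "year": 365}

@router.get("/api/stats")
def get_stats(range_: str = Query("30d", alias="range")):
    end = datetime.utcnow()
    if range_ == "all":
        start = datetime(1970, 1, 1)  # effectively "everything"
    elif range_ in RANGE_DAYS:
        start = end - timedelta(days=RANGE_DAYS[range_])
    else:
        raise HTTPException(status_code=400, detail=f"Unknown range: {range_}")
    # Hand (start, end) to StatsService.generate_full_report(...) here.
    return {"period_start": start.isoformat(), "period_end": end.isoformat()}
```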
### 2. AI Integration (Gemini)
- [ ] **Trigger Mechanism**:
- [ ] Add "Generate AI Report" button on the UI.
- [ ] (Optional) Schedule daily auto-generation.
- [ ] **Prompt Engineering**:
- [ ] Design prompts to analyze:
- "Past 30 Days" (Monthly Vibe Check).
- "Overall" (Yearly/All-time evolution).
- [ ] Provide raw data (list of tracks + audio features) to Gemini.
- [ ] **Storage**:
- [ ] Create `AnalysisReport` table to store generated HTML/Markdown reports.
- [ ] View past reports in a new "Insights" tab.
### 3. Playlist Generation
- [ ] **Concept**: "Daily Vibe Playlist" or "AI Recommended".
- [ ] **Implementation**:
- [ ] Use ReccoBeats or Spotify Recommendations API.
- [ ] Seed with top 5 recent tracks.
- [ ] Filter by audio features (e.g., "High Energy" playlist; see the sketch below).
- [ ] **Action**:
- [ ] Add "Save to Spotify" button in the UI (Requires `playlist-modify-public` scope).
### 4. Polish
- [ ] **Mobile Responsiveness**: Ensure Ant Design tables and charts stack correctly on mobile.
- [ ] **Error Handling**: Better UI feedback for API failures (e.g., expired tokens).

backend/alembic.ini (new file, 147 lines)

@@ -0,0 +1,147 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts.
# this is typically a path given in POSIX (e.g. forward slashes)
# format, relative to the token %(here)s which refers to the location of this
# ini file
script_location = %(here)s/alembic
# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory. for multiple paths, the path separator
# is defined by "path_separator" below.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the tzdata library which can be installed by adding
# `alembic[tz]` to the pip requirements.
# string value is passed to ZoneInfo()
# leave blank for localtime
# timezone =
# max length of characters to apply to the "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to <script_location>/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "path_separator"
# below.
# version_locations = %(here)s/bar:%(here)s/bat:%(here)s/alembic/versions
# path_separator; This indicates what character is used to split lists of file
# paths, including version_locations and prepend_sys_path within configparser
# files such as alembic.ini.
# The default rendered in new alembic.ini files is "os", which uses os.pathsep
# to provide os-dependent path splitting.
#
# Note that in order to support legacy alembic.ini files, this default does NOT
# take place if path_separator is not present in alembic.ini. If this
# option is omitted entirely, fallback logic is as follows:
#
# 1. Parsing of the version_locations option falls back to using the legacy
# "version_path_separator" key, which if absent then falls back to the legacy
# behavior of splitting on spaces and/or commas.
# 2. Parsing of the prepend_sys_path option falls back to the legacy
# behavior of splitting on spaces, commas, or colons.
#
# Valid values for path_separator are:
#
# path_separator = :
# path_separator = ;
# path_separator = space
# path_separator = newline
#
# Use os.pathsep. Default configuration used for new projects.
path_separator = os
# set to 'true' to search source files recursively
# in each "version_locations" directory
# new in Alembic version 1.10
# recursive_version_locations = false
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
# database URL. This is consumed by the user-maintained env.py script only.
# other means of configuring database URLs may be customized within the env.py
# file.
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# lint with attempts to fix using "ruff" - use the module runner, against the "ruff" module
# hooks = ruff
# ruff.type = module
# ruff.module = ruff
# ruff.options = check --fix REVISION_SCRIPT_FILENAME
# Alternatively, use the exec runner to execute a binary found on your PATH
# hooks = ruff
# ruff.type = exec
# ruff.executable = ruff
# ruff.options = check --fix REVISION_SCRIPT_FILENAME
# Logging configuration. This is also consumed by the user-maintained
# env.py script only.
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARNING
handlers = console
qualname =
[logger_sqlalchemy]
level = WARNING
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

backend/alembic/README (new file, 1 line)

@@ -0,0 +1 @@
Generic single-database configuration.

backend/alembic/env.py (new file, 87 lines)

@@ -0,0 +1,87 @@
from logging.config import fileConfig
import os
import sys
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
# Add app to path to import models
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from app.database import Base
from app.models import * # Import models to register them
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
target_metadata = Base.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
# Override sqlalchemy.url with our app's URL
config.set_main_option("sqlalchemy.url", "sqlite:///./music.db")
def run_migrations_offline() -> None:
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online() -> None:
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
connectable = engine_from_config(
config.get_section(config.config_ini_section, {}),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
)
with connectable.connect() as connection:
context.configure(
connection=connection, target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

backend/alembic/script.py.mako (new file, 28 lines)

@@ -0,0 +1,28 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision: str = ${repr(up_revision)}
down_revision: Union[str, Sequence[str], None] = ${repr(down_revision)}
branch_labels: Union[str, Sequence[str], None] = ${repr(branch_labels)}
depends_on: Union[str, Sequence[str], None] = ${repr(depends_on)}
def upgrade() -> None:
"""Upgrade schema."""
${upgrades if upgrades else "pass"}
def downgrade() -> None:
"""Downgrade schema."""
${downgrades if downgrades else "pass"}

backend/alembic/versions/4401cb416661_*.py (new file, 63 lines)

@@ -0,0 +1,63 @@
"""Add Artist and Snapshot models
Revision ID: 4401cb416661
Revises: 707387fe1be2
Create Date: 2025-12-24 23:06:59.235445
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = '4401cb416661'
down_revision: Union[str, Sequence[str], None] = '707387fe1be2'
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
"""Upgrade schema."""
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('analysis_snapshots',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('date', sa.DateTime(), nullable=True),
sa.Column('period_start', sa.DateTime(), nullable=True),
sa.Column('period_end', sa.DateTime(), nullable=True),
sa.Column('period_label', sa.String(), nullable=True),
sa.Column('metrics_payload', sa.JSON(), nullable=True),
sa.Column('narrative_report', sa.JSON(), nullable=True),
sa.Column('model_used', sa.String(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_analysis_snapshots_date'), 'analysis_snapshots', ['date'], unique=False)
op.create_index(op.f('ix_analysis_snapshots_id'), 'analysis_snapshots', ['id'], unique=False)
op.create_table('artists',
sa.Column('id', sa.String(), nullable=False),
sa.Column('name', sa.String(), nullable=True),
sa.Column('genres', sa.JSON(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_artists_id'), 'artists', ['id'], unique=False)
op.create_table('track_artists',
sa.Column('track_id', sa.String(), nullable=False),
sa.Column('artist_id', sa.String(), nullable=False),
sa.ForeignKeyConstraint(['artist_id'], ['artists.id'], ),
sa.ForeignKeyConstraint(['track_id'], ['tracks.id'], ),
sa.PrimaryKeyConstraint('track_id', 'artist_id')
)
# ### end Alembic commands ###
def downgrade() -> None:
"""Downgrade schema."""
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('track_artists')
op.drop_index(op.f('ix_artists_id'), table_name='artists')
op.drop_table('artists')
op.drop_index(op.f('ix_analysis_snapshots_id'), table_name='analysis_snapshots')
op.drop_index(op.f('ix_analysis_snapshots_date'), table_name='analysis_snapshots')
op.drop_table('analysis_snapshots')
# ### end Alembic commands ###

backend/alembic/versions/707387fe1be2_*.py (new file, 73 lines)

@@ -0,0 +1,73 @@
"""Initial Schema Complete
Revision ID: 707387fe1be2
Revises:
Create Date: 2025-12-24 21:23:43.744292
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = '707387fe1be2'
down_revision: Union[str, Sequence[str], None] = None
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
"""Upgrade schema."""
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('tracks',
sa.Column('id', sa.String(), nullable=False),
sa.Column('name', sa.String(), nullable=True),
sa.Column('artist', sa.String(), nullable=True),
sa.Column('album', sa.String(), nullable=True),
sa.Column('duration_ms', sa.Integer(), nullable=True),
sa.Column('popularity', sa.Integer(), nullable=True),
sa.Column('raw_data', sa.JSON(), nullable=True),
sa.Column('danceability', sa.Float(), nullable=True),
sa.Column('energy', sa.Float(), nullable=True),
sa.Column('key', sa.Integer(), nullable=True),
sa.Column('loudness', sa.Float(), nullable=True),
sa.Column('mode', sa.Integer(), nullable=True),
sa.Column('speechiness', sa.Float(), nullable=True),
sa.Column('acousticness', sa.Float(), nullable=True),
sa.Column('instrumentalness', sa.Float(), nullable=True),
sa.Column('liveness', sa.Float(), nullable=True),
sa.Column('valence', sa.Float(), nullable=True),
sa.Column('tempo', sa.Float(), nullable=True),
sa.Column('time_signature', sa.Integer(), nullable=True),
sa.Column('genres', sa.JSON(), nullable=True),
sa.Column('lyrics_summary', sa.String(), nullable=True),
sa.Column('genre_tags', sa.String(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_tracks_id'), 'tracks', ['id'], unique=False)
op.create_table('play_history',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('track_id', sa.String(), nullable=True),
sa.Column('played_at', sa.DateTime(), nullable=True),
sa.Column('context_uri', sa.String(), nullable=True),
sa.ForeignKeyConstraint(['track_id'], ['tracks.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_play_history_id'), 'play_history', ['id'], unique=False)
op.create_index(op.f('ix_play_history_played_at'), 'play_history', ['played_at'], unique=False)
# ### end Alembic commands ###
def downgrade() -> None:
"""Downgrade schema."""
# ### commands auto generated by Alembic - please adjust! ###
op.drop_index(op.f('ix_play_history_played_at'), table_name='play_history')
op.drop_index(op.f('ix_play_history_id'), table_name='play_history')
op.drop_table('play_history')
op.drop_index(op.f('ix_tracks_id'), table_name='tracks')
op.drop_table('tracks')
# ### end Alembic commands ###

backend/app/ingest.py (modified)

@@ -2,9 +2,10 @@ import asyncio
import os
from datetime import datetime
from sqlalchemy.orm import Session
-from .models import Track, PlayHistory
+from .models import Track, PlayHistory, Artist
from .database import SessionLocal
from .services.spotify_client import SpotifyClient
from .services.reccobeats_client import ReccoBeatsClient
from dateutil import parser
# Initialize Spotify Client (env vars will be populated later)
@@ -15,10 +16,118 @@ def get_spotify_client():
refresh_token=os.getenv("SPOTIFY_REFRESH_TOKEN"),
)
def get_reccobeats_client():
return ReccoBeatsClient()
async def ensure_artists_exist(db: Session, artists_data: list):
"""
Ensures that all artists in the list exist in the Artist table.
Returns a list of Artist objects.
"""
artist_objects = []
for a_data in artists_data:
artist_id = a_data["id"]
artist = db.query(Artist).filter(Artist.id == artist_id).first()
if not artist:
artist = Artist(
id=artist_id,
name=a_data["name"],
genres=None # Not yet fetched; enrich_tracks() looks for NULL genres
)
db.add(artist)
# No flush needed here: adding to the session is enough for SQLAlchemy to
# track the new artist; the caller's commit persists it.
artist_objects.append(artist)
return artist_objects
async def enrich_tracks(db: Session, spotify_client: SpotifyClient, recco_client: ReccoBeatsClient):
"""
Finds tracks missing genres (Spotify) or audio features (ReccoBeats) and enriches them.
Also enriches Artists with genres.
"""
# 1. Enrich Audio Features (via ReccoBeats)
tracks_missing_features = db.query(Track).filter(Track.danceability == None).limit(50).all()
print(f"DEBUG: Found {len(tracks_missing_features)} tracks missing audio features.")
if tracks_missing_features:
print(f"Enriching {len(tracks_missing_features)} tracks with audio features (ReccoBeats)...")
ids = [t.id for t in tracks_missing_features]
features_list = await recco_client.get_audio_features(ids)
features_map = {}
for f in features_list:
tid = f.get("id")
if not tid and "href" in f:
if "tracks/" in f["href"]:
tid = f["href"].split("tracks/")[1].split("?")[0]
elif "track/" in f["href"]:
tid = f["href"].split("track/")[1].split("?")[0]
if tid:
features_map[tid] = f
updated_count = 0
for track in tracks_missing_features:
data = features_map.get(track.id)
if data:
track.danceability = data.get("danceability")
track.energy = data.get("energy")
track.key = data.get("key")
track.loudness = data.get("loudness")
track.mode = data.get("mode")
track.speechiness = data.get("speechiness")
track.acousticness = data.get("acousticness")
track.instrumentalness = data.get("instrumentalness")
track.liveness = data.get("liveness")
track.valence = data.get("valence")
track.tempo = data.get("tempo")
updated_count += 1
print(f"Updated {updated_count} tracks with audio features.")
db.commit()
# 2. Enrich Artist Genres (via Spotify Artists)
# Artist.genres is nullable JSON: None means "never fetched", while an empty
# list means "fetched, but Spotify lists no genres" (also set after a failed
# fetch to avoid retrying forever). A dedicated "genres_checked" flag could
# make this distinction explicit in the future.
artists_missing_genres = db.query(Artist).filter(Artist.genres == None).limit(50).all()
if artists_missing_genres:
print(f"Enriching {len(artists_missing_genres)} artists with genres (Spotify)...")
artist_ids_list = [a.id for a in artists_missing_genres]
artist_data_map = {}
# Spotify allows fetching 50 artists at a time
for i in range(0, len(artist_ids_list), 50):
chunk = artist_ids_list[i:i+50]
artists_data = await spotify_client.get_artists(chunk)
for a_data in artists_data:
if a_data:
artist_data_map[a_data["id"]] = a_data.get("genres", [])
for artist in artists_missing_genres:
genres = artist_data_map.get(artist.id)
if genres is not None:
artist.genres = genres
else:
# If we couldn't fetch, set to empty list so we don't keep retrying forever (or handle errors better)
artist.genres = []
db.commit()
async def ingest_recently_played(db: Session):
-client = get_spotify_client()
+spotify_client = get_spotify_client()
+recco_client = get_reccobeats_client()
try:
-items = await client.get_recently_played(limit=50)
+items = await spotify_client.get_recently_played(limit=50)
except Exception as e:
print(f"Error connecting to Spotify: {e}")
return
@@ -30,7 +139,6 @@ async def ingest_recently_played(db: Session):
played_at_str = item["played_at"]
played_at = parser.isoparse(played_at_str)
# 1. Check if track exists, if not create it
track_id = track_data["id"]
track = db.query(Track).filter(Track.id == track_id).first()
@@ -39,17 +147,30 @@ async def ingest_recently_played(db: Session):
track = Track(
id=track_id,
name=track_data["name"],
artist=", ".join([a["name"] for a in track_data["artists"]]),
artist=", ".join([a["name"] for a in track_data["artists"]]), # Legacy string
album=track_data["album"]["name"],
duration_ms=track_data["duration_ms"],
popularity=track_data["popularity"],
raw_data=track_data
)
-db.add(track)
-db.commit() # Commit immediately so ID exists for foreign key
-# 2. Check if this specific play instance exists
-# We assume (track_id, played_at) is unique enough
# Handle Artists Relation
artists_data = track_data.get("artists", [])
artist_objects = await ensure_artists_exist(db, artists_data)
track.artists = artist_objects
db.add(track)
db.commit()
# Ensure relationships exist even if track existed (e.g. migration)
# Check if track has artists linked. If not (and raw_data has them), link them.
# FIX: Logic was previously indented improperly inside `if not track`.
if not track.artists and track.raw_data and "artists" in track.raw_data:
print(f"Backfilling artists for track {track.name}")
artist_objects = await ensure_artists_exist(db, track.raw_data["artists"])
track.artists = artist_objects
db.commit()
exists = db.query(PlayHistory).filter(
PlayHistory.track_id == track_id,
PlayHistory.played_at == played_at
@@ -66,9 +187,13 @@ async def ingest_recently_played(db: Session):
db.commit()
# Enrich
await enrich_tracks(db, spotify_client, recco_client)
async def run_worker():
"""Simulates a background worker loop."""
db = SessionLocal()
try:
while True:
print("Worker: Polling Spotify...")

backend/app/models.py (modified)

@@ -1,14 +1,32 @@
-from sqlalchemy import Column, Integer, String, DateTime, JSON, ForeignKey, Boolean
+from sqlalchemy import Column, Integer, String, DateTime, JSON, ForeignKey, Float, Table, Text
from sqlalchemy.orm import relationship
from datetime import datetime
from .database import Base
# Association Table for Many-to-Many Relationship between Track and Artist
track_artists = Table(
'track_artists',
Base.metadata,
Column('track_id', String, ForeignKey('tracks.id'), primary_key=True),
Column('artist_id', String, ForeignKey('artists.id'), primary_key=True)
)
class Artist(Base):
__tablename__ = "artists"
id = Column(String, primary_key=True, index=True) # Spotify ID
name = Column(String)
genres = Column(JSON, nullable=True) # List of genre strings
# Relationships
tracks = relationship("Track", secondary=track_artists, back_populates="artists")
class Track(Base):
__tablename__ = "tracks"
id = Column(String, primary_key=True, index=True) # Spotify ID
name = Column(String)
-artist = Column(String)
+artist = Column(String) # Display string (e.g. "Drake, Future") - kept for convenience
album = Column(String)
duration_ms = Column(Integer)
popularity = Column(Integer, nullable=True)
@@ -16,14 +34,33 @@ class Track(Base):
# Store raw full JSON response for future-proofing analysis
raw_data = Column(JSON, nullable=True)
-# Enriched Data (Phase 3 Prep)
+# Audio Features
danceability = Column(Float, nullable=True)
energy = Column(Float, nullable=True)
key = Column(Integer, nullable=True)
loudness = Column(Float, nullable=True)
mode = Column(Integer, nullable=True)
speechiness = Column(Float, nullable=True)
acousticness = Column(Float, nullable=True)
instrumentalness = Column(Float, nullable=True)
liveness = Column(Float, nullable=True)
valence = Column(Float, nullable=True)
tempo = Column(Float, nullable=True)
time_signature = Column(Integer, nullable=True)
# Genres (stored as JSON list of strings) - DEPRECATED in favor of Artist.genres but kept for now
genres = Column(JSON, nullable=True)
# AI Analysis fields
lyrics_summary = Column(String, nullable=True)
-genre_tags = Column(String, nullable=True) # JSON list stored as string or just raw JSON
+genre_tags = Column(String, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
plays = relationship("PlayHistory", back_populates="track")
artists = relationship("Artist", secondary=track_artists, back_populates="tracks")
class PlayHistory(Base):
@@ -37,3 +74,23 @@ class PlayHistory(Base):
context_uri = Column(String, nullable=True)
track = relationship("Track", back_populates="plays")
class AnalysisSnapshot(Base):
"""
Stores the computed statistics and LLM analysis for a given period.
Allows for trend analysis over time.
"""
__tablename__ = "analysis_snapshots"
id = Column(Integer, primary_key=True, index=True)
date = Column(DateTime, default=datetime.utcnow, index=True) # When the analysis was run
period_start = Column(DateTime)
period_end = Column(DateTime)
period_label = Column(String) # e.g., "last_30_days", "monthly_nov_2023"
# The heavy lifting: stored as JSON blobs
metrics_payload = Column(JSON) # The input to the LLM (StatsService output)
narrative_report = Column(JSON) # The output from the LLM (NarrativeService output)
model_used = Column(String, nullable=True) # e.g. "gemini-1.5-flash"

backend/app/schemas.py (modified)

@@ -12,6 +12,19 @@ class TrackBase(BaseModel):
lyrics_summary: Optional[str] = None
genre_tags: Optional[str] = None
# Audio Features
danceability: Optional[float] = None
energy: Optional[float] = None
valence: Optional[float] = None
tempo: Optional[float] = None
key: Optional[int] = None
mode: Optional[int] = None
acousticness: Optional[float] = None
instrumentalness: Optional[float] = None
liveness: Optional[float] = None
speechiness: Optional[float] = None
loudness: Optional[float] = None
class Track(TrackBase):
created_at: datetime
updated_at: datetime

backend/app/services/narrative_service.py (new file, 67 lines)

@@ -0,0 +1,67 @@
import os
import json
import google.generativeai as genai
from typing import Dict, Any
class NarrativeService:
def __init__(self, model_name: str = "gemini-2.5-flash"):
self.api_key = os.getenv("GEMINI_API_KEY")
if not self.api_key:
print("WARNING: GEMINI_API_KEY not found. LLM features will fail.")
else:
genai.configure(api_key=self.api_key)
self.model_name = model_name
def generate_narrative(self, stats_json: Dict[str, Any]) -> Dict[str, str]:
if not self.api_key:
return {"error": "Missing API Key"}
prompt = f"""
You are analyzing a user's Spotify listening data. Below is a JSON summary of metrics I've computed. Your job is to:
1. Write a narrative "Vibe Check" (2-3 paragraphs) describing their overall listening personality this period.
2. Identify 3-5 notable patterns or anomalies.
3. Provide a "Musical Persona" label (e.g., "Late-Night Binge Listener", "Genre Chameleon", "Album Purist").
4. Write a brief, playful "roast" (1-2 sentences) based on the data.
Guidelines:
- Do NOT recalculate any numbers.
- Use specific metrics to support observations (e.g., "Your whiplash score of 18.3 BPM suggests...").
- Keep tone conversational but insightful.
- Avoid mental health claims; stick to behavioral descriptors.
- Highlight both positive patterns and quirks.
Data:
{json.dumps(stats_json, indent=2)}
Output Format (return valid JSON):
{{
"vibe_check": "...",
"patterns": ["...", "..."],
"persona": "...",
"roast": "..."
}}
"""
try:
# The SDK accepts short model names like 'gemini-2.5-flash'; list_models()
# returns fully-qualified ids like 'models/gemini-2.5-flash'. Either form
# works with GenerativeModel, so the configured name is passed through as-is.
model_id = self.model_name
model = genai.GenerativeModel(model_id)
response = model.generate_content(prompt)
# Clean up response to ensure valid JSON (sometimes LLMs add markdown blocks)
text = response.text.strip()
if text.startswith("```json"):
text = text.replace("```json", "").replace("```", "")
elif text.startswith("```"):
text = text.replace("```", "")
return json.loads(text)
except Exception as e:
return {"error": str(e), "raw_response": response.text if 'response' in locals() else "No response"}

backend/app/services/reccobeats_client.py (new file, 18 lines)

@@ -0,0 +1,18 @@
import httpx
from typing import List, Dict, Any
RECCOBEATS_API_URL = "https://api.reccobeats.com/v1/audio-features"
class ReccoBeatsClient:
async def get_audio_features(self, spotify_ids: List[str]) -> List[Dict[str, Any]]:
if not spotify_ids:
return []
ids_param = ",".join(spotify_ids)
async with httpx.AsyncClient() as client:
try:
response = await client.get(RECCOBEATS_API_URL, params={"ids": ids_param})
if response.status_code != 200:
return []
return response.json().get("content", [])
except Exception:
return []

backend/app/services/spotify_client.py (modified)

@@ -3,6 +3,7 @@ import base64
import time
import httpx
from fastapi import HTTPException
from typing import List, Dict, Any
SPOTIFY_TOKEN_URL = "https://accounts.spotify.com/api/token"
SPOTIFY_API_BASE = "https://api.spotify.com/v1"
@@ -68,3 +69,26 @@ class SpotifyClient:
if response.status_code != 200:
return None
return response.json()
async def get_artists(self, artist_ids: List[str]) -> List[Dict[str, Any]]:
"""
Fetches artist details (including genres) for a list of artist IDs.
Spotify allows up to 50 IDs per request.
"""
if not artist_ids:
return []
token = await self.get_access_token()
ids_param = ",".join(artist_ids)
async with httpx.AsyncClient() as client:
response = await client.get(
f"{SPOTIFY_API_BASE}/artists",
params={"ids": ids_param},
headers={"Authorization": f"Bearer {token}"},
)
if response.status_code != 200:
print(f"Error fetching artists: {response.text}")
return []
return response.json().get("artists", [])

backend/app/services/stats_service.py (new file, 396 lines)

@@ -0,0 +1,396 @@
from sqlalchemy.orm import Session
from sqlalchemy import func, distinct, desc
from datetime import datetime, timedelta
from typing import Dict, Any, List
import math
import numpy as np
from ..models import PlayHistory, Track, Artist, AnalysisSnapshot
class StatsService:
def __init__(self, db: Session):
self.db = db
def compute_volume_stats(self, period_start: datetime, period_end: datetime) -> Dict[str, Any]:
"""
Calculates volume metrics: Total Plays, Unique Tracks, Artists, etc.
"""
query = self.db.query(PlayHistory).filter(
PlayHistory.played_at >= period_start,
PlayHistory.played_at <= period_end
)
plays = query.all()
total_plays = len(plays)
if total_plays == 0:
return {
"total_plays": 0,
"estimated_minutes": 0,
"unique_tracks": 0,
"unique_artists": 0,
"unique_albums": 0,
"unique_genres": 0,
"top_tracks": [],
"top_artists": [],
"repeat_rate": 0,
"concentration": {}
}
# Calculate Duration (Estimated)
# Bulk-fetch the Track rows for all played track IDs and map them by ID,
# avoiding per-play lazy loads.
track_ids = [p.track_id for p in plays]
tracks = self.db.query(Track).filter(Track.id.in_(set(track_ids))).all()
track_map = {t.id: t for t in tracks}
total_ms = 0
unique_track_ids = set()
unique_artist_ids = set()
# Album identity: prefer raw_data["album"]["id"]; fall back to the album name string.
unique_album_ids = set()
genre_counts = {}
# For Top Lists
track_play_counts = {}
artist_play_counts = {}
for p in plays:
t = track_map.get(p.track_id)
if t:
total_ms += t.duration_ms
unique_track_ids.add(t.id)
# Top Tracks
track_play_counts[t.id] = track_play_counts.get(t.id, 0) + 1
# Artists (via the M2M relation). Accessing t.artists lazy-loads per track,
# which can be slow for thousands of plays; joining
# PlayHistory -> Track -> Artist in one query would be the optimization.
# The relation is used here for correctness, accepting some latency.
for artist in t.artists:
unique_artist_ids.add(artist.id)
artist_play_counts[artist.id] = artist_play_counts.get(artist.id, 0) + 1
if artist.genres:
for g in artist.genres:
genre_counts[g] = genre_counts.get(g, 0) + 1
if t.raw_data and "album" in t.raw_data:
unique_album_ids.add(t.raw_data["album"]["id"])
else:
unique_album_ids.add(t.album) # Fallback
estimated_minutes = total_ms / 60000
# Top 5 Tracks
sorted_tracks = sorted(track_play_counts.items(), key=lambda x: x[1], reverse=True)[:5]
top_tracks = []
for tid, count in sorted_tracks:
t = track_map.get(tid)
top_tracks.append({
"name": t.name,
"artist": t.artist, # Display string
"count": count
})
# Top 5 Artists
# Need to fetch Artist names
top_artist_ids = sorted(artist_play_counts.items(), key=lambda x: x[1], reverse=True)[:5]
top_artists_objs = self.db.query(Artist).filter(Artist.id.in_([x[0] for x in top_artist_ids])).all()
artist_name_map = {a.id: a.name for a in top_artists_objs}
top_artists = []
for aid, count in top_artist_ids:
top_artists.append({
"name": artist_name_map.get(aid, "Unknown"),
"count": count
})
# Top Genres
sorted_genres = sorted(genre_counts.items(), key=lambda x: x[1], reverse=True)[:5]
top_genres = [{"name": g, "count": c} for g, c in sorted_genres]
# Concentration
unique_tracks_count = len(unique_track_ids)
repeat_rate = (total_plays - unique_tracks_count) / total_plays if total_plays > 0 else 0
# HHI (Herfindahl-Hirschman Index)
# Sum of (share)^2. Share = track_plays / total_plays
hhi = sum([(c/total_plays)**2 for c in track_play_counts.values()])
return {
"total_plays": total_plays,
"estimated_minutes": int(estimated_minutes),
"unique_tracks": unique_tracks_count,
"unique_artists": len(unique_artist_ids),
"unique_albums": len(unique_album_ids),
"unique_genres": len(genre_counts),
"top_tracks": top_tracks,
"top_artists": top_artists,
"top_genres": top_genres,
"repeat_rate": round(repeat_rate, 3),
"concentration": {
"hhi": round(hhi, 4),
# "gini": ... (skip for now to keep it simple)
}
}
def compute_time_stats(self, period_start: datetime, period_end: datetime) -> Dict[str, Any]:
"""
Hourly, Daily distribution, etc.
"""
query = self.db.query(PlayHistory).filter(
PlayHistory.played_at >= period_start,
PlayHistory.played_at <= period_end
)
plays = query.all()
hourly_counts = [0] * 24
weekday_counts = [0] * 7 # 0=Mon, 6=Sun
if not plays:
return {"hourly_distribution": hourly_counts}
for p in plays:
# played_at is stored in UTC; convert to the user's local timezone here
# if local-time habits are wanted. For now, UTC is assumed.
h = p.played_at.hour
d = p.played_at.weekday()
hourly_counts[h] += 1
weekday_counts[d] += 1
peak_hour = hourly_counts.index(max(hourly_counts))
# Weekend Share
weekend_plays = weekday_counts[5] + weekday_counts[6]
weekend_share = weekend_plays / len(plays) if len(plays) > 0 else 0
return {
"hourly_distribution": hourly_counts,
"peak_hour": peak_hour,
"weekday_distribution": weekday_counts,
"weekend_share": round(weekend_share, 2)
}
def compute_session_stats(self, period_start: datetime, period_end: datetime) -> Dict[str, Any]:
"""
Session logic: Gap > 20 mins = new session.
"""
query = self.db.query(PlayHistory).filter(
PlayHistory.played_at >= period_start,
PlayHistory.played_at <= period_end
).order_by(PlayHistory.played_at.asc())
plays = query.all()
if not plays:
return {"count": 0, "avg_length_minutes": 0}
sessions = []
current_session = [plays[0]]
for i in range(1, len(plays)):
prev = plays[i-1]
curr = plays[i]
diff = (curr.played_at - prev.played_at).total_seconds() / 60
if diff > 20:
sessions.append(current_session)
current_session = []
current_session.append(curr)
sessions.append(current_session)
session_lengths_min = []
for sess in sessions:
if len(sess) > 1:
start = sess[0].played_at
end = sess[-1].played_at
# (end - start) ignores the duration of the final track;
# accepted as an undercount for simplicity.
duration = (end - start).total_seconds() / 60
session_lengths_min.append(duration)
else:
session_lengths_min.append(3.0) # Approx 1 track
avg_min = sum(session_lengths_min) / len(session_lengths_min) if session_lengths_min else 0
return {
"count": len(sessions),
"avg_tracks": len(plays) / len(sessions),
"avg_minutes": round(avg_min, 1),
"longest_session_minutes": round(max(session_lengths_min), 1) if session_lengths_min else 0
}
def compute_vibe_stats(self, period_start: datetime, period_end: datetime) -> Dict[str, Any]:
"""
Aggregates Audio Features (Energy, Valence, etc.)
"""
query = self.db.query(PlayHistory).filter(
PlayHistory.played_at >= period_start,
PlayHistory.played_at <= period_end
)
plays = query.all()
track_ids = list(set([p.track_id for p in plays]))
if not track_ids:
return {}
tracks = self.db.query(Track).filter(Track.id.in_(track_ids)).all()
# Collect features
features = {
"energy": [], "valence": [], "danceability": [],
"tempo": [], "acousticness": [], "instrumentalness": [],
"liveness": [], "speechiness": []
}
for t in tracks:
# Weight features by play count in this period so the averages
# reflect what was actually heard, not just the unique library.
play_count = len([p for p in plays if p.track_id == t.id])
if t.energy is not None:
for _ in range(play_count):
features["energy"].append(t.energy)
features["valence"].append(t.valence)
features["danceability"].append(t.danceability)
features["tempo"].append(t.tempo)
features["acousticness"].append(t.acousticness)
features["instrumentalness"].append(t.instrumentalness)
features["liveness"].append(t.liveness)
features["speechiness"].append(t.speechiness)
stats = {}
for key, values in features.items():
valid = [v for v in values if v is not None]
if valid:
stats[f"avg_{key}"] = float(np.mean(valid))
stats[f"std_{key}"] = float(np.std(valid))
else:
stats[f"avg_{key}"] = None
# Derived Metrics
if stats.get("avg_energy") and stats.get("avg_valence"):
stats["mood_quadrant"] = {
"x": round(stats["avg_valence"], 2),
"y": round(stats["avg_energy"], 2)
}
return stats
def compute_era_stats(self, period_start: datetime, period_end: datetime) -> Dict[str, Any]:
"""
Musical Age and Era Distribution.
"""
query = self.db.query(PlayHistory).filter(
PlayHistory.played_at >= period_start,
PlayHistory.played_at <= period_end
)
plays = query.all()
years = []
track_ids = list(set([p.track_id for p in plays]))
tracks = self.db.query(Track).filter(Track.id.in_(track_ids)).all()
track_map = {t.id: t for t in tracks}
for p in plays:
t = track_map.get(p.track_id)
if t and t.raw_data and "album" in t.raw_data and "release_date" in t.raw_data["album"]:
rd = t.raw_data["album"]["release_date"]
# Format can be YYYY, YYYY-MM, YYYY-MM-DD
try:
year = int(rd.split("-")[0])
years.append(year)
except (ValueError, AttributeError):
pass
if not years:
return {"musical_age": None}
avg_year = sum(years) / len(years)
# Decade breakdown
decades = {}
for y in years:
dec = (y // 10) * 10
label = f"{dec}s"
decades[label] = decades.get(label, 0) + 1
total = len(years)
decade_dist = {k: round(v/total, 2) for k, v in decades.items()}
return {
"musical_age": int(avg_year),
"decade_distribution": decade_dist
}
def compute_skip_stats(self, period_start: datetime, period_end: datetime) -> Dict[str, Any]:
"""
Implements boredom skip detection:
(next_track.played_at - current_track.played_at) < (current_track.duration_ms / 1000 - 10s)
"""
query = self.db.query(PlayHistory).filter(
PlayHistory.played_at >= period_start,
PlayHistory.played_at <= period_end
).order_by(PlayHistory.played_at.asc())
plays = query.all()
if len(plays) < 2:
return {"skip_rate": 0, "total_skips": 0}
skips = 0
track_ids = list(set([p.track_id for p in plays]))
tracks = self.db.query(Track).filter(Track.id.in_(track_ids)).all()
track_map = {t.id: t for t in tracks}
for i in range(len(plays) - 1):
current_play = plays[i]
next_play = plays[i+1]
track = track_map.get(current_play.track_id)
if not track or not track.duration_ms:
continue
diff_seconds = (next_play.played_at - current_play.played_at).total_seconds()
# Logic: If diff < (duration - 10s), it's a skip.
# Convert duration to seconds
duration_sec = track.duration_ms / 1000.0
# Caveat: negative or very small diffs can occur with replays. Spotify's
# recently-played feed only logs tracks played for 30s or more, which bounds
# how aggressive this heuristic can be.
if diff_seconds < (duration_sec - 10):
skips += 1
return {
"total_skips": skips,
"skip_rate": round(skips / len(plays), 3)
}
def generate_full_report(self, period_start: datetime, period_end: datetime) -> Dict[str, Any]:
return {
"period": {
"start": period_start.isoformat(),
"end": period_end.isoformat()
},
"volume": self.compute_volume_stats(period_start, period_end),
"time_habits": self.compute_time_stats(period_start, period_end),
"sessions": self.compute_session_stats(period_start, period_end),
"vibe": self.compute_vibe_stats(period_start, period_end),
"era": self.compute_era_stats(period_start, period_end),
"skips": self.compute_skip_stats(period_start, period_end)
}

backend/backend.log (new file, 10 lines)

@@ -0,0 +1,10 @@
INFO: Started server process [9223]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:35326 - "GET /history?limit=100 HTTP/1.1" 200 OK
INFO: 127.0.0.1:35342 - "GET /history?limit=100 HTTP/1.1" 200 OK
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [9223]

backend/requirements.txt (modified)

@@ -9,3 +9,4 @@ google-generativeai==0.3.2
tenacity==8.2.3
python-dateutil==2.9.0.post0
requests==2.31.0
alembic==1.13.1

backend/run_analysis.py (new file, 82 lines)

@@ -0,0 +1,82 @@
import os
import sys
import json
from datetime import datetime, timedelta
from app.database import SessionLocal
from app.services.stats_service import StatsService
from app.services.narrative_service import NarrativeService
from app.models import AnalysisSnapshot
def run_analysis_pipeline(days: int = 30, model_name: str = "gemini-2.5-flash"):
db = SessionLocal()
try:
end_date = datetime.utcnow()
start_date = end_date - timedelta(days=days)
print(f"--- Starting Analysis for period: {start_date} to {end_date} ---")
# 1. Compute Stats
print("Calculating metrics...")
stats_service = StatsService(db)
stats_json = stats_service.generate_full_report(start_date, end_date)
# Check if we have enough data
if stats_json["volume"]["total_plays"] == 0:
print("No plays found in this period. Skipping LLM analysis.")
return
print(f"Stats computed. Total Plays: {stats_json['volume']['total_plays']}")
print(f"Top Artist: {stats_json['volume']['top_artists'][0]['name'] if stats_json['volume']['top_artists'] else 'N/A'}")
# 2. Generate Narrative
print(f"Generating Narrative with {model_name}...")
narrative_service = NarrativeService(model_name=model_name)
narrative_json = narrative_service.generate_narrative(stats_json)
if "error" in narrative_json:
print(f"LLM Error: {narrative_json['error']}")
else:
print("Narrative generated successfully.")
print(f"Persona: {narrative_json.get('persona')}")
# 3. Save Snapshot
print("Saving snapshot to database...")
snapshot = AnalysisSnapshot(
period_start=start_date,
period_end=end_date,
period_label=f"last_{days}_days",
metrics_payload=stats_json,
narrative_report=narrative_json,
model_used=model_name
)
db.add(snapshot)
db.commit()
print(f"Snapshot saved with ID: {snapshot.id}")
# 4. Output to file for easy inspection
output = {
"snapshot_id": snapshot.id,
"metrics": stats_json,
"narrative": narrative_json
}
with open("latest_analysis.json", "w") as f:
json.dump(output, f, indent=2)
print("Full report saved to latest_analysis.json")
except Exception as e:
print(f"Pipeline Failed: {e}")
import traceback
traceback.print_exc()
finally:
db.close()
if __name__ == "__main__":
# Optional CLI argument: number of days to analyze (default 30).
days = 30
if len(sys.argv) > 1:
try:
days = int(sys.argv[1])
except ValueError:
pass
run_analysis_pipeline(days=days)

backend/… (new file, 31 lines)

@@ -0,0 +1,31 @@
from sqlalchemy.orm import Session
from app.database import SessionLocal, engine, Base
from app.models import Track, PlayHistory
from datetime import datetime, timedelta
Base.metadata.create_all(bind=engine)
db = SessionLocal()
# clear
db.query(PlayHistory).delete()
db.query(Track).delete()
db.commit()
# Create tracks
t1 = Track(id="t1", name="Midnight City", artist="M83", album="Hurry Up, We're Dreaming", duration_ms=243000, danceability=0.6, energy=0.8, valence=0.5, raw_data={})
t2 = Track(id="t2", name="Weightless", artist="Marconi Union", album="Weightless", duration_ms=480000, danceability=0.2, energy=0.1, valence=0.1, raw_data={})
t3 = Track(id="t3", name="Levitating", artist="Dua Lipa", album="Future Nostalgia", duration_ms=203000, danceability=0.8, energy=0.9, valence=0.9, raw_data={})
db.add_all([t1, t2, t3])
db.commit()
# Create history
ph1 = PlayHistory(track_id="t1", played_at=datetime.utcnow() - timedelta(minutes=10))
ph2 = PlayHistory(track_id="t2", played_at=datetime.utcnow() - timedelta(minutes=30))
ph3 = PlayHistory(track_id="t3", played_at=datetime.utcnow() - timedelta(minutes=60))
db.add_all([ph1, ph2, ph3])
db.commit()
print("Data populated")
db.close()

backend/seed_data.py (new file, 78 lines)

@@ -0,0 +1,78 @@
from datetime import datetime, timedelta
import random
from app.database import SessionLocal
from app.models import Track, Artist, PlayHistory
from app.services.stats_service import StatsService
def seed_db():
db = SessionLocal()
# 1. Create Artists
artists = []
for i in range(10):
a = Artist(
id=f"artist_{i}",
name=f"Artist {i}",
genres=[random.choice(["pop", "rock", "jazz", "edm", "hip-hop"]) for _ in range(2)]
)
db.merge(a) # merge handles insert/update
artists.append(a)
db.commit()
print(f"Seeded {len(artists)} artists.")
# 2. Create Tracks
tracks = []
for i in range(50):
# Random artist
artist = random.choice(artists)
t = Track(
id=f"track_{i}",
name=f"Track {i}",
artist=artist.name, # Legacy
album=f"Album {i % 10}",
duration_ms=random.randint(180000, 300000), # 3-5 mins
popularity=random.randint(10, 90),
danceability=random.uniform(0.3, 0.9),
energy=random.uniform(0.3, 0.9),
valence=random.uniform(0.1, 0.9),
tempo=random.uniform(80, 160),
raw_data={"album": {"id": f"album_{i%10}", "release_date": f"{random.randint(2000, 2023)}-01-01"}}
)
# Link artist
t.artists.append(artist)
db.merge(t)
tracks.append(t)
db.commit()
print(f"Seeded {len(tracks)} tracks.")
# 3. Create Play History (Last 30 days)
plays = []
base_time = datetime.utcnow() - timedelta(days=25)
for i in range(200):
# Create sessions
# 80% chance next play is soon (2-5 mins), 20% chance gap (30-600 mins)
gap = random.randint(2, 6) if random.random() > 0.2 else random.randint(30, 600)
base_time += timedelta(minutes=gap)
if base_time > datetime.utcnow():
break
track = random.choice(tracks)
p = PlayHistory(
track_id=track.id,
played_at=base_time,
context_uri="spotify:playlist:fake"
)
db.add(p)
db.commit()
print(f"Seeded play history until {base_time}.")
db.close()
if __name__ == "__main__":
seed_db()

backend/… (new file, 69 lines)

@@ -0,0 +1,69 @@
import unittest
from datetime import datetime, timedelta
from unittest.mock import MagicMock
from app.services.stats_service import StatsService
from app.models import PlayHistory, Track, Artist
class TestStatsService(unittest.TestCase):
def setUp(self):
self.mock_db = MagicMock()
self.service = StatsService(self.mock_db)
def test_compute_volume_stats_empty(self):
# Mock empty query result
self.mock_db.query.return_value.filter.return_value.all.return_value = []
start = datetime.utcnow()
end = datetime.utcnow()
stats = self.service.compute_volume_stats(start, end)
self.assertEqual(stats["total_plays"], 0)
self.assertEqual(stats["unique_tracks"], 0)
def test_compute_session_stats(self):
# Create dummy plays
t1 = datetime(2023, 1, 1, 10, 0, 0)
t2 = datetime(2023, 1, 1, 10, 5, 0) # 5 min gap (same session)
t3 = datetime(2023, 1, 1, 12, 0, 0) # 1h 55m gap (new session)
plays = [
PlayHistory(played_at=t1, track_id="1"),
PlayHistory(played_at=t2, track_id="2"),
PlayHistory(played_at=t3, track_id="3"),
]
# Mock the query chain
# service.db.query().filter().order_by().all()
query_mock = self.mock_db.query.return_value.filter.return_value.order_by.return_value
query_mock.all.return_value = plays
stats = self.service.compute_session_stats(datetime.utcnow(), datetime.utcnow())
# Expected: 2 sessions ([t1, t2], [t3])
self.assertEqual(stats["count"], 2)
# Avg tracks: 3 plays / 2 sessions = 1.5
self.assertEqual(stats["avg_tracks"], 1.5)
def test_compute_skip_stats(self):
# Track duration = 30s
track = Track(id="t1", duration_ms=30000)
# Play 1: 10:00:00
# Play 2: 10:00:10 (gap 10s < duration 30s minus 10s tolerance = 20s -> counted as a skip)
p1 = PlayHistory(played_at=datetime(2023, 1, 1, 10, 0, 0), track_id="t1")
p2 = PlayHistory(played_at=datetime(2023, 1, 1, 10, 0, 10), track_id="t1")
plays = [p1, p2]
query_mock = self.mock_db.query.return_value.filter.return_value.order_by.return_value
query_mock.all.return_value = plays
# Mock track lookup
self.mock_db.query.return_value.filter.return_value.all.return_value = [track]
stats = self.service.compute_skip_stats(datetime.utcnow(), datetime.utcnow())
self.assertEqual(stats["total_skips"], 1)
if __name__ == '__main__':
unittest.main()

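These tests should run with `python -m unittest discover` from `backend/`, assuming standard `test_*.py` naming. For reference, the skip rule the last test encodes can be read off its comment; a minimal sketch of that heuristic (an assumption inferred from the test, not a copy of the service code):

```python
from datetime import datetime

def is_skip(played_at: datetime, next_played_at: datetime, duration_ms: int) -> bool:
    """A play counts as a skip when the next play starts more than
    10 seconds before the current track would have finished."""
    gap_s = (next_played_at - played_at).total_seconds()
    return gap_s < (duration_ms / 1000) - 10

# The test case: 10s gap on a 30s track -> 10 < 20 -> skip.
assert is_skip(datetime(2023, 1, 1, 10, 0, 0),
               datetime(2023, 1, 1, 10, 0, 10), 30000)
```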

26
docker-compose.yml Normal file

@@ -0,0 +1,26 @@
version: '3.8'
services:
backend:
build:
context: ./backend
image: ghcr.io/bnair123/musicanalyser:latest
container_name: music-analyser-backend
restart: unless-stopped
volumes:
- /opt/mySpotify/music.db:/app/music.db
environment:
- SPOTIFY_CLIENT_ID=${SPOTIFY_CLIENT_ID}
- SPOTIFY_CLIENT_SECRET=${SPOTIFY_CLIENT_SECRET}
- SPOTIFY_REFRESH_TOKEN=${SPOTIFY_REFRESH_TOKEN}
- GEMINI_API_KEY=${GEMINI_API_KEY}
ports:
- '8000:8000'
frontend:
build:
context: ./frontend
container_name: music-analyser-frontend
restart: unless-stopped
ports:
- '8991:80'
depends_on:
- backend

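The compose file expects four secrets from the environment. A minimal `.env` sketch placed next to `docker-compose.yml` (values are placeholders):

```
SPOTIFY_CLIENT_ID=your_client_id
SPOTIFY_CLIENT_SECRET=your_client_secret
SPOTIFY_REFRESH_TOKEN=your_refresh_token
GEMINI_API_KEY=your_gemini_api_key
```

Also note the backend volume maps a single file, so `/opt/mySpotify/music.db` must already exist on the host; otherwise Docker will create it as an empty directory and the mount will fail.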
24
frontend/.gitignore vendored Normal file

@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

14
frontend/Dockerfile Normal file

@@ -0,0 +1,14 @@
# Stage 1: Build the React app
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Stage 2: Serve with Nginx
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
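The image can also be built standalone with `docker build -t music-analyser-frontend ./frontend`, though docker-compose handles this automatically. One caveat: the build stage runs `npm install` rather than `npm ci`, so builds are not strictly pinned to `package-lock.json`.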

16
frontend/README.md Normal file

@@ -0,0 +1,16 @@
# React + Vite
This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.
Currently, two official plugins are available:
- [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react) uses [Babel](https://babeljs.io/) (or [oxc](https://oxc.rs) when used in [rolldown-vite](https://vite.dev/guide/rolldown)) for Fast Refresh
- [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react-swc) uses [SWC](https://swc.rs/) for Fast Refresh
## React Compiler
The React Compiler is not enabled on this template because of its impact on dev & build performance. To add it, see [this documentation](https://react.dev/learn/react-compiler/installation).
## Expanding the ESLint configuration
If you are developing a production application, we recommend using TypeScript with type-aware lint rules enabled. Check out the [TS template](https://github.com/vitejs/vite/tree/main/packages/create-vite/template-react-ts) for information on how to integrate TypeScript and [`typescript-eslint`](https://typescript-eslint.io) in your project.

29
frontend/eslint.config.js Normal file

@@ -0,0 +1,29 @@
import js from '@eslint/js'
import globals from 'globals'
import reactHooks from 'eslint-plugin-react-hooks'
import reactRefresh from 'eslint-plugin-react-refresh'
import { defineConfig, globalIgnores } from 'eslint/config'
export default defineConfig([
globalIgnores(['dist']),
{
files: ['**/*.{js,jsx}'],
extends: [
js.configs.recommended,
reactHooks.configs.flat.recommended,
reactRefresh.configs.vite,
],
languageOptions: {
ecmaVersion: 2020,
globals: globals.browser,
parserOptions: {
ecmaVersion: 'latest',
ecmaFeatures: { jsx: true },
sourceType: 'module',
},
},
rules: {
'no-unused-vars': ['error', { varsIgnorePattern: '^[A-Z_]' }],
},
},
])

13
frontend/index.html Normal file

@@ -0,0 +1,13 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Music Analyser</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.jsx"></script>
</body>
</html>

22
frontend/nginx.conf Normal file

@@ -0,0 +1,22 @@
server {
listen 80;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
# Proxy API requests to backend
location /api/ {
# 'backend' resolves to the backend service name in docker-compose.
# The backend serves its routes at the root (/history, /tracks, etc.),
# so the /api/ prefix used by the frontend is stripped before proxying.
rewrite ^/api/(.*) /$1 break;
proxy_pass http://backend:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
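A quick way to confirm the rewrite behaves as intended once the stack is up; a sketch, assuming the published port 8991 from docker-compose and the `/history` endpoint from the earlier phases:

```python
import requests

# Hits nginx on the published frontend port; nginx strips /api/ and the
# backend receives GET /history?limit=5.
r = requests.get("http://localhost:8991/api/history", params={"limit": 5})
print(r.status_code, len(r.json()))
```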

4197
frontend/package-lock.json generated Normal file

File diff suppressed because it is too large.

32
frontend/package.json Normal file
View File

@@ -0,0 +1,32 @@
{
"name": "frontend",
"private": true,
"version": "0.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"lint": "eslint .",
"preview": "vite preview"
},
"dependencies": {
"@ant-design/icons": "^6.1.0",
"antd": "^6.1.2",
"axios": "^1.13.2",
"date-fns": "^4.1.0",
"react": "^19.2.0",
"react-dom": "^19.2.0",
"react-router-dom": "^7.11.0"
},
"devDependencies": {
"@eslint/js": "^9.39.1",
"@types/react": "^19.2.5",
"@types/react-dom": "^19.2.3",
"@vitejs/plugin-react": "^5.1.1",
"eslint": "^9.39.1",
"eslint-plugin-react-hooks": "^7.0.1",
"eslint-plugin-react-refresh": "^0.4.24",
"globals": "^16.5.0",
"vite": "^7.2.4"
}
}

1
frontend/public/vite.svg Normal file

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="31.88" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 257"><defs><linearGradient id="IconifyId1813088fe1fbc01fb466" x1="-.828%" x2="57.636%" y1="7.652%" y2="78.411%"><stop offset="0%" stop-color="#41D1FF"></stop><stop offset="100%" stop-color="#BD34FE"></stop></linearGradient><linearGradient id="IconifyId1813088fe1fbc01fb467" x1="43.376%" x2="50.316%" y1="2.242%" y2="89.03%"><stop offset="0%" stop-color="#FFEA83"></stop><stop offset="8.333%" stop-color="#FFDD35"></stop><stop offset="100%" stop-color="#FFA800"></stop></linearGradient></defs><path fill="url(#IconifyId1813088fe1fbc01fb466)" d="M255.153 37.938L134.897 252.976c-2.483 4.44-8.862 4.466-11.382.048L.875 37.958c-2.746-4.814 1.371-10.646 6.827-9.67l120.385 21.517a6.537 6.537 0 0 0 2.322-.004l117.867-21.483c5.438-.991 9.574 4.796 6.877 9.62Z"></path><path fill="url(#IconifyId1813088fe1fbc01fb467)" d="M185.432.063L96.44 17.501a3.268 3.268 0 0 0-2.634 3.014l-5.474 92.456a3.268 3.268 0 0 0 3.997 3.378l24.777-5.718c2.318-.535 4.413 1.507 3.936 3.838l-7.361 36.047c-.495 2.426 1.782 4.5 4.151 3.78l15.304-4.649c2.372-.72 4.652 1.36 4.15 3.788l-11.698 56.621c-.732 3.542 3.979 5.473 5.943 2.437l1.313-2.028l72.516-144.72c1.215-2.423-.88-5.186-3.54-4.672l-25.505 4.922c-2.396.462-4.435-1.77-3.759-4.114l16.646-57.705c.677-2.35-1.37-4.583-3.769-4.113Z"></path></svg>


42
frontend/src/App.css Normal file
View File

@@ -0,0 +1,42 @@
#root {
max-width: 1280px;
margin: 0 auto;
padding: 2rem;
text-align: center;
}
.logo {
height: 6em;
padding: 1.5em;
will-change: filter;
transition: filter 300ms;
}
.logo:hover {
filter: drop-shadow(0 0 2em #646cffaa);
}
.logo.react:hover {
filter: drop-shadow(0 0 2em #61dafbaa);
}
@keyframes logo-spin {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}
@media (prefers-reduced-motion: no-preference) {
a:nth-of-type(2) .logo {
animation: logo-spin infinite 20s linear;
}
}
.card {
padding: 2em;
}
.read-the-docs {
color: #888;
}

117
frontend/src/App.jsx Normal file

@@ -0,0 +1,117 @@
import React, { useEffect, useState } from 'react';
import { Table, Layout, Typography, Tag, Card, Statistic, Row, Col, Space } from 'antd';
import { ClockCircleOutlined, SoundOutlined, UserOutlined } from '@ant-design/icons';
import axios from 'axios';
import { format } from 'date-fns';
const { Header, Content, Footer } = Layout;
const { Title, Text } = Typography;
const App = () => {
const [history, setHistory] = useState([]);
const [loading, setLoading] = useState(true);
// Fetch History
useEffect(() => {
const fetchHistory = async () => {
try {
const response = await axios.get('/api/history?limit=100');
setHistory(response.data);
} catch (error) {
console.error("Failed to fetch history", error);
} finally {
setLoading(false);
}
};
fetchHistory();
}, []);
// Columns for Ant Design Table
const columns = [
{
title: 'Track',
dataIndex: ['track', 'name'],
key: 'track',
render: (text, record) => (
<Space direction="vertical" size={0}>
<Text strong>{text}</Text>
<Text type="secondary" style={{ fontSize: '12px' }}>{record.track.album}</Text>
</Space>
),
},
{
title: 'Artist',
dataIndex: ['track', 'artist'],
key: 'artist',
render: (text) => <Tag icon={<UserOutlined />} color="blue">{text}</Tag>,
},
{
title: 'Played At',
dataIndex: 'played_at',
key: 'played_at',
render: (date) => (
<Space>
<ClockCircleOutlined />
{format(new Date(date), 'MMM d, h:mm a')}
</Space>
),
sorter: (a, b) => new Date(a.played_at) - new Date(b.played_at),
defaultSortOrder: 'descend',
},
{
title: 'Vibe',
key: 'vibe',
render: (_, record) => {
const energy = record.track.energy;
const valence = record.track.valence;
if (energy == null || valence == null) return <Tag>Unknown</Tag>; // == null catches both null and undefined from the API
let color = 'default';
let label = 'Neutral';
if (energy > 0.7 && valence > 0.5) { color = 'orange'; label = 'High Energy / Happy'; }
else if (energy > 0.7 && valence <= 0.5) { color = 'red'; label = 'High Energy / Dark'; }
else if (energy <= 0.4 && valence > 0.5) { color = 'green'; label = 'Chill / Peaceful'; }
else if (energy <= 0.4 && valence <= 0.5) { color = 'purple'; label = 'Sad / Melancholic'; }
return <Tag color={color}>{label}</Tag>;
}
}
];
return (
<Layout style={{ minHeight: '100vh' }}>
<Header style={{ display: 'flex', alignItems: 'center' }}>
<Title level={3} style={{ color: 'white', margin: 0 }}>
<SoundOutlined style={{ marginRight: 10 }}/> Music Analyser
</Title>
</Header>
<Content style={{ padding: '0 50px', marginTop: 30 }}>
<div style={{ background: '#141414', padding: 24, borderRadius: 8, minHeight: 280 }}>
<Row gutter={16} style={{ marginBottom: 24 }}>
<Col span={8}>
<Card>
<Statistic title="Total Plays (Stored)" value={history.length} prefix={<SoundOutlined />} />
</Card>
</Col>
</Row>
<Title level={4} style={{ color: 'white' }}>Recent Listening History</Title>
<Table
columns={columns}
dataSource={history}
rowKey="id"
loading={loading}
pagination={{ pageSize: 10 }}
/>
</div>
</Content>
<Footer style={{ textAlign: 'center' }}>
Music Analyser ©{new Date().getFullYear()} Created with Ant Design
</Footer>
</Layout>
);
};
export default App;


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" class="iconify iconify--logos" width="35.93" height="32" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 228"><path fill="#00D8FF" d="M210.483 73.824a171.49 171.49 0 0 0-8.24-2.597c.465-1.9.893-3.777 1.273-5.621c6.238-30.281 2.16-54.676-11.769-62.708c-13.355-7.7-35.196.329-57.254 19.526a171.23 171.23 0 0 0-6.375 5.848a155.866 155.866 0 0 0-4.241-3.917C100.759 3.829 77.587-4.822 63.673 3.233C50.33 10.957 46.379 33.89 51.995 62.588a170.974 170.974 0 0 0 1.892 8.48c-3.28.932-6.445 1.924-9.474 2.98C17.309 83.498 0 98.307 0 113.668c0 15.865 18.582 31.778 46.812 41.427a145.52 145.52 0 0 0 6.921 2.165a167.467 167.467 0 0 0-2.01 9.138c-5.354 28.2-1.173 50.591 12.134 58.266c13.744 7.926 36.812-.22 59.273-19.855a145.567 145.567 0 0 0 5.342-4.923a168.064 168.064 0 0 0 6.92 6.314c21.758 18.722 43.246 26.282 56.54 18.586c13.731-7.949 18.194-32.003 12.4-61.268a145.016 145.016 0 0 0-1.535-6.842c1.62-.48 3.21-.974 4.76-1.488c29.348-9.723 48.443-25.443 48.443-41.52c0-15.417-17.868-30.326-45.517-39.844Zm-6.365 70.984c-1.4.463-2.836.91-4.3 1.345c-3.24-10.257-7.612-21.163-12.963-32.432c5.106-11 9.31-21.767 12.459-31.957c2.619.758 5.16 1.557 7.61 2.4c23.69 8.156 38.14 20.213 38.14 29.504c0 9.896-15.606 22.743-40.946 31.14Zm-10.514 20.834c2.562 12.94 2.927 24.64 1.23 33.787c-1.524 8.219-4.59 13.698-8.382 15.893c-8.067 4.67-25.32-1.4-43.927-17.412a156.726 156.726 0 0 1-6.437-5.87c7.214-7.889 14.423-17.06 21.459-27.246c12.376-1.098 24.068-2.894 34.671-5.345a134.17 134.17 0 0 1 1.386 6.193ZM87.276 214.515c-7.882 2.783-14.16 2.863-17.955.675c-8.075-4.657-11.432-22.636-6.853-46.752a156.923 156.923 0 0 1 1.869-8.499c10.486 2.32 22.093 3.988 34.498 4.994c7.084 9.967 14.501 19.128 21.976 27.15a134.668 134.668 0 0 1-4.877 4.492c-9.933 8.682-19.886 14.842-28.658 17.94ZM50.35 144.747c-12.483-4.267-22.792-9.812-29.858-15.863c-6.35-5.437-9.555-10.836-9.555-15.216c0-9.322 13.897-21.212 37.076-29.293c2.813-.98 5.757-1.905 8.812-2.773c3.204 10.42 7.406 21.315 12.477 32.332c-5.137 11.18-9.399 22.249-12.634 32.792a134.718 134.718 0 0 1-6.318-1.979Zm12.378-84.26c-4.811-24.587-1.616-43.134 6.425-47.789c8.564-4.958 27.502 2.111 47.463 19.835a144.318 144.318 0 0 1 3.841 3.545c-7.438 7.987-14.787 17.08-21.808 26.988c-12.04 1.116-23.565 2.908-34.161 5.309a160.342 160.342 0 0 1-1.76-7.887Zm110.427 27.268a347.8 347.8 0 0 0-7.785-12.803c8.168 1.033 15.994 2.404 23.343 4.08c-2.206 7.072-4.956 14.465-8.193 22.045a381.151 381.151 0 0 0-7.365-13.322Zm-45.032-43.861c5.044 5.465 10.096 11.566 15.065 18.186a322.04 322.04 0 0 0-30.257-.006c4.974-6.559 10.069-12.652 15.192-18.18ZM82.802 87.83a323.167 323.167 0 0 0-7.227 13.238c-3.184-7.553-5.909-14.98-8.134-22.152c7.304-1.634 15.093-2.97 23.209-3.984a321.524 321.524 0 0 0-7.848 12.897Zm8.081 65.352c-8.385-.936-16.291-2.203-23.593-3.793c2.26-7.3 5.045-14.885 8.298-22.6a321.187 321.187 0 0 0 7.257 13.246c2.594 4.48 5.28 8.868 8.038 13.147Zm37.542 31.03c-5.184-5.592-10.354-11.779-15.403-18.433c4.902.192 9.899.29 14.978.29c5.218 0 10.376-.117 15.453-.343c-4.985 6.774-10.018 12.97-15.028 18.486Zm52.198-57.817c3.422 7.8 6.306 15.345 8.596 22.52c-7.422 1.694-15.436 3.058-23.88 4.071a382.417 382.417 0 0 0 7.859-13.026a347.403 347.403 0 0 0 7.425-13.565Zm-16.898 8.101a358.557 358.557 0 0 1-12.281 19.815a329.4 329.4 0 0 1-23.444.823c-7.967 0-15.716-.248-23.178-.732a310.202 310.202 0 0 1-12.513-19.846h.001a307.41 307.41 0 0 1-10.923-20.627a310.278 310.278 0 0 1 10.89-20.637l-.001.001a307.318 
307.318 0 0 1 12.413-19.761c7.613-.576 15.42-.876 23.31-.876H128c7.926 0 15.743.303 23.354.883a329.357 329.357 0 0 1 12.335 19.695a358.489 358.489 0 0 1 11.036 20.54a329.472 329.472 0 0 1-11 20.722Zm22.56-122.124c8.572 4.944 11.906 24.881 6.52 51.026c-.344 1.668-.73 3.367-1.15 5.09c-10.622-2.452-22.155-4.275-34.23-5.408c-7.034-10.017-14.323-19.124-21.64-27.008a160.789 160.789 0 0 1 5.888-5.4c18.9-16.447 36.564-22.941 44.612-18.3ZM128 90.808c12.625 0 22.86 10.235 22.86 22.86s-10.235 22.86-22.86 22.86s-22.86-10.235-22.86-22.86s10.235-22.86 22.86-22.86Z"></path></svg>


68
frontend/src/index.css Normal file

@@ -0,0 +1,68 @@
:root {
font-family: system-ui, Avenir, Helvetica, Arial, sans-serif;
line-height: 1.5;
font-weight: 400;
color-scheme: light dark;
color: rgba(255, 255, 255, 0.87);
background-color: #242424;
font-synthesis: none;
text-rendering: optimizeLegibility;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
a {
font-weight: 500;
color: #646cff;
text-decoration: inherit;
}
a:hover {
color: #535bf2;
}
body {
margin: 0;
display: flex;
place-items: center;
min-width: 320px;
min-height: 100vh;
}
h1 {
font-size: 3.2em;
line-height: 1.1;
}
button {
border-radius: 8px;
border: 1px solid transparent;
padding: 0.6em 1.2em;
font-size: 1em;
font-weight: 500;
font-family: inherit;
background-color: #1a1a1a;
cursor: pointer;
transition: border-color 0.25s;
}
button:hover {
border-color: #646cff;
}
button:focus,
button:focus-visible {
outline: 4px auto -webkit-focus-ring-color;
}
@media (prefers-color-scheme: light) {
:root {
color: #213547;
background-color: #ffffff;
}
a:hover {
color: #747bff;
}
button {
background-color: #f9f9f9;
}
}

19
frontend/src/main.jsx Normal file

@@ -0,0 +1,19 @@
import React from 'react'
import ReactDOM from 'react-dom/client'
import App from './App.jsx'
import { ConfigProvider, theme } from 'antd';
ReactDOM.createRoot(document.getElementById('root')).render(
<React.StrictMode>
<ConfigProvider
theme={{
algorithm: theme.darkAlgorithm,
token: {
colorPrimary: '#1DB954', // Spotify Green
},
}}
>
<App />
</ConfigProvider>
</React.StrictMode>,
)

16
frontend/vite.config.js Normal file

@@ -0,0 +1,16 @@
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
// https://vitejs.dev/config/
export default defineConfig({
plugins: [react()],
server: {
proxy: {
'/api': {
target: 'http://localhost:8000',
changeOrigin: true,
rewrite: (path) => path.replace(/^\/api/, ''),
},
},
},
})
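During development (`npm run dev`), this proxy mirrors the production nginx rewrite: requests to `/api/*` are forwarded to the backend on port 8000 with the prefix stripped, so `App.jsx` can use the same `/api/...` URLs in both environments.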