Implement Phase 2 Frontend and Phase 3 Data Enrichment

- Initialize React+Vite Frontend with Ant Design Dashboard.
- Implement Data Enrichment: ReccoBeats (Audio Features) and Spotify (Genres).
- Update Database Schema via Alembic Migrations.
- Add Docker support (Dockerfile, docker-compose.yml).
- Update README with hosting instructions.
Author: google-labs-jules[bot]
Date: 2025-12-24 21:34:36 +00:00
Parent: 3a424d15a5
Commit: 0ca9893c68
15 changed files with 607 additions and 60 deletions

README.md

# Music Analyser
A personal analytics dashboard for your music listening habits, powered by Python, FastAPI, React, and Google Gemini AI.
## Project Structure
- `backend/`: FastAPI backend for data ingestion and API.
- `app/ingest.py`: Background worker that polls Spotify and enriches data via ReccoBeats.
- `app/services/`: Logic for Spotify, ReccoBeats, and Gemini APIs.
- `app/models.py`: Database schema (Tracks, PlayHistory).
- `frontend/`: React + Vite frontend for visualizing the dashboard.
- `docker-compose.yml`: For easy deployment.
## Features
- **Continuous Ingestion**: Polls Spotify every 60 seconds to record your listening history.
- **Data Enrichment**: Automatically fetches **Genres** (via Spotify) and **Audio Features** (Energy, BPM, Mood via ReccoBeats).
- **Dashboard**: A responsive UI to view your history and stats.
- **AI Ready**: Database schema and environment prepared for Gemini AI integration.
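The ingest-then-enrich cycle described above can be sketched as follows; `fetch_recently_played`, `fetch_genres`, and `fetch_audio_features` are hypothetical stand-ins for the real Spotify and ReccoBeats clients in `app/services/`:

```python
def ingest_once(fetch_recently_played, fetch_genres, fetch_audio_features):
    """One poll cycle: pull recent plays, then enrich each track."""
    enriched = []
    for play in fetch_recently_played():
        track = dict(play)
        track["genres"] = fetch_genres(play["artist_id"])     # Spotify genre lookup
        track.update(fetch_audio_features(play["track_id"]))  # ReccoBeats features
        enriched.append(track)
    return enriched

# The background worker simply repeats this every 60 seconds, roughly:
#   while True: store(ingest_once(...)); time.sleep(60)
```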
## Hosting Guide (Docker)
This application is designed to run via Docker Compose.
### 1. Prerequisites
- Docker & Docker Compose installed.
- **Spotify Developer Credentials** (Client ID & Secret).
- **Spotify Refresh Token** (Run `backend/scripts/get_refresh_token.py` locally to generate this).
- **Google Gemini API Key**.
### 2. Deployment
1. **Clone the repository**.
2. **Create a `.env` file** in the root directory (or use environment variables directly):
```bash
SPOTIFY_CLIENT_ID="your_client_id"
SPOTIFY_CLIENT_SECRET="your_client_secret"
SPOTIFY_REFRESH_TOKEN="your_refresh_token"
GEMINI_API_KEY="your_gemini_key"
```
3. **Run with Docker Compose**:
```bash
docker-compose up -d --build
```
This will:
- Build and start the **Backend** (port 8000).
- Build and start the **Frontend** (port 8991).
- Mount a **persistent host directory** at `/opt/mySpotify` (bound to the container's database path) to ensure **no data loss** during updates.
4. **Access the Dashboard**:
Open your browser to `http://localhost:8991` (or your server IP).
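All four variables from step 2 are required at startup. A minimal presence check could look like the sketch below; this is illustrative only, not the project's actual startup code:

```python
import os

REQUIRED_VARS = (
    "SPOTIFY_CLIENT_ID",
    "SPOTIFY_CLIENT_SECRET",
    "SPOTIFY_REFRESH_TOKEN",
    "GEMINI_API_KEY",
)

def missing_vars(env=None):
    """Return the names of required settings absent from the environment."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Docker Compose injects the `.env` values into the containers, so a check like this passes once the file above is in place.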
### 3. Data Persistence & Updates
- **Data**: All data is stored in `music.db` inside the container, which is mounted to `/opt/mySpotify/music.db` on your host machine.
- **Migrations**: The project uses **Alembic** for database migrations. When you update the container image in the future, the backend will automatically apply any schema changes without deleting your data.
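Conceptually, Alembic stores a revision stamp in the database and applies only the migration scripts newer than it. A toy illustration of that mechanism (deliberately not Alembic's real API):

```python
def apply_pending(migrations, current_rev):
    """Apply every migration after current_rev, in order; return the new head.

    migrations: ordered list of (revision, apply_fn) pairs, like the
    scripts in Alembic's versions/ directory.
    """
    revs = [rev for rev, _ in migrations]
    start = revs.index(current_rev) + 1 if current_rev in revs else 0
    for rev, apply_fn in migrations[start:]:
        apply_fn()  # e.g. ALTER TABLE tracks ADD COLUMN energy ...
        current_rev = rev
    return current_rev
```

Because only unapplied revisions run, restarting an up-to-date container is a no-op and existing rows are never touched.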
### 4. Pulling from Registry (Alternative)
If you prefer to pull the pre-built image instead of building locally:
```bash
docker pull ghcr.io/bnair123/musicanalyser:latest
```
(Note: you still need to mount the volume and pass environment variables as shown in `docker-compose.yml`.)
## Local Development
Requires Python 3.11+.
1. **Backend**:
```bash
cd backend
pip install -r requirements.txt
python run_worker.py # Starts ingestion
uvicorn app.main:app --reload # Starts API
```
The API will be available at `http://localhost:8000`.
You can also trigger an ingestion run manually via the API:
```bash
curl -X POST http://localhost:8000/trigger-ingest
```
Or run the ingestion logic directly from a Python shell (see `app/ingest.py`).
To build only the backend image:
```bash
docker build -t music-analyser-backend ./backend
```
2. **Frontend**:
```bash
cd frontend
npm install
npm run dev
```