MarioGPT Quick Reference Guide
Quick Start Commands
Installation
```bash
# Basic installation
pip install -r requirements.txt

# With extras
pip install -e ".[gradio,mcp]"

# Development mode
pip install -e ".[dev,gradio,mcp]"
```
Running
Gradio Web Interface:
```bash
python app.py
# Visit: http://localhost:7860
```
MCP Server:
```bash
python mcp_server.py
```
Tests:
```bash
python test_mcp_server.py
```
Level Generation Parameters
Prompt Components
| Component | Options | Description |
|---|---|---|
| Pipes | no, little, some, many | Number of pipes in level |
| Enemies | no, little, some, many | Enemy density |
| Blocks | little, some, many | Platform/block count |
| Elevation | low, high | Vertical platforming |
Example Prompts
"many pipes, some enemies, low elevation"
"no pipes, many enemies, high elevation"
"some pipes, some enemies, some blocks, low elevation"
Advanced Parameters
| Parameter | Range | Default | Description |
|---|---|---|---|
| temperature | 0.1-2.0 | 2.0 | Higher = more diverse |
| level_size | 100-2799 | 1399 | Level length in tokens |
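These parameters map onto the Python API roughly as follows. This is a minimal sketch: the `sample` call assumes this package mirrors the upstream mario-gpt sampling interface (with `num_steps` playing the role of `level_size`), so check `supermariogpt/lm.py` for the exact signature and return type.

```python
from supermariogpt.lm import MarioLM

# Load the pretrained model (same call as in the Debugging section below)
mario_lm = MarioLM()

# Assumption: `sample` mirrors the upstream mario-gpt interface
level = mario_lm.sample(
    prompts=["many pipes, some enemies, low elevation"],
    num_steps=1399,    # level_size: level length in tokens
    temperature=2.0,   # higher = more diverse layouts
)
```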
MCP Tools
1. generate_mario_level
```json
{
  "prompt": "many enemies, high elevation",
  "temperature": 1.5,
  "level_size": 1000
}
```
Returns: PNG image + text description
2. get_level_suggestions
```json
{}
```
Returns: List of example prompts
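A minimal sketch of invoking these tools from Python with the official `mcp` client SDK over stdio. It assumes `mcp_server.py` uses the stdio transport, as the echo-based manual test in the Debugging section implies; adjust the command and arguments to match your environment.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main():
    # Launch mcp_server.py as a stdio MCP server
    params = StdioServerParameters(command="python", args=["mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List registered tools
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # Call generate_mario_level with the arguments shown above
            result = await session.call_tool(
                "generate_mario_level",
                {"prompt": "many enemies, high elevation",
                 "temperature": 1.5,
                 "level_size": 1000},
            )
            print(result.content)


asyncio.run(main())
```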
Keyboard Controls
In the playable demo:
- Arrow Keys - Move left/right
- A - Run
- S - Jump
- D - Shoot fireballs
Common Issues & Fixes
CUDA Out of Memory
```python
# Use smaller level_size
level_size = 500
```
```bash
# Or force CPU mode
export CUDA_VISIBLE_DEVICES=""
```
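If the error surfaces at runtime rather than at startup, a retry with a smaller level can be wrapped around the generation call. This is a sketch only: `generate_level` is a hypothetical placeholder for whatever function your app uses to run MarioGPT generation.

```python
import torch


def generate_with_fallback(generate_level, prompt):
    # `generate_level` is a hypothetical placeholder for your generation call
    try:
        return generate_level(prompt, level_size=1000)
    except torch.cuda.OutOfMemoryError:
        # Free cached allocations, then retry with a smaller level
        torch.cuda.empty_cache()
        return generate_level(prompt, level_size=500)
```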
MCP Server Won't Start
```bash
# Check dependencies
pip install mcp pydantic

# Verify Python path
which python
```
Static Files Missing
```bash
# Create directory
mkdir -p static

# Set permissions
chmod 755 static
```
Import Errors
```bash
# Install in editable mode
pip install -e .

# Or add to PYTHONPATH
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
```
Temperature Guide
| Temperature | Behavior | Use Case |
|---|---|---|
| 0.1-0.5 | Very consistent | Reproducible levels |
| 0.5-1.0 | Balanced | Production quality |
| 1.0-1.5 | Creative | Interesting variety |
| 1.5-2.0 | Wild | Experimental levels |
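To pick a value from this table, it can help to sweep a single prompt across several temperatures and compare the output. This reuses `mario_lm` and the assumed `sample` signature from the earlier sketch.

```python
# Reuses `mario_lm` and the assumed `sample` signature from the sketch above
prompt = "some pipes, some enemies, some blocks, low elevation"
for temperature in (0.4, 0.8, 1.2, 1.6, 2.0):
    level = mario_lm.sample(prompts=[prompt], num_steps=800, temperature=temperature)
    print(f"temperature={temperature}: generation finished")
```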
Project Structure
```
mario-gpt/
├── app.py                # Gradio web interface
├── mcp_server.py         # MCP server for HuggingChat
├── requirements.txt      # Python dependencies
├── setup.py              # Package configuration
├── README.md             # Documentation
├── INSTALLATION.md       # Deployment guide
├── BUGFIXES.md           # Change log
├── test_mcp_server.py    # Test suite
├── mcp_config.json       # MCP configuration
├── static/               # Generated HTML files
├── data/
│   └── tiles/            # Mario tile assets
└── supermariogpt/        # Core package
    ├── lm.py             # Language model
    ├── dataset.py        # Data handling
    ├── prompter.py       # Prompt engineering
    └── utils.py          # Utilities
```
Environment Variables
```bash
# Force CPU mode
export CUDA_VISIBLE_DEVICES=""

# Set tile directory
export TILE_DIR="/path/to/tiles"

# Logging level
export LOG_LEVEL="DEBUG"

# Python path
export PYTHONPATH="."
```
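A sketch of how these variables might be read at application startup. The variable names are the ones listed above; the fallback defaults shown here are assumptions, not the app's actual defaults.

```python
import logging
import os

# Variable names come from the list above; the fallback defaults are assumptions
TILE_DIR = os.environ.get("TILE_DIR", "data/tiles")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

logging.basicConfig(level=getattr(logging, LOG_LEVEL.upper(), logging.INFO))
logging.getLogger(__name__).debug("Using tile directory: %s", TILE_DIR)
```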
Dependencies Overview
Core
- torch - Deep learning framework
- transformers - GPT-2 model
- numpy - Numerical operations
- scipy - Scientific computing
Web Interface
- gradio - Web UI framework
- fastapi - API framework
- uvicorn - ASGI server
- spaces - Hugging Face Spaces GPU decorator
Image Processing
- pillow - Image manipulation
MCP Support
- mcp - Model Context Protocol
- pydantic - Data validation
Development
- pytest - Testing framework
- black - Code formatting
- flake8 - Linting
- isort - Import sorting
Best Practices
For Development
- Use virtual environment
- Install in editable mode: pip install -e .
- Run tests before committing
- Format code with black
- Check with flake8
For Deployment
- Use specific package versions
- Enable GPU for production
- Set up monitoring/logging
- Configure persistent storage
- Use environment variables
For MCP Integration
- Test server independently first
- Verify configuration file syntax (see the check after this list)
- Check Python path in config
- Monitor server logs
- Handle errors gracefully
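One way to act on the configuration checks above before wiring the server into a client. The key lookup inside mcp_config.json is an assumption; adapt it to the file's actual structure.

```python
import json
import shutil

# Parse the config to catch JSON syntax errors early
with open("mcp_config.json") as f:
    config = json.load(f)
print("mcp_config.json parsed OK, top-level keys:", list(config.keys()))

# Assumption: the config names an interpreter/command somewhere; verify that
# it resolves on PATH (adjust the key to match the actual file layout)
command = config.get("command", "python")
print("Command resolves to:", shutil.which(command))
```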
Additional Resources
Documentation
Repositories
Pro Tips
- Faster Generation: Use smaller level_size during testing
- Better Quality: Use temperature 1.0-1.5 for best results
- Diverse Levels: Combine different prompt styles
- GPU Optimization: Enable TF32 for Ampere GPUs
- Memory Management: Clear CUDA cache between generations (see the sketch below)
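The last two tips in concrete form, using standard PyTorch settings (a sketch to adapt, not code taken from this repo):

```python
import torch

if torch.cuda.is_available():
    # Enable TF32 matmuls on Ampere or newer GPUs for faster generation
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    # Release cached allocations between generations
    torch.cuda.empty_cache()
```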
Debugging
Enable Verbose Logging
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
Check CUDA Availability
```python
import torch

print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"CUDA device: {torch.cuda.get_device_name(0)}")
```
Test Model Loading
```python
from supermariogpt.lm import MarioLM

mario_lm = MarioLM()
print("Model loaded successfully!")
```
Test MCP Server
```bash
# Run test suite
python test_mcp_server.py

# Manual test
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | python mcp_server.py
```
Example Use Cases
1. Easy Tutorial Level
Prompt: "little pipes, no enemies, little blocks, low elevation"
Temperature: 1.0
Level Size: 800
2. Challenging Action Level
Prompt: "many pipes, many enemies, some blocks, high elevation"
Temperature: 1.5
Level Size: 1500
3. Platform-Heavy Level
Prompt: "no pipes, little enemies, many blocks, high elevation"
Temperature: 1.2
Level Size: 1200
4. Experimental Level
Prompt: "many pipes, many enemies, many blocks, high elevation"
Temperature: 2.0
Level Size: 2000
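The four configurations above can also be generated in one pass with a small loop. This reuses `mario_lm` and the sampling assumption from the earlier sketches; the dictionary keys are illustrative labels only.

```python
# Reuses `mario_lm` and the assumed `sample` signature from the earlier sketch
use_cases = {
    "easy_tutorial": ("little pipes, no enemies, little blocks, low elevation", 1.0, 800),
    "challenging_action": ("many pipes, many enemies, some blocks, high elevation", 1.5, 1500),
    "platform_heavy": ("no pipes, little enemies, many blocks, high elevation", 1.2, 1200),
    "experimental": ("many pipes, many enemies, many blocks, high elevation", 2.0, 2000),
}

for name, (prompt, temperature, level_size) in use_cases.items():
    level = mario_lm.sample(prompts=[prompt], num_steps=level_size, temperature=temperature)
    print(f"{name}: generated with temperature={temperature}, level_size={level_size}")
```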
Last Updated: 2024 | Version: 1.0.0