Deep Dive: A Flask + Vue.js Smart Community Waste-Sorting Management System, a Full-Stack Development Guide for a 三创赛 Entry

Project Overview and Innovations

Background

As China's waste-sorting policies roll out nationwide, community-level waste management faces several challenges: residents lack sorting knowledge, drop-offs are hard to supervise, and day-to-day management is inefficient. This project builds a smart, visual, and interactive community waste-sorting management system that uses AI image recognition to improve sorting accuracy and management efficiency.

Innovation Highlights

  1. AI recognition: a deep-learning model classifies photos of waste automatically

  2. Points incentive: a resident points system raises participation

  3. Visual data dashboard: real-time display of community sorting data and trends

  4. Mobile convenience: quick lookup and reporting via a mini-program
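The points incentive above can be sketched as a small rule table mapping a correctly sorted drop-off to a points award. The category names match the classifier's four classes; the point values and the rule shape are illustrative assumptions, not something the project fixes:

```python
# Hypothetical point values per correctly sorted category (illustrative only).
POINTS_BY_CATEGORY = {
    "可回收物": 10,  # recyclable
    "有害垃圾": 15,  # hazardous
    "厨余垃圾": 5,   # kitchen/food waste
    "其他垃圾": 2,   # other
}

def award_points(category: str, correct: bool) -> int:
    """Return the points earned for a single drop-off record.
    Incorrect or unknown categories earn nothing."""
    if not correct:
        return 0
    return POINTS_BY_CATEGORY.get(category, 0)
```

A real implementation would persist the award to the user's `points` column inside the same transaction as the drop-off record.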

Technology Stack

Backend

  • Web framework: Flask + Flask-RESTful

  • Databases: PostgreSQL + Redis

  • AI model: PyTorch + a pretrained ResNet

  • Task queue: Celery + Redis

  • API docs: Swagger/OpenAPI

Frontend

  • Core: Vue.js 3 + Vue Router + Pinia

  • UI components: Element Plus

  • Visualization: ECharts

  • Mobile: UniApp (mini-program compatible)

Deployment and Operations

  • Containers: Docker + Docker Compose

  • CI/CD: GitHub Actions

  • Monitoring: Prometheus + Grafana

Project Architecture

text

garbage-classification-system/
├── backend/                 # Flask backend
│   ├── app/
│   │   ├── api/            # API blueprints
│   │   ├── models/         # data models
│   │   ├── services/       # business logic
│   │   ├── utils/          # helpers
│   │   └── ai_model/       # AI model code
│   ├── migrations/         # database migrations
│   ├── tests/              # unit tests
│   └── requirements.txt
├── frontend/               # Vue frontend
│   ├── src/
│   │   ├── views/         # page components
│   │   ├── components/    # shared components
│   │   ├── store/         # state management
│   │   ├── router/        # routing
│   │   └── api/           # API clients
│   └── package.json
├── mobile/                 # mini-program
├── docker-compose.yml
├── README.md
└── .gitignore

Core Feature Implementation

1. Backend Core Code

app/__init__.py - Flask application factory

python

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_cors import CORS
from flask_jwt_extended import JWTManager
from config import Config

db = SQLAlchemy()
jwt = JWTManager()

def create_app(config_class=Config):
    app = Flask(__name__)
    app.config.from_object(config_class)
    
    # Initialize extensions
    CORS(app)
    db.init_app(app)
    jwt.init_app(app)
    
    # Register blueprints
    from app.api import bp as api_bp
    app.register_blueprint(api_bp, url_prefix='/api/v1')
    
    # Create database tables
    with app.app_context():
        db.create_all()
    
    return app
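The factory imports `Config` from a `config` module that is not shown. A minimal sketch of what it might contain, with development fallbacks when the environment variables are unset (the variable names mirror docker-compose.yml; everything else is an assumption):

```python
import os

class Config:
    # Secrets should come from the environment in any real deployment;
    # the fallbacks below exist only so local development starts up.
    SECRET_KEY = os.environ.get("SECRET_KEY", "dev-secret-change-me")
    SQLALCHEMY_DATABASE_URI = os.environ.get(
        "DATABASE_URL",
        "postgresql://admin:secure_password@localhost:5432/garbage_db",
    )
    SQLALCHEMY_TRACK_MODIFICATIONS = False  # silences a noisy default warning
    JWT_SECRET_KEY = os.environ.get("JWT_SECRET_KEY", SECRET_KEY)
```

Because `create_app` takes the config class as a parameter, tests can pass a subclass that points `SQLALCHEMY_DATABASE_URI` at SQLite instead.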

app/models/user.py - User model

python

from app import db
from datetime import datetime
from werkzeug.security import generate_password_hash, check_password_hash

class User(db.Model):
    __tablename__ = 'users'
    
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), unique=True, index=True)
    email = db.Column(db.String(120), unique=True, index=True)
    password_hash = db.Column(db.String(128))
    phone = db.Column(db.String(20))
    address = db.Column(db.String(200))
    points = db.Column(db.Integer, default=0)
    created_at = db.Column(db.DateTime, default=datetime.utcnow)
    
    # Relationships
    garbage_records = db.relationship('GarbageRecord', backref='user', lazy='dynamic')
    
    def set_password(self, password):
        self.password_hash = generate_password_hash(password)
    
    def check_password(self, password):
        return check_password_hash(self.password_hash, password)
    
    def to_dict(self):
        return {
            'id': self.id,
            'username': self.username,
            'email': self.email,
            'points': self.points,
            'created_at': self.created_at.isoformat()
        }

app/services/ai_classifier.py - AI waste classifier

python

import torch
import torch.nn as nn
from PIL import Image
import torchvision.transforms as transforms
from torchvision.models import resnet50

class GarbageClassifier:
    def __init__(self, model_path='models/garbage_resnet50.pth'):
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.model = self._load_model(model_path)
        self.classes = ['可回收物', '有害垃圾', '厨余垃圾', '其他垃圾']  # recyclable, hazardous, kitchen, other
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
    
    def _load_model(self, model_path):
        """加载预训练模型"""
        model = resnet50(pretrained=False)
        num_ftrs = model.fc.in_features
        model.fc = nn.Linear(num_ftrs, 4)  # 4 classes
        
        model.load_state_dict(torch.load(model_path, map_location=self.device))
        
        model.eval()
        return model.to(self.device)
    
    def predict(self, image_path):
        """预测图片中的垃圾类型"""
        image = Image.open(image_path).convert('RGB')
        image_tensor = self.transform(image).unsqueeze(0).to(self.device)
        
        with torch.no_grad():
            outputs = self.model(image_tensor)
            probabilities = torch.nn.functional.softmax(outputs, dim=1)
            _, predicted = torch.max(outputs, 1)
            
        return {
            'category': self.classes[predicted.item()],
            'confidence': probabilities[0][predicted.item()].item(),
            'probabilities': {
                cls: prob.item() for cls, prob in zip(self.classes, probabilities[0])
            }
        }
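`predict()` returns a confidence score, and low-confidence results probably should not be trusted automatically (e.g. for awarding points). A pure-Python sketch of a post-processing step; the 0.6 threshold and the `needs_review` flag are assumptions, not part of the classifier above:

```python
def postprocess(prediction: dict, threshold: float = 0.6) -> dict:
    """Flag low-confidence predictions for manual review instead of
    auto-crediting the result. Expects the dict shape returned by
    GarbageClassifier.predict()."""
    result = dict(prediction)  # copy; leave the original untouched
    result["needs_review"] = prediction["confidence"] < threshold
    return result
```

The API layer could then return `needs_review` to the client, which asks the resident to confirm the category before the record is saved.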

2. Frontend Core Components

src/views/Dashboard.vue - data dashboard

vue

<template>
  <!-- Minimal template sketch (assumed; the original template markup was not preserved) -->
  <div class="dashboard">
    <StatCard v-for="stat in statsData" :key="stat.title" v-bind="stat" />
    <div ref="pieChart" class="chart" style="height: 300px"></div>
    <div ref="lineChart" class="chart" style="height: 300px"></div>
    <RecentRecords :records="recentRecords" />
  </div>
</template>

<script setup>
import { ref, onMounted } from 'vue'
import * as echarts from 'echarts'
import StatCard from '@/components/StatCard.vue'
import RecentRecords from '@/components/RecentRecords.vue'
import { getDashboardData } from '@/api/dashboard'

const statsData = ref([
  { title: '今日投放', value: 0, icon: 'trash', color: '#409EFF' },
  { title: '正确率', value: '0%', icon: 'check', color: '#67C23A' },
  { title: '活跃用户', value: 0, icon: 'user', color: '#E6A23C' },
  { title: '累计积分', value: 0, icon: 'coin', color: '#F56C6C' }
])

const recentRecords = ref([])
const pieChart = ref(null)
const lineChart = ref(null)

onMounted(async () => {
  await loadDashboardData()
  initCharts()
})

const loadDashboardData = async () => {
  try {
    const data = await getDashboardData()
    statsData.value[0].value = data.today_count
    statsData.value[1].value = `${data.accuracy_rate}%`
    statsData.value[2].value = data.active_users
    statsData.value[3].value = data.total_points
    recentRecords.value = data.recent_records
  } catch (error) {
    console.error('加载仪表盘数据失败:', error)
  }
}

const initCharts = () => {
  // Initialize the pie chart
  const pieInstance = echarts.init(pieChart.value)
  pieInstance.setOption({
    tooltip: { trigger: 'item' },
    legend: { top: 'bottom' },
    series: [{
      name: '垃圾分类',
      type: 'pie',
      radius: ['40%', '70%'],
      data: [
        { value: 335, name: '可回收物' },
        { value: 310, name: '有害垃圾' },
        { value: 234, name: '厨余垃圾' },
        { value: 135, name: '其他垃圾' }
      ],
      emphasis: {
        itemStyle: {
          shadowBlur: 10,
          shadowOffsetX: 0,
          shadowColor: 'rgba(0, 0, 0, 0.5)'
        }
      }
    }]
  })
  
  // Initialize the line chart
  const lineInstance = echarts.init(lineChart.value)
  lineInstance.setOption({
    tooltip: { trigger: 'axis' },
    xAxis: {
      type: 'category',
      data: ['周一', '周二', '周三', '周四', '周五', '周六', '周日']
    },
    yAxis: { type: 'value' },
    series: [{
      data: [120, 200, 150, 80, 70, 110, 130],
      type: 'line',
      smooth: true,
      areaStyle: {}
    }]
  })
}
</script>
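The dashboard consumes fields such as `today_count` and `accuracy_rate` from `getDashboardData`. On the backend these can be produced by a simple aggregation over the day's drop-off records; a sketch, assuming each record carries an `is_correct` flag (that field name is an assumption):

```python
def dashboard_stats(records: list[dict]) -> dict:
    """Aggregate drop-off records into the shape the dashboard reads."""
    total = len(records)
    correct = sum(1 for r in records if r.get("is_correct"))
    return {
        "today_count": total,
        # Percentage rounded to one decimal place; 0 when there is no data yet.
        "accuracy_rate": round(100 * correct / total, 1) if total else 0,
    }
```

With Redis already in the stack, the result could be cached for a short TTL so the dashboard does not hit PostgreSQL on every poll.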

3. AI Model Training Code

train_model.py - training script

python

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import argparse
from tqdm import tqdm

def train_model(model, dataloaders, criterion, optimizer, num_epochs=25):
    """训练模型"""
    best_acc = 0.0
    
    for epoch in range(num_epochs):
        print(f'Epoch {epoch}/{num_epochs-1}')
        print('-' * 10)
        
        # Each epoch has a training phase and a validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()
            else:
                model.eval()
            
            running_loss = 0.0
            running_corrects = 0
            
            # Iterate over the data
            for inputs, labels in tqdm(dataloaders[phase]):
                inputs = inputs.to(device)
                labels = labels.to(device)
                
                # Zero the parameter gradients
                optimizer.zero_grad()
                
                # Forward pass
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)
                    
                    # Backward pass + optimize only in the training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()
                
                # Statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
            
            epoch_loss = running_loss / len(dataloaders[phase].dataset)
            epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)
            
            print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')
            
            # Save the best weights seen so far
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                torch.save(model.state_dict(), 'best_model.pth')
    
    print(f'Best val Acc: {best_acc:.4f}')
    return model

if __name__ == '__main__':
    # Data transforms (augmentation for training, deterministic for validation)
    data_transforms = {
        'train': transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
        'val': transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
    }
    
    # Load the datasets
    image_datasets = {
        'train': datasets.ImageFolder('data/train', data_transforms['train']),
        'val': datasets.ImageFolder('data/val', data_transforms['val'])
    }
    
    dataloaders = {
        x: DataLoader(image_datasets[x], batch_size=32, shuffle=True, num_workers=4)
        for x in ['train', 'val']
    }
    
    # Initialize the model
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = models.resnet50(pretrained=True)
    num_ftrs = model.fc.in_features
    model.fc = nn.Linear(num_ftrs, 4)  # 4 classes
    model = model.to(device)
    
    # Loss function and optimizer
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)
    
    # Train the model
    model = train_model(model, dataloaders, criterion, optimizer, num_epochs=25)
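The script assumes `data/train` and `data/val` already exist in `ImageFolder` layout (one subdirectory per class). A small helper for splitting one class's flat list of image files into those two sets; the 80/20 ratio and the fixed seed are assumptions chosen for reproducibility:

```python
import random

def split_train_val(files: list[str], val_ratio: float = 0.2, seed: int = 42):
    """Deterministically shuffle file names and split them into
    (train, val) lists for an ImageFolder-style directory layout."""
    files = sorted(files)           # canonical order before shuffling
    rng = random.Random(seed)       # local RNG so global state is untouched
    rng.shuffle(files)
    n_val = int(len(files) * val_ratio)
    return files[n_val:], files[:n_val]
```

Running this per class and copying the files into `data/train/<class>/` and `data/val/<class>/` produces exactly the layout `datasets.ImageFolder` expects.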

Deployment and Operation

Docker Deployment

docker-compose.yml

yaml

version: '3.8'

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_DB: garbage_db
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secure_password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:6-alpine
    ports:
      - "6379:6379"

  backend:
    build: ./backend
    ports:
      - "5000:5000"
    environment:
      DATABASE_URL: postgresql://admin:secure_password@postgres:5432/garbage_db
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - postgres
      - redis
    volumes:
      - ./backend:/app
      - model_volume:/app/models

  frontend:
    build: ./frontend
    ports:
      - "8080:80"
    depends_on:
      - backend

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - backend
      - frontend

volumes:
  postgres_data:
  model_volume:

Running the Project

bash

# 1. Clone the project
git clone https://github.com/yourusername/garbage-classification-system.git
cd garbage-classification-system

# 2. Install dependencies
cd backend && pip install -r requirements.txt
cd ../frontend && npm install

# 3. Configure environment variables
cp backend/.env.example backend/.env
# Edit .env to set the database connection, secrets, etc.

# 4. Initialize the database
flask db init
flask db migrate
flask db upgrade

# 5. Train the AI model (optional)
python train_model.py

# 6. Start the development servers
# backend
cd backend && flask run --port=5000
# frontend
cd frontend && npm run serve

# 7. Or deploy with Docker
docker-compose up -d
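Before starting the backend it helps to fail fast when the environment variables from step 3 are missing. A stdlib-only sketch; the variable names mirror docker-compose.yml, and the helper itself is an assumption rather than part of the project:

```python
import os

REQUIRED_VARS = ("DATABASE_URL", "REDIS_URL")

def missing_env(env=os.environ) -> list[str]:
    """Return the names of required variables that are unset or empty,
    in REQUIRED_VARS order."""
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Calling this at startup and aborting with a clear message when the list is non-empty avoids the much more opaque connection errors that surface later.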

Project Highlights and Competition Tips

Technical Highlights

  1. Full-stack integration: separated frontend and backend, microservice-style architecture

  2. Applied AI: a deep-learning model in real use

  3. Data visualization: rich data display and analysis

  4. Responsive design: multi-device support and a polished user experience

Business Value

  1. Policy fit: aligned with national waste-sorting policy

  2. Social benefit: more efficient community management plus environmental education

  3. Business model: points mall, advertising, data services

Competition Tips

  1. Lead with innovation: highlight AI recognition accuracy and user experience

  2. Prepare demo data: demonstrate with real community data

  3. Make the business model concrete: design a viable revenue model

  4. Show a clear division of roles: demonstrate both technical and business strengths

Future Improvements

  1. Technical

    • Real-time object detection with YOLOv8

    • GPT-4 integration for recycling suggestions

    • A blockchain-backed points system

  2. Features

    • Pickup appointment booking

    • An environmental-knowledge Q&A community

    • AR sorting guidance

  3. Deployment

    • Kubernetes cluster deployment

    • Edge-computing optimization

    • CDN acceleration for static assets

Summary

This project offers a complete, practical, and innovative Python full-stack plan for a 三创赛 entry. By combining AI, data visualization, and community management it solves a real problem while showcasing a team's technical strength. Teams should adapt the feature set to their own situation and emphasize the project's innovation and practicality.

posted @ 2026-01-04 20:59  yangykaifa