Visualizing Traces
https://www.compilenrun.com/docs/framework/fastapi/fastapi-middleware/fastapi-tracing-middleware/
For production applications, you'll want to visualize traces. Here are some popular options:
- Jaeger: Open-source, end-to-end distributed tracing
- Zipkin: Another popular open-source tracing system
- Datadog APM: Commercial solution with extensive features
- New Relic: Commercial APM with distributed tracing support
Most of these tools can be integrated with OpenTelemetry, providing a standardized way to collect and export traces.
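For example, a FastAPI app can be instrumented once with OpenTelemetry and then pointed at any OTLP-capable backend (Jaeger, Tempo, a vendor agent, ...) just by changing the exporter endpoint. A minimal sketch, assuming a collector listens on localhost:4317 and the opentelemetry-sdk, opentelemetry-exporter-otlp, and opentelemetry-instrumentation-fastapi packages are installed:

```python
from fastapi import FastAPI
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure the SDK once; swapping backends only means changing this exporter.
provider = TracerProvider(resource=Resource.create({"service.name": "fastapi-demo"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

app = FastAPI()
FastAPIInstrumentor.instrument_app(app)  # auto-creates a span per request

@app.get("/ping")
async def ping():
    return {"ping": "pong"}
```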
OpenTelemetry FastAPI monitoring
https://uptrace.dev/guides/opentelemetry-fastapi
What is OpenTelemetry?
OpenTelemetry is an open-source observability framework hosted by the Cloud Native Computing Foundation; it is the result of merging the OpenCensus and OpenTracing projects.
OpenTelemetry aims to provide a single standard across all types of observability signals, such as logs, distributed tracing, and metrics.
OpenTelemetry specifies how to collect and send telemetry data to backend platforms. With OpenTelemetry, you can instrument your application once and then add or change vendors without changing the instrumentation.
OpenTelemetry provides detailed instrumentation, enabling you to monitor and measure the performance of your FastAPI applications in real-time. You can identify bottlenecks, optimize code, and ensure your application runs smoothly even under heavy traffic.
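To pin down a bottleneck, suspect code can be wrapped in manual spans on top of the automatic instrumentation. A minimal sketch, assuming a tracer provider is already configured as in the previous example; load_data() and render() are hypothetical stand-ins for real work:

```python
from fastapi import FastAPI
from opentelemetry import trace

app = FastAPI()
tracer = trace.get_tracer(__name__)

async def load_data():
    # stand-in for an I/O-heavy call (database, external API, ...)
    return [1, 2, 3]

def render(data):
    # stand-in for a CPU-heavy rendering step
    return {"rows": len(data)}

@app.get("/report")
async def report():
    # Each with-block becomes its own span, so slow steps show up
    # individually in the trace view instead of one opaque request.
    with tracer.start_as_current_span("load-data") as span:
        data = await load_data()
        span.set_attribute("report.row_count", len(data))
    with tracer.start_as_current_span("render-report"):
        return render(data)
```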
Observability Made Easy: Adding Logs, Traces & Metrics to FastAPI with Logfire
https://dev.to/devgeetech/observability-made-easy-adding-logs-traces-metrics-to-fastapi-with-logfire-529l
Picture this: You deploy your shiny new application. Everything looks great in dev, the logs are clean, requests are snappy, and life is good. Then… disaster strikes. A user reports a bug. Another complains about slow response times. You check your logs—wait, where are they? You SSH into the server, tail some logs, guess what went wrong, and hope for the best. Sound familiar?
Observability—knowing what’s happening inside your app in real time—shouldn’t be this hard. But setting up an observability stack often feels like assembling IKEA furniture with missing instructions. That’s where Logfire comes in.
Pydantic Logfire is a platform that makes it ridiculously easy to add observability to your application, no matter the size. In this post, I’ll show you how to integrate Logfire into a FastAPI app to get instant insights into logs, traces, and metrics—without the usual setup headaches. By the end, you’ll have real-time visibility into what’s happening under the hood, so you can debug, optimize, and sleep better at night.
Let’s get started!
https://pydantic.dev/logfire
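Before the full walkthrough, a minimal sketch of what the integration tends to look like, assuming `pip install 'logfire[fastapi]'` and credentials set up via `logfire auth` or the LOGFIRE_TOKEN environment variable (the endpoint below is illustrative, not from the post):

```python
import logfire
from fastapi import FastAPI

app = FastAPI()

logfire.configure()               # picks up credentials from the environment
logfire.instrument_fastapi(app)   # traces every request automatically

@app.get("/hello")
async def hello(name: str = "world"):
    # structured log line, attached to the current request's trace
    logfire.info("saying hello to {name}", name=name)
    return {"message": f"hello {name}"}
```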
TOOLS
https://github.com/wesdu/fastapi-opentracing
https://github.com/acidjunk/starlette-opentracing
OpenTracing support for Starlette and FastAPI, inspired by Flask-OpenTracing. OpenTracing implementations exist for major distributed tracing systems and can be bound or swapped with a one-line configuration change. This package uses the OpenTracing API for Python to implement its functionality.
The package implements Starlette middleware that adds OpenTracing support to all incoming requests. It also supports using a custom root span by looking at extra headers on incoming requests (a minimal usage sketch follows).
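A hedged usage sketch based on the starlette-opentracing README; the StarletteTracingMiddleWare class name and the jaeger_client setup are assumptions taken from that repo's examples, not verified against the current release:

```python
from fastapi import FastAPI
from jaeger_client import Config
from starlette_opentracing import StarletteTracingMiddleWare

# Build an OpenTracing-compatible Jaeger tracer (sample every request).
opentracing_config = Config(
    config={"sampler": {"type": "const", "param": 1}, "logging": True},
    service_name="fastapi-opentracing-demo",
    validate=True,
)
jaeger_tracer = opentracing_config.initialize_tracer()

app = FastAPI()
# One line binds the tracer; swapping tracing systems means swapping the tracer object.
app.add_middleware(StarletteTracingMiddleWare, tracer=jaeger_tracer)

@app.get("/")
async def root():
    return {"hello": "world"}
```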
https://github.com/Blueswen/fastapi-observability
FastAPI with Observability
Observe a FastAPI application with the three pillars of observability, Traces (Tempo), Metrics (Prometheus), and Logs (Loki), on Grafana through OpenTelemetry and OpenMetrics (a minimal metrics sketch follows the list):
- Traces with Tempo and OpenTelemetry Python SDK
- Metrics with Prometheus and Prometheus Python Client
- Logs with Loki
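For the metrics pillar, a minimal sketch (not this repo's code) of exposing Prometheus metrics from FastAPI with the official Prometheus Python client; the metric names and the /metrics mount point are assumptions:

```python
import time

from fastapi import FastAPI, Request
from prometheus_client import Counter, Histogram, make_asgi_app

# Counter is exposed as http_requests_total; Histogram records latency in seconds.
REQUESTS = Counter("http_requests", "Total HTTP requests", ["method", "path"])
LATENCY = Histogram("http_request_duration_seconds", "Request latency", ["path"])

app = FastAPI()
app.mount("/metrics", make_asgi_app())  # Prometheus scrapes this endpoint

@app.middleware("http")
async def record_metrics(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    REQUESTS.labels(request.method, request.url.path).inc()
    LATENCY.labels(request.url.path).observe(time.perf_counter() - start)
    return response

@app.get("/ping")
async def ping():
    return {"ping": "pong"}
```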
https://github.com/fike/fastapi-blog
https://github.com/fike/fastapi-blog/tree/main/deployments
A FastAPI sample
It is a FastAPI implementation of the backend for a blog system. The project is a fun way to apply things I'm learning: telemetry using open-source projects, CRUD over REST, and GraphQL (if I find the time).
version: "3.9"
services:
  db-fapi-blog:
    environment:
      - POSTGRES_USER=fapi_blog
      - POSTGRES_PASSWORD=fapi_pass
      - POSTGRES_DB=fapi_blog
    image: postgres:13
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - dev
    ports:
      - 5432:5432
  backend:
    build:
      context: ../
      dockerfile: deployments/Dockerfile_backend_dev
      args:
        ENVIRONMENT: development
    volumes:
      - $PWD/backend:/opt/blog/backend
    depends_on:
      - db-fapi-blog
    environment:
      - ENVIRONMENT=development
      - OTELE_TRACE=True
      - SQLALCHEMY_DATABASE_URI=postgresql://fapi_blog:fapi_pass@db-fapi-blog:5432/fapi_blog
      - SECRET_KEY=c4af88af61391658010bd80e6fcb923f3fab38f53a4ece6d963a0c3c2d1e463b
      - ORIGINS=*
      - OTEL_EXPORTER_OTLP_INSECURE=False
    networks:
      - dev
    ports:
      - 8000:8000
    restart: always
  frontend:
    build:
      context: ../
      network: deployments_dev
      dockerfile: deployments/Dockerfile_frontend_dev
      args:
        ENVIRONMENT: development
    volumes:
      - $PWD/frontend:/opt/blog/frontend
    depends_on:
      - backend
    environment:
      - ENVIRONMENT=development
      - OTELE_TRACE=True
    networks:
      - dev
    ports:
      - 3000:3000
    restart: always
  jaeger-all-in-one:
    image: jaegertracing/all-in-one
    environment:
      - JAEGER_DISABLED=false
    networks:
      - dev
    ports:
      - 16686:16686
      - 6831:6831/udp
      - 14268
      - 14250
  # Zipkin
  zipkin-all-in-one:
    image: openzipkin/zipkin:latest
    networks:
      - dev
    ports:
      - "9411:9411"
  # Collector
  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/etc/otel-collector-config.yaml", "${OTELCOL_ARGS}"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    networks:
      - dev
    ports:
      - "1888:1888"   # pprof extension
      - "8888:8888"   # Prometheus metrics exposed by the collector
      - "8889:8889"   # Prometheus exporter metrics
      - "13133:13133" # health_check extension
      - "4317:4317"   # OTLP gRPC receiver
      # - "55679:55679" # zpages extension
      - "9411"        # Zipkin receiver
      - "55679"       # zpages extension
    depends_on:
      - jaeger-all-in-one
      - zipkin-all-in-one
  prometheus:
    container_name: prometheus
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
networks:
  dev:
volumes:
  db-data:
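The backend presumably turns these environment variables into an OTLP exporter at startup. A hedged sketch of that startup logic; the OTELE_TRACE handling is inferred from the compose file above, not taken from the repository's code:

```python
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

def setup_tracing() -> None:
    # Assumed convention: OTELE_TRACE toggles tracing on/off.
    if os.getenv("OTELE_TRACE", "False") != "True":
        return
    exporter = OTLPSpanExporter(
        # Standard OTel env var; defaults to the collector service in the compose network.
        endpoint=os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://otel-collector:4317"),
        insecure=os.getenv("OTEL_EXPORTER_OTLP_INSECURE", "False") == "True",
    )
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)
```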
https://jaeger-cn.readthedocs.io/zh-cn/latest/
Jaeger, a distributed tracing system
Welcome to the Jaeger documentation portal! Both newcomers to Jaeger and experienced users will find what they need below.
If you cannot find what you are looking for, or have another question, you can reach us via GitHub, Gitter chat, or the mailing list.
About
Inspired by Dapper and OpenZipkin, Jaeger is a distributed tracing system developed and open-sourced by Uber Technologies.
It can be used for monitoring microservices-based architectures:
- Distributed context propagation
- Distributed transaction monitoring
- Root cause analysis
- Service dependency analysis
- Performance / latency optimization
The article Evolving Distributed Tracing at Uber explains Jaeger's history and the reasons behind its evolution.
https://github.com/softwarebloat/python-tracing-demo
https://github.com/softwarebloat/python-tracing-demo/blob/main/docker-compose.yml
services:
  grafana:
    image: grafana/grafana:8.2.0
    ports:
      - 5000:3000
    volumes:
      - ./config/grafana.ini:/etc/grafana/grafana.ini
      - ./config/grafana-datasources.yml:/etc/grafana/provisioning/datasources/datasources.yaml
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
    depends_on:
      - tempo
  loki:
    image: grafana/loki:2.3.0
    ports:
      - 3100:3100
    command: -config.file=/etc/loki/local-config.yaml
  tempo:
    image: grafana/tempo:1.2.0
    command: [ "-search.enabled=true", "-config.file=/etc/tempo.yaml" ]
    volumes:
      - ./config/tempo.yaml:/etc/tempo.yaml
    ports:
      - 8000:8000  # tempo
  tracing-demo:
    build: .
    ports:
      - "8080:8080"
    logging:
      driver: loki
      options:
        loki-url: http://localhost:3100/loki/api/v1/push
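For Grafana to jump from a Loki log line to the matching Tempo trace, the log lines need to carry the trace ID. A minimal sketch using the opentelemetry-instrumentation-logging package; this is one assumed way to do it, not necessarily how this particular demo does it:

```python
import logging

from opentelemetry.instrumentation.logging import LoggingInstrumentor

# Injects otelTraceID / otelSpanID into the default logging format, so the
# Loki logging driver ships them along with every line.
LoggingInstrumentor().instrument(set_logging_format=True)

logging.basicConfig(level=logging.INFO)
logging.getLogger(__name__).info("request handled")  # line now carries trace/span IDs
```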
https://github.com/belajarqywok/fastapi-tensorflow-jaeger
Deployment of NLP (Natural Language Processing) and Image Recognition models using FastAPI Framework (RPC / RESTful API), Tensorflow, and Jaeger OpenTelemetry (Distributed Tracing).
https://github.com/dannielshalev/fastapi_alchemy_auditing_demo
This repository demonstrates a FastAPI application integrated with SQLAlchemy for database operations and includes auditing functionality to store request and response details.
- SQLAlchemy Integration: Utilizes SQLAlchemy for database operations, providing an efficient and scalable approach to interact with the database.
- Middleware for Auditing: Implements FastAPI middleware to capture and store auditing information in the database for each incoming request.
The project is organized into the following modules and functions:
- database.py: Manages the database connection and includes the SQLAlchemy logic for interacting with the database.
- modules.py: Defines the database schema, including an example Audit table for storing auditing information.
- get_db: A FastAPI dependency that creates a new SQLAlchemy SessionLocal for each request and closes it once the request is completed.
- db_session_middleware: Middleware function executed for each request, capturing details both before and after the endpoint function is executed. It includes logic to store auditing information (see the sketch after this list).
- store_audit: Function responsible for storing auditing information in the database.
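A minimal sketch of how such auditing middleware can fit together. It reuses names from the description above (Audit, SessionLocal, db_session_middleware, store_audit; get_db is omitted since the middleware manages its own session here), while the table columns and database URL are assumptions, not the repository's actual code:

```python
from fastapi import FastAPI, Request
from sqlalchemy import Column, DateTime, Integer, String, create_engine, func
from sqlalchemy.orm import declarative_base, sessionmaker

engine = create_engine("sqlite:///./audit.db")  # assumed database URL
SessionLocal = sessionmaker(bind=engine, autoflush=False)
Base = declarative_base()

class Audit(Base):
    __tablename__ = "audit"
    id = Column(Integer, primary_key=True)
    method = Column(String)
    path = Column(String)
    status_code = Column(Integer)
    created_at = Column(DateTime, server_default=func.now())

Base.metadata.create_all(engine)
app = FastAPI()

def store_audit(db, request: Request, status_code: int) -> None:
    """Persist one audit row for the finished request."""
    db.add(Audit(method=request.method, path=request.url.path, status_code=status_code))
    db.commit()

@app.middleware("http")
async def db_session_middleware(request: Request, call_next):
    # Open a session before the endpoint runs, record the audit row after it returns.
    db = SessionLocal()
    try:
        response = await call_next(request)
        store_audit(db, request, response.status_code)
        return response
    finally:
        db.close()

@app.get("/items/{item_id}")
async def read_item(item_id: int):
    return {"item_id": item_id}
```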
https://github.com/fanqingsong/fastapi-blog