hermes installation notes
1. Installation. On Windows, open WSL and run:
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
An HTTPS error may appear partway through; changing 2.0 to 1.0 (the HTTP version) fixes it.
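If the failure is curl's HTTP/2 negotiation (a hedged guess at what "change 2.0 to 1.0" refers to), you can force HTTP/1.1 directly on the command line instead:

```shell
# Hedged workaround, assuming the HTTPS error is an HTTP/2 negotiation
# failure: force curl to speak HTTP/1.1 when fetching the installer.
curl --http1.1 -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
```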
2. Configuration
Run hermes setup and choose "quickly" (the quick setup), then select the relevant model. When entering the API key, note that
the typed or pasted key is not echoed on screen. It has in fact been entered, so do not paste it more than once.
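The hidden prompt behaves like the shell's own silent read (a sketch assuming bash): nothing is echoed, but the input is captured in full, so one paste is enough.

```shell
# read -s turns off terminal echo, just like the hermes key prompt:
# the pasted key is captured even though nothing appears on screen.
read -r -s -p "API key: " api_key
echo    # move to a new line after the silent read
```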
3. After switching models, the WeChat side seems to lose its memory. The next time you run hermes from WeChat, it will ask you to execute a command in WSL to restore the pairing:
hermes pairing approve weixin ******
The ****** part is different on every machine.
4. If you use a ModelScope API key, remember to strip the leading MS- prefix.
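The prefix removal above can be done in the shell itself with a parameter expansion, which strips MS- only when it is actually present:

```shell
key="MS-abc123"
# ${key#MS-} removes a leading "MS-" if present, per the note above;
# a key without the prefix passes through unchanged.
key="${key#MS-}"
echo "$key"   # → abc123
```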
Appendix: my config.yaml file
model:
  default: Qwen/Qwen3-Coder-30B-A3B-Instruct
  provider: custom
  base_url: https://api-inference.modelscope.cn/v1/
  api_key:
  providers: {}
  fallback_providers: []
  credential_pool_strategies:
    zai: random
toolsets:
  - hermes-cli
agent:
  max_turns: 90
  gateway_timeout: 1800
  restart_drain_timeout: 60
  service_tier: ''
  tool_use_enforcement: auto
  gateway_timeout_warning: 900
  gateway_notify_interval: 600
terminal:
  backend: local
  modal_mode: auto
  cwd: .
  timeout: 180
  env_passthrough: []
  docker_image: nikolaik/python-nodejs:python3.11-nodejs20
  docker_forward_env: []
  docker_env: {}
  singularity_image: docker://nikolaik/python-nodejs:python3.11-nodejs20
  modal_image: nikolaik/python-nodejs:python3.11-nodejs20
  daytona_image: nikolaik/python-nodejs:python3.11-nodejs20
  container_cpu: 1
  container_memory: 5120
  container_disk: 51200
  container_persistent: true
  docker_volumes: []
  docker_mount_cwd_to_workspace: false
  persistent_shell: true
browser:
  inactivity_timeout: 120
  command_timeout: 30
  record_sessions: false
  allow_private_urls: false
  camofox:
    managed_persistence: false
checkpoints:
  enabled: true
  max_snapshots: 50
file_read_max_chars: 100000
compression:
  enabled: true
  threshold: 0.5
  target_ratio: 0.2
  protect_last_n: 20
smart_model_routing:
  enabled: false
  max_simple_chars: 160
  max_simple_words: 28
  cheap_model: {}
auxiliary:
  vision:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 120
    download_timeout: 30
  web_extract:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 360
  compression:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 120
  session_search:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
  skills_hub:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
  approval:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
  mcp:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
  flush_memories:
    provider: auto
    model: ''
    base_url: ''
    api_key: ''
    timeout: 30
display:
  compact: false
  personality: kawaii
  resume_display: full
  busy_input_mode: interrupt
  bell_on_complete: false
  show_reasoning: false
  streaming: false
  inline_diffs: true
  show_cost: false
  skin: default
  interim_assistant_messages: true
  tool_progress_command: false
  tool_progress_overrides: {}
  tool_preview_length: 0
  platforms: {}
  tool_progress: all
privacy:
  redact_pii: false
tts:
  provider: edge
  edge:
    voice: en-US-AriaNeural
  elevenlabs:
    voice_id: pNInz6obpgDQGcFmaJgB
    model_id: eleven_multilingual_v2
  openai:
    model: gpt-4o-mini-tts
    voice: alloy
  mistral:
    model: voxtral-mini-tts-2603
    voice_id: c69964a6-ab8b-4f8a-9465-ec0925096ec8
  neutts:
    ref_audio: ''
    ref_text: ''
    model: neuphonic/neutts-air-q4-gguf
    device: cpu
stt:
  enabled: true
  provider: local
  local:
    model: base
    language: ''
  openai:
    model: whisper-1
  mistral:
    model: voxtral-mini-latest
voice:
  record_key: ctrl+b
  max_recording_seconds: 120
  auto_tts: false
  silence_threshold: 200
  silence_duration: 3.0
human_delay:
  mode: 'off'
  min_ms: 800
  max_ms: 2500
context:
  engine: compressor
memory:
  memory_enabled: true
  user_profile_enabled: true
  memory_char_limit: 2200
  user_char_limit: 1375
  provider: ''
delegation:
  model: ''
  provider: ''
  base_url: ''
  api_key: ''
  max_iterations: 50
reasoning_effort: ''
prefill_messages_file: ''
skills:
  external_dirs: []
honcho: {}
timezone: ''
discord:
  require_mention: true
  free_response_channels: ''
  allowed_channels: ''
  auto_thread: true
  reactions: true
whatsapp: {}
approvals:
  mode: manual
  timeout: 60
  command_allowlist: []
quick_commands: {}
personalities: {}
security:
  redact_secrets: true
  tirith_enabled: true
  tirith_path: tirith
  tirith_timeout: 5
  tirith_fail_open: true
  website_blocklist:
    enabled: false
    domains: []
shared_files: []
cron:
  wrap_response: true
logging:
  level: INFO
  max_size_mb: 5
  backup_count: 3
network:
  force_ipv4: false
_config_version: 17
session_reset:
  mode: both
  idle_minutes: 1440
  at_hour: 4
custom_providers:
  - name: modelscope
    base_url: https://api-inference.modelscope.cn/v1/
    api_key:
    model: Qwen/Qwen3-Coder-30B-A3B-Instruct
    models:
      Qwen/Qwen3-Coder-30B-A3B-Instruct:
        context_length: 64000

# ── Fallback Model ────────────────────────────────────────────────────
# Automatic provider failover when primary is unavailable.
# Uncomment and configure to enable. Triggers on rate limits (429),
# overload (529), service errors (503), or connection failures.
#
# Supported providers:
#   openrouter     (OPENROUTER_API_KEY)  — routes to any model
#   openai-codex   (OAuth — hermes auth) — OpenAI Codex
#   nous           (OAuth — hermes auth) — Nous Portal
#   zai            (ZAI_API_KEY)         — Z.AI / GLM
#   kimi-coding    (KIMI_API_KEY)        — Kimi / Moonshot
#   kimi-coding-cn (KIMI_CN_API_KEY)     — Kimi / Moonshot (China)
#   minimax        (MINIMAX_API_KEY)     — MiniMax
#   minimax-cn     (MINIMAX_CN_API_KEY)  — MiniMax (China)
#
# For custom OpenAI-compatible endpoints, add base_url and api_key_env.
#
# fallback_model:
#   provider: openrouter
#   model: anthropic/claude-sonnet-4
#
# ── Smart Model Routing ──────────────────────────────────────────────
# Optional cheap-vs-strong routing for simple turns.
# Keeps the primary model for complex work, but can route short/simple
# messages to a cheaper model across providers.
#
# smart_model_routing:
#   enabled: true
#   max_simple_chars: 160
#   max_simple_words: 28
#   cheap_model:
#     provider: openrouter
#     model: google/gemini-2.5-flash
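Distilled from the full dump above, the parts that actually wire hermes to ModelScope reduce to this fragment (api_key left blank as in the original; fill in your own key, without the MS- prefix):

```yaml
model:
  default: Qwen/Qwen3-Coder-30B-A3B-Instruct
  provider: custom
  base_url: https://api-inference.modelscope.cn/v1/
  api_key:
custom_providers:
  - name: modelscope
    base_url: https://api-inference.modelscope.cn/v1/
    api_key:
    model: Qwen/Qwen3-Coder-30B-A3B-Instruct
    models:
      Qwen/Qwen3-Coder-30B-A3B-Instruct:
        context_length: 64000
```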
Live and learn.
