🔄 OpenClaw Workflow Orchestration

Workflow orchestration: getting your Agent to run as efficiently as an assembly line

What Is Workflow Orchestration?

Some tasks are genuinely complex: 10 steps, 5 tool calls, 3 kinds of exceptions, and 2 branching paths. Rely on prompting alone and the Agent gets lost in a maze of "do A first or B first" decisions.

Workflow Orchestration breaks a complex task into an ordered flow of steps. Like a factory assembly line, each station focuses on its own job and the line as a whole runs efficiently. OpenClaw supports several orchestration patterns: sequential, parallel, conditional branching, and loops.

💡 Core benefits: complexity down 50%, success rate up 30%, token consumption down 40%.

Sequential Execution

The most basic pattern: steps run one after another, and each step's output becomes the next step's input.

# Sequential execution example
workflow:
  type: sequential
  steps:
    - id: step_1
      name: search
      tool: web_search
      params:
        query: "{{user_query}}"
      output: search_results

    - id: step_2
      name: fetch details
      tool: web_fetch
      params:
        url: "{{step_1.search_results[0].url}}"
      output: page_content

    - id: step_3
      name: summarize
      model: gpt-4o
      input: "{{step_2.page_content}}"
      output: summary

# Execution order: search → fetch details → summarize
# Each step must wait for the previous one to finish
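
Note the template reference {{step_1.search_results[0].url}}: whatever a step declares as its output becomes addressable to every later step under that step's id, which is how data flows down the pipeline without any glue code.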

Parallel Execution

Multiple independent tasks run at the same time, which cuts wall-clock time substantially. A good fit for batch processing and multi-source aggregation.

# Parallel execution example
workflow:
  type: parallel
  concurrency: 5  # at most 5 tasks in flight at once

  tasks:
    - id: fetch_news
      tool: web_search
      params: {query: "AI news"}

    - id: fetch_tutorials
      tool: web_search
      params: {query: "OpenClaw tutorials"}

    - id: fetch_tools
      tool: web_search
      params: {query: "AI tool recommendations"}

  # Merge once all tasks have completed
  merge:
    strategy: combine_results
    output: aggregated_content

# Runtime: 15s sequentially → 5s in parallel (67% saved)
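
The concurrency cap is the design choice that matters here: it bounds how many requests are in flight at once, so a long task list cannot stampede a downstream API (see the war story under Best Practices).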

Parallel + Sequential Combination

Real workflows usually mix the two patterns: collect data in parallel, then process the merged results in order. The sketch below shows the shape; the multi-source case study at the end of this post is the full version.
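
A minimal sketch, reusing the type: hybrid and depends_on keys from the case study below; the stage and task names here are illustrative, not a confirmed OpenClaw schema:

# Hybrid sketch: parallel collection feeding a sequential pipeline
workflow:
  type: hybrid

  stage_fetch:
    type: parallel
    concurrency: 3
    tasks:
      - id: source_a
        tool: web_search
        params: {query: "topic A"}
      - id: source_b
        tool: web_search
        params: {query: "topic B"}

  stage_summarize:
    type: sequential
    depends_on: stage_fetch  # runs only after every parallel task finishes
    steps:
      - id: merge
        action: combine
        inputs: [source_a, source_b]
      - id: summarize
        model: gpt-4o
        input: "{{merge}}"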

Conditional Branching

Pick the next path dynamically based on the previous step's result, just like if/else in a program.

# Conditional branching example
workflow:
  type: conditional

  steps:
    - id: classify_query
      action: intent_classification

    - id: route
      branches:
        - condition: "intent == 'search'"
          workflow: search_flow

        - condition: "intent == 'create'"
          workflow: create_flow

        - condition: "intent == 'explain'"
          workflow: explain_flow

        - default: true
          workflow: fallback_flow

# search_flow sub-workflow
search_flow:
  steps: [web_search, filter, summarize]

# create_flow sub-workflow
create_flow:
  steps: [generate_outline, draft_content, polish]
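
A note on semantics (my assumption, not confirmed OpenClaw behavior): branches are presumably evaluated top to bottom with the first matching condition winning, so put specific conditions before general ones and keep default last.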

Loop Processing

Repeat a body of steps until a termination condition is met. Well suited to batch processing and iterative refinement.

# Loop example
workflow:
  type: loop

  iterate:
    source: "{{search_results}}"  # iterate over the search results
    item_var: result
    max_iterations: 10

  body:
    - tool: web_fetch
      params: {url: "{{result.url}}"}
    - action: extract_key_info
      output: extracted_info

  # Termination condition
  terminate:
    condition: "extracted_info.quality_score > 0.8"
    or: "iterations == 10"

  # Collect the results of every iteration
  collect:
    output: all_extracted
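
Design note: terminate.condition provides an early exit the moment a good-enough result appears, while max_iterations is the hard safety net. The or: "iterations == 10" clause is redundant with max_iterations: 10, but it makes the bound explicit in the termination rule itself.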

Iterative Refinement Loop

# Iterate over multiple rounds until the quality bar is met
workflow:
  type: iterative

  steps:
    - id: draft
      model: gpt-4o
      action: generate_draft

    - id: evaluate
      action: quality_check

    - id: improve
      condition: "evaluate.score < 0.9"
      model: gpt-4o
      action: revise
      input: "{{draft}} + {{evaluate.feedback}}"

  # Loop until the score clears the bar
  loop:
    steps: [draft, evaluate, improve]
    max_loops: 3
    break_when: "evaluate.score >= 0.9"
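
The condition on the improve step and the loop's break_when express the same 0.9 threshold from two angles: the former skips a pointless revision inside a round, the latter stops starting new rounds. max_loops: 3 caps token spend even if the draft never clears the bar.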

Error Handling and Retries

Any step can fail, so a robust workflow declares its retry, fallback, and failure-reporting policy up front instead of leaving recovery to the Agent's improvisation.

# Workflow error handling
workflow:
  error_handling:
    strategy: graceful

    # Per-step retry
    retry:
      max_attempts: 3
      backoff: exponential
      initial_delay: 1000

    # Per-step fallback
    fallback:
      - step: web_fetch
        on_error: use_cached
        cache_ttl: 3600

      - step: generate_summary
        on_error: simple_summary
        fallback_model: gpt-4o-mini

    # Whole-workflow failure handling
    on_workflow_fail:
      action: report_error
      notify: true
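
With backoff: exponential and initial_delay: 1000 (milliseconds, assuming the usual convention), the waits double between attempts: roughly 1s before the second try and 2s before the third. Only after the final attempt fails does the step's fallback entry, and ultimately on_workflow_fail, take over.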

Case Study: Multi-Source Content Aggregation

# SKILLS/content_aggregator.md

name: content_aggregator
description: Aggregate content from multiple sources and generate a report

workflow:
  type: hybrid
  name: multi_source_aggregation

  # Stage 1: parallel collection
  stage_collect:
    type: parallel
    concurrency: 5

    tasks:
      - id: rss_news
        tool: rss_fetch
        feeds: [openai, anthropic, huggingface]

      - id: hn_trending
        tool: web_fetch
        url: "https://hn.algolia.com/api/v1/search"

      - id: reddit_ai
        tool: web_fetch
        url: "https://reddit.com/r/artificial/hot"

  # Stage 2: sequential processing
  stage_process:
    type: sequential
    depends_on: stage_collect

    steps:
      - id: merge_sources
        action: combine
        inputs: [rss_news, hn_trending, reddit_ai]

      - id: dedupe
        action: remove_duplicates
        method: semantic

      - id: rank
        action: sort_by_relevance

      - id: generate_report
        model: gpt-4o
        action: create_html_report

  # Stage 3: publish
  stage_publish:
    type: sequential
    depends_on: stage_process

    steps:
      - id: save_file
        action: write_file
        path: "/var/www/miaoquai/news/{{date}}.html"

      - id: update_sitemap
        action: update_sitemap

      - id: notify_feishu
        action: send_message
        channel: feishu
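
The depends_on keys chain the three stages into a small DAG: collection fans out across sources in parallel, while processing and publishing each wait for the stage before them. The expensive network I/O runs concurrently; the order-sensitive steps stay sequential.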

Best Practices

⚠️ War story: I once forgot to set a concurrency limit on a parallel run. 50 tasks fired simultaneously, the API rate-limited us with 429s, and the whole workflow died. Lesson: parallelism is great, but respect rate limits.
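
A defensive configuration in that spirit, combining the concurrency cap and retry policy from the sections above. The on_status key is a hypothetical extension I'm adding for illustration, not confirmed OpenClaw syntax:

# Rate-limit-aware parallel stage (sketch; on_status is hypothetical)
workflow:
  type: parallel
  concurrency: 5              # hard cap on in-flight requests
  error_handling:
    retry:
      max_attempts: 3
      backoff: exponential    # waits double between attempts
      initial_delay: 1000
      on_status: [429, 503]   # hypothetical: retry only on rate-limit/overload responses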

#WorkflowOrchestration #ParallelExecution #ConditionalBranching #LoopProcessing #OpenClaw