PluginMind Docs

Workflow Development

This guide shows how to build and evolve workflows on top of PluginMind’s AI service registry. Everything here reflects the current implementation; no hidden features are required.


🎯 Goals

  • Understand how a request flows through /process.
  • Extend the prompt templates to support new analysis styles.
  • Wire custom business logic into the registry (fallbacks, metadata, job tracking).

🧠 How /process Works

  1. Session guard – get_session_user verifies the pm_session cookie.
  2. User tracking – user_service.get_or_create_user loads/creates the SQLModel user and enforces query limits.
  3. Analysis orchestration – analysis_service.analyze_generic runs the workflow.
  4. Persistence – results are written to analysis_results (when a database session is supplied).
  5. Response – GenericAnalysisResponse serialises the outcome.
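
The orchestration in step 3 can be sketched roughly like this. This is a minimal, self-contained sketch: the analyzer registry, the `AnalysisOutcome` shape, and the function signatures are simplified assumptions, not the real `analysis_service` code.

```python
# Hypothetical sketch of the analyze_generic flow -- simplified from the
# real analysis_service; names and signatures are illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable


class AnalysisType(str, Enum):
    DOCUMENT = "document"
    CHAT = "chat"
    CUSTOM = "custom"


@dataclass
class AnalysisOutcome:
    result: str
    metadata: dict = field(default_factory=dict)


# Toy registry: analysis type -> analyzer callable.
_ANALYZERS: dict[AnalysisType, Callable[[str], str]] = {
    AnalysisType.DOCUMENT: lambda text: f"document summary of: {text}",
    AnalysisType.CUSTOM: lambda text: f"generic analysis of: {text}",
}


def analyze_generic(user_input: str, analysis_type: AnalysisType) -> AnalysisOutcome:
    """Route the input to the best-matching analyzer, falling back to CUSTOM."""
    analyzer = _ANALYZERS.get(analysis_type) or _ANALYZERS[AnalysisType.CUSTOM]
    result = analyzer(user_input)
    return AnalysisOutcome(result=result, metadata={"analysis_type": analysis_type.value})
```

The key property the sketch preserves is that every analysis type resolves to *some* analyzer, so the endpoint never fails just because a type has no dedicated service.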

The legacy /analyze endpoint reuses the same building blocks but always chooses the crypto analyzer for backward compatibility.


🪄 Customising Prompt Templates

app/ash_prompt.py controls the 4-D methodology prompts. To add or tweak a template:

  1. Extend the AnalysisType enum if you need a new keyword.
  2. Provide a _get_<type>_template function that returns a multi-line string.
  3. Register the template in the PromptTemplateEngine constructor.

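An illustrative snippet of the three steps above. The real enum and engine live in app/ash_prompt.py; the constructor shape, the `render` helper, and the template wording here are assumptions for illustration.

```python
# Illustrative only -- the real AnalysisType and PromptTemplateEngine live
# in app/ash_prompt.py; this mirrors the three registration steps above.
from enum import Enum


class AnalysisType(str, Enum):
    CRYPTO = "crypto"
    CODE = "code"  # 1. new keyword added to the enum


def _get_code_template() -> str:
    # 2. a multi-line template for the new type (wording is hypothetical)
    return (
        "Deconstruct the code sample, diagnose weaknesses,\n"
        "develop improvements, and deliver a prioritised review of:\n"
        "{user_input}"
    )


class PromptTemplateEngine:
    def __init__(self) -> None:
        # 3. register the template alongside the existing ones
        self._templates = {
            AnalysisType.CODE: _get_code_template(),
        }

    def render(self, analysis_type: AnalysisType, user_input: str) -> str:
        return self._templates[analysis_type].format(user_input=user_input)
```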

Once deployed, clients can call /process with "analysis_type": "code" and the new template will be used automatically, provided the registry can supply an analyzer for that type (see next section).


🔌 Choosing the Right AI Service

Service routing lives in analysis_service._get_analyzer_for_type:

| Analysis Type | Service Type Used  | Default Provider               |
| ------------- | ------------------ | ------------------------------ |
| document      | DOCUMENT_PROCESSOR | OpenAI (with Grok as fallback) |
| chat          | CHAT_PROCESSOR     | OpenAI                         |
| seo           | SEO_GENERATOR      | OpenAI                         |
| crypto        | CRYPTO_ANALYZER    | Grok                           |
| custom        | GENERIC_ANALYZER   | OpenAI (Grok as fallback)      |

If no dedicated service exists, the code gracefully falls back to the generic analyzer. You can override this behaviour by:

  1. Registering a new service and tags in service_initialization.py.
  2. Extending the service_mapping dict within _get_analyzer_for_type.
  3. Optionally adjusting the fallback logic for fine control.

🔁 Advanced Patterns

1. Fallback to Custom

If a specialised service fails, the workflow will automatically retry using AnalysisType.CUSTOM:

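A hedged sketch of that retry pattern. The stand-in analyzer functions and the `analyze_with_fallback` name are illustrative; only the fallback-to-custom behaviour reflects the documented workflow.

```python
# Illustrative fallback: retry with the generic "custom" analyzer when a
# specialised service raises. Function names here are assumptions.
import asyncio


async def _specialised(text: str, analysis_type: str) -> str:
    # Stand-in for a specialised service that happens to fail.
    raise RuntimeError("provider unavailable")


async def _generic(text: str, analysis_type: str) -> str:
    return f"custom analysis of: {text}"


async def analyze_with_fallback(text: str, analysis_type: str) -> str:
    """Try the specialised analyzer first; on failure retry as custom."""
    try:
        return await _specialised(text, analysis_type)
    except Exception:
        if analysis_type == "custom":
            raise  # nothing more generic to fall back to
        return await _generic(text, "custom")
```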

2. Async Jobs

Use /analyze-async when you want to decouple request/response timing. The helper functions in app/utils/background_tasks.py handle:

  • job creation (create_analysis_job)
  • status transitions (process_analysis_background)
  • error capture (job.error field)
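
The lifecycle those helpers implement can be sketched with an in-memory job store. The helper names match the docs above, but this dict-backed store and the status strings are purely illustrative; the real helpers persist jobs via the database.

```python
# Minimal sketch of the async-job lifecycle; the in-memory JOBS store is
# an assumption standing in for the real persistence layer.
import uuid

JOBS: dict[str, dict] = {}


def create_analysis_job(user_input: str) -> str:
    """Record a pending job and hand the caller an id to poll with."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "pending", "input": user_input,
                    "result": None, "error": None}
    return job_id


def process_analysis_background(job_id: str) -> None:
    """Run the analysis and record the outcome (or the error) on the job."""
    job = JOBS[job_id]
    job["status"] = "processing"
    try:
        job["result"] = f"analysis of: {job['input']}"  # stand-in for the real call
        job["status"] = "completed"
    except Exception as exc:
        job["status"] = "failed"
        job["error"] = str(exc)
```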

3. Metadata Enrichment

analyze_generic returns a metadata dictionary. Populate it with timing, token counts, or cost tracking to surface richer insights to the caller.
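
One way that enrichment might look, as a sketch: wrap the analyzer call with timing and a rough token count. The field names (`duration_ms`, `input_tokens`) are assumptions, not the real metadata schema.

```python
# Illustrative metadata enrichment; field names are hypothetical.
import time


def analyze_with_metadata(text: str) -> dict:
    start = time.perf_counter()
    result = f"analysis of: {text}"  # stand-in for the real analyzer call
    return {
        "result": result,
        "metadata": {
            "duration_ms": round((time.perf_counter() - start) * 1000, 2),
            "input_tokens": len(text.split()),  # crude word-count proxy
        },
    }
```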


🧰 Developer Checklist

  • Set MAX_USER_INPUT_LENGTH in .env to safeguard prompt sizes.
  • Add regression tests when you introduce new analysis types (see tests/test_generic_processing.py).
  • Use the health endpoints before enabling new workflows in production.
  • (Optional) Store additional artefacts—structured JSON, embeddings, etc.—inside AnalysisResult.result_data.

🤝 Tips for Collaboration

  • Keep new analysis types lowercase and hyphen-free so they fit naturally into URLs and enums.
  • If you need multi-step flows (e.g., summarise → sentiment → recommendations), orchestrate them inside analysis_service before returning to the client.
  • Document user-facing behaviour in docs2/api/endpoints.md whenever you adjust request/response shapes.
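
A multi-step flow like the summarise → sentiment → recommendations example above boils down to chaining calls inside the service layer before anything is returned to the client. None of these step functions exist in PluginMind; the sketch only shows the chaining shape.

```python
# Hypothetical multi-step orchestration; every function here is
# illustrative and stands in for a real analyzer call.
def summarise(text: str) -> str:
    return f"summary({text})"


def sentiment(summary: str) -> str:
    return f"sentiment({summary})"


def recommend(sentiment_result: str) -> str:
    return f"recommendations({sentiment_result})"


def multi_step_analysis(text: str) -> str:
    """Chain the steps so the client receives one combined result."""
    return recommend(sentiment(summarise(text)))
```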

Happy orchestrating! ✨