PluginMind Docs

AI Service Integration

Want to add Anthropic Claude, Google Gemini, or your own in-house model? This guide walks through the exact steps required to plug new services into PluginMind’s registry.


🧩 Registry Recap

  • All services implement AIService from app/services/ai_service_interface.py.
  • Each registration provides a service ID (string) and an AIServiceType (prompt optimizer, document processor, etc.).
  • The registry stores metadata so /services and /services/health can report capabilities.
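Under assumed shapes for these types (the real definitions live in app/services/ai_service_interface.py and may differ in detail), the registry's moving parts look roughly like this:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical stand-ins for the real types in
# app/services/ai_service_interface.py -- field names are illustrative.
class AIServiceType(Enum):
    PROMPT_OPTIMIZER = "prompt_optimizer"
    DOCUMENT_PROCESSOR = "document_processor"

@dataclass
class AIServiceMetadata:
    name: str
    capabilities: list = field(default_factory=list)

class AIService(ABC):
    @abstractmethod
    async def analyze_generic(self, text: str, **kwargs) -> dict: ...

    @abstractmethod
    async def health_check(self) -> bool: ...

class AIServiceRegistry:
    """Stores services plus metadata so /services and /services/health
    can report capabilities."""

    def __init__(self) -> None:
        # (service_type, service_id) -> (service, metadata)
        self._services: dict = {}

    def register(self, service_id: str, service_type: AIServiceType,
                 service: AIService, metadata: AIServiceMetadata) -> None:
        self._services[(service_type, service_id)] = (service, metadata)

    def get_preferred(self, service_type: AIServiceType):
        # "first in wins": the earliest registration for a type is preferred
        for (stype, _), (service, _) in self._services.items():
            if stype == service_type:
                return service
        return None
```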

🛠️ Step-by-Step Integration

1. Implement the Service Class

Create a file such as app/services/claude_service.py.


2. Extend Settings

Update app/core/config.py so new environment variables are available (and provide safe defaults in testing mode):

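A sketch of the kind of additions involved, using a plain dataclass as a stand-in (the real Settings class in app/core/config.py and its configuration library may differ; variable names follow the example environment variables later in this guide):

```python
import os
from dataclasses import dataclass, field

# Hypothetical sketch of the additions to app/core/config.py.
@dataclass
class Settings:
    testing: bool = field(
        default_factory=lambda: os.getenv("TESTING") == "1")
    claude_api_key: str = field(
        default_factory=lambda: os.getenv("CLAUDE_API_KEY", ""))
    claude_model: str = field(
        default_factory=lambda: os.getenv(
            "CLAUDE_MODEL", "claude-3-5-sonnet-latest"))  # illustrative default

    def __post_init__(self) -> None:
        # Safe default in testing mode so the suite runs without real keys
        if self.testing and not self.claude_api_key:
            self.claude_api_key = "test-claude-key"
```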

3. Register the Service

Modify initialize_ai_services() in app/services/service_initialization.py:

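Sketched with minimal stand-ins for the registry and settings (the real signatures in app/services/service_initialization.py will differ):

```python
from dataclasses import dataclass, field

# Minimal stand-in so the sketch runs on its own; the real registry
# lives in app/services/.
@dataclass
class Registry:
    services: list = field(default_factory=list)

    def register(self, service_id: str, service_type: str,
                 metadata: dict) -> None:
        self.services.append((service_type, service_id, metadata))

    def preferred(self, service_type: str) -> str:
        # "first in wins": earliest registration for a type is selected
        return next(sid for stype, sid, _ in self.services
                    if stype == service_type)

def initialize_ai_services(registry: Registry,
                           claude_api_key: str = "") -> None:
    # Register Claude before Grok/OpenAI so it becomes the preferred
    # document analyzer whenever a key is configured.
    if claude_api_key:
        registry.register("claude", "document_processor",
                          {"capabilities": ["summarize", "sentiment"]})
    registry.register("grok", "document_processor",
                      {"capabilities": ["summarize"]})
    registry.register("openai", "prompt_optimizer",
                      {"capabilities": ["optimize"]})
```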

If Claude becomes your preferred document analyzer, register it before Grok or OpenAI so the “first in wins” rule selects it automatically.

4. Wire Into Workflows (Optional)

Adjust _get_analyzer_for_type to prefer the new service:


If you added a brand-new AnalysisType, map it here and provide a prompt template in ash_prompt.py.

5. Update Docs & Tests

  • Document the new service in docs2/api/endpoints.md and docs2/guides/workflow-development.md.
  • Add a simple registry unit test (see tests/test_ai_service_registry.py) to ensure registration succeeds.
  • Optionally mock the new provider inside integration tests to verify /process responses.
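A registration test in the spirit of the existing suite might look like this sketch (the register helper is a local stand-in so the example runs on its own; the real test would exercise the actual registry API):

```python
def test_dynamic_registration():
    registered: dict = {}  # service_id -> metadata, minimal stand-in

    def register(service_id: str, metadata: dict) -> None:
        assert service_id not in registered, "duplicate registration"
        registered[service_id] = metadata

    register("claude", {"type": "document_processor",
                        "capabilities": ["summarize", "sentiment"]})

    assert "claude" in registered
    assert "summarize" in registered["claude"]["capabilities"]
```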

🧪 Testing Checklist

  • Run TESTING=1 pytest to ensure the suite still passes.
  • Run pytest tests/test_ai_service_registry.py::TestServiceRegistration::test_dynamic_registration to confirm the registered metadata.
  • Use /services and /services/health endpoints locally to verify health reports.

🛡️ Best Practices

  • Keep API keys out of source control (use .env or secret managers).
  • Set conservative HTTP timeouts and retries—don’t block the event loop.
  • Populate AIServiceMetadata.capabilities accurately; it powers dashboards and future workflow builders.
  • Implement health_check() even if it just returns True—this keeps /services/health meaningful.
  • For cost visibility, store usage metrics in metadata returned from analyze_generic.
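The timeout-and-retry point can be sketched with stdlib asyncio (the attempt count, backoff values, and helper name are illustrative, not PluginMind defaults):

```python
import asyncio

async def call_with_retries(coro_factory, attempts: int = 3,
                            timeout: float = 10.0, backoff: float = 0.5):
    """Run an async provider call with a per-attempt timeout and backoff.

    `coro_factory` is called fresh on each attempt so a new coroutine
    (and a new request) is created every time.
    """
    for attempt in range(attempts):
        try:
            return await asyncio.wait_for(coro_factory(), timeout=timeout)
        except (asyncio.TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise  # exhausted: surface the error to the caller
            await asyncio.sleep(backoff * (2 ** attempt))  # exponential backoff
```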

💡 Example Environment Variables

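One possible set for a Claude integration (variable names are illustrative and must match whatever your Settings class actually reads):

```shell
# Hypothetical variable names for a Claude integration.
export CLAUDE_API_KEY="sk-ant-..."              # keep out of source control
export CLAUDE_MODEL="claude-3-5-sonnet-latest"  # illustrative model name
export CLAUDE_TIMEOUT_SECONDS="30"
```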

Add them to your deployment secrets and expose a feature flag (e.g., ENABLE_CLAUDE=true) if you want runtime toggles.

Happy integrating! 🤝