feat: complete overhaul of AI subsystem with multi-agent litellm architecture
- Refactored AI module to use litellm, supporting Anthropic, Google, OpenAI, etc.
- Introduced 'Engineer' (execution) and 'Architect' (strategic) AI agents.
- Added real-time streaming responses and interactive chat mode via 'rich'.
- Added CLI arguments for model/key overrides (--engineer-model, --architect-model).
- Replaced 'openai' with 'litellm' in requirements.txt and setup.cfg.
- Updated nodes.run() to support an 'on_complete' callback for live node-status streaming.
- Fixed an undefined variable bug (config.profiles -> self.profiles) in configfile.py.
- Updated README and docstrings with the new AI plugin tool registration API.
- Regenerated HTML documentation using pdoc3.
- Bumped version to 5.0b1 for beta release.
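The `on_complete` callback mentioned above can be illustrated with a small stand-in (a minimal sketch of the callback pattern only; `run_nodes` is a hypothetical stand-in and the real `nodes.run()` in connpy may pass different arguments to the callback):

```python
# Sketch of the on_complete callback pattern for live node-status streaming.
# run_nodes is hypothetical; it only demonstrates the shape of the hook.
def run_nodes(command, node_names, on_complete=None):
    results = {}
    for name in node_names:
        # Stand-in for the real per-device command execution.
        results[name] = f"output of '{command}' on {name}"
        if on_complete:
            # Fired as soon as each node finishes, enabling live status updates.
            on_complete(name, results[name])
    return results

finished = []
run_nodes("show version", ["router1", "router2"],
          on_complete=lambda name, output: finished.append(name))
print(finished)  # nodes reported in completion order
```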
@@ -56,7 +56,9 @@ For more detailed information, please read our [Privacy Policy](https://connpy.g
 Or use fzf by installing pyfzf and running conn config --fzf true.
 - Create in bulk, copy, move, export, and import nodes for easy management.
 - Run automation scripts on network devices.
-- Use GPT AI to help you manage your devices.
+- Use AI with a multi-agent system (Engineer/Architect) to manage devices.
+  Supports any LLM provider via litellm (OpenAI, Anthropic, Google, etc.).
+  Features streaming responses, interactive chat, and extensible plugin tools.
 - Add plugins with your own scripts.
 - Much more!
@@ -428,15 +430,46 @@ for key in routers.result:
    print(key, ' ---> ', ("pass" if routers.result[key] else "fail"))
```

### Using AI

The AI module uses a multi-agent architecture with an **Engineer** (fast execution) and an **Architect** (strategic reasoning). It supports any LLM provider through [litellm](https://github.com/BerriAI/litellm).

```python
import connpy

conf = connpy.configfile()
organization = 'openai-org'
api_key = "openai-key"
myia = connpy.ai(conf, organization, api_key)
input = "go to router 1 and get me the full configuration"
result = myia.ask(input, dryrun=False)
print(result)

# Uses models and API keys from config, or override them:
myai = connpy.ai(conf, engineer_model="gemini/gemini-2.5-flash", engineer_api_key="your-key")
result = myai.ask("go to router1 and show me the running configuration")
print(result["response"])

# Streaming is enabled by default for CLI, disable for programmatic use:
result = myai.ask("show interfaces on all routers", stream=False)
print(result["response"])
```
#### AI Plugin Tool Registration

Plugins can extend the AI system by registering custom tools via the `Preload` class:

```python
def _register_my_tools(ai_instance):
    tool_def = {
        "type": "function",
        "function": {
            "name": "my_custom_tool",
            "description": "Does something useful.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"]
            }
        }
    }
    ai_instance.register_ai_tool(
        tool_definition=tool_def,
        handler=my_handler_function,
        target="engineer",  # or "architect" or "both"
        engineer_prompt="- My tool: does X.",
        architect_prompt=" * My tool (my_custom_tool)."
    )

class Preload:
    def __init__(self, connapp):
        connapp.ai.modify(_register_my_tools)
```
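For reference, the handler passed to `register_ai_tool` might look like this (a minimal sketch; the assumption that the handler receives the parsed tool arguments as a dict and returns a string for the model is illustrative, not confirmed by this excerpt):

```python
import json

# Hypothetical handler for a "my_custom_tool" registration.
# Assumption: the AI layer calls it with arguments matching the tool's
# JSON-schema "parameters" and sends the returned string back to the
# model as the tool result.
def my_handler_function(arguments):
    query = arguments["query"]
    matches = [f"match for {query}"]  # stand-in for real lookup logic
    return json.dumps({"query": query, "matches": matches})
```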
## http API

With the Connpy API you can run commands on devices using http requests

@@ -527,7 +560,7 @@ With the Connpy API you can run commands on devices using http requests

**Method**: `POST`

-**Description**: This route sends to chatgpt IA a request that will parse it into an understandable output for the application and then run the request.
+**Description**: This route sends a request to the AI multi-agent system which will analyze it, execute commands on devices if needed, and return the result. Supports any LLM provider configured via litellm.

#### Request Body: