feat: complete overhaul of AI subsystem with multi-agent litellm architecture
- Refactored AI module to use litellm, supporting Anthropic, Google, OpenAI, etc.
- Introduced 'Engineer' (execution) and 'Architect' (strategic) AI agents.
- Added real-time streaming responses and interactive chat mode via 'rich'.
- Added CLI arguments for model/key overrides (--engineer-model, --architect-model).
- Replaced 'openai' with 'litellm' in requirements.txt and setup.cfg.
- Updated nodes.run() to support an 'on_complete' callback for live node-status streaming.
- Fixed an undefined variable bug (config.profiles -> self.profiles) in configfile.py.
- Updated README and docstrings with new AI plugin tool registration API.
- Regenerated HTML documentation using pdoc3.
- Bumped version to 5.0b1 for beta release.
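The per-agent model overrides described above could be wired up roughly as follows. This is a minimal sketch, not connpy's actual API: the names `DEFAULTS`, `AgentConfig`, and `resolve_agents`, and the default model strings, are all illustrative assumptions; only the `litellm.completion(...)` call shape in the trailing comment reflects litellm's real interface.

```python
"""Sketch: resolving --engineer-model / --architect-model CLI overrides."""
from dataclasses import dataclass

# Illustrative defaults; the real defaults live in connpy's config file.
DEFAULTS = {
    "engineer": "anthropic/claude-3-5-sonnet-20240620",
    "architect": "openai/gpt-4o",
}

@dataclass
class AgentConfig:
    role: str   # "engineer" (execution) or "architect" (strategic)
    model: str  # litellm "provider/model" string

def resolve_agents(engineer_model=None, architect_model=None):
    """Apply CLI overrides over the per-agent defaults."""
    return {
        "engineer": AgentConfig("engineer", engineer_model or DEFAULTS["engineer"]),
        "architect": AgentConfig("architect", architect_model or DEFAULTS["architect"]),
    }

# Each agent would then hand its model string to litellm, which routes the
# request to the matching provider (Anthropic, Google, OpenAI, etc.):
#   litellm.completion(model=cfg.model, messages=msgs, stream=True)
```

Keeping model resolution separate from the completion call is what lets one code path serve both agents and every provider litellm supports.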
@@ -133,3 +133,12 @@ dmypy.json
 
+#App
+connpy-completion-helper
 
+# Gemini & AI Tools
+.gemini/
+GEMINI.md
 
+# Node.js (used by Gemini CLI or plugins)
+node_modules/
+package-lock.json
+package.json