# Compare commits

139 Commits: `main..7a46f640e2`
| SHA1 | Author | Date |
|---|---|---|
| 7a46f640e2 | |||
| c2584c86ba | |||
| a6aff6df76 | |||
| e5ebf8eea7 | |||
| b80ed64957 | |||
| 4823238538 | |||
| 9149d5b157 | |||
| a1244855d4 | |||
| 0805f6f72f | |||
| 25de08a17c | |||
| 1bd9bd62c5 | |||
| 87bb6302ff | |||
| c6a31cb710 | |||
| b7528027ac | |||
| f96fe77aed | |||
| 26ea2e588d | |||
| e07f7ff130 | |||
| 54a539c51a | |||
| 4f8497ff26 | |||
| 9975d60a91 | |||
| 3d5db06343 | |||
| 3e32aa958c | |||
| ea3bfeee9e | |||
| fd883a4821 | |||
| 137524b176 | |||
| d6880d5956 | |||
| efe1428f0d | |||
| a3d0e39ba8 | |||
| 97c039459c | |||
| 00905575fc | |||
| d96910092b | |||
| 0813b927b0 | |||
| 7856dcb9a3 | |||
| 4373a34711 | |||
| 98b85628de | |||
| be40b2accd | |||
| 54fa5845af | |||
| 06501eccc9 | |||
| bcbbd4765d | |||
| acbfb03b10 | |||
| 51f86f214a | |||
| a0a0e68c49 | |||
| d5ca894d55 | |||
| fc85314e9b | |||
| 6e70b38524 | |||
| 5a1dbc04e1 | |||
| 0e34ea79c6 | |||
| a74d055993 | |||
| 8828471c1b | |||
| 404d874771 | |||
| 1cb0962fac | |||
| 7d10409ad1 | |||
| 98a85154cb | |||
| 8235de23ec | |||
| 150268b11d | |||
| b268f8a372 | |||
| 0b16de5db8 | |||
| 65fed3a1a2 | |||
| 51bdc4e59a | |||
| 68b63baeac | |||
| 8329ca25de | |||
| bc157a990c | |||
| 9440611f1e | |||
| 943865958d | |||
| 0fad67513f | |||
| 2f5b5fcf6b | |||
| 3061b54059 | |||
| ffed88189f | |||
| 9893f2ed51 | |||
| 2aa4934288 | |||
| feb34ad638 | |||
| 59821d6c16 | |||
| 38eb2e2d37 | |||
| 860e57be02 | |||
| a78aa4c75e | |||
| cc68ff0545 | |||
| 638db44aa5 | |||
| b4660254cd | |||
| c706ac893c | |||
| 3072128d31 | |||
| 53480ec39b | |||
| 0e90a5aca1 | |||
| 8c28fbcaa6 | |||
| 32ab9d3e2d | |||
| d61346b3e9 | |||
| d689504eec | |||
| 815c161544 | |||
| c83a2cd28f | |||
| 118ca1d14e | |||
| fa250e2ae3 | |||
| 1f5fe13805 | |||
| 5c9c605184 | |||
| 881eca6181 | |||
| 4348e353a2 | |||
| d1df2a4cf6 | |||
| c09703053b | |||
| e4e82ef1c6 | |||
| 3e0a6b223d | |||
| 12f6baefad | |||
| 8a605dfb9c | |||
| 2a32b84849 | |||
| 65b2a5da0b | |||
| 950b88a2ea | |||
| 67fa4e1e6d | |||
| de2c2ab21b | |||
| 5769d4a5af | |||
| c4950ed029 | |||
| b5df984498 | |||
| 8e25d5de2a | |||
| 27212b1009 | |||
| b5d6894865 | |||
| cba3f8d2d9 | |||
| 2b9e754ff5 | |||
| fd8b367d52 | |||
| 59b38bb58a | |||
| 3b7bee233e | |||
| 8f13b0b2bf | |||
| 9f3cb6f6d9 | |||
| b199ddc8ac | |||
| 7b9bd44ae5 | |||
| 940f9964f7 | |||
| 8f6c1703ac | |||
| d81254deb2 | |||
| 9898920ab2 | |||
| 2042178cbe | |||
| 555b285d36 | |||
| 79dfa66247 | |||
| 1c6bdddbdc | |||
| 43e8325890 | |||
| d4121bcbc0 | |||
| 506044b9fb | |||
| 56bd92d1f1 | |||
| 221d7170ce | |||
| b3418d48de | |||
| 5113aef8c2 | |||
| 255b2bd4ef | |||
| 4a593f2016 | |||
| 6b58e71c6c | |||
| 5467e4f4bc |
```diff
@@ -1,22 +0,0 @@
-.git
-__pycache__
-*.pyc
-*.pyo
-*.pyd
-.pytest_cache
-.venv
-venv
-env
-node_modules
-dist
-build
-*.egg-info
-docker
-docker-compose.yml
-.gemini
-.github
-docs
-scratch
-testall
-testremote
-automation-template.yaml
```
```diff
@@ -22,11 +22,11 @@ jobs:
     runs-on: ubuntu-latest
 
     steps:
-    - uses: actions/checkout@v3
+    - uses: actions/checkout@v4
       with:
         ref: publish
     - name: Set up Python
-      uses: actions/setup-python@v3
+      uses: actions/setup-python@v5
       with:
         python-version: '3.10'
     - name: Install dependencies
```
-38

```diff
@@ -50,7 +50,6 @@ coverage.xml
 *.py,cover
 .hypothesis/
 .pytest_cache/
-scratch/
 
 # Translations
 *.mo
```
```diff
@@ -131,40 +130,3 @@ dmypy.json
 
 #clients
 *sync_client*
-
-#App
-connpy-completion-helper
-
-# Gemini & AI Tools
-.gemini/
-GEMINI.md
-
-# Node.js (used by Gemini CLI or plugins)
-node_modules/
-package-lock.json
-package.json
-
-# Development docs
-connpy_roadmap.md
-testall/
-testremote/
-*.db
-*.patch
-scratch.py
-
-# Internal planning and implementation docs
-PLAN_CAPA_SERVICIOS.md
-implementation_plan.md
-remote-plugin-implementation-plan.md
-NETWORK_COMMAND_CENTER_PLAN.md
-ssm_implemmetaiton_plan.md
-async_interact_plan.md
-repo_consolidado_limpio.md
-connpy_roadmap.md
-MULTI_USER_PLAN.md
-COPILOT_PLAN.md
-ARCHITECTURAL_DEBT_REFACTOR.md
-
-#themes
-nord.yml
-theme.py
```
```diff
@@ -1,8 +0,0 @@
-include LICENSE
-include README.md
-include requirements.txt
-recursive-include connpy/core_plugins *
-recursive-include connpy/proto *
-recursive-include connpy/grpc *.proto
-recursive-exclude * __pycache__
-recursive-exclude * *.py[co]
```
```diff
@@ -1,5 +1,5 @@
 <p align="center">
-  <img src="https://media.discordapp.net/attachments/1243672493340753982/1243672566166720562/CONNPY-resized.png?ex=665253d6&is=66510256&hm=7957c9ec64159244181f20cb67e6d16da8cebdb0f6e2775ed121c77f648ff4a1&=&format=webp&quality=lossless&width=300&height=300" alt="App Logo">
+  <img src="https://nginx.gederico.dynu.net/images/CONNPY-resized.png" alt="App Logo">
 </p>
 
```
@@ -9,182 +9,417 @@

[License](https://github.com/fluzzi/connpy/blob/main/LICENSE)
[PyPI](https://pypi.org/pypi/connpy/)

**Connpy** is a powerful Connection Manager and Network Automation Platform for Linux, Mac, and Docker. It provides a unified interface for **SSH, SFTP, Telnet, kubectl, Docker pods, and AWS SSM**.

The v6 release introduces the **AI Copilot**, an interactive terminal assistant that understands your network context and helps you manage your infrastructure more intelligently.

## 🤖 AI Copilot (New in v6)
The AI Copilot is deeply integrated into your terminal workflow:
- **Terminal Context Awareness**: The Copilot can "see" your screen output, helping you diagnose errors or analyze command results in real-time.
- **Hybrid Multi-Agent System**: Automatically escalates complex tasks between the **Network Engineer** (execution) and the **Network Architect** (strategy).
- **MCP Integration**: Dynamically load tools from external providers (6WIND, AWS, etc.) via the Model Context Protocol.
- **Interactive Chat**: Launch with `conn ai` for a collaborative troubleshooting session.

## Core Features
- **Multi-Protocol**: Native support for SSH, SFTP, Telnet, kubectl, Docker exec, and AWS SSM.
- **Context Management**: Set regex-based contexts to manage specific nodes across different environments (work, home, clients).
- **Advanced Inventory**:
  - Organize nodes in folders (`@folder`) and subfolders (`@subfolder@folder`).
  - Use Global Profiles (`@profilename`) to manage shared credentials easily.
  - Bulk creation, copying, moving, and export/import of nodes.
- **Modern UI**: High-performance terminal experience with `prompt-toolkit`, including:
  - Fuzzy search integration with `fzf`.
  - Advanced tab completion.
  - Syntax highlighting and customizable themes.
- **Automation Engine**: Run parallel tasks and playbooks on multiple devices with variable support.
- **Plugin System**: Build and execute custom Python scripts locally or on a remote gRPC server.
- **gRPC Architecture**: Fully decoupled Client/Server model for distributed management.
- **Privacy & Sync**: Local-first encrypted storage (RSA/OAEP) with optional Google Drive backup.

## Installation

```bash
pip install connpy
```

### Run it in Windows/Linux using Docker
```bash
git clone https://github.com/fluzzi/connpy
cd connpy
docker compose build

# Run it like a native app (completely silent)
docker compose run --rm --remove-orphans connpy-app [command]

# Pro Tip: Add this alias for a 100% native experience from any folder
alias conn='docker compose -f /path/to/connpy/docker-compose.yml run --rm --remove-orphans connpy-app'
```

---

## Connection manager
### Features
- You can generate profiles and reference them from nodes using @profilename, so you don't need to edit multiple nodes when changing a password or other information.
- Nodes can be stored in @folder or @subfolder@folder to organize your devices. They can then be referenced as node@subfolder@folder or node@folder.
- If you have too many nodes, get a completion script using `conn config --completion`, or use fzf by installing pyfzf and running `conn config --fzf true`.
- Create in bulk, copy, move, export and import nodes for easy management.
- Run automation scripts on network devices.
- Use GPT AI to help you manage your devices.
- Add plugins with your own scripts.
- Much more!

## 🔒 Privacy & Integration

### Privacy Policy
Connpy is committed to protecting your privacy:
- **Local Storage**: All server addresses, usernames, and passwords are encrypted and stored **only** on your machine. No data is transmitted to our servers.
- **Data Access**: Data is used solely for managing and automating your connections.

### Google Integration
Used strictly for backup:
- **Backup**: Sync your encrypted configuration with your Google Drive account.
- **Scoped Access**: Connpy only accesses its own backup files.

---

## Usage

```
usage: conn [-h] [--add | --del | --mod | --show | --debug] [node|folder] [--sftp]
       conn {profile,move,copy,list,bulk,export,import,ai,run,api,plugin,config,sync,context} ...

positional arguments:
  node|folder           node[@subfolder][@folder]
                        Connect to specific node or show all matching nodes
  [@subfolder][@folder]
                        Show all available connections globally or in the specified path
```

### Options:
```
  -h, --help         show this help message and exit
  -v, --version      Show version
  -a, --add          Add new node[@subfolder][@folder] or [@subfolder]@folder
  -r, --del, --rm    Delete node[@subfolder][@folder] or [@subfolder]@folder
  -e, --mod, --edit  Modify node[@subfolder][@folder]
  -s, --show         Show node[@subfolder][@folder]
  -d, --debug        Display all connection steps
  -t, --sftp         Connect using sftp instead of ssh
```

### Commands:
```
  profile     Manage profiles
  move (mv)   Move node
  copy (cp)   Copy node
  list (ls)   List profiles, nodes or folders
  bulk        Add nodes in bulk
  export      Export connection folder to Yaml file
  import      Import connection folder to config from Yaml file
  ai          Make request to an AI
  run         Run scripts or commands on nodes
  api         Start and stop connpy api
  plugin      Manage plugins
  config      Manage app config
  sync        Sync config with Google
```

### Manage profiles:
```
usage: conn profile [-h] (--add | --del | --mod | --show) profile

positional arguments:
  profile            Name of profile to manage

options:
  -h, --help         show this help message and exit
  -a, --add          Add new profile
  -r, --del, --rm    Delete profile
  -e, --mod, --edit  Modify profile
  -s, --show         Show profile
```

### Basic Examples:
```bash
# Add a profile
conn profile --add office-user

# Add a folder and subfolder
conn --add @office
conn --add @datacenter@office

# Add a node with a profile
conn --add server1@datacenter@office --profile @myuser

# Show a node
conn --show server1@datacenter@office

# Connect to a node (fuzzy match)
conn server1

# Start the AI Copilot
conn ai

# Run a command on all nodes in a folder
conn run @office "uptime"
```

---

## 🔌 Plugin System
Connpy supports a robust plugin architecture where scripts can run transparently on a remote gRPC server.

### Structure
Plugins must be Python files containing:
- **Class `Parser`**: Defines `argparse` arguments.
- **Class `Entrypoint`**: Execution logic.
- **Class `Preload`**: (Optional) Hooks and modifications to the core app.

See the [Plugin Requirements section](#plugin-requirements-for-connpy) for full technical details.

---

## Plugin Requirements for Connpy

### Remote Plugin Execution
When Connpy operates in remote mode, plugins are executed **transparently on the server**:
- The client automatically downloads the plugin source code (`Parser` class context) to generate the local `argparse` structure and provide autocompletion.
- The execution phase (`Entrypoint` class) is redirected via gRPC streams to execute in the server's memory.
- You can manage remote plugins using the `--remote` flag.

### General Structure
- The plugin script must define specific classes:
  1. **Class `Parser`**: Handles `argparse.ArgumentParser` initialization.
  2. **Class `Entrypoint`**: Main execution logic (receives `args`, `parser`, and `connapp`).
  3. **Class `Preload`**: (Optional) For modifying core app behavior or registering hooks.
- The plugin script must be a Python file.
- Only the following top-level elements are allowed in the plugin script:
  - Class definitions
  - Function definitions
  - Import statements
  - The `if __name__ == "__main__":` block for standalone execution
  - Pass statements

### Specific Class Requirements
The plugin script must define specific classes with particular attributes and methods. Each class serves a distinct role within the plugin's architecture:
1. **Class `Parser`**:
   - **Purpose**: Handles parsing of command-line arguments.
   - **Requirements**:
     - Must contain only one method: `__init__`.
     - The `__init__` method must initialize at least two attributes:
       - `self.parser`: An instance of `argparse.ArgumentParser`.
       - `self.description`: A string containing the description of the parser.
2. **Class `Entrypoint`**:
   - **Purpose**: Acts as the entry point for plugin execution, utilizing parsed arguments and integrating with the main application.
   - **Requirements**:
     - Must have an `__init__` method that accepts exactly three parameters besides `self`:
       - `args`: Arguments passed to the plugin.
       - The parser instance (typically `self.parser` from the `Parser` class).
       - The Connapp instance to interact with the Connpy app.
3. **Class `Preload`**:
   - **Purpose**: Performs any necessary preliminary setup or configuration independent of the main parsing and entry logic.
   - **Requirements**:
     - Contains at least an `__init__` method that accepts the parameter `connapp` besides `self`.

### Class Dependencies and Combinations
- **Dependencies**:
  - `Parser` and `Entrypoint` are interdependent and must both be present if one is included.
  - `Preload` is independent and may exist alone or alongside the other classes.
- **Valid Combinations**:
  - `Parser` and `Entrypoint` together.
  - `Preload` alone.
  - All three classes (`Parser`, `Entrypoint`, `Preload`).
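As a minimal sketch of these requirements, a skeleton plugin might look like the following. The `--name` argument and the printed message are invented for illustration; a real plugin would use `connapp` to interact with the app:

```python
import argparse

class Parser:
    def __init__(self):
        # Required attributes: an ArgumentParser and a description string
        self.parser = argparse.ArgumentParser(description="Example plugin")
        self.parser.add_argument("--name", default="world")
        self.description = "Example plugin"

class Entrypoint:
    def __init__(self, args, parser, connapp):
        # args: parsed CLI arguments; parser: the Parser's ArgumentParser;
        # connapp: the Connpy app instance (None when run standalone)
        print(f"hello {args.name}")

if __name__ == "__main__":
    # Standalone executable block, as allowed by the plugin rules
    p = Parser()
    Entrypoint(p.parser.parse_args(), p.parser, None)
```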

### Preload Modifications and Hooks
You can customize the behavior of core classes using hooks:
- **`modify(method)`**: Alter class instances (e.g., `connapp.config`, `connapp.ai`).
- **`register_pre_hook(method)`**: Logic to run before a method execution.
- **`register_post_hook(method)`**: Logic to run after a method execution.
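As a self-contained sketch of how such a hook mechanism can behave (this illustrates the contract, not connpy's actual implementation): pre-hooks return `(args, kwargs)` and may rewrite the call, while post-hooks receive the result in `kwargs["result"]` and may replace it.

```python
class Hookable:
    """Wrap a callable so callers can attach pre- and post-hooks to it."""
    def __init__(self, func):
        self.func = func
        self.pre_hooks = []
        self.post_hooks = []

    def register_pre_hook(self, hook):
        self.pre_hooks.append(hook)

    def register_post_hook(self, hook):
        self.post_hooks.append(hook)

    def __call__(self, *args, **kwargs):
        # Pre-hooks may rewrite the arguments; they must return (args, kwargs)
        for hook in self.pre_hooks:
            args, kwargs = hook(*args, **kwargs)
        result = self.func(*args, **kwargs)
        # Post-hooks receive the result in kwargs["result"] and may replace it
        for hook in self.post_hooks:
            result = hook(*args, result=result, **kwargs)
        return result

# Example: double the input before the call, negate the result after it.
# square(3): pre-hook 3 -> 6, func 6 -> 36, post-hook 36 -> -36
square = Hookable(lambda x: x * x)
square.register_pre_hook(lambda *a, **kw: ((a[0] * 2,), kw))
square.register_post_hook(lambda *a, **kw: -kw["result"])
```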

### Command Completion Support
Plugins can provide intelligent tab completion:
1. **Tree-based Completion (Recommended)**: Define `_connpy_tree(info)` returning a navigation dictionary.
2. **Legacy Completion**: Define `_connpy_completion(wordsnumber, words, info)`.

In the `Preload` class of the plugin system, you have the ability to customize the behavior of existing classes and methods within the application through a robust hooking system. This documentation explains how to use the `modify`, `register_pre_hook`, and `register_post_hook` methods to tailor plugin functionality to your needs.

---

#### Modifying Classes with `modify`
The `modify` method allows you to alter instances of a class at the time they are created or after their creation. This is particularly useful for setting or modifying configuration settings, altering default behaviors, or adding new functionality to existing classes without changing the original class definitions.

- **Usage**: Modify a class to include additional configurations or changes.
- **Modify Method Signature**:
  - `modify(modification_method)`: A function that is invoked with an instance of the class as its argument. This function should perform any modifications directly on this instance.
- **Modification Method Signature**:
  - **Arguments**:
    - `cls`: This function accepts a single argument, the class instance, which it then modifies.
- **Modifiable Classes**:
  - `connapp.config`
  - `connapp.node`
  - `connapp.nodes`
  - `connapp.ai`
- ```python
  def modify_config(cls):
      # Example modification: adding a new attribute or modifying an existing one
      cls.new_attribute = 'New Value'

  class Preload:
      def __init__(self, connapp):
          # Applying modification to the config class instance
          connapp.config.modify(modify_config)
  ```

## ⚙️ gRPC Service Architecture
Connpy can operate in a decoupled mode:
1. **Start the API (Server)**: `conn api -s 50051`
2. **Configure the Client**:
```bash
conn config --service-mode remote
conn config --remote-host localhost:50051
```

All inventory management and execution will now happen on the server.

---

#### Implementing Method Hooks
Two methods allow you to define custom logic to be executed before (`register_pre_hook`) or after (`register_post_hook`) the main logic of a method. This is particularly useful for logging, auditing, preprocessing inputs, postprocessing outputs, or adding functionality.

- **Usage**: Register hooks on methods to execute additional logic before or after the main method execution.
- **Registration Method Signatures**:
  - `register_pre_hook(pre_hook_method)`: A function that is invoked before the main method is executed. This function should do preprocessing of the arguments.
  - `register_post_hook(post_hook_method)`: A function that is invoked after the main method is executed. This function should do postprocessing of the outputs.
- **Method Signature for Pre-Hooks**:
  - `pre_hook_method(*args, **kwargs)`
  - **Arguments**:
    - `*args`, `**kwargs`: The arguments and keyword arguments that will be passed to the method being hooked. The pre-hook function has the opportunity to inspect and modify these arguments before they are passed to the main method.
  - **Return**:
    - Must return a tuple `(args, kwargs)`, which will be used as the new arguments for the main method. If the original arguments are not modified, the function should return them as received.
- **Method Signature for Post-Hooks**:
  - `post_hook_method(*args, **kwargs)`
  - **Arguments**:
    - `*args`, `**kwargs`: The arguments and keyword arguments that were passed to the main method.
    - `kwargs["result"]`: The value returned by the main method. This allows the post-hook to inspect and even alter the result before it is returned to the original caller.
  - **Return**:
    - Can return a modified result, which will replace the original result of the main method, or simply return `kwargs["result"]` to keep the original method result.
- ```python
  def pre_processing_hook(*args, **kwargs):
      print("Pre-processing logic here")
      # Modify arguments or perform any checks
      return args, kwargs  # Return modified or unmodified args and kwargs

  def post_processing_hook(*args, **kwargs):
      print("Post-processing logic here")
      # Modify the result or perform any final logging or cleanup
      return kwargs["result"]  # Return the modified or unmodified result

  class Preload:
      def __init__(self, connapp):
          # Registering a pre-hook
          connapp.ai.some_method.register_pre_hook(pre_processing_hook)

          # Registering a post-hook
          connapp.node.another_method.register_post_hook(post_processing_hook)
  ```

### Executable Block
- The plugin script can include an executable block:
  - `if __name__ == "__main__":`
- This block allows the plugin to be run as a standalone script for testing or independent use.

### Script Verification
- The `verify_script` method in `plugins.py` checks the plugin script's compliance with these standards.
- Non-compliant scripts are rejected to ensure consistency and proper functionality within the plugin system.

### Example Script

For a practical example of how to write a compatible plugin script, refer to the following example:

[Example Plugin Script](https://github.com/fluzzi/awspy)

This script demonstrates the required structure and implementation details according to the plugin system's standards.

## 🐍 Automation Module (API)
You can use `connpy` as a Python library for your own scripts.

### Basic Execution
```python
import connpy

router = connpy.node("uniqueName", "1.1.1.1", user="admin")
router.run(["term len 0", "show run"])
print(router.output)
hasip = router.test("show ip int brief", "1.1.1.1")
if hasip:
    print("Router has ip 1.1.1.1")
else:
    print("Router does not have ip 1.1.1.1")
```

### Using manager configuration
```python
import connpy

conf = connpy.configfile()
device = conf.getitem("router@office")
router = connpy.node("unique name", **device, config=conf)
result = router.run("show ip int brief")
print(result)
```
### Running parallel tasks on multiple devices
```python
import connpy

conf = connpy.configfile()
# You can get the nodes from the config from a folder, filtering in it
nodes = conf.getitem("@office", ["router1", "router2", "router3"])
# You can also get each node individually:
nodes = {}
nodes["router1"] = conf.getitem("router1@office")
nodes["router2"] = conf.getitem("router2@office")
nodes["router10"] = conf.getitem("router10@datacenter")
# Also, you can create the nodes manually:
nodes = {}
nodes["router1"] = {"host": "1.1.1.1", "user": "user", "password": "password1"}
nodes["router2"] = {"host": "1.1.1.2", "user": "user", "password": "password2"}
nodes["router3"] = {"host": "1.1.1.2", "user": "user", "password": "password3"}
# Finally you run some tasks on the nodes
mynodes = connpy.nodes(nodes, config=conf)
result = mynodes.test(["show ip int br"], "1.1.1.2")
for i in result:
    print("---" + i + "---")
    print(result[i])
    print()
# Or for one specific node
mynodes.router1.run(["term len 0", "show run"], folder="/home/user/logs")
```

### Using variables
```python
import connpy

config = connpy.configfile()
nodes = config.getitem("@office", ["router1", "router2", "router3"])
commands = []
commands.append("config t")
commands.append("interface lo {id}")
commands.append("ip add {ip} {mask}")
commands.append("end")
variables = {}
variables["router1@office"] = {"ip": "10.57.57.1"}
variables["router2@office"] = {"ip": "10.57.57.2"}
variables["router3@office"] = {"ip": "10.57.57.3"}
variables["__global__"] = {"id": "57"}
variables["__global__"]["mask"] = "255.255.255.255"
expected = "!"
routers = connpy.nodes(nodes, config=config)
routers.run(commands, variables)
routers.test("ping {ip}", expected, variables)
for key in routers.result:
    print(key, ' ---> ', ("pass" if routers.result[key] else "fail"))
```
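The per-node and `__global__` variables can be understood as a simple dictionary merge applied with `str.format` before each command is sent. The sketch below shows the idea (node-specific values overriding `__global__` is an assumption here; connpy's actual substitution logic may differ):

```python
def render(commands, node, variables):
    # Node-specific values override __global__ values, then fill the {placeholders}
    merged = {**variables.get("__global__", {}), **variables.get(node, {})}
    return [cmd.format(**merged) for cmd in commands]

variables = {
    "router1@office": {"ip": "10.57.57.1"},
    "__global__": {"id": "57", "mask": "255.255.255.255"},
}
print(render(["interface lo {id}", "ip add {ip} {mask}"], "router1@office", variables))
```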

### AI Programmatic Use
```python
import connpy

# Minimal usage, with settings taken from the config file
myai = connpy.ai(connpy.configfile())
response = myai.ask("What is the status of the BGP neighbors in the office?")

# Passing the OpenAI organization and API key explicitly
conf = connpy.configfile()
organization = 'openai-org'
api_key = "openai-key"
myia = connpy.ai(conf, organization, api_key)
result = myia.ask("go to router 1 and get me the full configuration", dryrun=False)
print(result)
```

## http API
With the Connpy API you can run commands on devices using http requests.

### 1. List Nodes

**Endpoint**: `/list_nodes`

**Method**: `POST`

**Description**: This route returns a list of nodes. It can also filter the list based on a given keyword.

#### Request Body:

```json
{
  "filter": "<keyword>"
}
```

* `filter` (optional): A keyword to filter the list of nodes. It returns only the nodes that contain the keyword. If not provided, the route returns the entire list of nodes.

#### Response:

- A JSON array containing the filtered list of nodes.

---

*For detailed developer notes and plugin hooks documentation, see the [Documentation](https://fluzzi.github.io/connpy/).*

### 2. Get Nodes

**Endpoint**: `/get_nodes`

**Method**: `POST`

**Description**: This route returns a dictionary of nodes with all their attributes. It can also filter the nodes based on a given keyword.

#### Request Body:

```json
{
  "filter": "<keyword>"
}
```

* `filter` (optional): A keyword to filter the nodes. It returns only the nodes that contain the keyword. If not provided, the route returns all nodes.

#### Response:

- A JSON object containing the filtered nodes and their attributes.

---

### 3. Run Commands

**Endpoint**: `/run_commands`

**Method**: `POST`

**Description**: This route runs commands on selected nodes based on the provided action, nodes, and commands. It also supports executing tests by providing expected results.

#### Request Body:

```json
{
  "action": "<action>",
  "nodes": "<nodes>",
  "commands": "<commands>",
  "expected": "<expected>",
  "options": "<options>"
}
```

* `action` (required): The action to be performed. Possible values: `run` or `test`.
* `nodes` (required): A list of nodes or a single node on which the commands will be executed. The nodes can be specified as individual node names or a node group with the `@` prefix. Node groups can also be specified as arrays with a list of nodes inside the group.
* `commands` (required): A list of commands to be executed on the specified nodes.
* `expected` (optional, only used when the action is `test`): A single expected result for the test.
* `options` (optional): Array to pass options to the run command; options are: `prompt`, `parallel`, `timeout`.

#### Response:

- A JSON object with the results of the executed commands on the nodes.
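A small client for this endpoint can be sketched with the standard library. The base URL (host and port) is an assumption for the example; point it at wherever your Connpy API is actually listening. `build_payload` mirrors the request body documented above, and `run_commands` performs the POST:

```python
import json
import urllib.request

API = "http://localhost:8048"  # assumed address of a running Connpy API instance

def build_payload(action, nodes, commands, expected=None, options=None):
    """Build the JSON body documented above for /run_commands."""
    payload = {"action": action, "nodes": nodes, "commands": commands}
    if expected is not None:
        payload["expected"] = expected
    if options is not None:
        payload["options"] = options
    return payload

def run_commands(payload, api=API):
    """POST the payload to /run_commands and return the parsed JSON response."""
    req = urllib.request.Request(
        api + "/run_commands",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload("run", ["@office"], ["uptime"], options=["parallel"])
```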

---

### 4. Ask AI

**Endpoint**: `/ask_ai`

**Method**: `POST`

**Description**: This route sends the request to the ChatGPT AI, which parses it into an understandable output for the application and then runs the request.

#### Request Body:

```json
{
  "input": "<user input request>",
  "dryrun": true
}
```

* `input` (required): The user input requesting the AI to perform an action on some devices or get the devices list.
* `dryrun` (optional): If set to true, it returns the parameters to run the request but does not run it. Default is false.

#### Response:

- A JSON array containing the action to run, the parameters, and the result of the action.
|
||||
|
||||
|
||||
|
||||
+368
-149
@@ -1,182 +1,402 @@
|
||||
#!/usr/bin/env python3
|
||||
'''
|
||||
<p align="center">
|
||||
<img src="https://nginx.gederico.dynu.net/images/CONNPY-resized.png" alt="App Logo">
|
||||
</p>
|
||||
## Connection manager
|
||||
|
||||
Connpy is a connection manager that allows you to store nodes to connect them fast and password free.
|
||||
|
||||
# Connpy
|
||||
[](https://pypi.org/pypi/connpy/)
|
||||
[](https://pypi.org/pypi/connpy/)
|
||||
[](https://github.com/fluzzi/connpy/blob/main/LICENSE)
|
||||
[](https://pypi.org/pypi/connpy/)
|
||||
### Features
|
||||
- You can generate profiles and reference them from nodes using @profilename so you dont
|
||||
need to edit multiple nodes when changing password or other information.
|
||||
- Nodes can be stored on @folder or @subfolder@folder to organize your devices. Then can
|
||||
be referenced using node@subfolder@folder or node@folder
|
||||
- If you have too many nodes. Get completion script using: conn config --completion.
|
||||
Or use fzf installing pyfzf and running conn config --fzf true
|
||||
- Create in bulk, copy, move, export and import nodes for easy management.
|
||||
- Run automation scripts in network devices.
|
||||
- use GPT AI to help you manage your devices.
|
||||
- Add plugins with your own scripts.
|
||||
- Much more!
|
||||
|
||||
**Connpy** is a powerful Connection Manager and Network Automation Platform for Linux, Mac, and Docker. It provides a unified interface for **SSH, SFTP, Telnet, kubectl, Docker pods, and AWS SSM**.

The v6 release introduces the **AI Copilot**, an interactive terminal assistant that understands your network context and helps you manage your infrastructure more intelligently.

## 🤖 AI Copilot (New in v6)
The AI Copilot is deeply integrated into your terminal workflow:
- **Terminal Context Awareness**: The Copilot can "see" your screen output, helping you diagnose errors or analyze command results in real time.
- **Hybrid Multi-Agent System**: Automatically escalates complex tasks between the **Network Engineer** (execution) and the **Network Architect** (strategy).
- **MCP Integration**: Dynamically load tools from external providers (6WIND, AWS, etc.) via the Model Context Protocol.
- **Interactive Chat**: Launch with `conn ai` for a collaborative troubleshooting session.

## Core Features
- **Multi-Protocol**: Native support for SSH, SFTP, Telnet, kubectl, Docker exec, and AWS SSM.
- **Context Management**: Set regex-based contexts to manage specific nodes across different environments (work, home, clients).
- **Advanced Inventory**:
  - Organize nodes in folders (`@folder`) and subfolders (`@subfolder@folder`).
  - Use Global Profiles (`@profilename`) to manage shared credentials easily.
  - Bulk creation, copying, moving, and export/import of nodes.
- **Modern UI**: High-performance terminal experience with `prompt-toolkit`, including:
  - Fuzzy search integration with `fzf`.
  - Advanced tab completion.
  - Syntax highlighting and customizable themes.
- **Automation Engine**: Run parallel tasks and playbooks on multiple devices with variable support.
- **Plugin System**: Build and execute custom Python scripts locally or on a remote gRPC server.
- **gRPC Architecture**: Fully decoupled Client/Server model for distributed management.
- **Privacy & Sync**: Local-first encrypted storage (RSA/OAEP) with optional Google Drive backup.

## Installation

```bash
pip install connpy
```

### Run it in Windows/Linux using Docker
```bash
git clone https://github.com/fluzzi/connpy
cd connpy
docker compose build

# Run it like a native app (completely silent)
docker compose run --rm --remove-orphans connpy-app [command]

# Pro tip: add this alias for a 100% native experience from any folder
alias conn='docker compose -f /path/to/connpy/docker-compose.yml run --rm --remove-orphans connpy-app'
```

---

## 🔒 Privacy & Integration

### Privacy Policy
Connpy is committed to protecting your privacy:
- **Local Storage**: All server addresses, usernames, and passwords are encrypted and stored **only** on your machine. No data is transmitted to our servers.
- **Data Access**: Data is used solely for managing and automating your connections.

### Google Integration
Used strictly for backup:
- **Backup**: Sync your encrypted configuration with your Google Drive account.
- **Scoped Access**: Connpy only accesses its own backup files.

---

## Usage

```bash
usage: conn [-h] [--add | --del | --mod | --show | --debug] [node|folder] [--sftp]
            conn {profile,move,copy,list,bulk,export,import,ai,run,api,plugin,config,sync,context} ...

positional arguments:
  node|folder    node[@subfolder][@folder]
                 Connect to specific node or show all matching nodes
                 [@subfolder][@folder]
                 Show all available connections globally or in the specified path

options:
  -h, --help         show this help message and exit
  -v, --version      Show version
  -a, --add          Add new node[@subfolder][@folder] or [@subfolder]@folder
  -r, --del, --rm    Delete node[@subfolder][@folder] or [@subfolder]@folder
  -e, --mod, --edit  Modify node[@subfolder][@folder]
  -s, --show         Show node[@subfolder][@folder]
  -d, --debug        Display all connection steps
  -t, --sftp         Connect using sftp instead of ssh

Commands:
  profile      Manage profiles
  move (mv)    Move node
  copy (cp)    Copy node
  list (ls)    List profiles, nodes or folders
  bulk         Add nodes in bulk
  export       Export connection folder to YAML file
  import       Import connection folder into config from YAML file
  ai           Make requests to an AI
  run          Run scripts or commands on nodes
  api          Start and stop the connpy api
  plugin       Manage plugins
  config       Manage app config
  sync         Sync config with Google
```

### Basic Examples
```bash
# Add a profile
conn profile --add office-user

# Add a folder and subfolder
conn --add @office
conn --add @datacenter@office

# Add a node with a profile
conn --add server1@datacenter@office --profile @myuser

# Connect to a node (fuzzy match)
conn server1

# Start the AI Copilot
conn ai

# Run a command on all nodes in a folder
conn run @office "uptime"
```

### Manage profiles
```
usage: conn profile [-h] (--add | --del | --mod | --show) profile

positional arguments:
  profile            Name of profile to manage

options:
  -h, --help         show this help message and exit
  -a, --add          Add new profile
  -r, --del, --rm    Delete profile
  -e, --mod, --edit  Modify profile
  -s, --show         Show profile
```

### Examples
```
conn profile --add office-user
conn --add @office
conn --add server@datacenter@office
conn --add pc@office
conn --show server@datacenter@office
conn pc@office
conn server
```

---

## Plugin Requirements for Connpy

### Remote Plugin Execution
When Connpy operates in remote mode, plugins are executed **transparently on the server**:
- The client automatically downloads the plugin source code (the `Parser` class context) to generate the local `argparse` structure and provide autocompletion.
- The execution phase (the `Entrypoint` class) is redirected via gRPC streams and runs in the server's memory.
- You can manage remote plugins using the `--remote` flag.

### General Structure
- The plugin script must define specific classes:
  1. **Class `Parser`**: Handles `argparse.ArgumentParser` initialization.
  2. **Class `Entrypoint`**: Main execution logic (receives `args`, `parser`, and `connapp`).
  3. **Class `Preload`**: (Optional) For modifying core app behavior or registering hooks.
- The plugin script must be a Python file.
- Only the following top-level elements are allowed in the plugin script:
  - Class definitions
  - Function definitions
  - Import statements
  - The `if __name__ == "__main__":` block for standalone execution
  - Pass statements

### Specific Class Requirements
- The plugin script must define specific classes with particular attributes and methods. Each class serves a distinct role within the plugin's architecture:
  1. **Class `Parser`**:
     - **Purpose**: Handles parsing of command-line arguments.
     - **Requirements**:
       - Must contain only one method: `__init__`.
       - The `__init__` method must initialize at least two attributes:
         - `self.parser`: An instance of `argparse.ArgumentParser`.
         - `self.description`: A string containing the description of the parser.
  2. **Class `Entrypoint`**:
     - **Purpose**: Acts as the entry point for plugin execution, utilizing parsed arguments and integrating with the main application.
     - **Requirements**:
       - Must have an `__init__` method that accepts exactly three parameters besides `self`:
         - `args`: Arguments passed to the plugin.
         - The parser instance (typically `self.parser` from the `Parser` class).
         - The connapp instance, used to interact with the Connpy app.
  3. **Class `Preload`**:
     - **Purpose**: Performs any necessary preliminary setup or configuration independent of the main parsing and entry logic.
     - **Requirements**:
       - Contains at least an `__init__` method that accepts the parameter `connapp` besides `self`.

### Class Dependencies and Combinations
- **Dependencies**:
  - `Parser` and `Entrypoint` are interdependent: if one is included, both must be present.
  - `Preload` is independent and may exist alone or alongside the other classes.
- **Valid Combinations**:
  - `Parser` and `Entrypoint` together.
  - `Preload` alone.
  - All three classes (`Parser`, `Entrypoint`, `Preload`).

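Putting the requirements above together, a minimal plugin skeleton might look like this. This is a sketch, not an official template: the `--name` flag, the greeting messages, and the description text are hypothetical, chosen only to illustrate the required class shapes.

```python
import argparse

class Parser:
    def __init__(self):
        # Required attributes: self.parser and self.description
        self.description = "Example plugin that greets the user"
        self.parser = argparse.ArgumentParser(description=self.description)
        self.parser.add_argument("--name", default="world", help="Who to greet")

class Entrypoint:
    def __init__(self, args, parser, connapp):
        # args: the parsed arguments; parser: the Parser's ArgumentParser;
        # connapp: the running Connpy application instance
        print(f"hello {args.name}")

class Preload:
    def __init__(self, connapp):
        # Optional: runs at startup, before parsing
        pass

if __name__ == "__main__":
    # Standalone execution for testing
    p = Parser()
    args = p.parser.parse_args()
    Entrypoint(args, p.parser, connapp=None)
```

Because `Parser` and `Entrypoint` are interdependent, dropping either one (while keeping the other) would make the script non-compliant; `Preload` alone is also valid.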
### Preload Modifications and Hooks

You can customize the behavior of core classes using hooks:
- **`modify(method)`**: Alter class instances (e.g., `connapp.config`, `connapp.ai`).
- **`register_pre_hook(method)`**: Logic to run before a method executes.
- **`register_post_hook(method)`**: Logic to run after a method executes.

In the `Preload` class of the plugin system, you can customize the behavior of existing classes and methods within the application through a robust hooking system. This documentation explains how to use the `modify`, `register_pre_hook`, and `register_post_hook` methods to tailor plugin functionality to your needs.

### Command Completion Support
Plugins can provide intelligent tab completion:
1. **Tree-based Completion (Recommended)**: Define `_connpy_tree(info)` returning a navigation dictionary.
2. **Legacy Completion**: Define `_connpy_completion(wordsnumber, words, info)`.

---

#### Modifying Classes with `modify`
The `modify` method allows you to alter instances of a class at the time they are created or afterwards. This is particularly useful for setting or modifying configuration settings, altering default behaviors, or adding new functionality to existing classes without changing the original class definitions.

- **Usage**: Modify a class to include additional configurations or changes.
- **Modify Method Signature**:
  - `modify(modification_method)`: A function that is invoked with an instance of the class as its argument. This function should perform any modifications directly on that instance.
- **Modification Method Signature**:
  - **Arguments**:
    - `cls`: The function accepts a single argument, the class instance, which it then modifies.
- **Modifiable Classes**:
  - `connapp.config`
  - `connapp.node`
  - `connapp.nodes`
  - `connapp.ai`
- Example:
  ```python
  def modify_config(cls):
      # Example modification: adding a new attribute or modifying an existing one
      cls.new_attribute = 'New Value'

  class Preload:
      def __init__(self, connapp):
          # Applying the modification to the config class instance
          connapp.config.modify(modify_config)
  ```

## ⚙️ gRPC Service Architecture
Connpy can operate in a decoupled mode:
1. **Start the API (Server)**: `conn api -s 50051`
2. **Configure the Client**:
```bash
conn config --service-mode remote
conn config --remote-host localhost:50051
```
All inventory management and execution will now happen on the server.

#### Implementing Method Hooks

Two methods allow you to define custom logic to be executed before (`register_pre_hook`) or after (`register_post_hook`) the main logic of a method. This is particularly useful for logging, auditing, preprocessing inputs, postprocessing outputs, or adding functionality.

- **Usage**: Register hooks on methods to execute additional logic before or after the main method execution.
- **Registration Method Signatures**:
  - `register_pre_hook(pre_hook_method)`: A function that is invoked before the main method executes. It should preprocess the arguments.
  - `register_post_hook(post_hook_method)`: A function that is invoked after the main method executes. It should postprocess the outputs.
- **Method Signature for Pre-Hooks**:
  - `pre_hook_method(*args, **kwargs)`
  - **Arguments**:
    - `*args`, `**kwargs`: The arguments and keyword arguments that will be passed to the hooked method. The pre-hook can inspect and modify these arguments before they are passed to the main method.
  - **Return**:
    - Must return a tuple `(args, kwargs)`, which will be used as the new arguments for the main method. If the original arguments are not modified, the function should return them as received.
- **Method Signature for Post-Hooks**:
  - `post_hook_method(*args, **kwargs)`
  - **Arguments**:
    - `*args`, `**kwargs`: The arguments and keyword arguments that were passed to the main method.
    - `kwargs["result"]`: The value returned by the main method. This allows the post-hook to inspect and even alter the result before it is returned to the original caller.
  - **Return**:
    - Can return a modified result, which replaces the original result of the main method, or simply return `kwargs["result"]` to keep the original result.
- Example:
  ```python
  def pre_processing_hook(*args, **kwargs):
      print("Pre-processing logic here")
      # Modify arguments or perform any checks
      return args, kwargs  # Return modified or unmodified args and kwargs

  def post_processing_hook(*args, **kwargs):
      print("Post-processing logic here")
      # Modify the result or perform any final logging or cleanup
      return kwargs["result"]  # Return the modified or unmodified result

  class Preload:
      def __init__(self, connapp):
          # Registering a pre-hook
          connapp.ai.some_method.register_pre_hook(pre_processing_hook)

          # Registering a post-hook
          connapp.node.another_method.register_post_hook(post_processing_hook)
  ```

### Executable Block
- The plugin script can include an executable block:
  - `if __name__ == "__main__":`
  - This block allows the plugin to be run as a standalone script for testing or independent use.

### Script Verification
- The `verify_script` method in `plugins.py` checks the plugin script's compliance with these standards.
- Non-compliant scripts are rejected to ensure consistency and proper functionality within the plugin system.

### Example Script

For a practical example of how to write a compatible plugin script, please refer to the following example:

[Example Plugin Script](https://github.com/fluzzi/awspy)

This script demonstrates the required structure and implementation details according to the plugin system's standards.

## HTTP API
With the Connpy API you can run commands on devices using HTTP requests.

### 1. List Nodes

**Endpoint**: `/list_nodes`

**Method**: `POST`

**Description**: This route returns a list of nodes. It can also filter the list based on a given keyword.

#### Request Body:

```json
{
    "filter": "<keyword>"
}
```

* `filter` (optional): A keyword to filter the list of nodes. Only the nodes that contain the keyword are returned. If not provided, the route returns the entire list of nodes.

#### Response:

- A JSON array containing the filtered list of nodes.

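As a quick sketch of calling this endpoint from Python using only the standard library. It assumes a Connpy API listening on localhost port 8048 (the default `conn api` port); the `list_nodes` helper name is ours, not part of Connpy.

```python
import json
from urllib import request

API = "http://localhost:8048"  # default `conn api` port; adjust if you changed it

def list_nodes(keyword=None):
    """POST to /list_nodes, optionally filtering by a keyword."""
    payload = json.dumps({"filter": keyword} if keyword else {}).encode()
    req = request.Request(f"{API}/list_nodes", data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

# Usage (requires a running API):
#     print(list_nodes("office"))
```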
---

### 2. Get Nodes

**Endpoint**: `/get_nodes`

**Method**: `POST`

**Description**: This route returns a dictionary of nodes with all their attributes. It can also filter the nodes based on a given keyword.

#### Request Body:

```json
{
    "filter": "<keyword>"
}
```

* `filter` (optional): A keyword to filter the nodes. Only the nodes that contain the keyword are returned. If not provided, the route returns all nodes.

#### Response:

- A JSON object containing the filtered nodes and their attributes.

---

### 3. Run Commands

**Endpoint**: `/run_commands`

**Method**: `POST`

**Description**: This route runs commands on selected nodes based on the provided action, nodes, and commands. It also supports executing tests by providing expected results.

#### Request Body:

```json
{
    "action": "<action>",
    "nodes": "<nodes>",
    "commands": "<commands>",
    "expected": "<expected>",
    "options": "<options>"
}
```

* `action` (required): The action to be performed. Possible values: `run` or `test`.
* `nodes` (required): A list of nodes or a single node on which the commands will be executed. Nodes can be specified as individual node names or as a node group with the `@` prefix. Node groups can also be specified as arrays with a list of nodes inside the group.
* `commands` (required): A list of commands to be executed on the specified nodes.
* `expected` (optional, only used when the action is `test`): A single expected result for the test.
* `options` (optional): An object with options for the run command; supported options are `prompt`, `parallel`, and `timeout`.

#### Response:

- A JSON object with the results of the executed commands on the nodes.

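A minimal client sketch for this endpoint, again assuming a local API on the default port 8048; the `run_commands` helper name is ours, and the body fields mirror the request schema above.

```python
import json
from urllib import request

API = "http://localhost:8048"  # default `conn api` port

def run_commands(nodes, commands, action="run", options=None):
    """POST to /run_commands on a running Connpy API."""
    body = {"action": action, "nodes": nodes, "commands": commands}
    if options:
        body["options"] = options  # e.g. {"parallel": 10, "timeout": 30}
    req = request.Request(f"{API}/run_commands",
                          data=json.dumps(body).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())

# Usage (requires a running API):
#     out = run_commands("@office", ["uptime"], options={"parallel": 10})
```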
---

### 4. Ask AI

**Endpoint**: `/ask_ai`

**Method**: `POST`

**Description**: This route sends the request to the ChatGPT AI, which parses it into an output the application understands, and then runs the request.

#### Request Body:

```json
{
    "input": "<user input request>",
    "dryrun": true or false
}
```

* `input` (required): The user input asking the AI to perform an action on some devices or to get the device list.
* `dryrun` (optional): If set to true, the route returns the parameters it would use to run the request without actually running it. Defaults to false.

#### Response:

- A JSON response containing the action to run, the parameters, and the result of the action.

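The same client pattern applies here; a sketch assuming a local API on port 8048 with AI credentials already configured. The `ask_ai` helper name is ours, and only the documented `input`/`dryrun` fields are used.

```python
import json
from urllib import request

API = "http://localhost:8048"  # default `conn api` port

def ask_ai(user_input, dryrun=False):
    """POST to /ask_ai; with dryrun=True the request is parsed but not executed."""
    body = {"input": user_input, "dryrun": dryrun}
    req = request.Request(f"{API}/ask_ai",
                          data=json.dumps(body).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())

# Usage (requires a running API and configured AI credentials):
#     print(ask_ai("get the device list", dryrun=True))
```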
## 🐍 Automation Module (API)
You can use `connpy` as a Python library for your own scripts.

### Standalone module
```python
import connpy

router = connpy.node("uniqueName", "1.1.1.1", user="admin", password="pass")
router.run(["term len 0", "show run"])
print(router.output)

hasip = router.test("show ip int brief", "1.1.1.1")
if hasip:
    print("Router has ip 1.1.1.1")
else:
    print("Router does not have ip 1.1.1.1")
```

### Using manager configuration
```python
import connpy

conf = connpy.configfile()
device = conf.getitem("server@office")
server = connpy.node("unique name", **device, config=conf)
result = server.run(["cd /", "ls -la"])
print(result)
```
### Running parallel tasks
```python
import connpy

conf = connpy.configfile()
# You can get the nodes from the config, from a folder, filtering inside it
nodes = conf.getitem("@office", ["router1", "router2", "router3"])
# You can also get each node individually:
nodes = {}
nodes["router1"] = conf.getitem("router1@office")
nodes["router2"] = conf.getitem("router2@office")
nodes["router10"] = conf.getitem("router10@datacenter")
# Also, you can create the nodes manually:
nodes = {}
nodes["router1"] = {"host": "1.1.1.1", "user": "user", "password": "pass1"}
nodes["router2"] = {"host": "1.1.1.2", "user": "user", "password": "pass2"}
nodes["router3"] = {"host": "1.1.1.3", "user": "user", "password": "pass3"}
# Finally you run some tasks on the nodes
mynodes = connpy.nodes(nodes, config=conf)
result = mynodes.test(["show ip int br"], "1.1.1.2")
for i in result:
    print("---" + i + "---")
    print(result[i])
    print()
# Or for one specific node
mynodes.router1.run(["term len 0", "show run"], folder="/home/user/logs")
```
### Using variables
```python
import connpy

config = connpy.configfile()
nodes = config.getitem("@office", ["router1", "router2", "router3"])
commands = []
commands.append("config t")
commands.append("interface lo {id}")
commands.append("ip add {ip} {mask}")
commands.append("end")
variables = {}
variables["router1@office"] = {"ip": "10.57.57.1"}
variables["router2@office"] = {"ip": "10.57.57.2"}
variables["router3@office"] = {"ip": "10.57.57.3"}
variables["__global__"] = {"id": "57"}
variables["__global__"]["mask"] = "255.255.255.255"
expected = "!"
routers = connpy.nodes(nodes, config=config)
routers.run(commands, variables)
routers.test("ping {ip}", expected, variables)
for key in routers.result:
    print(key, ' ---> ', ("pass" if routers.result[key] else "fail"))
```
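Conceptually, the brace placeholders behave like standard Python `str.format` fields, with each node's own variables merged over the `__global__` set. This is a sketch of the idea only, not Connpy's actual implementation; the `render` helper is hypothetical.

```python
def render(commands, node_key, variables):
    # Merge global variables with the node's own; node values win on conflict
    merged = {**variables.get("__global__", {}), **variables.get(node_key, {})}
    return [cmd.format(**merged) for cmd in commands]

cmds = ["interface lo {id}", "ip add {ip} {mask}"]
vars_ = {
    "router1@office": {"ip": "10.57.57.1"},
    "__global__": {"id": "57", "mask": "255.255.255.255"},
}
print(render(cmds, "router1@office", vars_))
# → ['interface lo 57', 'ip add 10.57.57.1 255.255.255.255']
```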
### AI Programmatic Use
```python
import connpy

conf = connpy.configfile()
organization = 'openai-org'
api_key = "openai-key"
myai = connpy.ai(conf, organization, api_key)
input = "go to router 1 and get me the full configuration"
result = myai.ask(input, dryrun=False)
print(result)
```

---

*For detailed developer notes and plugin hooks documentation, see the [Documentation](https://fluzzi.github.io/connpy/).*
'''
from .core import node, nodes
from .configfile import configfile
@@ -185,9 +405,9 @@ from .api import *
from .ai import ai
from .plugins import Plugins
from ._version import __version__
from . import printer
from pkg_resources import get_distribution

__all__ = ["node", "nodes", "configfile", "connapp", "ai", "Plugins", "printer"]
__author__ = "Federico Luzzi"
__pdoc__ = {
    'core': False,
@@ -202,6 +422,5 @@ __pdoc__ = {
    'node.deferred_class_hooks': False,
    'nodes.deferred_class_hooks': False,
    'connapp': False,
    'connapp.encrypt': True,
    'printer': False
}

+2
-1
@@ -1 +1,2 @@
__version__ = "6.0.0b8"
__version__ = "4.0.2"

+434
-1526
File diff suppressed because it is too large
+149
-74
@@ -1,111 +1,186 @@
from flask import Flask, request, jsonify
from connpy import configfile, node, nodes, hooks
from connpy.ai import ai as myai
from waitress import serve
import os
import signal
import time

# Suppress harmless but noisy gRPC fork() warnings from pexpect child processes
os.environ["GRPC_VERBOSITY"] = "NONE"
os.environ["GRPC_ENABLE_FORK_SUPPORT"] = "0"

from connpy import hooks, printer
from connpy.configfile import configfile
app = Flask(__name__)
conf = configfile()

PID_FILE1 = "/run/connpy.pid"
PID_FILE2 = "/tmp/connpy.pid"

def _wait_for_termination():
    try:
        while True:
            time.sleep(86400)
    except KeyboardInterrupt:
        pass

@app.route("/")
def root():
    return jsonify({
        'message': 'Welcome to Connpy api',
        'version': '1.0',
        'documentation': 'https://fluzzi.github.io/connpy/'
    })

@app.route("/list_nodes", methods=["POST"])
def list_nodes():
    conf = app.custom_config
    case = conf.config["case"]
    try:
        data = request.get_json()
        filter = data["filter"]
        if not case:
            if isinstance(filter, list):
                filter = [item.lower() for item in filter]
            else:
                filter = filter.lower()
        output = conf._getallnodes(filter)
    except:
        output = conf._getallnodes()
    return jsonify(output)

@app.route("/get_nodes", methods=["POST"])
def get_nodes():
    conf = app.custom_config
    case = conf.config["case"]
    try:
        data = request.get_json()
        filter = data["filter"]
        if not case:
            if isinstance(filter, list):
                filter = [item.lower() for item in filter]
            else:
                filter = filter.lower()
        output = conf._getallnodesfull(filter)
    except:
        output = conf._getallnodesfull()
    return jsonify(output)

@app.route("/ask_ai", methods=["POST"])
def ask_ai():
    conf = app.custom_config
    data = request.get_json()
    input = data["input"]
    if "dryrun" in data:
        dryrun = data["dryrun"]
    else:
        dryrun = False
    if "chat_history" in data:
        chat_history = data["chat_history"]
    else:
        chat_history = None
    ai = myai(conf)
    return ai.ask(input, dryrun, chat_history)

@app.route("/confirm", methods=["POST"])
def confirm():
    conf = app.custom_config
    data = request.get_json()
    input = data["input"]
    ai = myai(conf)
    return str(ai.confirm(input))

@app.route("/run_commands", methods=["POST"])
def run_commands():
    conf = app.custom_config
    data = request.get_json()
    case = conf.config["case"]
    mynodes = {}
    args = {}
    try:
        action = data["action"]
        nodelist = data["nodes"]
        args["commands"] = data["commands"]
        if action == "test":
            args["expected"] = data["expected"]
    except KeyError as e:
        error = "'{}' is mandatory".format(e.args[0])
        return {"DataError": error}
    if isinstance(nodelist, list):
        mynodes = conf.getitems(nodelist)
    else:
        if not case:
            nodelist = nodelist.lower()
        if nodelist.startswith("@"):
            mynodes = conf.getitem(nodelist)
        else:
            mynodes[nodelist] = conf.getitem(nodelist)

    mynodes = nodes(mynodes, config=conf)
    try:
        args["vars"] = data["vars"]
    except:
        pass
    try:
        options = data["options"]
        thisoptions = {k: v for k, v in options.items() if k in ["prompt", "parallel", "timeout"]}
        args.update(thisoptions)
    except:
        options = None
    if action == "run":
        output = mynodes.run(**args)
    elif action == "test":
        output = {}
        output["result"] = mynodes.test(**args)
        output["output"] = mynodes.output
    else:
        error = "Wrong action '{}'".format(action)
        return {"DataError": error}
    return output

@hooks.MethodHook
def stop_api():
    # Read the process ID (pid) from the file
    try:
        with open(PID_FILE1, "r") as f:
            pid = int(f.readline().strip())
            port_line = f.readline().strip()
            port = int(port_line) if port_line else None
        PID_FILE = PID_FILE1
    except (FileNotFoundError, ValueError, OSError):
        try:
            with open(PID_FILE2, "r") as f:
                pid = int(f.readline().strip())
                port_line = f.readline().strip()
                port = int(port_line) if port_line else None
            PID_FILE = PID_FILE2
        except (FileNotFoundError, ValueError, OSError):
            printer.warning("Connpy API server is not running.")
            return None
    # Send a SIGTERM signal to the process
    try:
        os.kill(pid, signal.SIGTERM)
    except OSError as e:
        printer.warning(f"Process kill failed (maybe already dead): {e}")
    # Delete the PID file
    os.remove(PID_FILE)
    printer.info(f"Server with process ID {pid} stopped.")
    return port

def debug_api(port=8048, config=None):
    from .grpc_layer.server import serve
    conf = config or configfile()
    server = serve(conf, port=port, debug=True)
    printer.info(f"gRPC Server running in debug mode on port {port}...")
    _wait_for_termination()
    server.stop(0)
    from .ai import cleanup
    cleanup()

def start_server(port=8048, config=None):
    try:
        import sys
        # Ensure project root is in path for the child process
        base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        if base_dir not in sys.path:
            sys.path.insert(0, base_dir)

        from connpy.grpc_layer.server import serve
        conf = config or configfile()
        server = serve(conf, port=port, debug=False)
        _wait_for_termination()
        server.stop(0)
        from .ai import cleanup
        cleanup()
    except Exception as e:
        printer.error(f"Background API failed to start: {e}")
        os._exit(1)

def start_api(port=8048, config=None):
|
||||
# Check if already running via PID file verification
|
||||
for pid_file in [PID_FILE1, PID_FILE2]:
|
||||
if os.path.exists(pid_file):
|
||||
try:
|
||||
with open(pid_file, "r") as f:
|
||||
pid = int(f.readline().strip())
|
||||
os.kill(pid, 0)
|
||||
# If we get here, process exists
|
||||
printer.info(f"API is already running (PID {pid})")
|
||||
@hooks.MethodHook
|
||||
def start_api(port=8048):
|
||||
if os.path.exists(PID_FILE1) or os.path.exists(PID_FILE2):
|
||||
print("Connpy server is already running.")
|
||||
return
|
||||
except (ValueError, OSError, ProcessLookupError):
|
||||
# Stale PID file, ignore here, start_api will overwrite
|
||||
pass
|
||||
|
||||
pid = os.fork()
|
||||
if pid == 0:
|
||||
# Child process: detached from terminal
|
||||
os.setsid()
|
||||
start_server(port, config=config)
|
||||
start_server(port)
|
||||
else:
|
||||
# Parent process: record PID and exit
|
||||
try:
|
||||
with open(PID_FILE1, "w") as f:
|
||||
f.write(str(pid) + "\n" + str(port))
|
||||
except OSError:
|
||||
except:
|
||||
try:
|
||||
with open(PID_FILE2, "w") as f:
|
||||
f.write(str(pid) + "\n" + str(port))
|
||||
except OSError:
|
||||
printer.error("Couldn't create PID file.")
|
||||
exit(1)
|
||||
printer.start(f"gRPC Server started with process ID {pid} on port {port}")
|
||||
except:
|
||||
print("Cound't create PID file")
|
||||
return
|
||||
print(f'Server is running with process ID {pid} in port {port}')
|
||||
|
||||
|
||||
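The new `start_api` above probes the PID recorded in a file with signal 0 (`os.kill(pid, 0)`) to distinguish a live server from a stale PID file. A minimal, self-contained sketch of that liveness-check pattern, assuming POSIX semantics (the names `pid_is_running` and `read_pid_file` are illustrative, not from the diff):

```python
import os

def pid_is_running(pid: int) -> bool:
    """Signal 0 performs existence/permission checks without delivering a signal."""
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        # No such process: the PID file is stale.
        return False
    except PermissionError:
        # The process exists but belongs to another user.
        return True

def read_pid_file(path: str):
    """Return the PID on the first line of a PID file, or None if missing/stale/invalid."""
    try:
        with open(path) as f:
            pid = int(f.readline().strip())
    except (OSError, ValueError):
        return None
    return pid if pid_is_running(pid) else None
```

If `read_pid_file` returns `None`, the caller can safely overwrite the file, which matches the "stale PID file, ignore here" branch in the diff.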
@@ -1,10 +0,0 @@
from .node_handler import NodeHandler
from .profile_handler import ProfileHandler
from .config_handler import ConfigHandler
from .run_handler import RunHandler
from .ai_handler import AIHandler
from .api_handler import APIHandler
from .plugin_handler import PluginHandler
from .import_export_handler import ImportExportHandler
from .context_handler import ContextHandler

@@ -1,251 +0,0 @@
import sys
from rich.panel import Panel
from rich.markdown import Markdown
from rich.rule import Rule
from rich.prompt import Prompt

from .. import printer

console = printer.console
mdprint = console.print

class AIHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        if args.list_sessions:
            sessions = self.app.services.ai.list_sessions()
            if not sessions:
                printer.info("No saved AI sessions found.")
                return
            columns = ["ID", "Title", "Created At", "Model"]
            rows = [[s["id"], s["title"], s["created_at"], s["model"]] for s in sessions]
            printer.table("AI Persisted Sessions", columns, rows)
            return

        if args.delete_session:
            try:
                self.app.services.ai.delete_session(args.delete_session[0])
                printer.success(f"Session {args.delete_session[0]} deleted.")
            except Exception as e:
                printer.error(str(e))
            return

        if args.mcp is not None:
            return self.configure_mcp(args)

        # Determine which session_id to resume
        session_id = None
        if args.resume:
            sessions = self.app.services.ai.list_sessions()
            session_id = sessions[0]["id"] if sessions else None
            if not session_id:
                printer.warning("No previous session found to resume.")
        elif args.session:
            session_id = args.session[0]

        # Configure additional arguments for the AI service
        # Priority: CLI args > local configuration
        settings = self.app.services.config_svc.get_settings().get("ai", {})
        arguments = {}

        for key in ["engineer_model", "engineer_api_key", "architect_model", "architect_api_key"]:
            cli_val = getattr(args, key, None)
            if cli_val:
                arguments[key] = cli_val[0]
            elif settings.get(key):
                arguments[key] = settings.get(key)

        # Check keys only if running in local mode (not remote)
        if getattr(self.app.services, "mode", "local") == "local":
            if not arguments.get("engineer_api_key"):
                printer.error("Engineer API key not configured. The chat cannot start.")
                printer.info("Use 'connpy config --engineer-api-key <key>' to set it.")
                sys.exit(1)
            if not arguments.get("architect_api_key"):
                printer.warning("Architect API key not configured. Architect will be unavailable.")
                printer.info("Use 'connpy config --architect-api-key <key>' to enable it.")

        # The CLI hands the rest of the interaction to the underlying agent
        self.app.myai = self.app.services.ai
        self.ai_overrides = arguments

        if args.ask:
            self.single_question(args, session_id)
        else:
            self.interactive_chat(args, session_id)

    def single_question(self, args, session_id):
        query = " ".join(args.ask)
        with console.status("[ai_status]Agent is thinking and analyzing...") as status:
            result = self.app.myai.ask(query, status=status, debug=args.debug, session_id=session_id, trust=args.trust, **self.ai_overrides)

        responder = result.get("responder", "engineer")
        border = "architect" if responder == "architect" else "engineer"
        title = "[architect][bold]Network Architect[/bold][/architect]" if responder == "architect" else "[engineer][bold]Network Engineer[/bold][/engineer]"

        if not result.get("streamed"):
            mdprint(Panel(Markdown(result["response"]), title=title, border_style=border, expand=False))

        if "usage" in result:
            u = result["usage"]
            console.print(f"[debug]Tokens: {u['total']} (Input: {u['input']}, Output: {u['output']})[/debug]")

    def interactive_chat(self, args, session_id):
        history = None
        if session_id:
            session_data = self.app.myai.load_session_data(session_id)
            if session_data:
                history = session_data.get("history", [])
                mdprint(Rule(title=f"[header] Resuming Session: {session_data.get('title')} [/header]", style="border"))
                if history:
                    mdprint(f"[debug]Analyzing {len(history)} previous messages...[/debug]\n")
            else:
                printer.error(f"Could not load session {session_id}. Starting clean.")

        if not history:
            mdprint(Rule(style="engineer"))
            mdprint(Markdown("**Networking Expert Agent**: Hi! I'm your assistant. I can help you diagnose issues, run commands, and manage your nodes.\nType 'exit' to quit.\n"))
            mdprint(Rule(style="engineer"))

        while True:
            try:
                user_query = Prompt.ask("[user_prompt]User[/user_prompt]")
                if not user_query.strip(): continue
                if user_query.lower() in ['exit', 'quit', 'bye', 'cancel']: break

                with console.status("[ai_status]Agent is thinking...") as status:
                    result = self.app.myai.ask(user_query, chat_history=history, status=status, debug=args.debug, trust=args.trust, **self.ai_overrides)

                new_history = result.get("chat_history")
                if new_history is not None:
                    history = new_history

                responder = result.get("responder", "engineer")
                border = "architect" if responder == "architect" else "engineer"
                title = "[architect][bold]Network Architect[/bold][/architect]" if responder == "architect" else "[engineer][bold]Network Engineer[/bold][/engineer]"

                if not result.get("streamed"):
                    response_text = result.get("response", "")
                    if response_text:
                        mdprint(Panel(Markdown(response_text), title=title, border_style=border, expand=False))

                if "usage" in result:
                    u = result["usage"]
                    console.print(f"[debug]Tokens: {u['total']} (Input: {u['input']}, Output: {u['output']})[/debug]")
            except (KeyboardInterrupt, EOFError):
                console.print("\n[dim]Session closed.[/dim]")
                break

    def configure_mcp(self, args):
        """Handle MCP server configuration via CLI tokens or interactive wizard."""
        mcp_args = args.mcp

        # 1. Non-interactive CLI mode (if arguments are provided)
        if mcp_args:
            action = mcp_args[0].lower()

            if action == "list":
                settings = self.app.services.config_svc.get_settings()
                mcp_servers = settings.get("ai", {}).get("mcp_servers", {})
                if not mcp_servers:
                    printer.info("No MCP servers configured.")
                else:
                    columns = ["Name", "URL", "Enabled", "Auto-load OS"]
                    rows = []
                    for name, cfg in mcp_servers.items():
                        rows.append([
                            name,
                            cfg.get("url", ""),
                            "[green]Yes[/green]" if cfg.get("enabled", True) else "[red]No[/red]",
                            cfg.get("auto_load_on_os", "Any")
                        ])
                    printer.table("Configured MCP Servers", columns, rows)
                return

            elif action == "add":
                if len(mcp_args) < 3:
                    printer.error("Usage: connpy ai --mcp add <name> <url> [os_filter]")
                    return
                name, url = mcp_args[1], mcp_args[2]
                os_filter = mcp_args[3] if len(mcp_args) > 3 else None
                try:
                    self.app.services.ai.configure_mcp(name, url=url, auto_load_on_os=os_filter)
                    printer.success(f"MCP server '{name}' added/updated.")
                except Exception as e:
                    printer.error(str(e))
                return

            elif action == "remove":
                if len(mcp_args) < 2:
                    printer.error("Usage: connpy ai --mcp remove <name>")
                    return
                name = mcp_args[1]
                try:
                    self.app.services.ai.configure_mcp(name, remove=True)
                    printer.success(f"MCP server '{name}' removed.")
                except Exception as e:
                    printer.error(str(e))
                return

            elif action in ["enable", "disable"]:
                if len(mcp_args) < 2:
                    printer.error(f"Usage: connpy ai --mcp {action} <name>")
                    return
                name = mcp_args[1]
                enabled = (action == "enable")
                try:
                    self.app.services.ai.configure_mcp(name, enabled=enabled)
                    printer.success(f"MCP server '{name}' {'enabled' if enabled else 'disabled'}.")
                except Exception as e:
                    printer.error(str(e))
                return

            else:
                printer.error(f"Unknown MCP action: {action}")
                printer.info("Available actions: list, add, remove, enable, disable")
                return

        # 2. Interactive wizard mode (if no arguments provided)
        # Import forms dynamically to avoid circular dependencies if any
        if not hasattr(self.app, "cli_forms"):
            from .forms import Forms
            self.app.cli_forms = Forms(self.app)

        settings = self.app.services.config_svc.get_settings()
        mcp_servers = settings.get("ai", {}).get("mcp_servers", {})

        result = self.app.cli_forms.mcp_wizard(mcp_servers)
        if not result:
            return

        action = result["action"]
        try:
            if action == "list":
                # Recursive call to the non-interactive list logic
                args.mcp = ["list"]
                return self.configure_mcp(args)

            elif action == "add":
                self.app.services.ai.configure_mcp(
                    result["name"],
                    url=result["url"],
                    enabled=result["enabled"],
                    auto_load_on_os=result["os"]
                )
                printer.success(f"MCP server '{result['name']}' saved.")

            elif action == "update":  # Used for toggle
                self.app.services.ai.configure_mcp(
                    result["name"],
                    enabled=result["enabled"]
                )
                printer.success(f"MCP server '{result['name']}' updated.")

            elif action == "remove":
                self.app.services.ai.configure_mcp(result["name"], remove=True)
                printer.success(f"MCP server '{result['name']}' removed.")

        except Exception as e:
            printer.error(str(e))

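`AIHandler.dispatch` above merges per-key overrides with the precedence "CLI args > local configuration", skipping keys that are set in neither place. That merge can be isolated as a pure function, sketched here under the assumption (as in argparse with `nargs=1`) that CLI values arrive as one-element lists; `resolve_ai_overrides` is an illustrative name, not part of connpy:

```python
def resolve_ai_overrides(cli_args: dict, settings: dict, keys) -> dict:
    """For each key: a truthy CLI value wins, else fall back to settings, else omit."""
    arguments = {}
    for key in keys:
        cli_val = cli_args.get(key)
        if cli_val:
            arguments[key] = cli_val[0]  # argparse nargs=1 stores values as lists
        elif settings.get(key):
            arguments[key] = settings[key]
    return arguments
```

Keeping the merge pure makes the precedence rule trivially testable, independent of the CLI plumbing.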
@@ -1,53 +0,0 @@
import sys
from .. import printer
from ..services.exceptions import ConnpyError

class APIHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        try:
            status = self.app.services.system.get_api_status()

            if args.command == "stop":
                if not status["running"]:
                    printer.warning("API does not seem to be running.")
                else:
                    stopped = self.app.services.system.stop_api()
                    if stopped:
                        printer.success("API stopped successfully.")

            elif args.command == "restart":
                port = args.data if args.data and isinstance(args.data, int) else None
                if status["running"]:
                    printer.info(f"Stopping server with process ID {status['pid']}...")

                # Service handles port preservation if port is None
                self.app.services.system.restart_api(port=port)

                if status["running"]:
                    printer.info(f"Server with process ID {status['pid']} stopped.")

                # Re-fetch status to show the actual port used
                new_status = self.app.services.system.get_api_status()
                printer.success(f"API restarted on port {new_status.get('port', 'unknown')}.")

            elif args.command == "start":
                if status["running"]:
                    msg = f"Connpy server is already running (PID: {status['pid']}"
                    if status.get("port"):
                        msg += f", Port: {status['port']}"
                    msg += ")."
                    printer.warning(msg)
                else:
                    port = args.data if args.data and isinstance(args.data, int) else 8048
                    self.app.services.system.start_api(port=port)
                    printer.success(f"API started on port {port}.")

            elif args.command == "debug":
                port = args.data if args.data and isinstance(args.data, int) else 8048
                self.app.services.system.debug_api(port=port)
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

@@ -1,135 +0,0 @@
import sys
import yaml
from .. import printer
from ..services.exceptions import ConnpyError, InvalidConfigurationError
from .help_text import get_instructions

class ConfigHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        actions = {
            "completion": self.show_completion,
            "fzf_wrapper": self.show_fzf_wrapper,
            "case": self.set_case,
            "fzf": self.set_fzf,
            "idletime": self.set_idletime,
            "configfolder": self.set_configfolder,
            "theme": self.set_theme,
            "engineer_model": self.set_ai_config,
            "engineer_api_key": self.set_ai_config,
            "architect_model": self.set_ai_config,
            "architect_api_key": self.set_ai_config,
            "trusted_commands": self.set_ai_config,
            "service_mode": self.set_service_mode,
            "remote_host": self.set_remote_host,
            "sync_remote": self.set_sync_remote
        }
        handler = actions.get(getattr(args, "command", None))
        if handler:
            return handler(args)

        # If no specific command was triggered, show current configuration
        return self.show_config(args)

    def show_config(self, args):
        settings = self.app.services.config_svc.get_settings()
        yaml_str = yaml.dump(settings, sort_keys=False, default_flow_style=False)
        printer.data("Current Configuration", yaml_str)

    def set_service_mode(self, args):
        new_mode = args.data[0]
        if new_mode == "remote":
            settings = self.app.services.config_svc.get_settings()
            if not settings.get("remote_host"):
                printer.error("Remote host must be configured before switching to remote mode")
                return

        self.app.services.config_svc.update_setting("service_mode", new_mode)

        # Immediate sync of fzf/text cache files for the new mode
        try:
            # 1. Clear old cache files to avoid discrepancies if fetch fails
            self.app.config._generate_nodes_cache(nodes=[], folders=[], profiles=[])

            # 2. Re-initialize services for the new mode
            from ..services.provider import ServiceProvider
            settings = self.app.services.config_svc.get_settings()
            new_services = ServiceProvider(self.app.config, mode=new_mode, remote_host=settings.get("remote_host"))

            # 3. Fetch data from new mode and generate cache
            nodes = new_services.nodes.list_nodes()
            folders = new_services.nodes.list_folders()
            profiles = new_services.profiles.list_profiles()
            new_services.nodes.generate_cache(nodes=nodes, folders=folders, profiles=profiles)

            printer.success("Config saved")
        except Exception as e:
            printer.success("Config saved")
            printer.warning(f"Note: Could not synchronize fzf cache: {e}")

    def set_remote_host(self, args):
        self.app.services.config_svc.update_setting("remote_host", args.data[0])
        printer.success("Config saved")

    def set_theme(self, args):
        try:
            valid_styles = self.app.services.config_svc.apply_theme_from_file(args.data[0])
            # Apply immediately to current session
            printer.apply_theme(valid_styles)
            printer.success(f"Theme '{args.data[0]}' applied and saved")
        except (ConnpyError, InvalidConfigurationError) as e:
            printer.error(str(e))

    def show_fzf_wrapper(self, args):
        print(get_instructions("fzf_wrapper_" + args.data[0]))

    def show_completion(self, args):
        print(get_instructions(args.data[0] + "completion"))

    def set_case(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("case", val)
        self.app.case = val
        printer.success("Config saved")

    def set_fzf(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("fzf", val)
        self.app.fzf = val
        printer.success("Config saved")

    def set_idletime(self, args):
        try:
            val = max(0, int(args.data[0]))
            self.app.services.config_svc.update_setting("idletime", val)
            printer.success("Config saved")
        except ValueError:
            printer.error("Keepalive must be an integer.")

    def set_configfolder(self, args):
        try:
            self.app.services.config_svc.set_config_folder(args.data[0])
            printer.success("Config saved")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def set_sync_remote(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("sync_remote", val)
        self.app.services.sync.sync_remote = val
        printer.success("Config saved")

    def set_ai_config(self, args):
        try:
            settings = self.app.services.config_svc.get_settings()
            aiconfig = settings.get("ai", {})
            aiconfig[args.command] = args.data[0]
            self.app.services.config_svc.update_setting("ai", aiconfig)
            printer.success("Config saved")
        except ConnpyError as e:
            printer.error(str(e))

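`ConfigHandler.dispatch` above routes each config subcommand through a dict of bound methods, falling back to showing the current configuration when no command matches. A generic sketch of that table-dispatch pattern (`make_dispatcher` is an illustrative helper, not part of connpy):

```python
def make_dispatcher(actions: dict, default):
    """Build a dispatcher that maps a command name to its handler, with a fallback."""
    def dispatch(command, *args):
        handler = actions.get(command, default)
        return handler(*args)
    return dispatch
```

Compared to a long if/elif chain, the dict keeps routing declarative: adding a subcommand is one entry, and several keys can share a handler (as the `*_model`/`*_api_key` keys share `set_ai_config` above).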
@@ -1,77 +0,0 @@
import sys
import yaml
from .. import printer
from ..services.exceptions import ConnpyError

class ContextHandler:
    def __init__(self, app):
        self.app = app
        self.service = self.app.services.context

    def dispatch(self, args):
        try:
            if args.add:
                if len(args.add) < 2:
                    printer.error("--add requires name and at least one regex")
                    return
                self.service.add_context(args.add[0], args.add[1:])
                printer.success(f"Context '{args.add[0]}' added successfully.")

            elif args.rm:
                if not args.context_name:
                    printer.error("--rm requires a context name")
                    return
                self.service.delete_context(args.context_name)
                printer.success(f"Context '{args.context_name}' deleted successfully.")

            elif args.ls:
                contexts = self.service.list_contexts()
                for ctx in contexts:
                    if ctx["active"]:
                        printer.success(f"{ctx['name']} (active)")
                    else:
                        printer.custom(" ", ctx["name"])

            elif args.set:
                if not args.context_name:
                    printer.error("--set requires a context name")
                    return
                self.service.set_active_context(args.context_name)
                printer.success(f"Context set to: {args.context_name}")

            elif args.show:
                if not args.context_name:
                    printer.error("--show requires a context name")
                    return
                contexts = self.service.contexts
                if args.context_name not in contexts:
                    printer.error(f"Context '{args.context_name}' does not exist")
                    return
                yaml_output = yaml.dump(contexts[args.context_name], sort_keys=False, default_flow_style=False)
                printer.custom(args.context_name, "")
                print(yaml_output)

            elif args.edit:
                if len(args.edit) < 2:
                    printer.error("--edit requires name and at least one regex")
                    return
                self.service.update_context(args.edit[0], args.edit[1:])
                printer.success(f"Context '{args.edit[0]}' modified successfully.")

            else:
                # Default behavior if no flags: show list
                self.dispatch_ls(args)

        except ValueError as e:
            printer.error(str(e))
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def dispatch_ls(self, args):
        contexts = self.service.list_contexts()
        for ctx in contexts:
            if ctx["active"]:
                printer.success(f"{ctx['name']} (active)")
            else:
                printer.custom(" ", ctx["name"])

@@ -1,280 +0,0 @@
import ast
import inquirer
from .validators import Validators

class Forms:
    def __init__(self, app):
        self.app = app
        self.validators = Validators(app)

    def questions_edit(self):
        questions = []
        questions.append(inquirer.Confirm("host", message="Edit Hostname/IP?"))
        questions.append(inquirer.Confirm("protocol", message="Edit Protocol/app?"))
        questions.append(inquirer.Confirm("port", message="Edit Port?"))
        questions.append(inquirer.Confirm("options", message="Edit Options?"))
        questions.append(inquirer.Confirm("logs", message="Edit logging path/file?"))
        questions.append(inquirer.Confirm("tags", message="Edit tags?"))
        questions.append(inquirer.Confirm("jumphost", message="Edit jumphost?"))
        questions.append(inquirer.Confirm("user", message="Edit User?"))
        questions.append(inquirer.Confirm("password", message="Edit password?"))
        return inquirer.prompt(questions)

    def questions_nodes(self, unique, uniques=None, edit=None):
        try:
            defaults = self.app.services.nodes.get_node_details(unique)
            if "tags" not in defaults:
                defaults["tags"] = ""
            if "jumphost" not in defaults:
                defaults["jumphost"] = ""
        except Exception:
            defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "password": "", "jumphost": ""}
        node = {}
        if edit is None:
            edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
        questions = []
        if edit["host"]:
            questions.append(inquirer.Text("host", message="Add Hostname or IP", validate=self.validators.host_validation, default=defaults["host"]))
        else:
            node["host"] = defaults["host"]
        if edit["protocol"]:
            questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation, default=defaults["protocol"]))
        else:
            node["protocol"] = defaults["protocol"]
        if edit["port"]:
            questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation, default=defaults["port"]))
        else:
            node["port"] = defaults["port"]
        if edit["options"]:
            questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation, default=defaults["options"]))
        else:
            node["options"] = defaults["options"]
        if edit["logs"]:
            questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation, default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
        else:
            node["logs"] = defaults["logs"]
        if edit["tags"]:
            questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
        else:
            node["tags"] = defaults["tags"]
        if edit["jumphost"]:
            questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
        else:
            node["jumphost"] = defaults["jumphost"]
        if edit["user"]:
            questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation, default=defaults["user"]))
        else:
            node["user"] = defaults["user"]
        if edit["password"]:
            questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))
        else:
            node["password"] = defaults["password"]

        answer = inquirer.prompt(questions)
        if answer is None:
            return False

        if "password" in answer:
            if answer["password"] == "Local Password":
                passq = [inquirer.Password("password", message="Set Password")]
                passa = inquirer.prompt(passq)
                if passa is None:
                    return False
                answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
            elif answer["password"] == "Profiles":
                passq = [(inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation))]
                passa = inquirer.prompt(passq)
                if passa is None:
                    return False
                answer["password"] = passa["password"].split(",")
            elif answer["password"] == "No Password":
                answer["password"] = ""

        if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
            answer["tags"] = ast.literal_eval(answer["tags"])

        result = {**uniques, **answer, **node}
        result["type"] = "connection"
        return result

    def questions_profiles(self, unique, edit=None):
        try:
            defaults = self.app.services.profiles.get_profile(unique, resolve=False)
            if "tags" not in defaults:
                defaults["tags"] = ""
            if "jumphost" not in defaults:
                defaults["jumphost"] = ""
        except Exception:
            defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "jumphost": ""}
        profile = {}
        if edit is None:
            edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
        questions = []
        if edit["host"]:
            questions.append(inquirer.Text("host", message="Add Hostname or IP", default=defaults["host"]))
        else:
            profile["host"] = defaults["host"]
        if edit["protocol"]:
            questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.profile_protocol_validation, default=defaults["protocol"]))
        else:
            profile["protocol"] = defaults["protocol"]
        if edit["port"]:
            questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.profile_port_validation, default=defaults["port"]))
        else:
            profile["port"] = defaults["port"]
        if edit["options"]:
            questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", default=defaults["options"]))
        else:
            profile["options"] = defaults["options"]
        if edit["logs"]:
            questions.append(inquirer.Text("logs", message="Pick logging path/file ", default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
        else:
            profile["logs"] = defaults["logs"]
        if edit["tags"]:
            questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.profile_tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
        else:
            profile["tags"] = defaults["tags"]
        if edit["jumphost"]:
            questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.profile_jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
        else:
            profile["jumphost"] = defaults["jumphost"]
        if edit["user"]:
            questions.append(inquirer.Text("user", message="Pick username", default=defaults["user"]))
        else:
            profile["user"] = defaults["user"]
        if edit["password"]:
            questions.append(inquirer.Password("password", message="Set Password"))
        else:
            profile["password"] = defaults["password"]

        answer = inquirer.prompt(questions)
        if answer is None:
            return False

        if "password" in answer:
            if answer["password"] != "":
                answer["password"] = self.app.services.config_svc.encrypt_password(answer["password"])

        if "tags" in answer and answer["tags"]:
            answer["tags"] = ast.literal_eval(answer["tags"])

        result = {**answer, **profile}
        result["id"] = unique
        return result

    def questions_bulk(self, nodes="", hosts=""):
        questions = []
        questions.append(inquirer.Text("ids", message="add a comma separated list of nodes to add", default=nodes, validate=self.validators.bulk_node_validation))
        questions.append(inquirer.Text("location", message="Add a @folder, @subfolder@folder or leave empty", validate=self.validators.bulk_folder_validation))
        questions.append(inquirer.Text("host", message="Add comma separated list of Hostnames or IPs", default=hosts, validate=self.validators.bulk_host_validation))
        questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation))
        questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation))
        questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation))
        questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation))
        questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation))
        questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation))
        questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation))
        questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))

        answer = inquirer.prompt(questions)
        if answer is None:
            return False

        if "password" in answer:
            if answer["password"] == "Local Password":
                passq = [inquirer.Password("password", message="Set Password")]
                passa = inquirer.prompt(passq)
                answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
            elif answer["password"] == "Profiles":
                passq = [(inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation))]
                passa = inquirer.prompt(passq)
                answer["password"] = passa["password"].split(",")
            elif answer["password"] == "No Password":
                answer["password"] = ""

        answer["type"] = "connection"
        if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
            answer["tags"] = ast.literal_eval(answer["tags"])

        return answer

    def mcp_wizard(self, mcp_servers):
        """Interactive wizard to manage MCP servers."""
        from .helpers import theme

        while True:
            options = [
                ("List Configured Servers", "list"),
                ("Add/Update Server", "add"),
                ("Enable/Disable Server", "toggle"),
                ("Remove Server", "remove"),
                ("Back", "exit")
            ]

            questions = [
                inquirer.List("action", message="MCP Configuration", choices=options)
            ]

            answers = inquirer.prompt(questions, theme=theme)
            if not answers or answers["action"] == "exit":
                return None

            action = answers["action"]

            if action == "list":
                if not mcp_servers:
                    print("\nNo MCP servers configured.\n")
                else:
                    return {"action": "list"}

            elif action == "add":
                questions = [
                    inquirer.Text("name", message="Server Name (identifier)"),
                    inquirer.Text("url", message="SSE URL (e.g., http://localhost:8000/sse)"),
                    inquirer.Confirm("enabled", message="Enabled?", default=True),
                    inquirer.Text("auto_load_os", message="Auto-load on specific OS (blank for any)")
                ]
                answers = inquirer.prompt(questions, theme=theme)
                if answers:
                    return {
                        "action": "add",
                        "name": answers["name"],
                        "url": answers["url"],
                        "enabled": answers["enabled"],
                        "os": answers["auto_load_os"]
                    }

            elif action == "toggle":
                if not mcp_servers:
                    print("\nNo servers to toggle.\n")
                    continue

                choices = []
                for name, cfg in mcp_servers.items():
                    status = "[Enabled]" if cfg.get("enabled", True) else "[Disabled]"
                    choices.append((f"{name} {status}", name))

                questions = [
                    inquirer.List("name", message="Select server to toggle", choices=choices + [("Cancel", None)])
                ]
                answers = inquirer.prompt(questions, theme=theme)
                if answers and answers["name"]:
                    current = mcp_servers[answers["name"]].get("enabled", True)
                    return {
                        "action": "update",
                        "name": answers["name"],
                        "enabled": not current
                    }

            elif action == "remove":
                if not mcp_servers:
                    print("\nNo servers to remove.\n")
                    continue

                questions = [
                    inquirer.List("name", message="Select server to remove", choices=list(mcp_servers.keys()) + ["Cancel"])
                ]
                answers = inquirer.prompt(questions, theme=theme)
                if answers and answers["name"] != "Cancel":
|
||||
return {"action": "remove", "name": answers["name"]}
|
||||
return None
|
||||
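The wizard above only gathers intent and hands back a small action dict; applying it is left to the caller. A hypothetical caller sketch (the function name and the server-dict layout are mine, inferred from the keys the wizard returns):

```python
# Hypothetical dispatcher for the dicts returned by mcp_wizard().
def apply_mcp_action(result, mcp_servers):
    if result is None:
        return mcp_servers
    action = result["action"]
    if action == "add":
        # Add or overwrite the named server entry
        mcp_servers[result["name"]] = {
            "url": result["url"],
            "enabled": result["enabled"],
            "os": result["os"],
        }
    elif action == "update":
        # Toggle path only carries the new enabled flag
        mcp_servers[result["name"]]["enabled"] = result["enabled"]
    elif action == "remove":
        mcp_servers.pop(result["name"], None)
    return mcp_servers
```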
@@ -1,215 +0,0 @@
import os

def get_help(type, parsers=None):
    if type == "export":
        return "Export /path/to/file.yml \\[@subfolder1]\\[@folder1] \\[@subfolderN]\\[@folderN]"
    if type == "import":
        return "Import /path/to/file.yml"
    if type == "node":
        return "node\\[@subfolder]\\[@folder]\nConnect to specific node or show all matching nodes\n\\[@subfolder]\\[@folder]\nShow all available connections globally or in specified path"
    if type == "usage":
        commands = []
        for subcommand, subparser in parsers.choices.items():
            if subparser.description is not None:
                commands.append(subcommand)
        commands = ",".join(commands)
        usage_help = f"connpy [-h] [--add | --del | --mod | --show | --debug] [node|folder] [--sftp]\n connpy {{{commands}}} ..."
        return usage_help
    return get_instructions(type)

def get_instructions(type="add"):
    if type == "add":
        return """
Welcome to Connpy node Addition Wizard!

Here are some important instructions and tips for configuring your new node:

1. **Profiles**:
   - You can use the configured settings in a profile using `@profilename`.

2. **Available Protocols and Apps**:
   - ssh
   - telnet
   - kubectl (`kubectl exec`)
   - docker (`docker exec`)
   - ssm (`aws ssm start-session`)

3. **Optional Values**:
   - You can leave any value empty except for the hostname/IP.

4. **Passwords**:
   - You can pass one or more passwords using comma-separated `@profiles`.

5. **Logging**:
   - You can use the following variables in the logging file name:
     - `${id}`
     - `${unique}`
     - `${host}`
     - `${port}`
     - `${user}`
     - `${protocol}`

6. **Well-Known Tags**:
   - `os`: Identified by AI to generate commands based on the operating system.
   - `screen_length_command`: Used by automation to avoid pagination on different devices (e.g., `terminal length 0` for Cisco devices).
   - `prompt`: Replaces the default app prompt to identify the end of output or where the user can start inputting commands.
   - `kube_command`: Replaces the default command (`/bin/bash`) for `kubectl exec`.
   - `docker_command`: Replaces the default command for `docker exec`.
   - `region`: AWS Region used for `aws ssm start-session`.
   - `profile`: AWS Profile used for `aws ssm start-session`.
   - `ssh_options`: Additional SSH options injected when an SSM node is used as a jumphost (e.g., `-i ~/.ssh/key.pem`).
   - `nc_command`: Replaces the default `nc` command used when bridging connections through Docker or Kubernetes (e.g., `ip netns exec global-vrf nc`).
"""
    if type == "bashcompletion":
        return '''
# Bash completion for connpy
# Run: eval "$(connpy config --completion bash)"
# Or add it to your .bashrc

_connpy_autocomplete()
{
    local strings
    strings=$(python3 -m connpy.completion bash ${#COMP_WORDS[@]} "${COMP_WORDS[@]}")

    local IFS=$'\\t'
    COMPREPLY=( $(compgen -W "$strings" -- "${COMP_WORDS[$COMP_CWORD]}") )
}
complete -o nosort -F _connpy_autocomplete conn
complete -o nosort -F _connpy_autocomplete connpy
'''
    if type == "zshcompletion":
        return '''
# Zsh completion for connpy
# Run: eval "$(connpy config --completion zsh)"
# Or add it to your .zshrc
# Make sure compinit is loaded

autoload -U compinit && compinit
_connpy_autocomplete()
{
    local COMP_WORDS num strings
    COMP_WORDS=( $words )
    num=${#COMP_WORDS[@]}
    if [[ $words =~ '.* $' ]]; then
        num=$(($num + 1))
    fi
    strings=$(python3 -m connpy.completion zsh ${num} ${COMP_WORDS[@]})

    local IFS=$'\\t'
    compadd "$@" -- ${=strings}
}
compdef _connpy_autocomplete conn
compdef _connpy_autocomplete connpy
'''
    if type == "fzf_wrapper_bash":
        return '''\n#Here starts bash 0ms fzf wrapper for connpy
connpy() {
    if [ $# -eq 0 ]; then
        local selected
        local configdir=$(cat ~/.config/conn/.folder 2>/dev/null || echo ~/.config/conn)
        if [ -s "$configdir/.fzf_nodes_cache.txt" ]; then
            selected=$(cat "$configdir/.fzf_nodes_cache.txt" | fzf-tmux -i -d 25%)
        else
            command connpy
            return
        fi
        if [ -n "$selected" ]; then
            command connpy "$selected"
        fi
    else
        command connpy "$@"
    fi
}
alias c="connpy"
#Here ends bash 0ms fzf wrapper for connpy
'''
    if type == "fzf_wrapper_zsh":
        return '''\n#Here starts zsh 0ms fzf wrapper for connpy
connpy() {
    if [ $# -eq 0 ]; then
        local selected
        local configdir=$(cat ~/.config/conn/.folder 2>/dev/null || echo ~/.config/conn)
        if [ -s "$configdir/.fzf_nodes_cache.txt" ]; then
            selected=$(cat "$configdir/.fzf_nodes_cache.txt" | fzf-tmux -i -d 25%)
        else
            command connpy
            return
        fi
        if [ -n "$selected" ]; then
            command connpy "$selected"
        fi
    else
        command connpy "$@"
    fi
}
alias c="connpy"
#Here ends zsh 0ms fzf wrapper for connpy
'''
    if type == "run":
        return "node[@subfolder][@folder] command to run\nRun the specific command on the node and print output\n/path/to/file.yaml\nUse a yaml file to run an automation script"
    if type == "generate":
        return r'''---
tasks:
- name: "Config"

  action: 'run' #Action can be test or run. Mandatory

  nodes: #List of nodes to work on. Mandatory
  - 'router1@office' #You can add specific nodes
  - '@aws' #entire folders or subfolders
  - 'router.*@office' #or use regex to filter inside a folder

  commands: #List of commands to send, use {name} to pass variables
  - 'term len 0'
  - 'conf t'
  - 'interface {if}'
  - 'ip address 10.100.100.{id} 255.255.255.255'
  - '{commit}'
  - 'end'

  variables: #Variables to use on commands and expected. Optional
    __global__: #Global variables to use on all nodes, fallback if missing in the node.
      commit: ''
      if: 'loopback100'
    router1@office:
      id: 1
    router2@office:
      id: 2
      commit: 'commit'
    router3@office:
      id: 3
    vrouter1@aws:
      id: 4
    vrouterN@aws:
      id: 5

  output: /home/user/logs #Type of output, if null you only get Connection and test result. Choices are: null,stdout,/path/to/folder. Folder path works on both 'run' and 'test' actions.

  options:
    prompt: '>$|#$|\$$|>.$|#.$|\$.$' #Optional prompt to check on your devices, default should work on most devices.
    parallel: 10 #Optional number of nodes to run commands on parallel. Default 10.
    timeout: 20 #Optional time to wait in seconds for prompt, expected or EOF. Default 20.

- name: "TestConfig"
  action: 'test'
  nodes:
  - 'router1@office'
  - '@aws'
  commands:
  - 'ping 10.100.100.{id}'
  expected: '!' #Expected text to find when running test action. Mandatory for 'test'
  variables:
    router1@office:
      id: 1
    router2@office:
      id: 2
      commit: 'commit'
    router3@office:
      id: 3
    vrouter1@aws:
      id: 4
    vrouterN@aws:
      id: 5
  output: null
...'''
    return ""
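The `${id}`/`${host}`-style logging variables listed in the add instructions match Python's `string.Template` placeholder syntax. A minimal sketch of how such a log file name could expand (the helper name and the use of `safe_substitute` are my assumptions, not necessarily connpy's actual mechanism):

```python
from string import Template

def expand_log_name(pattern, **values):
    # safe_substitute fills known ${...} placeholders and leaves
    # unknown ones untouched instead of raising KeyError.
    return Template(pattern).safe_substitute(**values)
```

For example, `expand_log_name("${host}_${user}.log", host="r1", user="admin")` fills both placeholders, while a placeholder with no value supplied survives verbatim.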
@@ -1,80 +0,0 @@
import os
import inquirer
try:
    from pyfzf.pyfzf import FzfPrompt
except ImportError:
    FzfPrompt = None

def get_config_dir():
    home = os.path.expanduser("~")
    defaultdir = os.path.join(home, '.config/conn')
    pathfile = os.path.join(defaultdir, '.folder')
    try:
        with open(pathfile, "r") as f:
            return f.read().strip()
    except OSError:
        return defaultdir

def nodes_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.fzf_nodes_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []

def folders_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.folders_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []

def profiles_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.profiles_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []

def choose(app, list_, name, action):
    # Generates an inquirer list to pick from.
    # Safeguard: never prompt while running inside an autocomplete shell.
    if os.environ.get("_ARGCOMPLETE") or os.environ.get("COMP_LINE"):
        return None

    if FzfPrompt and app.fzf:
        fzf_prompt = FzfPrompt(executable_path="fzf-tmux")
        if not app.case:
            fzf_prompt = FzfPrompt(executable_path="fzf-tmux -i")
        answer = fzf_prompt.prompt(list_, fzf_options="-d 25%")
        if len(answer) == 0:
            return None
        return answer[0]
    questions = [inquirer.List(name, message="Pick {} to {}:".format(name, action), choices=list_, carousel=True)]
    answer = inquirer.prompt(questions)
    if answer is None:
        return None
    return answer[name]

def toplevel_completer(prefix, parsed_args, **kwargs):
    commands = ["node", "profile", "move", "mv", "copy", "cp", "list", "ls", "bulk", "export", "import", "ai", "run", "api", "context", "plugin", "config", "sync"]

    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.fzf_nodes_cache.txt')
    nodes = []
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            nodes = [line.strip() for line in f if line.startswith(prefix)]

    cache_folders = os.path.join(configdir, '.folders_cache.txt')
    if os.path.exists(cache_folders):
        with open(cache_folders, "r") as f:
            nodes += [line.strip() for line in f if line.startswith(prefix)]

    return [c for c in commands + nodes if c.startswith(prefix)]
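All three cache completers above follow the same read-and-filter pattern: open a cache file, keep the lines starting with the typed prefix, and return nothing when the cache is missing. The pattern isolated as a self-contained sketch (the function name is mine):

```python
import os
import tempfile

def complete_from_cache(cache_file, prefix):
    # Mirrors nodes_completer/folders_completer/profiles_completer:
    # a missing cache file simply means no suggestions.
    if not os.path.exists(cache_file):
        return []
    with open(cache_file, "r") as f:
        return [line.strip() for line in f if line.startswith(prefix)]
```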
@@ -1,85 +0,0 @@
import os
import sys
import inquirer
from .. import printer
from ..services.exceptions import ConnpyError
from .forms import Forms

class ImportExportHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch_import(self, args):
        file_path = args.data[0]
        try:
            printer.warning("This could overwrite your current configuration!")
            question = [inquirer.Confirm("import", message=f"Are you sure you want to import {file_path}?")]
            confirm = inquirer.prompt(question)
            if confirm is None or not confirm["import"]:
                sys.exit(7)

            self.app.services.import_export.import_from_file(file_path)
            printer.success(f"File {file_path} imported successfully.")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def dispatch_export(self, args):
        file_path = args.data[0]
        folders = args.data[1:] if len(args.data) > 1 else None
        try:
            self.app.services.import_export.export_to_file(file_path, folders=folders)
            printer.success(f"File {file_path} generated successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
        sys.exit()

    def bulk(self, args):
        if args.file and os.path.isfile(args.file[0]):
            with open(args.file[0], 'r') as f:
                lines = f.readlines()

            # Expecting at least 2 lines: one for nodes, one for hosts
            if len(lines) < 2:
                printer.error("The file must contain at least two lines: one for nodes, one for hosts.")
                sys.exit(11)

            nodes = lines[0].strip()
            hosts = lines[1].strip()
            newnodes = self.forms.questions_bulk(nodes, hosts)
        else:
            newnodes = self.forms.questions_bulk()

        if newnodes is False:
            sys.exit(7)

        if not self.app.case:
            newnodes["location"] = newnodes["location"].lower()
            newnodes["ids"] = newnodes["ids"].lower()

        # Handle the case where location might be a file reference (e.g. from a prompt)
        location = newnodes["location"]
        if location.startswith("@") and "/" in location:
            # Extract the actual @folder part (e.g. @testall from @testall/.folders_cache.txt)
            location = location.split("/")[0]
            newnodes["location"] = location

        ids = newnodes["ids"].split(",")
        # Append location to each id for proper folder assignment
        location = newnodes["location"]
        if location:
            ids = [f"{i}{location}" for i in ids]

        hosts = newnodes["host"].split(",")

        try:
            count = self.app.services.nodes.bulk_add(ids, hosts, newnodes)
            if count > 0:
                printer.success(f"Successfully added {count} nodes.")
            else:
                printer.info("0 nodes added")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
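`bulk` composes the final node identifiers by appending the `@location` suffix to every comma-separated id before handing them to `bulk_add`. That composition isolated as a helper (the function name is mine):

```python
def compose_bulk_ids(ids_csv, location):
    # "r1,r2" with "@office" becomes ["r1@office", "r2@office"];
    # an empty location leaves the ids untouched.
    ids = ids_csv.split(",")
    if location:
        ids = [f"{i}{location}" for i in ids]
    return ids
```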
@@ -1,234 +0,0 @@
import sys
import yaml
import inquirer
from rich.markdown import Markdown

from .. import printer
from ..services.exceptions import ConnpyError, InvalidConfigurationError
from .helpers import choose
from .forms import Forms
from .help_text import get_instructions

class NodeHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch(self, args):
        if not self.app.case and args.data is not None:
            args.data = args.data.lower()
        actions = {"version": self.version, "connect": self.connect, "add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
        return actions.get(args.action)(args)

    def version(self, args):
        from .._version import __version__
        printer.info(f"Connpy {__version__}")

    def connect(self, args):
        if args.data is None:
            try:
                matches = self.app.services.nodes.list_nodes()
            except Exception as e:
                printer.error(f"Failed to list nodes: {e}")
                sys.exit(1)

            if len(matches) == 0:
                printer.warning("There are no nodes created")
                printer.info("try: connpy --help")
                sys.exit(9)
        else:
            try:
                matches = self.app.services.nodes.list_nodes(args.data)
            except Exception:
                matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)
        elif len(matches) > 1:
            matches[0] = choose(self.app, matches, "node", "connect")

        if matches[0] is None:
            sys.exit(7)

        try:
            self.app.services.nodes.connect_node(
                matches[0],
                sftp=args.sftp,
                debug=args.debug,
                logger=self.app._service_logger
            )
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def delete(self, args):
        if args.data is None:
            printer.error("Missing argument node")
            sys.exit(3)

        is_folder = args.data.startswith("@")
        try:
            if is_folder:
                matches = self.app.services.nodes.list_folders(args.data)
            else:
                matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)

        printer.info(f"Removing: {matches}")
        question = [inquirer.Confirm("delete", message="Are you sure you want to continue?")]
        confirm = inquirer.prompt(question)
        if confirm is None or not confirm["delete"]:
            sys.exit(7)

        try:
            for item in matches:
                self.app.services.nodes.delete_node(item, is_folder=is_folder)

            if len(matches) == 1:
                printer.success(f"{matches[0]} deleted successfully")
            else:
                printer.success(f"{len(matches)} items deleted successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def add(self, args):
        try:
            args.data = self.app._type_node(args.data)
        except ValueError as e:
            printer.error(str(e))
            sys.exit(3)

        if args.data is None:
            printer.error("Missing argument node")
            sys.exit(3)

        is_folder = args.data.startswith("@")
        try:
            if is_folder:
                uniques = self.app.services.nodes.explode_unique(args.data)
                if not uniques:
                    raise InvalidConfigurationError(f"Invalid folder {args.data}")
                self.app.services.nodes.add_node(args.data, {}, is_folder=True)
                printer.success(f"{args.data} added successfully")
            else:
                if args.data in self.app.nodes_list:
                    printer.error(f"Node '{args.data}' already exists.")
                    sys.exit(1)
                uniques = self.app.services.nodes.explode_unique(args.data)

                # Fail fast if the parent folder does not exist
                self.app.services.nodes.validate_parent_folder(args.data)

                printer.console.print(Markdown(get_instructions()))

                new_node_data = self.forms.questions_nodes(args.data, uniques)
                if not new_node_data:
                    sys.exit(7)
                self.app.services.nodes.add_node(args.data, new_node_data)
                printer.success(f"{args.data} added successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def show(self, args):
        if args.data is None:
            printer.error("Missing argument node")
            sys.exit(3)

        try:
            matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)
        elif len(matches) > 1:
            matches[0] = choose(self.app, matches, "node", "show")

        if matches[0] is None:
            sys.exit(7)

        try:
            node = self.app.services.nodes.get_node_details(matches[0])
            yaml_output = yaml.dump(node, sort_keys=False, default_flow_style=False)
            printer.data(matches[0], yaml_output)
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def modify(self, args):
        if args.data is None:
            printer.error("Missing argument node")
            sys.exit(3)

        try:
            matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"No connection found with filter: {args.data}")
            sys.exit(2)

        unique = matches[0] if len(matches) == 1 else None
        uniques = self.app.services.nodes.explode_unique(unique) if unique else {"id": None, "folder": None}

        printer.info(f"Editing: {matches}")
        node_details = {}
        for i in matches:
            node_details[i] = self.app.services.nodes.get_node_details(i)

        edits = self.forms.questions_edit()
        if edits is None:
            sys.exit(7)

        # Use first match as base for defaults if multiple matches exist
        base_unique = matches[0]
        base_uniques = self.app.services.nodes.explode_unique(base_unique)
        updatenode = self.forms.questions_nodes(base_unique, base_uniques, edit=edits)
        if not updatenode:
            sys.exit(7)

        try:
            if len(matches) == 1:
                # Comparison for "Nothing to do"
                current = node_details[matches[0]].copy()
                current.update(uniques)
                current["type"] = "connection"
                if sorted(updatenode.items()) == sorted(current.items()):
                    printer.info("Nothing to do here")
                    return
                self.app.services.nodes.update_node(matches[0], updatenode)
                printer.success(f"{args.data} edited successfully")
            else:
                editcount = 0
                for k in matches:
                    updated_item = self.app.services.nodes.explode_unique(k)
                    updated_item["type"] = "connection"
                    updated_item.update(node_details[k])

                    this_item_changed = False
                    for key, should_edit in edits.items():
                        if should_edit:
                            this_item_changed = True
                            updated_item[key] = updatenode[key]

                    if this_item_changed:
                        editcount += 1
                        self.app.services.nodes.update_node(k, updated_item)

                if editcount == 0:
                    printer.info("Nothing to do here")
                else:
                    printer.success(f"{matches} edited successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
@@ -1,150 +0,0 @@
import sys
from .. import printer
from ..services.exceptions import ConnpyError

class PluginHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        try:
            # The target plugin service depends on the configured mode, but local
            # plugin file operations always go through a local PluginService
            # instance so the on-disk state stays consistent with the "enabled" flags.

            is_remote = getattr(args, "remote", False)
            if is_remote and self.app.services.mode != "remote":
                printer.error("Cannot use --remote flag when not running in remote mode.")
                return

            if args.add:
                self.app.services.plugins.add_plugin(args.add[0], args.add[1])
                printer.success(f"Plugin {args.add[0]} added successfully{' remotely' if is_remote else ''}.")
            elif args.update:
                self.app.services.plugins.add_plugin(args.update[0], args.update[1], update=True)
                printer.success(f"Plugin {args.update[0]} updated successfully{' remotely' if is_remote else ''}.")
            elif args.delete:
                self.app.services.plugins.delete_plugin(args.delete[0])
                printer.success(f"Plugin {args.delete[0]} deleted successfully{' remotely' if is_remote else ''}.")
            elif args.enable:
                name = args.enable[0]
                if is_remote:
                    self.app.plugins.preferences[name] = "remote"
                else:
                    if name in self.app.plugins.preferences:
                        del self.app.plugins.preferences[name]

                self.app.plugins._save_preferences(self.app.services.config_svc.get_default_dir())

                # Always try to enable it locally (remove .bkp) if it exists,
                # regardless of mode, to keep files consistent with the "enabled" state
                try:
                    # Use a local service instance to make sure local files are touched
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    local_svc.enable_plugin(name)
                except Exception:
                    pass  # Ignore if not found locally or already enabled

                if is_remote and self.app.services.mode == "remote":
                    self.app.services.plugins.enable_plugin(name)

                printer.success(f"Plugin {name} enabled successfully{' remotely' if is_remote else ' locally'}.")
            elif args.disable:
                name = args.disable[0]
                success = False
                if is_remote:
                    if self.app.services.mode == "remote":
                        self.app.services.plugins.disable_plugin(name)
                        success = True
                else:
                    # Disable locally
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    try:
                        if local_svc.disable_plugin(name):
                            success = True
                    except Exception as e:
                        printer.warning(f"Could not disable local plugin: {e}")

                if success:
                    printer.success(f"Plugin {name} disabled successfully{' remotely' if is_remote else ' locally'}.")

                # If a remote operation was performed, trigger a sync to update the local cache immediately
                if is_remote and self.app.services.mode == "remote":
                    try:
                        import os
                        cache_dir = os.path.join(self.app.services.config_svc.get_default_dir(), "remote_plugins")
                        # Force a sync so the cached remote plugins reflect this change right away
                        self.app.plugins._import_remote_plugins_to_argparse(
                            self.app.services.plugins,
                            self.app.subparsers,  # populated during CLI init
                            cache_dir,
                            force_sync=True
                        )
                    except Exception:
                        pass

            elif getattr(args, "sync", False):
                # The actual sync logic runs in connapp.py during init
                # when the --sync flag is detected in sys.argv
                printer.success("Remote plugins synchronized successfully.")
            elif args.list:
                # Fetch both local and remote plugins when in remote mode
                local_plugins = {}
                remote_plugins = {}

                if self.app.services.mode == "remote":
                    # For local we instantiate a local plugin service, bypassing the stub
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    local_plugins = local_svc.list_plugins()
                    remote_plugins = self.app.services.plugins.list_plugins()
                else:
                    local_plugins = self.app.services.plugins.list_plugins()

                from rich.table import Table

                table = Table(title="Available Plugins", show_header=True, header_style="bold cyan")
                table.add_column("Plugin", style="cyan")
                table.add_column("State", style="bold")
                table.add_column("Origin", style="magenta")

                # Populate local plugins
                for name, details in local_plugins.items():
                    state = "Disabled" if not details.get("enabled", True) else "Active"
                    color = "red" if state == "Disabled" else "green"

                    if self.app.services.mode == "remote" and state == "Active":
                        if self.app.plugins.preferences.get(name) == "remote":
                            state = "Shadowed (Override by Remote)"
                            color = "yellow"

                    table.add_row(name, f"[{color}]{state}[/{color}]", "Local")

                # Populate remote plugins
                if self.app.services.mode == "remote":
                    for name, details in remote_plugins.items():
                        state = "Disabled" if not details.get("enabled", True) else "Active"
                        color = "red" if state == "Disabled" else "green"

                        if state == "Active":
                            pref = self.app.plugins.preferences.get(name, "local")
                            # If the preference isn't remote and the plugin exists locally, local takes priority
                            if pref != "remote" and name in local_plugins:
                                state = "Shadowed (Override by Local)"
                                color = "yellow"

                        table.add_row(name, f"[{color}]{state}[/{color}]", "Remote")

                if not local_plugins and not remote_plugins:
                    printer.console.print(" No plugins found.")
                else:
                    printer.console.print(table)

        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
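The list view above resolves a plugin name that is enabled on both sides through the saved preference map: local wins by default, and the preference `"remote"` flips the override. That rule condensed into a standalone sketch (the function name is mine):

```python
def resolve_plugin(name, local, remote, preferences):
    # Decide which copy of a plugin is active when it may exist
    # both locally and remotely. Local wins unless the saved
    # preference for this name is "remote".
    in_local = name in local and local[name].get("enabled", True)
    in_remote = name in remote and remote[name].get("enabled", True)
    if in_local and in_remote:
        return "remote" if preferences.get(name, "local") == "remote" else "local"
    if in_local:
        return "local"
    if in_remote:
        return "remote"
    return None
```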
@@ -1,96 +0,0 @@
|
||||
import sys
import yaml
import inquirer

from .. import printer
from ..services.exceptions import ConnpyError, ProfileNotFoundError
from .forms import Forms


class ProfileHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch(self, args):
        if not self.app.case:
            args.data[0] = args.data[0].lower()
        actions = {"add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
        return actions.get(args.action)(args)

    def delete(self, args):
        name = args.data[0]
        try:
            self.app.services.profiles.get_profile(name)
        except ProfileNotFoundError:
            printer.error(f"{name} not found")
            sys.exit(2)

        if name == "default":
            printer.error("Can't delete default profile")
            sys.exit(6)

        question = [inquirer.Confirm("delete", message=f"Are you sure you want to delete {name}?")]
        confirm = inquirer.prompt(question)
        if confirm is None or not confirm["delete"]:
            sys.exit(7)

        try:
            self.app.services.profiles.delete_profile(name)
            printer.success(f"{name} deleted successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(8)

    def show(self, args):
        try:
            profile = self.app.services.profiles.get_profile(args.data[0])
            yaml_output = yaml.dump(profile, sort_keys=False, default_flow_style=False)
            printer.data(args.data[0], yaml_output)
        except ProfileNotFoundError:
            printer.error(f"{args.data[0]} not found")
            sys.exit(2)

    def add(self, args):
        name = args.data[0]
        if name in self.app.services.profiles.list_profiles():
            printer.error(f"Profile '{name}' already exists.")
            sys.exit(4)

        new_profile_data = self.forms.questions_profiles(name)
        if not new_profile_data:
            sys.exit(7)

        try:
            self.app.services.profiles.add_profile(name, new_profile_data)
            printer.success(f"{name} added successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def modify(self, args):
        name = args.data[0]
        try:
            profile = self.app.services.profiles.get_profile(name, resolve=False)
        except ProfileNotFoundError:
            printer.error(f"Profile '{name}' not found")
            sys.exit(2)

        old_profile = {"id": name, **profile}
        edits = self.forms.questions_edit()
        if edits is None:
            sys.exit(7)

        update_profile_data = self.forms.questions_profiles(name, edit=edits)
        if not update_profile_data:
            sys.exit(7)

        if sorted(update_profile_data.items()) == sorted(old_profile.items()):
            printer.info("Nothing to do here")
            return

        try:
            self.app.services.profiles.update_profile(name, update_profile_data)
            printer.success(f"{name} edited successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
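Each handler above routes subcommands through the same dispatch-table pattern (`actions.get(args.action)(args)`). A minimal standalone sketch of that pattern, with hypothetical handler names not taken from the real app, and an explicit guard for unknown actions (the original would raise a `TypeError` instead):

```python
def make_dispatcher():
    # Hypothetical handlers standing in for add/delete/modify/show
    def add(arg):
        return f"added {arg}"

    def delete(arg):
        return f"deleted {arg}"

    # The dispatch table maps action names to callables
    actions = {"add": add, "del": delete}

    def dispatch(action, arg):
        handler = actions.get(action)
        if handler is None:
            # Guard added for the sketch; the original calls the result directly
            raise ValueError(f"unknown action: {action}")
        return handler(arg)

    return dispatch

dispatch = make_dispatcher()
```

A dict lookup keeps the routing table in one place and avoids a chain of `if/elif` branches as subcommands grow.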
@@ -1,167 +0,0 @@
import os
import sys
import yaml
import threading
from rich.rule import Rule
from .. import printer
from ..services.exceptions import ConnpyError
from .help_text import get_instructions


class RunHandler:
    def __init__(self, app):
        self.app = app
        self.print_lock = threading.Lock()

    def dispatch(self, args):
        if len(args.data) > 1:
            args.action = "noderun"
        actions = {"noderun": self.node_run, "generate": self.yaml_generate, "run": self.yaml_run}
        return actions.get(args.action)(args)

    def node_run(self, args):
        nodes_filter = args.data[0]
        commands = [" ".join(args.data[1:])]

        try:
            header_printed = False

            if hasattr(args, 'test_expected') and args.test_expected:
                # Mode: Test
                def _on_node_complete(unique, node_output, node_status, node_result):
                    nonlocal header_printed
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule("OUTPUT", style="header"))
                            header_printed = True
                        printer.test_panel(unique, node_output, node_status, node_result)

                results = self.app.services.execution.test_commands(
                    nodes_filter=nodes_filter,
                    commands=commands,
                    expected=args.test_expected,
                    on_node_complete=_on_node_complete
                )
                printer.test_summary(results)
            else:
                # Mode: Normal Run
                def _on_node_complete(unique, node_output, node_status):
                    nonlocal header_printed
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule("OUTPUT", style="header"))
                            header_printed = True
                        printer.node_panel(unique, node_output, node_status)

                results = self.app.services.execution.run_commands(
                    nodes_filter=nodes_filter,
                    commands=commands,
                    on_node_complete=_on_node_complete
                )
                printer.run_summary(results)

        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def yaml_generate(self, args):
        if os.path.exists(args.data[0]):
            printer.error(f"File '{args.data[0]}' already exists.")
            sys.exit(14)
        else:
            with open(args.data[0], "w") as file:
                file.write(get_instructions("generate"))
            printer.success(f"File {args.data[0]} generated successfully")
            sys.exit()

    def yaml_run(self, args):
        path = args.data[0]
        try:
            with open(path, "r") as f:
                playbook = yaml.load(f, Loader=yaml.FullLoader)

            for task in playbook.get("tasks", []):
                self.cli_run(task)

        except Exception as e:
            printer.error(f"Failed to run playbook {path}: {e}")
            sys.exit(10)

    def cli_run(self, script):
        name = script.get("name", "Task")
        try:
            action = script["action"]
            nodelist = script["nodes"]
            commands = script["commands"]
            variables = script.get("variables")
            output_cfg = script["output"]
            options = script.get("options", {})
        except KeyError as e:
            printer.error(f"[{name}] '{e.args[0]}' is mandatory in script")
            sys.exit(11)

        stdout = (output_cfg == "stdout")
        folder = output_cfg if output_cfg not in [None, "stdout"] else None
        prompt = options.get("prompt")

        try:
            header_printed = False
            if action == "run":
                # If stdout is true, we stream results as they arrive
                def _on_run_complete(unique, node_output, node_status):
                    nonlocal header_printed
                    if stdout:
                        with self.print_lock:
                            if not header_printed:
                                printer.console.print(Rule(name.upper(), style="header"))
                                header_printed = True
                            printer.node_panel(unique, node_output, node_status)

                results = self.app.services.execution.run_commands(
                    nodes_filter=nodelist,
                    commands=commands,
                    variables=variables,
                    parallel=options.get("parallel", 10),
                    timeout=options.get("timeout", 20),
                    folder=folder,
                    prompt=prompt,
                    on_node_complete=_on_run_complete
                )
                # Final Summary
                if not stdout and not folder:
                    with self.print_lock:
                        printer.console.print(Rule(name.upper(), style="header"))
                        for unique, data in results.items():
                            output = data["output"] if isinstance(data, dict) else data
                            printer.node_panel(unique, output, 0)

                # ALWAYS show the aggregate execution summary at the end
                printer.run_summary(results)

            elif action == "test":
                expected = script.get("expected", [])

                # Show test_panel per node ONLY if stdout is True
                def _on_test_complete(unique, node_output, node_status, node_result):
                    nonlocal header_printed
                    if stdout:
                        with self.print_lock:
                            if not header_printed:
                                printer.console.print(Rule(name.upper(), style="header"))
                                header_printed = True
                            printer.test_panel(unique, node_output, node_status, node_result)

                results = self.app.services.execution.test_commands(
                    nodes_filter=nodelist,
                    commands=commands,
                    expected=expected,
                    variables=variables,
                    parallel=options.get("parallel", 10),
                    timeout=options.get("timeout", 20),
                    folder=folder,
                    prompt=prompt,
                    on_node_complete=_on_test_complete
                )
                # ALWAYS show the aggregate summary at the end
                printer.test_summary(results)

        except ConnpyError as e:
            printer.error(str(e))
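In `cli_run` above, the task's `output` field is folded into two values: `stdout` (stream panels live) and `folder` (write results to disk). A small standalone sketch of that resolution rule, with a hypothetical function name for illustration:

```python
def resolve_output(output_cfg):
    """Mirror of the output resolution in cli_run:
    'stdout' streams panels to the terminal, None means no file output,
    and any other string is treated as a destination folder path."""
    stdout = (output_cfg == "stdout")
    folder = output_cfg if output_cfg not in [None, "stdout"] else None
    return stdout, folder
```

The two flags are mutually exclusive by construction: a folder path disables streaming, and `"stdout"` disables file output.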
@@ -1,126 +0,0 @@
import sys
import yaml
from .. import printer


class SyncHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        action = getattr(args, "action", None)
        actions = {
            "login": self.login,
            "logout": self.logout,
            "status": self.status,
            "list": self.list_backups,
            "once": self.once,
            "restore": self.restore,
            "start": self.start,
            "stop": self.stop
        }
        handler = actions.get(action)
        if handler:
            return handler(args)

        return self.status(args)

    def login(self, args):
        self.app.services.sync.login()

    def logout(self, args):
        self.app.services.sync.logout()

    def status(self, args):
        status = self.app.services.sync.check_login_status()
        enabled = self.app.services.sync.sync_enabled
        remote = self.app.services.sync.sync_remote

        printer.info(f"Login Status: {status}")
        printer.info(f"Auto-Sync: {'Enabled' if enabled else 'Disabled'}")
        printer.info(f"Sync Remote Nodes: {'Yes' if remote else 'No'}")

    def list_backups(self, args):
        backups = self.app.services.sync.list_backups()
        if backups:
            yaml_output = yaml.dump(backups, sort_keys=False, default_flow_style=False)
            printer.custom("backups", "")
            print(yaml_output)
        else:
            printer.info("No backups found or not logged in.")

    def once(self, args):
        # Manual backup. We check if we should include remote nodes
        remote_data = None
        if self.app.services.sync.sync_remote and self.app.services.mode == "remote":
            inventory = self.app.services.nodes.get_inventory()
            # Merge with local settings
            local_settings = self.app.services.config_svc.get_settings()
            local_settings.pop("configfolder", None)

            # Maintain proper config structure: {config: {}, connections: {}, profiles: {}}
            remote_data = {
                "config": local_settings,
                "connections": inventory.get("connections", {}),
                "profiles": inventory.get("profiles", {})
            }

        if self.app.services.sync.compress_and_upload(remote_data):
            printer.success("Manual backup completed.")

    def restore(self, args):
        import inquirer
        file_id = getattr(args, "id", None)

        # Segmented flags
        restore_config = getattr(args, "restore_config", False)
        restore_nodes = getattr(args, "restore_nodes", False)

        # If neither is specified, we restore ALL (backwards compatibility)
        if not restore_config and not restore_nodes:
            restore_config = True
            restore_nodes = True

        # 1. Analyze what we are about to restore
        info = self.app.services.sync.analyze_backup_content(file_id)
        if not info:
            printer.error("Could not analyze backup content.")
            return

        # 2. Show detailed info
        printer.info("Restoration Details:")
        if restore_config:
            print(" - Local Settings: Yes")
            print(f" - RSA Key (.osk): {'Yes' if info['has_key'] else 'No'}")
        if restore_nodes:
            target = "REMOTE" if self.app.services.mode == "remote" else "LOCAL"
            print(f" - Nodes: {info['nodes']}")
            print(f" - Folders: {info['folders']}")
            print(f" - Profiles: {info['profiles']}")
            print(f" - Destination: {target}")
        print("")

        questions = [inquirer.Confirm("confirm", message="Do you want to proceed with the restoration?", default=False)]
        answers = inquirer.prompt(questions)

        if not answers or not answers["confirm"]:
            printer.info("Restore cancelled.")
            return

        # 3. Perform the actual restore
        if self.app.services.sync.restore_backup(
            file_id=file_id,
            restore_config=restore_config,
            restore_nodes=restore_nodes,
            app_instance=self.app
        ):
            printer.success("Restore completed successfully.")

    def start(self, args):
        self.app.services.config_svc.update_setting("sync", True)
        self.app.services.sync.sync_enabled = True
        printer.success("Auto-sync enabled.")

    def stop(self, args):
        self.app.services.config_svc.update_setting("sync", False)
        self.app.services.sync.sync_enabled = False
        printer.success("Auto-sync disabled.")
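`SyncHandler.restore` keeps backwards compatibility by treating "no segmented flag given" as "restore everything". That rule, isolated as a small sketch (the function name is hypothetical):

```python
def resolve_restore_flags(restore_config=False, restore_nodes=False):
    """If neither --config nor --nodes style flag is set, restore both,
    mirroring the backwards-compatibility rule in SyncHandler.restore."""
    if not restore_config and not restore_nodes:
        restore_config = True
        restore_nodes = True
    return restore_config, restore_nodes
```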
@@ -1,438 +0,0 @@
import os
import re
import sys
import time
import asyncio
import fcntl
import termios
import tty
from typing import Any, Dict, List, Optional, Callable
from textwrap import dedent

from rich.console import Console
from rich.panel import Panel
from rich.markdown import Markdown
from rich.live import Live
from prompt_toolkit import PromptSession
from prompt_toolkit.key_binding import KeyBindings
from prompt_toolkit.formatted_text import HTML
from prompt_toolkit.history import InMemoryHistory

from ..printer import connpy_theme
from connpy.utils import log_cleaner
from ..services.ai_service import AIService


class CopilotInterface:
    def __init__(self, config, history=None, pt_input=None, pt_output=None, rich_file=None, session_state=None):
        self.config = config
        self.history = history or InMemoryHistory()
        self.pt_input = pt_input
        self.pt_output = pt_output
        self.ai_service = AIService(config)
        self.session_state = session_state if session_state is not None else {
            'persona': 'engineer',
            'trust_mode': False,
            'memories': [],
            'os': None,
            'prompt': None
        }

        if rich_file:
            self.console = Console(theme=connpy_theme, force_terminal=True, file=rich_file)
        else:
            self.console = Console(theme=connpy_theme)

        self.mode_range, self.mode_single, self.mode_lines = 0, 1, 2

    def _get_theme_color(self, style_name: str, fallback: str = "white") -> str:
        """Extract hex or ANSI color name from the active rich theme."""
        try:
            style = connpy_theme.styles.get(style_name)
            if style and style.color:
                # If it's a standard color like 'green', Rich might return its hex triplet
                if style.color.is_default:
                    return fallback
                return style.color.triplet.hex if style.color.triplet else style.color.name
        except Exception:
            pass
        return fallback
    async def run_session(self,
                          raw_bytes: bytes,
                          cmd_byte_positions: List[tuple],
                          node_info: dict,
                          on_ai_call: Callable):
        """
        Runs the interactive Copilot session.
        on_ai_call: async function(active_buffer, question) -> result_dict
        """
        from rich.rule import Rule

        try:
            # Prepare UI state
            buffer = log_cleaner(raw_bytes.decode(errors='replace'))
            blocks = self.ai_service.build_context_blocks(raw_bytes, cmd_byte_positions, node_info)
            last_line = buffer.split('\n')[-1].strip() if buffer.strip() else "(prompt)"
            blocks.append((len(raw_bytes), last_line[:80]))

            state = {
                'context_cmd': 1,
                'total_cmds': len(blocks),
                'total_lines': len(buffer.split('\n')),
                'context_lines': min(50, len(buffer.split('\n'))),
                'context_mode': self.mode_range,
                'cancelled': False,
                'toolbar_msg': '',
                'msg_expiry': 0
            }

            # 1. Visual Separation
            self.console.print("")  # Actual line break
            self.console.print(Rule(title="[bold cyan] AI TERMINAL COPILOT [/bold cyan]", style="cyan"))
            self.console.print(Panel(
                "[dim]Type your question. Enter to send, Escape/Ctrl+C to cancel.\n"
                "Tab to change context mode. Ctrl+\u2191/\u2193 to adjust context. \u2191\u2193 for question history.[/dim]",
                border_style="cyan"
            ))
            self.console.print("\n")  # Small gap before the copilot prompt

            bindings = KeyBindings()

            @bindings.add('c-up')
            def _(event):
                if state['context_mode'] == self.mode_lines:
                    state['context_lines'] = min(state['context_lines'] + 50, state['total_lines'])
                else:
                    state['context_cmd'] = min(state['context_cmd'] + 1, state['total_cmds'])
                event.app.invalidate()

            @bindings.add('c-down')
            def _(event):
                if state['context_mode'] == self.mode_lines:
                    state['context_lines'] = max(state['context_lines'] - 50, min(50, state['total_lines']))
                else:
                    state['context_cmd'] = max(state['context_cmd'] - 1, 1)
                event.app.invalidate()

            @bindings.add('tab')
            def _(event):
                buf = event.current_buffer
                # If typing a slash command (no spaces yet), use tab to autocomplete inline
                if buf.text.startswith('/') and ' ' not in buf.text:
                    buf.complete_next()
                else:
                    state['context_mode'] = (state['context_mode'] + 1) % 3
                event.app.invalidate()

            @bindings.add('escape', eager=True)
            @bindings.add('c-c')
            def _(event):
                state['cancelled'] = True
                event.app.exit(result='')

            def get_active_buffer():
                if state['context_mode'] == self.mode_lines:
                    return '\n'.join(buffer.split('\n')[-state['context_lines']:])
                idx = max(0, state['total_cmds'] - state['context_cmd'])
                start, preview = blocks[idx]
                if state['context_mode'] == self.mode_single and idx + 1 < state['total_cmds']:
                    end = blocks[idx + 1][0]
                    active_raw = raw_bytes[start:end]
                else:
                    active_raw = raw_bytes[start:]
                return preview + "\n" + log_cleaner(active_raw.decode(errors='replace'))

            def get_prompt_text():
                import html
                # Always use user_prompt color for the Ask prompt
                color = self._get_theme_color("user_prompt", "cyan")

                if state['context_mode'] == self.mode_lines:
                    text = html.escape(f"Ask [Ctx: {state['context_lines']}/{state['total_lines']}L]: ")
                    return HTML(f'<style fg="{color}">{text}</style>')
                active = get_active_buffer()
                lines_count = len(active.split('\n'))
                mode_str = {self.mode_range: "Range", self.mode_single: "Cmd"}[state['context_mode']]
                text = html.escape(f"Ask [{mode_str} {state['context_cmd']} ~{lines_count}L]: ")
                return HTML(f'<style fg="{color}">{text}</style>')

            from prompt_toolkit.application.current import get_app

            def get_toolbar():
                import html
                app = get_app()
                c_warning = self._get_theme_color("warning", "yellow")

                if app and app.current_buffer:
                    text = app.current_buffer.text
                    # Only show command help while the first command is being typed and there are no spaces yet
                    if text.startswith('/') and ' ' not in text:
                        commands = ['/os', '/prompt', '/architect', '/engineer', '/trust', '/untrust', '/memorize', '/clear']
                        matches = [c for c in commands if c.startswith(text.lower())]
                        if matches:
                            m_text = html.escape(f"Available: {' '.join(matches)}")
                            return HTML(f'<style fg="{c_warning}">{m_text}</style>' + " " * 20)

                m_label = {self.mode_range: "RANGE", self.mode_single: "SINGLE", self.mode_lines: "LINES"}[state['context_mode']]
                if state['context_mode'] == self.mode_lines:
                    base_str = f'\u25b6 Ctrl+\u2191/\u2193 adjusts by 50 lines [Tab: {m_label}]'
                else:
                    idx = max(0, state['total_cmds'] - state['context_cmd'])
                    desc = blocks[idx][1]
                    base_str = f'\u25b6 {desc} [Tab: {m_label}]'

                # Wrap base_str in a style to maintain consistency and avoid glitches
                # The fg color will be inherited from the bottom-toolbar global style if not specified here
                base_html = f'<span>{html.escape(base_str)}</span>'

                res_html = base_html
                if state.get('toolbar_msg'):
                    if time.time() < state.get('msg_expiry', 0):
                        msg = html.escape(state['toolbar_msg'])
                        res_html = f'<style fg="{c_warning}">⚙️ {msg}</style> | ' + base_html
                    else:
                        state['toolbar_msg'] = ''

                # Pad with spaces to ensure the line is cleared when the message disappears
                return HTML(res_html + " " * 20)

            from prompt_toolkit.completion import Completer, Completion

            class SlashCommandCompleter(Completer):
                def get_completions(self, document, complete_event):
                    text = document.text_before_cursor
                    if text.startswith('/'):
                        parts = text.split()
                        # Only autocomplete the first word
                        if len(parts) <= 1 or (len(parts) == 1 and not text.endswith(' ')):
                            cmd_part = parts[0] if parts else text
                            commands = [
                                ('/os', 'Set device OS (e.g. cisco_ios)'),
                                ('/prompt', 'Override prompt regex'),
                                ('/architect', 'Switch to Architect persona'),
                                ('/engineer', 'Switch to Engineer persona'),
                                ('/trust', 'Enable auto-execute'),
                                ('/untrust', 'Disable auto-execute'),
                                ('/memorize', 'Add fact to memory'),
                                ('/clear', 'Clear memory')
                            ]
                            for cmd, desc in commands:
                                if cmd.startswith(cmd_part.lower()):
                                    yield Completion(cmd, start_position=-len(cmd_part), display_meta=desc)

            copilot_completer = SlashCommandCompleter()
            while True:
                # 2. Ask question
                from prompt_toolkit.styles import Style
                c_contrast = self._get_theme_color("contrast", "gray")
                ui_style = Style.from_dict({
                    'bottom-toolbar': f'fg:{c_contrast}',
                })

                session = PromptSession(
                    history=self.history,
                    input=self.pt_input,
                    output=self.pt_output,
                    completer=copilot_completer,
                    reserve_space_for_menu=0,
                    style=ui_style
                )
                try:
                    # We use an inner try/except to ensure that if something fails in prompt_async,
                    # we are not left with the terminal in a strange state.
                    question = await session.prompt_async(
                        get_prompt_text,
                        key_bindings=bindings,
                        bottom_toolbar=get_toolbar
                    )
                except (KeyboardInterrupt, EOFError):
                    state['cancelled'] = True
                    question = ""

                if state['cancelled'] or not question.strip() or question.strip().lower() in ['cancel', 'exit', 'quit']:
                    return "cancel", None, None

                # 3. Process Input via AIService
                directive = self.ai_service.process_copilot_input(question, self.session_state)

                if directive["action"] == "state_update":
                    state['toolbar_msg'] = directive['message']
                    state['msg_expiry'] = time.time() + 3  # 3 seconds timeout

                    async def delayed_refresh():
                        await asyncio.sleep(3.1)
                        # Only invalidate if the message hasn't been replaced by a newer one
                        if state.get('toolbar_msg') == directive['message']:
                            state['toolbar_msg'] = ''  # Explicitly clear
                            try:
                                from prompt_toolkit.application.current import get_app
                                app = get_app()
                                if app:
                                    app.invalidate()
                            except Exception:
                                pass
                    asyncio.create_task(delayed_refresh())

                    # Move the cursor up and clear the line so the new prompt replaces the previous one
                    sys.stdout.write('\x1b[1A\x1b[2K')
                    sys.stdout.flush()
                    continue
                else:
                    # Clear the toolbar message when a real question is asked
                    state['toolbar_msg'] = ''

                clean_question = directive.get("clean_prompt", question)
                overrides = directive.get("overrides", {})

                # Merge node_info with session_state and overrides
                merged_node_info = node_info.copy()
                if self.session_state['os']:
                    merged_node_info['os'] = self.session_state['os']
                if self.session_state['prompt']:
                    merged_node_info['prompt'] = self.session_state['prompt']
                merged_node_info['persona'] = self.session_state['persona']
                merged_node_info['trust'] = self.session_state['trust_mode']
                merged_node_info['memories'] = list(self.session_state['memories'])

                for k, v in overrides.items():
                    merged_node_info[k] = v

                # Enrich question
                past = self.history.get_strings()
                if len(past) > 1:
                    clean_past = [q for q in past[-6:-1] if not q.startswith('/')]
                    if clean_past:
                        history_text = "\n".join(f"- {q}" for q in clean_past)
                        clean_question = f"Previous questions:\n{history_text}\n\nCurrent Question:\n{clean_question}"

                # 4. AI Execution
                # Use persona from overrides (one-shot) or from session state
                active_persona = merged_node_info.get('persona', self.session_state.get('persona', 'engineer'))
                persona_color = self._get_theme_color(active_persona, fallback="cyan")

                active_buffer = get_active_buffer()
                live_text = "Thinking..."
                panel = Panel(live_text, title=f"[bold {persona_color}]Copilot Guide[/bold {persona_color}]", border_style=persona_color)

                def on_chunk(text):
                    nonlocal live_text
                    if live_text == "Thinking...":
                        live_text = ""
                    live_text += text

                with Live(panel, console=self.console, refresh_per_second=10) as live:
                    def update_live(t):
                        live.update(Panel(Markdown(t), title=f"[bold {persona_color}]Copilot Guide[/bold {persona_color}]", border_style=persona_color))

                    wrapped_chunk = lambda t: (on_chunk(t), update_live(live_text))

                    # Check for interruption during the AI call
                    ai_task = asyncio.create_task(on_ai_call(active_buffer, clean_question, wrapped_chunk, merged_node_info))

                    try:
                        while not ai_task.done():
                            await asyncio.sleep(0.05)
                        result = await ai_task
                    except asyncio.CancelledError:
                        return "cancel", None, None

                if not result or result.get("error"):
                    if result and result.get("error"):
                        self.console.print(f"[red]Error: {result['error']}[/red]")
                    return "cancel", None, None

                # 5. Handle result
                if live_text == "Thinking..." and result.get("guide"):
                    self.console.print(Panel(Markdown(result["guide"]), title=f"[bold {persona_color}]Copilot Guide[/bold {persona_color}]", border_style=persona_color))

                commands = result.get("commands", [])
                if not commands:
                    self.console.print("")
                    return "continue", None, None

                risk = result.get("risk_level", "low")
                risk_style = {"low": "success", "high": "warning", "destructive": "error"}.get(risk, "success")
                style_color = self._get_theme_color(risk_style, fallback="green")

                cmd_text = "\n".join(f" {i+1}. {c}" for i, c in enumerate(commands))
                # Explicitly use 'bold style_color' for both TITLE and BORDER to ensure maximum consistency
                self.console.print(Panel(cmd_text, title=f"[bold {style_color}]Suggested Commands [{risk.upper()}][/bold {style_color}]", border_style=f"bold {style_color}"))

                if merged_node_info.get('trust', False) and risk != "destructive":
                    self.console.print("[dim]⚙️ Auto-executing (Trust Mode)[/dim]")
                    return "send_all", commands, None

                confirm_session = PromptSession(input=self.pt_input, output=self.pt_output)
                c_bindings = KeyBindings()

                @c_bindings.add('escape', eager=True)
                @c_bindings.add('c-c')
                def _(ev):
                    ev.app.exit(result='n')

                import html
                try:
                    p_text = html.escape("Send? (y/n/e/range) [n]: ")
                    # Use the EXACT same style_color and force bold="true" for prompt_toolkit
                    action = await confirm_session.prompt_async(HTML(f'<style fg="{style_color}" bold="true">{p_text}</style>'), key_bindings=c_bindings)
                except (KeyboardInterrupt, EOFError):
                    self.console.print("")
                    return "continue", None, None

                def parse_indices(text, max_len):
                    """Helper to parse '1-3, 5, 7' into [0, 1, 2, 4, 6]."""
                    indices = []
                    # Replace commas with spaces and split
                    parts = text.replace(',', ' ').split()
                    for part in parts:
                        if '-' in part:
                            try:
                                start, end = map(int, part.split('-'))
                                # Ensure inclusive and 0-indexed
                                indices.extend(range(start - 1, end))
                            except ValueError:
                                continue
                        elif part.isdigit():
                            indices.append(int(part) - 1)
                    # Filter valid indices and remove duplicates
                    return [i for i in sorted(set(indices)) if 0 <= i < max_len]

                action_l = (action or "n").lower().strip()
                if action_l in ('y', 'yes', 'all'):
                    return "send_all", commands, None

                # Check for numeric selection (e.g., "1, 2-4")
                if re.match(r'^[0-9,\-\s]+$', action_l):
                    selected_idxs = parse_indices(action_l, len(commands))
                    if selected_idxs:
                        return "send_all", [commands[i] for i in selected_idxs], None

                elif action_l.startswith('e'):
                    # Check if it's a selective edit like 'e1-2'
                    selection_str = action_l[1:].strip()
                    if selection_str:
                        idxs = parse_indices(selection_str, len(commands))
                        cmds_to_edit = [commands[i] for i in idxs] if idxs else commands
                    else:
                        cmds_to_edit = commands

                    target = "\n".join(cmds_to_edit)
                    e_bindings = KeyBindings()

                    @e_bindings.add('c-j')
                    def _(ev):
                        ev.app.exit(result=ev.app.current_buffer.text)

                    @e_bindings.add('escape', 'enter')
                    def _(ev):
                        ev.app.exit(result=ev.app.current_buffer.text)

                    @e_bindings.add('escape')
                    def _(ev):
                        ev.app.exit(result='')

                    c_edit = self._get_theme_color("user_prompt", "cyan")
                    e_text = html.escape("Edit (Ctrl+Enter or Esc+Enter to submit):\n")
                    try:
                        edited = await confirm_session.prompt_async(
                            HTML(f'<style fg="{c_edit}">{e_text}</style>'),
                            default=target, multiline=True, key_bindings=e_bindings
                        )
                    except (KeyboardInterrupt, EOFError):
                        self.console.print("")
                        return "continue", None, None

                    if edited and edited.strip():
                        # Split by lines to ensure core.py applies delay between each command
                        lines = [l.strip() for l in edited.split('\n') if l.strip()]
                        return "custom", None, lines

                self.console.print("")
                return "continue", None, None

            return "cancel", None, None

        finally:
            state['cancelled'] = True
            self.console.print("[dim]Returning to session...[/dim]")
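The `parse_indices` helper used by the confirmation prompt is small enough to exercise on its own; here is a standalone copy of the same logic:

```python
def parse_indices(text, max_len):
    """Parse a selection like '1-3, 5, 7' into zero-based indices
    [0, 1, 2, 4, 6], dropping duplicates and out-of-range values."""
    indices = []
    # Replace commas with spaces and split into tokens
    parts = text.replace(',', ' ').split()
    for part in parts:
        if '-' in part:
            try:
                start, end = map(int, part.split('-'))
                # Ranges are inclusive and converted to 0-indexed
                indices.extend(range(start - 1, end))
            except ValueError:
                continue
        elif part.isdigit():
            indices.append(int(part) - 1)
    # Filter valid indices and remove duplicates
    return [i for i in sorted(set(indices)) if 0 <= i < max_len]
```

Note that a reversed range like `'3-1'` yields an empty `range` and is silently ignored, which matches the forgiving behavior wanted at an interactive prompt.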
@@ -1,139 +0,0 @@
import re
import ast
import inquirer


class Validators:
    def __init__(self, app):
        self.app = app

    def host_validation(self, answers, current, regex="^.+$"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def profile_protocol_validation(self, answers, current, regex="(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm or leave empty")
        return True

    def protocol_validation(self, answers, current, regex="(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$|^@.+$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm, leave empty or @profile")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def profile_port_validation(self, answers, current, regex="(^[0-9]*$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
        try:
            port = int(current)
        except ValueError:
            port = 0
        if current != "" and not 1 <= int(port) <= 65535:
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
        return True

    def port_validation(self, answers, current, regex="(^[0-9]*$|^@.+$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
        try:
            port = int(current)
        except ValueError:
            port = 0
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        elif current != "" and not 1 <= int(port) <= 65535:
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
        return True

    def pass_validation(self, answers, current, regex="(^@.+$)"):
        profiles = current.split(",")
        for i in profiles:
            if not re.match(regex, i) or i[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(i))
        return True

    def tags_validation(self, answers, current):
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        elif current != "":
            isdict = False
            try:
                isdict = ast.literal_eval(current)
            except Exception:
                pass
            if not isinstance(isdict, dict):
                raise inquirer.errors.ValidationError("", reason="Tags should be a python dictionary.")
        return True

    def profile_tags_validation(self, answers, current):
        if current != "":
            isdict = False
            try:
                isdict = ast.literal_eval(current)
            except Exception:
                pass
            if not isinstance(isdict, dict):
                raise inquirer.errors.ValidationError("", reason="Tags should be a python dictionary.")
        return True

    def jumphost_validation(self, answers, current):
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        elif current != "":
            if current not in self.app.nodes_list:
                raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
        return True

    def profile_jumphost_validation(self, answers, current):
        if current != "":
            if current not in self.app.nodes_list:
                raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
        return True

    def default_validation(self, answers, current):
|
||||
if current.startswith("@"):
|
||||
if current[1:] not in self.app.profiles:
|
||||
raise inquirer.errors.ValidationError("", reason="Profile {} don't exist".format(current))
|
||||
return True
|
||||
|
||||
def bulk_node_validation(self, answers, current, regex = "^[0-9a-zA-Z_.,$#-]+$"):
|
||||
if not re.match(regex, current):
|
||||
raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
|
||||
if current.startswith("@"):
|
||||
if current[1:] not in self.app.profiles:
|
||||
raise inquirer.errors.ValidationError("", reason="Profile {} don't exist".format(current))
|
||||
return True
|
||||
|
||||
def bulk_folder_validation(self, answers, current):
|
||||
if not self.app.case:
|
||||
current = current.lower()
|
||||
|
||||
candidate = current
|
||||
if "/" in current:
|
||||
candidate = current.split("/")[0]
|
||||
|
||||
matches = list(filter(lambda k: k == candidate, self.app.folders))
|
||||
if current != "" and len(matches) == 0:
|
||||
raise inquirer.errors.ValidationError("", reason="Location {} don't exist".format(current))
|
||||
return True
|
||||
|
||||
def bulk_host_validation(self, answers, current, regex = "^.+$"):
|
||||
if not re.match(regex, current):
|
||||
raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
|
||||
if current.startswith("@"):
|
||||
if current[1:] not in self.app.profiles:
|
||||
raise inquirer.errors.ValidationError("", reason="Profile {} don't exist".format(current))
|
||||
hosts = current.split(",")
|
||||
nodes = answers["ids"].split(",")
|
||||
if len(hosts) > 1 and len(hosts) != len(nodes):
|
||||
raise inquirer.errors.ValidationError("", reason="Hosts list should be the same length of nodes list")
|
||||
return True
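The port validators above all follow one pattern: a regex gate first, then an `int()` conversion with a fallback, then a range or `@profile` check. A minimal standalone sketch of that logic, without `inquirer` (the profile list here is hypothetical), returning an error string instead of raising `ValidationError`:

```python
import re

def check_port(current, regex=r"(^[0-9]*$|^@.+$)", profiles=("prod",)):
    """Standalone sketch of the port_validation logic (no inquirer).
    Returns an error message string, or None when the input is valid."""
    if not re.match(regex, current):
        return "Pick a port between 1-65535, @profile or leave empty"
    try:
        port = int(current)
    except ValueError:
        port = 0                      # "@profile" and "" fall through here
    if current.startswith("@"):
        if current[1:] not in profiles:
            return "Profile {} doesn't exist".format(current)
    elif current != "" and not 1 <= port <= 65535:
        return "Pick a port between 1-65535, @profile or leave empty"
    return None

print(check_port("22"))        # a numeric port in range
print(check_port("@prod"))     # a reference to an existing profile
print(check_port("70000"))     # out of range -> error message
```

Note the double gate: the regex rejects anything that is not digits, empty, or `@…`, and the range check then catches numerically invalid ports the regex cannot express.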
+116 -348
@@ -1,23 +1,39 @@
import sys
import os

def load_txt_cache(filepath):
    try:
        with open(filepath, "r") as f:
            return f.read().splitlines()
    except FileNotFoundError:
        return []

def get_cwd(words, option=None, folderonly=False):
import json
import glob
import importlib.util

def _getallnodes(config):
    #get all nodes on configfile
    nodes = []
    layer1 = [k for k,v in config["connections"].items() if isinstance(v, dict) and v["type"] == "connection"]
    folders = [k for k,v in config["connections"].items() if isinstance(v, dict) and v["type"] == "folder"]
    nodes.extend(layer1)
    for f in folders:
        layer2 = [k + "@" + f for k,v in config["connections"][f].items() if isinstance(v, dict) and v["type"] == "connection"]
        nodes.extend(layer2)
        subfolders = [k for k,v in config["connections"][f].items() if isinstance(v, dict) and v["type"] == "subfolder"]
        for s in subfolders:
            layer3 = [k + "@" + s + "@" + f for k,v in config["connections"][f][s].items() if isinstance(v, dict) and v["type"] == "connection"]
            nodes.extend(layer3)
    return nodes

def _getallfolders(config):
    #get all folders on configfile
    folders = ["@" + k for k,v in config["connections"].items() if isinstance(v, dict) and v["type"] == "folder"]
    subfolders = []
    for f in folders:
        s = ["@" + k + f for k,v in config["connections"][f[1:]].items() if isinstance(v, dict) and v["type"] == "subfolder"]
        subfolders.extend(s)
    folders.extend(subfolders)
    return folders

def _getcwd(words, option, folderonly=False):
    # Expand tilde to home directory if present
    if words[-1].startswith("~"):
        words[-1] = os.path.expanduser(words[-1])

    # If option is not provided, try to infer it from the first word
    if option is None and words:
        option = words[0]

    if words[-1] == option:
        path = './*'
    else:
@@ -35,21 +51,6 @@ def get_cwd(words, option=None, folderonly=False):
def _get_plugins(which, defaultdir):
    # Path to core_plugins relative to this script
    core_path = os.path.dirname(os.path.realpath(__file__)) + "/core_plugins"
    remote_path = os.path.join(defaultdir, "remote_plugins")

    # Load preferences
    import json
    pref_path = os.path.join(defaultdir, "plugin_preferences.json")
    try:
        with open(pref_path) as f:
            preferences = json.load(f)
    except Exception:
        preferences = {}

    # Load service mode
    # We try to infer if we are in remote mode by checking config.yaml or .folder
    # but for completion usually we just want to know if remote cache exists.
    # However, to be strict we should check preferences.

    def get_plugins_from_directory(directory):
        enabled_files = []
@@ -60,38 +61,21 @@ def _get_plugins(which, defaultdir):
        for file in os.listdir(directory):
            # Check if the file is a Python file
            if file.endswith('.py'):
                name = os.path.splitext(file)[0]
                enabled_files.append(name)
                all_plugins[name] = os.path.join(directory, file)
                enabled_files.append(os.path.splitext(file)[0])
                all_plugins[os.path.splitext(file)[0]] = os.path.join(directory, file)
            # Check if the file is a Python backup file
            elif file.endswith('.py.bkp'):
                name = os.path.splitext(os.path.splitext(file)[0])[0]
                disabled_files.append(name)
                disabled_files.append(os.path.splitext(os.path.splitext(file)[0])[0])
        return enabled_files, disabled_files, all_plugins

    # Get plugins from all directories
    # Get plugins from both directories
    user_enabled, user_disabled, user_all_plugins = get_plugins_from_directory(defaultdir + "/plugins")
    core_enabled, core_disabled, core_all_plugins = get_plugins_from_directory(core_path)
    remote_enabled, remote_disabled, remote_all_plugins = get_plugins_from_directory(remote_path)

    # Calculate final paths respecting priorities and preferences
    # Priority: User Local > Core Local > Remote (unless preferred)

    # Start with core
    final_all_plugins = core_all_plugins.copy()
    # Override with user local
    final_all_plugins.update(user_all_plugins)

    # For remote, we only use them if:
    # 1. They don't exist locally OR
    # 2. Preference is explicitly 'remote'
    for name, path in remote_all_plugins.items():
        if name not in final_all_plugins or preferences.get(name) == "remote":
            final_all_plugins[name] = path

    # Combine enabled/disabled for the helper commands
    enabled_files = list(set(user_enabled + core_enabled + [k for k,v in remote_all_plugins.items() if preferences.get(k) == "remote"]))
    disabled_files = list(set(user_disabled + core_disabled))
    # Combine the results from user and core plugins
    enabled_files = user_enabled
    disabled_files = user_disabled
    all_plugins = {**user_all_plugins, **core_all_plugins} # Merge dictionaries

    # Return based on the command
    if which == "--disable":
@@ -102,248 +86,7 @@ def _get_plugins(which, defaultdir):
        all_files = enabled_files + disabled_files
        return all_files
    elif which == "all":
        return final_all_plugins
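The precedence logic in `_get_plugins` above can be isolated as a small pure function: start from core plugins, let user-local copies override them, then let a remote copy win only when it is new or explicitly preferred. A sketch with made-up names and paths:

```python
def merge_plugins(core, user, remote, preferences):
    """Merge plugin name -> path maps with priority: user > core > remote,
    unless a plugin is explicitly preferred as 'remote'."""
    final = core.copy()
    final.update(user)                      # user-local overrides core
    for name, path in remote.items():
        # remote wins only if unknown locally, or explicitly preferred
        if name not in final or preferences.get(name) == "remote":
            final[name] = path
    return final

final = merge_plugins(
    core={"ssh_helper": "/core/ssh_helper.py"},
    user={"ssh_helper": "/user/ssh_helper.py", "mytool": "/user/mytool.py"},
    remote={"mytool": "/remote/mytool.py", "synced": "/remote/synced.py"},
    preferences={"mytool": "remote"},
)
print(final)
```

Here `ssh_helper` resolves to the user copy, `mytool` to the remote copy (preference override), and `synced` to remote because no local copy exists.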
def _build_tree(nodes, folders, profiles, plugins, configdir):
    """Build the declarative CLI navigation tree.

    Structure:
      - dict: keys are completions + subnavigation.
        "__extra__" adds dynamic data.
        "__exclude_used__" filters already-typed words.
        "*" absorbs unknown positional words and loops to a specific node.
      - list: static choice completions.
      - callable: dynamic completions (called with `words`, returns list).
      - None: no further completions.
    """
    _nodes = lambda w=None: list(nodes)
    _folders = lambda w=None: list(folders)
    _profiles = lambda w=None: list(profiles)
    _nodes_folders = lambda w=None: list(nodes) + list(folders)

    _profile_values = {"__extra__": _profiles}

    # --- Stateful/Looping Nodes ---

    # list nodes
    list_nodes = {"__exclude_used__": True}
    list_nodes.update({
        "--format": {"*": list_nodes},
        "--filter": {"*": list_nodes},
        "*": list_nodes
    })

    # export / import / run loops
    export_dict = {"--help": None, "-h": None}
    export_dict.update({
        "*": export_dict,
        "__extra__": lambda w: get_cwd(w, "export", True) + [f for f in folders if not any(x in f for x in w[1:-1])]
    })

    import_dict = {"--help": None, "-h": None}
    import_dict.update({
        "*": import_dict,
        "__extra__": lambda w: get_cwd(w, "import")
    })

    # --- Run Loop ---
    # After the first positional argument (Node filter or YAML file),
    # we stop suggesting nodes and only allow flags or commands.
    run_after_node = {"--help": None, "-h": None}
    run_after_node.update({
        "--test": {"*": run_after_node},
        "-t": {"*": run_after_node},
        "*": run_after_node # Consume commands
    })

    run_dict = {
        "--generate": {"__extra__": lambda w: get_cwd(w, "--generate")},
        "-g": {"__extra__": lambda w: get_cwd(w, "-g")},
        "--test": {"*": None},
        "-t": {"*": None},
        "--help": None,
        "-h": None,
        "__extra__": lambda w: get_cwd(w, "run") + list(nodes),
        "*": run_after_node
    }

    # State Machine Definitions
    mcp_dict = {
        "list": None,
        "add": {"*": {"*": {"*": None}}}, # name url [os]
        "remove": {"*": None},
        "enable": {"*": None},
        "disable": {"*": None},
        "--help": None, "-h": None
    }

    ai_dict = {"__exclude_used__": True, "--help": None, "-h": None}
    for opt in ["--engineer-model", "--engineer-api-key", "--architect-model", "--architect-api-key"]:
        ai_dict[opt] = {"*": ai_dict} # takes value, loops back
    for opt in ["--debug", "--trust", "--list", "--list-sessions", "--session", "--resume", "--delete", "--delete-session", "-y"]:
        ai_dict[opt] = ai_dict # takes no value, loops back
    ai_dict["--mcp"] = mcp_dict
    ai_dict["*"] = ai_dict

    mv_state = {"__extra__": _nodes, "--help": None, "-h": None}
    cp_state = {"__extra__": _nodes, "--help": None, "-h": None}
    ls_state = {
        "profiles": None,
        "nodes": list_nodes,
        "folders": None,
    }

    # --- Connect (default command) ---
    # Long flags are offered; short forms (-d/-t) only used for navigation.
    # Two states: before node (offer nodes + remaining long flags)
    #             after node (offer only remaining long flags, no more nodes)
    connect_flags_long = ["--debug", "--sftp"]
    connect_flags_all = ["--debug", "-d", "--sftp", "-t"]

    # Post-node: only offer remaining long flags
    connect_after_node = {"__exclude_used__": True}
    for f in connect_flags_all:
        connect_after_node[f] = connect_after_node

    # Pre-node: offer nodes + remaining long flags, consume node → post-node state
    connect_dict = {"__exclude_used__": True}
    connect_dict["__extra__"] = lambda w: (
        list(nodes) + list(folders) + (list(plugins.keys()) if plugins else [])
    )
    connect_dict["*"] = connect_after_node
    for f in connect_flags_all:
        connect_dict[f] = connect_dict

    # --- Main Tree ---
    return {
        # Root: offer nodes + long flags; after a node go to post-node state
        "__extra__": lambda w: list(nodes) + list(folders) + (list(plugins.keys()) if plugins else []),
        "*": connect_after_node,

        "--debug": connect_dict,
        "-d": connect_dict,
        "--sftp": connect_dict,
        "-t": connect_dict,

        "--add": {"profile": _profile_values},
        "--del": {"profile": _profile_values, "__extra__": _nodes_folders},
        "--rm": {"profile": _profile_values, "__extra__": _nodes_folders},
        "--edit": {"profile": _profile_values, "__extra__": _nodes},
        "--mod": {"profile": _profile_values, "__extra__": _nodes},
        "--show": {"profile": _profile_values, "__extra__": _nodes},
        "--help": None,

        "-a": {"profile": _profile_values},
        "-r": {"profile": _profile_values, "__extra__": _nodes_folders},
        "-e": {"profile": _profile_values, "__extra__": _nodes},
        "-s": {"profile": _profile_values, "__extra__": _nodes},

        "profile": {
            "--add": None, "--rm": _profiles, "--del": _profiles,
            "--edit": _profiles, "--mod": _profiles, "--show": _profiles,
            "--help": None,
            "-a": None, "-r": _profiles, "-e": _profiles, "-s": _profiles, "-h": None,
        },
        "move": mv_state,
        "mv": mv_state,
        "copy": cp_state,
        "cp": cp_state,

        "list": ls_state,
        "ls": ls_state,

        "bulk": {"--file": None, "--help": None, "-f": None, "-h": None},
        "run": run_dict,
        "export": export_dict,
        "import": import_dict,
        "ai": ai_dict,

        "api": {
            "--start": None, "--restart": None, "--stop": None, "--debug": None,
            "--help": None,
            "-s": None, "-r": None, "-x": None, "-d": None, "-h": None,
        },
        "context": {
            "--add": None, "--rm": None, "--del": None,
            "--ls": None, "--set": None,
            "--show": None, "--edit": None, "--mod": None,
            "--help": None,
            "-a": None, "-r": None, "-s": None, "-e": None, "-h": None,
        },
        "plugin": {
            "--add": {"*": lambda w: get_cwd(w, "--add")},
            "--update": {"*": lambda w: get_cwd(w, "--update")},
            "--del": lambda w: _get_plugins("--del", configdir),
            "--enable": lambda w: _get_plugins("--enable", configdir),
            "--disable": lambda w: _get_plugins("--disable", configdir),
            "--list": None, "--help": None,
            "-h": None,
        },
        "config": {
            "--allow-uppercase": ["true", "false"],
            "--fzf": ["true", "false"],
            "--keepalive": None,
            "--completion": ["bash", "zsh"],
            "--fzf-wrapper": ["bash", "zsh"],
            "--configfolder": lambda w: get_cwd(w, "--configfolder", True),
            "--engineer-model": None, "--engineer-api-key": None,
            "--architect-model": None, "--architect-api-key": None,
            "--theme": None,
            "--service-mode": ["local", "remote"],
            "--remote": None,
            "--sync-remote": ["true", "false"],
            "--trusted-commands": None,
            "--help": None, "-h": None,
        },
        "sync": {
            "--login": None, "--logout": None,
            "--status": None, "--list": None,
            "--once": None, "--restore": None,
            "--start": None, "--stop": None,
            "--id": None, "--nodes": None, "--config": None,
            "--help": None, "-h": None,
        },
    }


def resolve_completion(words, tree):
    """Navigate the tree following typed words, properly handling dynamic state loops."""
    current = tree
    for word in words[:-1]:
        if isinstance(current, dict):
            if word in current:
                current = current[word]
            elif "*" in current:
                current = current["*"]
            else:
                return []
        else:
            return []

    results = []
    if isinstance(current, dict):
        results = [k for k in current
                   if not k.startswith("__")
                   and not k.startswith("*")
                   and not (len(k) == 2 and k in ["mv", "cp", "ls"])
                   and not (len(k) == 2 and k[0] == "-" and k[1] != "-")]

        if current.get("__exclude_used__"):
            results = [r for r in results if r not in words[:-1]]

        extra = current.get("__extra__")
        if callable(extra):
            results.extend(extra(words))
        elif isinstance(extra, list):
            results.extend(extra)
    elif isinstance(current, list):
        results = list(current)
    elif callable(current):
        results = list(current(words))

    return results
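The tree walk used by `resolve_completion` can be exercised with a toy tree. The resolver below is a simplified sketch of the same pattern (it omits the short-flag and `__exclude_used__` handling above), and the node names are made up:

```python
def resolve(words, tree):
    """Simplified sketch of the tree-walking resolver above."""
    current = tree
    for word in words[:-1]:                  # walk every completed word
        if not isinstance(current, dict):
            return []
        current = current.get(word, current.get("*"))
        if current is None:
            return []
    if isinstance(current, dict):
        # offer static keys plus dynamic "__extra__" completions
        results = [k for k in current if not k.startswith(("__", "*"))]
        extra = current.get("__extra__")
        if callable(extra):
            results.extend(extra(words))
        return results
    return list(current) if isinstance(current, (list, tuple)) else []

tree = {
    "list": {"nodes": None, "profiles": None},
    "__extra__": lambda w: ["server1", "server2"],   # hypothetical nodes
}
print(resolve(["list", ""], tree))   # completions under "list"
print(resolve([""], tree))           # root keys plus dynamic nodes
```

The last word in `words` is the partial token being typed, which is why the walk stops at `words[:-1]` and leaves prefix filtering to the caller.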
    return all_plugins

def main():
    home = os.path.expanduser("~")
@@ -352,17 +95,17 @@ def main():
    try:
        with open(pathfile, "r") as f:
            configdir = f.read().strip()
    except (FileNotFoundError, IOError):
    except:
        configdir = defaultdir
    cachefile = configdir + '/.config.cache.json'

    nodes = load_txt_cache(configdir + '/.fzf_nodes_cache.txt')
    folders = load_txt_cache(configdir + '/.folders_cache.txt')
    profiles = load_txt_cache(configdir + '/.profiles_cache.txt')
    plugins = _get_plugins("all", configdir)

    defaultfile = configdir + '/config.json'
    jsonconf = open(defaultfile)
    config = json.load(jsonconf)
    nodes = _getallnodes(config)
    folders = _getallfolders(config)
    profiles = list(config["profiles"].keys())
    plugins = _get_plugins("all", defaultdir)
    info = {}
    info["config"] = None
    info["config"] = config
    info["nodes"] = nodes
    info["folders"] = folders
    info["profiles"] = profiles
@@ -374,62 +117,87 @@ def main():
    positions = [1,3]
    wordsnumber = int(sys.argv[positions[0]])
    words = sys.argv[positions[1]:]
    if wordsnumber == 2:
        strings=["--add", "--del", "--rm", "--edit", "--mod", "--show", "mv", "move", "ls", "list", "cp", "copy", "profile", "run", "bulk", "config", "api", "ai", "export", "import", "--help", "plugin"]
        if plugins:
            strings.extend(plugins.keys())
        strings.extend(nodes)
        strings.extend(folders)

    # --- Plugin completion ---
    # Try new tree API first: _connpy_tree integrates into the main tree.
    # Fall back to legacy _connpy_completion for older plugins.
    if wordsnumber >= 3 and plugins and words[0] in plugins:
        import importlib.util
        plugin_path = plugins[words[0]]
    elif wordsnumber >=3 and words[0] in plugins.keys():
        try:
            spec = importlib.util.spec_from_file_location("module.name", plugin_path)
            spec = importlib.util.spec_from_file_location("module.name", plugins[words[0]])
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            module.get_cwd = get_cwd
        except Exception:
            exit()

        # New API: _connpy_tree → integrate into main tree and use resolver
        if hasattr(module, "_connpy_tree"):
            plugin_node = module._connpy_tree(info)
            tree = _build_tree(nodes, folders, profiles, plugins, configdir)
            tree[words[0]] = plugin_node
            strings = resolve_completion(words, tree)

        # Legacy API: _connpy_completion → delegate entirely
        elif hasattr(module, "_connpy_completion"):
            import json
            try:
                with open(cachefile, "r") as jsonconf:
                    info["config"] = json.load(jsonconf)
            except Exception:
                try:
                    import yaml
                    with open(configdir + '/config.yaml', "r") as yamlconf:
                        info["config"] = yaml.safe_load(yamlconf)
                except Exception:
                    info["config"] = {}
            try:
                plugin_completion = getattr(module, "_connpy_completion")
                strings = plugin_completion(wordsnumber, words, info)
            except Exception:
            except:
                exit()
    elif wordsnumber >= 3 and words[0] == "ai":
        if wordsnumber == 3:
            strings = ["--help", "--org", "--model", "--api_key"]
        else:
            strings = ["--org", "--model", "--api_key"]
    elif wordsnumber == 3:
        strings=[]
        if words[0] == "profile":
            strings=["--add", "--rm", "--del", "--edit", "--mod", "--show", "--help"]
        if words[0] == "config":
            strings=["--allow-uppercase", "--keepalive", "--completion", "--fzf", "--configfolder", "--openai-org", "--openai-org-api-key", "--openai-org-model","--help"]
        if words[0] == "api":
            strings=["--start", "--stop", "--restart", "--debug", "--help"]
        if words[0] in ["--mod", "--edit", "-e", "--show", "-s", "--add", "-a", "--rm", "--del", "-r"]:
            strings=["profile"]
        if words[0] in ["list", "ls"]:
            strings=["profiles", "nodes", "folders"]
        if words[0] in ["bulk", "mv", "cp", "copy"]:
            strings=["--help"]
        if words[0] in ["--rm", "--del", "-r"]:
            strings.extend(folders)
        if words[0] in ["--rm", "--del", "-r", "--mod", "--edit", "-e", "--show", "-s", "mv", "move", "cp", "copy"]:
            strings.extend(nodes)
        if words[0] == "plugin":
            strings = ["--help", "--add", "--update", "--del", "--enable", "--disable", "--list"]
        if words[0] in ["run", "import", "export"]:
            strings = ["--help"]
            if words[0] == "export":
                pathstrings = _getcwd(words, words[0], True)
            else:
                pathstrings = _getcwd(words, words[0])
            strings.extend(pathstrings)
            if words[0] == "run":
                strings.extend(nodes)

    elif wordsnumber >= 4 and words[0] == "export" and words[1] != "--help":
        strings = [item for item in folders if not any(word in item for word in words[:-1])]

    elif wordsnumber >= 4 and words[0] in ["list", "ls"] and words[1] == "nodes":
        options = ["--format", "--filter"]
        strings = [item for item in options if not any(word in item for word in words[:-1])]

    elif wordsnumber == 4:
        strings=[]
        if words[0] == "profile" and words[1] in ["--rm", "--del", "-r", "--mod", "--edit", "-e", "--show", "-s"]:
            strings.extend(profiles)
        if words[1] == "profile" and words[0] in ["--rm", "--del", "-r", "--mod", "--edit", "-e", "--show", "-s"]:
            strings.extend(profiles)
        if words[0] == "config" and words[1] == "--completion":
            strings=["bash", "zsh"]
        if words[0] == "config" and words[1] in ["--fzf", "--allow-uppercase"]:
            strings=["true", "false"]
        if words[0] == "config" and words[1] in ["--configfolder"]:
            strings=_getcwd(words,words[1],True)
        if words[0] == "plugin" and words[1] in ["--update", "--del", "--enable", "--disable"]:
            strings=_get_plugins(words[1], defaultdir)

    elif wordsnumber == 5 and words[0] == "plugin" and words[1] in ["--add", "--update"]:
        strings=_getcwd(words, words[2])
    else:
        exit()

    # --- Tree-based completion ---
    else:
        tree = _build_tree(nodes, folders, profiles, plugins, configdir)
        strings = resolve_completion(words, tree)

    current_word = words[-1] if len(words) > 0 else ""
    matches = [s for s in strings if s.startswith(current_word)]

    if app == "bash":
        strings = [s if s.endswith('/') else f"'{s} '" for s in matches]
    else:
        strings = matches

    strings = [s if s.endswith('/') else f"'{s} '" for s in strings]
    print('\t'.join(strings))
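The final step above filters candidates against the word currently being typed, and for bash wraps each match in quotes with a trailing space so the shell inserts it as a finished token (directory names ending in `/` are left open for further completion). A standalone sketch:

```python
def format_matches(strings, current_word, app="bash"):
    """Prefix-filter completion candidates, then quote them for bash."""
    matches = [s for s in strings if s.startswith(current_word)]
    if app == "bash":
        # directories stay open-ended; everything else gets a closing space
        matches = [s if s.endswith('/') else f"'{s} '" for s in matches]
    return '\t'.join(matches)

print(format_matches(["profile", "plugin", "run"], "p"))
```

The tab-joined output matches the format the shell completion wrapper splits on.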
if __name__ == '__main__':
+86 -256
@@ -3,19 +3,15 @@
import json
import os
import re
import sys
import yaml
import shutil
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP
from pathlib import Path
from copy import deepcopy
from .hooks import MethodHook, ClassHook
from . import printer

class NoAliasDumper(yaml.SafeDumper):
    def ignore_aliases(self, data):
        return True


#functions and classes

@ClassHook
class configfile:
@@ -49,7 +45,7 @@ class configfile:
        ### Optional Parameters:

        - conf (str): Path/file to config file. If left empty default
          path is ~/.config/conn/config.yaml
          path is ~/.config/conn/config.json

        - key (str): Path/file to RSA key file. If left empty default
          path is ~/.config/conn/.osk
@@ -57,207 +53,77 @@ class configfile:
        '''
        home = os.path.expanduser("~")
        defaultdir = home + '/.config/conn'

        if conf is None:
            # Standard path: use ~/.config/conn and respect .folder redirection
            self.anchor_path = defaultdir
            self.defaultdir = defaultdir
            Path(defaultdir).mkdir(parents=True, exist_ok=True)
            Path(f"{defaultdir}/plugins").mkdir(parents=True, exist_ok=True)
            pathfile = defaultdir + '/.folder'
            try:
                with open(pathfile, "r") as f:
                    configdir = f.read().strip()
            except (FileNotFoundError, IOError):
            except:
                with open(pathfile, "w") as f:
                    f.write(str(defaultdir))
                configdir = defaultdir

            self.defaultdir = configdir
            self.file = configdir + '/config.yaml'
            self.key = key or (configdir + '/.osk')

            # Ensure redirected directories exist
            Path(configdir).mkdir(parents=True, exist_ok=True)
            Path(f"{configdir}/plugins").mkdir(parents=True, exist_ok=True)

            # Backwards compatibility: Migrate from JSON to YAML only for default path
            legacy_json = configdir + '/config.json'
            legacy_noext = configdir + '/config'
            legacy_file = None
            if os.path.exists(legacy_json): legacy_file = legacy_json
            elif os.path.exists(legacy_noext): legacy_file = legacy_noext

            if not os.path.exists(self.file) and legacy_file:
                try:
                    with open(legacy_file, 'r') as f:
                        old_data = json.load(f)
                    if not self._validate_config(old_data):
                        printer.warning(f"Legacy config {legacy_file} has invalid structure, skipping migration.")
        defaultfile = configdir + '/config.json'
        defaultkey = configdir + '/.osk'
        if conf == None:
            self.file = defaultfile
        else:
                        with open(self.file, 'w') as f:
                            yaml.dump(old_data, f, Dumper=NoAliasDumper, default_flow_style=False, sort_keys=False)
                        # Verify the written YAML can be read back correctly
                        with open(self.file, 'r') as f:
                            verify = yaml.safe_load(f)
                        if not self._validate_config(verify):
                            os.remove(self.file)
                            printer.warning("YAML verification failed after migration, keeping legacy config.")
            self.file = conf
        if key == None:
            self.key = defaultkey
        else:
                        else:
                            # Note: cachefile is derived later, we use temp one for migration sync
                            temp_cache = configdir + '/.config.cache.json'
                            with open(temp_cache, 'w') as f:
                                json.dump(old_data, f)
                            shutil.move(legacy_file, legacy_file + ".backup")
                            printer.success(f"Migrated legacy config ({len(old_data.get('connections',{}))} folders/nodes) into YAML and Cache successfully!")
                except Exception as e:
                    if os.path.exists(self.file):
                        try: os.remove(self.file)
                        except OSError: pass
                    printer.warning(f"Failed to migrate legacy config: {e}")
        else:
            # Custom path (common in tests): isolate everything to the conf parent directory
            self.file = os.path.abspath(conf)
            configdir = os.path.dirname(self.file)
            self.anchor_path = configdir
            self.defaultdir = configdir
            self.key = os.path.abspath(key) if key else (configdir + '/.osk')

        # Sidecar files always live next to the config file (or in the redirected configdir)
        self.cachefile = configdir + '/.config.cache.json'
        self.fzf_cachefile = configdir + '/.fzf_nodes_cache.txt'
        self.folders_cachefile = configdir + '/.folders_cache.txt'
        self.profiles_cachefile = configdir + '/.profiles_cache.txt'

        self.key = key
        if os.path.exists(self.file):
            config = self._loadconfig(self.file)
        else:
            config = self._createconfig(self.file)

        self.config = config["config"]
        self.connections = config["connections"]
        self.profiles = config["profiles"]

        if not os.path.exists(self.key):
            self._createkey(self.key)
        with open(self.key) as f:
            self.privatekey = RSA.import_key(f.read())
        f.close()
        self.publickey = self.privatekey.publickey()

        # Self-heal text caches if they are missing
        if not os.path.exists(self.fzf_cachefile) or not os.path.exists(self.folders_cachefile) or not os.path.exists(self.profiles_cachefile):
            self._generate_nodes_cache()

    def _validate_config(self, data):
        """Verify config data has the required structure."""
        if not isinstance(data, dict):
            return False
        required = {"config", "connections", "profiles"}
        return required.issubset(data.keys())
||||
|
||||
    def _loadconfig(self, conf):
        #Load config file using dual cache (YAML file + JSON cache)
        cache_exists = os.path.exists(self.cachefile)
        yaml_time = os.path.getmtime(conf) if os.path.exists(conf) else 0
        cache_time = os.path.getmtime(self.cachefile) if cache_exists else 0

        if not cache_exists or yaml_time > cache_time:
            with open(conf, 'r') as f:
                data = yaml.safe_load(f)
            if not self._validate_config(data):
                # YAML is broken, try to recover from cache
                if cache_exists:
                    printer.warning("Config file appears corrupt, recovering from cache...")
                    with open(self.cachefile, 'r') as f:
                        data = json.load(f)
                    if self._validate_config(data):
                        # Re-write the YAML from the good cache
                        with open(conf, 'w') as f:
                            yaml.dump(data, f, Dumper=NoAliasDumper, default_flow_style=False, sort_keys=False)
                        return data
                # Both broken or no cache - create fresh
                printer.error("Config file is corrupt and no valid cache exists. Creating default config.")
                return self._createconfig(conf)
            try:
                with open(self.cachefile, 'w') as f:
                    json.dump(data, f)
            except Exception:
                pass
            return data
        else:
            with open(self.cachefile, 'r') as f:
                data = json.load(f)
            if not self._validate_config(data):
                # Cache broken, try the YAML file
                with open(conf, 'r') as f:
                    data = yaml.safe_load(f)
                if self._validate_config(data):
                    return data
                # Both broken
                printer.error("Both config and cache are corrupt. Creating default config.")
                return self._createconfig(conf)
            return data

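The mtime comparison that decides between the YAML file and the JSON cache in `_loadconfig` can be sketched in isolation (hypothetical helper name `cache_is_fresh`; temp files stand in for the real config paths):

```python
import json
import os
import tempfile

# Standalone sketch of the freshness rule: the JSON cache is authoritative
# only while it is at least as new as the YAML config file.
def cache_is_fresh(conf_path, cache_path):
    if not os.path.exists(cache_path):
        return False
    conf_time = os.path.getmtime(conf_path) if os.path.exists(conf_path) else 0
    return os.path.getmtime(cache_path) >= conf_time

tmp = tempfile.mkdtemp()
conf = os.path.join(tmp, "config.yaml")
cache = os.path.join(tmp, "cache.json")
with open(conf, "w") as f:
    f.write("config: {}\n")
with open(cache, "w") as f:
    json.dump({}, f)
os.utime(conf, (1000, 1000))    # YAML older than cache -> cache wins
os.utime(cache, (2000, 2000))
fresh_before = cache_is_fresh(conf, cache)
os.utime(conf, (3000, 3000))    # YAML edited after cache -> reload YAML
fresh_after = cache_is_fresh(conf, cache)
```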
    def _createconfig(self, conf):
        #Create config file (always writes defaults, safe for recovery)
        defaultconfig = {'config': {'case': False, 'idletime': 30, 'fzf': False}, 'connections': {}, 'profiles': { "default": { "host":"", "protocol":"ssh", "port":"", "user":"", "password":"", "options":"", "logs":"", "tags": "", "jumphost":""}}}
        if not os.path.exists(conf):
            with open(conf, "w") as f:
                yaml.dump(defaultconfig, f, Dumper=NoAliasDumper, default_flow_style=False, sort_keys=False)
        os.chmod(conf, 0o600)
        try:
            with open(self.cachefile, 'w') as f:
                json.dump(defaultconfig, f)
        except Exception:
            pass
        return defaultconfig

    @MethodHook
    def _saveconfig(self, conf):
        #Save config file atomically to prevent corruption
        newconfig = {"config":{}, "connections": {}, "profiles": {}}
        newconfig["config"] = self.config
        newconfig["connections"] = self.connections
        newconfig["profiles"] = self.profiles
        tmpfile = conf + '.tmp'
        try:
            with open(tmpfile, "w") as f:
                yaml.dump(newconfig, f, Dumper=NoAliasDumper, default_flow_style=False, sort_keys=False)
            # Atomic replace: only overwrite the original if the write succeeded
            shutil.move(tmpfile, conf)
            with open(self.cachefile, "w") as f:
                json.dump(newconfig, f)
            self._generate_nodes_cache()
        except (IOError, OSError) as e:
            printer.error(f"Failed to save config: {e}")
            # Clean up the temp file if it exists
            if os.path.exists(tmpfile):
                try:
                    os.remove(tmpfile)
                except OSError:
                    pass
            return 1
        return 0

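A minimal sketch of the same write-temp-then-replace pattern, using `os.replace` in place of `shutil.move` (illustrative helper, not the module's API; `os.replace` is atomic on POSIX, so the original file is never left half-written):

```python
import json
import os
import tempfile

def save_atomically(path, data):
    # Write to a sibling temp file first; only swap it in if the write succeeded.
    tmpfile = path + ".tmp"
    try:
        with open(tmpfile, "w") as f:
            json.dump(data, f)
        os.replace(tmpfile, path)
    except OSError:
        if os.path.exists(tmpfile):
            os.remove(tmpfile)
        return 1
    return 0

target = os.path.join(tempfile.mkdtemp(), "config.json")
status = save_atomically(target, {"config": {}, "connections": {}, "profiles": {}})
```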
    def _generate_nodes_cache(self, nodes=None, folders=None, profiles=None):
        try:
            if nodes is None:
                nodes = self._getallnodes()
            if folders is None:
                folders = self._getallfolders()
            if profiles is None:
                profiles = list(self.profiles.keys())

            with open(self.fzf_cachefile, "w") as f:
                f.write("\n".join(nodes))
            with open(self.folders_cachefile, "w") as f:
                f.write("\n".join(folders))
            with open(self.profiles_cachefile, "w") as f:
                f.write("\n".join(profiles))
        except Exception:
            pass


    def _createkey(self, keyfile):
        #Create key file
        key = RSA.generate(2048)
@@ -289,7 +155,7 @@ class configfile:
        return result

    @MethodHook
    def getitem(self, unique, keys = None, extract = False):
        '''
        Get a node or a group of nodes from configfile which can be passed to node/nodes class

@@ -303,8 +169,6 @@ class configfile:

        - keys (list): In case you pass a folder as unique, you can filter
                       nodes inside the folder passing a list.
        - extract (bool): If True, extract information from profiles.
                          Default False.

        ### Returns:

@@ -320,35 +184,21 @@ class configfile:
            folder = self.connections[uniques["folder"]]
            newfolder = deepcopy(folder)
            newfolder.pop("type")
            for node_name in folder.keys():
                if node_name == "type":
                    continue
                if "type" in newfolder[node_name].keys():
                    if newfolder[node_name]["type"] == "subfolder":
                        newfolder.pop(node_name)
                    else:
                        newfolder[node_name].pop("type")

            if keys != None:
                newfolder = dict((k, newfolder[k]) for k in keys)

            if extract:
                for node_name, node_keys in newfolder.items():
                    for key, value in node_keys.items():
                        profile = re.search("^@(.*)", str(value))
                        if profile:
                            try:
                                newfolder[node_name][key] = self.profiles[profile.group(1)][key]
                            except KeyError:
                                newfolder[node_name][key] = ""
                        elif value == '' and key == "protocol":
                            try:
                                newfolder[node_name][key] = self.profiles["default"][key]
                            except KeyError:
                                newfolder[node_name][key] = "ssh"

            newfolder = {"{}{}".format(k,unique):v for k,v in newfolder.items()}
            return newfolder
        else:
            if uniques.keys() >= {"folder", "subfolder"}:
                node = self.connections[uniques["folder"]][uniques["subfolder"]][uniques["id"]]
@@ -358,24 +208,10 @@ class configfile:
                node = self.connections[uniques["id"]]
            newnode = deepcopy(node)
            newnode.pop("type")

            if extract:
                for key, value in newnode.items():
                    profile = re.search("^@(.*)", str(value))
                    if profile:
                        try:
                            newnode[key] = self.profiles[profile.group(1)][key]
                        except KeyError:
                            newnode[key] = ""
                    elif value == '' and key == "protocol":
                        try:
                            newnode[key] = self.profiles["default"][key]
                        except KeyError:
                            newnode[key] = "ssh"
            return newnode

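The `@profile` expansion performed when `extract` is set can be exercised standalone (hypothetical `resolve` helper; the profiles dict and node are invented for the example):

```python
import re

profiles = {
    "default": {"protocol": "ssh", "user": "admin"},
    "lab": {"user": "labuser", "password": "secret"},
}

def resolve(node, profiles):
    # Replace "@name" values with the matching key from that profile,
    # and fall back to the default profile for an empty protocol.
    resolved = dict(node)
    for key, value in node.items():
        match = re.search("^@(.*)", str(value))
        if match:
            resolved[key] = profiles.get(match.group(1), {}).get(key, "")
        elif value == '' and key == "protocol":
            resolved[key] = profiles.get("default", {}).get(key, "ssh")
    return resolved

node = {"host": "10.0.0.1", "user": "@lab", "password": "@lab", "protocol": ""}
resolved = resolve(node, profiles)
```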
    @MethodHook
    def getitems(self, uniques, extract = False):
        '''
        Get a group of nodes from configfile which can be passed to node/nodes class

@@ -385,11 +221,6 @@ class configfile:
                          from the connection manager. It can be a
                          list of strings.

        ### Optional Parameters:

        - extract (bool): If True, extract information from profiles.
                          Default False.

        ### Returns:

        dict: Dictionary containing information of node or multiple
@@ -400,15 +231,23 @@ class configfile:
        if isinstance(uniques, str):
            uniques = [uniques]
        for i in uniques:
            if isinstance(i, dict):
                name = list(i.keys())[0]
                mylist = i[name]
                if not self.config["case"]:
                    name = name.lower()
                    mylist = [item.lower() for item in mylist]
                this = self.getitem(name, mylist)
                nodes.update(this)
            elif i.startswith("@"):
                if not self.config["case"]:
                    i = i.lower()
                this = self.getitem(i, extract = extract)
                nodes.update(this)
            else:
                if not self.config["case"]:
                    i = i.lower()
                this = self.getitem(i, extract = extract)
                nodes[i] = this
        return nodes

@@ -468,57 +307,48 @@ class configfile:
    def _getallnodes(self, filter = None):
        #get all nodes on configfile
        nodes = []
        layer1 = [k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "connection"]
        folders = [k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "folder"]
        nodes.extend(layer1)
        for f in folders:
            layer2 = [k + "@" + f for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "connection"]
            nodes.extend(layer2)
            subfolders = [k for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "subfolder"]
            for s in subfolders:
                layer3 = [k + "@" + s + "@" + f for k,v in self.connections[f][s].items() if isinstance(v, dict) and v.get("type") == "connection"]
                nodes.extend(layer3)
        if filter:
            flat_filter = []
            if isinstance(filter, str):
                flat_filter = [filter]
            elif isinstance(filter, list):
                for item in filter:
                    if isinstance(item, str):
                        flat_filter.append(item)
            else:
                raise ValueError("filter must be a string or a list of strings")
            nodes = [item for item in nodes if any(re.search(pattern, item) for pattern in flat_filter)]
        return nodes

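The filter normalization above, sketched as a free function (illustrative names; the node list is made up):

```python
import re

def filter_nodes(nodes, node_filter):
    # Accept a single pattern or a list of patterns, reject anything else,
    # then keep nodes matching any of the patterns.
    if isinstance(node_filter, str):
        flat = [node_filter]
    elif isinstance(node_filter, list):
        flat = [item for item in node_filter if isinstance(item, str)]
    else:
        raise ValueError("filter must be a string or a list of strings")
    return [n for n in nodes if any(re.search(p, n) for p in flat)]

nodes = ["router1", "switch1@office", "fw1@dmz@office"]
```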
    @MethodHook
    def _getallnodesfull(self, filter = None, extract = True):
        #get all nodes on configfile with all their attributes.
        nodes = {}
        layer1 = {k:v for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "connection"}
        folders = [k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "folder"]
        nodes.update(layer1)
        for f in folders:
            layer2 = {k + "@" + f:v for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "connection"}
            nodes.update(layer2)
            subfolders = [k for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "subfolder"]
            for s in subfolders:
                layer3 = {k + "@" + s + "@" + f:v for k,v in self.connections[f][s].items() if isinstance(v, dict) and v.get("type") == "connection"}
                nodes.update(layer3)
        if filter:
            flat_filter = []
            if isinstance(filter, str):
                flat_filter = [filter]
            elif isinstance(filter, list):
                for item in filter:
                    if isinstance(item, str):
                        flat_filter.append(item)
            else:
                raise ValueError("filter must be a string or a list of strings")
            flat_filter = ["^(?!.*@).+$" if item == "@" else item for item in flat_filter]
            nodes = {k: v for k, v in nodes.items() if any(re.search(pattern, k) for pattern in flat_filter)}
        if extract:
            for node, keys in nodes.items():
                for key, value in keys.items():
@@ -526,12 +356,12 @@ class configfile:
                    if profile:
                        try:
                            nodes[node][key] = self.profiles[profile.group(1)][key]
                        except KeyError:
                            nodes[node][key] = ""
                    elif value == '' and key == "protocol":
                        try:
                            nodes[node][key] = self.profiles["default"][key]
                        except KeyError:
                            nodes[node][key] = "ssh"
        return nodes

@@ -539,27 +369,27 @@ class configfile:
    @MethodHook
    def _getallfolders(self):
        #get all folders on configfile
        folders = ["@" + k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "folder"]
        subfolders = []
        for f in folders:
            s = ["@" + k + f for k,v in self.connections[f[1:]].items() if isinstance(v, dict) and v.get("type") == "subfolder"]
            subfolders.extend(s)
        folders.extend(subfolders)
        return folders

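The "@subfolder@folder" naming produced here can be traced against a toy connections tree (data invented for the example):

```python
# Minimal stand-in for self.connections: one folder with one subfolder.
connections = {
    "office": {
        "type": "folder",
        "core": {"type": "subfolder", "sw1": {"type": "connection"}},
        "r1": {"type": "connection"},
    },
    "r0": {"type": "connection"},
}

# Same two-pass walk as _getallfolders: top-level folders first,
# then each subfolder suffixed with its parent folder name.
folders = ["@" + k for k, v in connections.items()
           if isinstance(v, dict) and v.get("type") == "folder"]
subfolders = []
for f in folders:
    subfolders.extend("@" + k + f for k, v in connections[f[1:]].items()
                      if isinstance(v, dict) and v.get("type") == "subfolder")
folders.extend(subfolders)
```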
    @MethodHook
    def _profileused(self, profile):
        #Return all the nodes that use this profile.
        nodes = []
        layer1 = [k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "connection" and ("@" + profile in v.values() or (isinstance(v.get("password"), list) and "@" + profile in v.get("password")))]
        folders = [k for k,v in self.connections.items() if isinstance(v, dict) and v.get("type") == "folder"]
        nodes.extend(layer1)
        for f in folders:
            layer2 = [k + "@" + f for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "connection" and ("@" + profile in v.values() or (isinstance(v.get("password"), list) and "@" + profile in v.get("password")))]
            nodes.extend(layer2)
            subfolders = [k for k,v in self.connections[f].items() if isinstance(v, dict) and v.get("type") == "subfolder"]
            for s in subfolders:
                layer3 = [k + "@" + s + "@" + f for k,v in self.connections[f][s].items() if isinstance(v, dict) and v.get("type") == "connection" and ("@" + profile in v.values() or (isinstance(v.get("password"), list) and "@" + profile in v.get("password")))]
                nodes.extend(layer3)
        return nodes

(+1412 −408) File diff suppressed because it is too large
(+126 −736) File diff suppressed because it is too large
@@ -1,402 +0,0 @@
import argparse
import sys

class Parser:
    def __init__(self):
        self.parser = argparse.ArgumentParser(description="Capture packets remotely using a saved SSH node", epilog="All unknown arguments will be passed to tcpdump.")

        self.parser.add_argument("node", nargs='?', help="Name of the saved node (must use SSH)")
        self.parser.add_argument("interface", nargs='?', help="Network interface to capture on")
        self.parser.add_argument("--ns", "--namespace", dest="namespace", help="Optional network namespace")
        self.parser.add_argument("-w", "--wireshark", action="store_true", help="Open live capture in Wireshark")
        self.parser.add_argument("--set-wireshark-path", metavar="PATH", help="Set the default path to the Wireshark binary")
        self.parser.add_argument(
            "-f", "--filter",
            dest="tcpdump_filter",
            metavar="ARG",
            nargs="*",
            default=["not", "port", "22"],
            help="tcpdump filter expression (e.g., -f port 443 and udp). Default: not port 22"
        )
        self.parser.add_argument(
            "--unknown-args",
            action="store_true",
            default=True,
            help=argparse.SUPPRESS
        )

class Entrypoint:
    @staticmethod
    def get_remote_capture_class():
        import subprocess
        import random
        import socket
        import time
        import threading
        from pexpect import TIMEOUT
        from connpy import printer

        class RemoteCapture:
            def __init__(self, connapp, node_name, interface, namespace=None, use_wireshark=False, tcpdump_filter=None, tcpdump_args=None):
                self.connapp = connapp
                self.node_name = node_name
                self.interface = interface
                self.namespace = namespace
                self.use_wireshark = use_wireshark
                self.tcpdump_filter = tcpdump_filter or []
                self.tcpdump_args = tcpdump_args if isinstance(tcpdump_args, list) else []

                if node_name.startswith("@"):  # fuzzy match
                    matches = self.connapp.services.nodes.list_nodes(node_name)
                else:
                    matches = self.connapp.services.nodes.list_nodes(f"^{node_name}")

                if not matches:
                    printer.error(f"Node '{node_name}' not found.")
                    sys.exit(2)
                elif len(matches) > 1:
                    from ..cli.helpers import choose
                    matches[0] = choose(self.connapp, matches, "node", "capture")

                if matches[0] is None:
                    sys.exit(7)

                node_data = self.connapp.services.nodes.get_node_details(matches[0])
                self.node = self.connapp.node(matches[0], **node_data, config=self.connapp.config)

                if self.node.protocol != "ssh":
                    printer.error(f"Node '{self.node.unique}' must be an SSH connection.")
                    sys.exit(2)

                settings = self.connapp.services.config_svc.get_settings()
                self.wireshark_path = settings.get("wireshark_path")

            def _start_local_listener(self, port, ws_proc=None):
                self.fake_connection = False
                self.listener_active = True
                self.listener_conn = None
                self.listener_connected = threading.Event()

                def listen():
                    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                        s.bind(("localhost", port))
                        s.listen(1)
                        printer.start(f"Listening on localhost:{port}")

                        conn, addr = s.accept()
                        self.listener_conn = conn
                        if not self.fake_connection:
                            printer.start(f"Connection from {addr}")
                        self.listener_connected.set()

                        try:
                            while self.listener_active:
                                data = conn.recv(4096)
                                if not data:
                                    break

                                if self.use_wireshark and ws_proc:
                                    try:
                                        ws_proc.stdin.write(data)
                                        ws_proc.stdin.flush()
                                    except BrokenPipeError:
                                        printer.info("Wireshark closed the pipe.")
                                        break
                                else:
                                    sys.stdout.buffer.write(data)
                                    sys.stdout.buffer.flush()
                        except Exception as e:
                            if isinstance(e, BrokenPipeError):
                                printer.info("Listener closed due to broken pipe.")
                            else:
                                printer.error(f"Listener error: {e}")
                        finally:
                            conn.close()
                            self.listener_conn = None

                self.listener_thread = threading.Thread(target=listen)
                self.listener_thread.daemon = True
                self.listener_thread.start()

            def _is_port_in_use(self, port):
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    return s.connect_ex(('localhost', port)) == 0

            def _find_free_port(self, start=20000, end=30000):
                for _ in range(10):
                    port = random.randint(start, end)
                    if not self._is_port_in_use(port):
                        return port
                printer.error("No free port found for SSH tunnel.")
                sys.exit(1)

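A standalone version of the port probe (same logic, but returning `None` instead of exiting when every attempt fails; names are illustrative):

```python
import random
import socket

def is_port_in_use(port):
    # connect_ex returns 0 when something is already listening on the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex(("localhost", port)) == 0

def find_free_port(start=20000, end=30000, attempts=10):
    # Bind-test a handful of random ports in the range; first free one wins.
    for _ in range(attempts):
        port = random.randint(start, end)
        if not is_port_in_use(port):
            return port
    return None

port = find_free_port()
```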
            def _monitor_wireshark(self, ws_proc):
                try:
                    while True:
                        try:
                            ws_proc.wait(timeout=1)
                            self.listener_active = False
                            if self.listener_conn:
                                printer.info("Wireshark exited, stopping listener.")
                                try:
                                    self.listener_conn.shutdown(socket.SHUT_RDWR)
                                    self.listener_conn.close()
                                except Exception:
                                    pass
                            break
                        except subprocess.TimeoutExpired:
                            if not self.listener_active:
                                break
                            time.sleep(0.2)
                except Exception as e:
                    printer.warning(f"Error in monitor_wireshark: {e}")

            def _detect_sudo_requirement(self):
                base_cmd = f"tcpdump -i {self.interface} -w - -U -c 1"
                if self.namespace:
                    base_cmd = f"ip netns exec {self.namespace} {base_cmd}"

                cmds = [base_cmd, f"sudo {base_cmd}"]

                printer.info("Verifying sudo requirement")
                for cmd in cmds:
                    try:
                        self.node.child.sendline(cmd)
                        start_time = time.time()
                        while time.time() - start_time < 3:
                            try:
                                index = self.node.child.expect([
                                    r'listening on',
                                    r'permission denied',
                                    r'cannot',
                                    r'No such file or directory',
                                ], timeout=1)

                                if index == 0:
                                    self.node.child.send("\x03")
                                    return "sudo" in cmd
                                else:
                                    break
                            except Exception:
                                continue

                        self.node.child.send("\x03")
                        time.sleep(0.5)
                        try:
                            self.node.child.read_nonblocking(size=1024, timeout=0.5)
                        except Exception:
                            pass

                    except Exception as e:
                        printer.warning(f"Error during sudo detection: {e}")
                        continue

                printer.error(f"Failed to run tcpdump on remote node '{self.node.unique}'")
                sys.exit(4)

            def _monitor_capture_output(self):
                try:
                    index = self.node.child.expect([
                        r'Broken pipe',
                        r'packet[s]? captured'
                    ], timeout=None)
                    if index == 0:
                        printer.error("Tcpdump failed: Broken pipe.")
                    else:
                        printer.success("Tcpdump finished capturing packets.")

                    self.listener_active = False
                except Exception:
                    pass

            def _sendline_until_connected(self, cmd, retries=5, interval=2):
                for attempt in range(1, retries + 1):
                    printer.info(f"Attempt {attempt}/{retries} to connect listener...")
                    self.node.child.sendline(cmd)

                    try:
                        index = self.node.child.expect([
                            r'listening on',
                            TIMEOUT,
                            r'permission',
                            r'not permitted',
                            r'invalid',
                            r'unrecognized',
                            r'Unable',
                            r'No such',
                            r'illegal',
                            r'not found',
                            r'non-ether',
                            r'syntax error'
                        ], timeout=5)

                        if index == 0:
                            self.monitor_end = threading.Thread(target=self._monitor_capture_output)
                            self.monitor_end.daemon = True
                            self.monitor_end.start()

                            if self.listener_connected.wait(timeout=interval):
                                printer.success("Listener successfully received a connection.")
                                return True
                            else:
                                printer.warning("No connection yet. Retrying...")

                        elif index == 1:
                            return f"tcpdump did not respond within the expected time.\nCommand used:\n{cmd}\n\u2192 Please verify the command syntax."
                        else:
                            before_last_line = self.node.child.before.decode().splitlines()[-1]
                            return f"Tcpdump error detected: {before_last_line}{self.node.child.after.decode()}{self.node.child.readline().decode()}".rstrip()

                    except Exception as e:
                        printer.warning(f"Unexpected error during tcpdump startup: {e}")
                        return False

                return False


            def _build_tcpdump_command(self):
                base = f"tcpdump -i {self.interface}"
                if self.use_wireshark:
                    base += " -w - -U"
                else:
                    base += " -l"

                if self.namespace:
                    base = f"ip netns exec {self.namespace} {base}"

                if self.requires_sudo:
                    base = f"sudo {base}"

                if self.tcpdump_args:
                    base += " " + " ".join(self.tcpdump_args)

                if self.tcpdump_filter:
                    base += " " + " ".join(self.tcpdump_filter)

                base += f" | nc localhost {self.local_port}"
                return base

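The command assembly can be checked in isolation by passing the node state as plain arguments (illustrative free-function variant of the method above):

```python
def build_tcpdump_command(interface, local_port, use_wireshark=False,
                          namespace=None, requires_sudo=False,
                          extra_args=None, tcpdump_filter=None):
    # Same wrapping order as the method: netns wraps tcpdump, sudo wraps both,
    # then extra args, the filter, and the pipe to the local tunnel endpoint.
    base = f"tcpdump -i {interface}"
    base += " -w - -U" if use_wireshark else " -l"
    if namespace:
        base = f"ip netns exec {namespace} {base}"
    if requires_sudo:
        base = f"sudo {base}"
    if extra_args:
        base += " " + " ".join(extra_args)
    if tcpdump_filter:
        base += " " + " ".join(tcpdump_filter)
    return base + f" | nc localhost {local_port}"

cmd = build_tcpdump_command("eth0", 22222, use_wireshark=True,
                            namespace="blue", requires_sudo=True,
                            tcpdump_filter=["not", "port", "22"])
```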
            def run(self):
                if self.use_wireshark:
                    if not self.wireshark_path:
                        printer.error("Wireshark path not set in config.\nUse '--set-wireshark-path /full/path/to/wireshark' to configure it.")
                        sys.exit(1)

                self.local_port = self._find_free_port()
                self.node.options += f" -o ExitOnForwardFailure=yes -R {self.local_port}:localhost:{self.local_port}"

                connection = self.node._connect()
                if connection is not True:
                    printer.error(f"Could not connect to {self.node.unique}\n{connection}")
                    sys.exit(1)

                self.requires_sudo = self._detect_sudo_requirement()
                tcpdump_cmd = self._build_tcpdump_command()

                ws_proc = None
                monitor_thread = None

                if self.use_wireshark:
                    printer.info(f"Live capture from {self.node.unique}:{self.interface}, launching Wireshark...")
                    try:
                        ws_proc = subprocess.Popen([self.wireshark_path, "-k", "-i", "-"], stdin=subprocess.PIPE, stderr=subprocess.PIPE)
                    except Exception as e:
                        printer.error(f"Failed to launch Wireshark: {e}\nMake sure the path is correct and Wireshark is installed.")
                        sys.exit(1)

                    monitor_thread = threading.Thread(target=self._monitor_wireshark, args=(ws_proc,))
                    monitor_thread.daemon = True
                    monitor_thread.start()
                else:
                    printer.info(f"Live text capture from {self.node.unique}:{self.interface}")
                    printer.info("Press Ctrl+C to stop.\n")

                try:
                    self._start_local_listener(self.local_port, ws_proc=ws_proc)
                    time.sleep(1)

                    result = self._sendline_until_connected(tcpdump_cmd, retries=5, interval=2)
                    if result is not True:
                        if isinstance(result, str):
                            printer.error(f"{result}")
                        else:
                            printer.error("Listener connection failed after all retries.")
                        self.listener_active = False
                        return

                    while self.listener_active:
                        time.sleep(0.5)

                except KeyboardInterrupt:
                    print("")
                    printer.warning("Capture interrupted by user.")
                    self.listener_active = False
                finally:
                    if self.listener_conn:
                        try:
                            self.listener_conn.shutdown(socket.SHUT_RDWR)
                            self.listener_conn.close()
                        except OSError:
                            pass
                    if hasattr(self.node, "child"):
                        self.node.child.close(force=True)

        return RemoteCapture

    def __init__(self, args, parser, connapp):
        from connpy import printer
        if "--" in args.unknown_args:
            args.unknown_args.remove("--")
        if args.set_wireshark_path:
            connapp.services.config_svc.update_setting("wireshark_path", args.set_wireshark_path)
            printer.success(f"Wireshark path updated to: {args.set_wireshark_path}")
            return

        if not args.node or not args.interface:
            parser.error("node and interface are required unless --set-wireshark-path is used")

        RemoteCapture = self.get_remote_capture_class()
        capture = RemoteCapture(
            connapp=connapp, node_name=args.node, interface=args.interface,
            namespace=args.namespace, use_wireshark=args.wireshark,
            tcpdump_filter=args.tcpdump_filter, tcpdump_args=args.unknown_args
        )
        capture.run()

def _connpy_tree(info=None):
    """Declarative completion tree for the capture plugin following completion.py patterns."""
    nodes = info.get("nodes", []) if info else []

    # State 2: Main capture loop (no setup flag here)
    capture_main = {"__exclude_used__": True}

    # Inline logic to suggest nodes only if no positional has been provided yet
    get_nodes = lambda w: nodes if not [x for x in w[:-1] if not x.startswith("-") and x != "capture"] else []
    capture_main["__extra__"] = get_nodes
    capture_main["*"] = capture_main

    for f in ["--wireshark", "-w", "--help", "-h"]:
        capture_main[f] = capture_main
    for f in ["--namespace", "--filter", "-f"]:
        capture_main[f] = {"*": capture_main}

    # State 1: Start (highly discoverable configuration)
    capture_start = {
        "__exclude_used__": True,
        "__extra__": get_nodes,
        "--set-wireshark-path": {"__extra__": lambda w: get_cwd(w, "--set-wireshark-path")}
    }

    # Transitions from start to main
    for f in ["--wireshark", "-w", "--help", "-h"]:
        capture_start[f] = capture_main
    for f in ["--namespace", "--filter", "-f"]:
        capture_start[f] = {"*": capture_main}

    capture_start["*"] = capture_main

    return capture_start
Executable (+378)
@@ -0,0 +1,378 @@
#!/usr/bin/python3
|
||||
import argparse
|
||||
import os
|
||||
import time
|
||||
import zipfile
|
||||
import tempfile
|
||||
import io
|
||||
import yaml
|
||||
import threading
|
||||
from google.oauth2.credentials import Credentials
|
||||
from google.auth.transport.requests import Request
|
||||
from googleapiclient.discovery import build
|
||||
from google.auth.exceptions import RefreshError
|
||||
from google_auth_oauthlib.flow import InstalledAppFlow
|
||||
from googleapiclient.http import MediaFileUpload,MediaIoBaseDownload
|
||||
from googleapiclient.errors import HttpError
|
||||
from datetime import datetime
|
||||
|
||||
class sync:
|
||||
|
||||
def __init__(self, connapp):
|
||||
self.scopes = ['https://www.googleapis.com/auth/drive.appdata']
|
||||
self.token_file = f"{connapp.config.defaultdir}/gtoken.json"
|
||||
self.file = connapp.config.file
|
||||
self.key = connapp.config.key
|
||||
self.google_client = f"{os.path.dirname(os.path.abspath(__file__))}/sync_client"
|
||||
self.connapp = connapp
|
||||
try:
|
||||
self.sync = self.connapp.config.config["sync"]
|
||||
except:
|
||||
self.sync = False
|
||||
|
||||
    def login(self):
        creds = None
        # The token file stores the user's access and refresh tokens.
        if os.path.exists(self.token_file):
            creds = Credentials.from_authorized_user_file(self.token_file, self.scopes)

        try:
            # If there are no valid credentials available, let the user log in.
            if not creds or not creds.valid:
                if creds and creds.expired and creds.refresh_token:
                    creds.refresh(Request())
                else:
                    flow = InstalledAppFlow.from_client_secrets_file(
                        self.google_client, self.scopes)
                    creds = flow.run_local_server(port=0, access_type='offline')

                # Save the credentials for the next run
                with open(self.token_file, 'w') as token:
                    token.write(creds.to_json())

            print("Logged in successfully.")

        except RefreshError:
            # If the refresh fails, delete the invalid token file and start a new login flow
            if os.path.exists(self.token_file):
                os.remove(self.token_file)
                print("Existing token was invalid and has been removed. Please log in again.")
            flow = InstalledAppFlow.from_client_secrets_file(
                self.google_client, self.scopes)
            creds = flow.run_local_server(port=0, access_type='offline')
            with open(self.token_file, 'w') as token:
                token.write(creds.to_json())
            print("Logged in successfully after re-authentication.")

    def logout(self):
        if os.path.exists(self.token_file):
            os.remove(self.token_file)
            print("Logged out successfully.")
        else:
            print("No credentials file found. Already logged out.")

    def get_credentials(self):
        # Load credentials from the token file
        if os.path.exists(self.token_file):
            creds = Credentials.from_authorized_user_file(self.token_file, self.scopes)
        else:
            print("Credentials file not found.")
            return 0

        # If there are no valid credentials available, ask the user to log in again
        if not creds or not creds.valid:
            if creds and creds.expired and creds.refresh_token:
                try:
                    creds.refresh(Request())
                except RefreshError:
                    print("Could not refresh access token. Please log in again.")
                    return 0
            else:
                print("Credentials are missing or invalid. Please log in.")
                return 0
        return creds

    def check_login_status(self):
        # Check if the credentials file exists
        if os.path.exists(self.token_file):
            # Load credentials from the token file
            creds = Credentials.from_authorized_user_file(self.token_file)

            # If the credentials are expired, try to refresh them
            if creds and creds.expired and creds.refresh_token:
                try:
                    creds.refresh(Request())
                except RefreshError:
                    pass

            # Check if the credentials are valid after the refresh
            if creds.valid:
                return True
            else:
                return "Invalid"
        else:
            return False

    def status(self):
        print(f"Login: {self.check_login_status()}")
        print(f"Sync: {self.sync}")

    def get_appdata_files(self):
        creds = self.get_credentials()
        if not creds:
            return 0

        try:
            # Create the Google Drive service
            service = build("drive", "v3", credentials=creds)

            # List files in the appDataFolder
            response = (
                service.files()
                .list(
                    spaces="appDataFolder",
                    fields="files(id, name, appProperties)",
                    pageSize=10,
                )
                .execute()
            )

            files_info = []
            for file in response.get("files", []):
                # Extract file information
                file_id = file.get("id")
                file_name = file.get("name")
                timestamp = file.get("appProperties", {}).get("timestamp")
                human_readable_date = file.get("appProperties", {}).get("date")
                files_info.append({"name": file_name, "id": file_id, "date": human_readable_date, "timestamp": timestamp})

            return files_info

        except HttpError as error:
            print(f"An error occurred: {error}")
            return 0

    def dump_appdata_files_yaml(self):
        files_info = self.get_appdata_files()
        if not files_info:
            print("Failed to retrieve files or no files found.")
            return
        # Pretty-print as YAML
        yaml_output = yaml.dump(files_info, sort_keys=False, default_flow_style=False)
        print(yaml_output)

    def backup_file_to_drive(self, file_path, timestamp):
        creds = self.get_credentials()
        if not creds:
            return 1

        # Create the Google Drive service
        service = build('drive', 'v3', credentials=creds)

        # Convert the millisecond timestamp to a human-readable date
        human_readable_date = datetime.fromtimestamp(timestamp / 1000).strftime('%Y-%m-%d %H:%M:%S')

        # Upload the file to Google Drive with timestamp metadata
        file_metadata = {
            'name': os.path.basename(file_path),
            'parents': ["appDataFolder"],
            'appProperties': {
                'timestamp': str(timestamp),
                'date': human_readable_date  # Human-readable date attribute
            }
        }
        media = MediaFileUpload(file_path)

        try:
            service.files().create(body=file_metadata, media_body=media, fields='id').execute()
            return 0
        except Exception as e:
            return f"An error occurred: {e}"

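Backups are tagged with a millisecond epoch timestamp plus a derived human-readable date, as backup_file_to_drive does above. A minimal, self-contained sketch of that conversion (the values are illustrative):

```python
import time
from datetime import datetime

# Millisecond epoch timestamp, as stored in appProperties['timestamp']
timestamp = int(time.time() * 1000)

# Derive the human-readable form stored in appProperties['date']
human_readable = datetime.fromtimestamp(timestamp / 1000).strftime('%Y-%m-%d %H:%M:%S')

# Stored as a string, but it round-trips back to the same integer
assert int(str(timestamp)) == timestamp
print(human_readable)
```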
    def delete_file_by_id(self, file_id):
        creds = self.get_credentials()
        if not creds:
            return 1

        try:
            # Create the Google Drive service
            service = build("drive", "v3", credentials=creds)

            # Delete the file
            service.files().delete(fileId=file_id).execute()
            return 0
        except Exception as e:
            return f"An error occurred: {e}"

    def compress_specific_files(self, zip_path):
        with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
            zipf.write(self.file, "config.json")
            zipf.write(self.key, ".osk")

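The backup archive is a two-entry zip whose members are renamed on write ("config.json", ".osk") and extracted by member name on restore, as in compress_specific_files and decompress_zip. The same pattern in isolation, using throwaway temp files:

```python
import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tmp:
    # A stand-in for the config file that gets archived as "config.json"
    src = os.path.join(tmp, "settings")
    with open(src, "w") as f:
        f.write('{"sync": true}')

    # Write the member under a fixed archive name, independent of the source path
    zip_path = os.path.join(tmp, "backup.zip")
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zipf:
        zipf.write(src, "config.json")

    # Extract by member name into a destination directory
    dest = os.path.join(tmp, "restore")
    with zipfile.ZipFile(zip_path, "r") as zipf:
        zipf.extract("config.json", dest)

    with open(os.path.join(dest, "config.json")) as f:
        restored = f.read()

print(restored)  # {"sync": true}
```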
    def compress_and_upload(self):
        timestamp = int(time.time() * 1000)
        # Create a temporary directory for storing the zip file
        with tempfile.TemporaryDirectory() as tmp_dir:
            # Compress the config and key files to a zip file in the temporary directory
            zip_path = os.path.join(tmp_dir, f"connpy-backup-{timestamp}.zip")
            self.compress_specific_files(zip_path)

            # Get the files in the app data folder
            app_data_files = self.get_appdata_files()
            if app_data_files == 0:
                return 1

            # If there are 10 or more files, remove the oldest one based on timestamp
            if len(app_data_files) >= 10:
                oldest_file = min(app_data_files, key=lambda x: x['timestamp'])
                delete_old = self.delete_file_by_id(oldest_file['id'])
                if delete_old:
                    print(delete_old)
                    return 1

            # Upload the new file
            upload_new = self.backup_file_to_drive(zip_path, timestamp)
            if upload_new:
                print(upload_new)
                return 1

            print("Backup uploaded to Google successfully.")
            return 0

    def decompress_zip(self, zip_path):
        try:
            with zipfile.ZipFile(zip_path, 'r') as zipf:
                # Extract each member to its destination directory
                zipf.extract("config.json", os.path.dirname(self.file))
                zipf.extract(".osk", os.path.dirname(self.key))
            return 0
        except Exception as e:
            print(f"An error occurred: {e}")
            return 1

    def download_file_by_id(self, file_id, destination_path):
        creds = self.get_credentials()
        if not creds:
            return 1

        try:
            # Create the Google Drive service
            service = build('drive', 'v3', credentials=creds)

            # Download the file in chunks
            request = service.files().get_media(fileId=file_id)
            fh = io.FileIO(destination_path, mode='wb')
            downloader = MediaIoBaseDownload(fh, request)
            done = False
            while not done:
                _, done = downloader.next_chunk()

            return 0
        except Exception as e:
            return f"An error occurred: {e}"

    def restore_last_config(self, file_id=None):
        # Get the files in the app data folder
        app_data_files = self.get_appdata_files()
        if not app_data_files:
            print("No files found in app data folder.")
            return 1

        # Check if a specific file_id was provided and if it exists in the list
        if file_id:
            selected_file = next((f for f in app_data_files if f['id'] == file_id), None)
            if not selected_file:
                print(f"No file found with ID: {file_id}")
                return 1
        else:
            # Find the latest file based on timestamp
            selected_file = max(app_data_files, key=lambda x: x['timestamp'])

        # Download the selected file to a temporary location
        temp_download_path = os.path.join(tempfile.gettempdir(), 'connpy-backup.zip')
        if self.download_file_by_id(selected_file['id'], temp_download_path):
            return 1

        # Unzip the downloaded file to the destination folder
        if self.decompress_zip(temp_download_path):
            print("Failed to decompress the file.")
            return 1

        print(f"Backup from Google Drive restored successfully: {selected_file['name']}")
        return 0

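restore_last_config and compress_and_upload pick the newest or oldest backup by comparing the timestamp stored in appProperties. The timestamps are kept as strings, but equal-length millisecond digit strings compare the same way as their integer values, so max/min over the string field works. A small illustration with made-up entries:

```python
backups = [
    {"name": "connpy-backup-1700000000000.zip", "timestamp": "1700000000000"},
    {"name": "connpy-backup-1700000300000.zip", "timestamp": "1700000300000"},
    {"name": "connpy-backup-1699999700000.zip", "timestamp": "1699999700000"},
]

# Lexicographic comparison of equal-length digit strings matches numeric
# order, so no int() cast is strictly required here.
latest = max(backups, key=lambda x: x["timestamp"])
oldest = min(backups, key=lambda x: x["timestamp"])
print(latest["name"])  # connpy-backup-1700000300000.zip
print(oldest["name"])  # connpy-backup-1699999700000.zip
```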
    def config_listener_post(self, args, kwargs):
        if self.sync:
            if self.check_login_status() == True:
                if not kwargs["result"]:
                    self.compress_and_upload()
            else:
                print("Sync cannot be performed. Please check your login status.")
        return kwargs["result"]

    def config_listener_pre(self, *args, **kwargs):
        try:
            self.sync = self.connapp.config.config["sync"]
        except KeyError:
            self.sync = False
        return args, kwargs

    def start_post_thread(self, *args, **kwargs):
        post_thread = threading.Thread(target=self.config_listener_post, args=(args, kwargs))
        post_thread.start()

class Preload:
    def __init__(self, connapp):
        syncapp = sync(connapp)
        connapp.config._saveconfig.register_post_hook(syncapp.start_post_thread)
        connapp.config._saveconfig.register_pre_hook(syncapp.config_listener_pre)

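Preload wires the sync plugin into config saves through pre/post hooks on connapp.config._saveconfig. A minimal, hypothetical sketch of such a hook wrapper (this MethodHook is an assumption for illustration, not connpy's actual implementation):

```python
class MethodHook:
    """Wrap a callable so registered pre-hooks can adjust its arguments
    and post-hooks can observe its result."""

    def __init__(self, func):
        self.func = func
        self.pre_hooks = []
        self.post_hooks = []

    def register_pre_hook(self, hook):
        self.pre_hooks.append(hook)

    def register_post_hook(self, hook):
        self.post_hooks.append(hook)

    def __call__(self, *args, **kwargs):
        # Pre-hooks may rewrite the arguments before the real call
        for hook in self.pre_hooks:
            args, kwargs = hook(*args, **kwargs)
        result = self.func(*args, **kwargs)
        # Post-hooks run after the call with the result available
        for hook in self.post_hooks:
            hook(result=result)
        return result


calls = []
saveconfig = MethodHook(lambda cfg: calls.append(cfg) or 0)
saveconfig.register_pre_hook(lambda *a, **kw: (a, kw))
saveconfig.register_post_hook(lambda result: calls.append(("post", result)))
ret = saveconfig({"sync": True})
print(ret)  # 0
```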
class Parser:
    def __init__(self):
        self.parser = argparse.ArgumentParser(description="Sync config with Google")
        self.description = "Sync config with Google"
        subparsers = self.parser.add_subparsers(title="Commands", dest='command', metavar="")
        login_parser = subparsers.add_parser("login", help="Login to Google to enable synchronization")
        logout_parser = subparsers.add_parser("logout", help="Logout from Google")
        start_parser = subparsers.add_parser("start", help="Start synchronizing with Google")
        stop_parser = subparsers.add_parser("stop", help="Stop any ongoing synchronization")
        restore_parser = subparsers.add_parser("restore", help="Restore data from Google")
        restore_parser.add_argument("--id", type=str, help="Optional file ID to restore a specific backup", required=False)
        backup_parser = subparsers.add_parser("once", help="Backup current configuration to Google once")
        status_parser = subparsers.add_parser("status", help="Check the current status of synchronization")
        list_parser = subparsers.add_parser("list", help="List all backups stored on Google")

class Entrypoint:
    def __init__(self, args, parser, connapp):
        syncapp = sync(connapp)
        if args.command == 'login':
            syncapp.login()
        elif args.command == "status":
            syncapp.status()
        elif args.command == "start":
            connapp._change_settings("sync", True)
        elif args.command == "stop":
            connapp._change_settings("sync", False)
        elif args.command == "list":
            syncapp.dump_appdata_files_yaml()
        elif args.command == "once":
            syncapp.compress_and_upload()
        elif args.command == "restore":
            syncapp.restore_last_config(args.id)
        elif args.command == "logout":
            syncapp.logout()


def _connpy_completion(wordsnumber, words, info=None):
    if wordsnumber == 3:
        result = ["--help", "login", "status", "start", "stop", "list", "once", "restore", "logout"]
    #NETMASK_completion
    if wordsnumber == 4 and words[1] == "restore":
        result = ["--help", "--id"]
    return result
@@ -1,8 +0,0 @@
import sys
import os

# gRPC generated files use absolute imports that assume their directory is in sys.path.
# We add this directory to sys.path to allow imports like 'import connpy_pb2' to succeed.
current_dir = os.path.dirname(os.path.abspath(__file__))
if current_dir not in sys.path:
    sys.path.insert(0, current_dir)
@@ -1,25 +0,0 @@
syntax = "proto3";
package connpy_remote;

message IdRequest {
    string id = 1;
}

message StringResponse {
    string value = 1;
}

message PluginInvokeRequest {
    string name = 1;
    string args_json = 2;
}

message OutputChunk {
    string text = 1;
    bool is_error = 2;
}

service RemotePluginService {
    rpc get_plugin_source(IdRequest) returns (StringResponse);
    rpc invoke_plugin(PluginInvokeRequest) returns (stream OutputChunk);
}
@@ -1,44 +0,0 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# NO CHECKED-IN PROTOBUF GENCODE
# source: remote_plugin.proto
# Protobuf Python Version: 6.31.1
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import runtime_version as _runtime_version
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
_runtime_version.ValidateProtobufRuntimeVersion(
    _runtime_version.Domain.PUBLIC,
    6,
    31,
    1,
    '',
    'remote_plugin.proto'
)
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x13remote_plugin.proto\x12\rconnpy_remote\"\x17\n\tIdRequest\x12\n\n\x02id\x18\x01 \x01(\t\"\x1f\n\x0eStringResponse\x12\r\n\x05value\x18\x01 \x01(\t\"6\n\x13PluginInvokeRequest\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x11\n\targs_json\x18\x02 \x01(\t\"-\n\x0bOutputChunk\x12\x0c\n\x04text\x18\x01 \x01(\t\x12\x10\n\x08is_error\x18\x02 \x01(\x08\x32\xb6\x01\n\x13RemotePluginService\x12L\n\x11get_plugin_source\x12\x18.connpy_remote.IdRequest\x1a\x1d.connpy_remote.StringResponse\x12Q\n\rinvoke_plugin\x12\".connpy_remote.PluginInvokeRequest\x1a\x1a.connpy_remote.OutputChunk0\x01\x62\x06proto3')

_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'remote_plugin_pb2', _globals)
if not _descriptor._USE_C_DESCRIPTORS:
    DESCRIPTOR._loaded_options = None
    _globals['_IDREQUEST']._serialized_start=38
    _globals['_IDREQUEST']._serialized_end=61
    _globals['_STRINGRESPONSE']._serialized_start=63
    _globals['_STRINGRESPONSE']._serialized_end=94
    _globals['_PLUGININVOKEREQUEST']._serialized_start=96
    _globals['_PLUGININVOKEREQUEST']._serialized_end=150
    _globals['_OUTPUTCHUNK']._serialized_start=152
    _globals['_OUTPUTCHUNK']._serialized_end=197
    _globals['_REMOTEPLUGINSERVICE']._serialized_start=200
    _globals['_REMOTEPLUGINSERVICE']._serialized_end=382
# @@protoc_insertion_point(module_scope)
@@ -1,140 +0,0 @@
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
import warnings

from . import remote_plugin_pb2 as remote__plugin__pb2

GRPC_GENERATED_VERSION = '1.80.0'
GRPC_VERSION = grpc.__version__
_version_not_supported = False

try:
    from grpc._utilities import first_version_is_lower
    _version_not_supported = first_version_is_lower(GRPC_VERSION, GRPC_GENERATED_VERSION)
except ImportError:
    _version_not_supported = True

if _version_not_supported:
    raise RuntimeError(
        f'The grpc package installed is at version {GRPC_VERSION},'
        + ' but the generated code in remote_plugin_pb2_grpc.py depends on'
        + f' grpcio>={GRPC_GENERATED_VERSION}.'
        + f' Please upgrade your grpc module to grpcio>={GRPC_GENERATED_VERSION}'
        + f' or downgrade your generated code using grpcio-tools<={GRPC_VERSION}.'
    )


class RemotePluginServiceStub(object):
    """Missing associated documentation comment in .proto file."""

    def __init__(self, channel):
        """Constructor.

        Args:
            channel: A grpc.Channel.
        """
        self.get_plugin_source = channel.unary_unary(
                '/connpy_remote.RemotePluginService/get_plugin_source',
                request_serializer=remote__plugin__pb2.IdRequest.SerializeToString,
                response_deserializer=remote__plugin__pb2.StringResponse.FromString,
                _registered_method=True)
        self.invoke_plugin = channel.unary_stream(
                '/connpy_remote.RemotePluginService/invoke_plugin',
                request_serializer=remote__plugin__pb2.PluginInvokeRequest.SerializeToString,
                response_deserializer=remote__plugin__pb2.OutputChunk.FromString,
                _registered_method=True)


class RemotePluginServiceServicer(object):
    """Missing associated documentation comment in .proto file."""

    def get_plugin_source(self, request, context):
        """Missing associated documentation comment in .proto file."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

    def invoke_plugin(self, request, context):
        """Missing associated documentation comment in .proto file."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')

def add_RemotePluginServiceServicer_to_server(servicer, server):
    rpc_method_handlers = {
            'get_plugin_source': grpc.unary_unary_rpc_method_handler(
                    servicer.get_plugin_source,
                    request_deserializer=remote__plugin__pb2.IdRequest.FromString,
                    response_serializer=remote__plugin__pb2.StringResponse.SerializeToString,
            ),
            'invoke_plugin': grpc.unary_stream_rpc_method_handler(
                    servicer.invoke_plugin,
                    request_deserializer=remote__plugin__pb2.PluginInvokeRequest.FromString,
                    response_serializer=remote__plugin__pb2.OutputChunk.SerializeToString,
            ),
    }
    generic_handler = grpc.method_handlers_generic_handler(
            'connpy_remote.RemotePluginService', rpc_method_handlers)
    server.add_generic_rpc_handlers((generic_handler,))
    server.add_registered_method_handlers('connpy_remote.RemotePluginService', rpc_method_handlers)


# This class is part of an EXPERIMENTAL API.
class RemotePluginService(object):
    """Missing associated documentation comment in .proto file."""

    @staticmethod
    def get_plugin_source(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(
            request,
            target,
            '/connpy_remote.RemotePluginService/get_plugin_source',
            remote__plugin__pb2.IdRequest.SerializeToString,
            remote__plugin__pb2.StringResponse.FromString,
            options,
            channel_credentials,
            insecure,
            call_credentials,
            compression,
            wait_for_ready,
            timeout,
            metadata,
            _registered_method=True)

    @staticmethod
    def invoke_plugin(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_stream(
            request,
            target,
            '/connpy_remote.RemotePluginService/invoke_plugin',
            remote__plugin__pb2.PluginInvokeRequest.SerializeToString,
            remote__plugin__pb2.OutputChunk.FromString,
            options,
            channel_credentials,
            insecure,
            call_credentials,
            compression,
            wait_for_ready,
            timeout,
            metadata,
            _registered_method=True)
@@ -1,984 +0,0 @@
import grpc
import queue
import threading
from functools import wraps
from google.protobuf.empty_pb2 import Empty

from . import connpy_pb2, connpy_pb2_grpc, remote_plugin_pb2, remote_plugin_pb2_grpc
from .utils import to_value, from_value, to_struct, from_struct
from ..services.exceptions import ConnpyError
from ..hooks import MethodHook
from .. import printer
from ..cli.terminal_ui import CopilotInterface
from ..utils import log_cleaner


def handle_errors(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except grpc.RpcError as e:
            # Re-raise gRPC errors as native ConnpyError to keep CLI handlers agnostic
            details = e.details()

            # Identify the host if available on the instance
            instance = args[0] if args else None
            host = getattr(instance, "remote_host", "remote host")

            # Make common gRPC errors more readable
            if "failed to connect to all addresses" in details:
                simplified = f"Failed to connect to remote host at {host} (Connection refused)"
            elif "Method not found" in details:
                simplified = f"Remote server at {host} is using an incompatible version"
            elif "Deadline Exceeded" in details:
                simplified = f"Request to {host} timed out"
            else:
                simplified = details

            raise ConnpyError(simplified)
    return wrapper

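handle_errors follows the standard wrap-and-translate decorator pattern: catch a transport-level exception, rewrite its message, and re-raise a domain error. A generic, self-contained sketch of the same pattern (DomainError and translate_errors are illustrative names, not connpy's):

```python
from functools import wraps


class DomainError(Exception):
    """Illustrative stand-in for an application-level error type."""


def translate_errors(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ConnectionError as e:
            # Rewrite the low-level message into something user-facing
            raise DomainError(f"could not reach backend: {e}") from e
    return wrapper


@translate_errors
def fetch():
    raise ConnectionError("connection refused")


try:
    fetch()
except DomainError as e:
    print(e)  # could not reach backend: connection refused
```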
class NodeStub:
    def __init__(self, channel, remote_host, config=None):
        self.stub = connpy_pb2_grpc.NodeServiceStub(channel)
        self.remote_host = remote_host
        self.config = config

    def _handle_remote_copilot(self, res, request_queue, response_queue, client_buffer_bytes, cmd_byte_positions, pause_generator, resume_generator, old_tty):
        import json, asyncio, termios, sys, tty, queue
        from ..core import copilot_terminal_mode
        from . import connpy_pb2

        pause_generator()

        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
        interface = CopilotInterface(
            self.config,
            history=getattr(self, 'copilot_history', None),
            session_state=getattr(self, 'copilot_state', None)
        )
        self.copilot_history = interface.history
        self.copilot_state = interface.session_state

        node_info = json.loads(res.copilot_node_info_json) if res.copilot_node_info_json else {}

        async def on_ai_call_remote(active_buffer, question, chunk_callback, merged_node_info):
            # Send the request to the server
            request_queue.put(connpy_pb2.InteractRequest(
                copilot_question=question,
                copilot_context_buffer=active_buffer,
                copilot_node_info_json=json.dumps(merged_node_info)
            ))
            # Wait for chunks from the server
            while True:
                try:
                    chunk_res = response_queue.get(timeout=0.1)
                    if chunk_res is None:
                        return {"error": "Server disconnected"}
                    if chunk_res.copilot_stream_chunk:
                        chunk_callback(chunk_res.copilot_stream_chunk)
                    elif chunk_res.copilot_response_json:
                        return json.loads(chunk_res.copilot_response_json)
                except queue.Empty:
                    await asyncio.sleep(0.05)

        # Wrap the interactive session in an async loop
        async def run_remote_copilot():
            while True:
                action, commands, custom_cmd = await interface.run_session(
                    raw_bytes=bytes(client_buffer_bytes),
                    cmd_byte_positions=cmd_byte_positions,
                    node_info=node_info,
                    on_ai_call=on_ai_call_remote
                )

                if action == "continue":
                    # Send a continue signal to the server to loop back for another question
                    request_queue.put(connpy_pb2.InteractRequest(copilot_action="continue"))
                    continue

                return action, commands, custom_cmd

        with copilot_terminal_mode():
            action, commands, custom_cmd = asyncio.run(run_remote_copilot())

        # Prepare the final action for the server
        action_sent = "cancel"
        if action == "send_all" and commands:
            # In remote mode, send the selected commands as a custom block
            # so the server executes exactly what the user picked (e.g., selection '1')
            action_sent = f"custom:{chr(10).join(commands)}"
        elif action == "custom" and custom_cmd:
            action_sent = f"custom:{chr(10).join(custom_cmd)}"
        request_queue.put(connpy_pb2.InteractRequest(copilot_action=action_sent))
        resume_generator()
        tty.setraw(sys.stdin.fileno())

    @handle_errors
    def connect_node(self, unique_id, sftp=False, debug=False, logger=None):
        import sys
        import select
        import tty
        import termios
        import queue
        import os
        import threading

        request_queue = queue.Queue()
        client_buffer_bytes = bytearray()
        cmd_byte_positions = [(0, None)]
        pause_stdin = [False]
        wake_r, wake_w = os.pipe()

        def pause_generator():
            pause_stdin[0] = True
            os.write(wake_w, b'\x00')

        def resume_generator():
            pause_stdin[0] = False

        def request_generator():
            cols, rows = 80, 24
            try:
                size = os.get_terminal_size()
                cols, rows = size.columns, size.lines
            except OSError:
                pass

            yield connpy_pb2.InteractRequest(
                id=unique_id, sftp=sftp, debug=debug, cols=cols, rows=rows
            )

            while True:
                try:
                    while True:
                        req = request_queue.get_nowait()
                        if req is None:
                            return
                        yield req
                except queue.Empty:
                    pass

                if pause_stdin[0]:
                    import time
                    time.sleep(0.05)
                    continue

                r, _, _ = select.select([sys.stdin.fileno(), wake_r], [], [], 0.05)
                if wake_r in r:
                    os.read(wake_r, 1)
                    continue
                if sys.stdin.fileno() in r and not pause_stdin[0]:
                    try:
                        data = os.read(sys.stdin.fileno(), 1024)
                        if not data:
                            break
                        if b'\r' in data or b'\n' in data:
                            cmd_byte_positions.append((len(client_buffer_bytes), None))
                        yield connpy_pb2.InteractRequest(stdin_data=data)
                    except OSError:
                        break

        # Fetch node details for the connection message
        try:
            node_details = self.get_node_details(unique_id)
            host = node_details.get("host", "unknown")
            port = str(node_details.get("port", ""))
            protocol = "sftp" if sftp else node_details.get("protocol", "ssh")
            port_str = f":{port}" if port and protocol not in ["ssm", "kubectl", "docker"] else ""
            conn_msg = f"Connected to {unique_id} at {host}{port_str} via: {protocol}"
        except Exception:
            conn_msg = f"Connected to {unique_id}"

        old_tty = termios.tcgetattr(sys.stdin)
        try:
            import time
            tty.setraw(sys.stdin.fileno())
            response_iterator = self.stub.interact_node(request_generator())

            import queue
            response_queue = queue.Queue()

            def response_consumer():
                try:
                    for r in response_iterator:
                        response_queue.put(r)
                except Exception:
                    pass
                response_queue.put(None)

            t_consumer = threading.Thread(target=response_consumer, daemon=True)
            t_consumer.start()

            # First phase: wait for the connection status and print early data
            try:
                while True:
                    res = response_queue.get()
                    if res is None:
                        return
                    if res.stdout_data:
                        data = res.stdout_data
                        if debug:
                            data = data.replace(b'\x1b[H\x1b[2J', b'').replace(b'\x1bc', b'').replace(b'\x1b[3J', b'')
                        os.write(sys.stdout.fileno(), data)

                    if res.success:
                        # Connection established on the server, show the success message
                        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
                        printer.success(conn_msg)
                        pause_stdin[0] = False
                        tty.setraw(sys.stdin.fileno())
                        break

                    if res.error_message:
                        # Connection failed on the server
                        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
                        printer.error(f"Connection failed: {res.error_message}")
                        return
            except queue.Empty:
                return

            # Second phase: stream the active session.
            # The clear-screen filter is only applied before success (phase 1);
            # once the user has a prompt, Ctrl+L must work normally.
            while True:
                res = response_queue.get()
                if res is None:
                    break
                if res.copilot_prompt:
                    self._handle_remote_copilot(
                        res, request_queue, response_queue,
                        client_buffer_bytes, cmd_byte_positions,
                        pause_generator, resume_generator, old_tty
                    )
                    continue

                if res.copilot_injected_command:
                    cmd_byte_positions.append((len(client_buffer_bytes), res.copilot_injected_command))

                if res.stdout_data:
                    os.write(sys.stdout.fileno(), res.stdout_data)
                    client_buffer_bytes.extend(res.stdout_data)
        finally:
            termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
            os.close(wake_r)
            os.close(wake_w)

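connect_node drains the gRPC response stream on a daemon thread that pushes each item into a Queue and terminates it with a None sentinel, so the main loop can block on queue.get() instead of the iterator. The same pattern in isolation, fed by a plain list iterator instead of a gRPC stream:

```python
import queue
import threading


def consume(iterator, out_queue):
    # Push every item, then a None sentinel so the reader knows the stream ended
    try:
        for item in iterator:
            out_queue.put(item)
    except Exception:
        pass
    out_queue.put(None)


q = queue.Queue()
t = threading.Thread(target=consume, args=(iter(["a", "b", "c"]), q), daemon=True)
t.start()

received = []
while True:
    item = q.get()
    if item is None:
        break
    received.append(item)

print(received)  # ['a', 'b', 'c']
```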
    @handle_errors
    def connect_dynamic(self, connection_params, debug=False):
        import sys
        import select
        import tty
        import termios
        import queue
        import os
        import json

        params_json = json.dumps(connection_params)
        request_queue = queue.Queue()
        client_buffer_bytes = bytearray()
        cmd_byte_positions = [(0, None)]
        pause_stdin = [False]
        wake_r, wake_w = os.pipe()

        def pause_generator():
            pause_stdin[0] = True
            os.write(wake_w, b'\x00')

        def resume_generator():
            pause_stdin[0] = False

        def request_generator():
            cols, rows = 80, 24
            try:
                size = os.get_terminal_size()
                cols, rows = size.columns, size.lines
            except OSError:
                pass

            yield connpy_pb2.InteractRequest(
                id="dynamic", debug=debug, cols=cols, rows=rows,
                connection_params_json=params_json
            )

            while True:
                try:
                    while True:
                        req = request_queue.get_nowait()
                        if req is None:
                            return
                        yield req
                except queue.Empty:
                    pass

                if pause_stdin[0]:
                    import time
                    time.sleep(0.05)
                    continue

                r, _, _ = select.select([sys.stdin.fileno(), wake_r], [], [], 0.05)
                if wake_r in r:
                    os.read(wake_r, 1)
                    continue
                if sys.stdin.fileno() in r and not pause_stdin[0]:
                    try:
                        data = os.read(sys.stdin.fileno(), 1024)
                        if not data:
                            break
                        if b'\r' in data or b'\n' in data:
                            cmd_byte_positions.append((len(client_buffer_bytes), None))
                        yield connpy_pb2.InteractRequest(stdin_data=data)
                    except OSError:
                        break

        # Prepare the connection message
        try:
            node_name = connection_params.get("name", "dynamic@remote")
            host = connection_params.get("host", "dynamic")
            port = str(connection_params.get("port", ""))
            protocol = connection_params.get("protocol", "ssh")
            port_str = f":{port}" if port and protocol not in ["ssm", "kubectl", "docker"] else ""
            conn_msg = f"Connected to {node_name} at {host}{port_str} via: {protocol}"
        except Exception:
            node_name = connection_params.get("name", "dynamic@remote") if isinstance(connection_params, dict) else "dynamic@remote"
            conn_msg = f"Connected to {node_name}"

old_tty = termios.tcgetattr(sys.stdin)
|
||||
try:
|
||||
import time
|
||||
tty.setraw(sys.stdin.fileno())
|
||||
response_iterator = self.stub.interact_node(request_generator())
|
||||
|
||||
import queue
|
||||
response_queue = queue.Queue()
|
||||
|
||||
def response_consumer():
|
||||
try:
|
||||
for r in response_iterator:
|
||||
response_queue.put(r)
|
||||
except Exception:
|
||||
pass
|
||||
response_queue.put(None)
|
||||
|
||||
t_consumer = threading.Thread(target=response_consumer, daemon=True)
|
||||
t_consumer.start()
|
||||
|
||||
# First phase: Wait for connection status, print early data
|
||||
try:
|
||||
while True:
|
||||
res = response_queue.get()
|
||||
if res is None:
|
||||
return
|
||||
if res.stdout_data:
|
||||
data = res.stdout_data
|
||||
if debug:
|
||||
data = data.replace(b'\x1b[H\x1b[2J', b'').replace(b'\x1bc', b'').replace(b'\x1b[3J', b'')
|
||||
os.write(sys.stdout.fileno(), data)
|
||||
|
||||
if res.success:
|
||||
# Connection established on server, show success message
|
||||
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
|
||||
printer.success(conn_msg)
|
||||
pause_stdin[0] = False
|
||||
tty.setraw(sys.stdin.fileno())
|
||||
break
|
||||
|
||||
if res.error_message:
|
||||
# Connection failed on server
|
||||
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
|
||||
printer.error(f"Connection failed: {res.error_message}")
|
||||
return
|
||||
except queue.Empty:
|
||||
return
|
||||
|
||||
# Second phase: Stream active session
|
||||
while True:
|
||||
res = response_queue.get()
|
||||
if res is None:
|
||||
break
|
||||
if res.copilot_prompt:
|
||||
self._handle_remote_copilot(
|
||||
res, request_queue, response_queue,
|
||||
client_buffer_bytes, cmd_byte_positions,
|
||||
pause_generator, resume_generator, old_tty
|
||||
)
|
||||
continue
|
||||
|
||||
if res.copilot_injected_command:
|
||||
cmd_byte_positions.append((len(client_buffer_bytes), res.copilot_injected_command))
|
||||
|
||||
if res.stdout_data:
|
||||
os.write(sys.stdout.fileno(), res.stdout_data)
|
||||
client_buffer_bytes.extend(res.stdout_data)
|
||||
finally:
|
||||
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)
|
||||
os.close(wake_r)
|
||||
os.close(wake_w)
|
||||
|
||||
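The `wake_r`/`wake_w` pipe above is what lets `pause_generator` interrupt a `select()` that is blocked waiting on stdin. A minimal, standalone sketch of that self-pipe trick (POSIX only; all names here are illustrative, not part of the client):

```python
import os
import select
import threading
import time

# Self-pipe trick: writing one byte to wake_w makes wake_r readable,
# which unblocks a select() that is waiting on other descriptors.
wake_r, wake_w = os.pipe()

def waker():
    time.sleep(0.1)
    os.write(wake_w, b"\x00")  # wake the selecting thread

threading.Thread(target=waker, daemon=True).start()

# Block for up to 5 seconds; the waker thread fires after 0.1s.
readable, _, _ = select.select([wake_r], [], [], 5.0)
woken = wake_r in readable
if woken:
    os.read(wake_r, 1)  # drain the wake byte so later selects don't re-fire

os.close(wake_r)
os.close(wake_w)
print("woken:", woken)
```

Draining the byte after waking matters: an undrained pipe would keep `select()` returning immediately on every later call.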
    @MethodHook
    @handle_errors
    def list_nodes(self, filter_str=None, format_str=None):
        req = connpy_pb2.FilterRequest(filter_str=filter_str or "", format_str=format_str or "")
        return from_value(self.stub.list_nodes(req).data) or []

    @MethodHook
    @handle_errors
    def list_folders(self, filter_str=None):
        req = connpy_pb2.FilterRequest(filter_str=filter_str or "")
        return from_value(self.stub.list_folders(req).data) or []

    @handle_errors
    def get_node_details(self, unique_id):
        return from_struct(self.stub.get_node_details(connpy_pb2.IdRequest(id=unique_id)).data)

    @handle_errors
    def explode_unique(self, unique_id):
        return from_value(self.stub.explode_unique(connpy_pb2.IdRequest(id=unique_id)).data)

    @handle_errors
    def validate_parent_folder(self, unique_id):
        self.stub.validate_parent_folder(connpy_pb2.IdRequest(id=unique_id))

    @handle_errors
    def generate_cache(self, nodes=None, folders=None, profiles=None):
        # 1. Update remote cache on server
        self.stub.generate_cache(Empty())

        # 2. Update local fzf/text cache files
        # If no data provided, we fetch it all from remote to sync local files
        if nodes is None and folders is None and profiles is None:
            nodes = self.list_nodes()
            folders = self.list_folders()
            # We don't have direct access to ProfileStub here, but usually
            # node cache is what matters for fzf. We'll fetch profiles if we can.
            # For now, let's sync what we have.

        if nodes is not None or folders is not None or profiles is not None:
            self.config._generate_nodes_cache(nodes=nodes, folders=folders, profiles=profiles)

    def _trigger_local_cache_sync(self):
        """Helper to fetch remote data and update local fzf cache files after a change."""
        try:
            nodes = self.list_nodes()
            folders = self.list_folders()
            self.generate_cache(nodes=nodes, folders=folders)
        except Exception:
            # Failure to sync cache shouldn't break the main operation's success feedback
            pass

    @handle_errors
    def add_node(self, unique_id, data, is_folder=False):
        req = connpy_pb2.NodeRequest(id=unique_id, data=to_struct(data), is_folder=is_folder)
        self.stub.add_node(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def update_node(self, unique_id, data):
        req = connpy_pb2.NodeRequest(id=unique_id, data=to_struct(data), is_folder=False)
        self.stub.update_node(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def delete_node(self, unique_id, is_folder=False):
        req = connpy_pb2.DeleteRequest(id=unique_id, is_folder=is_folder)
        self.stub.delete_node(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def move_node(self, src_id, dst_id, copy=False):
        req = connpy_pb2.MoveRequest(src_id=src_id, dst_id=dst_id, copy=copy)
        self.stub.move_node(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def bulk_add(self, ids, hosts, common_data):
        req = connpy_pb2.BulkRequest(ids=ids, hosts=hosts, common_data=to_struct(common_data))
        self.stub.bulk_add(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def set_reserved_names(self, names):
        self.stub.set_reserved_names(connpy_pb2.ListRequest(items=names))
        self._trigger_local_cache_sync()

    @handle_errors
    def full_replace(self, connections, profiles):
        req = connpy_pb2.FullReplaceRequest(
            connections=to_struct(connections),
            profiles=to_struct(profiles)
        )
        self.stub.full_replace(req)
        self._trigger_local_cache_sync()

    @handle_errors
    def get_inventory(self):
        resp = self.stub.get_inventory(Empty())
        return {
            "connections": from_struct(resp.connections),
            "profiles": from_struct(resp.profiles)
        }

class ProfileStub:
    def __init__(self, channel, remote_host, node_stub=None):
        self.stub = connpy_pb2_grpc.ProfileServiceStub(channel)
        self.remote_host = remote_host
        self.node_stub = node_stub

    @handle_errors
    def list_profiles(self, filter_str=None):
        req = connpy_pb2.FilterRequest(filter_str=filter_str or "")
        return from_value(self.stub.list_profiles(req).data) or []

    @handle_errors
    def get_profile(self, name, resolve=True):
        req = connpy_pb2.ProfileRequest(name=name, resolve=resolve)
        return from_struct(self.stub.get_profile(req).data)

    @handle_errors
    def add_profile(self, name, data):
        req = connpy_pb2.NodeRequest(id=name, data=to_struct(data))
        self.stub.add_profile(req)
        if self.node_stub:
            self.node_stub._trigger_local_cache_sync()

    @handle_errors
    def resolve_node_data(self, node_data):
        req = connpy_pb2.StructRequest(data=to_struct(node_data))
        return from_struct(self.stub.resolve_node_data(req).data)

    @handle_errors
    def delete_profile(self, name):
        req = connpy_pb2.IdRequest(id=name)
        self.stub.delete_profile(req)
        if self.node_stub:
            self.node_stub._trigger_local_cache_sync()

    @handle_errors
    def update_profile(self, name, data):
        req = connpy_pb2.NodeRequest(id=name, data=to_struct(data))
        self.stub.update_profile(req)
        if self.node_stub:
            self.node_stub._trigger_local_cache_sync()

class ConfigStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.ConfigServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def get_settings(self):
        return from_struct(self.stub.get_settings(Empty()).data)

    @handle_errors
    def update_setting(self, key, value):
        self.stub.update_setting(connpy_pb2.UpdateRequest(key=key, value=to_value(value)))

    @handle_errors
    def get_default_dir(self):
        return self.stub.get_default_dir(Empty()).value

    @handle_errors
    def set_config_folder(self, folder):
        self.stub.set_config_folder(connpy_pb2.StringRequest(value=folder))

    @handle_errors
    def encrypt_password(self, password):
        return self.stub.encrypt_password(connpy_pb2.StringRequest(value=password)).value

class PluginStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.PluginServiceStub(channel)
        self.remote_stub = remote_plugin_pb2_grpc.RemotePluginServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def list_plugins(self):
        return from_value(self.stub.list_plugins(Empty()).data)

    @handle_errors
    def add_plugin(self, name, source_file, update=False):
        # Read the local file content to send it to the server
        with open(source_file, "r") as f:
            content = f.read()

        # Use source_file as a marker for "content-inside"
        marker_content = f"---CONTENT---\n{content}"
        req = connpy_pb2.PluginRequest(name=name, source_file=marker_content, update=update)
        self.stub.add_plugin(req)

    @handle_errors
    def delete_plugin(self, name):
        self.stub.delete_plugin(connpy_pb2.IdRequest(id=name))

    @handle_errors
    def enable_plugin(self, name):
        self.stub.enable_plugin(connpy_pb2.IdRequest(id=name))

    @handle_errors
    def disable_plugin(self, name):
        self.stub.disable_plugin(connpy_pb2.IdRequest(id=name))

    @handle_errors
    def get_plugin_source(self, name):
        resp = self.remote_stub.get_plugin_source(remote_plugin_pb2.IdRequest(id=name))
        return resp.value

    @handle_errors
    def invoke_plugin(self, name, args_namespace):
        import json
        args_dict = {k: v for k, v in vars(args_namespace).items()
                     if isinstance(v, (str, int, float, bool, list, type(None)))}
        if hasattr(args_namespace, "func") and hasattr(args_namespace.func, "__name__"):
            args_dict["__func_name__"] = args_namespace.func.__name__

        req = remote_plugin_pb2.PluginInvokeRequest(name=name, args_json=json.dumps(args_dict))
        for chunk in self.remote_stub.invoke_plugin(req):
            yield chunk.text

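`invoke_plugin` above serializes only the JSON-safe attributes of the argparse namespace and carries the selected handler by its `__name__`. A small standalone sketch of that filtering (the namespace fields here are made up for illustration):

```python
import argparse
import json

# A parsed namespace typically mixes JSON-safe values with callables
# (argparse's set_defaults(func=...) pattern stores the handler itself).
ns = argparse.Namespace(verbose=True, count=3, name="demo", func=print)

# Keep only values that survive json.dumps(); drop everything else.
args_dict = {k: v for k, v in vars(ns).items()
             if isinstance(v, (str, int, float, bool, list, type(None)))}
# Carry the handler by name, since the callable itself can't cross the wire.
if hasattr(ns, "func") and hasattr(ns.func, "__name__"):
    args_dict["__func_name__"] = ns.func.__name__

payload = json.loads(json.dumps(args_dict))
print(payload)
```

The server side can then dispatch on `__func_name__` instead of receiving an unpicklable function object.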
class ExecutionStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.ExecutionServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def run_commands(self, nodes_filter, commands, variables=None, parallel=10, timeout=10, folder=None, prompt=None, **kwargs):
        nodes_list = [nodes_filter] if isinstance(nodes_filter, str) else list(nodes_filter)
        req = connpy_pb2.RunRequest(
            nodes=nodes_list,
            commands=commands,
            folder=folder or "",
            prompt=prompt or "",
            parallel=parallel,
            timeout=timeout,
            name=kwargs.get("name", "")
        )
        if variables is not None:
            req.vars.CopyFrom(to_struct(variables))

        final_results = {}
        on_complete = kwargs.get("on_node_complete")

        for response in self.stub.run_commands(req):
            if on_complete:
                on_complete(response.unique_id, response.output, response.status)
            final_results[response.unique_id] = {
                "output": response.output,
                "status": response.status
            }

        return final_results

    @handle_errors
    def test_commands(self, nodes_filter, commands, expected, variables=None, parallel=10, timeout=10, prompt=None, **kwargs):
        nodes_list = [nodes_filter] if isinstance(nodes_filter, str) else list(nodes_filter)
        req = connpy_pb2.TestRequest(
            nodes=nodes_list,
            commands=commands,
            expected=expected if isinstance(expected, list) else [expected],
            folder=kwargs.get("folder", ""),
            prompt=prompt or "",
            parallel=parallel,
            timeout=timeout,
            name=kwargs.get("name", "")
        )
        if variables is not None:
            req.vars.CopyFrom(to_struct(variables))

        final_results = {}
        on_complete = kwargs.get("on_node_complete")

        for response in self.stub.test_commands(req):
            result_dict = from_struct(response.test_result) if response.HasField("test_result") else {}
            if on_complete:
                on_complete(response.unique_id, response.output, response.status, result_dict)
            final_results[response.unique_id] = result_dict

        return final_results

    @handle_errors
    def run_cli_script(self, nodes_filter, script_path, parallel=10):
        req = connpy_pb2.ScriptRequest(param1=nodes_filter, param2=script_path, parallel=parallel)
        return from_struct(self.stub.run_cli_script(req).data)

    @handle_errors
    def run_yaml_playbook(self, playbook_path, parallel=10):
        req = connpy_pb2.ScriptRequest(param1=playbook_path, parallel=parallel)
        return from_struct(self.stub.run_yaml_playbook(req).data)


class ImportExportStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.ImportExportServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def export_to_file(self, file_path, folders=None):
        req = connpy_pb2.ExportRequest(file_path=file_path, folders=folders or [])
        self.stub.export_to_file(req)

    @handle_errors
    def import_from_file(self, file_path):
        with open(file_path, "r") as f:
            content = f.read()
        # Marker to tell the server this is content, not a path
        marker_content = f"---YAML---\n{content}"
        self.stub.import_from_file(connpy_pb2.StringRequest(value=marker_content))

    @handle_errors
    def set_reserved_names(self, names):
        self.stub.set_reserved_names(connpy_pb2.ListRequest(items=names))

class AIStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.AIServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def ask(self, input_text, dryrun=False, chat_history=None, session_id=None, debug=False, status=None, **overrides):
        import queue
        from rich.prompt import Prompt
        from rich.text import Text
        from rich.live import Live
        from rich.panel import Panel
        from rich.markdown import Markdown

        req_queue = queue.Queue()

        initial_req = connpy_pb2.AskRequest(
            input_text=input_text,
            dryrun=dryrun,
            session_id=session_id or "",
            debug=debug,
            engineer_model=overrides.get("engineer_model", ""),
            engineer_api_key=overrides.get("engineer_api_key", ""),
            architect_model=overrides.get("architect_model", ""),
            architect_api_key=overrides.get("architect_api_key", ""),
            trust=overrides.get("trust", False)
        )
        if chat_history is not None:
            initial_req.chat_history.CopyFrom(to_value(chat_history))

        req_queue.put(initial_req)

        def request_generator():
            while True:
                req = req_queue.get()
                if req is None:
                    break
                yield req

        responses = self.stub.ask(request_generator())

        full_content = ""
        live_display = None
        final_result = {"response": "", "chat_history": []}

        # Background thread to pull responses from gRPC into a local queue.
        # This prevents KeyboardInterrupt from corrupting the gRPC iterator state.
        response_queue = queue.Queue()

        def pull_responses():
            try:
                for response in responses:
                    response_queue.put(("data", response))
            except Exception as e:
                response_queue.put(("error", e))
            finally:
                response_queue.put((None, None))

        threading.Thread(target=pull_responses, daemon=True).start()

        try:
            while True:
                try:
                    # BLOCKING GET from local queue (interruptible by signal)
                    msg_type, response = response_queue.get()
                except KeyboardInterrupt:
                    # Signal interruption to the server
                    if status:
                        status.update("[error]Interrupted! Closing pending tasks...")

                    # Send the interrupt signal to the server
                    req_queue.put(connpy_pb2.AskRequest(interrupt=True))

                    # CONTINUE the loop to receive remaining data and summary from the queue
                    continue

                if msg_type is None:  # Sentinel
                    break

                if msg_type == "error":
                    # Re-raise or handle gRPC error from background thread
                    if isinstance(response, grpc.RpcError):
                        raise response
                    printer.warning(f"Stream interrupted: {response}")
                    break

                if response.status_update:
                    if response.requires_confirmation:
                        if status: status.stop()

                        # Show prompt and wait for answer
                        prompt_text = Text.from_ansi(response.status_update)
                        ans = Prompt.ask(prompt_text)

                        if status:
                            status.update("[ai_status]Agent: Resuming...")
                            status.start()

                        req_queue.put(connpy_pb2.AskRequest(confirmation_answer=ans))
                        continue

                    if status:
                        status.update(response.status_update)
                    continue

                if response.debug_message:
                    if debug:
                        if live_display:
                            try: live_display.stop()
                            except: pass
                        if status:
                            try: status.stop()
                            except: pass
                        printer.console.print(Text.from_ansi(response.debug_message))
                        if live_display:
                            try: live_display.start()
                            except: pass
                        elif status:
                            try: status.start()
                            except: pass
                    continue

                if response.important_message:
                    if live_display:
                        try: live_display.stop()
                        except: pass
                    if status:
                        try: status.stop()
                        except: pass
                    printer.console.print(Text.from_ansi(response.important_message))
                    if live_display:
                        try: live_display.start()
                        except: pass
                    elif status:
                        try: status.start()
                        except: pass
                    continue

                if not response.is_final:
                    if response.text_chunk:
                        full_content += response.text_chunk

                        if not live_display:
                            if status:
                                try: status.stop()
                                except: pass

                            from rich.console import Console as RichConsole
                            from ..printer import connpy_theme, get_original_stdout
                            stable_console = RichConsole(theme=connpy_theme, file=get_original_stdout())

                            # We default to Engineer title during stream, final result will correct it if needed
                            live_display = Live(
                                Panel(Markdown(full_content), title="[bold engineer]Network Engineer[/bold engineer]", border_style="engineer", expand=False),
                                console=stable_console,
                                refresh_per_second=8,
                                transient=False
                            )
                            live_display.start()
                        else:
                            live_display.update(
                                Panel(Markdown(full_content), title="[bold engineer]Network Engineer[/bold engineer]", border_style="engineer", expand=False)
                            )
                    continue

                if response.is_final:
                    if live_display:
                        try: live_display.stop()
                        except: pass
                    # Final stop for status to ensure it disappears before the panel
                    if status:
                        try: status.stop()
                        except: pass

                    final_result = from_struct(response.full_result)
                    responder = final_result.get("responder", "engineer")
                    alias = "architect" if responder == "architect" else "engineer"
                    role_label = "Network Architect" if responder == "architect" else "Network Engineer"
                    title = f"[bold {alias}]{role_label}[/bold {alias}]"

                    content_to_print = full_content or final_result.get("response", "")
                    if content_to_print:
                        if live_display:
                            # Re-render the final frame with correct title/colors
                            live_display.update(Panel(Markdown(content_to_print), title=title, border_style=alias, expand=False))
                        else:
                            printer.console.print(Panel(Markdown(content_to_print), title=title, border_style=alias, expand=False))
                    break
        except Exception as e:
            # Check if it was a gRPC error that we should let handle_errors catch
            if isinstance(e, grpc.RpcError):
                raise
            printer.warning(f"Stream interrupted: {e}")
        finally:
            req_queue.put(None)

        if full_content:
            final_result["streamed"] = True

        return final_result

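The `ask` method above drains the gRPC stream on a daemon thread into a local queue so the main thread blocks on `queue.get()`, which a signal (e.g. Ctrl+C) can interrupt without corrupting the stream iterator. A standalone sketch of that producer-thread pattern, using a plain generator as a stand-in for the gRPC stream:

```python
import queue
import threading

def fake_stream():
    # Stand-in for a gRPC response iterator.
    yield "chunk-1"
    yield "chunk-2"

q = queue.Queue()

def pull():
    # The producer owns the (possibly fragile) iterator; the consumer
    # never touches it directly. A (None, None) sentinel marks end-of-stream.
    try:
        for item in fake_stream():
            q.put(("data", item))
    finally:
        q.put((None, None))

threading.Thread(target=pull, daemon=True).start()

received = []
while True:
    kind, item = q.get()   # interruptible blocking point
    if kind is None:
        break
    received.append(item)

print(received)
```

On KeyboardInterrupt the consumer can keep looping to collect whatever the producer already queued, which is exactly how `ask` receives the server's summary after sending an interrupt request.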
    @handle_errors
    def confirm(self, input_text, console=None):
        return self.stub.confirm(connpy_pb2.StringRequest(value=input_text)).value

    @handle_errors
    def list_sessions(self):
        return from_value(self.stub.list_sessions(Empty()).data)

    @handle_errors
    def delete_session(self, session_id):
        self.stub.delete_session(connpy_pb2.StringRequest(value=session_id))

    @handle_errors
    def configure_provider(self, provider, model=None, api_key=None):
        req = connpy_pb2.ProviderRequest(provider=provider, model=model or "", api_key=api_key or "")
        self.stub.configure_provider(req)

    @handle_errors
    def configure_mcp(self, name, url=None, enabled=True, auto_load_on_os=None, remove=False):
        req = connpy_pb2.MCPRequest(
            name=name,
            url=url or "",
            enabled=enabled,
            auto_load_on_os=auto_load_on_os or "",
            remove=remove
        )
        self.stub.configure_mcp(req)

    @handle_errors
    def load_session_data(self, session_id):
        return from_struct(self.stub.load_session_data(connpy_pb2.StringRequest(value=session_id)).data)

class SystemStub:
    def __init__(self, channel, remote_host):
        self.stub = connpy_pb2_grpc.SystemServiceStub(channel)
        self.remote_host = remote_host

    @handle_errors
    def start_api(self, port=None):
        self.stub.start_api(connpy_pb2.IntRequest(value=port or 8048))

    @handle_errors
    def debug_api(self, port=None):
        self.stub.debug_api(connpy_pb2.IntRequest(value=port or 8048))

    @handle_errors
    def stop_api(self):
        self.stub.stop_api(Empty())

    @handle_errors
    def restart_api(self, port=None):
        self.stub.restart_api(connpy_pb2.IntRequest(value=port or 8048))

    @handle_errors
    def get_api_status(self):
        return self.stub.get_api_status(Empty()).value
@@ -1,30 +0,0 @@
import json
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Struct, Value


def to_value(obj):
    if obj is None:
        v = Value()
        v.null_value = 0
        return v
    json_str = json.dumps(obj)
    v = Value()
    json_format.Parse(json_str, v)
    return v


def from_value(val):
    if not val.HasField("kind"):
        return None
    return json.loads(json_format.MessageToJson(val))


def to_struct(obj):
    if not obj:
        return Struct()
    s = Struct()
    json_format.ParseDict(obj, s)
    return s


def from_struct(struct):
    if not struct:
        return {}
    return json_format.MessageToDict(struct, preserving_proto_field_name=True)
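The `to_value`/`from_value` helpers above convert Python objects to protobuf `Value`/`Struct` by round-tripping through JSON, so they can carry exactly what JSON can. A stdlib-only sketch of the invariant they rely on (no protobuf dependency; the sample values are illustrative):

```python
import json

# Anything JSON-compatible survives dumps()/loads() unchanged, which is why
# the helpers can funnel arbitrary node/profile dicts through Struct/Value.
# Caveats: tuples come back as lists, and dict keys become strings.
samples = [
    {"host": "10.0.0.1", "port": 22, "tags": ["core", "edge"], "extra": None},
    ["a", 1, 2.5, True],
    "plain string",
    None,
]
ok = all(json.loads(json.dumps(obj)) == obj for obj in samples)
print("round-trip ok:", ok)
```

One subtlety worth noting: all JSON numbers land in protobuf `Value` as doubles, so large integers can lose precision on the way back.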
+4 -8
@@ -1,7 +1,6 @@
#!/usr/bin/env python3
#Imports
from functools import wraps, partial, update_wrapper
from . import printer

#functions and classes

@@ -20,21 +19,18 @@ class MethodHook:
            try:
                args, kwargs = hook(*args, **kwargs)
            except Exception as e:
                hook_name = getattr(hook, "__name__", str(hook))
                printer.error(f"{self.func.__name__} Pre-hook {hook_name} raised an exception: {e}")
                print(f"{self.func.__name__} Pre-hook {hook.__name__} raised an exception: {e}")

        try:
            result = self.func(*args, **kwargs)

        finally:
            # Execute post-hooks after the original function
            if self.post_hooks:
                #printer.info(f"Executing {len(self.post_hooks)} post-hooks for {self.func.__name__}...")
                pass
            for hook in self.post_hooks:
                try:
                    result = hook(*args, **kwargs, result=result)  # Pass result to hooks
                except Exception as e:
                    hook_name = getattr(hook, "__name__", str(hook))
                    printer.error(f"{self.func.__name__} Post-hook {hook_name} raised an exception: {e}")
                    print(f"{self.func.__name__} Post-hook {hook.__name__} raised an exception: {e}")

        return result
@@ -1,171 +0,0 @@
import asyncio
import json
import os
import threading
from typing import Any, Dict, List, Optional
import logging

try:
    from mcp import ClientSession
    from mcp.client.sse import sse_client
    MCP_AVAILABLE = True
except ImportError:
    MCP_AVAILABLE = False

# Silence noisy MCP and HTTP internal logging
logging.getLogger("mcp").setLevel(logging.CRITICAL)
logging.getLogger("httpx").setLevel(logging.CRITICAL)
logging.getLogger("httpcore").setLevel(logging.CRITICAL)


class MCPClientManager:
    """Manages MCP SSE client connections for connpy."""

    _instance = None
    _lock = threading.Lock()

    def __new__(cls, *args, **kwargs):
        with cls._lock:
            if cls._instance is None:
                cls._instance = super(MCPClientManager, cls).__new__(cls)
                cls._instance._initialized = False
            return cls._instance

    def __init__(self, config=None):
        if self._initialized:
            return
        self.config = config
        self.sessions: Dict[str, Dict[str, Any]] = {}  # name -> {session, stack}
        self.tool_cache: Dict[str, List[Dict[str, Any]]] = {}
        self._connecting: Dict[str, asyncio.Future] = {}
        self._initialized = True

    async def get_tools_for_llm(self, os_filter: Optional[str] = None) -> List[Dict[str, Any]]:
        """
        Fetches tools from enabled MCP servers that match the OS filter.
        """
        if not MCP_AVAILABLE:
            return []

        all_llm_tools = []
        try:
            mcp_config = self.config.config.get("ai", {}).get("mcp_servers", {})
        except Exception:
            return []

        async def _fetch(name, cfg):
            if not cfg.get("enabled", True):
                return []

            # Filter by OS if specified in config (primarily used for copilot strict matching)
            auto_os = cfg.get("auto_load_on_os")
            if os_filter is not None and auto_os and os_filter.lower() != auto_os.lower():
                return []

            try:
                session = await self._ensure_connected(name, cfg)
                if session:
                    if name in self.tool_cache:
                        return self.tool_cache[name]
                    llm_tools = await self._fetch_tools_as_openai(name, session)
                    self.tool_cache[name] = llm_tools
                    return llm_tools
            except Exception:
                pass
            return []

        tasks = [_fetch(name, cfg) for name, cfg in mcp_config.items()]

        if tasks:
            results = await asyncio.gather(*tasks)
            for tools in results:
                all_llm_tools.extend(tools)

        return all_llm_tools

    async def _ensure_connected(self, name: str, cfg: Dict[str, Any]) -> Optional[Any]:
        if not MCP_AVAILABLE:
            return None

        if name in self.sessions and self.sessions[name].get("session"):
            return self.sessions[name]["session"]

        url = cfg.get("url")
        if not url:
            return None

        if name in self._connecting:
            try:
                return await asyncio.wait_for(asyncio.shield(self._connecting[name]), timeout=10.0)
            except Exception:
                return None

        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        self._connecting[name] = fut

        try:
            from contextlib import AsyncExitStack
            stack = AsyncExitStack()

            async def _do_connect():
                read, write = await stack.enter_async_context(sse_client(url))
                session = await stack.enter_async_context(ClientSession(read, write))
                await session.initialize()
                return session

            session = await asyncio.wait_for(_do_connect(), timeout=15.0)
            self.sessions[name] = {"session": session, "stack": stack}
            fut.set_result(session)
            return session
        except Exception:
            fut.set_result(None)
            return None
        finally:
            if name in self._connecting:
                del self._connecting[name]

    async def _fetch_tools_as_openai(self, server_name: str, session: Any) -> List[Dict[str, Any]]:
        try:
            result = await asyncio.wait_for(session.list_tools(), timeout=5.0)
            openai_tools = []
            for tool in result.tools:
                # Use mcp_ prefix to ensure valid function name for LiteLLM/Gemini
                prefixed_name = f"mcp_{server_name}__{tool.name}"
                openai_tools.append({
                    "type": "function",
                    "function": {
                        "name": prefixed_name,
                        "description": f"[{server_name}] {tool.description}",
                        "parameters": tool.inputSchema
                    }
                })
            return openai_tools
        except Exception:
            return []

    async def call_tool(self, full_tool_name: str, arguments: Dict[str, Any]) -> Any:
        """Calls an MCP tool and returns text result."""
        if not MCP_AVAILABLE:
            return "Error: MCP SDK is not installed."

        if "__" not in full_tool_name:
            return f"Error: Tool {full_tool_name} is not a valid MCP tool."

        clean_name = full_tool_name[4:] if full_tool_name.startswith("mcp_") else full_tool_name
        server_name, tool_name = clean_name.split("__", 1)

        if server_name not in self.sessions:
            return f"Error: MCP server {server_name} is not connected."

        session = self.sessions[server_name]["session"]
        try:
            result = await asyncio.wait_for(session.call_tool(tool_name, arguments), timeout=60.0)
            text_outputs = [content.text for content in result.content if hasattr(content, "text")]
            return "\n".join(text_outputs) if text_outputs else str(result)
        except Exception as e:
            return f"Error calling tool {tool_name} on {server_name}: {str(e)}"

    async def shutdown(self):
        """Close all SSE connections."""
        for name, data in self.sessions.items():
            stack = data.get("stack")
            if stack:
                await stack.aclose()
        self.sessions = {}
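`call_tool` above reverses the `mcp_<server>__<tool>` prefixing that `_fetch_tools_as_openai` applies. A standalone sketch of that naming scheme (helper names are illustrative, not part of the manager):

```python
# Tools are advertised to the LLM as "mcp_<server>__<tool>" so the name is a
# valid function identifier; the dispatcher reverses it with split("__", 1).
def make_name(server: str, tool: str) -> str:
    return f"mcp_{server}__{tool}"

def parse_name(full: str):
    clean = full[4:] if full.startswith("mcp_") else full
    return clean.split("__", 1)

name = make_name("netbox", "get_device")
server, tool = parse_name(name)
print(server, tool)

# split(..., 1) matters: a tool name containing "__" stays intact.
server2, tool2 = parse_name("mcp_srv__a__b")
```

The `maxsplit=1` argument is the load-bearing detail here: without it, a tool whose own name contains `__` would be split into too many pieces.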
+10
-131
@@ -4,34 +4,12 @@ import importlib.util
import sys
import argparse
import os
from connpy import printer


class Plugins:
    def __init__(self):
        self.plugins = {}
        self.plugin_parsers = {}
        self.preloads = {}
        self.remote_plugins = {}
        self.preferences = {}

    def _load_preferences(self, config_dir):
        import json
        path = os.path.join(config_dir, "plugin_preferences.json")
        try:
            with open(path) as f:
                self.preferences = json.load(f)
        except (FileNotFoundError, json.JSONDecodeError):
            self.preferences = {}

    def _save_preferences(self, config_dir):
        import json
        path = os.path.join(config_dir, "plugin_preferences.json")
        try:
            with open(path, "w") as f:
                json.dump(self.preferences, f, indent=4)
        except OSError as e:
            printer.error(f"Failed to save plugin preferences: {e}")

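`_load_preferences` and `_save_preferences` round-trip a small JSON map of plugin name to preferred source. A minimal standalone sketch of that round-trip follows; the file name `plugin_preferences.json` comes from the code above, while the directory and the `"netscan"` entry are purely illustrative.

```python
import json
import os
import tempfile

def save_preferences(config_dir, preferences):
    # Same layout _save_preferences writes: {"<plugin>": "local" | "remote"}
    path = os.path.join(config_dir, "plugin_preferences.json")
    with open(path, "w") as f:
        json.dump(preferences, f, indent=4)

def load_preferences(config_dir):
    # A missing or corrupt file falls back to {}, as in _load_preferences.
    path = os.path.join(config_dir, "plugin_preferences.json")
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return {}
```

A `"remote"` value here is what later makes `_import_plugins_to_argparse` skip the local copy when remote plugins are enabled.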
    def verify_script(self, file_path):
        """
@@ -52,7 +30,8 @@ class Plugins:
        ### Verifications:
        - The presence of only allowed top-level elements.
        - The existence of the two classes 'Parser' and 'Entrypoint', and/or the class 'Preload'.
        - 'Parser' class must only have an '__init__' method and must assign 'self.parser'.
        - 'Parser' class must only have an '__init__' method and must assign 'self.parser'
          and 'self.description'.
        - 'Entrypoint' class must have an '__init__' method accepting specific arguments.

        If any of these checks fail, the function returns an error message indicating
@@ -83,8 +62,8 @@ class Plugins:
                if not (isinstance(node.test, ast.Compare) and
                        isinstance(node.test.left, ast.Name) and
                        node.test.left.id == '__name__' and
                        ((hasattr(ast, 'Str') and isinstance(node.test.comparators[0], getattr(ast, 'Str')) and node.test.comparators[0].s == '__main__') or
                         (hasattr(ast, 'Constant') and isinstance(node.test.comparators[0], getattr(ast, 'Constant')) and node.test.comparators[0].value == '__main__'))):
                        isinstance(node.test.comparators[0], ast.Str) and
                        node.test.comparators[0].s == '__main__'):
                    return "Only __name__ == __main__ If is allowed"
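The guard above only admits a top-level `if __name__ == '__main__':` block. A standalone sketch of the same AST test, using only the modern `ast.Constant` branch (the function name and sample sources are illustrative):

```python
import ast

def is_main_guard(node):
    """Return True if `node` is a top-level `if __name__ == '__main__':` check."""
    return (isinstance(node, ast.If) and
            isinstance(node.test, ast.Compare) and
            isinstance(node.test.left, ast.Name) and
            node.test.left.id == '__name__' and
            isinstance(node.test.comparators[0], ast.Constant) and
            node.test.comparators[0].value == '__main__')
```

`verify_script` additionally checks the `ast.Str` spelling because older Python versions represented string literals that way.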

                elif not isinstance(node, (ast.FunctionDef, ast.ClassDef, ast.Import, ast.ImportFrom, ast.Pass)):
@@ -98,12 +77,11 @@ class Plugins:
                if not all(isinstance(method, ast.FunctionDef) and method.name == '__init__' for method in node.body):
                    return "Parser class should only have __init__ method"

                # Check if 'self.parser' is assigned in __init__ method
                # Check if 'self.parser' and 'self.description' are assigned in __init__ method
                init_method = node.body[0]
                assigned_attrs = [target.attr for expr in init_method.body if isinstance(expr, ast.Assign) for target in expr.targets if isinstance(target, ast.Attribute) and isinstance(target.value, ast.Name) and target.value.id == 'self']
                if 'parser' not in assigned_attrs:
                    return "Parser class should set self.parser"

                if 'parser' not in assigned_attrs or 'description' not in assigned_attrs:
                    return "Parser class should set self.parser and self.description"  # 'self.parser' or 'self.description' not assigned in __init__

            elif node.name == 'Entrypoint':
                has_entrypoint = True
@@ -135,123 +113,24 @@ class Plugins:

        spec.loader.exec_module(module)
        return module

    def _import_plugins_to_argparse(self, directory, subparsers, remote_enabled=False):
        if not os.path.exists(directory):
            return
    def _import_plugins_to_argparse(self, directory, subparsers):
        for filename in os.listdir(directory):
            commands = subparsers.choices.keys()
            if filename.endswith(".py"):
                root_filename = os.path.splitext(filename)[0]
                if root_filename in commands:
                    continue

                # Check preferences: if remote is preferred AND remote is enabled, skip local loading
                if remote_enabled and self.preferences.get(root_filename) == "remote":
                    continue

                # Construct the full path
                filepath = os.path.join(directory, filename)
                check_file = self.verify_script(filepath)
                if check_file:
                    printer.error(f"Failed to load plugin: (unknown). Reason: {check_file}")
                    print(f"Failed to load plugin: (unknown). Reason: {check_file}")
                    continue
                else:
                    self.plugins[root_filename] = self._import_from_path(filepath)
                    if hasattr(self.plugins[root_filename], "Parser"):
                        self.plugin_parsers[root_filename] = self.plugins[root_filename].Parser()
                        plugin = self.plugin_parsers[root_filename]
                        # Default to RichHelpFormatter if plugin doesn't set one
                        try:
                            from rich_argparse import RichHelpFormatter as _RHF
                            fmt = plugin.parser.formatter_class
                            if fmt is argparse.HelpFormatter or fmt is argparse.RawTextHelpFormatter or fmt is argparse.RawDescriptionHelpFormatter:
                                fmt = _RHF
                        except ImportError:
                            fmt = plugin.parser.formatter_class
                        subparsers.add_parser(root_filename, parents=[self.plugin_parsers[root_filename].parser], add_help=False, help=plugin.parser.description, usage=plugin.parser.usage, description=plugin.parser.description, epilog=plugin.parser.epilog, formatter_class=fmt)
                        subparsers.add_parser(root_filename, parents=[self.plugin_parsers[root_filename].parser], add_help=False, description=self.plugin_parsers[root_filename].description)
                    if hasattr(self.plugins[root_filename], "Preload"):
                        self.preloads[root_filename] = self.plugins[root_filename]

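A plugin that passes `verify_script` and loads through `_import_plugins_to_argparse` therefore needs a `Parser` class whose `__init__` assigns both `self.parser` and `self.description`, plus an `Entrypoint` class. A minimal hypothetical plugin file might look like the sketch below; the `Entrypoint.__init__` signature is an assumption, since the source only states it must accept specific arguments.

```python
import argparse

class Parser:
    def __init__(self):
        # verify_script requires both attributes to be assigned here.
        self.parser = argparse.ArgumentParser(description="Example plugin", add_help=False)
        self.parser.add_argument("--target", help="host to operate on")
        self.description = "Example plugin"

class Entrypoint:
    def __init__(self, args, parser, connapp):
        # Signature is illustrative; verify_script only checks that
        # __init__ exists and accepts specific arguments.
        self.args = args

if __name__ == "__main__":
    pass
```

`add_help=False` matters because the parser is reused as a `parents=` entry of the registered subparser, which adds its own `-h`.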
    def _import_remote_plugins_to_argparse(self, plugin_stub, subparsers, cache_dir, force_sync=False):
        import hashlib
        os.makedirs(cache_dir, exist_ok=True)

        try:
            remote_plugins_info = plugin_stub.list_plugins()
        except Exception:
            return

        # Pruning: remove local cached files that are no longer on the server
        for local_file in os.listdir(cache_dir):
            if local_file.endswith(".py"):
                name = local_file[:-3]
                if name not in remote_plugins_info:
                    try:
                        os.remove(os.path.join(cache_dir, local_file))
                    except Exception:
                        pass

        for name, info in remote_plugins_info.items():
            if not info.get("enabled", True):
                continue

            pref = self.preferences.get(name, "local")
            if pref != "remote" and name in self.plugins:
                continue
            if not force_sync and name in subparsers.choices:
                continue

            cache_path = os.path.join(cache_dir, f"{name}.py")

            # Hash comparison
            remote_hash = info.get("hash", "")
            local_hash = ""
            if os.path.exists(cache_path):
                try:
                    with open(cache_path, "rb") as f:
                        local_hash = hashlib.md5(f.read()).hexdigest()
                except Exception:
                    pass

            # Update only if the hash differs or force_sync is True
            if force_sync or remote_hash != local_hash or not os.path.exists(cache_path):
                try:
                    source = plugin_stub.get_plugin_source(name)
                    with open(cache_path, "w") as f:
                        f.write(source)
                except Exception as e:
                    printer.warning(f"Failed to sync remote plugin {name}: {e}")
                    continue

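The cache-refresh decision above boils down to comparing an MD5 digest of the cached file against the server-advertised hash, with `force_sync` and a missing file both forcing a re-download. The same rule in isolation (the function name is illustrative):

```python
import hashlib
import os

def needs_sync(cache_path, remote_hash, force_sync=False):
    """Return True when the cached copy must be re-downloaded."""
    if force_sync or not os.path.exists(cache_path):
        return True
    with open(cache_path, "rb") as f:
        local_hash = hashlib.md5(f.read()).hexdigest()
    return local_hash != remote_hash
```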
            # Verify and load
            check_file = self.verify_script(cache_path)
            if check_file:
                printer.warning(f"Remote plugin {name} failed verification: {check_file}")
                continue

            module = self._import_from_path(cache_path)
            if hasattr(module, "Parser"):
                self.plugin_parsers[name] = module.Parser()
                self.remote_plugins[name] = True
                plugin = self.plugin_parsers[name]
                try:
                    from rich_argparse import RichHelpFormatter as _RHF
                    fmt = plugin.parser.formatter_class
                    if fmt is argparse.HelpFormatter or fmt is argparse.RawTextHelpFormatter or fmt is argparse.RawDescriptionHelpFormatter:
                        fmt = _RHF
                except ImportError:
                    fmt = plugin.parser.formatter_class

                # If force_sync, we might be re-registering, but subparsers.add_parser
                # would fail if the name already exists, so check first.
                if name not in subparsers.choices:
                    subparsers.add_parser(
                        name,
                        parents=[plugin.parser],
                        add_help=False,
                        help=f"[remote] {plugin.parser.description}",
                        usage=plugin.parser.usage,
                        description=plugin.parser.description,
                        epilog=plugin.parser.epilog,
                        formatter_class=fmt
                    )

@@ -1,498 +0,0 @@
import sys
import threading
import io

_local = threading.local()

class ThreadLocalStream:
    def __init__(self, original):
        self._original = original

    def _get_stream(self):
        s = getattr(_local, 'stream', None)
        return s if s is not None else self._original

    def write(self, data):
        stream = self._get_stream()
        if stream:
            stream.write(data)

    def flush(self):
        stream = self._get_stream()
        if stream:
            stream.flush()

    def isatty(self):
        stream = self._get_stream()
        return stream.isatty() if stream else False

    def __getattr__(self, name):
        # Avoid recursion during initialization or if _original is not yet set
        if name in ('_original', '_get_stream'):
            raise AttributeError(name)
        stream = self._get_stream()
        if stream:
            return getattr(stream, name)
        raise AttributeError(f"'NoneType' object has no attribute '{name}'")

# Patch stdout/stderr only once at module level
if not isinstance(sys.stdout, ThreadLocalStream):
    sys.stdout = ThreadLocalStream(sys.stdout)
if not isinstance(sys.stderr, ThreadLocalStream):
    sys.stderr = ThreadLocalStream(sys.stderr)

def _get_local():
    if not hasattr(_local, 'console'):
        _local.console = None
    if not hasattr(_local, 'err_console'):
        _local.err_console = None
    if not hasattr(_local, 'theme') or _local.theme is None:
        from rich.theme import Theme
        _local.theme = Theme(_global_active_styles)
    return _local

def set_thread_stream(stream):
    if stream is None:
        if hasattr(_local, 'stream'):
            del _local.stream
    else:
        _local.stream = stream

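`ThreadLocalStream` plus `set_thread_stream` lets each worker thread redirect its own stdout without touching other threads. A self-contained sketch of the mechanism, re-declared here so it runs standalone (the names mirror the module above, but this is a simplified copy, not the module itself):

```python
import io
import threading

_local = threading.local()

class ThreadLocalStream:
    """Route writes to the thread's own stream if one is set, else the original."""
    def __init__(self, original):
        self._original = original

    def write(self, data):
        stream = getattr(_local, "stream", None) or self._original
        stream.write(data)

shared = ThreadLocalStream(io.StringIO())  # stand-in for the patched sys.stdout
captured = {}

def worker(name):
    # Each thread installs its own buffer, as set_thread_stream would.
    _local.stream = io.StringIO()
    shared.write(f"hello from {name}")
    captured[name] = _local.stream.getvalue()

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because `_local.stream` is thread-local, the two workers write through the same shared object yet never see each other's output.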

def get_original_stdout():
    if isinstance(sys.stdout, ThreadLocalStream):
        return sys.stdout._original
    return sys.stdout

def get_original_stderr():
    if isinstance(sys.stderr, ThreadLocalStream):
        return sys.stderr._original
    return sys.stderr

# Centralized design system
STYLES = {
    "info": "#00ffff",        # Cyan
    "warning": "#ffff00",     # Yellow
    "error": "#ff0000",       # Red
    "success": "#00ff00",     # Green
    "debug": "#888888",
    "header": "bold #00ffff",
    "key": "bold #00ffff",
    "border": "#00ffff",
    "pass": "bold #00ff00",
    "fail": "bold #ff0000",
    "engineer": "#5fafff",    # Sky Blue (lighter than pure blue)
    "architect": "#9370db",   # Medium Purple
    "ai_status": "bold #00ff00",
    "user_prompt": "bold #00afd7",  # Deep Sky Blue / Soft Cyan
    "unavailable": "#d78700",
    "contrast": "#bbbbbb",
}

LIGHT_THEME = {
    "info": "#00008b",     # Navy Blue
    "warning": "#d78700",  # Orange
    "error": "#cd0000",    # Dark Red
    "success": "#006400",  # Dark Green
    "debug": "#777777",
    "header": "bold #00008b",
    "key": "bold #00008b",
    "border": "#00008b",
    "pass": "bold #006400",
    "fail": "bold #cd0000",
    "engineer": "#00008b",
    "architect": "#8b008b",  # Dark Magenta
    "ai_status": "bold #006400",
    "user_prompt": "bold #00008b",
    "unavailable": "#666666",
    "contrast": "#777777",
}

_global_active_styles = STYLES.copy()

def _get_console():
    local = _get_local()

    # Self-healing patch: if sys.stdout was replaced (e.g. by pytest), re-wrap it.
    if not isinstance(sys.stdout, ThreadLocalStream):
        sys.stdout = ThreadLocalStream(sys.stdout)

    current_out = sys.stdout

    # Detect if we need to recreate the console (stream changed or closed)
    needs_recreate = (local.console is None or
                      getattr(local, '_last_stdout', None) is not current_out)

    # Extra check for closed files in test environments
    if not needs_recreate and local.console is not None:
        try:
            if hasattr(local.console.file, 'closed') and local.console.file.closed:
                needs_recreate = True
        except Exception:
            pass

    if needs_recreate:
        from rich.console import Console
        from rich.theme import Theme
        if local.theme is None:
            local.theme = Theme(STYLES)
        local.console = Console(theme=local.theme, file=current_out)
        local._last_stdout = current_out

    return local.console

def _get_err_console():
    local = _get_local()

    # Self-healing patch for stderr
    if not isinstance(sys.stderr, ThreadLocalStream):
        sys.stderr = ThreadLocalStream(sys.stderr)

    current_err = sys.stderr

    needs_recreate = (local.err_console is None or
                      getattr(local, '_last_stderr', None) is not current_err)

    if not needs_recreate and local.err_console is not None:
        try:
            if hasattr(local.err_console.file, 'closed') and local.err_console.file.closed:
                needs_recreate = True
        except Exception:
            pass

    if needs_recreate:
        from rich.console import Console
        from rich.theme import Theme
        if local.theme is None:
            local.theme = Theme(STYLES)
        local.err_console = Console(stderr=True, theme=local.theme, file=current_err)
        local._last_stderr = current_err

    return local.err_console

def set_thread_console(console):
    _get_local().console = console

def set_thread_err_console(console):
    _get_local().err_console = console

def clear_thread_state():
    """Removes all thread-local printer state. Useful for gRPC thread reuse."""
    for attr in ["stream", "console", "err_console", "theme", "_last_stdout", "_last_stderr"]:
        if hasattr(_local, attr):
            delattr(_local, attr)

@property
def console():
    return _get_console()

@property
def err_console():
    return _get_err_console()

@property
def connpy_theme():
    local = _get_local()
    if local.theme is None:
        from rich.theme import Theme
        local.theme = Theme(_global_active_styles)
    return local.theme

def apply_theme(user_styles=None):
    """
    Updates the global console themes with user-defined styles.
    If a style is missing in user_styles, it falls back to the default in STYLES.
    """
    global _global_active_styles
    local = _get_local()
    from rich.theme import Theme

    # Start with a copy of defaults
    active_styles = STYLES.copy()
    if user_styles:
        # Merge user styles (only if they are valid keys)
        for key, value in user_styles.items():
            if key in active_styles:
                active_styles[key] = value

    _global_active_styles = active_styles
    local.theme = Theme(active_styles)
    if local.console:
        local.console.push_theme(local.theme)
    if local.err_console:
        local.err_console.push_theme(local.theme)
    return active_styles

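The merge rule in `apply_theme` is: copy the defaults, then accept a user value only for keys that already exist, silently ignoring unknown keys. The same rule in isolation, without the `rich.Theme` wiring (the `DEFAULTS` subset and function name are illustrative):

```python
DEFAULTS = {"info": "#00ffff", "error": "#ff0000", "success": "#00ff00"}

def merge_styles(user_styles=None):
    # Unknown keys are dropped, exactly as in apply_theme.
    active = DEFAULTS.copy()
    for key, value in (user_styles or {}).items():
        if key in active:
            active[key] = value
    return active
```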

def _format_multiline(tag, message, style=None):
    message = str(message)
    lines = message.splitlines()
    if not lines:
        if style:
            return f"[{style}]\\[{tag}][/{style}]"
        return f"\\[{tag}]"

    # Apply style to the tag (brackets included) if provided
    styled_tag = f"[{style}]\\[{tag}][/{style}]" if style else f"\\[{tag}]"
    formatted = [f"{styled_tag} {lines[0]}"]

    # Indent subsequent lines
    indent = " " * (len(tag) + 3)
    for line in lines[1:]:
        formatted.append(f"{indent}{line}")
    return "\n".join(formatted)

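`_format_multiline` styles the bracketed tag and hangs continuation lines under the first one. A stripped-down copy (rich markup left as plain text, so this is a sketch of the layout rather than the module function) shows the shape of the output:

```python
def format_multiline(tag, message, style=None):
    # Same layout as printer._format_multiline, minus the rich console.
    lines = str(message).splitlines()
    styled_tag = f"[{style}]\\[{tag}][/{style}]" if style else f"\\[{tag}]"
    if not lines:
        return styled_tag
    formatted = [f"{styled_tag} {lines[0]}"]
    indent = " " * (len(tag) + 3)  # aligns continuations under "[tag] "
    formatted += [f"{indent}{line}" for line in lines[1:]]
    return "\n".join(formatted)
```

The backslash before `[{tag}]` keeps rich from treating the literal bracketed tag as markup.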
def info(message):
    _get_console().print(_format_multiline("i", message, style="info"))

def success(message):
    _get_console().print(_format_multiline("✓", message, style="success"))

def start(message):
    _get_console().print(_format_multiline("+", message, style="success"))

def warning(message):
    _get_console().print(_format_multiline("!", message, style="warning"))

def error(message):
    _get_err_console().print(_format_multiline("✗", message, style="error"))

def debug(message):
    _get_console().print(_format_multiline("d", message, style="debug"))

def custom(tag, message):
    _get_console().print(_format_multiline(tag, message, style="header"))

def table(title, columns, rows, header_style="header", box=None):
    from rich.table import Table
    t = Table(title=title, header_style=header_style, box=box)
    for col in columns:
        t.add_column(col)
    for row in rows:
        t.add_row(*[str(item) for item in row])
    _get_console().print(t)

def data(title, content, language="yaml"):
    """Display structured data with syntax highlighting inside a panel."""
    from rich.syntax import Syntax
    from rich.panel import Panel
    syntax = Syntax(content, language, theme="ansi_dark", word_wrap=True, background_color="default")
    panel = Panel(syntax, title=f"[header]{title}[/header]", border_style="border", expand=False)
    _get_console().print(panel)

def node_panel(unique, output, status, title_prefix=""):
    """Display node execution result in a styled panel."""
    from rich.panel import Panel
    from rich.text import Text
    from rich.console import Group
    import os

    try:
        cols, _ = os.get_terminal_size()
    except OSError:
        cols = 80

    if status == 0:
        status_str = "[pass]✓ PASS[/pass]"
        border = "pass"
    else:
        status_str = f"[fail]✗ FAIL({status})[/fail]"
        border = "fail"

    title_line = f"{title_prefix}[bold]{unique}[/bold] — {status_str}"
    stripped = output.strip() if output else ""
    code_block = Text(stripped + "\n") if stripped else Text()

    _get_console().print(Panel(Group(Text(), code_block), title=title_line, width=cols, border_style=border))

def test_panel(unique, output, status, result):
    """Display test execution result in a styled panel."""
    from rich.panel import Panel
    from rich.text import Text
    from rich.console import Group
    import os

    try:
        cols, _ = os.get_terminal_size()
    except OSError:
        cols = 80

    is_pass = (status == 0 and result and all(result.values()))

    if is_pass:
        status_str = "[pass]✓ PASS[/pass]"
        border = "pass"
    else:
        status_str = "[fail]✗ FAIL[/fail]"
        border = "fail"

    title_line = f"[bold]{unique}[/bold] — {status_str}"

    stripped = output.strip() if output else ""
    code_block = Text(stripped + "\n") if stripped else Text()

    test_results = Text()
    test_results.append("\nTEST RESULTS:\n", style="header")
    if result:
        max_key_len = max(len(k) for k in result.keys())
        for k, v in result.items():
            mark = "✓" if v else "✗"
            style = "success" if v else "error"
            test_results.append(f"  {k.ljust(max_key_len)}  {mark}\n", style=style)
    else:
        test_results.append("  No results (execution failed)\n", style="error")

    _get_console().print(Panel(Group(Text(), code_block, test_results), title=title_line, width=cols, border_style=border))

def test_summary(results):
    """Print an aggregate summary of multiple test results in a single panel."""
    from rich.panel import Panel
    from rich.text import Text
    from rich.console import Group
    import os

    try:
        cols, _ = os.get_terminal_size()
    except OSError:
        cols = 80

    summary_content = Text()
    total_passed = 0
    total_failed = 0
    total_partial = 0

    if not results:
        summary_content.append("  No test results found.\n", style="error")
    else:
        for node, test_result in results.items():
            summary_content.append("• ", style="border")
            summary_content.append(f"{node.ljust(40)}", style="bold")

            if test_result:
                passed_count = sum(1 for v in test_result.values() if v)
                total_count = len(test_result)

                if passed_count == total_count:
                    total_passed += 1
                    node_style = "success"
                    mark = "✓ PASS"
                elif passed_count > 0:
                    total_partial += 1
                    node_style = "warning"
                    mark = f"⚠ PARTIAL ({passed_count}/{total_count})"
                else:
                    total_failed += 1
                    node_style = "error"
                    mark = "✗ FAIL"

                summary_content.append(f"  {mark}\n", style=node_style)
                for k, v in test_result.items():
                    res_mark = "✓" if v else "✗"
                    res_style = "success" if v else "error"
                    summary_content.append(f"    {k.ljust(38)}  {res_mark}\n", style=res_style)
            else:
                total_failed += 1
                summary_content.append("  ✗ FAIL\n", style="error")
                summary_content.append("    No results (execution failed)\n", style="error")

    status_parts = []
    if total_passed: status_parts.append(f"[pass]{total_passed} PASSED[/pass]")
    if total_partial: status_parts.append(f"[warning]{total_partial} PARTIAL[/warning]")
    if total_failed: status_parts.append(f"[fail]{total_failed} FAILED[/fail]")

    status_str = " | ".join(status_parts) if status_parts else "[error]NO RESULTS[/error]"
    title_line = f"AGGREGATE TEST SUMMARY — {status_str}"

    _get_console().print(Panel(Group(Text(), summary_content), title=title_line, width=cols, border_style="border"))

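`test_summary` classifies each node from its per-test booleans: all true is PASS, some true is PARTIAL, none (or no results at all) is FAIL. The classification rule in isolation (the function name is illustrative):

```python
def classify(test_result):
    """Map a {test_name: bool} dict to "PASS", "PARTIAL", or "FAIL"."""
    if not test_result:
        return "FAIL"
    passed = sum(1 for v in test_result.values() if v)
    if passed == len(test_result):
        return "PASS"
    return "PARTIAL" if passed > 0 else "FAIL"
```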
def run_summary(results):
    """Print an aggregate summary of multiple execution results in a single panel."""
    from rich.panel import Panel
    from rich.text import Text
    from rich.console import Group
    import os

    try:
        cols, _ = os.get_terminal_size()
    except OSError:
        cols = 80

    summary_content = Text()
    total_ok = 0
    total_err = 0

    if not results:
        summary_content.append("  No execution results found.\n", style="error")
    else:
        for node, data in results.items():
            summary_content.append("• ", style="border")
            summary_content.append(f"{node.ljust(40)}", style="bold")

            # Check if we have a status dict or just output (for backward compatibility)
            status = data.get("status", 0) if isinstance(data, dict) else 0

            if status == 0:
                total_ok += 1
                summary_content.append("  ✓ DONE\n", style="success")
            else:
                total_err += 1
                summary_content.append(f"  ✗ FAIL({status})\n", style="error")

    status_parts = []
    if total_ok: status_parts.append(f"[success]{total_ok} DONE[/success]")
    if total_err: status_parts.append(f"[error]{total_err} FAILED[/error]")

    status_str = " | ".join(status_parts) if status_parts else "[error]NO RESULTS[/error]"
    title_line = f"AGGREGATE EXECUTION SUMMARY — {status_str}"

    _get_console().print(Panel(Group(Text(), summary_content), title=title_line, width=cols, border_style="border"))

def header(text):
    """Print a section header."""
    from rich.rule import Rule
    _get_console().print(Rule(text, style="header"))

def kv(key, value):
    """Print an inline key-value pair."""
    _get_console().print(f"[key]{key}[/key]: {value}")

def confirm_action(item, action):
    """Print a confirmation pre-action message."""
    _get_console().print(f"\\[i] [bold]{action}[/bold]: {item}", style="info")

# Compatibility proxies
class _ConsoleProxy:
    def __getattr__(self, name):
        return getattr(_get_console(), name)
    def __call__(self, *args, **kwargs):
        return _get_console()(*args, **kwargs)
    def __enter__(self):
        return _get_console().__enter__()
    def __exit__(self, exc_type, exc_val, exc_tb):
        return _get_console().__exit__(exc_type, exc_val, exc_tb)

class _ErrConsoleProxy:
    def __getattr__(self, name):
        return getattr(_get_err_console(), name)
    def __call__(self, *args, **kwargs):
        return _get_err_console()(*args, **kwargs)
    def __enter__(self):
        return _get_err_console().__enter__()
    def __exit__(self, exc_type, exc_val, exc_tb):
        return _get_err_console().__exit__(exc_type, exc_val, exc_tb)

console = _ConsoleProxy()
err_console = _ErrConsoleProxy()

# theme also needs to be lazy
class _ThemeProxy:
    def __getattr__(self, name):
        local = _get_local()
        if local.theme is None:
            from rich.theme import Theme
            local.theme = Theme(_global_active_styles)
        return getattr(local.theme, name)

connpy_theme = _ThemeProxy()
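The `_ConsoleProxy`/`_ThemeProxy` trick defers console construction until the first attribute access, so importing the module stays cheap and each thread gets its own instance. A generic sketch of the same lazy-proxy pattern (class and target names are illustrative, not from the module):

```python
class LazyProxy:
    """Forward attribute access to an object built on first use."""
    def __init__(self, factory):
        self._factory = factory
        self._target = None

    def __getattr__(self, name):
        # Only called for attributes not found normally, i.e. anything
        # other than _factory/_target, so this forwards transparently.
        if self._target is None:
            self._target = self._factory()
        return getattr(self._target, name)
```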
@@ -1,294 +0,0 @@
syntax = "proto3";

package connpy;

import "google/protobuf/struct.proto";
import "google/protobuf/empty.proto";

service NodeService {
    rpc list_nodes (FilterRequest) returns (ValueResponse) {}
    rpc list_folders (FilterRequest) returns (ValueResponse) {}
    rpc get_node_details (IdRequest) returns (StructResponse) {}
    rpc explode_unique (IdRequest) returns (ValueResponse) {}
    rpc generate_cache (google.protobuf.Empty) returns (google.protobuf.Empty) {}
    rpc add_node (NodeRequest) returns (google.protobuf.Empty) {}
    rpc update_node (NodeRequest) returns (google.protobuf.Empty) {}
    rpc delete_node (DeleteRequest) returns (google.protobuf.Empty) {}
    rpc move_node (MoveRequest) returns (google.protobuf.Empty) {}
    rpc bulk_add (BulkRequest) returns (google.protobuf.Empty) {}
    rpc validate_parent_folder (IdRequest) returns (google.protobuf.Empty) {}
    rpc set_reserved_names (ListRequest) returns (google.protobuf.Empty) {}
    rpc interact_node (stream InteractRequest) returns (stream InteractResponse) {}
    rpc full_replace (FullReplaceRequest) returns (google.protobuf.Empty) {}
    rpc get_inventory (google.protobuf.Empty) returns (FullReplaceRequest) {}
}

service ProfileService {
    rpc list_profiles (FilterRequest) returns (ValueResponse) {}
    rpc get_profile (ProfileRequest) returns (StructResponse) {}
    rpc add_profile (NodeRequest) returns (google.protobuf.Empty) {}
    rpc resolve_node_data (StructRequest) returns (StructResponse) {}
    rpc delete_profile (IdRequest) returns (google.protobuf.Empty) {}
    rpc update_profile (NodeRequest) returns (google.protobuf.Empty) {}
}

service ConfigService {
    rpc get_settings (google.protobuf.Empty) returns (StructResponse) {}
    rpc get_default_dir (google.protobuf.Empty) returns (StringResponse) {}
    rpc set_config_folder (StringRequest) returns (google.protobuf.Empty) {}
    rpc update_setting (UpdateRequest) returns (google.protobuf.Empty) {}
    rpc encrypt_password (StringRequest) returns (StringResponse) {}
    rpc apply_theme_from_file (StringRequest) returns (StructResponse) {}
}

service PluginService {
    rpc list_plugins (google.protobuf.Empty) returns (ValueResponse) {}
    rpc add_plugin (PluginRequest) returns (google.protobuf.Empty) {}
    rpc delete_plugin (IdRequest) returns (google.protobuf.Empty) {}
    rpc enable_plugin (IdRequest) returns (google.protobuf.Empty) {}
    rpc disable_plugin (IdRequest) returns (google.protobuf.Empty) {}
}

service ExecutionService {
    rpc run_commands (RunRequest) returns (stream NodeRunResult) {}
    rpc test_commands (TestRequest) returns (stream NodeRunResult) {}
    rpc run_cli_script (ScriptRequest) returns (StructResponse) {}
    rpc run_yaml_playbook (ScriptRequest) returns (StructResponse) {}
}

service ImportExportService {
    rpc export_to_file (ExportRequest) returns (google.protobuf.Empty) {}
    rpc import_from_file (StringRequest) returns (google.protobuf.Empty) {}
    rpc set_reserved_names (ListRequest) returns (google.protobuf.Empty) {}
}

service AIService {
    rpc ask (stream AskRequest) returns (stream AIResponse) {}
    rpc confirm (StringRequest) returns (BoolResponse) {}
    rpc ask_copilot (CopilotRequest) returns (CopilotResponse) {}
    rpc list_sessions (google.protobuf.Empty) returns (ValueResponse) {}
    rpc delete_session (StringRequest) returns (google.protobuf.Empty) {}
    rpc configure_provider (ProviderRequest) returns (google.protobuf.Empty) {}
    rpc configure_mcp (MCPRequest) returns (google.protobuf.Empty) {}
    rpc load_session_data (StringRequest) returns (StructResponse) {}
}

service SystemService {
    rpc start_api (IntRequest) returns (google.protobuf.Empty) {}
    rpc debug_api (IntRequest) returns (google.protobuf.Empty) {}
    rpc stop_api (google.protobuf.Empty) returns (google.protobuf.Empty) {}
    rpc restart_api (IntRequest) returns (google.protobuf.Empty) {}
    rpc get_api_status (google.protobuf.Empty) returns (BoolResponse) {}
}

// Request and Response Messages

message InteractRequest {
    string id = 1;
    bool sftp = 2;
    bool debug = 3;
    bytes stdin_data = 4;
    int32 cols = 5;
    int32 rows = 6;
    string connection_params_json = 7;
    // Copilot fields
    string copilot_question = 8;
    string copilot_action = 9;
    string copilot_context_buffer = 10;
    string copilot_node_info_json = 13;
}

message InteractResponse {
    bytes stdout_data = 1;
    bool success = 2;
    string error_message = 3;
    // Copilot fields
    bool copilot_prompt = 4;
    string copilot_buffer_preview = 5;
    string copilot_response_json = 6;
    string copilot_node_info_json = 7;
    string copilot_stream_chunk = 8;
    string copilot_injected_command = 9;
}

message FilterRequest {
    string filter_str = 1;
    string format_str = 2;
}

message ValueResponse {
    google.protobuf.Value data = 1;
}

message IdRequest {
    string id = 1;
}

message NodeRequest {
    string id = 1;
    google.protobuf.Struct data = 2;
    bool is_folder = 3;
}

message DeleteRequest {
    string id = 1;
    bool is_folder = 2;
}

message MessageValue {
    string value = 1;
}

message MoveRequest {
    string src_id = 1;
    string dst_id = 2;
    bool copy = 3;
}

message BulkRequest {
    repeated string ids = 1;
    repeated string hosts = 2;
    google.protobuf.Struct common_data = 3;
}

message StructResponse {
    google.protobuf.Struct data = 1;
}

message ProfileRequest {
    string name = 1;
    bool resolve = 2;
}

message StructRequest {
    google.protobuf.Struct data = 1;
}

message StringRequest {
    string value = 1;
}

message StringResponse {
    string value = 1;
}

message UpdateRequest {
    string key = 1;
    google.protobuf.Value value = 2;
}

message PluginRequest {
    string name = 1;
    string source_file = 2;
    bool update = 3;
}

message RunRequest {
    repeated string nodes = 1;
    repeated string commands = 2;
    string folder = 3;
    string prompt = 4;
    int32 parallel = 5;
    google.protobuf.Struct vars = 6;
    int32 timeout = 7;
    string name = 8;
}

message TestRequest {
    repeated string nodes = 1;
    repeated string commands = 2;
    repeated string expected = 3;
    string folder = 4;
    string prompt = 5;
    int32 parallel = 6;
    google.protobuf.Struct vars = 7;
    int32 timeout = 8;
    string name = 9;
}

message ScriptRequest {
    string param1 = 1; // nodes_filter or playbook_path
    string param2 = 2; // script_path or ""
    int32 parallel = 3;
}

message ExportRequest {
    string file_path = 1;
    repeated string folders = 2;
}

message ListRequest {
    repeated string items = 1;
}

message AskRequest {
    string input_text = 1;
    bool dryrun = 2;
    google.protobuf.Value chat_history = 3;
    string session_id = 4;
    bool debug = 5;
    string engineer_model = 6;
    string engineer_api_key = 7;
    string architect_model = 8;
    string architect_api_key = 9;
    bool trust = 10;
    string confirmation_answer = 11;
    bool interrupt = 12;
}

message AIResponse {
    string text_chunk = 1;
    bool is_final = 2;
    google.protobuf.Struct full_result = 3;
    string status_update = 4;
    string debug_message = 5;
    bool requires_confirmation = 6;
string important_message = 7;
|
||||
}
|
||||
|
||||
message BoolResponse {
|
||||
bool value = 1;
|
||||
}
|
||||
|
||||
message ProviderRequest {
|
||||
string provider = 1;
|
||||
string model = 2;
|
||||
string api_key = 3;
|
||||
}
|
||||
|
||||
message IntRequest {
|
||||
int32 value = 1;
|
||||
}
|
||||
|
||||
message NodeRunResult {
|
||||
string unique_id = 1;
|
||||
string output = 2;
|
||||
int32 status = 3;
|
||||
google.protobuf.Struct test_result = 4;
|
||||
}
|
||||
|
||||
message FullReplaceRequest {
|
||||
google.protobuf.Struct connections = 1;
|
||||
google.protobuf.Struct profiles = 2;
|
||||
}
|
||||
|
||||
message CopilotRequest {
|
||||
string terminal_buffer = 1;
|
||||
string user_question = 2;
|
||||
string node_info_json = 3;
|
||||
}
|
||||
|
||||
message CopilotResponse {
|
||||
repeated string commands = 1;
|
||||
string guide = 2;
|
||||
string risk_level = 3;
|
||||
string error = 4;
|
||||
}
|
||||
|
||||
message MCPRequest {
|
||||
string name = 1;
|
||||
string url = 2;
|
||||
bool enabled = 3;
|
||||
string auto_load_on_os = 4;
|
||||
bool remove = 5;
|
||||
}
|
||||
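The string fields suffixed `_json` (for example `copilot_response_json` in `InteractResponse`) carry structured payloads serialized as JSON rather than nested protobuf messages. A minimal stdlib sketch of what such a payload could look like, mirroring the `CopilotResponse` field names above (the helper function itself is hypothetical, not part of the repo):

```python
import json

def encode_copilot_response(commands, guide, risk_level="low", error=""):
    # Mirror the CopilotResponse message fields as a plain JSON object.
    return json.dumps({
        "commands": commands,
        "guide": guide,
        "risk_level": risk_level,
        "error": error,
    })

payload = encode_copilot_response(["show version"], "Check the OS version.")
decoded = json.loads(payload)
```

Carrying sub-payloads as JSON strings keeps the .proto schema stable while the copilot result shape evolves, at the cost of losing protobuf-level type checking for those fields.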
@@ -1,28 +0,0 @@
from .exceptions import *
from .node_service import NodeService
from .profile_service import ProfileService
from .execution_service import ExecutionService
from .import_export_service import ImportExportService
from .ai_service import AIService
from .plugin_service import PluginService
from .config_service import ConfigService
from .system_service import SystemService

__all__ = [
    'NodeService',
    'ProfileService',
    'ExecutionService',
    'ImportExportService',
    'AIService',
    'PluginService',
    'ConfigService',
    'SystemService',
    'ConnpyError',
    'NodeNotFoundError',
    'NodeAlreadyExistsError',
    'ProfileNotFoundError',
    'ProfileAlreadyExistsError',
    'ExecutionError',
    'InvalidConfigurationError'
]
@@ -1,202 +0,0 @@
import re
from .base import BaseService
from .exceptions import InvalidConfigurationError
from connpy.utils import log_cleaner


class AIService(BaseService):
    """Business logic for interacting with AI agents and LLM configurations."""

    def build_context_blocks(self, raw_bytes: bytes, cmd_byte_positions: list, node_info: dict) -> list:
        """Identifies command blocks in the terminal history."""
        blocks = []
        if not (cmd_byte_positions and len(cmd_byte_positions) >= 2 and raw_bytes):
            return blocks

        default_prompt = r'>$|#$|\$$|>.$|#.$|\$.$'
        device_prompt = node_info.get("prompt", default_prompt) if isinstance(node_info, dict) else default_prompt
        prompt_re_str = re.sub(r'(?<!\\)\$', '', device_prompt)
        try:
            prompt_re = re.compile(prompt_re_str)
        except Exception:
            prompt_re = re.compile(re.sub(r'(?<!\\)\$', '', default_prompt))

        for i in range(1, len(cmd_byte_positions)):
            pos, known_cmd = cmd_byte_positions[i]
            prev_pos = cmd_byte_positions[i - 1][0]

            if known_cmd:
                prev_chunk = raw_bytes[prev_pos:pos]
                prev_cleaned = log_cleaner(prev_chunk.decode(errors='replace'))
                prev_lines = [l for l in prev_cleaned.split('\n') if l.strip()]
                prompt_text = prev_lines[-1].strip() if prev_lines else ""
                preview = f"{prompt_text}{known_cmd}" if prompt_text else known_cmd
                blocks.append((pos, preview[:80]))
            else:
                chunk = raw_bytes[prev_pos:pos]
                cleaned = log_cleaner(chunk.decode(errors='replace'))
                lines = [l for l in cleaned.split('\n') if l.strip()]
                preview = lines[-1].strip() if lines else ""

                if preview:
                    match = prompt_re.search(preview)
                    if match:
                        cmd_text = preview[match.end():].strip()
                        if cmd_text:
                            blocks.append((pos, preview[:80]))
        return blocks

    def process_copilot_input(self, input_text: str, session_state: dict) -> dict:
        """Parses slash commands and manages session state. Returns directive dict."""
        text = input_text.strip()
        if not text.startswith('/'):
            return {"action": "execute", "clean_prompt": text, "overrides": {}}

        parts = text.split(maxsplit=1)
        cmd = parts[0].lower()
        args = parts[1] if len(parts) > 1 else ""

        # 1. State Commands (Persistent)
        if cmd == "/os":
            if args:
                session_state['os'] = args
                return {"action": "state_update", "message": f"OS context changed to {args}"}
        elif cmd == "/prompt":
            if args:
                session_state['prompt'] = args
                return {"action": "state_update", "message": f"Prompt regex changed to {args}"}
        elif cmd == "/memorize":
            if args:
                session_state['memories'].append(args)
                return {"action": "state_update", "message": f"Memory added: {args}"}
        elif cmd == "/clear":
            session_state['memories'] = []
            return {"action": "state_update", "message": "Memory cleared"}

        # 2. Hybrid Commands
        elif cmd == "/architect":
            if not args:
                session_state['persona'] = 'architect'
                return {"action": "state_update", "message": "Persona set to Architect"}
            else:
                return {"action": "execute", "clean_prompt": args, "overrides": {"persona": "architect"}}

        elif cmd == "/engineer":
            if not args:
                session_state['persona'] = 'engineer'
                return {"action": "state_update", "message": "Persona set to Engineer"}
            else:
                return {"action": "execute", "clean_prompt": args, "overrides": {"persona": "engineer"}}

        elif cmd == "/trust":
            if not args:
                session_state['trust_mode'] = True
                return {"action": "state_update", "message": "Auto-execute (trust) enabled for session"}
            else:
                return {"action": "execute", "clean_prompt": args, "overrides": {"trust": True}}

        elif cmd == "/untrust":
            if not args:
                session_state['trust_mode'] = False
                return {"action": "state_update", "message": "Auto-execute (trust) disabled for session"}
            else:
                return {"action": "execute", "clean_prompt": args, "overrides": {"trust": False}}

        # Unknown command, execute normally
        return {"action": "execute", "clean_prompt": text, "overrides": {}}

    def ask(self, input_text, dryrun=False, chat_history=None, status=None, debug=False, session_id=None, console=None, chunk_callback=None, confirm_handler=None, trust=False, **overrides):
        """Send a prompt to the AI agent."""
        from connpy.ai import ai
        agent = ai(self.config, console=console, confirm_handler=confirm_handler, trust=trust, **overrides)
        return agent.ask(input_text, dryrun, chat_history, status=status, debug=debug, session_id=session_id, chunk_callback=chunk_callback)

    def confirm(self, input_text, console=None):
        """Ask for a safe confirmation of an action."""
        from connpy.ai import ai
        agent = ai(self.config, console=console)
        return agent.confirm(input_text)

    def ask_copilot(self, terminal_buffer, user_question, node_info=None, chunk_callback=None):
        """Ask the AI copilot for terminal assistance."""
        from connpy.ai import ai, run_ai_async
        agent = ai(self.config)
        future = run_ai_async(agent.aask_copilot(terminal_buffer, user_question, node_info, chunk_callback=chunk_callback))
        return future.result()

    async def aask_copilot(self, terminal_buffer, user_question, node_info=None, chunk_callback=None):
        """Ask the AI copilot for terminal assistance asynchronously."""
        from connpy.ai import ai, run_ai_async
        import asyncio
        agent = ai(self.config)
        future = run_ai_async(agent.aask_copilot(terminal_buffer, user_question, node_info, chunk_callback=chunk_callback))
        return await asyncio.wrap_future(future)

    def list_sessions(self):
        """Return a list of all saved AI sessions."""
        from connpy.ai import ai
        agent = ai(self.config)
        return agent._get_sessions()

    def delete_session(self, session_id):
        """Delete an AI session by ID."""
        import os
        sessions_dir = os.path.join(self.config.defaultdir, "ai_sessions")
        path = os.path.join(sessions_dir, f"{session_id}.json")
        if os.path.exists(path):
            os.remove(path)
        else:
            raise InvalidConfigurationError(f"Session '{session_id}' not found.")

    def configure_provider(self, provider, model=None, api_key=None):
        """Update AI provider settings in the configuration."""
        settings = self.config.config.get("ai", {})
        if model:
            settings[f"{provider}_model"] = model
        if api_key:
            settings[f"{provider}_api_key"] = api_key

        self.config.config["ai"] = settings
        self.config._saveconfig(self.config.file)

    def configure_mcp(self, name, url=None, enabled=None, auto_load_on_os=None, remove=False):
        """Update MCP server settings in the configuration with smart merging."""
        ai_settings = self.config.config.get("ai", {})
        mcp_servers = ai_settings.get("mcp_servers", {})

        if remove:
            if name in mcp_servers:
                del mcp_servers[name]
        else:
            # Get existing or new
            server_cfg = mcp_servers.get(name, {})

            # Partial updates
            if url is not None:
                server_cfg["url"] = url

            if enabled is not None:
                server_cfg["enabled"] = bool(enabled)
            elif "enabled" not in server_cfg:
                server_cfg["enabled"] = True  # Default for new entries

            if auto_load_on_os is not None:
                if auto_load_on_os == "":  # Explicit clear
                    if "auto_load_on_os" in server_cfg:
                        del server_cfg["auto_load_on_os"]
                else:
                    server_cfg["auto_load_on_os"] = auto_load_on_os

            mcp_servers[name] = server_cfg

        ai_settings["mcp_servers"] = mcp_servers
        self.config.config["ai"] = ai_settings
        self.config._saveconfig(self.config.file)

    def load_session_data(self, session_id):
        """Load a session's raw data by ID."""
        from connpy.ai import ai
        agent = ai(self.config)
        return agent.load_session_data(session_id)
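The `process_copilot_input` dispatch above returns one of two directive shapes: a `state_update` that mutates the session, or an `execute` with per-call overrides. A standalone sketch of that pattern (a simplified re-implementation covering only `/trust`, not the actual class, which needs a `configfile` instance):

```python
def parse_copilot_command(text, state):
    # Slash commands either update session state or execute with overrides;
    # anything else is passed through as a plain prompt.
    text = text.strip()
    if not text.startswith('/'):
        return {"action": "execute", "clean_prompt": text, "overrides": {}}
    parts = text.split(maxsplit=1)
    cmd = parts[0].lower()
    args = parts[1] if len(parts) > 1 else ""
    if cmd == "/trust":
        if not args:
            state['trust_mode'] = True  # persistent: applies to later prompts
            return {"action": "state_update", "message": "trust enabled"}
        # one-shot: trust applies only to this prompt
        return {"action": "execute", "clean_prompt": args, "overrides": {"trust": True}}
    return {"action": "execute", "clean_prompt": text, "overrides": {}}

directive = parse_copilot_command("/trust show version", {})
```

The hybrid commands (`/trust`, `/architect`, `/engineer`) use the presence of trailing arguments to choose between the persistent and one-shot forms.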
@@ -1,33 +0,0 @@
from connpy.hooks import MethodHook


class BaseService:
    """Base class for all connpy services, providing common configuration access."""

    def __init__(self, config=None):
        """
        Initialize the service.

        Args:
            config: An instance of configfile (or None to instantiate a new one/use global context).
        """
        from connpy import configfile
        self.config = config or configfile()
        self.hooks = MethodHook
        self.reserved_names = []

    def set_reserved_names(self, names):
        """Inject a list of reserved names (e.g. from the CLI)."""
        self.reserved_names = names

    def _validate_node_name(self, unique_id):
        """Check if the node name in unique_id is reserved."""
        from .exceptions import ReservedNameError
        if not self.reserved_names:
            return

        uniques = self.config._explode_unique(unique_id)
        if uniques and "id" in uniques:
            # We only validate the 'id' (the actual node name), folders are prefixed with @
            node_name = uniques["id"]
            if node_name in self.reserved_names:
                raise ReservedNameError(f"Node name '{node_name}' is a reserved command.")
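The reserved-name guard above only inspects the `id` portion of a unique id, so folder qualifiers never trigger it. A minimal sketch of that check in isolation (the reserved list and `explode_unique` helper here are hypothetical stand-ins for the CLI-injected names and `configfile._explode_unique`):

```python
RESERVED = ["config", "run", "ai"]  # assumption: example CLI command names

def explode_unique(unique_id):
    # Stand-in: the real helper also returns folder/subfolder parts.
    return {"id": unique_id.split("@")[0]}

def validate_node_name(unique_id):
    node_name = explode_unique(unique_id)["id"]
    if node_name in RESERVED:
        raise ValueError(f"Node name '{node_name}' is a reserved command.")

validate_node_name("router1@office")  # passes: only the id part is checked
```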
@@ -1,86 +0,0 @@
import os
import shutil
import base64
from typing import Any, Dict
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP
from .base import BaseService
from .exceptions import ConnpyError, InvalidConfigurationError, NodeNotFoundError


class ConfigService(BaseService):
    """Business logic for general application settings and state configuration."""

    def get_settings(self) -> Dict[str, Any]:
        """Get the global configuration settings block."""
        settings = self.config.config.copy()
        settings["configfolder"] = self.config.defaultdir
        return settings

    def get_default_dir(self) -> str:
        """Get the default configuration directory."""
        return self.config.defaultdir

    def set_config_folder(self, folder_path: str):
        """Set the default location for config file by writing to ~/.config/conn/.folder"""
        if not os.path.isdir(folder_path):
            raise ConnpyError(f"readable_dir:{folder_path} is not a valid path")

        pathfile = os.path.join(self.config.anchor_path, ".folder")
        folder = os.path.abspath(folder_path).rstrip('/')

        try:
            with open(pathfile, "w") as f:
                f.write(str(folder))
        except Exception as e:
            raise ConnpyError(f"Failed to save config folder: {e}")

    def update_setting(self, key, value):
        """Update a setting in the configuration file."""
        self.config.config[key] = value
        self.config._saveconfig(self.config.file)

    def encrypt_password(self, password):
        """Encrypt a password using the application's configuration encryption key."""
        return self.config.encrypt(password)

    def apply_theme_from_file(self, theme_input):
        """Apply 'dark', 'light' theme or load a YAML theme file and save it to the configuration."""
        import yaml
        from ..printer import STYLES, LIGHT_THEME

        if theme_input == "dark":
            valid_styles = {}
            self.update_setting("theme", valid_styles)
            return valid_styles
        elif theme_input == "light":
            valid_styles = LIGHT_THEME.copy()
            self.update_setting("theme", valid_styles)
            return valid_styles

        if not os.path.exists(theme_input):
            raise InvalidConfigurationError(f"Theme file '{theme_input}' not found.")

        try:
            with open(theme_input, 'r') as f:
                user_styles = yaml.safe_load(f)
        except Exception as e:
            raise InvalidConfigurationError(f"Failed to parse theme file: {e}")

        if not isinstance(user_styles, dict):
            raise InvalidConfigurationError("Theme file must be a YAML dictionary.")

        # Support both direct styles and nested under 'theme' key
        if "theme" in user_styles and isinstance(user_styles["theme"], dict):
            user_styles = user_styles["theme"]

        # Filter for valid styles only (prevent junk in config)
        valid_styles = {k: v for k, v in user_styles.items() if k in STYLES}

        if not valid_styles:
            raise InvalidConfigurationError("No valid style keys found in theme file.")

        # Persist and return merged styles
        self.update_setting("theme", valid_styles)
        return valid_styles
@@ -1,87 +0,0 @@
import re
from typing import List, Dict, Any
from .base import BaseService
from ..hooks import MethodHook
from .. import printer


class ContextService(BaseService):
    """Business logic for managing and applying regex-based contexts locally."""

    @property
    def contexts(self) -> Dict[str, List[str]]:
        return self.config.config.get("contexts", {"all": [".*"]})

    @property
    def current_context(self) -> str:
        return self.config.config.get("current_context", "all")

    def list_contexts(self) -> List[Dict[str, Any]]:
        result = []
        for name in self.contexts.keys():
            result.append({
                "name": name,
                "active": (name == self.current_context),
                "regexes": self.contexts[name]
            })
        return result

    def add_context(self, name: str, regexes: List[str]):
        if not name.isalnum():
            raise ValueError("Context name must be alphanumeric")

        ctxs = self.contexts
        if name in ctxs:
            raise ValueError(f"Context '{name}' already exists")

        ctxs[name] = regexes
        self.config.config["contexts"] = ctxs
        self.config._saveconfig(self.config.file)

    def update_context(self, name: str, regexes: List[str]):
        if name == "all":
            raise ValueError("Cannot modify default context 'all'")

        ctxs = self.contexts
        if name not in ctxs:
            raise ValueError(f"Context '{name}' does not exist")

        ctxs[name] = regexes
        self.config.config["contexts"] = ctxs
        self.config._saveconfig(self.config.file)

    def delete_context(self, name: str):
        if name == "all":
            raise ValueError("Cannot delete default context 'all'")
        if name == self.current_context:
            raise ValueError(f"Cannot delete active context '{name}'")

        ctxs = self.contexts
        if name not in ctxs:
            raise ValueError(f"Context '{name}' does not exist")

        del ctxs[name]
        self.config.config["contexts"] = ctxs
        self.config._saveconfig(self.config.file)

    def set_active_context(self, name: str):
        if name not in self.contexts:
            raise ValueError(f"Context '{name}' does not exist")

        self.config.config["current_context"] = name
        self.config._saveconfig(self.config.file)

    def get_active_regexes(self) -> List[re.Pattern]:
        patterns = self.contexts.get(self.current_context, [".*"])
        return [re.compile(p) for p in patterns]

    def _match_any(self, node_name: str, patterns: List[re.Pattern]) -> bool:
        return any(p.match(node_name) for p in patterns)

    # Hook handlers for filtering
    def filter_node_list(self, *args, **kwargs):
        patterns = self.get_active_regexes()
        return [node for node in kwargs["result"] if self._match_any(node, patterns)]

    def filter_node_dict(self, *args, **kwargs):
        patterns = self.get_active_regexes()
        return {k: v for k, v in kwargs["result"].items() if self._match_any(k, patterns)}
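The hook handlers above reduce context filtering to "a node is visible if any of the active context's patterns matches its name" (note `re.match`, so patterns are anchored at the start). A self-contained sketch of that core, with made-up context and node names:

```python
import re

# A context maps a name to a list of regexes; the active one filters nodes.
contexts = {"all": [".*"], "lab": [r"^lab-"]}
current = "lab"

# Mirror get_active_regexes + _match_any from ContextService.
patterns = [re.compile(p) for p in contexts.get(current, [".*"])]
nodes = ["lab-r1", "prod-r1", "lab-sw2"]
visible = [n for n in nodes if any(p.match(n) for p in patterns)]
```

Because `match` (not `search`) is used, a pattern like `office$` only matches names that start with it; writing patterns with explicit `^…` anchors keeps the intent obvious.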
@@ -1,31 +0,0 @@
class ConnpyError(Exception):
    """Base exception for all connpy services."""
    pass

class NodeNotFoundError(ConnpyError):
    """Raised when a connection or folder is not found."""
    pass

class NodeAlreadyExistsError(ConnpyError):
    """Raised when a node or folder already exists."""
    pass

class ProfileNotFoundError(ConnpyError):
    """Raised when a profile is not found."""
    pass

class ProfileAlreadyExistsError(ConnpyError):
    """Raised when a profile with the same name already exists."""
    pass

class ExecutionError(ConnpyError):
    """Raised when an execution fails or returns error."""
    pass

class InvalidConfigurationError(ConnpyError):
    """Raised when data or configuration input is invalid."""
    pass

class ReservedNameError(ConnpyError):
    """Raised when a node name conflicts with a reserved command."""
    pass
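Since every service exception subclasses `ConnpyError`, a caller can catch the base class to handle any service failure uniformly while still distinguishing the concrete type when needed. A quick illustration with a trimmed copy of the hierarchy (the `lookup` helper is invented for the example):

```python
class ConnpyError(Exception):
    """Base exception for all connpy services."""

class NodeNotFoundError(ConnpyError):
    """Raised when a connection or folder is not found."""

def lookup(node):
    # Hypothetical operation that fails with a specific subclass.
    raise NodeNotFoundError(f"{node} not found")

try:
    lookup("r1")
except ConnpyError as e:          # catches any subclass
    caught = type(e).__name__      # but the concrete type is preserved
```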
@@ -1,159 +0,0 @@
from typing import List, Dict, Any, Callable, Optional
import os
import yaml
from .base import BaseService
from connpy.core import nodes as Nodes
from .exceptions import ConnpyError


class ExecutionService(BaseService):
    """Business logic for executing commands on nodes and running automation scripts."""

    def run_commands(
        self,
        nodes_filter: str,
        commands: List[str],
        variables: Optional[Dict[str, Any]] = None,
        parallel: int = 10,
        timeout: int = 20,
        folder: Optional[str] = None,
        prompt: Optional[str] = None,
        on_node_complete: Optional[Callable] = None,
        logger: Optional[Callable] = None,
        name: Optional[str] = None
    ) -> Dict[str, str]:
        """Execute commands on a set of nodes."""
        try:
            matched_names = self.config._getallnodes(nodes_filter)
            if not matched_names:
                raise ConnpyError(f"No nodes found matching filter: {nodes_filter}")

            node_data = self.config.getitems(matched_names, extract=True)
            executor = Nodes(node_data, config=self.config)
            self.last_executor = executor

            results = executor.run(
                commands=commands,
                vars=variables,
                parallel=parallel,
                timeout=timeout,
                folder=folder,
                prompt=prompt,
                on_complete=on_node_complete,
                logger=logger
            )

            # Combine output and status for the caller
            full_results = {}
            for unique in results:
                full_results[unique] = {
                    "output": results[unique],
                    "status": executor.status.get(unique, 1)
                }

            return full_results
        except Exception as e:
            raise ConnpyError(f"Execution failed: {e}")

    def test_commands(
        self,
        nodes_filter: str,
        commands: List[str],
        expected: List[str],
        variables: Optional[Dict[str, Any]] = None,
        parallel: int = 10,
        timeout: int = 20,
        folder: Optional[str] = None,
        prompt: Optional[str] = None,
        on_node_complete: Optional[Callable] = None,
        logger: Optional[Callable] = None,
        name: Optional[str] = None
    ) -> Dict[str, Dict[str, bool]]:
        """Run commands and verify expected output on a set of nodes."""
        try:
            matched_names = self.config._getallnodes(nodes_filter)
            if not matched_names:
                raise ConnpyError(f"No nodes found matching filter: {nodes_filter}")

            node_data = self.config.getitems(matched_names, extract=True)
            executor = Nodes(node_data, config=self.config)
            self.last_executor = executor

            results = executor.test(
                commands=commands,
                expected=expected,
                vars=variables,
                parallel=parallel,
                timeout=timeout,
                folder=folder,
                prompt=prompt,
                on_complete=on_node_complete,
                logger=logger
            )
            return results
        except Exception as e:
            raise ConnpyError(f"Testing failed: {e}")

    def run_cli_script(self, nodes_filter: str, script_path: str, parallel: int = 10) -> Dict[str, str]:
        """Run a plain-text script containing one command per line."""
        if not os.path.exists(script_path):
            raise ConnpyError(f"Script file not found: {script_path}")

        try:
            with open(script_path, "r") as f:
                commands = [line.strip() for line in f if line.strip()]
        except Exception as e:
            raise ConnpyError(f"Failed to read script {script_path}: {e}")

        return self.run_commands(nodes_filter, commands, parallel=parallel)

    def run_yaml_playbook(self, playbook_data: str, parallel: int = 10) -> Dict[str, Any]:
        """Run a structured Connpy YAML automation playbook (from path or content)."""
        playbook = None
        if playbook_data.startswith("---YAML---\n"):
            try:
                content = playbook_data[len("---YAML---\n"):]
                playbook = yaml.load(content, Loader=yaml.FullLoader)
            except Exception as e:
                raise ConnpyError(f"Failed to parse YAML content: {e}")
        else:
            if not os.path.exists(playbook_data):
                raise ConnpyError(f"Playbook file not found: {playbook_data}")
            try:
                with open(playbook_data, "r") as f:
                    playbook = yaml.load(f, Loader=yaml.FullLoader)
            except Exception as e:
                raise ConnpyError(f"Failed to load playbook {playbook_data}: {e}")

        # Basic validation
        if not isinstance(playbook, dict) or "nodes" not in playbook or "commands" not in playbook:
            raise ConnpyError("Invalid playbook format: missing 'nodes' or 'commands' keys.")

        action = playbook.get("action", "run")
        options = playbook.get("options", {})

        # Extract all fields similar to RunHandler.cli_run
        exec_args = {
            "nodes_filter": playbook["nodes"],
            "commands": playbook["commands"],
            "variables": playbook.get("variables"),
            "parallel": options.get("parallel", parallel),
            "timeout": playbook.get("timeout", options.get("timeout", 20)),
            "prompt": options.get("prompt"),
            "name": playbook.get("name", "Task")
        }

        # Map 'output' field to folder path if it's not stdout/null
        output_cfg = playbook.get("output")
        if output_cfg not in [None, "stdout"]:
            exec_args["folder"] = output_cfg

        if action == "run":
            return self.run_commands(**exec_args)
        elif action == "test":
            exec_args["expected"] = playbook.get("expected", [])
            return self.test_commands(**exec_args)
        else:
            raise ConnpyError(f"Unsupported playbook action: {action}")
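The playbook-to-arguments mapping inside `run_yaml_playbook` is worth seeing on its own: required keys are validated, `options` override defaults, and `output` becomes a log folder unless it is `stdout`/null. A stdlib-only sketch of just that mapping step, with the YAML already parsed into a dict (example values are invented):

```python
def build_exec_args(playbook, default_parallel=10):
    # Mirror the validation and option precedence from run_yaml_playbook.
    if not isinstance(playbook, dict) or "nodes" not in playbook or "commands" not in playbook:
        raise ValueError("Invalid playbook format: missing 'nodes' or 'commands' keys.")
    options = playbook.get("options", {})
    args = {
        "nodes_filter": playbook["nodes"],
        "commands": playbook["commands"],
        "variables": playbook.get("variables"),
        "parallel": options.get("parallel", default_parallel),
        # top-level timeout wins over options.timeout, which wins over 20
        "timeout": playbook.get("timeout", options.get("timeout", 20)),
        "prompt": options.get("prompt"),
        "name": playbook.get("name", "Task"),
    }
    output_cfg = playbook.get("output")
    if output_cfg not in [None, "stdout"]:
        args["folder"] = output_cfg  # treat as a log folder path
    return args

args = build_exec_args({
    "nodes": "lab-.*",
    "commands": ["show version"],
    "options": {"parallel": 4},
})
```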
@@ -1,115 +0,0 @@
from .base import BaseService
import yaml
import os
from copy import deepcopy
from .exceptions import InvalidConfigurationError, NodeNotFoundError, ReservedNameError
from ..configfile import NoAliasDumper


class ImportExportService(BaseService):
    """Business logic for YAML/JSON inventory import and export."""

    def export_to_file(self, file_path, folders=None):
        """Export nodes/folders to a YAML file."""
        if os.path.exists(file_path):
            raise InvalidConfigurationError(f"File '{file_path}' already exists.")

        data = self.export_to_dict(folders)
        try:
            with open(file_path, "w") as f:
                yaml.dump(data, f, Dumper=NoAliasDumper, default_flow_style=False)
        except OSError as e:
            raise InvalidConfigurationError(f"Failed to export to '{file_path}': {e}")

    def export_to_dict(self, folders=None):
        """Export nodes/folders to a dictionary."""
        if not folders:
            return deepcopy(self.config.connections)
        else:
            # Validate folders exist
            for f in folders:
                if f != "@" and f not in self.config._getallfolders():
                    raise NodeNotFoundError(f"Folder '{f}' not found.")

            flat = self.config._getallnodesfull(folders, extract=False)
            nested = {}
            for k, v in flat.items():
                uniques = self.config._explode_unique(k)
                if not uniques:
                    continue

                if "folder" in uniques and "subfolder" in uniques:
                    f_name = uniques["folder"]
                    s_name = uniques["subfolder"]
                    i_name = uniques["id"]

                    if f_name not in nested:
                        nested[f_name] = {"type": "folder"}
                    if s_name not in nested[f_name]:
                        nested[f_name][s_name] = {"type": "subfolder"}

                    nested[f_name][s_name][i_name] = v

                elif "folder" in uniques:
                    f_name = uniques["folder"]
                    i_name = uniques["id"]

                    if f_name not in nested:
                        nested[f_name] = {"type": "folder"}

                    nested[f_name][i_name] = v
                else:
                    i_name = uniques["id"]
                    nested[i_name] = v

            return nested

    def import_from_file(self, file_path):
        """Import nodes/folders from a YAML file."""
        if not os.path.exists(file_path):
            raise InvalidConfigurationError(f"File '{file_path}' does not exist.")

        try:
            with open(file_path, "r") as f:
                data = yaml.load(f, Loader=yaml.FullLoader)
            self.import_from_dict(data)
        except Exception as e:
            raise InvalidConfigurationError(f"Failed to read/parse import file: {e}")

    def import_from_dict(self, data):
        """Import nodes/folders from a dictionary."""
        if not isinstance(data, dict):
            raise InvalidConfigurationError("Invalid import data format: expected a dictionary of nodes.")

        def _traverse_import(node_data, current_folder='', current_subfolder=''):
            for k, v in node_data.items():
                if k == "type":
                    continue
                if isinstance(v, dict):
                    node_type = v.get("type", "connection")
                    if node_type == "folder":
                        self.config._folder_add(folder=k)
                        _traverse_import(v, current_folder=k, current_subfolder='')
                    elif node_type == "subfolder":
                        self.config._folder_add(folder=current_folder, subfolder=k)
                        _traverse_import(v, current_folder=current_folder, current_subfolder=k)
                    elif node_type == "connection":
                        unique_id = k
                        if current_subfolder:
                            unique_id = f"{k}@{current_subfolder}@{current_folder}"
                        elif current_folder:
                            unique_id = f"{k}@{current_folder}"
                        self._validate_node_name(unique_id)

                        kwargs = deepcopy(v)
                        kwargs['id'] = k
                        kwargs['folder'] = current_folder
                        kwargs['subfolder'] = current_subfolder

                        self.config._connections_add(**kwargs)
                else:
                    # Invalid format, skip
                    pass

        _traverse_import(data)
        self.config._saveconfig(self.config.file)
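Both the export nesting and the import traversal above revolve around the same unique-id scheme: a connection is keyed as `id`, `id@folder`, or `id@subfolder@folder` depending on how deep it sits. A small sketch of composing those ids, matching the branches in `_traverse_import` (the helper name is made up for illustration):

```python
def make_unique_id(name, folder="", subfolder=""):
    # Most-specific qualifier first: id@subfolder@folder, then id@folder, then id.
    if subfolder:
        return f"{name}@{subfolder}@{folder}"
    if folder:
        return f"{name}@{folder}"
    return name

uid = make_unique_id("r1", folder="office", subfolder="rack1")
```

The inverse operation is what `configfile._explode_unique` provides: splitting such an id back into its `id`/`folder`/`subfolder` parts so the export side can rebuild the nested dict.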
@@ -1,273 +0,0 @@
import re
from .base import BaseService
from .exceptions import (
    NodeNotFoundError, NodeAlreadyExistsError,
    InvalidConfigurationError, ReservedNameError
)


class NodeService(BaseService):
    def __init__(self, config=None):
        super().__init__(config)

    def list_nodes(self, filter_str=None, format_str=None):
        """Return a list filtered by regex match and formatted if needed."""
        nodes = self.config._getallnodes()
        case_sensitive = self.config.config.get("case", False)

        if filter_str:
            flags = re.IGNORECASE if not case_sensitive else 0
            nodes = [n for n in nodes if re.search(filter_str, n, flags)]

        if not format_str:
            return nodes

        from .profile_service import ProfileService
        profile_service = ProfileService(self.config)

        formatted_nodes = []
        for n_id in nodes:
            # Use ProfileService to resolve profiles for dynamic formatting
            details = self.config.getitem(n_id, extract=False)
            if details:
                details = profile_service.resolve_node_data(details)

            name = n_id.split("@")[0]
            location = n_id.partition("@")[2] or "root"

            # Prepare context for .format() with all details
            # (guard against nodes whose details could not be resolved)
            context = details.copy() if details else {}
            context.update({
                "name": name,
                "NAME": name.upper(),
                "location": location,
                "LOCATION": location.upper(),
            })

            # Add exploded uniques (id, folder, subfolder)
            uniques = self.config._explode_unique(n_id)
            if uniques:
                context.update(uniques)

            # Add uppercase versions of all keys for convenience
            for k, v in list(context.items()):
                if isinstance(v, str):
                    context[k.upper()] = v.upper()

            try:
                formatted_nodes.append(format_str.format(**context))
            except (KeyError, IndexError, ValueError):
                # Fall back to the original string if formatting fails
                formatted_nodes.append(n_id)
        return formatted_nodes

    def list_folders(self, filter_str=None):
        """Return all unique folders, optionally filtered by regex."""
        folders = self.config._getallfolders()
        case_sensitive = self.config.config.get("case", False)

        if filter_str:
            if filter_str.startswith("@"):
                if not case_sensitive:
                    folders = [f for f in folders if f.lower() == filter_str.lower()]
                else:
                    folders = [f for f in folders if f == filter_str]
            else:
                flags = re.IGNORECASE if not case_sensitive else 0
                folders = [f for f in folders if re.search(filter_str, f, flags)]
        return folders

    def get_node_details(self, unique_id):
        """Return the full configuration dictionary for a specific node."""
        try:
            details = self.config.getitem(unique_id)
            if not details:
                raise NodeNotFoundError(f"Node '{unique_id}' not found.")
            return details
        except (KeyError, TypeError):
            raise NodeNotFoundError(f"Node '{unique_id}' not found.")

    def explode_unique(self, unique_id):
        """Explode a unique ID into a dictionary of its parts."""
        return self.config._explode_unique(unique_id)

    def generate_cache(self, nodes=None, folders=None, profiles=None):
        """Generate and update the internal nodes cache."""
        self.config._generate_nodes_cache(nodes=nodes, folders=folders, profiles=profiles)

    def validate_parent_folder(self, unique_id, is_folder=False):
        """Check that the parent folder exists for a given unique ID."""
        if is_folder:
            uniques = self.config._explode_unique(unique_id)
            if uniques and "subfolder" in uniques and "folder" in uniques:
                parent_folder = f"@{uniques['folder']}"
                if parent_folder not in self.config._getallfolders():
                    raise NodeNotFoundError(f"Folder '{parent_folder}' not found.")
        else:
            node_folder = unique_id.partition("@")[2]
            if node_folder:
                parent_folder = f"@{node_folder}"
                if parent_folder not in self.config._getallfolders():
                    raise NodeNotFoundError(f"Folder '{parent_folder}' not found.")

    def add_node(self, unique_id, data, is_folder=False):
        """Add a new node or folder to the configuration."""
        if not is_folder:
            self._validate_node_name(unique_id)

        all_nodes = self.config._getallnodes()
        all_folders = self.config._getallfolders()

        if is_folder:
            if unique_id in all_folders:
                raise NodeAlreadyExistsError(f"Folder '{unique_id}' already exists.")
            uniques = self.config._explode_unique(unique_id)
            if not uniques:
                raise InvalidConfigurationError(f"Invalid folder name '{unique_id}'.")

            # Check that the parent folder exists when creating a subfolder
            if "subfolder" in uniques:
                self.validate_parent_folder(unique_id, is_folder=True)

            self.config._folder_add(**uniques)
            self.config._saveconfig(self.config.file)
        else:
            if unique_id in all_nodes:
                raise NodeAlreadyExistsError(f"Node '{unique_id}' already exists.")

            # Check that the parent folder exists when creating a node in a folder
            self.validate_parent_folder(unique_id)

            # Ensure 'id' is in data for config._connections_add
            if "id" not in data:
                uniques = self.config._explode_unique(unique_id)
                if uniques and "id" in uniques:
                    data["id"] = uniques["id"]

            self.config._connections_add(**data)
            self.config._saveconfig(self.config.file)

    def update_node(self, unique_id, data):
        """Explicitly update an existing node."""
        all_nodes = self.config._getallnodes()
        if unique_id not in all_nodes:
            raise NodeNotFoundError(f"Node '{unique_id}' not found.")

        # Ensure 'id' is in data for config._connections_add
        if "id" not in data:
            uniques = self.config._explode_unique(unique_id)
            if uniques:
                data["id"] = uniques["id"]

        # config._connections_add handles updates correctly when the ID already exists
        self.config._connections_add(**data)
        self.config._saveconfig(self.config.file)

    def delete_node(self, unique_id, is_folder=False):
        """Delete a node or folder."""
        if is_folder:
            uniques = self.config._explode_unique(unique_id)
            if not uniques:
                raise NodeNotFoundError(f"Folder '{unique_id}' not found or invalid.")
            self.config._folder_del(**uniques)
        else:
            uniques = self.config._explode_unique(unique_id)
            if not uniques:
                raise NodeNotFoundError(f"Node '{unique_id}' not found or invalid.")
            self.config._connections_del(**uniques)

        self.config._saveconfig(self.config.file)

    def connect_node(self, unique_id, sftp=False, debug=False, logger=None):
        """Interact with a node directly."""
        from connpy.core import node
        from .profile_service import ProfileService

        node_data = self.config.getitem(unique_id, extract=False)
        if not node_data:
            raise NodeNotFoundError(f"Node '{unique_id}' not found.")

        # Resolve profiles
        profile_service = ProfileService(self.config)
        resolved_data = profile_service.resolve_node_data(node_data)

        n = node(unique_id, **resolved_data, config=self.config)
        if sftp:
            n.protocol = "sftp"

        n.interact(debug=debug, logger=logger)

    def move_node(self, src_id, dst_id, copy=False):
        """Move or copy a node."""
        self._validate_node_name(dst_id)

        node_data = self.config.getitem(src_id)
        if not node_data:
            raise NodeNotFoundError(f"Source node '{src_id}' not found.")

        if dst_id in self.config._getallnodes():
            raise NodeAlreadyExistsError(f"Destination node '{dst_id}' already exists.")

        new_uniques = self.config._explode_unique(dst_id)
        if not new_uniques:
            raise InvalidConfigurationError(f"Invalid destination format '{dst_id}'.")

        new_node_data = node_data.copy()
        new_node_data.update(new_uniques)

        self.config._connections_add(**new_node_data)

        if not copy:
            src_uniques = self.config._explode_unique(src_id)
            self.config._connections_del(**src_uniques)

        self.config._saveconfig(self.config.file)

    def bulk_add(self, ids, hosts, common_data):
        """Add multiple nodes with shared common configuration."""
        count = 0
        all_nodes = self.config._getallnodes()

        for i, uid in enumerate(ids):
            if uid in all_nodes:
                continue

            try:
                self._validate_node_name(uid)
            except ReservedNameError:
                # For bulk adds we just skip invalid names;
                # a CLI caller can be strict about this if it wants to.
                continue

            host = hosts[i] if i < len(hosts) else hosts[0]
            uniques = self.config._explode_unique(uid)
            if not uniques:
                continue

            node_data = common_data.copy()
            node_data.pop("ids", None)
            node_data.pop("location", None)
            node_data.update(uniques)
            node_data["host"] = host
            node_data["type"] = "connection"

            self.config._connections_add(**node_data)
            count += 1

        if count > 0:
            self.config._saveconfig(self.config.file)
        return count

    def full_replace(self, connections, profiles):
        """Replace all connections and profiles with new data."""
        self.config.connections = connections
        self.config.profiles = profiles
        self.config._saveconfig(self.config.file)

    def get_inventory(self):
        """Return a full snapshot of connections and profiles."""
        return {
            "connections": self.config.connections,
            "profiles": self.config.profiles
        }
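`list_nodes` builds a per-node formatting context and applies `format_str.format(**context)`, falling back to the raw unique ID if the template references a missing key. The same pattern in isolation, with illustrative sample data:

```python
def format_node(n_id, details, format_str):
    # Split "name@folder" (or "name@subfolder@folder") into display parts
    name = n_id.split("@")[0]
    location = n_id.partition("@")[2] or "root"

    context = dict(details)
    context.update({
        "name": name,
        "NAME": name.upper(),
        "location": location,
        "LOCATION": location.upper(),
    })
    # Uppercase aliases for every string value, like the service does
    for k, v in list(context.items()):
        if isinstance(v, str):
            context[k.upper()] = v.upper()
    try:
        return format_str.format(**context)
    except (KeyError, IndexError, ValueError):
        return n_id  # fall back to the raw unique ID

print(format_node("sw1@lab", {"host": "10.0.0.1"}, "{NAME} ({host}) in {location}"))
# → SW1 (10.0.0.1) in lab
```

The broad `except` mirrors the service's behavior: a bad user-supplied template never aborts the listing.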
@@ -1,276 +0,0 @@
from .base import BaseService
import yaml
import os
from .exceptions import InvalidConfigurationError, NodeNotFoundError


class PluginService(BaseService):
    """Business logic for enabling, disabling, and listing plugins."""

    def list_plugins(self):
        """List all core and user-defined plugins with their status and hash."""
        import os
        import hashlib

        # Check for user plugins directory
        plugin_dir = os.path.join(self.config.defaultdir, "plugins")
        # Check for core plugins directory
        core_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "..", "core_plugins")

        all_plugin_info = {}

        def get_hash(path):
            try:
                with open(path, "rb") as f:
                    return hashlib.md5(f.read()).hexdigest()
            except Exception:
                return ""

        # User plugins
        if os.path.exists(plugin_dir):
            for f in os.listdir(plugin_dir):
                if f.endswith(".py"):
                    name = f[:-3]
                    path = os.path.join(plugin_dir, f)
                    all_plugin_info[name] = {"enabled": True, "hash": get_hash(path)}
                elif f.endswith(".py.bkp"):
                    name = f[:-7]
                    all_plugin_info[name] = {"enabled": False}

        return all_plugin_info

    def add_plugin(self, name, source_file, update=False):
        """Add or update a plugin from a local file."""
        import os
        import shutil
        from connpy.plugins import Plugins

        if not name.isalpha() or not name.islower() or len(name) > 15:
            raise InvalidConfigurationError("Plugin name should be lowercase letters up to 15 characters.")

        p_manager = Plugins()
        # Reject malformed scripts
        error = p_manager.verify_script(source_file)
        if error:
            raise InvalidConfigurationError(f"Invalid plugin script: {error}")

        self._save_plugin_file(name, source_file, update, is_path=True)

    def add_plugin_from_bytes(self, name, content, update=False):
        """Add or update a plugin from bytes (gRPC)."""
        import tempfile
        import os

        if not name.isalpha() or not name.islower() or len(name) > 15:
            raise InvalidConfigurationError("Plugin name should be lowercase letters up to 15 characters.")

        # Write to a temp file so the script can be verified
        with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as tmp:
            tmp.write(content)
            tmp_path = tmp.name

        try:
            from connpy.plugins import Plugins
            p_manager = Plugins()
            error = p_manager.verify_script(tmp_path)
            if error:
                raise InvalidConfigurationError(f"Invalid plugin script: {error}")

            self._save_plugin_file(name, tmp_path, update, is_path=True)
        finally:
            if os.path.exists(tmp_path):
                os.remove(tmp_path)

    def _save_plugin_file(self, name, source, update=False, is_path=True):
        import os
        import shutil

        plugin_dir = os.path.join(self.config.defaultdir, "plugins")
        os.makedirs(plugin_dir, exist_ok=True)

        target_file = os.path.join(plugin_dir, f"{name}.py")
        backup_file = f"{target_file}.bkp"

        if not update and (os.path.exists(target_file) or os.path.exists(backup_file)):
            raise InvalidConfigurationError(f"Plugin '{name}' already exists.")

        try:
            if is_path:
                shutil.copy2(source, target_file)
            else:
                with open(target_file, "wb") as f:
                    f.write(source)
        except OSError as e:
            raise InvalidConfigurationError(f"Failed to save plugin file: {e}")

    def delete_plugin(self, name):
        """Remove a plugin file permanently."""
        import os
        plugin_file = os.path.join(self.config.defaultdir, "plugins", f"{name}.py")
        disabled_file = f"{plugin_file}.bkp"

        deleted = False
        for f in [plugin_file, disabled_file]:
            if os.path.exists(f):
                try:
                    os.remove(f)
                    deleted = True
                except OSError as e:
                    raise InvalidConfigurationError(f"Failed to delete plugin file '{f}': {e}")

        if not deleted:
            raise InvalidConfigurationError(f"Plugin '{name}' not found.")

    def enable_plugin(self, name):
        """Activate a plugin by renaming its backup file."""
        import os
        plugin_file = os.path.join(self.config.defaultdir, "plugins", f"{name}.py")
        disabled_file = f"{plugin_file}.bkp"

        if os.path.exists(plugin_file):
            return False  # Already enabled

        if not os.path.exists(disabled_file):
            raise InvalidConfigurationError(f"Plugin '{name}' not found.")

        try:
            os.rename(disabled_file, plugin_file)
            return True
        except OSError as e:
            raise InvalidConfigurationError(f"Failed to enable plugin '{name}': {e}")

    def disable_plugin(self, name):
        """Deactivate a plugin by renaming it to a backup file."""
        import os
        plugin_file = os.path.join(self.config.defaultdir, "plugins", f"{name}.py")
        disabled_file = f"{plugin_file}.bkp"

        if os.path.exists(disabled_file):
            return False  # Already disabled

        if not os.path.exists(plugin_file):
            raise InvalidConfigurationError(f"Plugin '{name}' not found or is a core plugin.")

        try:
            os.rename(plugin_file, disabled_file)
            return True
        except OSError as e:
            raise InvalidConfigurationError(f"Failed to disable plugin '{name}': {e}")

    def get_plugin_source(self, name):
        import os
        from ..services.exceptions import InvalidConfigurationError

        plugin_file = os.path.join(self.config.defaultdir, "plugins", f"{name}.py")
        core_path = os.path.dirname(os.path.realpath(__file__)) + f"/../core_plugins/{name}.py"

        if os.path.exists(plugin_file):
            target = plugin_file
        elif os.path.exists(core_path):
            target = core_path
        else:
            raise InvalidConfigurationError(f"Plugin '{name}' not found")

        with open(target, "r") as f:
            return f.read()

    def invoke_plugin(self, name, args_dict):
        import sys, io
        from argparse import Namespace
        from ..services.exceptions import InvalidConfigurationError
        from connpy.plugins import Plugins

        class MockApp:
            is_mock = True

            def __init__(self, config):
                from ..core import node, nodes
                from ..ai import ai
                from ..services.provider import ServiceProvider

                self.config = config
                self.node = node
                self.nodes = nodes
                self.ai = ai

                self.services = ServiceProvider(config, mode="local")

                # Get settings for CLI behavior
                settings = self.services.config_svc.get_settings()
                self.case = settings.get("case", False)
                self.fzf = settings.get("fzf", False)

                try:
                    self.nodes_list = self.services.nodes.list_nodes()
                    self.folders = self.services.nodes.list_folders()
                    self.profiles = self.services.profiles.list_profiles()
                except Exception:
                    self.nodes_list = []
                    self.folders = []
                    self.profiles = []

        args = Namespace(**args_dict)

        p_manager = Plugins()
        import os
        plugin_file = os.path.join(self.config.defaultdir, "plugins", f"{name}.py")
        core_path = os.path.dirname(os.path.realpath(__file__)) + f"/../core_plugins/{name}.py"

        if os.path.exists(plugin_file):
            target = plugin_file
        elif os.path.exists(core_path):
            target = core_path
        else:
            raise InvalidConfigurationError(f"Plugin '{name}' not found")

        module = p_manager._import_from_path(target)
        parser = module.Parser().parser if hasattr(module, "Parser") else None

        if "__func_name__" in args_dict and hasattr(module, args_dict["__func_name__"]):
            args.func = getattr(module, args_dict["__func_name__"])

        app = MockApp(self.config)

        from .. import printer
        from rich.console import Console
        import queue
        import threading

        q = queue.Queue()

        class QueueIO(io.StringIO):
            def write(self, s):
                q.put(s)
                return len(s)

            def flush(self):
                pass

        buf = QueueIO()
        old_console = printer._get_console()
        old_err_console = printer._get_err_console()

        def run_plugin():
            printer.set_thread_console(Console(file=buf, theme=printer.connpy_theme, force_terminal=True))
            printer.set_thread_err_console(Console(file=buf, theme=printer.connpy_theme, force_terminal=True))
            printer.set_thread_stream(buf)
            try:
                if hasattr(module, "Entrypoint"):
                    module.Entrypoint(args, parser, app)
            except BaseException as e:
                if not isinstance(e, SystemExit):
                    import traceback
                    printer.err_console.print(traceback.format_exc())
            finally:
                printer.set_thread_console(old_console)
                printer.set_thread_err_console(old_err_console)
                printer.set_thread_stream(None)
                q.put(None)

        t = threading.Thread(target=run_plugin, daemon=True)
        t.start()

        while True:
            item = q.get()
            if item is None:
                break
            yield item
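`invoke_plugin` streams plugin output by redirecting the console into a queue-backed file object and yielding items until a `None` sentinel arrives. The same producer/consumer pattern in isolation, without the `rich` and `printer` dependencies:

```python
import io
import queue
import threading

def stream_output(work):
    """Run `work(file_like)` in a thread and yield its output incrementally."""
    q = queue.Queue()

    class QueueIO(io.StringIO):
        def write(self, s):
            q.put(s)          # forward each write to the consumer
            return len(s)

    buf = QueueIO()

    def runner():
        try:
            work(buf)
        finally:
            q.put(None)       # sentinel: producer is done

    threading.Thread(target=runner, daemon=True).start()
    while True:
        item = q.get()
        if item is None:
            break
        yield item

print(list(stream_output(lambda f: f.write("hello\n"))))
```

Putting the sentinel in a `finally` block, as the service does, guarantees the consumer loop terminates even when the worker raises.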
@@ -1,134 +0,0 @@
from .base import BaseService
from .exceptions import ProfileNotFoundError, ProfileAlreadyExistsError, InvalidConfigurationError


class ProfileService(BaseService):
    """Business logic for node profiles management."""

    def list_profiles(self, filter_str=None):
        """List all profile names, optionally filtered."""
        profiles = list(self.config.profiles.keys())
        case_sensitive = self.config.config.get("case", False)

        if filter_str:
            if not case_sensitive:
                f_str = filter_str.lower()
                return [p for p in profiles if f_str in p.lower()]
            else:
                return [p for p in profiles if filter_str in p]
        return profiles

    def get_profile(self, name, resolve=True):
        """Get the profile dictionary, optionally resolved."""
        profile = self.config.profiles.get(name)
        if not profile:
            raise ProfileNotFoundError(f"Profile '{name}' not found.")

        if resolve:
            return self.resolve_node_data(profile)
        return profile

    def add_profile(self, name, data):
        """Add a new profile."""
        if name in self.config.profiles:
            raise ProfileAlreadyExistsError(f"Profile '{name}' already exists.")

        # Filter data to match _profiles_add signature and ensure id is passed
        allowed_keys = {"host", "options", "logs", "password", "port", "protocol", "user", "tags", "jumphost"}
        filtered_data = {k: v for k, v in data.items() if k in allowed_keys}

        self.config._profiles_add(id=name, **filtered_data)
        self.config._saveconfig(self.config.file)

    def resolve_node_data(self, node_data):
        """Resolve profile references (@profile) in node data and handle inheritance."""
        resolved = node_data.copy()

        # 1. Identify all referenced profiles to support inheritance
        referenced_profiles = []
        for value in resolved.values():
            if isinstance(value, str) and value.startswith("@"):
                referenced_profiles.append(value[1:])
            elif isinstance(value, list):
                for item in value:
                    if isinstance(item, str) and item.startswith("@"):
                        referenced_profiles.append(item[1:])

        # 2. Resolve explicit references
        for key, value in resolved.items():
            if isinstance(value, str) and value.startswith("@"):
                profile_name = value[1:]
                try:
                    profile = self.get_profile(profile_name, resolve=True)
                    resolved[key] = profile.get(key, "")
                except ProfileNotFoundError:
                    resolved[key] = ""
            elif isinstance(value, list):
                resolved_list = []
                for item in value:
                    if isinstance(item, str) and item.startswith("@"):
                        profile_name = item[1:]
                        try:
                            profile = self.get_profile(profile_name, resolve=True)
                            if "password" in profile:
                                resolved_list.append(profile["password"])
                        except ProfileNotFoundError:
                            pass
                    else:
                        resolved_list.append(item)
                resolved[key] = resolved_list

        # 3. Inheritance: fill empty keys from the first referenced profile
        if referenced_profiles:
            base_profile_name = referenced_profiles[0]
            try:
                base_profile = self.get_profile(base_profile_name, resolve=True)
                for key, value in base_profile.items():
                    # Fill if the key is missing or empty
                    if key not in resolved or resolved[key] == "" or resolved[key] == [] or resolved[key] is None:
                        resolved[key] = value
            except ProfileNotFoundError:
                pass

        # 4. Handle default protocol
        if resolved.get("protocol") == "" or resolved.get("protocol") is None:
            try:
                default_profile = self.get_profile("default", resolve=True)
                resolved["protocol"] = default_profile.get("protocol", "ssh")
            except ProfileNotFoundError:
                resolved["protocol"] = "ssh"

        return resolved

    def delete_profile(self, name):
        """Delete an existing profile, with safety checks."""
        if name not in self.config.profiles:
            raise ProfileNotFoundError(f"Profile '{name}' not found.")

        if name == "default":
            raise InvalidConfigurationError("Cannot delete the 'default' profile.")

        used_by = self.config._profileused(name)
        if used_by:
            # Return the list of nodes using it so the UI can inform the user
            raise InvalidConfigurationError(f"Profile '{name}' is used by nodes: {', '.join(used_by)}")

        self.config._profiles_del(id=name)
        self.config._saveconfig(self.config.file)

    def update_profile(self, name, data):
        """Update an existing profile."""
        if name not in self.config.profiles:
            raise ProfileNotFoundError(f"Profile '{name}' not found.")

        # Merge with existing data
        existing = self.get_profile(name, resolve=False)
        updated_data = existing.copy()
        updated_data.update(data)

        # Filter data to match _profiles_add signature
        allowed_keys = {"host", "options", "logs", "password", "port", "protocol", "user", "tags", "jumphost"}
        filtered_data = {k: v for k, v in updated_data.items() if k in allowed_keys}

        self.config._profiles_add(id=name, **filtered_data)
        self.config._saveconfig(self.config.file)
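The resolver above replaces each `@profile` string value with the same-named field from the referenced profile, falling back to an empty string when the profile or field is missing. A reduced sketch of that substitution rule, with an in-memory dict standing in for `config.profiles` (the store contents are illustrative):

```python
PROFILES = {"lab": {"user": "admin", "port": 22}}  # illustrative profile store

def resolve(node_data, profiles=PROFILES):
    resolved = dict(node_data)
    for key, value in resolved.items():
        if isinstance(value, str) and value.startswith("@"):
            profile = profiles.get(value[1:], {})
            # Take the same field from the profile, or empty if absent
            resolved[key] = profile.get(key, "")
    return resolved

print(resolve({"host": "10.0.0.5", "user": "@lab", "port": "@lab"}))
# → {'host': '10.0.0.5', 'user': 'admin', 'port': 22}
```

The full service additionally handles lists (password chains for jumphosts), inheritance from the first referenced profile, and a `default` protocol fallback.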
@@ -1,71 +0,0 @@
from .exceptions import InvalidConfigurationError


class RemoteStub:
    def __getattr__(self, name):
        raise NotImplementedError(
            "Remote mode (gRPC) is not yet available. "
            "Use local mode or wait for the gRPC implementation."
        )


class ServiceProvider:
    """Dynamic service backend. Transparently provides local or remote services."""

    def __init__(self, config, mode="local", remote_host=None):
        self.mode = mode
        self.config = config
        self.remote_host = remote_host

        if mode == "local":
            self._init_local()
        elif mode == "remote":
            self._init_remote()
        else:
            raise ValueError(f"Unknown service mode: {mode}")

    def _init_local(self):
        from .node_service import NodeService
        from .profile_service import ProfileService
        from .config_service import ConfigService
        from .plugin_service import PluginService
        from .ai_service import AIService
        from .system_service import SystemService
        from .execution_service import ExecutionService
        from .import_export_service import ImportExportService
        from .context_service import ContextService
        from .sync_service import SyncService

        self.nodes = NodeService(self.config)
        self.profiles = ProfileService(self.config)
        self.config_svc = ConfigService(self.config)
        self.plugins = PluginService(self.config)
        self.ai = AIService(self.config)
        self.system = SystemService(self.config)
        self.execution = ExecutionService(self.config)
        self.import_export = ImportExportService(self.config)
        self.context = ContextService(self.config)
        self.sync = SyncService(self.config)

    def _init_remote(self):
        # Keep ConfigService working locally so the user can revert the mode
        from .config_service import ConfigService
        from .context_service import ContextService
        from .sync_service import SyncService
        self.config_svc = ConfigService(self.config)
        self.context = ContextService(self.config)
        self.sync = SyncService(self.config)

        if not self.remote_host:
            raise InvalidConfigurationError("Remote host must be specified in remote mode")

        import grpc
        from ..grpc_layer.stubs import NodeStub, ProfileStub, PluginStub, AIStub, ExecutionStub, ImportExportStub, SystemStub

        channel = grpc.insecure_channel(self.remote_host)

        self.nodes = NodeStub(channel, remote_host=self.remote_host, config=self.config)
        self.profiles = ProfileStub(channel, remote_host=self.remote_host, node_stub=self.nodes)
        self.plugins = PluginStub(channel, remote_host=self.remote_host)
        self.ai = AIStub(channel, remote_host=self.remote_host)
        self.system = SystemStub(channel, remote_host=self.remote_host)
        self.execution = ExecutionStub(channel, remote_host=self.remote_host)
        self.import_export = ImportExportStub(channel, remote_host=self.remote_host)
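`RemoteStub` relies on `__getattr__`, which Python calls only when normal attribute lookup fails, so any service call on the stub raises a clear error instead of an obscure `AttributeError`. The guard in miniature (class name here is illustrative):

```python
class NotReadyStub:
    # Any attribute access falls through to __getattr__ and fails loudly,
    # naming the attribute the caller tried to use.
    def __getattr__(self, name):
        raise NotImplementedError(
            f"'{name}' requires remote mode, which is not available yet"
        )

stub = NotReadyStub()
try:
    stub.list_nodes()
except NotImplementedError as e:
    print(e)
```

Because `__getattr__` fires on the attribute lookup itself, even `stub.list_nodes` without the call already raises, which keeps failure close to the misuse.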
@@ -1,389 +0,0 @@
|
||||
import os
|
||||
import time
|
||||
import zipfile
|
||||
import tempfile
|
||||
import io
|
||||
import yaml
|
||||
import threading
|
||||
from datetime import datetime
|
||||
from google.oauth2.credentials import Credentials
|
||||
from google.auth.transport.requests import Request
|
||||
from googleapiclient.discovery import build
|
||||
from google.auth.exceptions import RefreshError
|
||||
from google_auth_oauthlib.flow import InstalledAppFlow
|
||||
from googleapiclient.http import MediaFileUpload, MediaIoBaseDownload
|
||||
from googleapiclient.errors import HttpError
|
||||
|
||||
from .base import BaseService
|
||||
from .. import printer
|
||||
|
||||
class SyncService(BaseService):
|
||||
"""Business logic for Google Drive synchronization."""
|
||||
|
||||
def __init__(self, config):
|
||||
super().__init__(config)
|
||||
self.scopes = ['https://www.googleapis.com/auth/drive.appdata']
|
||||
self.token_file = os.path.join(self.config.defaultdir, "gtoken.json")
|
||||
|
||||
# Embedded OAuth config
|
||||
self.client_config = {
|
||||
"installed": {
|
||||
"client_id": "559598250648-cr189kfrga2il1a6d6nkaspq0a9pn5vv." + "apps.googleusercontent.com",
|
||||
"project_id": "celtic-surface-420323",
|
||||
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
|
||||
"token_uri": "https://oauth2.googleapis.com/token",
|
||||
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
|
||||
"client_secret": "GOCSPX-" + "VVfOSrJLPU90Pl0g7aAXM9GK2xPE",
|
||||
"redirect_uris": ["http://localhost"]
|
||||
}
|
||||
}
|
||||
|
||||
# Sync status from config
|
||||
self.sync_enabled = self.config.config.get("sync", False)
|
||||
self.sync_remote = self.config.config.get("sync_remote", False)
|
||||
|
||||
def login(self):
|
||||
"""Authenticate with Google Drive."""
|
||||
creds = None
|
||||
if os.path.exists(self.token_file):
|
||||
creds = Credentials.from_authorized_user_file(self.token_file, self.scopes)
|
||||
|
||||
try:
|
||||
if not creds or not creds.valid:
|
||||
if creds and creds.expired and creds.refresh_token:
|
||||
creds.refresh(Request())
|
||||
else:
|
||||
flow = InstalledAppFlow.from_client_config(self.client_config, self.scopes)
|
||||
creds = flow.run_local_server(port=0, access_type='offline')
|
||||
|
||||
with open(self.token_file, 'w') as token:
|
||||
token.write(creds.to_json())
|
||||
|
||||
printer.success("Logged in successfully.")
|
||||
return True
|
||||
|
||||
except RefreshError:
|
||||
if os.path.exists(self.token_file):
|
||||
os.remove(self.token_file)
|
||||
printer.warning("Existing token was invalid and has been removed. Please log in again.")
|
||||
return False
|
||||
except Exception as e:
|
||||
printer.error(f"Login failed: {e}")
|
||||
return False
|
||||
|
||||
def logout(self):
|
||||
"""Remove Google Drive credentials."""
|
||||
if os.path.exists(self.token_file):
|
||||
os.remove(self.token_file)
|
||||
printer.success("Logged out successfully.")
|
||||
else:
|
||||
printer.info("No credentials file found. Already logged out.")
|
||||
|
||||
def get_credentials(self):
|
||||
"""Get valid credentials, refreshing if necessary."""
|
||||
if os.path.exists(self.token_file):
|
||||
creds = Credentials.from_authorized_user_file(self.token_file, self.scopes)
|
||||
else:
|
||||
return None
|
||||
|
||||
if not creds or not creds.valid:
|
||||
if creds and creds.expired and creds.refresh_token:
|
||||
try:
|
||||
creds.refresh(Request())
|
||||
except RefreshError:
|
||||
return None
|
||||
else:
|
||||
return None
|
||||
return creds
|
||||
|
||||
def check_login_status(self):
|
||||
"""Check if logged in to Google Drive."""
|
||||
if os.path.exists(self.token_file):
|
||||
creds = Credentials.from_authorized_user_file(self.token_file)
|
||||
if creds and creds.expired and creds.refresh_token:
|
||||
try:
|
||||
creds.refresh(Request())
|
||||
except RefreshError:
|
||||
pass
|
||||
return True if creds.valid else "Invalid"
|
||||
return False
|
||||
|
||||
    def list_backups(self):
        """List files in Google Drive appDataFolder."""
        creds = self.get_credentials()
        if not creds:
            printer.error("Not logged in to Google Drive.")
            return []

        try:
            service = build("drive", "v3", credentials=creds)
            response = service.files().list(
                spaces="appDataFolder",
                fields="files(id, name, appProperties)",
                # Fetch up to the retention limit so the 100-backup pruning
                # in compress_and_upload can actually see every backup
                pageSize=100,
            ).execute()

            files_info = []
            for file in response.get("files", []):
                files_info.append({
                    "name": file.get("name"),
                    "id": file.get("id"),
                    "date": file.get("appProperties", {}).get("date"),
                    "timestamp": file.get("appProperties", {}).get("timestamp")
                })
            return files_info
        except HttpError as error:
            printer.error(f"Google Drive API error: {error}")
            return []

    def compress_and_upload(self, remote_data=None):
        """Compress config and upload to Drive."""
        timestamp = int(time.time() * 1000)
        with tempfile.TemporaryDirectory() as tmp_dir:
            zip_path = os.path.join(tmp_dir, f"connpy-backup-{timestamp}.zip")

            with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
                # If we have remote data, we create a virtual config file
                if remote_data:
                    config_tmp = os.path.join(tmp_dir, "config.yaml")
                    with open(config_tmp, 'w') as f:
                        yaml.dump(remote_data, f, default_flow_style=False)
                    zipf.write(config_tmp, "config.yaml")
                else:
                    # Legacy behavior: use local file
                    zipf.write(self.config.file, os.path.basename(self.config.file))

                # Always include the key if it exists
                if os.path.exists(self.config.key):
                    zipf.write(self.config.key, ".osk")

            # Manage retention (max 100 backups)
            backups = self.list_backups()
            if len(backups) >= 100:
                oldest = min(backups, key=lambda x: x['timestamp'] or '0')
                self.delete_backup(oldest['id'])

            # Upload
            return self.upload_file(zip_path, timestamp)

    def upload_file(self, file_path, timestamp):
        """Internal method to upload to Drive."""
        creds = self.get_credentials()
        if not creds:
            return False

        service = build('drive', 'v3', credentials=creds)
        date_str = datetime.fromtimestamp(timestamp/1000).strftime('%Y-%m-%d %H:%M:%S')

        file_metadata = {
            'name': os.path.basename(file_path),
            'parents': ["appDataFolder"],
            'appProperties': {
                'timestamp': str(timestamp),
                'date': date_str
            }
        }
        media = MediaFileUpload(file_path)
        try:
            service.files().create(body=file_metadata, media_body=media, fields='id').execute()
            printer.success("Backup uploaded to Google Drive.")
            return True
        except Exception as e:
            printer.error(f"Upload failed: {e}")
            return False

    def delete_backup(self, file_id):
        """Delete a backup from Drive."""
        creds = self.get_credentials()
        if not creds:
            return False
        try:
            service = build("drive", "v3", credentials=creds)
            service.files().delete(fileId=file_id).execute()
            return True
        except Exception as e:
            printer.error(f"Delete failed: {e}")
            return False

    def restore_backup(self, file_id=None, restore_config=True, restore_nodes=True, app_instance=None):
        """Download and analyze a backup for restoration."""
        backups = self.list_backups()
        if not backups:
            printer.error("No backups found.")
            return None

        if file_id:
            selected = next((f for f in backups if f['id'] == file_id), None)
            if not selected:
                printer.error(f"Backup {file_id} not found.")
                return None
        else:
            selected = max(backups, key=lambda x: x['timestamp'] or '0')

        with tempfile.TemporaryDirectory() as tmp_dir:
            zip_path = os.path.join(tmp_dir, 'restore.zip')
            if self.download_file(selected['id'], zip_path):
                return self.perform_restore(zip_path, restore_config, restore_nodes, app_instance)
            return False

    def download_file(self, file_id, dest):
        """Internal method to download from Drive."""
        creds = self.get_credentials()
        if not creds:
            return False
        try:
            service = build('drive', 'v3', credentials=creds)
            request = service.files().get_media(fileId=file_id)
            with io.FileIO(dest, mode='wb') as fh:
                downloader = MediaIoBaseDownload(fh, request)
                done = False
                while not done:
                    _, done = downloader.next_chunk()
            return True
        except Exception as e:
            printer.error(f"Download failed: {e}")
            return False

    def perform_restore(self, zip_path, restore_config=True, restore_nodes=True, app_instance=None):
        """Execute the actual restoration of files or remote nodes."""
        try:
            with zipfile.ZipFile(zip_path, 'r') as zipf:
                names = zipf.namelist()
                dest_dir = os.path.dirname(self.config.file)

                # We need to read the config content from zip to decide what to do
                backup_data = {}
                config_filename = "config.yaml" if "config.yaml" in names else ("config.json" if "config.json" in names else None)

                if config_filename:
                    with zipf.open(config_filename) as f:
                        backup_data = yaml.safe_load(f)

                # 1. Restore Key (.osk) - Part of config identity
                if restore_config and ".osk" in names:
                    zipf.extract(".osk", os.path.dirname(self.config.key))

                # 2. Restore Config (Local Settings)
                if restore_config and backup_data:
                    local_config = self.config.config.copy()

                    # Capture current connectivity settings to preserve them
                    current_mode = local_config.get("service_mode", "local")
                    current_remote = local_config.get("remote_host")

                    if "config" in backup_data:
                        local_config.update(backup_data["config"])

                    # Restore connectivity settings - we don't want a restore to
                    # accidentally switch us between local and remote and break connectivity
                    local_config["service_mode"] = current_mode
                    if current_remote:
                        local_config["remote_host"] = current_remote

                    self.config.config = local_config
                    self.config._saveconfig(self.config.file)

                # 3. Restore Nodes and Profiles
                if restore_nodes and backup_data:
                    connections = backup_data.get("connections", {})
                    profiles = backup_data.get("profiles", {})

                    if app_instance and app_instance.services.mode == "remote":
                        # Push to Remote via gRPC
                        app_instance.services.nodes.full_replace(connections, profiles)
                    else:
                        # Restore to Local config file
                        self.config.connections = connections
                        self.config.profiles = profiles
                        self.config._saveconfig(self.config.file)

            # Clear caches
            for f in [self.config.cachefile, self.config.fzf_cachefile]:
                if os.path.exists(f):
                    os.remove(f)

            return True
        except Exception as e:
            printer.error(f"Restoration failed: {e}")
            return False

    def analyze_backup_content(self, file_id=None):
        """Analyze a backup without restoring to provide info for confirmation."""
        backups = self.list_backups()
        if not backups:
            return None
        selected = next((f for f in backups if f['id'] == file_id), None) if file_id else max(backups, key=lambda x: x['timestamp'] or '0')
        # Guard against a file_id that doesn't match any backup
        if not selected:
            return None

        with tempfile.TemporaryDirectory() as tmp_dir:
            zip_path = os.path.join(tmp_dir, 'analyze.zip')
            if self.download_file(selected['id'], zip_path):
                with zipfile.ZipFile(zip_path, 'r') as zipf:
                    names = zipf.namelist()
                    config_filename = "config.yaml" if "config.yaml" in names else ("config.json" if "config.json" in names else None)
                    if config_filename:
                        with zipf.open(config_filename) as f:
                            data = yaml.safe_load(f)
                        connections = data.get("connections", {})

                        # Accurate recursive count
                        nodes_count = 0
                        folders_count = 0

                        # Layer 1
                        for k, v in connections.items():
                            if isinstance(v, dict):
                                if v.get("type") == "connection":
                                    nodes_count += 1
                                elif v.get("type") == "folder":
                                    folders_count += 1
                                    # Layer 2
                                    for k2, v2 in v.items():
                                        if isinstance(v2, dict):
                                            if v2.get("type") == "connection":
                                                nodes_count += 1
                                            elif v2.get("type") == "subfolder":
                                                folders_count += 1
                                                # Layer 3
                                                for k3, v3 in v2.items():
                                                    if isinstance(v3, dict) and v3.get("type") == "connection":
                                                        nodes_count += 1

                        return {
                            "nodes": nodes_count,
                            "folders": folders_count,
                            "profiles": len(data.get("profiles", {})),
                            "has_config": "config" in data,
                            "has_key": ".osk" in names
                        }
        return None

    def perform_sync(self, app_instance):
        """Background sync logic."""
        # Always check current config state
        sync_enabled = self.config.config.get("sync", False)
        sync_remote = self.config.config.get("sync_remote", False)

        if not sync_enabled:
            return

        if self.check_login_status() != True:
            printer.warning("Auto-sync: Not logged in to Google Drive.")
            return

        remote_data = None
        if sync_remote and app_instance.services.mode == "remote":
            try:
                inventory = app_instance.services.nodes.get_inventory()
                # Merge with local settings
                local_settings = app_instance.services.config_svc.get_settings()
                local_settings.pop("configfolder", None)

                # Maintain proper config structure: {config: {}, connections: {}, profiles: {}}
                remote_data = {
                    "config": local_settings,
                    "connections": inventory.get("connections", {}),
                    "profiles": inventory.get("profiles", {})
                }
            except Exception as e:
                printer.warning(f"Could not fetch remote inventory for sync: {e}")

        # Run in thread to not block CLI
        threading.Thread(
            target=self.compress_and_upload,
            args=(remote_data,)
        ).start()

@@ -1,87 +0,0 @@
from .base import BaseService
from .exceptions import ConnpyError


class SystemService(BaseService):
    """Business logic for application lifecycle (API, processes)."""

    def start_api(self, port=None):
        """Start the Connpy REST API."""
        from connpy.api import start_api
        try:
            start_api(port, config=self.config)
        except Exception as e:
            raise ConnpyError(f"Failed to start API: {e}")

    def debug_api(self, port=None):
        """Start the Connpy REST API in debug mode."""
        from connpy.api import debug_api
        try:
            debug_api(port, config=self.config)
        except Exception as e:
            raise ConnpyError(f"Failed to start API in debug mode: {e}")

    def stop_api(self):
        """Stop the Connpy REST API."""
        try:
            import os
            import signal

            pids = ["/run/connpy.pid", "/tmp/connpy.pid"]
            stopped = False
            for pid_file in pids:
                if os.path.exists(pid_file):
                    try:
                        with open(pid_file, "r") as f:
                            # Read only the first line (PID)
                            line = f.readline().strip()
                        if not line:
                            continue
                        pid = int(line)
                        os.kill(pid, signal.SIGTERM)
                        # Remove the PID file after successful kill
                        os.remove(pid_file)
                        stopped = True
                    except (ValueError, OSError, ProcessLookupError):
                        # If process is already dead, just remove the stale PID file
                        try:
                            os.remove(pid_file)
                        except OSError:
                            pass
                        continue
            return stopped
        except Exception as e:
            raise ConnpyError(f"Failed to stop API: {e}")

    def restart_api(self, port=None):
        """Restart the Connpy REST API, maintaining the current port if none provided."""
        if port is None:
            status = self.get_api_status()
            if status["running"] and status.get("port"):
                port = status["port"]

        self.stop_api()
        import time
        time.sleep(1)
        self.start_api(port)

    def get_api_status(self):
        """Check if the API is currently running."""
        import os
        pids = ["/run/connpy.pid", "/tmp/connpy.pid"]
        for pid_file in pids:
            if os.path.exists(pid_file):
                try:
                    with open(pid_file, "r") as f:
                        pid_line = f.readline().strip()
                        port_line = f.readline().strip()
                    if not pid_line:
                        continue
                    pid = int(pid_line)
                    port = int(port_line) if port_line else None
                    # Signal 0 checks for process existence without killing it
                    os.kill(pid, 0)
                    return {"running": True, "pid": pid, "port": port, "pid_file": pid_file}
                except (ValueError, OSError, ProcessLookupError):
                    continue
        return {"running": False}

@@ -1 +0,0 @@
# Tests package

@@ -1,193 +0,0 @@
"""Shared fixtures for connpy tests.

All tests use tmp_path to create isolated config/keys.
No test touches ~/.config/conn/
"""
import pytest
import json
import yaml
import os
from unittest.mock import patch, MagicMock
from Crypto.PublicKey import RSA


# ---------------------------------------------------------------------------
# Minimal config data
# ---------------------------------------------------------------------------
DEFAULT_CONFIG = {
    "config": {"case": False, "idletime": 30, "fzf": False},
    "connections": {},
    "profiles": {
        "default": {
            "host": "", "protocol": "ssh", "port": "", "user": "",
            "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""
        }
    }
}

SAMPLE_CONNECTIONS = {
    "router1": {
        "host": "10.0.0.1", "protocol": "ssh", "port": "22",
        "user": "admin", "password": "pass1", "options": "",
        "logs": "", "tags": "", "jumphost": "", "type": "connection"
    },
    "office": {
        "type": "folder",
        "server1": {
            "host": "10.0.1.1", "protocol": "ssh", "port": "",
            "user": "root", "password": "pass2", "options": "",
            "logs": "", "tags": "", "jumphost": "", "type": "connection"
        },
        "datacenter": {
            "type": "subfolder",
            "db1": {
                "host": "10.0.2.1", "protocol": "ssh", "port": "",
                "user": "dbadmin", "password": "pass3", "options": "",
                "logs": "", "tags": "", "jumphost": "", "type": "connection"
            }
        }
    }
}

SAMPLE_PROFILES = {
    "default": {
        "host": "", "protocol": "ssh", "port": "", "user": "",
        "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""
    },
    "office-user": {
        "host": "", "protocol": "ssh", "port": "", "user": "officeadmin",
        "password": "officepass", "options": "", "logs": "", "tags": "", "jumphost": ""
    }
}


# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------

@pytest.fixture
def tmp_config_dir(tmp_path):
    """Create an isolated config directory with config.yaml and RSA key."""
    config_dir = tmp_path / ".config" / "conn"
    config_dir.mkdir(parents=True)
    plugins_dir = config_dir / "plugins"
    plugins_dir.mkdir()

    # Write config.yaml
    config_file = config_dir / "config.yaml"
    config_file.write_text(yaml.dump(DEFAULT_CONFIG, default_flow_style=False, sort_keys=False))
    os.chmod(str(config_file), 0o600)

    # Write .folder (points to itself)
    folder_file = config_dir / ".folder"
    folder_file.write_text(str(config_dir))

    # Generate RSA key
    key = RSA.generate(2048)
    key_file = config_dir / ".osk"
    key_file.write_bytes(key.export_key("PEM"))
    os.chmod(str(key_file), 0o600)

    return config_dir


@pytest.fixture
def config(tmp_config_dir):
    """Create a configfile instance pointing to tmp directory."""
    from connpy.configfile import configfile
    conf_path = str(tmp_config_dir / "config.yaml")
    key_path = str(tmp_config_dir / ".osk")
    return configfile(conf=conf_path, key=key_path)


@pytest.fixture
def populated_config(tmp_config_dir):
    """Create a configfile with sample nodes/profiles pre-loaded."""
    config_file = tmp_config_dir / "config.yaml"
    data = {
        "config": {"case": False, "idletime": 30, "fzf": False},
        "connections": SAMPLE_CONNECTIONS,
        "profiles": SAMPLE_PROFILES
    }
    config_file.write_text(yaml.dump(data, default_flow_style=False, sort_keys=False))
    from connpy.configfile import configfile
    return configfile(conf=str(config_file), key=str(tmp_config_dir / ".osk"))


@pytest.fixture
def mock_pexpect():
    """Mock pexpect.spawn for connection tests."""
    with patch("connpy.core.pexpect") as mock_pexp:
        child = MagicMock()
        child.before = b""
        child.after = b"router#"
        child.readline.return_value = b""
        child.child_fd = 3
        mock_pexp.spawn.return_value = child
        mock_pexp.EOF = object()
        mock_pexp.TIMEOUT = object()

        # Also mock fdpexpect
        with patch("connpy.core.fdpexpect", create=True) as mock_fd:
            mock_fd.fdspawn.return_value = MagicMock()
            yield {
                "pexpect": mock_pexp,
                "child": child,
                "fdpexpect": mock_fd
            }


@pytest.fixture
def mock_litellm():
    """Mock litellm.completion for AI tests."""
    with patch("connpy.ai.completion") as mock_comp:
        # Create a default response
        msg = MagicMock()
        msg.content = "Test response from AI"
        msg.tool_calls = None
        msg.role = "assistant"
        msg.model_dump.return_value = {
            "role": "assistant",
            "content": "Test response from AI"
        }

        choice = MagicMock()
        choice.message = msg

        response = MagicMock()
        response.choices = [choice]
        response.usage = MagicMock()
        response.usage.prompt_tokens = 100
        response.usage.completion_tokens = 50
        response.usage.total_tokens = 150

        mock_comp.return_value = response

        yield {
            "completion": mock_comp,
            "response": response,
            "message": msg,
            "choice": choice
        }


@pytest.fixture
def ai_config(tmp_config_dir):
    """Create a configfile with AI keys configured for AI tests."""
    config_file = tmp_config_dir / "config.yaml"
    data = {
        "config": {
            "case": False, "idletime": 30, "fzf": False,
            "ai": {
                "engineer_model": "test/test-model",
                "engineer_api_key": "test-engineer-key",
                "architect_model": "test/test-architect",
                "architect_api_key": "test-architect-key"
            }
        },
        "connections": SAMPLE_CONNECTIONS,
        "profiles": SAMPLE_PROFILES
    }
    config_file.write_text(yaml.dump(data, default_flow_style=False, sort_keys=False))
    from connpy.configfile import configfile
    return configfile(conf=str(config_file), key=str(tmp_config_dir / ".osk"))

@@ -1,483 +0,0 @@
"""Tests for connpy.ai module."""
import json
import os
import pytest
from unittest.mock import patch, MagicMock


# =========================================================================
# AI Init tests
# =========================================================================

class TestAIInit:
    def test_init_with_keys(self, ai_config, mock_litellm):
        """Initializes correctly when keys are configured."""
        from connpy.ai import ai
        myai = ai(ai_config)
        assert myai.engineer_model == "test/test-model"
        assert myai.architect_model == "test/test-architect"

    def test_ask_missing_engineer_key(self, config):
        """Raises ValueError if engineer key is missing when asking."""
        from connpy.ai import ai
        myai = ai(config)
        with pytest.raises(ValueError) as exc:
            myai.ask("hello")
        assert "Engineer API key not configured" in str(exc.value)

    def test_init_missing_architect_key_warns(self, ai_config, capsys, mock_litellm):
        """Warns if architect key is missing but doesn't crash."""
        # Remove architect key
        ai_config.config["ai"]["architect_api_key"] = None
        from connpy.ai import ai
        # Should not raise
        myai = ai(ai_config)
        assert myai.architect_key is None

    def test_default_models(self, config):
        """Default models are set correctly when not configured."""
        config.config["ai"] = {"engineer_api_key": "test-key", "architect_api_key": "test-key"}
        from connpy.ai import ai
        myai = ai(config)
        assert "gemini" in myai.engineer_model.lower()
        assert "claude" in myai.architect_model.lower() or "anthropic" in myai.architect_model.lower()

    def test_init_loads_memory(self, ai_config, tmp_path, mock_litellm):
        """Loads long-term memory from file if it exists."""
        memory_path = os.path.join(ai_config.defaultdir, "ai_memory.md")
        from connpy.ai import ai

        # Capture the real callables before patching so the side_effect
        # lambdas don't recurse into their own mocks
        real_exists = os.path.exists
        real_open = open
        with patch("os.path.exists", side_effect=lambda p: True if p == memory_path else real_exists(p)):
            with patch("builtins.open", side_effect=lambda f, *a, **kw: (
                __import__("io").StringIO("## Memory\nRouter1 is border router")
                if f == memory_path else real_open(f, *a, **kw)
            )):
                try:
                    myai = ai(ai_config)
                except Exception:
                    pass  # May fail on other file opens, that's ok


# =========================================================================
# register_ai_tool tests
# =========================================================================

class TestRegisterAITool:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def _make_tool_def(self, name="my_tool"):
        return {
            "type": "function",
            "function": {
                "name": name,
                "description": "Test tool",
                "parameters": {"type": "object", "properties": {}}
            }
        }

    def test_register_tool_engineer(self, myai):
        tool_def = self._make_tool_def()
        myai.register_ai_tool(tool_def, lambda self, **kw: "ok", target="engineer")
        assert len(myai.external_engineer_tools) == 1
        assert len(myai.external_architect_tools) == 0

    def test_register_tool_architect(self, myai):
        tool_def = self._make_tool_def()
        myai.register_ai_tool(tool_def, lambda self, **kw: "ok", target="architect")
        assert len(myai.external_architect_tools) == 1
        assert len(myai.external_engineer_tools) == 0

    def test_register_tool_both(self, myai):
        tool_def = self._make_tool_def()
        myai.register_ai_tool(tool_def, lambda self, **kw: "ok", target="both")
        assert len(myai.external_engineer_tools) == 1
        assert len(myai.external_architect_tools) == 1

    def test_register_tool_handler(self, myai):
        tool_def = self._make_tool_def("custom_tool")
        handler = lambda self, **kw: "result"
        myai.register_ai_tool(tool_def, handler)
        assert "custom_tool" in myai.external_tool_handlers
        assert myai.external_tool_handlers["custom_tool"] is handler

    def test_register_tool_prompt_extension(self, myai):
        tool_def = self._make_tool_def()
        myai.register_ai_tool(
            tool_def, lambda self, **kw: "ok",
            engineer_prompt="- Custom capability",
            architect_prompt=" * Custom tool"
        )
        assert any("Custom capability" in ext for ext in myai.engineer_prompt_extensions)
        assert any("Custom tool" in ext for ext in myai.architect_prompt_extensions)

    def test_register_tool_status_formatter(self, myai):
        tool_def = self._make_tool_def("status_tool")
        formatter = lambda args: f"[STATUS] {args}"
        myai.register_ai_tool(tool_def, lambda self, **kw: "ok", status_formatter=formatter)
        assert "status_tool" in myai.tool_status_formatters


# =========================================================================
# Dynamic prompts tests
# =========================================================================

class TestDynamicPrompts:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_engineer_prompt_without_extensions(self, myai):
        prompt = myai.engineer_system_prompt
        assert "Plugin Capabilities" not in prompt
        assert "TECHNICAL EXECUTION ENGINE" in prompt

    def test_engineer_prompt_with_extensions(self, myai):
        myai.engineer_prompt_extensions.append("- AWS Cloud Auditing")
        prompt = myai.engineer_system_prompt
        assert "Plugin Capabilities" in prompt
        assert "AWS Cloud Auditing" in prompt

    def test_architect_prompt_without_extensions(self, myai):
        prompt = myai.architect_system_prompt
        assert "Plugin Capabilities" not in prompt
        assert "STRATEGIC REASONING ENGINE" in prompt

    def test_architect_prompt_with_extensions(self, myai):
        myai.architect_prompt_extensions.append(" * Custom tool available")
        prompt = myai.architect_system_prompt
        assert "Plugin Capabilities" in prompt
        assert "Custom tool available" in prompt


# =========================================================================
# _sanitize_messages tests
# =========================================================================

class TestSanitizeMessages:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_sanitize_empty(self, myai):
        assert myai._sanitize_messages([]) == []

    def test_sanitize_normal_messages(self, myai):
        messages = [
            {"role": "system", "content": "You are helpful"},
            {"role": "user", "content": "Hello"},
            {"role": "assistant", "content": "Hi there"}
        ]
        result = myai._sanitize_messages(messages)
        assert len(result) == 3

    def test_sanitize_removes_orphan_tool_calls(self, myai):
        """Tool calls at the end without responses are removed."""
        messages = [
            {"role": "user", "content": "do something"},
            {"role": "assistant", "content": None, "tool_calls": [
                {"id": "tc1", "function": {"name": "list_nodes", "arguments": "{}"}}
            ]}
            # No tool response follows!
        ]
        result = myai._sanitize_messages(messages)
        assert len(result) == 1  # Only user message
        assert result[0]["role"] == "user"

    def test_sanitize_removes_orphan_tool_responses(self, myai):
        """Tool responses without preceding tool_calls are removed."""
        messages = [
            {"role": "user", "content": "hello"},
            {"role": "tool", "tool_call_id": "tc1", "name": "list_nodes", "content": "[]"}
        ]
        result = myai._sanitize_messages(messages)
        assert len(result) == 1
        assert result[0]["role"] == "user"

    def test_sanitize_preserves_valid_tool_pairs(self, myai):
        """Valid assistant+tool_calls followed by tool responses are preserved."""
        messages = [
            {"role": "user", "content": "list nodes"},
            {"role": "assistant", "content": None, "tool_calls": [
                {"id": "tc1", "function": {"name": "list_nodes", "arguments": "{}"}}
            ]},
            {"role": "tool", "tool_call_id": "tc1", "name": "list_nodes", "content": "[\"r1\"]"},
            {"role": "assistant", "content": "Found r1"}
        ]
        result = myai._sanitize_messages(messages)
        assert len(result) == 4

    def test_sanitize_strips_cache_control(self, myai):
        """_sanitize_messages should convert list-based content (with cache_control) back to strings."""
        messages = [
            {"role": "system", "content": [{"type": "text", "text": "system prompt", "cache_control": {"type": "ephemeral"}}]},
            {"role": "user", "content": "hello"}
        ]
        result = myai._sanitize_messages(messages)
        assert result[0]["role"] == "system"
        assert isinstance(result[0]["content"], str)
        assert result[0]["content"] == "system prompt"


# =========================================================================
# _truncate tests
# =========================================================================

class TestTruncate:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_truncate_short_text(self, myai):
        text = "short text"
        assert myai._truncate(text) == text

    def test_truncate_long_text(self, myai):
        text = "x" * 100000
        result = myai._truncate(text)
        assert len(result) < 100000
        assert "[... OUTPUT TRUNCATED ...]" in result

    def test_truncate_custom_limit(self, myai):
        text = "x" * 1000
        result = myai._truncate(text, limit=500)
        assert len(result) < 1000
        assert "[... OUTPUT TRUNCATED ...]" in result

    def test_truncate_preserves_head_and_tail(self, myai):
        text = "HEAD" + "x" * 100000 + "TAIL"
        result = myai._truncate(text)
        assert result.startswith("HEAD")
        assert result.endswith("TAIL")


# =========================================================================
# Tool methods tests
# =========================================================================

class TestToolMethods:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_list_nodes_tool_found(self, myai):
        result = myai.list_nodes_tool("router.*")
        parsed = json.loads(result) if isinstance(result, str) else result
        assert "router1" in str(parsed)

    def test_list_nodes_tool_not_found(self, myai):
        result = myai.list_nodes_tool("nonexistent_pattern_xyz")
        assert "No nodes found" in str(result)

    def test_get_node_info_masks_password(self, myai):
        result = myai.get_node_info_tool("router1")
        parsed = json.loads(result) if isinstance(result, str) else result
        assert parsed["password"] == "***"

    def test_is_safe_command_show(self, myai):
        assert myai._is_safe_command("show running-config") == True
        assert myai._is_safe_command("show ip int brief") == True

    def test_is_safe_command_config(self, myai):
        assert myai._is_safe_command("config t") == False
        assert myai._is_safe_command("write memory") == False

    def test_is_safe_command_ls(self, myai):
        assert myai._is_safe_command("ls -la") == True

    def test_is_safe_command_ping(self, myai):
        assert myai._is_safe_command("ping 10.0.0.1") == True


# =========================================================================
# manage_memory_tool tests
# =========================================================================

class TestManageMemory:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm, tmp_path):
        from connpy.ai import ai
        myai = ai(ai_config)
        myai.memory_path = str(tmp_path / "ai_memory.md")
        return myai

    def test_manage_memory_append(self, myai):
        result = myai.manage_memory_tool("Router1 is border router", action="append")
        assert "successfully" in result.lower()
        assert os.path.exists(myai.memory_path)
        content = open(myai.memory_path).read()
        assert "Router1 is border router" in content

    def test_manage_memory_replace(self, myai):
        myai.manage_memory_tool("old content", action="append")
        myai.manage_memory_tool("new content only", action="replace")
        content = open(myai.memory_path).read()
        assert "new content only" in content
        assert "old content" not in content

    def test_manage_memory_empty_content(self, myai):
        result = myai.manage_memory_tool("", action="append")
        assert "error" in result.lower() or "Error" in result


# =========================================================================
# ask() with mock LLM tests
# =========================================================================

class TestAsk:
    @pytest.fixture
    def myai(self, ai_config, mock_litellm):
        from connpy.ai import ai
        return ai(ai_config)

    def test_ask_basic_response(self, myai, mock_litellm):
        result = myai.ask("hello", stream=False)
        assert "response" in result
        assert "chat_history" in result
        assert "usage" in result
        assert result["response"] == "Test response from AI"

    def test_ask_sticky_brain_engineer(self, myai, mock_litellm):
        result = myai.ask("show me the routers", stream=False)
        assert result["responder"] == "engineer"

    def test_ask_explicit_architect(self, myai, mock_litellm):
        result = myai.ask("architect: review the network design", stream=False)
        assert result["responder"] == "architect"

    def test_ask_returns_usage(self, myai, mock_litellm):
        result = myai.ask("test", stream=False)
        assert result["usage"]["total"] > 0
||||
def test_ask_with_chat_history(self, myai, mock_litellm):
|
||||
history = [
|
||||
{"role": "user", "content": "previous question"},
|
||||
{"role": "assistant", "content": "previous answer"}
|
||||
]
|
||||
result = myai.ask("follow up", chat_history=history, stream=False)
|
||||
assert result["response"] is not None
|
||||
|
||||
|
||||
# =========================================================================
|
||||
# _get_engineer_tools / _get_architect_tools tests
|
||||
# =========================================================================
|
||||
|
||||
class TestToolDefinitions:
|
||||
@pytest.fixture
|
||||
def myai(self, ai_config, mock_litellm):
|
||||
from connpy.ai import ai
|
||||
return ai(ai_config)
|
||||
|
||||
def test_engineer_tools_include_core(self, myai):
|
||||
tools = myai._get_engineer_tools()
|
||||
names = [t["function"]["name"] for t in tools]
|
||||
assert "list_nodes" in names
|
||||
assert "run_commands" in names
|
||||
assert "get_node_info" in names
|
||||
assert "consult_architect" in names
|
||||
assert "escalate_to_architect" in names
|
||||
|
||||
def test_engineer_tools_include_external(self, myai):
|
||||
myai.external_engineer_tools.append({
|
||||
"type": "function",
|
||||
"function": {"name": "custom_tool", "description": "test", "parameters": {}}
|
||||
})
|
||||
tools = myai._get_engineer_tools()
|
||||
names = [t["function"]["name"] for t in tools]
|
||||
assert "custom_tool" in names
|
||||
|
||||
def test_architect_tools_include_core(self, myai):
|
||||
tools = myai._get_architect_tools()
|
||||
names = [t["function"]["name"] for t in tools]
|
||||
assert "delegate_to_engineer" in names
|
||||
assert "return_to_engineer" in names
|
||||
assert "manage_memory_tool" in names
|
||||
|
||||
def test_architect_tools_include_external(self, myai):
|
||||
myai.external_architect_tools.append({
|
||||
"type": "function",
|
||||
"function": {"name": "arch_tool", "description": "test", "parameters": {}}
|
||||
})
|
||||
tools = myai._get_architect_tools()
|
||||
names = [t["function"]["name"] for t in tools]
|
||||
assert "arch_tool" in names
|
||||
|
||||
|
||||
# =========================================================================
|
||||
# AI Session Management tests
|
||||
# =========================================================================
|
||||
|
||||
class TestAISessions:
|
||||
@pytest.fixture
|
||||
def myai(self, ai_config, mock_litellm, tmp_path):
|
||||
from connpy.ai import ai
|
||||
ai_config.defaultdir = str(tmp_path)
|
||||
return ai(ai_config)
|
||||
|
||||
def test_sessions_dir_initialization(self, myai, tmp_path):
|
||||
assert os.path.exists(os.path.join(tmp_path, "ai_sessions"))
|
||||
assert myai.sessions_dir == str(tmp_path / "ai_sessions")
|
||||
|
||||
def test_generate_session_id(self, myai):
|
||||
session_id = myai._generate_session_id("Any query")
|
||||
# Format: YYYYMMDD-HHMMSS
|
||||
assert len(session_id) == 15
|
||||
assert "-" in session_id
|
||||
parts = session_id.split("-")
|
||||
assert len(parts[0]) == 8 # YYYYMMDD
|
||||
assert len(parts[1]) == 6 # HHMMSS
|
||||
|
||||
def test_save_and_load_session(self, myai):
|
||||
history = [
|
||||
{"role": "user", "content": "Hello"},
|
||||
{"role": "assistant", "content": "Hi"}
|
||||
]
|
||||
myai.save_session(history, title="Test Session")
|
||||
session_id = myai.session_id
|
||||
|
||||
# Load it back
|
||||
loaded = myai.load_session_data(session_id)
|
||||
assert loaded["title"] == "Test Session"
|
||||
assert loaded["history"] == history
|
||||
assert loaded["model"] == myai.engineer_model
|
||||
|
||||
def test_list_sessions(self, myai, capsys):
|
||||
history = [{"role": "user", "content": "Query 1"}]
|
||||
myai.save_session(history, title="Session 1")
|
||||
|
||||
# Use a second instance to list
|
||||
myai.list_sessions()
|
||||
captured = capsys.readouterr()
|
||||
assert "Session 1" in captured.out
|
||||
assert "AI Persisted Sessions" in captured.out
|
||||
|
||||
def test_get_last_session_id(self, myai):
|
||||
# Save two sessions
|
||||
myai.session_id = None # Force new
|
||||
myai.save_session([{"role": "user", "content": "First"}])
|
||||
first_id = myai.session_id
|
||||
import time
|
||||
time.sleep(1.1) # Ensure different timestamp
|
||||
|
||||
myai.session_id = None # Force new
|
||||
myai.save_session([{"role": "user", "content": "Second"}])
|
||||
second_id = myai.session_id
|
||||
|
||||
last_id = myai.get_last_session_id()
|
||||
assert last_id == second_id
|
||||
assert last_id != first_id
|
||||
|
||||
def test_delete_session(self, myai):
|
||||
myai.save_session([{"role": "user", "content": "To be deleted"}])
|
||||
session_id = myai.session_id
|
||||
assert os.path.exists(myai.session_path)
|
||||
|
||||
myai.delete_session(session_id)
|
||||
assert not os.path.exists(myai.session_path)
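The session-ID assertions above pin down a `YYYYMMDD-HHMMSS` shape (15 characters, one dash). As a minimal sketch of a generator that satisfies those assertions (a hypothetical helper, not connpy's actual `_generate_session_id`):

```python
from datetime import datetime

def generate_session_id() -> str:
    # Timestamp-based ID in YYYYMMDD-HHMMSS form: 8 date digits,
    # a dash, then 6 time digits (15 characters total).
    return datetime.now().strftime("%Y%m%d-%H%M%S")
```

Because the ID is purely timestamp-based, two sessions saved within the same second would collide, which is why the test above sleeps 1.1 seconds between saves.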
@@ -1,160 +0,0 @@
import pytest
from unittest.mock import MagicMock, patch, AsyncMock
import json
import asyncio

from connpy.ai import ai
from connpy.core import node


class DummyConfig:
    def __init__(self):
        self.config = {"ai": {"engineer_api_key": "test_key", "engineer_model": "test_model"}}
        self.defaultdir = "/tmp"


class MockAsyncIterator:
    def __init__(self, items):
        self.items = items

    def __aiter__(self):
        return self

    async def __anext__(self):
        if not self.items:
            raise StopAsyncIteration
        return self.items.pop(0)


@pytest.fixture
def mock_acompletion():
    # Patch acompletion inside connpy.ai.aask_copilot
    with patch('litellm.acompletion') as mock:
        yield mock


def test_aask_copilot_tool_call(mock_acompletion):
    agent = ai(DummyConfig())

    # Set up the mock response for streaming
    class MockDelta:
        def __init__(self, content):
            self.content = content

    class MockChoice:
        def __init__(self, content):
            self.delta = MockDelta(content)

    class MockChunk:
        def __init__(self, content):
            self.choices = [MockChoice(content)]

    # acompletion is awaited and returns an async iterator
    async def mock_ac(*args, **kwargs):
        return MockAsyncIterator([
            MockChunk("<guide>Check the interfaces and running config.</guide>"),
            MockChunk("<commands>\nshow ip int br\nshow run\n</commands>"),
            MockChunk("<risk>low</risk>")
        ])

    mock_acompletion.side_effect = mock_ac

    async def run_test():
        return await agent.aask_copilot("Router#", "What do I do?")

    result = asyncio.run(run_test())

    if result["error"]:
        print(f"ERROR OCCURRED: {result['error']}")

    assert result["error"] is None
    assert result["guide"] == "Check the interfaces and running config."
    assert result["risk_level"] == "low"
    assert result["commands"] == ["show ip int br", "show run"]


def test_aask_copilot_fallback(mock_acompletion):
    agent = ai(DummyConfig())

    # Set up the mock response for streaming
    class MockDelta:
        def __init__(self, content):
            self.content = content

    class MockChoice:
        def __init__(self, content):
            self.delta = MockDelta(content)

    class MockChunk:
        def __init__(self, content):
            self.choices = [MockChoice(content)]

    async def mock_ac(*args, **kwargs):
        return MockAsyncIterator([
            MockChunk("Here is some text response instead of tool call.")
        ])

    mock_acompletion.side_effect = mock_ac

    async def run_test():
        return await agent.aask_copilot("Router#", "What do I do?")

    result = asyncio.run(run_test())

    if result["error"]:
        print(f"ERROR OCCURRED: {result['error']}")

    assert result["error"] is None
    assert result["guide"] == "Here is some text response instead of tool call."
    assert result["risk_level"] == "low"


def test_logclean_ansi():
    c = node("test_node", "1.2.3.4")
    raw = "Router#\x1b[K\x1b[m show ip"
    clean = c._logclean(raw, var=True)
    assert "\x1b" not in clean


def test_ingress_task_interception():
    async def run_test():
        c = node("test_node", "1.2.3.4")
        c.mylog = MagicMock()
        c.mylog.getvalue.return_value = b"Some session log"
        c.unique = "test_node"
        c.host = "1.2.3.4"
        c.tags = {"os": "cisco_ios"}

        class MockStream:
            def __init__(self):
                self.data = [b"a", b"b", b"\x00", b"c", b""]

            async def read(self):
                if self.data:
                    return self.data.pop(0)
                return b""

            def setup(self, resize_callback):
                pass

        stream = MockStream()

        called_copilot = False

        async def mock_handler(buffer, node_info, s, child_fd):
            nonlocal called_copilot
            called_copilot = True
            assert buffer == "Some session log"
            assert node_info["os"] == "cisco_ios"

        c.child = MagicMock()
        c.child.child_fd = 123
        c.child.after = b""
        c.child.buffer = b""

        async def mock_ingress():
            while True:
                data = await stream.read()
                if not data:
                    break

                if mock_handler and b'\x00' in data:
                    buffer = c.mylog.getvalue().decode()
                    node_info = {"name": getattr(c, 'unique', 'unknown'), "host": getattr(c, 'host', 'unknown')}
                    if isinstance(getattr(c, 'tags', None), dict):
                        node_info["os"] = c.tags.get("os", "unknown")
                    await mock_handler(buffer, node_info, stream, c.child.child_fd)
                    continue

        await mock_ingress()
        assert called_copilot

    asyncio.run(run_test())
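The streaming tests above stand in for litellm's awaited `acompletion` with an object implementing `__aiter__`/`__anext__`. The consuming side of that contract is just `async for`; a self-contained sketch of the same pattern (generic illustration, not connpy's actual consumer loop):

```python
import asyncio

class MockAsyncIterator:
    # Same shape as the test helper above: yields queued items,
    # then raises StopAsyncIteration when the list is exhausted.
    def __init__(self, items):
        self.items = items

    def __aiter__(self):
        return self

    async def __anext__(self):
        if not self.items:
            raise StopAsyncIteration
        return self.items.pop(0)

async def collect(stream):
    # A streamed response object is drained with plain `async for`.
    return [chunk async for chunk in stream]

chunks = asyncio.run(collect(MockAsyncIterator(["<guide>", "</guide>"])))
```

This is why `mock_ac` must be an `async def` returning the iterator: the production code first awaits the call, then iterates the result.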
@@ -1,56 +0,0 @@
"""Tests for connpy.core_plugins.capture"""
import pytest
from unittest.mock import MagicMock, patch
from connpy.core_plugins.capture import Entrypoint


@pytest.fixture
def RemoteCapture():
    return Entrypoint.get_remote_capture_class()


@pytest.fixture
def mock_connapp():
    app = MagicMock()
    app.services.nodes.list_nodes.return_value = ["test_node"]
    app.services.nodes.get_node_details.return_value = {"host": "127.0.0.1", "protocol": "ssh"}
    app.services.config_svc.get_settings().get.return_value = "/fake/ws"

    mock_node = MagicMock()
    mock_node.protocol = "ssh"
    mock_node.unique = "test_node"
    app.node.return_value = mock_node
    return app


class TestRemoteCapture:
    def test_init_node_not_found(self, mock_connapp, RemoteCapture):
        # Attempt to capture a node that is not in the inventory
        mock_connapp.services.nodes.list_nodes.return_value = []
        with pytest.raises(SystemExit) as exc:
            RemoteCapture(mock_connapp, "test_node", "eth0")
        assert exc.value.code == 2

    def test_init_success(self, mock_connapp, RemoteCapture):
        rc = RemoteCapture(mock_connapp, "test_node", "eth0")
        assert rc.node_name == "test_node"
        assert rc.interface == "eth0"
        assert rc.wireshark_path == "/fake/ws"

    def test_is_port_in_use(self, mock_connapp, RemoteCapture):
        rc = RemoteCapture(mock_connapp, "test_node", "eth0")
        with patch("socket.socket") as mock_socket:
            mock_sock_instance = MagicMock()
            mock_socket.return_value.__enter__.return_value = mock_sock_instance

            mock_sock_instance.connect_ex.return_value = 0
            assert rc._is_port_in_use(8080) is True

            mock_sock_instance.connect_ex.return_value = 1
            assert rc._is_port_in_use(8080) is False

    def test_find_free_port(self, mock_connapp, RemoteCapture):
        rc = RemoteCapture(mock_connapp, "test_node", "eth0")
        with patch.object(RemoteCapture, "_is_port_in_use") as mock_is_in_use:
            # First two ports are in use, the third is free
            mock_is_in_use.side_effect = [True, True, False]
            port = rc._find_free_port(20000, 30000)
            assert 20000 <= port <= 30000
            assert mock_is_in_use.call_count == 3
@@ -1,68 +0,0 @@
"""Tests for connpy.completion module."""
import os
import json
import pytest
from connpy.completion import load_txt_cache, get_cwd


# =========================================================================
# load_txt_cache tests
# =========================================================================

class TestLoadTxtCache:
    def test_load_existing_cache(self, tmp_path):
        """Loads lines from a file correctly."""
        cache_file = tmp_path / "cache.txt"
        cache_file.write_text("node1\nnode2\nnode3@folder")

        result = load_txt_cache(str(cache_file))
        assert result == ["node1", "node2", "node3@folder"]

    def test_load_nonexistent_cache(self, tmp_path):
        """Returns empty list if file is missing."""
        result = load_txt_cache(str(tmp_path / "missing.txt"))
        assert result == []


# =========================================================================
# get_cwd tests
# =========================================================================

class TestGetCwd:
    def test_current_dir(self, tmp_path, monkeypatch):
        """Lists files in current directory."""
        monkeypatch.chdir(tmp_path)
        (tmp_path / "file1.txt").touch()
        (tmp_path / "file2.py").touch()
        subdir = tmp_path / "subdir"
        subdir.mkdir()

        result = get_cwd(["run", "run"])
        # Should list files
        assert any("file1.txt" in r for r in result)
        assert any("subdir/" in r for r in result)

    def test_specific_path(self, tmp_path, monkeypatch):
        """Lists files matching a partial path."""
        monkeypatch.chdir(tmp_path)
        (tmp_path / "script.yaml").touch()
        (tmp_path / "script2.yaml").touch()

        result = get_cwd(["run", "script"])
        assert any("script" in r for r in result)

    def test_folder_only(self, tmp_path, monkeypatch):
        """folderonly=True returns only directories."""
        monkeypatch.chdir(tmp_path)
        (tmp_path / "file.txt").touch()
        subdir = tmp_path / "mydir"
        subdir.mkdir()

        result = get_cwd(["export", "export"], folderonly=True)
        files_in_result = [r for r in result if "file.txt" in r]
        assert len(files_in_result) == 0
        dirs_in_result = [r for r in result if "mydir" in r]
        assert len(dirs_in_result) > 0
@@ -1,585 +0,0 @@
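The `TestExplodeUnique` cases below pin down a small grammar for node identifiers: `id@subfolder@folder`, at most two folder levels, empty segments invalid. A hypothetical re-implementation that satisfies exactly those assertions (an illustration of the behaviour under test, not connpy's actual `_explode_unique`):

```python
def explode_unique(unique: str):
    # Parse "id", "id@folder", "id@subfolder@folder", "@folder",
    # or "@subfolder@folder"; anything deeper or with an empty
    # segment after "@" is rejected with False.
    parts = unique.split("@")
    if len(parts) > 3:
        return False
    result = {}
    if parts[0]:
        result["id"] = parts[0]
    if len(parts) == 2:
        if not parts[1]:
            return False
        result["folder"] = parts[1]
    elif len(parts) == 3:
        if not parts[1] or not parts[2]:
            return False
        result["folder"] = parts[2]
        result["subfolder"] = parts[1]
    return result
```

Note the ordering quirk the tests encode: in `r1@dc@office` the last segment is the top-level folder and the middle one is the subfolder.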
|
||||
"""Tests for connpy.configfile module."""
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
import pytest
|
||||
import yaml
|
||||
from copy import deepcopy
|
||||
|
||||
|
||||
class TestConfigfileInit:
|
||||
def test_creates_default_config(self, tmp_config_dir):
|
||||
"""Creates config.yaml with defaults when it doesn't exist."""
|
||||
config_file = tmp_config_dir / "config.yaml"
|
||||
config_file.unlink(missing_ok=True) # Remove existing
|
||||
key_file = tmp_config_dir / ".osk"
|
||||
|
||||
from connpy.configfile import configfile
|
||||
conf = configfile(conf=str(config_file), key=str(key_file))
|
||||
|
||||
assert config_file.exists()
|
||||
assert conf.config["case"] == False
|
||||
assert conf.config["idletime"] == 30
|
||||
assert "default" in conf.profiles
|
||||
|
||||
def test_creates_rsa_key(self, tmp_config_dir):
|
||||
"""Generates RSA key when it doesn't exist."""
|
||||
key_file = tmp_config_dir / ".osk"
|
||||
key_file.unlink() # Remove existing
|
||||
|
||||
from connpy.configfile import configfile
|
||||
conf = configfile(conf=str(tmp_config_dir / "config.yaml"), key=str(key_file))
|
||||
|
||||
assert key_file.exists()
|
||||
assert conf.privatekey is not None
|
||||
assert conf.publickey is not None
|
||||
|
||||
def test_loads_existing_config(self, config):
|
||||
"""Loads correctly from existing config."""
|
||||
assert config.config is not None
|
||||
assert config.connections is not None
|
||||
assert config.profiles is not None
|
||||
|
||||
def test_config_file_permissions(self, tmp_config_dir):
|
||||
"""Config is created with 0o600 permissions."""
|
||||
config_file = tmp_config_dir / "config.yaml"
|
||||
config_file.unlink(missing_ok=True)
|
||||
|
||||
from connpy.configfile import configfile
|
||||
configfile(conf=str(config_file), key=str(tmp_config_dir / ".osk"))
|
||||
|
||||
stat = os.stat(str(config_file))
|
||||
assert oct(stat.st_mode & 0o777) == oct(0o600)
|
||||
|
||||
def test_custom_paths(self, tmp_path):
|
||||
"""Accepts custom paths for conf and key."""
|
||||
config_dir = tmp_path / "custom"
|
||||
config_dir.mkdir()
|
||||
(config_dir / "plugins").mkdir()
|
||||
|
||||
# Write .folder for the config dir
|
||||
dot_folder = tmp_path / ".config" / "conn"
|
||||
dot_folder.mkdir(parents=True, exist_ok=True)
|
||||
(dot_folder / ".folder").write_text(str(config_dir))
|
||||
(dot_folder / "plugins").mkdir(exist_ok=True)
|
||||
|
||||
conf_path = str(config_dir / "my_config.yaml")
|
||||
key_path = str(config_dir / "my_key")
|
||||
|
||||
from connpy.configfile import configfile
|
||||
conf = configfile(conf=conf_path, key=key_path)
|
||||
|
||||
assert conf.file == conf_path
|
||||
assert conf.key == key_path
|
||||
|
||||
|
||||
class TestEncryption:
|
||||
def test_encrypt_password(self, config):
|
||||
"""Encrypts and produces b'...' format."""
|
||||
encrypted = config.encrypt("mysecret")
|
||||
assert encrypted.startswith("b'") or encrypted.startswith('b"')
|
||||
|
||||
def test_encrypt_decrypt_roundtrip(self, config):
|
||||
"""Encrypt then decrypt returns original."""
|
||||
from Crypto.PublicKey import RSA
|
||||
from Crypto.Cipher import PKCS1_OAEP
|
||||
import ast
|
||||
|
||||
original = "super_secret_password"
|
||||
encrypted = config.encrypt(original)
|
||||
|
||||
# Decrypt
|
||||
with open(config.key) as f:
|
||||
key = RSA.import_key(f.read())
|
||||
decryptor = PKCS1_OAEP.new(key)
|
||||
decrypted = decryptor.decrypt(ast.literal_eval(encrypted)).decode("utf-8")
|
||||
assert decrypted == original
|
||||
|
||||
|
||||
class TestExplodeUnique:
|
||||
def test_simple_node(self, config):
|
||||
result = config._explode_unique("router1")
|
||||
assert result == {"id": "router1"}
|
||||
|
||||
def test_node_with_folder(self, config):
|
||||
result = config._explode_unique("r1@office")
|
||||
assert result == {"id": "r1", "folder": "office"}
|
||||
|
||||
def test_node_with_subfolder(self, config):
|
||||
result = config._explode_unique("r1@dc@office")
|
||||
assert result == {"id": "r1", "folder": "office", "subfolder": "dc"}
|
||||
|
||||
def test_folder_only(self, config):
|
||||
result = config._explode_unique("@office")
|
||||
assert result == {"folder": "office"}
|
||||
|
||||
def test_subfolder_only(self, config):
|
||||
result = config._explode_unique("@dc@office")
|
||||
assert result == {"folder": "office", "subfolder": "dc"}
|
||||
|
||||
def test_too_deep(self, config):
|
||||
result = config._explode_unique("a@b@c@d")
|
||||
assert result == False
|
||||
|
||||
def test_empty_folder(self, config):
|
||||
result = config._explode_unique("a@")
|
||||
assert result == False
|
||||
|
||||
def test_empty_subfolder(self, config):
|
||||
result = config._explode_unique("a@@office")
|
||||
assert result == False
|
||||
|
||||
|
||||
class TestCRUDNodes:
|
||||
def test_add_node_root(self, config):
|
||||
config._connections_add(
|
||||
id="router1", host="10.0.0.1", protocol="ssh",
|
||||
port="22", user="admin", password="pass", options="",
|
||||
logs="", tags="", jumphost=""
|
||||
)
|
||||
assert "router1" in config.connections
|
||||
assert config.connections["router1"]["host"] == "10.0.0.1"
|
||||
|
||||
def test_add_node_folder(self, config):
|
||||
config._folder_add(folder="office")
|
||||
config._connections_add(
|
||||
id="server1", folder="office", host="10.0.1.1",
|
||||
protocol="ssh", port="", user="root", password="pass",
|
||||
options="", logs="", tags="", jumphost=""
|
||||
)
|
||||
assert "server1" in config.connections["office"]
|
||||
|
||||
def test_add_node_subfolder(self, config):
|
||||
config._folder_add(folder="office")
|
||||
config._folder_add(folder="office", subfolder="dc")
|
||||
config._connections_add(
|
||||
id="db1", folder="office", subfolder="dc", host="10.0.2.1",
|
||||
protocol="ssh", port="", user="dbadmin", password="pass",
|
||||
options="", logs="", tags="", jumphost=""
|
||||
)
|
||||
assert "db1" in config.connections["office"]["dc"]
|
||||
|
||||
def test_del_node_root(self, config):
|
||||
config._connections_add(
|
||||
id="router1", host="10.0.0.1", protocol="ssh",
|
||||
port="", user="", password="", options="",
|
||||
logs="", tags="", jumphost=""
|
||||
)
|
||||
config._connections_del(id="router1")
|
||||
assert "router1" not in config.connections
|
||||
|
||||
def test_del_node_folder(self, config):
|
||||
config._folder_add(folder="office")
|
||||
config._connections_add(
|
||||
id="server1", folder="office", host="10.0.1.1",
|
||||
protocol="ssh", port="", user="", password="",
|
||||
options="", logs="", tags="", jumphost=""
|
||||
)
|
||||
config._connections_del(id="server1", folder="office")
|
||||
assert "server1" not in config.connections["office"]
|
||||
|
||||
def test_add_folder(self, config):
|
||||
config._folder_add(folder="office")
|
||||
assert "office" in config.connections
|
||||
assert config.connections["office"]["type"] == "folder"
|
||||
|
||||
def test_add_subfolder(self, config):
|
||||
config._folder_add(folder="office")
|
||||
config._folder_add(folder="office", subfolder="dc")
|
||||
assert "dc" in config.connections["office"]
|
||||
assert config.connections["office"]["dc"]["type"] == "subfolder"
|
||||
|
||||
def test_del_folder(self, config):
|
||||
config._folder_add(folder="office")
|
||||
config._folder_del(folder="office")
|
||||
assert "office" not in config.connections
|
||||
|
||||
def test_del_subfolder(self, config):
|
||||
config._folder_add(folder="office")
|
||||
config._folder_add(folder="office", subfolder="dc")
|
||||
config._folder_del(folder="office", subfolder="dc")
|
||||
assert "dc" not in config.connections["office"]
|
||||
|
||||
|
||||
class TestCRUDProfiles:
|
||||
def test_add_profile(self, config):
|
||||
config._profiles_add(
|
||||
id="myprofile", host="", protocol="telnet",
|
||||
port="23", user="user1", password="pass1",
|
||||
options="", logs="", tags="", jumphost=""
|
||||
)
|
||||
assert "myprofile" in config.profiles
|
||||
assert config.profiles["myprofile"]["protocol"] == "telnet"
|
||||
|
||||
def test_del_profile(self, config):
|
||||
config._profiles_add(
|
||||
id="temp", host="", protocol="ssh", port="",
|
||||
user="", password="", options="", logs="", tags="", jumphost=""
|
||||
)
|
||||
config._profiles_del(id="temp")
|
||||
assert "temp" not in config.profiles
|
||||
|
||||
def test_default_profile_exists(self, config):
|
||||
assert "default" in config.profiles
|
||||
|
||||
|
||||
class TestGetItem:
|
||||
def test_getitem_node(self, populated_config):
|
||||
node = populated_config.getitem("router1")
|
||||
assert node["host"] == "10.0.0.1"
|
||||
assert "type" not in node # type is stripped
|
||||
|
||||
def test_getitem_folder(self, populated_config):
|
||||
nodes = populated_config.getitem("@office")
|
||||
# Should contain server1@office but NOT datacenter (subfolder)
|
||||
assert "server1@office" in nodes
|
||||
assert all("type" not in v for v in nodes.values())
|
||||
|
||||
def test_getitem_subfolder(self, populated_config):
|
||||
nodes = populated_config.getitem("@datacenter@office")
|
||||
assert "db1@datacenter@office" in nodes
|
||||
|
||||
def test_getitem_node_in_folder(self, populated_config):
|
||||
node = populated_config.getitem("server1@office")
|
||||
assert node["host"] == "10.0.1.1"
|
||||
|
||||
def test_getitem_node_in_subfolder(self, populated_config):
|
||||
node = populated_config.getitem("db1@datacenter@office")
|
||||
assert node["host"] == "10.0.2.1"
|
||||
|
||||
def test_getitem_with_profile_extraction(self, tmp_config_dir):
|
||||
"""extract=True resolves @profile references."""
|
||||
config_file = tmp_config_dir / "config.yaml"
|
||||
data = {
|
||||
"config": {"case": False, "idletime": 30, "fzf": False},
|
||||
"connections": {
|
||||
"router1": {
|
||||
"host": "10.0.0.1", "protocol": "ssh", "port": "",
|
||||
"user": "@office-user", "password": "@office-user",
|
||||
"options": "", "logs": "", "tags": "", "jumphost": "",
|
||||
"type": "connection"
|
||||
}
|
||||
},
|
||||
"profiles": {
|
||||
"default": {"host": "", "protocol": "ssh", "port": "",
|
||||
"user": "", "password": "", "options": "",
|
||||
"logs": "", "tags": "", "jumphost": ""},
|
||||
"office-user": {"host": "", "protocol": "ssh", "port": "",
|
||||
"user": "officeadmin", "password": "officepass",
|
||||
"options": "", "logs": "", "tags": "", "jumphost": ""}
|
||||
}
|
||||
}
|
||||
config_file.write_text(yaml.dump(data, default_flow_style=False, sort_keys=False))
|
||||
|
||||
from connpy.configfile import configfile
|
||||
conf = configfile(conf=str(config_file), key=str(tmp_config_dir / ".osk"))
|
||||
|
||||
node = conf.getitem("router1", extract=True)
|
||||
assert node["user"] == "officeadmin"
|
||||
assert node["password"] == "officepass"
|
||||
|
||||
def test_getitems_multiple(self, populated_config):
|
||||
nodes = populated_config.getitems(["router1", "server1@office"])
|
||||
assert "router1" in nodes
|
||||
assert "server1@office" in nodes
|
||||
|
||||
def test_getitems_folder(self, populated_config):
|
||||
nodes = populated_config.getitems(["@office"])
|
||||
assert "server1@office" in nodes
|
||||
|
||||
|
||||
class TestGetAll:
|
||||
def test_getallnodes_no_filter(self, populated_config):
|
||||
nodes = populated_config._getallnodes()
|
||||
assert "router1" in nodes
|
||||
assert "server1@office" in nodes
|
||||
assert "db1@datacenter@office" in nodes
|
||||
|
||||
def test_getallnodes_string_filter(self, populated_config):
|
||||
nodes = populated_config._getallnodes("router.*")
|
||||
assert "router1" in nodes
|
||||
assert "server1@office" not in nodes
|
||||
|
||||
def test_getallnodes_list_filter(self, populated_config):
|
||||
nodes = populated_config._getallnodes(["router.*", "db.*"])
|
||||
assert "router1" in nodes
|
||||
assert "db1@datacenter@office" in nodes
|
||||
assert "server1@office" not in nodes
|
||||
|
||||
def test_getallnodes_filter_invalid_type(self, populated_config):
|
||||
with pytest.raises(SystemExit) as exc:
|
||||
populated_config._getallnodes(123)
|
||||
assert exc.value.code == 1
|
||||
|
||||
def test_getallfolders(self, populated_config):
|
||||
folders = populated_config._getallfolders()
|
||||
assert "@office" in folders
|
||||
assert "@datacenter@office" in folders
|
||||
|
||||
def test_getallnodesfull(self, populated_config):
|
||||
nodes = populated_config._getallnodesfull()
|
||||
assert "router1" in nodes
|
||||
assert nodes["router1"]["host"] == "10.0.0.1"
|
||||
|
||||
def test_getallnodesfull_with_filter(self, populated_config):
|
||||
nodes = populated_config._getallnodesfull("router.*")
|
||||
assert "router1" in nodes
|
||||
assert "server1@office" not in nodes
|
||||
|
||||
def test_profileused(self, tmp_config_dir):
|
||||
"""Detects nodes using a specific profile."""
|
||||
config_file = tmp_config_dir / "config.yaml"
|
||||
data = {
|
||||
"config": {"case": False, "idletime": 30, "fzf": False},
|
||||
"connections": {
|
||||
"router1": {
|
||||
"host": "10.0.0.1", "protocol": "ssh", "port": "",
|
||||
"user": "@myprofile", "password": "pass",
|
||||
"options": "", "logs": "", "tags": "", "jumphost": "",
|
||||
"type": "connection"
|
||||
},
|
||||
"router2": {
|
||||
"host": "10.0.0.2", "protocol": "ssh", "port": "",
|
||||
"user": "admin", "password": "pass",
|
||||
"options": "", "logs": "", "tags": "", "jumphost": "",
|
||||
"type": "connection"
|
||||
}
|
||||
},
|
||||
"profiles": {
|
||||
"default": {"host": "", "protocol": "ssh", "port": "",
|
||||
"user": "", "password": "", "options": "",
|
||||
"logs": "", "tags": "", "jumphost": ""},
|
||||
"myprofile": {"host": "", "protocol": "ssh", "port": "",
|
||||
"user": "profuser", "password": "profpass",
|
||||
"options": "", "logs": "", "tags": "", "jumphost": ""}
|
||||
}
|
||||
}
|
||||
config_file.write_text(yaml.dump(data, default_flow_style=False, sort_keys=False))
|
||||
from connpy.configfile import configfile
|
||||
conf = configfile(conf=str(config_file), key=str(tmp_config_dir / ".osk"))
|
||||
|
||||
used = conf._profileused("myprofile")
|
||||
assert "router1" in used
|
||||
assert "router2" not in used
|
||||
|
||||
def test_saveconfig(self, config):
|
||||
"""Save and reload correctly."""
|
||||
config._connections_add(
|
||||
id="test_node", host="1.2.3.4", protocol="ssh",
|
||||
port="", user="", password="", options="",
|
||||
logs="", tags="", jumphost=""
|
||||
)
|
||||
result = config._saveconfig(config.file)
|
||||
assert result == 0
|
||||
|
||||
# Reload and verify
|
||||
from connpy.configfile import configfile
|
||||
reloaded = configfile(conf=config.file, key=config.key)
|
||||
assert "test_node" in reloaded.connections
|
||||
|
||||
|
||||
class TestValidateConfig:
    def test_valid_config(self, config):
        data = {"config": {}, "connections": {}, "profiles": {}}
        assert config._validate_config(data) is True

    def test_none_data(self, config):
        assert config._validate_config(None) is False

    def test_string_data(self, config):
        assert config._validate_config("not a dict") is False

    def test_missing_key(self, config):
        assert config._validate_config({"config": {}, "connections": {}}) is False

    def test_empty_dict(self, config):
        assert config._validate_config({}) is False


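The shape these validation tests exercise can be captured in a few lines. A minimal sketch (a hypothetical reimplementation for illustration, not connpy's actual `_validate_config`):

```python
# Hypothetical sketch: valid configs are dicts with all three top-level sections.
REQUIRED_KEYS = {"config", "connections", "profiles"}

def validate_config(data) -> bool:
    # Reject None, strings, and dicts missing any required section.
    return isinstance(data, dict) and REQUIRED_KEYS.issubset(data)
```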
class TestCorruptionRecovery:
    def test_corrupt_yaml_recovers_from_cache(self, tmp_config_dir):
        """If YAML is corrupt but cache is valid, recovers from cache."""
        config_file = tmp_config_dir / "config.yaml"
        key_file = tmp_config_dir / ".osk"

        # Write valid config with router1
        valid_data = {
            "config": {"case": False, "idletime": 30, "fzf": False},
            "connections": {"router1": {"host": "10.0.0.1", "type": "connection", "protocol": "ssh", "port": "", "user": "", "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""}},
            "profiles": {"default": {"host": "", "protocol": "ssh", "port": "", "user": "", "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""}}
        }
        config_file.write_text(yaml.dump(valid_data, default_flow_style=False, sort_keys=False))

        from connpy.configfile import configfile
        conf = configfile(conf=str(config_file), key=str(key_file))
        # Save to populate cache at the real self.cachefile path
        conf._saveconfig(conf.file)
        cachefile_path = conf.cachefile
        assert os.path.exists(cachefile_path)

        # Now corrupt the YAML
        config_file.write_text("")
        import time
        time.sleep(0.05)  # Ensure YAML is newer than cache

        # Reload - should recover from cache
        conf2 = configfile(conf=str(config_file), key=str(key_file))
        assert "router1" in conf2.connections
        assert conf2.connections["router1"]["host"] == "10.0.0.1"

    def test_corrupt_cache_uses_yaml(self, tmp_config_dir):
        """If cache is corrupt but YAML is valid, uses YAML."""
        config_file = tmp_config_dir / "config.yaml"
        key_file = tmp_config_dir / ".osk"

        valid_data = {
            "config": {"case": False, "idletime": 30, "fzf": False},
            "connections": {},
            "profiles": {"default": {"host": "", "protocol": "ssh", "port": "", "user": "", "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""}}
        }
        config_file.write_text(yaml.dump(valid_data, default_flow_style=False, sort_keys=False))

        from connpy.configfile import configfile
        conf = configfile(conf=str(config_file), key=str(key_file))
        cachefile_path = conf.cachefile

        # Now corrupt the cache (valid JSON but invalid config structure)
        from pathlib import Path
        Path(cachefile_path).write_text(json.dumps({"garbage": True}))
        # Make cache newer than YAML to force the cache path
        import time
        time.sleep(0.05)
        os.utime(cachefile_path, None)

        conf2 = configfile(conf=str(config_file), key=str(key_file))
        assert conf2.config["case"] is False
        assert "default" in conf2.profiles

    def test_both_corrupt_creates_default(self, tmp_config_dir):
        """If both YAML and cache are corrupt, creates fresh config."""
        config_file = tmp_config_dir / "config.yaml"
        key_file = tmp_config_dir / ".osk"

        from connpy.configfile import configfile
        conf = configfile(conf=str(config_file), key=str(key_file))
        cachefile_path = conf.cachefile

        # Corrupt YAML
        config_file.write_text("")
        # Corrupt cache
        from pathlib import Path
        Path(cachefile_path).write_text(json.dumps({"garbage": True}))
        import time
        time.sleep(0.05)
        os.utime(str(config_file), None)

        conf2 = configfile(conf=str(config_file), key=str(key_file))

        # Should get defaults, not crash
        assert conf2.config is not None
        assert "default" in conf2.profiles
        assert isinstance(conf2.connections, dict)


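The fallback order these recovery tests assume (primary file first, then cache, then a default skeleton) can be sketched generically. This is a hypothetical simplification for illustration, not connpy's implementation; in particular it omits the mtime comparison the real class performs, and the `parse` and `validate` callables are injected rather than built in:

```python
import json

def load_config(primary_path, cache_path, parse, validate):
    # Try the primary file, then the cache; fall back to a default
    # skeleton when both are unreadable or structurally invalid.
    for path, loads in ((primary_path, parse), (cache_path, json.loads)):
        try:
            with open(path) as f:
                data = loads(f.read())
            if validate(data):
                return data
        except (OSError, ValueError):
            pass  # missing/unreadable file or parse error: try the next source
    return {"config": {}, "connections": {}, "profiles": {}}
```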
class TestAtomicSave:
    def test_save_creates_no_leftover_tmp(self, config):
        """After a successful save, no .tmp file remains."""
        config._connections_add(
            id="test123", host="1.2.3.4", protocol="ssh",
            port="", user="", password="", options="",
            logs="", tags="", jumphost=""
        )
        result = config._saveconfig(config.file)
        assert result == 0
        assert not os.path.exists(config.file + '.tmp')

    def test_save_preserves_original_on_error(self, config):
        """If save fails, the original config file is not corrupted."""
        import unittest.mock as mock

        config._connections_add(
            id="original_node", host="10.0.0.1", protocol="ssh",
            port="", user="", password="", options="",
            logs="", tags="", jumphost=""
        )
        config._saveconfig(config.file)

        # Now add another node and make yaml.dump fail
        config._connections_add(
            id="new_node", host="10.0.0.2", protocol="ssh",
            port="", user="", password="", options="",
            logs="", tags="", jumphost=""
        )

        with mock.patch('connpy.configfile.yaml.dump', side_effect=IOError("disk full")):
            result = config._saveconfig(config.file)
            assert result == 1

        # The original file should still be valid and contain original_node
        from connpy.configfile import configfile
        reloaded = configfile(conf=config.file, key=config.key)
        assert "original_node" in reloaded.connections


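The write-temp-then-rename pattern these atomic-save tests exercise can be sketched in a few lines. A minimal, hypothetical helper (not connpy's actual `_saveconfig`), assuming POSIX/Windows `os.replace` semantics:

```python
import os
import tempfile

def atomic_write(path: str, data: str) -> None:
    # Write to a temp file in the same directory, then atomically replace
    # the target; on failure, the original file is never touched.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".", suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.remove(tmp)  # clean up the temp file, leave the original intact
        raise
```

Keeping the temp file in the same directory matters: `os.replace` is only atomic within a single filesystem.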
class TestMigrationSafety:
    def test_migration_validates_legacy_data(self, tmp_path):
        """Migration skips invalid legacy JSON files."""
        from unittest.mock import patch
        config_dir = tmp_path / ".config" / "conn"
        config_dir.mkdir(parents=True)
        (config_dir / "plugins").mkdir()

        # Write .folder
        (config_dir / ".folder").write_text(str(config_dir))

        # Generate RSA key
        from Crypto.PublicKey import RSA
        key = RSA.generate(2048)
        key_file = config_dir / ".osk"
        key_file.write_bytes(key.export_key("PEM"))
        os.chmod(str(key_file), 0o600)

        # Write invalid JSON config (missing required keys)
        legacy_file = config_dir / "config.json"
        legacy_file.write_text(json.dumps({"garbage": True}))

        with patch("os.path.expanduser", return_value=str(tmp_path)):
            from connpy.configfile import configfile
            conf = configfile(key=str(key_file))

        # The legacy file should NOT have been moved to .backup
        assert legacy_file.exists()
        assert not (config_dir / "config.json.backup").exists()

    def test_migration_verifies_written_yaml(self, tmp_path):
        """Migration succeeds when the legacy JSON is valid."""
        from unittest.mock import patch
        config_dir = tmp_path / ".config" / "conn"
        config_dir.mkdir(parents=True)
        (config_dir / "plugins").mkdir()

        # Write .folder
        (config_dir / ".folder").write_text(str(config_dir))

        # Generate RSA key
        from Crypto.PublicKey import RSA
        key = RSA.generate(2048)
        key_file = config_dir / ".osk"
        key_file.write_bytes(key.export_key("PEM"))
        os.chmod(str(key_file), 0o600)

        valid_data = {
            "config": {"case": False, "idletime": 30, "fzf": False},
            "connections": {"r1": {"host": "1.2.3.4", "type": "connection", "protocol": "ssh", "port": "", "user": "", "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""}},
            "profiles": {"default": {"host": "", "protocol": "ssh", "port": "", "user": "", "password": "", "options": "", "logs": "", "tags": "", "jumphost": ""}}
        }
        legacy_file = config_dir / "config.json"
        legacy_file.write_text(json.dumps(valid_data))

        with patch("os.path.expanduser", return_value=str(tmp_path)):
            from connpy.configfile import configfile
            conf = configfile(key=str(key_file))

        # Migration should have succeeded: YAML exists, JSON backed up
        yaml_file = config_dir / "config.yaml"
        assert yaml_file.exists()
        assert (config_dir / "config.json.backup").exists()
        assert not legacy_file.exists()
        assert "r1" in conf.connections
@@ -1,264 +0,0 @@
import pytest
from unittest.mock import patch, MagicMock
from connpy.connapp import connapp
import sys
import yaml
import os


@pytest.fixture
def app(populated_config):
    """Returns an instance of connapp initialized with the mock config."""
    return connapp(populated_config)


def test_connapp_init(app, populated_config):
    """Test that connapp initializes correctly with config."""
    assert app.config == populated_config
    assert app.case == populated_config.config.get("case", False)


@patch("connpy.cli.node_handler.NodeHandler.dispatch")
def test_node_default(mock_func_node, app):
    """Test that the default 'node' command correctly parses and calls _func_node."""
    app.start(["node", "router1"])
    mock_func_node.assert_called_once()
    args = mock_func_node.call_args[0][0]
    assert args.data == "router1"
    assert args.action == "connect"


@patch("connpy.cli.node_handler.NodeHandler.dispatch")
def test_node_add(mock_func_node, app):
    """Test that the 'node -a' command correctly parses."""
    app.start(["node", "-a", "new_router"])
    mock_func_node.assert_called_once()
    args = mock_func_node.call_args[0][0]
    assert args.data == "new_router"
    assert args.action == "add"


@patch("connpy.services.node_service.NodeService.list_nodes")
@patch("connpy.services.node_service.NodeService.delete_node")
@patch("inquirer.prompt")
def test_node_del(mock_prompt, mock_delete_node, mock_list_nodes, app):
    mock_list_nodes.return_value = ["router1"]
    mock_prompt.return_value = {"delete": True}
    app.start(["node", "-r", "router1"])
    mock_delete_node.assert_called_once_with("router1", is_folder=False)


@patch("connpy.services.node_service.NodeService.list_nodes")
@patch("connpy.services.node_service.NodeService.get_node_details")
@patch("connpy.services.node_service.NodeService.update_node")
@patch("connpy.cli.forms.Forms.questions_edit")
@patch("connpy.cli.forms.Forms.questions_nodes")
def test_node_mod(mock_q_nodes, mock_q_edit, mock_update_node, mock_get_details, mock_list_nodes, app):
    mock_list_nodes.return_value = ["router1"]
    mock_get_details.return_value = {"host": "1.1.1.1", "port": 22}
    mock_q_edit.return_value = {"host": True}
    mock_q_nodes.return_value = {"host": "2.2.2.2", "port": 22}

    app.start(["node", "-e", "router1"])
    mock_update_node.assert_called_once()


@patch("connpy.printer.data")
def test_node_show(mock_data, app):
    app.nodes_list = ["router1"]
    app.config.getitem = MagicMock(return_value={"host": "1.1.1.1"})
    app.start(["node", "-s", "router1"])
    mock_data.assert_called()


@patch("connpy.services.profile_service.ProfileService.list_profiles")
@patch("connpy.connapp.printer.console.print")
def test_profile_list(mock_print, mock_list_profiles, app):
    """Test that 'list profiles' invokes the profile service correctly."""
    mock_list_profiles.return_value = ["default", "office-user"]
    app.start(["list", "profiles"])
    assert mock_list_profiles.call_count >= 2


@patch("connpy.services.node_service.NodeService.list_nodes")
def test_node_list(mock_list_nodes, app):
    """Test that 'list nodes' invokes the node service."""
    mock_list_nodes.return_value = ["router1", "server1"]
    app.start(["list", "nodes"])
    # Should be called during init and during the list command
    assert mock_list_nodes.call_count >= 2


@patch("connpy.services.system_service.SystemService.get_api_status")
def test_api_stop(mock_status, app):
    mock_status.return_value = {"running": True, "pid": "1234"}
    app.services.system.stop_api = MagicMock(return_value=True)
    app.start(["api", "-x"])
    app.services.system.stop_api.assert_called_once()


@patch("connpy.services.profile_service.ProfileService.list_profiles")
@patch("connpy.services.profile_service.ProfileService.add_profile")
@patch("connpy.cli.forms.Forms.questions_profiles")
def test_profile_add(mock_q_profiles, mock_add_profile, mock_list_profiles, app):
    mock_list_profiles.return_value = ["default"]
    mock_q_profiles.return_value = {"host": "test"}
    app.start(["profile", "-a", "new_profile"])
    mock_add_profile.assert_called_once_with("new_profile", {"host": "test"})


@patch("connpy.services.profile_service.ProfileService.get_profile")
@patch("connpy.services.profile_service.ProfileService.delete_profile")
@patch("inquirer.prompt")
def test_profile_del(mock_prompt, mock_delete_profile, mock_get_profile, app):
    mock_get_profile.return_value = {"host": "test"}
    mock_prompt.return_value = {"delete": True}
    app.start(["profile", "-r", "test_profile"])
    mock_delete_profile.assert_called_once_with("test_profile")


@patch("connpy.services.profile_service.ProfileService.get_profile")
@patch("connpy.services.profile_service.ProfileService.update_profile")
@patch("connpy.cli.forms.Forms.questions_edit")
@patch("connpy.cli.forms.Forms.questions_profiles")
def test_profile_mod(mock_q_profiles, mock_q_edit, mock_update_profile, mock_get_profile, app):
    mock_get_profile.return_value = {"host": "test", "port": 22}
    mock_q_edit.return_value = {"host": True}
    mock_q_profiles.return_value = {"id": "test_profile", "host": "new_host", "port": 22}
    app.start(["profile", "-e", "test_profile"])
    mock_update_profile.assert_called_once_with("test_profile", {"id": "test_profile", "host": "new_host", "port": 22})


@patch("connpy.services.profile_service.ProfileService.get_profile")
@patch("connpy.printer.data")
def test_profile_show(mock_data, mock_get_profile, app):
    mock_get_profile.return_value = {"host": "test"}
    app.start(["profile", "-s", "test_profile"])
    mock_data.assert_called()


@patch("connpy.services.node_service.NodeService.move_node")
def test_move(mock_move_node, app):
    app.start(["move", "src_node", "dst_node"])
    mock_move_node.assert_called_once_with("src_node", "dst_node", copy=False)


@patch("connpy.services.node_service.NodeService.move_node")
def test_copy(mock_move_node, app):
    app.start(["copy", "src_node", "dst_node"])
    mock_move_node.assert_called_once_with("src_node", "dst_node", copy=True)


@patch("connpy.cli.forms.Forms.questions_bulk")
@patch("connpy.services.node_service.NodeService.bulk_add")
def test_bulk(mock_bulk_add, mock_q_bulk, app):
    mock_q_bulk.return_value = {"ids": "node1", "host": "host1", "location": ""}
    mock_bulk_add.return_value = 1
    app.start(["bulk"])
    mock_bulk_add.assert_called_once()


@patch("connpy.services.import_export_service.ImportExportService.export_to_file")
def test_export(mock_export, app):
    with pytest.raises(SystemExit):
        app.start(["export", "file.yml", "@folder1"])
    mock_export.assert_called_once_with("file.yml", folders=["@folder1"])


@patch("os.path.exists")
@patch("inquirer.prompt")
@patch("connpy.services.import_export_service.ImportExportService.import_from_file")
def test_import(mock_import, mock_prompt, mock_exists, app):
    mock_exists.return_value = True
    mock_prompt.return_value = {"import": True}
    app.start(["import", "file.yml"])
    mock_import.assert_called_once_with("file.yml")


@patch("connpy.services.ai_service.AIService.ask")
@patch("connpy.connapp.console.status")
def test_ai(mock_status, mock_ask, app):
    mock_ask.return_value = {"response": "AI output", "usage": {"total": 10, "input": 5, "output": 5}}

    app.start(["ai", "--engineer-api-key", "testkey", "how are you"])
    mock_ask.assert_called_once()


@patch("connpy.services.execution_service.ExecutionService.run_commands")
def test_run(mock_run_commands, app):
    app.start(["run", "node1", "command1", "command2"])
    mock_run_commands.assert_called_once()
    assert mock_run_commands.call_args[1]["nodes_filter"] == "node1"
    assert mock_run_commands.call_args[1]["commands"] == ["command1 command2"]


@patch("os.path.exists")
@patch("shutil.copy2")
@patch("connpy.plugins.Plugins.verify_script")
def test_plugin_add(mock_verify, mock_copy, mock_exists, app):
    def mock_exists_side_effect(path):
        if "testplug.py" in path:
            return False
        if "testplug.py.bkp" in path:
            return False
        if "file.py" in path:
            return True
        return True
    mock_exists.side_effect = mock_exists_side_effect
    mock_verify.return_value = None
    app.commands = []
    app.start(["plugin", "--add", "testplug", "file.py"])
    mock_copy.assert_called()


@patch("connpy.services.config_service.ConfigService.update_setting")
def test_config(mock_update_setting, app):
    app.start(["config", "--allow-uppercase", "true"])
    mock_update_setting.assert_called_with("case", True)


@patch("connpy.services.system_service.SystemService.get_api_status")
def test_api_start(mock_status, app):
    mock_status.return_value = {"running": False}
    app.services.system.start_api = MagicMock()
    app.start(["api", "-s", "8080"])
    app.services.system.start_api.assert_called_once_with(port=8080)


@patch("connpy.services.system_service.SystemService.get_api_status")
def test_api_debug(mock_status, app):
    mock_status.return_value = {"running": False}
    app.services.system.debug_api = MagicMock()
    app.start(["api", "-d", "8080"])
    app.services.system.debug_api.assert_called_once_with(port=8080)


@patch("connpy.services.node_service.NodeService.list_folders")
def test_list_folders(mock_list_folders, app):
    mock_list_folders.return_value = ["folder1"]
    app.start(["list", "folders"])
    # Called during init and during the list command
    assert mock_list_folders.call_count >= 2


@patch("connpy.services.config_service.ConfigService.update_setting")
def test_config_various(mock_update_setting, app):
    app.start(["config", "--fzf", "true"])
    mock_update_setting.assert_called_with("fzf", True)
    app.start(["config", "--keepalive", "60"])
    mock_update_setting.assert_called_with("idletime", 60)


@patch("connpy.services.config_service.ConfigService.set_config_folder")
def test_config_folder(mock_set_config_folder, app):
    app.start(["config", "--configfolder", "/new/path"])
    mock_set_config_folder.assert_called_once_with("/new/path")


@patch("connpy.services.plugin_service.PluginService.list_plugins")
def test_plugin_list(mock_list_plugins, app):
    mock_list_plugins.return_value = {"testplug": {"enabled": True}}
    app.start(["plugin", "--list"])
    mock_list_plugins.assert_called_once()


@patch("connpy.services.plugin_service.PluginService.delete_plugin")
def test_plugin_delete(mock_delete, app):
    app.start(["plugin", "--del", "testplug"])
    mock_delete.assert_called_once_with("testplug")


@patch("connpy.services.plugin_service.PluginService.enable_plugin")
def test_plugin_enable(mock_enable, app):
    app.start(["plugin", "--enable", "testplug"])
    mock_enable.assert_called_once_with("testplug")


@patch("connpy.services.plugin_service.PluginService.disable_plugin")
def test_plugin_disable(mock_disable, app):
    app.start(["plugin", "--disable", "testplug"])
    mock_disable.assert_called_once_with("testplug")


@patch("connpy.services.ai_service.AIService.list_sessions")
def test_ai_list(mock_list_sessions, app):
    mock_list_sessions.return_value = [{"id": "1", "title": "t", "created_at": "now", "model": "m"}]
    app.start(["ai", "--list"])
    mock_list_sessions.assert_called_once()


def test_type_node_reserved_word(app):
    app.commands = ["bulk", "ai", "run"]
    with patch("sys.argv", ["connpy", "node", "-a", "bulk"]):
        with pytest.raises(SystemExit) as exc:
            app._type_node("bulk")
    assert exc.value.code == 2

    # In move/copy it also raises because the destination cannot be reserved
    with patch("sys.argv", ["connpy", "mv", "test1", "bulk"]):
        with pytest.raises(SystemExit) as exc:
            app._type_node("bulk")
    assert exc.value.code == 2
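Exit code 2 is what argparse produces when a `type=` callable raises `ArgumentTypeError`, which is one plausible way a reserved-word check like `_type_node` could be wired. A hypothetical sketch (the names `RESERVED` and `type_node` are illustrative, not connpy's actual code):

```python
import argparse

# Hypothetical reserved command names a node id must not shadow.
RESERVED = {"bulk", "ai", "run"}

def type_node(value: str) -> str:
    # argparse calls this per argument; raising ArgumentTypeError makes
    # argparse print an error and exit with status code 2.
    if value in RESERVED:
        raise argparse.ArgumentTypeError(f"'{value}' is a reserved word")
    return value

parser = argparse.ArgumentParser()
parser.add_argument("node", type=type_node)
```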
@@ -1,437 +0,0 @@
"""Tests for connpy.core module: node and nodes classes."""
import json
import os
import io
import re
import pytest
from unittest.mock import patch, MagicMock, PropertyMock
from copy import deepcopy


# =========================================================================
# node.__init__ tests
# =========================================================================

class TestNodeInit:
    def test_basic_init(self):
        """Creates a node with basic attributes."""
        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="pass1", protocol="ssh")
        assert n.unique == "router1"
        assert n.host == "10.0.0.1"
        assert n.user == "admin"
        assert n.protocol == "ssh"
        assert n.password == ["pass1"]

    def test_default_protocol(self):
        """The default protocol is ssh."""
        from connpy.core import node
        n = node("router1", "10.0.0.1")
        assert n.protocol == "ssh"

    def test_password_as_list_of_profiles(self, populated_config):
        """A password list with @profile references resolves correctly."""
        from connpy.core import node
        n = node("router1", "10.0.0.1", password=["@office-user"],
                 config=populated_config)
        assert n.password == ["officepass"]

    def test_password_plain_string(self):
        """A plain string password is wrapped in a list."""
        from connpy.core import node
        n = node("router1", "10.0.0.1", password="mypass")
        assert n.password == ["mypass"]

    def test_node_with_profile(self, populated_config):
        """Resolves @profile references for user."""
        from connpy.core import node
        n = node("test1", "10.0.0.1", user="@office-user", password="plain",
                 config=populated_config)
        assert n.user == "officeadmin"

    def test_node_tags(self):
        """Tags are stored correctly."""
        from connpy.core import node
        tags = {"os": "cisco_ios", "prompt": r"Router#"}
        n = node("router1", "10.0.0.1", tags=tags)
        assert n.tags["os"] == "cisco_ios"


# =========================================================================
# Command generation tests
# =========================================================================

class TestCommandGeneration:
    def _make_node(self, **kwargs):
        from connpy.core import node
        defaults = {
            "unique": "test", "host": "10.0.0.1", "protocol": "ssh",
            "user": "admin", "password": "", "port": "", "options": "",
            "jumphost": "", "tags": "", "logs": ""
        }
        defaults.update(kwargs)
        return node(defaults.pop("unique"), defaults.pop("host"), **defaults)

    def test_ssh_cmd_basic(self):
        n = self._make_node()
        cmd = n._get_cmd()
        assert "ssh" in cmd
        assert "admin@10.0.0.1" in cmd

    def test_ssh_cmd_port(self):
        n = self._make_node(port="2222")
        cmd = n._get_cmd()
        assert "-p 2222" in cmd

    def test_ssh_cmd_options(self):
        n = self._make_node(options="-o StrictHostKeyChecking=no")
        cmd = n._get_cmd()
        assert "-o StrictHostKeyChecking=no" in cmd

    def test_sftp_cmd_port(self):
        n = self._make_node(protocol="sftp", port="2222")
        cmd = n._get_cmd()
        assert "-P 2222" in cmd  # SFTP uses uppercase P

    def test_telnet_cmd(self):
        n = self._make_node(protocol="telnet", port="23")
        cmd = n._get_cmd()
        assert "telnet 10.0.0.1" in cmd
        assert "23" in cmd

    def test_ssm_cmd_basic(self):
        n = self._make_node(protocol="ssm", host="i-12345")
        cmd = n._get_cmd()
        assert "aws ssm start-session" in cmd
        assert "--target i-12345" in cmd

    def test_ssm_cmd_tags(self):
        n = self._make_node(protocol="ssm", host="i-12345", tags={"region": "us-west-2", "profile": "prod"})
        cmd = n._get_cmd()
        assert "--region us-west-2" in cmd
        assert "--profile prod" in cmd

    def test_ssm_cmd_options(self):
        n = self._make_node(protocol="ssm", host="i-12345", options="--document-name AWS-StartInteractiveCommand")
        cmd = n._get_cmd()
        assert "--document-name AWS-StartInteractiveCommand" in cmd

    def test_kubectl_cmd(self):
        n = self._make_node(protocol="kubectl", host="my-pod", tags={"kube_command": "/bin/sh"})
        cmd = n._get_cmd()
        assert "kubectl exec" in cmd
        assert "my-pod" in cmd
        assert "/bin/sh" in cmd

    def test_kubectl_cmd_default_command(self):
        n = self._make_node(protocol="kubectl", host="my-pod")
        cmd = n._get_cmd()
        assert "/bin/bash" in cmd

    def test_docker_cmd(self):
        n = self._make_node(protocol="docker", host="my-container",
                            tags={"docker_command": "/bin/sh"})
        cmd = n._get_cmd()
        assert "docker" in cmd
        assert "my-container" in cmd
        assert "/bin/sh" in cmd

    def test_invalid_protocol_raises(self):
        n = self._make_node(protocol="invalid_proto")
        with pytest.raises(SystemExit) as exc:
            n._get_cmd()
        assert exc.value.code == 1

    def test_ssh_cmd_no_user(self):
        n = self._make_node(user="")
        cmd = n._get_cmd()
        assert "10.0.0.1" in cmd
        assert "@" not in cmd  # No user@ prefix


# =========================================================================
# Password decryption tests
# =========================================================================

class TestPasswordDecryption:
    def test_passtx_plaintext(self, config):
        """Plaintext passwords pass through unchanged."""
        from connpy.core import node
        n = node("test", "10.0.0.1", password="plainpass", config=config)
        result = n._passtx(["plainpass"])
        assert result == ["plainpass"]

    def test_passtx_encrypted(self, config):
        """Encrypted passwords get decrypted."""
        from connpy.core import node
        encrypted = config.encrypt("mysecret")
        n = node("test", "10.0.0.1", password=encrypted, config=config)
        result = n._passtx([encrypted])
        assert result == ["mysecret"]

    def test_passtx_missing_key_raises(self):
        """A missing key file raises an error."""
        from connpy.core import node
        n = node("test", "10.0.0.1", password="pass")
        # A password formatted as encrypted but no valid key
        with pytest.raises((ValueError, Exception)):
            n._passtx(["""b'corrupted_encrypted_data'"""], keyfile="/nonexistent")


# =========================================================================
# Log handling tests
# =========================================================================

class TestLogHandling:
    def test_logfile_variable_substitution(self):
        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", protocol="ssh", port="22",
                 logs="/logs/${unique}_${host}_${user}")
        result = n._logfile()
        assert result == "/logs/router1_10.0.0.1_admin"

    def test_logfile_date_substitution(self):
        from connpy.core import node
        import datetime
        n = node("router1", "10.0.0.1", logs="/logs/${date '%Y'}")
        result = n._logfile()
        assert datetime.datetime.now().strftime("%Y") in result

    def test_logclean_removes_ansi(self):
        from connpy.core import node
        n = node("test", "10.0.0.1")
        dirty = "\x1B[32mgreen text\x1B[0m"
        clean = n._logclean(dirty, var=True)
        assert "\x1B" not in clean
        assert "green text" in clean

    def test_logclean_removes_backspaces(self):
        from connpy.core import node
        n = node("test", "10.0.0.1")
        dirty = "type\bo"
        clean = n._logclean(dirty, var=True)
        assert "\b" not in clean
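The two `_logclean` behaviors tested above (stripping ANSI escapes and applying backspaces) can be sketched with a regex plus a small character loop. A hypothetical standalone version for illustration, not connpy's actual `_logclean`:

```python
import re

# Matches CSI escape sequences such as "\x1b[32m" (colour) and "\x1b[0m" (reset).
ANSI_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def clean_log(text: str) -> str:
    # First strip ANSI colour codes, then apply backspaces so that
    # "type\bo" (terminal echo of a correction) becomes "typo".
    text = ANSI_RE.sub("", text)
    out = []
    for ch in text:
        if ch == "\b":
            if out:
                out.pop()  # backspace deletes the previous character
        else:
            out.append(ch)
    return "".join(out)
```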


# =========================================================================
# run() and test() with mock pexpect
# =========================================================================

class TestNodeRun:
    def _make_connected_node(self, mock_pexpect_obj, **kwargs):
        """Create a node and mock its _connect to succeed."""
        from connpy.core import node
        defaults = {
            "unique": "router1", "host": "10.0.0.1",
            "protocol": "ssh", "user": "admin", "password": ""
        }
        defaults.update(kwargs)
        n = node(defaults.pop("unique"), defaults.pop("host"), **defaults)
        return n

    def test_run_returns_output(self, mock_pexpect):
        """run() returns string output."""
        child = mock_pexpect["child"]
        pexp = mock_pexpect["pexpect"]

        # Simulate: connect succeeds, command runs, prompt found
        child.expect.return_value = 9  # prompt index for ssh
        child.logfile_read = None

        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="")

        # Mock _connect to return True and set up the child
        with patch.object(n, '_connect', return_value=True):
            n.child = child
            log_buffer = io.BytesIO(b"show version\nRouter v1.0\nrouter#")
            n.mylog = log_buffer
            child.logfile_read = log_buffer

            with patch.object(n, '_logclean', return_value="Router v1.0"):
                output = n.run(["show version"])

        assert n.status == 0
        assert output == "Router v1.0"

    def test_run_status_1_on_failure(self, mock_pexpect):
        """Status is 1 when the connection fails."""
        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="")

        with patch.object(n, '_connect', return_value="Connection failed code: 1\nrefused"):
            output = n.run(["show version"])

        assert n.status == 1
        assert "refused" in output

    def test_run_with_variables(self, mock_pexpect):
        """Variables get substituted in commands."""
        child = mock_pexpect["child"]
        child.expect.return_value = 9

        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="")

        sent_commands = []
        child.sendline.side_effect = lambda cmd: sent_commands.append(cmd)

        with patch.object(n, '_connect', return_value=True):
            n.child = child
            n.mylog = io.BytesIO(b"output")
            with patch.object(n, '_logclean', return_value="output"):
                n.run(["show ip route {subnet}"], vars={"subnet": "10.0.0.0/24"})

        assert "show ip route 10.0.0.0/24" in sent_commands

    def test_run_saves_to_folder(self, mock_pexpect, tmp_path):
        """The folder param saves a log file."""
        child = mock_pexpect["child"]
        child.expect.return_value = 9

        from connpy.core import node
        n = node("router1", "10.0.0.1", user="admin", password="")

        with patch.object(n, '_connect', return_value=True):
            n.child = child
            n.mylog = io.BytesIO(b"log output")
            with patch.object(n, '_logclean', return_value="log output"):
                n.run(["show version"], folder=str(tmp_path))

        log_files = list(tmp_path.glob("router1_*.txt"))
        assert len(log_files) == 1
        assert "log output" in log_files[0].read_text()


class TestNodeTest:
|
||||
def test_test_returns_dict(self, mock_pexpect):
|
||||
"""test() returns dict of results."""
|
||||
child = mock_pexpect["child"]
|
||||
child.expect.return_value = 0 # prompt found (index 0 in test expects)
|
||||
|
||||
from connpy.core import node
|
||||
n = node("router1", "10.0.0.1", user="admin", password="")
|
||||
|
||||
with patch.object(n, '_connect', return_value=True):
|
||||
n.child = child
|
||||
n.mylog = io.BytesIO(b"1.1.1.1 is up")
|
||||
with patch.object(n, '_logclean', return_value="1.1.1.1 is up"):
|
||||
result = n.test(["ping 1.1.1.1"], "1.1.1.1")
|
||||
|
||||
assert isinstance(result, dict)
|
||||
assert result.get("1.1.1.1") == True
|
||||
|
||||
def test_test_expected_not_found(self, mock_pexpect):
|
||||
"""Expected text not found returns False."""
|
||||
child = mock_pexpect["child"]
|
||||
child.expect.return_value = 0
|
||||
|
||||
from connpy.core import node
|
||||
n = node("router1", "10.0.0.1", user="admin", password="")
|
||||
|
||||
with patch.object(n, '_connect', return_value=True):
|
||||
n.child = child
|
||||
n.mylog = io.BytesIO(b"some other output")
|
||||
with patch.object(n, '_logclean', return_value="some other output"):
|
||||
result = n.test(["ping 1.1.1.1"], "1.1.1.1")
|
||||
|
||||
assert isinstance(result, dict)
|
||||
assert result.get("1.1.1.1") == False
|
||||
|
||||
|
||||
# =========================================================================
|
||||
# nodes (parallel) tests
|
||||
# =========================================================================
|
||||
|
||||
class TestNodes:
|
||||
def test_nodes_init(self):
|
||||
"""Creates list of node objects."""
|
||||
from connpy.core import nodes
|
||||
nodes_dict = {
|
||||
"r1": {"host": "10.0.0.1", "user": "admin", "password": ""},
|
||||
"r2": {"host": "10.0.0.2", "user": "admin", "password": ""}
|
||||
}
|
||||
mynodes = nodes(nodes_dict)
|
||||
assert len(mynodes.nodelist) == 2
|
||||
assert hasattr(mynodes, "r1")
|
||||
assert hasattr(mynodes, "r2")
|
||||
|
||||
def test_nodes_run_parallel(self):
|
||||
"""run() executes on all nodes and returns dict."""
|
||||
from connpy.core import nodes
|
||||
|
||||
nodes_dict = {
|
||||
"r1": {"host": "10.0.0.1", "user": "admin", "password": ""},
|
||||
"r2": {"host": "10.0.0.2", "user": "admin", "password": ""}
|
||||
}
|
||||
mynodes = nodes(nodes_dict)
|
||||
|
||||
# Mock run on each node — must set output AND status on the node
|
||||
for n in mynodes.nodelist:
|
||||
original_node = n # capture by value
|
||||
def make_mock(node_ref):
|
||||
def mock_run(commands, **kwargs):
|
||||
node_ref.output = f"output from {node_ref.unique}"
|
||||
node_ref.status = 0
|
||||
return mock_run
|
||||
n.run = make_mock(n)
|
||||
|
||||
result = mynodes.run(["show version"])
|
||||
assert "r1" in result
|
||||
assert "r2" in result
|
||||
|
||||
def test_nodes_splitlist(self):
|
||||
"""_splitlist divides list correctly."""
|
||||
from connpy.core import nodes
|
||||
mynodes = nodes({"r1": {"host": "1.1.1.1", "user": "", "password": ""}})
|
||||
chunks = list(mynodes._splitlist([1, 2, 3, 4, 5], 2))
|
||||
assert chunks == [[1, 2], [3, 4], [5]]
|
||||
|
||||
def test_nodes_run_with_vars(self):
|
||||
"""Variables per node and __global__ work."""
|
||||
from connpy.core import nodes
|
||||
|
||||
nodes_dict = {
|
||||
"r1": {"host": "10.0.0.1", "user": "admin", "password": ""},
|
||||
}
|
||||
mynodes = nodes(nodes_dict)
|
||||
|
||||
captured_vars = {}
|
||||
|
||||
def mock_run(commands, vars=None, **kwargs):
|
||||
captured_vars.update(vars or {})
|
||||
mynodes.r1.output = "ok"
|
||||
mynodes.r1.status = 0
|
||||
|
||||
mynodes.r1.run = mock_run
|
||||
|
||||
variables = {
|
||||
"__global__": {"mask": "255.255.255.0"},
|
||||
"r1": {"ip": "10.0.0.1"}
|
||||
}
|
||||
mynodes.run(["show ip"], vars=variables)
|
||||
assert captured_vars.get("mask") == "255.255.255.0"
|
||||
assert captured_vars.get("ip") == "10.0.0.1"
|
||||
|
||||
def test_nodes_on_complete_callback(self):
|
||||
"""on_complete callback fires per node."""
|
||||
from connpy.core import nodes
|
||||
|
||||
nodes_dict = {
|
||||
"r1": {"host": "10.0.0.1", "user": "admin", "password": ""},
|
||||
}
|
||||
mynodes = nodes(nodes_dict)
|
||||
|
||||
completed = []
|
||||
|
||||
def mock_run(commands, **kwargs):
|
||||
mynodes.r1.output = "done"
|
||||
mynodes.r1.status = 0
|
||||
|
||||
mynodes.r1.run = mock_run
|
||||
|
||||
def on_done(unique, output, status):
|
||||
completed.append(unique)
|
||||
|
||||
mynodes.run(["show version"], on_complete=on_done)
|
||||
assert "r1" in completed
|
||||
@@ -1,55 +0,0 @@
import pytest
from unittest.mock import MagicMock, patch
from connpy.services.execution_service import ExecutionService


def test_run_commands_callback(populated_config):
    """Test that run_commands correctly passes on_node_complete to the executor."""
    service = ExecutionService(populated_config)

    # Mock the Nodes class in connpy.services.execution_service
    with patch("connpy.services.execution_service.Nodes") as MockNodes:
        mock_executor = MockNodes.return_value
        mock_executor.run.return_value = {"router1": "output"}

        callback = MagicMock()

        service.run_commands(
            nodes_filter="router1",
            commands=["show version"],
            on_node_complete=callback
        )

        # Verify executor.run was called with on_complete=callback
        # Note: ExecutionService calls executor.run(..., on_complete=on_node_complete, ...)
        mock_executor.run.assert_called_once()
        args, kwargs = mock_executor.run.call_args
        assert kwargs["on_complete"] == callback


def test_test_commands_callback_regression(populated_config):
    """
    Test that test_commands correctly passes on_node_complete to the executor.
    Regression: ExecutionService.test_commands currently ignores on_node_complete.
    """
    service = ExecutionService(populated_config)

    with patch("connpy.services.execution_service.Nodes") as MockNodes:
        mock_executor = MockNodes.return_value
        mock_executor.test.return_value = {"router1": {"PASS": True}}

        callback = MagicMock()

        service.test_commands(
            nodes_filter="router1",
            commands=["show version"],
            expected=["12.4"],
            on_node_complete=callback
        )

        # This is expected to FAIL because ExecutionService.test_commands
        # doesn't pass on_complete to executor.test
        mock_executor.test.assert_called_once()
        args, kwargs = mock_executor.test.call_args

        # We expect 'on_complete' to be in kwargs and equal to our callback
        assert "on_complete" in kwargs, "on_complete parameter missing in call to executor.test"
        assert kwargs["on_complete"] == callback
@@ -1,202 +0,0 @@
import pytest
import grpc
import json
import os
import threading
from unittest.mock import MagicMock, patch
from concurrent import futures
from connpy.grpc_layer import server, connpy_pb2, connpy_pb2_grpc, stubs
from connpy.services.exceptions import ConnpyError


class MockContext:
    def abort(self, code, details):
        raise Exception(f"gRPC Abort: {code} - {details}")


# --- UNIT TESTS (with mocks) ---

class TestNodeServicerNaming:
    @pytest.fixture
    def servicer(self, populated_config):
        return server.NodeServicer(populated_config)

    @patch("connpy.core.node")
    def test_interact_node_uses_passed_name(self, mock_node, servicer):
        # Setup request with custom name
        params = {"name": "custom-node-name@test", "host": "1.2.3.4", "protocol": "ssh"}
        request = connpy_pb2.InteractRequest(
            id="dynamic",
            connection_params_json=json.dumps(params)
        )

        # Mock node to allow _connect
        mock_node_instance = MagicMock()
        mock_node_instance._connect.return_value = True
        mock_node.return_value = mock_node_instance

        # We only need the first iteration of the generator to check naming
        gen = servicer.interact_node(iter([request]), MockContext())
        next(gen)  # Skip the success response

        # Verify that node() was called with the custom name
        mock_node.assert_called()
        found = False
        for call in mock_node.call_args_list:
            if call.args[0] == "custom-node-name@test":
                found = True
                break
        assert found

    @patch("connpy.core.node")
    def test_interact_node_fallback_naming(self, mock_node, servicer):
        # Setup request without custom name but with host
        params = {"host": "my-instance", "protocol": "ssm"}
        request = connpy_pb2.InteractRequest(
            id="dynamic",
            connection_params_json=json.dumps(params)
        )

        mock_node_instance = MagicMock()
        mock_node_instance._connect.return_value = True
        mock_node.return_value = mock_node_instance

        gen = servicer.interact_node(iter([request]), MockContext())
        next(gen)

        # Verify fallback name: dynamic-{host}@remote
        found = False
        for call in mock_node.call_args_list:
            if call.args[0] == "dynamic-my-instance@remote":
                found = True
                break
        assert found


class TestStubsMessageFormatting:
    @patch("termios.tcsetattr")
    @patch("termios.tcgetattr")
    @patch("tty.setraw")
    @patch("os.read")
    @patch("select.select")
    def test_connect_dynamic_msg_formatting_ssm(self, mock_select, mock_read, mock_setraw, mock_getattr, mock_setattr):
        from connpy.grpc_layer.stubs import NodeStub

        mock_getattr.return_value = [0, 0, 0, 0, 0, 0, [0] * 32]
        mock_channel = MagicMock()
        stub = NodeStub(mock_channel, "localhost:8048")

        mock_resp = MagicMock()
        mock_resp.success = True
        mock_resp.stdout_data = b''
        stub.stub.interact_node.return_value = iter([mock_resp])
        with patch("connpy.printer.success") as mock_success:
            with patch("sys.stdin.fileno", return_value=0):
                mock_select.return_value = ([], [], [])
                params = {"protocol": "ssm", "host": "i-12345", "name": "my-ssm-node@aws"}

                with patch("select.select", side_effect=KeyboardInterrupt):
                    try:
                        stub.connect_dynamic(params)
                    except KeyboardInterrupt:
                        pass

        mock_success.assert_called()
        msg = mock_success.call_args[0][0]
        assert "Connected to my-ssm-node@aws" in msg
        assert "at i-12345" in msg
        assert ":22" not in msg
        assert "via: ssm" in msg


# --- INTEGRATION TESTS (Real Server/Stub Communication) ---

class TestGRPCIntegration:
    @pytest.fixture
    def grpc_server(self, populated_config):
        """Starts a local gRPC server for integration testing."""
        srv = grpc.server(futures.ThreadPoolExecutor(max_workers=5))

        # Register services
        connpy_pb2_grpc.add_NodeServiceServicer_to_server(server.NodeServicer(populated_config), srv)
        connpy_pb2_grpc.add_ProfileServiceServicer_to_server(server.ProfileServicer(populated_config), srv)
        connpy_pb2_grpc.add_ConfigServiceServicer_to_server(server.ConfigServicer(populated_config), srv)
        connpy_pb2_grpc.add_ExecutionServiceServicer_to_server(server.ExecutionServicer(populated_config), srv)
        connpy_pb2_grpc.add_ImportExportServiceServicer_to_server(server.ImportExportServicer(populated_config), srv)

        port = srv.add_insecure_port('127.0.0.1:0')
        srv.start()
        yield f"127.0.0.1:{port}"
        srv.stop(0)

    @pytest.fixture
    def channel(self, grpc_server):
        with grpc.insecure_channel(grpc_server) as channel:
            yield channel

    @pytest.fixture
    def node_stub(self, channel):
        return stubs.NodeStub(channel, "localhost")

    @pytest.fixture
    def profile_stub(self, channel):
        return stubs.ProfileStub(channel, "localhost")

    @pytest.fixture
    def config_stub(self, channel):
        return stubs.ConfigStub(channel, "localhost")

    def test_list_nodes_integration(self, node_stub):
        nodes = node_stub.list_nodes()
        assert "router1" in nodes
        assert "server1@office" in nodes

    def test_get_node_details_integration(self, node_stub):
        details = node_stub.get_node_details("router1")
        assert details["host"] == "10.0.0.1"

    def test_node_not_found_integration(self, node_stub):
        with pytest.raises(ConnpyError) as exc:
            node_stub.get_node_details("non-existent")
        assert "Node 'non-existent' not found." in str(exc.value)

    def test_list_profiles_integration(self, profile_stub):
        profiles = profile_stub.list_profiles()
        assert "office-user" in profiles

    def test_get_settings_integration(self, config_stub):
        settings = config_stub.get_settings()
        assert "idletime" in settings

    def test_update_setting_integration(self, config_stub):
        config_stub.update_setting("idletime", 99)
        settings = config_stub.get_settings()
        assert settings["idletime"] == 99

    def test_add_delete_node_integration(self, node_stub):
        node_stub.add_node("integration-test-node", {"host": "9.9.9.9"})
        assert "integration-test-node" in node_stub.list_nodes()
        node_stub.delete_node("integration-test-node")
        assert "integration-test-node" not in node_stub.list_nodes()

    def test_import_yaml_integration(self, channel, node_stub):
        import yaml
        from connpy.grpc_layer import stubs
        stub = stubs.ImportExportStub(channel, "localhost")

        # ImportExportService expects a flat dict of nodes, not a full config structure
        inventory = {
            "imported-node": {"host": "8.8.8.8", "protocol": "ssh", "type": "connection"}
        }
        yaml_content = yaml.dump(inventory)

        import tempfile
        with tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) as f:
            f.write(yaml_content)
            temp_path = f.name

        try:
            stub.import_from_file(temp_path)
            # Verify the node was imported and is visible via NodeStub
            nodes = node_stub.list_nodes()
            assert "imported-node" in nodes
        finally:
            if os.path.exists(temp_path):
                os.remove(temp_path)
@@ -1,216 +0,0 @@
"""Tests for connpy.hooks module — MethodHook and ClassHook."""
import pytest
from connpy.hooks import MethodHook, ClassHook


# =========================================================================
# MethodHook Tests
# =========================================================================

class TestMethodHook:
    def test_basic_call(self):
        """Decorated function executes normally."""
        @MethodHook
        def add(a, b):
            return a + b
        assert add(2, 3) == 5

    def test_pre_hook_modifies_args(self):
        """Pre-hook can modify arguments before execution."""
        @MethodHook
        def greet(name):
            return f"Hello {name}"

        def uppercase_hook(name):
            return (name.upper(),), {}

        greet.register_pre_hook(uppercase_hook)
        assert greet("world") == "Hello WORLD"

    def test_post_hook_modifies_result(self):
        """Post-hook can modify the return value."""
        @MethodHook
        def compute(x):
            return x * 2

        def double_result(*args, **kwargs):
            return kwargs["result"] * 2

        compute.register_post_hook(double_result)
        assert compute(5) == 20  # 5*2=10, then 10*2=20

    def test_multiple_pre_hooks_order(self):
        """Pre-hooks execute in registration order."""
        calls = []

        @MethodHook
        def func(x):
            return x

        def hook1(x):
            calls.append("hook1")
            return (x,), {}

        def hook2(x):
            calls.append("hook2")
            return (x,), {}

        func.register_pre_hook(hook1)
        func.register_pre_hook(hook2)
        func(1)
        assert calls == ["hook1", "hook2"]

    def test_multiple_post_hooks_order(self):
        """Post-hooks execute in registration order."""
        calls = []

        @MethodHook
        def func(x):
            return x

        def hook1(*args, **kwargs):
            calls.append("hook1")
            return kwargs["result"]

        def hook2(*args, **kwargs):
            calls.append("hook2")
            return kwargs["result"]

        func.register_post_hook(hook1)
        func.register_post_hook(hook2)
        func(1)
        assert calls == ["hook1", "hook2"]

    def test_pre_hook_exception_continues(self, capsys):
        """If a pre-hook raises, the function still executes."""
        @MethodHook
        def func(x):
            return x + 1

        def bad_hook(x):
            raise RuntimeError("broken hook")

        func.register_pre_hook(bad_hook)
        # Should not raise — the hook error is printed but execution continues
        result = func(5)
        assert result == 6

    def test_post_hook_exception_continues(self, capsys):
        """If a post-hook raises, the result is still returned."""
        @MethodHook
        def func(x):
            return x + 1

        def bad_hook(*args, **kwargs):
            raise RuntimeError("broken post hook")

        func.register_post_hook(bad_hook)
        result = func(5)
        assert result == 6

    def test_method_hook_as_instance_method(self):
        """MethodHook works as a descriptor on a class."""
        class MyClass:
            @MethodHook
            def double(self, x):
                return x * 2

        obj = MyClass()
        assert obj.double(5) == 10

    def test_method_hook_instance_hook_registration(self):
        """Can register hooks via instance method access."""
        class MyClass:
            @MethodHook
            def process(self, x):
                return x

        def add_ten(*args, **kwargs):
            return kwargs["result"] + 10

        obj = MyClass()
        obj.process.register_post_hook(add_ten)
        assert obj.process(5) == 15


# =========================================================================
# ClassHook Tests
# =========================================================================

class TestClassHook:
    def test_creates_instance(self):
        """ClassHook still creates instances normally."""
        @ClassHook
        class MyClass:
            def __init__(self, value):
                self.value = value

        obj = MyClass(42)
        assert obj.value == 42

    def test_modify_future_instances(self):
        """modify() affects all future instances."""
        @ClassHook
        class MyClass:
            def __init__(self):
                self.x = 1

        def set_x_to_99(instance):
            instance.x = 99

        MyClass.modify(set_x_to_99)
        obj = MyClass()
        assert obj.x == 99

    def test_modify_does_not_affect_past(self):
        """modify() does not affect already-created instances."""
        @ClassHook
        class MyClass:
            def __init__(self):
                self.x = 1

        old_obj = MyClass()

        def set_x_to_99(instance):
            instance.x = 99

        MyClass.modify(set_x_to_99)
        assert old_obj.x == 1  # Not affected
        assert MyClass().x == 99  # New instance IS affected

    def test_instance_modify(self):
        """instance.modify() only affects that specific instance."""
        @ClassHook
        class MyClass:
            def __init__(self):
                self.x = 1

        obj1 = MyClass()
        obj2 = MyClass()

        obj1.modify(lambda inst: setattr(inst, 'x', 999))
        assert obj1.x == 999
        assert obj2.x == 1

    def test_multiple_deferred_hooks(self):
        """Multiple modify() calls apply in order."""
        @ClassHook
        class MyClass:
            def __init__(self):
                self.log = []

        MyClass.modify(lambda inst: inst.log.append("first"))
        MyClass.modify(lambda inst: inst.log.append("second"))

        obj = MyClass()
        assert obj.log == ["first", "second"]

    def test_getattr_delegation(self):
        """ClassHook delegates attribute access to the wrapped class."""
        @ClassHook
        class MyClass:
            class_var = "hello"

            def __init__(self):
                pass

        assert MyClass.class_var == "hello"
@@ -1,66 +0,0 @@
import pytest
from connpy.services.node_service import NodeService
from connpy.services.exceptions import NodeNotFoundError, NodeAlreadyExistsError


def test_list_nodes_filtering_parity(populated_config):
    """
    Test list_nodes filtering parity with the legacy behavior: plain
    substring filters must match (re.search also covers literal substrings),
    and regex-specific patterns like '^router' must work as well.
    """
    service = NodeService(populated_config)

    # '1' should match all nodes containing '1':
    # router1, server1@office, db1@datacenter@office
    nodes = service.list_nodes(filter_str="1")
    assert len(nodes) == 3
    assert "router1" in nodes
    assert "server1@office" in nodes
    assert "db1@datacenter@office" in nodes

    # Test regex-specific characters.
    # NodeService should use re.search, so '^router' will match 'router1'.
    nodes_regex = service.list_nodes(filter_str="^router")

    assert "router1" in nodes_regex


def test_list_nodes_dynamic_formatting(populated_config):
    """
    Test that list_nodes supports dynamic formatting for any node attribute.
    Regression: NodeService currently has hardcoded support for name, location, host.
    """
    service = NodeService(populated_config)

    # Try to format using 'user' and 'protocol', which are NOT in the
    # hardcoded list (name, location, host)
    format_str = "{name} -> {user}@{host} ({protocol})"

    # router1: host=10.0.0.1, user=admin, protocol=ssh
    # Expected: "router1 -> admin@10.0.0.1 (ssh)"

    formatted = service.list_nodes(filter_str="router1", format_str=format_str)

    assert len(formatted) == 1
    # This will FAIL if it only supports {name}, {location}, {host}
    assert formatted[0] == "router1 -> admin@10.0.0.1 (ssh)"


def test_node_editing_parity(populated_config):
    """
    Test that add_node raises NodeAlreadyExistsError for an existing node.
    Regression: connapp._mod calls add_node instead of update_node, so editing
    an existing node currently fails with this error.
    """
    service = NodeService(populated_config)

    # router1 already exists in populated_config
    with pytest.raises(NodeAlreadyExistsError):
        service.add_node("router1", {"host": "1.1.1.1"})


def test_list_nodes_case_sensitivity(populated_config):
    """Test that filtering respects the case setting in config."""
    service = NodeService(populated_config)

    # Default case is False (case-insensitive)
    nodes = service.list_nodes(filter_str="ROUTER")
    assert "router1" in nodes
@@ -1,327 +0,0 @@
"""Tests for connpy.plugins module."""
import os
import textwrap
import pytest
from connpy.plugins import Plugins


# ---------------------------------------------------------------------------
# Helper: write a plugin script to a file
# ---------------------------------------------------------------------------
def _write_plugin(path, code):
    """Write dedented code to a file."""
    with open(path, "w") as f:
        f.write(textwrap.dedent(code))


# =========================================================================
# verify_script tests
# =========================================================================

class TestVerifyScript:
    def test_valid_parser_entrypoint(self, tmp_path):
        p = tmp_path / "good.py"
        _write_plugin(p, """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser()

            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass
            """)
        plugins = Plugins()
        assert plugins.verify_script(str(p)) == False

    def test_valid_preload_only(self, tmp_path):
        p = tmp_path / "preload.py"
        _write_plugin(p, """\
            class Preload:
                def __init__(self, connapp):
                    pass
            """)
        plugins = Plugins()
        assert plugins.verify_script(str(p)) == False

    def test_valid_all_three(self, tmp_path):
        p = tmp_path / "all.py"
        _write_plugin(p, """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser()

            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass

            class Preload:
                def __init__(self, connapp):
                    pass
            """)
        plugins = Plugins()
        assert plugins.verify_script(str(p)) == False

    def test_parser_without_entrypoint(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser()
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result  # Should be a truthy error string
        assert "Entrypoint" in result

    def test_entrypoint_without_parser(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "Parser" in result

    def test_no_valid_class(self, tmp_path):
        p = tmp_path / "empty.py"
        _write_plugin(p, """\
            def some_function():
                pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "No valid class" in result

    def test_parser_missing_self_parser(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            class Parser:
                def __init__(self):
                    self.something = "not parser"

            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "self.parser" in result

    def test_entrypoint_wrong_args(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser()

            class Entrypoint:
                def __init__(self, args):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "Entrypoint" in result

    def test_preload_wrong_args(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            class Preload:
                def __init__(self, connapp, extra):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "Preload" in result

    def test_disallowed_top_level(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            MY_GLOBAL = "not allowed"

            class Preload:
                def __init__(self, connapp):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "not allowed" in result.lower() or "Plugin can only have" in result

    def test_syntax_error(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            def broken(
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "Syntax error" in result

    def test_if_name_main_allowed(self, tmp_path):
        p = tmp_path / "good.py"
        _write_plugin(p, """\
            class Preload:
                def __init__(self, connapp):
                    pass

            if __name__ == "__main__":
                print("standalone")
            """)
        plugins = Plugins()
        assert plugins.verify_script(str(p)) == False

    def test_other_if_not_allowed(self, tmp_path):
        p = tmp_path / "bad.py"
        _write_plugin(p, """\
            import sys

            if sys.platform == "linux":
                pass

            class Preload:
                def __init__(self, connapp):
                    pass
            """)
        plugins = Plugins()
        result = plugins.verify_script(str(p))
        assert result
        assert "__name__" in result


# =========================================================================
# Import and loading tests
# =========================================================================

class TestPluginLoading:
    def test_import_from_path(self, tmp_path):
        p = tmp_path / "mymod.py"
        _write_plugin(p, """\
            MY_VAR = 42
            """)
        plugins = Plugins()
        module = plugins._import_from_path(str(p))
        assert module.MY_VAR == 42

    def test_import_plugins_to_argparse(self, tmp_path):
        """Valid plugins get loaded into argparse."""
        import argparse

        plugin_dir = tmp_path / "plugins"
        plugin_dir.mkdir()
        _write_plugin(plugin_dir / "myplugin.py", """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser(description="My plugin")

            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass
            """)

        parser = argparse.ArgumentParser()
        subparsers = parser.add_subparsers()

        plugins = Plugins()
        plugins._import_plugins_to_argparse(str(plugin_dir), subparsers)

        assert "myplugin" in plugins.plugins
        assert "myplugin" in plugins.plugin_parsers

    def test_plugin_name_collision(self, tmp_path):
        """Plugin with same name as existing subcommand is skipped."""
        import argparse

        plugin_dir = tmp_path / "plugins"
        plugin_dir.mkdir()
        _write_plugin(plugin_dir / "existcmd.py", """\
            import argparse

            class Parser:
                def __init__(self):
                    self.parser = argparse.ArgumentParser()

            class Entrypoint:
                def __init__(self, args, parser, connapp):
                    pass
            """)

        parser = argparse.ArgumentParser()
        subparsers = parser.add_subparsers()
        subparsers.add_parser("existcmd")  # Already taken

        plugins = Plugins()
        plugins._import_plugins_to_argparse(str(plugin_dir), subparsers)

        assert "existcmd" not in plugins.plugins

    def test_preload_registration(self, tmp_path):
        """Preload class gets registered in preloads dict."""
        import argparse

        plugin_dir = tmp_path / "plugins"
        plugin_dir.mkdir()
        _write_plugin(plugin_dir / "preloader.py", """\
            class Preload:
                def __init__(self, connapp):
                    pass
            """)

        parser = argparse.ArgumentParser()
        subparsers = parser.add_subparsers()

        plugins = Plugins()
        plugins._import_plugins_to_argparse(str(plugin_dir), subparsers)

        assert "preloader" in plugins.preloads

    def test_invalid_plugin_skipped(self, tmp_path, capsys):
        """Invalid plugin is skipped with error message."""
        import argparse

        plugin_dir = tmp_path / "plugins"
        plugin_dir.mkdir()
        _write_plugin(plugin_dir / "badplugin.py", """\
            MY_GLOBAL = "bad"
            """)

        parser = argparse.ArgumentParser()
        subparsers = parser.add_subparsers()

        plugins = Plugins()
        plugins._import_plugins_to_argparse(str(plugin_dir), subparsers)

        assert "badplugin" not in plugins.plugins
        captured = capsys.readouterr()
        assert "Failed to load plugin" in captured.err or "Failed to load plugin" in captured.out

    def test_empty_directory(self, tmp_path):
        """Empty directory doesn't cause errors."""
|
||||
import argparse
|
||||
|
||||
plugin_dir = tmp_path / "plugins"
|
||||
plugin_dir.mkdir()
|
||||
|
||||
parser = argparse.ArgumentParser()
|
||||
subparsers = parser.add_subparsers()
|
||||
|
||||
plugins = Plugins()
|
||||
plugins._import_plugins_to_argparse(str(plugin_dir), subparsers)
|
||||
|
||||
assert len(plugins.plugins) == 0
|
||||
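The `_import_from_path` helper exercised above can be approximated with `importlib.util`. This is a hedged sketch of the standard stdlib pattern, not connpy's actual implementation; the helper name and signature here are assumptions:

```python
import importlib.util
import os
import tempfile

def import_from_path(path):
    """Load a Python module from an arbitrary file path.

    Sketch of the usual importlib recipe; Plugins._import_from_path
    may differ in its details.
    """
    name = os.path.splitext(os.path.basename(path))[0]
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Quick demonstration with a throwaway module file
with tempfile.TemporaryDirectory() as d:
    mod_path = os.path.join(d, "mymod.py")
    with open(mod_path, "w") as f:
        f.write("MY_VAR = 42\n")
    assert import_from_path(mod_path).MY_VAR == 42
```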
@@ -1,104 +0,0 @@
"""Tests for connpy.printer module."""
import sys
from io import StringIO
from connpy import printer


class TestPrinter:
    def test_info_output(self, capsys):
        printer.info("hello world")
        captured = capsys.readouterr()
        assert "[i] hello world" in captured.out

    def test_success_output(self, capsys):
        printer.success("done")
        captured = capsys.readouterr()
        assert "[✓] done" in captured.out

    def test_warning_output(self, capsys):
        printer.warning("careful")
        captured = capsys.readouterr()
        assert "[!] careful" in captured.out

    def test_error_output(self, capsys):
        printer.error("failed")
        captured = capsys.readouterr()
        assert "[✗] failed" in captured.err

    def test_debug_output(self, capsys):
        printer.debug("debug info")
        captured = capsys.readouterr()
        assert "[d] debug info" in captured.out

    def test_start_output(self, capsys):
        printer.start("starting")
        captured = capsys.readouterr()
        assert "[+] starting" in captured.out

    def test_custom_output(self, capsys):
        printer.custom("TAG", "custom message")
        captured = capsys.readouterr()
        assert "[TAG] custom message" in captured.out

    def test_multiline_indentation(self, capsys):
        printer.info("line1\nline2\nline3")
        captured = capsys.readouterr()
        lines = captured.out.strip().split("\n")
        assert lines[0] == "[i] line1"
        # Subsequent lines should be indented by len("[i] ") = 4 chars
        assert lines[1].startswith("    line2")
        assert lines[2].startswith("    line3")

    def test_data_output(self, capsys):
        printer.data("my title", "key: value")
        captured = capsys.readouterr()
        # Rich output is formatted with ANSI escape sequences or box-drawing
        # chars, so just check that title and content appear in the stream
        assert "my title" in captured.out
        assert "key" in captured.out

    def test_node_panel_pass(self, capsys):
        printer.node_panel("node1", "output line\n", 0)
        captured = capsys.readouterr()
        assert "node1" in captured.out
        assert "PASS" in captured.out
        assert "output line" in captured.out

    def test_node_panel_fail(self, capsys):
        printer.node_panel("node2", "error line\n", 1)
        captured = capsys.readouterr()
        assert "node2" in captured.out
        assert "FAIL" in captured.out
        assert "error line" in captured.out

    def test_test_panel(self, capsys):
        printer.test_panel("node1", "output", 0, {"check1": True, "check2": False})
        captured = capsys.readouterr()
        assert "node1" in captured.out
        assert "check1" in captured.out
        assert "check2" in captured.out

    def test_test_summary(self, capsys):
        results = {"node1": {"test1": True}, "node2": {"test2": False}}
        printer.test_summary(results)
        captured = capsys.readouterr()
        assert "node1" in captured.out
        assert "node2" in captured.out
        assert "test1" in captured.out
        assert "test2" in captured.out

    def test_header_output(self, capsys):
        printer.header("My Header")
        captured = capsys.readouterr()
        assert "My Header" in captured.out

    def test_kv_output(self, capsys):
        printer.kv("mykeystring", "myvaluestring")
        captured = capsys.readouterr()
        assert "mykeystring" in captured.out
        assert "myvaluestring" in captured.out

    def test_confirm_action(self, capsys):
        printer.confirm_action("router1", "delete")
        captured = capsys.readouterr()
        assert "[i] delete: router1" in captured.out
@@ -1,65 +0,0 @@
import threading
import io
import time
import sys
import pytest
from connpy import printer


def test_printer_thread_isolation():
    """Verify that printer output is isolated per thread when using set_thread_stream."""
    num_threads = 5
    iterations = 20
    results = {}

    def worker(thread_id):
        # Create a private buffer for this thread
        buf = io.StringIO()
        printer.set_thread_stream(buf)

        # Ensure we have a clean console for this thread.
        # In a real gRPC request this happens automatically, as it's a new thread.
        printer.set_thread_console(None)

        # Each thread prints its own ID
        expected_msg = f"Thread-{thread_id}"
        for _ in range(iterations):
            printer.info(expected_msg)
            time.sleep(0.01)

        results[thread_id] = buf.getvalue()
        printer.set_thread_stream(None)

    threads = []
    for i in range(num_threads):
        t = threading.Thread(target=worker, args=(i,))
        threads.append(t)
        t.start()

    for t in threads:
        t.join()

    # Validation
    for thread_id, output in results.items():
        expected_msg = f"Thread-{thread_id}"
        assert expected_msg in output

        # Ensure no leaks from other threads
        for other_id in range(num_threads):
            if other_id == thread_id:
                continue
            assert f"Thread-{other_id}" not in output


def test_printer_manual_stream():
    """Verify that setting a thread stream correctly captures printer output in the current thread."""
    buf = io.StringIO()

    # Clear the thread-local console to force it to pick up the new sys.stdout proxy
    printer.set_thread_console(None)
    printer.set_thread_stream(buf)

    printer.info("Captured-Message")

    output = buf.getvalue()
    printer.set_thread_stream(None)
    printer.set_thread_console(None)

    assert "Captured-Message" in output
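The per-thread isolation these tests rely on is the classic `threading.local` pattern. A minimal, self-contained sketch of that mechanism (an assumption about how a printer like this is typically built, not a claim about connpy's internals):

```python
import io
import sys
import threading

_local = threading.local()

def set_thread_stream(stream):
    # Each thread sees its own value of _local.stream
    _local.stream = stream

def info(msg):
    # Fall back to the process-wide stdout when no thread stream is set
    stream = getattr(_local, "stream", None) or sys.stdout
    stream.write(f"[i] {msg}\n")

buf = io.StringIO()
set_thread_stream(buf)
info("hello")
set_thread_stream(None)
assert buf.getvalue() == "[i] hello\n"
```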
@@ -1,83 +0,0 @@
import pytest
from connpy.services.profile_service import ProfileService
from connpy.services.exceptions import ProfileNotFoundError, ProfileAlreadyExistsError


def test_profile_crud(populated_config):
    """Test basic CRUD operations for profiles."""
    service = ProfileService(populated_config)

    # List
    profiles = service.list_profiles()
    assert "default" in profiles
    assert "office-user" in profiles

    # Get
    office = service.get_profile("office-user")
    assert office["user"] == "officeadmin"

    # Add
    new_data = {
        "user": "newadmin",
        "password": "newpassword"
    }
    service.add_profile("new-profile", new_data)
    assert "new-profile" in service.list_profiles()
    assert service.get_profile("new-profile")["user"] == "newadmin"

    # Update
    update_data = {
        "user": "updatedadmin"
    }
    service.update_profile("new-profile", update_data)
    assert service.get_profile("new-profile")["user"] == "updatedadmin"

    # Delete
    service.delete_profile("new-profile")
    assert "new-profile" not in service.list_profiles()


def test_profile_inheritance_parity(populated_config):
    """
    Test that profiles can inherit from other profiles.
    Regression: ProfileService currently doesn't resolve inheritance within profiles.
    """
    service = ProfileService(populated_config)

    # Create a profile that inherits from 'office-user'
    # ('office-user' has user='officeadmin', password='officepass')
    inherited_data = {
        "user": "@office-user",
        "options": "-v"
    }
    service.add_profile("inherited-profile", inherited_data)

    # When we get the profile, we expect it to be resolved if inheritance is
    # supported. This is a common pattern in connpy for nodes; the task
    # mentions "profile CRUD and inheritance parity".
    profile = service.get_profile("inherited-profile")

    # If inheritance is resolved, user should be 'officeadmin'.
    # This is expected to FAIL if ProfileService just returns the raw dict.
    assert profile["user"] == "officeadmin"
    assert profile["password"] == "officepass"
    assert profile["options"] == "-v"


def test_delete_default_profile_fails(populated_config):
    """Test that deleting the 'default' profile is prohibited."""
    service = ProfileService(populated_config)
    from connpy.services.exceptions import InvalidConfigurationError

    with pytest.raises(InvalidConfigurationError, match="Cannot delete the 'default' profile"):
        service.delete_profile("default")


def test_delete_used_profile_fails(populated_config):
    """Test that deleting a profile used by nodes is prohibited."""
    service = ProfileService(populated_config)
    from connpy.services.exceptions import InvalidConfigurationError

    # Make sure a node in populated_config uses a profile:
    # add a node that uses 'office-user'
    populated_config._connections_add(id="testnode", host="1.1.1.1", user="@office-user")

    with pytest.raises(InvalidConfigurationError, match="is used by nodes"):
        service.delete_profile("office-user")
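The `@profile` semantics the inheritance test expects can be sketched as a small resolver. The function name and merge rules below are assumptions drawn from the test's assertions, not ProfileService's real code:

```python
profiles = {
    "office-user": {"user": "officeadmin", "password": "officepass"},
}

def resolve_profile(data, profiles):
    """Replace '@name' field values with the referenced profile's field,
    and merge in any fields only present in the referenced profile.
    (Hypothetical helper illustrating the expected behavior.)"""
    resolved = dict(data)
    for key, value in data.items():
        if isinstance(value, str) and value.startswith("@"):
            ref = profiles.get(value[1:], {})
            if key in ref:
                resolved[key] = ref[key]
            # Fields defined only in the referenced profile are inherited
            for k, v in ref.items():
                resolved.setdefault(k, v)
    return resolved

resolved = resolve_profile({"user": "@office-user", "options": "-v"}, profiles)
assert resolved["user"] == "officeadmin"
assert resolved["password"] == "officepass"
assert resolved["options"] == "-v"
```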
@@ -1,42 +0,0 @@
import pytest
from unittest.mock import patch, MagicMock
from connpy.services.provider import ServiceProvider


def test_service_provider_local_mode():
    config_mock = MagicMock()
    with patch("connpy.services.provider.NodeService", create=True) as MockNodeService, \
         patch("connpy.services.provider.ProfileService", create=True), \
         patch("connpy.services.provider.ConfigService", create=True), \
         patch("connpy.services.provider.PluginService", create=True), \
         patch("connpy.services.provider.AIService", create=True), \
         patch("connpy.services.provider.SystemService", create=True), \
         patch("connpy.services.provider.ExecutionService", create=True), \
         patch("connpy.services.provider.ImportExportService", create=True):

        provider = ServiceProvider(config_mock, mode="local")

        assert provider.mode == "local"
        assert provider.config == config_mock
        # Verify that the attribute was created
        assert provider.nodes is not None


def test_service_provider_remote_mode():
    config_mock = MagicMock()
    with patch("connpy.services.provider.ConfigService", create=True) as MockConfigService, \
         patch("grpc.insecure_channel", create=True) as MockChannel:

        provider = ServiceProvider(config_mock, mode="remote", remote_host="localhost:50051")

        # Verify ConfigService is initialized locally
        assert provider.config_svc is not None

        # Verify the gRPC channel was created
        MockChannel.assert_called_once_with("localhost:50051")

        # Verify a stub was assigned
        assert provider.nodes is not None


def test_service_provider_unknown_mode():
    config_mock = MagicMock()
    with pytest.raises(ValueError, match="Unknown service mode: invalid_mode"):
        ServiceProvider(config_mock, mode="invalid_mode")
@@ -1,103 +0,0 @@
"""Tests for connpy.services.sync_service"""
import pytest
import os
from unittest.mock import MagicMock, patch
from connpy.services.sync_service import SyncService


@pytest.fixture
def mock_config():
    config = MagicMock()
    config.defaultdir = "/fake/dir"
    config.file = "/fake/dir/config.yaml"
    config.key = "/fake/dir/.osk"
    config.cachefile = "/fake/dir/.cache"
    config.fzf_cachefile = "/fake/dir/.fzf_cache"
    config.config = {"sync": True, "sync_remote": False}
    return config


class TestSyncService:
    def test_init(self, mock_config):
        s = SyncService(mock_config)
        assert s.sync_enabled is True
        assert s.token_file == os.path.join("/fake/dir", "gtoken.json")

    @patch("connpy.services.sync_service.os.path.exists")
    @patch("connpy.services.sync_service.Credentials")
    def test_get_credentials_success(self, MockCreds, mock_exists, mock_config):
        mock_exists.return_value = True
        mock_cred_instance = MagicMock()
        mock_cred_instance.valid = True
        MockCreds.from_authorized_user_file.return_value = mock_cred_instance

        s = SyncService(mock_config)
        creds = s.get_credentials()
        assert creds == mock_cred_instance

    @patch("connpy.services.sync_service.os.path.exists")
    def test_get_credentials_not_found(self, mock_exists, mock_config):
        mock_exists.return_value = False
        s = SyncService(mock_config)
        assert s.get_credentials() is None

    @patch("connpy.services.sync_service.zipfile.ZipFile")
    @patch("connpy.services.sync_service.os.path.exists")
    @patch("connpy.services.sync_service.os.path.basename")
    def test_compress_and_upload_local(self, mock_basename, mock_exists, MockZipFile, mock_config):
        mock_basename.return_value = "config.yaml"
        mock_exists.return_value = True
        s = SyncService(mock_config)

        # Mock list_backups and upload_file to avoid real API calls
        s.list_backups = MagicMock(return_value=[])
        s.upload_file = MagicMock(return_value=True)

        zip_mock = MagicMock()
        MockZipFile.return_value.__enter__.return_value = zip_mock

        s.compress_and_upload()
        # Verify the zip was created with the local config and key
        zip_mock.write.assert_any_call(s.config.file, "config.yaml")
        zip_mock.write.assert_any_call(s.config.key, ".osk")

    @patch("connpy.services.sync_service.zipfile.ZipFile")
    @patch("connpy.services.sync_service.os.path.exists")
    @patch("connpy.services.sync_service.os.path.dirname")
    @patch("connpy.services.sync_service.os.remove")
    def test_perform_restore(self, mock_remove, mock_dirname, mock_exists, MockZipFile, mock_config):
        mock_dirname.return_value = "/fake/dir"

        # Mock exists to return True for key and zip, but False for the
        # caches during the cleanup phase
        def exists_side_effect(path):
            if ".cache" in path or ".fzf_cache" in path:
                return False
            return True
        mock_exists.side_effect = exists_side_effect

        s = SyncService(mock_config)
        zip_mock = MagicMock()
        zip_mock.namelist.return_value = ["config.yaml", ".osk"]
        MockZipFile.return_value.__enter__.return_value = zip_mock

        with patch("connpy.services.sync_service.yaml.safe_load") as mock_load:
            mock_load.return_value = {"connections": {}, "profiles": {}, "config": {}}
            assert s.perform_restore("/fake/zip.zip") is True

        zip_mock.extract.assert_any_call(".osk", "/fake/dir")

    @patch.object(SyncService, "get_credentials")
    @patch("connpy.services.sync_service.build")
    def test_list_backups(self, mock_build, mock_get_credentials, mock_config):
        mock_get_credentials.return_value = MagicMock()
        mock_service = MagicMock()
        mock_build.return_value = mock_service

        mock_service.files().list().execute.return_value = {
            "files": [
                {"id": "1", "name": "backup1.zip", "appProperties": {"timestamp": "1000", "date": "2024"}}
            ]
        }

        s = SyncService(mock_config)
        files = s.list_backups()
        assert len(files) == 1
        assert files[0]["id"] == "1"
        assert files[0]["timestamp"] == "1000"
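The `mock_service.files().list().execute.return_value` line in `test_list_backups` works because `MagicMock` memoizes child mocks: calling `files()` twice returns the same child object, so the chain configured in the test is the very chain the code under test traverses. A standalone illustration:

```python
from unittest.mock import MagicMock

service = MagicMock()
# Configure the end of a call chain once...
service.files().list().execute.return_value = {"files": [{"id": "1"}]}

# ...and a later, independent call chain hits the same configured child mock
result = service.files().list().execute()
assert result["files"][0]["id"] == "1"
```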
@@ -1,203 +0,0 @@
import asyncio
import os
import sys
import termios
import tty
import signal
import struct
import fcntl
import threading


class LocalStream:
    """
    Asynchronous stream wrapper for local stdin/stdout.
    Handles terminal raw mode, async I/O, and SIGWINCH signals.
    """
    def __init__(self):
        self.stdin_fd = sys.stdin.fileno()
        self.stdout_fd = sys.stdout.fileno()
        self.original_tty_settings = None
        self.resize_callback = None
        self._reader_queue = asyncio.Queue()
        self._loop = None

    def setup(self, resize_callback=None):
        self._loop = asyncio.get_running_loop()
        self.resize_callback = resize_callback

        # Save original terminal settings
        try:
            self.original_tty_settings = termios.tcgetattr(self.stdin_fd)
            tty.setraw(self.stdin_fd)
        except termios.error:
            # Not a TTY, maybe piped or redirected
            pass

        # Set stdin non-blocking
        flags = fcntl.fcntl(self.stdin_fd, fcntl.F_GETFL)
        fcntl.fcntl(self.stdin_fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

        # Set up the read callback
        self._loop.add_reader(self.stdin_fd, self._read_ready)

        # Register SIGWINCH
        if resize_callback:
            try:
                self._loop.add_signal_handler(signal.SIGWINCH, self._handle_winch)
            except (NotImplementedError, RuntimeError):
                # Signal handling is not supported on some loops (e.g., Windows Proactor)
                pass

    def stop_reading(self):
        """Temporarily stop reading from stdin."""
        if self._loop and self.stdin_fd is not None:
            try:
                self._loop.remove_reader(self.stdin_fd)
            except Exception:
                pass

    def start_reading(self):
        """Resume reading from stdin."""
        if self._loop and self.stdin_fd is not None:
            try:
                # Ensure we don't add it twice
                self._loop.remove_reader(self.stdin_fd)
            except Exception:
                pass
            self._loop.add_reader(self.stdin_fd, self._read_ready)

    def teardown(self):
        if self._loop:
            try:
                self._loop.remove_reader(self.stdin_fd)
            except Exception:
                pass
            if self.resize_callback:
                try:
                    self._loop.remove_signal_handler(signal.SIGWINCH)
                except Exception:
                    pass

        # Restore terminal settings
        if self.original_tty_settings is not None:
            try:
                termios.tcsetattr(self.stdin_fd, termios.TCSADRAIN, self.original_tty_settings)
            except termios.error:
                pass

        # Restore blocking mode for stdin
        try:
            flags = fcntl.fcntl(self.stdin_fd, fcntl.F_GETFL)
            fcntl.fcntl(self.stdin_fd, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)
        except Exception:
            pass

    def _read_ready(self):
        try:
            # Read whatever is available
            data = os.read(self.stdin_fd, 4096)
            if data:
                self._reader_queue.put_nowait(data)
            else:
                self._reader_queue.put_nowait(b'')  # EOF
        except BlockingIOError:
            pass
        except OSError:
            self._reader_queue.put_nowait(b'')  # EOF on error

    async def read(self) -> bytes:
        """Asynchronously read bytes from stdin."""
        return await self._reader_queue.get()

    async def write(self, data: bytes):
        """Asynchronously write bytes to stdout."""
        if not data:
            return

        try:
            os.write(self.stdout_fd, data)
        except OSError:
            pass

    def _handle_winch(self):
        if self.resize_callback:
            try:
                # Use ioctl to get the current window size
                s = struct.pack("HHHH", 0, 0, 0, 0)
                a = fcntl.ioctl(self.stdout_fd, termios.TIOCGWINSZ, s)
                rows, cols, _, _ = struct.unpack("HHHH", a)

                # Schedule the callback safely inside the asyncio loop
                # instead of running it raw in the signal handler
                self._loop.call_soon(self.resize_callback, rows, cols)
            except Exception:
                pass


class RemoteStream:
    """
    Asynchronous stream wrapper for gRPC remote connections.
    Bridges the blocking gRPC iterators with the async _async_interact_loop.
    """
    def __init__(self, request_iterator, response_queue):
        self.request_iterator = request_iterator
        self.response_queue = response_queue
        self.running = True
        self._reader_queue = asyncio.Queue()
        self.copilot_queue = asyncio.Queue()
        self.resize_callback = None
        self._loop = None
        self.t = None

    def setup(self, resize_callback=None):
        self._loop = asyncio.get_running_loop()
        self.resize_callback = resize_callback

        def read_requests():
            try:
                for req in self.request_iterator:
                    if not self.running:
                        break
                    if req.cols > 0 and req.rows > 0:
                        if self.resize_callback:
                            self._loop.call_soon_threadsafe(self.resize_callback, req.rows, req.cols)
                    # Copilot dispatching
                    copilot_msg = {}
                    if getattr(req, "copilot_question", ""):
                        copilot_msg.update({
                            "question": req.copilot_question,
                            "context_buffer": getattr(req, "copilot_context_buffer", ""),
                            "node_info_json": getattr(req, "copilot_node_info_json", "")
                        })
                    if getattr(req, "copilot_action", ""):
                        copilot_msg["action"] = req.copilot_action

                    if copilot_msg:
                        self._loop.call_soon_threadsafe(self.copilot_queue.put_nowait, copilot_msg)
                    if req.stdin_data:
                        self._loop.call_soon_threadsafe(self._reader_queue.put_nowait, req.stdin_data)
            except Exception:
                pass
            finally:
                if self._loop and not self._loop.is_closed():
                    try:
                        self._loop.call_soon_threadsafe(self._reader_queue.put_nowait, b'')
                    except RuntimeError:
                        pass

        self.t = threading.Thread(target=read_requests, daemon=True)
        self.t.start()

    def teardown(self):
        self.running = False
        self.response_queue.put(None)  # Signal EOF

    async def read(self) -> bytes:
        """Asynchronously read bytes from the gRPC iterator queue."""
        return await self._reader_queue.get()

    async def write(self, data: bytes):
        """Asynchronously write bytes to the gRPC response queue."""
        if data:
            self.response_queue.put(data)
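`RemoteStream.setup` bridges a blocking iterator into the asyncio world by pumping it from a worker thread through `call_soon_threadsafe`. The core of that pattern, reduced to a self-contained sketch with a plain list standing in for the gRPC request iterator:

```python
import asyncio
import threading

def pump(blocking_iter, loop, queue):
    """Worker thread: forward items from a blocking iterator into an
    asyncio.Queue owned by `loop`, then push an EOF sentinel
    (b'' here, matching what RemoteStream uses)."""
    for item in blocking_iter:
        loop.call_soon_threadsafe(queue.put_nowait, item)
    loop.call_soon_threadsafe(queue.put_nowait, b'')

async def main():
    queue = asyncio.Queue()
    loop = asyncio.get_running_loop()
    t = threading.Thread(target=pump, args=(iter([b"a", b"b"]), loop, queue), daemon=True)
    t.start()
    chunks = []
    # Drain until the EOF sentinel arrives
    while (chunk := await queue.get()) != b'':
        chunks.append(chunk)
    t.join()
    return chunks

assert asyncio.run(main()) == [b"a", b"b"]
```

`put_nowait` alone is not thread-safe against a running loop, which is why the producer always goes through `call_soon_threadsafe`.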
@@ -1,47 +0,0 @@
import re


def log_cleaner(data: str) -> str:
    """
    Stateless utility to remove ANSI sequences and process cursor movements.
    """
    if not data:
        return ""

    lines = data.split('\n')
    cleaned_lines = []

    # Regex to capture: ANSI escape sequences, control characters
    # (\r, backspace, etc.), and plain-text chunks.
    # Note: backspace must be written \x08 here; a bare \b in a pattern
    # is a word boundary, not a backspace.
    token_re = re.compile(r'(\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])|\r|\x08|\x7f|[\x00-\x1F]|[^\x1B\r\x08\x7f\x00-\x1F]+)')

    for line in lines:
        buffer = []
        cursor = 0

        for token in token_re.findall(line):
            if token == '\r':
                cursor = 0
            elif token in ('\b', '\x7f'):
                if cursor > 0:
                    cursor -= 1
            elif token == '\x1B[D':  # Left Arrow
                if cursor > 0:
                    cursor -= 1
            elif token == '\x1B[C':  # Right Arrow
                if cursor < len(buffer):
                    cursor += 1
            elif token == '\x1B[K':  # Clear to end of line
                buffer = buffer[:cursor]
            elif token.startswith('\x1B'):
                continue
            elif len(token) == 1 and ord(token) < 32:
                continue
            else:
                for char in token:
                    if cursor == len(buffer):
                        buffer.append(char)
                    else:
                        buffer[cursor] = char
                    cursor += 1
        cleaned_lines.append("".join(buffer))

    return "\n".join(cleaned_lines).replace('\n\n', '\n').strip()
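A few concrete cases make the cursor emulation easier to follow. The function is inlined below so the snippet runs standalone; its behavior mirrors `log_cleaner`:

```python
import re

def log_cleaner(data: str) -> str:
    """Inlined copy of the log cleaner for a self-contained demo."""
    if not data:
        return ""
    token_re = re.compile(
        r'(\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])|\r|\x08|\x7f'
        r'|[\x00-\x1F]|[^\x1B\r\x08\x7f\x00-\x1F]+)')
    cleaned_lines = []
    for line in data.split('\n'):
        buffer, cursor = [], 0
        for token in token_re.findall(line):
            if token == '\r':
                cursor = 0                      # carriage return: rewind
            elif token in ('\b', '\x7f'):
                cursor = max(0, cursor - 1)     # backspace/delete
            elif token == '\x1B[D':
                cursor = max(0, cursor - 1)     # left arrow
            elif token == '\x1B[C':
                cursor = min(len(buffer), cursor + 1)  # right arrow
            elif token == '\x1B[K':
                buffer = buffer[:cursor]        # clear to end of line
            elif token.startswith('\x1B') or (len(token) == 1 and ord(token) < 32):
                continue                        # other escapes/controls: drop
            else:
                for char in token:              # overwrite or append text
                    if cursor == len(buffer):
                        buffer.append(char)
                    else:
                        buffer[cursor] = char
                    cursor += 1
        cleaned_lines.append("".join(buffer))
    return "\n".join(cleaned_lines).replace('\n\n', '\n').strip()

assert log_cleaner("helo\x08\x08llo") == "hello"                    # backspace + retype
assert log_cleaner("progress 50%\rprogress 100%") == "progress 100%"  # \r overwrite
assert log_cleaner("\x1b[31mred\x1b[0m") == "red"                   # color codes stripped
```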
@@ -1,17 +1,9 @@
version: "3.8"
services:
  connpy-app:
    build: .
    image: connpy:latest
    container_name: connpy
    # Essential for terminal interactivity
    stdin_open: true
    tty: true
    environment:
      - TERM=xterm-256color
    extra_hosts:
      - "host.docker.internal:host-gateway"
    image: connpy-app
    volumes:
      - ./docker/config:/config
      - ./docker/ssh:/root/.ssh
      - /var/run/docker.sock:/var/run/docker.sock
      # No default command is defined so that 'run' feels more natural
      - ./docker/connpy/:/app
      - ./docker/logs/:/logs
      - ./docker/ssh/:/root/.ssh/
@@ -1,65 +1,21 @@
# connpy v6.0.0b8 - Modern Network Automation Environment (Local Build)
FROM python:3.11-slim
# Use the official python image

LABEL description="Connpy: AI-Driven Network Automation & Intelligence Platform"

# Terminal and Python configuration
ENV DEBIAN_FRONTEND=noninteractive \
    PYTHONUNBUFFERED=1 \
    TERM=xterm-256color
FROM python:3.11-alpine as connpy-app

# Set the entrypoint
# Set the working directory
WORKDIR /app

# 1. Base system tools
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    git \
    openssh-client \
    fzf \
    ncurses-bin \
    bash \
    procps \
    unzip \
    ca-certificates \
    gnupg \
    iputils-ping \
    telnet \
    && rm -rf /var/lib/apt/lists/*
# Install any additional dependencies
RUN apk update && apk add --no-cache openssh fzf fzf-tmux ncurses bash
RUN pip3 install connpy
RUN connpy config --configfolder /app

# 2. Install the Docker CLI (for connpy's docker plugin)
RUN install -m 0755 -d /etc/apt/keyrings && \
    curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg && \
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
    tee /etc/apt/sources.list.d/docker.list > /dev/null && \
    apt-get update && apt-get install -y docker-ce-cli && \
    rm -rf /var/lib/apt/lists/*
# AUTH
RUN ssh-keygen -A
RUN mkdir /root/.ssh && \
    chmod 700 /root/.ssh

# 3. Install kubectl (for connpy's k8s plugin)
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(dpkg --print-architecture)/kubectl" && \
    install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl && \
    rm kubectl

# 4. Install the AWS CLI and Session Manager Plugin (universal x86_64/ARM64)
RUN ARCH=$(uname -m) && \
    if [ "$ARCH" = "x86_64" ]; then AWS_ARCH="x86_64"; else AWS_ARCH="aarch64"; fi && \
    curl "https://awscli.amazonaws.com/awscli-exe-linux-$AWS_ARCH.zip" -o "awscliv2.zip" && \
    unzip awscliv2.zip && ./aws/install && rm -rf awscliv2.zip aws/ && \
    if [ "$ARCH" = "x86_64" ]; then \
        curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "ssm.deb"; \
    else \
        curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_arm64/session-manager-plugin.deb" -o "ssm.deb"; \
    fi && \
    dpkg -i ssm.deb && rm ssm.deb

# 5. Copy local code and install dependencies
COPY . .
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir .

# 6. Persistence configuration
# Create the folder and the .folder pointer so connpy uses /config
RUN mkdir -p /config /root/.ssh /root/.config/conn && chmod 700 /root/.ssh && \
    echo -n "/config" > /root/.config/conn/.folder

# Direct entrypoint to connpy
ENTRYPOINT ["conn"]
# Set the entrypoint
ENTRYPOINT ["connpy"]
@@ -1,614 +0,0 @@
|
||||
<!doctype html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
|
||||
<meta name="generator" content="pdoc3 0.11.5">
|
||||
<title>connpy.cli.ai_handler API documentation</title>
|
||||
<meta name="description" content="">
|
||||
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
|
||||
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
|
||||
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
|
||||
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.ai_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.ai_handler.AIHandler"><code class="flex name class">
<span>class <span class="ident">AIHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class AIHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        if args.list_sessions:
            sessions = self.app.services.ai.list_sessions()
            if not sessions:
                printer.info("No saved AI sessions found.")
                return
            columns = ["ID", "Title", "Created At", "Model"]
            rows = [[s["id"], s["title"], s["created_at"], s["model"]] for s in sessions]
            printer.table("AI Persisted Sessions", columns, rows)
            return

        if args.delete_session:
            try:
                self.app.services.ai.delete_session(args.delete_session[0])
                printer.success(f"Session {args.delete_session[0]} deleted.")
            except Exception as e:
                printer.error(str(e))
            return

        if args.mcp is not None:
            return self.configure_mcp(args)

        # Determine which session_id to resume
        session_id = None
        if args.resume:
            sessions = self.app.services.ai.list_sessions()
            session_id = sessions[0]["id"] if sessions else None
            if not session_id:
                printer.warning("No previous session found to resume.")
        elif args.session:
            session_id = args.session[0]

        # Build extra arguments for the AI service
        # Priority: CLI args > local configuration
        settings = self.app.services.config_svc.get_settings().get("ai", {})
        arguments = {}

        for key in ["engineer_model", "engineer_api_key", "architect_model", "architect_api_key"]:
            cli_val = getattr(args, key, None)
            if cli_val:
                arguments[key] = cli_val[0]
            elif settings.get(key):
                arguments[key] = settings.get(key)

        # Check keys only if running in local mode (not remote)
        if getattr(self.app.services, "mode", "local") == "local":
            if not arguments.get("engineer_api_key"):
                printer.error("Engineer API key not configured. The chat cannot start.")
                printer.info("Use 'connpy config --engineer-api-key <key>' to set it.")
                sys.exit(1)
            if not arguments.get("architect_api_key"):
                printer.warning("Architect API key not configured. Architect will be unavailable.")
                printer.info("Use 'connpy config --architect-api-key <key>' to enable it.")

        # The CLI delegates the rest of the interaction to the underlying agent
        self.app.myai = self.app.services.ai
        self.ai_overrides = arguments

        if args.ask:
            self.single_question(args, session_id)
        else:
            self.interactive_chat(args, session_id)

    def single_question(self, args, session_id):
        query = " ".join(args.ask)
        with console.status("[ai_status]Agent is thinking and analyzing...") as status:
            result = self.app.myai.ask(query, status=status, debug=args.debug, session_id=session_id, trust=args.trust, **self.ai_overrides)

        responder = result.get("responder", "engineer")
        border = "architect" if responder == "architect" else "engineer"
        title = "[architect][bold]Network Architect[/bold][/architect]" if responder == "architect" else "[engineer][bold]Network Engineer[/bold][/engineer]"

        if not result.get("streamed"):
            mdprint(Panel(Markdown(result["response"]), title=title, border_style=border, expand=False))

        if "usage" in result:
            u = result["usage"]
            console.print(f"[debug]Tokens: {u['total']} (Input: {u['input']}, Output: {u['output']})[/debug]")

    def interactive_chat(self, args, session_id):
        history = None
        if session_id:
            session_data = self.app.myai.load_session_data(session_id)
            if session_data:
                history = session_data.get("history", [])
                mdprint(Rule(title=f"[header] Resuming Session: {session_data.get('title')} [/header]", style="border"))
                if history:
                    mdprint(f"[debug]Analyzing {len(history)} previous messages...[/debug]\n")
            else:
                printer.error(f"Could not load session {session_id}. Starting clean.")

        if not history:
            mdprint(Rule(style="engineer"))
            mdprint(Markdown("**Networking Expert Agent**: Hi! I'm your assistant. I can help you diagnose issues, run commands, and manage your nodes.\nType 'exit' to quit.\n"))
            mdprint(Rule(style="engineer"))

        while True:
            try:
                user_query = Prompt.ask("[user_prompt]User[/user_prompt]")
                if not user_query.strip(): continue
                if user_query.lower() in ['exit', 'quit', 'bye', 'cancel']: break

                with console.status("[ai_status]Agent is thinking...") as status:
                    result = self.app.myai.ask(user_query, chat_history=history, status=status, debug=args.debug, trust=args.trust, **self.ai_overrides)

                new_history = result.get("chat_history")
                if new_history is not None:
                    history = new_history

                responder = result.get("responder", "engineer")
                border = "architect" if responder == "architect" else "engineer"
                title = "[architect][bold]Network Architect[/bold][/architect]" if responder == "architect" else "[engineer][bold]Network Engineer[/bold][/engineer]"

                if not result.get("streamed"):
                    response_text = result.get("response", "")
                    if response_text:
                        mdprint(Panel(Markdown(response_text), title=title, border_style=border, expand=False))

                if "usage" in result:
                    u = result["usage"]
                    console.print(f"[debug]Tokens: {u['total']} (Input: {u['input']}, Output: {u['output']})[/debug]")
            except (KeyboardInterrupt, EOFError):
                console.print("\n[dim]Session closed.[/dim]")
                break

    def configure_mcp(self, args):
        """Handle MCP server configuration via CLI tokens or interactive wizard."""
        mcp_args = args.mcp

        # 1. Non-interactive CLI mode (if arguments are provided)
        if mcp_args:
            action = mcp_args[0].lower()

            if action == "list":
                settings = self.app.services.config_svc.get_settings()
                mcp_servers = settings.get("ai", {}).get("mcp_servers", {})
                if not mcp_servers:
                    printer.info("No MCP servers configured.")
                else:
                    columns = ["Name", "URL", "Enabled", "Auto-load OS"]
                    rows = []
                    for name, cfg in mcp_servers.items():
                        rows.append([
                            name,
                            cfg.get("url", ""),
                            "[green]Yes[/green]" if cfg.get("enabled", True) else "[red]No[/red]",
                            cfg.get("auto_load_on_os", "Any")
                        ])
                    printer.table("Configured MCP Servers", columns, rows)
                return

            elif action == "add":
                if len(mcp_args) < 3:
                    printer.error("Usage: connpy ai --mcp add <name> <url> [os_filter]")
                    return
                name, url = mcp_args[1], mcp_args[2]
                os_filter = mcp_args[3] if len(mcp_args) > 3 else None
                try:
                    self.app.services.ai.configure_mcp(name, url=url, auto_load_on_os=os_filter)
                    printer.success(f"MCP server '{name}' added/updated.")
                except Exception as e:
                    printer.error(str(e))
                return

            elif action == "remove":
                if len(mcp_args) < 2:
                    printer.error("Usage: connpy ai --mcp remove <name>")
                    return
                name = mcp_args[1]
                try:
                    self.app.services.ai.configure_mcp(name, remove=True)
                    printer.success(f"MCP server '{name}' removed.")
                except Exception as e:
                    printer.error(str(e))
                return

            elif action in ["enable", "disable"]:
                if len(mcp_args) < 2:
                    printer.error(f"Usage: connpy ai --mcp {action} <name>")
                    return
                name = mcp_args[1]
                enabled = (action == "enable")
                try:
                    self.app.services.ai.configure_mcp(name, enabled=enabled)
                    printer.success(f"MCP server '{name}' {'enabled' if enabled else 'disabled'}.")
                except Exception as e:
                    printer.error(str(e))
                return

            else:
                printer.error(f"Unknown MCP action: {action}")
                printer.info("Available actions: list, add, remove, enable, disable")
                return

        # 2. Interactive wizard mode (no arguments provided)
        # Import forms dynamically to avoid potential circular dependencies
        if not hasattr(self.app, "cli_forms"):
            from .forms import Forms
            self.app.cli_forms = Forms(self.app)

        settings = self.app.services.config_svc.get_settings()
        mcp_servers = settings.get("ai", {}).get("mcp_servers", {})

        result = self.app.cli_forms.mcp_wizard(mcp_servers)
        if not result:
            return

        action = result["action"]
        try:
            if action == "list":
                # Recursive call into the non-interactive list logic
                args.mcp = ["list"]
                return self.configure_mcp(args)

            elif action == "add":
                self.app.services.ai.configure_mcp(
                    result["name"],
                    url=result["url"],
                    enabled=result["enabled"],
                    auto_load_on_os=result["os"]
                )
                printer.success(f"MCP server '{result['name']}' saved.")

            elif action == "update":  # Used for toggle
                self.app.services.ai.configure_mcp(
                    result["name"],
                    enabled=result["enabled"]
                )
                printer.success(f"MCP server '{result['name']}' updated.")

            elif action == "remove":
                self.app.services.ai.configure_mcp(result["name"], remove=True)
                printer.success(f"MCP server '{result['name']}' removed.")

        except Exception as e:
            printer.error(str(e))</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.ai_handler.AIHandler.configure_mcp"><code class="name flex">
<span>def <span class="ident">configure_mcp</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def configure_mcp(self, args):
    """Handle MCP server configuration via CLI tokens or interactive wizard."""
    mcp_args = args.mcp

    # 1. Non-interactive CLI mode (if arguments are provided)
    if mcp_args:
        action = mcp_args[0].lower()

        if action == "list":
            settings = self.app.services.config_svc.get_settings()
            mcp_servers = settings.get("ai", {}).get("mcp_servers", {})
            if not mcp_servers:
                printer.info("No MCP servers configured.")
            else:
                columns = ["Name", "URL", "Enabled", "Auto-load OS"]
                rows = []
                for name, cfg in mcp_servers.items():
                    rows.append([
                        name,
                        cfg.get("url", ""),
                        "[green]Yes[/green]" if cfg.get("enabled", True) else "[red]No[/red]",
                        cfg.get("auto_load_on_os", "Any")
                    ])
                printer.table("Configured MCP Servers", columns, rows)
            return

        elif action == "add":
            if len(mcp_args) < 3:
                printer.error("Usage: connpy ai --mcp add <name> <url> [os_filter]")
                return
            name, url = mcp_args[1], mcp_args[2]
            os_filter = mcp_args[3] if len(mcp_args) > 3 else None
            try:
                self.app.services.ai.configure_mcp(name, url=url, auto_load_on_os=os_filter)
                printer.success(f"MCP server '{name}' added/updated.")
            except Exception as e:
                printer.error(str(e))
            return

        elif action == "remove":
            if len(mcp_args) < 2:
                printer.error("Usage: connpy ai --mcp remove <name>")
                return
            name = mcp_args[1]
            try:
                self.app.services.ai.configure_mcp(name, remove=True)
                printer.success(f"MCP server '{name}' removed.")
            except Exception as e:
                printer.error(str(e))
            return

        elif action in ["enable", "disable"]:
            if len(mcp_args) < 2:
                printer.error(f"Usage: connpy ai --mcp {action} <name>")
                return
            name = mcp_args[1]
            enabled = (action == "enable")
            try:
                self.app.services.ai.configure_mcp(name, enabled=enabled)
                printer.success(f"MCP server '{name}' {'enabled' if enabled else 'disabled'}.")
            except Exception as e:
                printer.error(str(e))
            return

        else:
            printer.error(f"Unknown MCP action: {action}")
            printer.info("Available actions: list, add, remove, enable, disable")
            return

    # 2. Interactive wizard mode (no arguments provided)
    # Import forms dynamically to avoid potential circular dependencies
    if not hasattr(self.app, "cli_forms"):
        from .forms import Forms
        self.app.cli_forms = Forms(self.app)

    settings = self.app.services.config_svc.get_settings()
    mcp_servers = settings.get("ai", {}).get("mcp_servers", {})

    result = self.app.cli_forms.mcp_wizard(mcp_servers)
    if not result:
        return

    action = result["action"]
    try:
        if action == "list":
            # Recursive call into the non-interactive list logic
            args.mcp = ["list"]
            return self.configure_mcp(args)

        elif action == "add":
            self.app.services.ai.configure_mcp(
                result["name"],
                url=result["url"],
                enabled=result["enabled"],
                auto_load_on_os=result["os"]
            )
            printer.success(f"MCP server '{result['name']}' saved.")

        elif action == "update":  # Used for toggle
            self.app.services.ai.configure_mcp(
                result["name"],
                enabled=result["enabled"]
            )
            printer.success(f"MCP server '{result['name']}' updated.")

        elif action == "remove":
            self.app.services.ai.configure_mcp(result["name"], remove=True)
            printer.success(f"MCP server '{result['name']}' removed.")

    except Exception as e:
        printer.error(str(e))</code></pre>
</details>
<div class="desc"><p>Handle MCP server configuration via CLI tokens or interactive wizard.</p></div>
</dd>
<dt id="connpy.cli.ai_handler.AIHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    if args.list_sessions:
        sessions = self.app.services.ai.list_sessions()
        if not sessions:
            printer.info("No saved AI sessions found.")
            return
        columns = ["ID", "Title", "Created At", "Model"]
        rows = [[s["id"], s["title"], s["created_at"], s["model"]] for s in sessions]
        printer.table("AI Persisted Sessions", columns, rows)
        return

    if args.delete_session:
        try:
            self.app.services.ai.delete_session(args.delete_session[0])
            printer.success(f"Session {args.delete_session[0]} deleted.")
        except Exception as e:
            printer.error(str(e))
        return

    if args.mcp is not None:
        return self.configure_mcp(args)

    # Determine which session_id to resume
    session_id = None
    if args.resume:
        sessions = self.app.services.ai.list_sessions()
        session_id = sessions[0]["id"] if sessions else None
        if not session_id:
            printer.warning("No previous session found to resume.")
    elif args.session:
        session_id = args.session[0]

    # Build extra arguments for the AI service
    # Priority: CLI args > local configuration
    settings = self.app.services.config_svc.get_settings().get("ai", {})
    arguments = {}

    for key in ["engineer_model", "engineer_api_key", "architect_model", "architect_api_key"]:
        cli_val = getattr(args, key, None)
        if cli_val:
            arguments[key] = cli_val[0]
        elif settings.get(key):
            arguments[key] = settings.get(key)

    # Check keys only if running in local mode (not remote)
    if getattr(self.app.services, "mode", "local") == "local":
        if not arguments.get("engineer_api_key"):
            printer.error("Engineer API key not configured. The chat cannot start.")
            printer.info("Use 'connpy config --engineer-api-key <key>' to set it.")
            sys.exit(1)
        if not arguments.get("architect_api_key"):
            printer.warning("Architect API key not configured. Architect will be unavailable.")
            printer.info("Use 'connpy config --architect-api-key <key>' to enable it.")

    # The CLI delegates the rest of the interaction to the underlying agent
    self.app.myai = self.app.services.ai
    self.ai_overrides = arguments

    if args.ask:
        self.single_question(args, session_id)
    else:
        self.interactive_chat(args, session_id)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.ai_handler.AIHandler.interactive_chat"><code class="name flex">
<span>def <span class="ident">interactive_chat</span></span>(<span>self, args, session_id)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def interactive_chat(self, args, session_id):
    history = None
    if session_id:
        session_data = self.app.myai.load_session_data(session_id)
        if session_data:
            history = session_data.get("history", [])
            mdprint(Rule(title=f"[header] Resuming Session: {session_data.get('title')} [/header]", style="border"))
            if history:
                mdprint(f"[debug]Analyzing {len(history)} previous messages...[/debug]\n")
        else:
            printer.error(f"Could not load session {session_id}. Starting clean.")

    if not history:
        mdprint(Rule(style="engineer"))
        mdprint(Markdown("**Networking Expert Agent**: Hi! I'm your assistant. I can help you diagnose issues, run commands, and manage your nodes.\nType 'exit' to quit.\n"))
        mdprint(Rule(style="engineer"))

    while True:
        try:
            user_query = Prompt.ask("[user_prompt]User[/user_prompt]")
            if not user_query.strip(): continue
            if user_query.lower() in ['exit', 'quit', 'bye', 'cancel']: break

            with console.status("[ai_status]Agent is thinking...") as status:
                result = self.app.myai.ask(user_query, chat_history=history, status=status, debug=args.debug, trust=args.trust, **self.ai_overrides)

            new_history = result.get("chat_history")
            if new_history is not None:
                history = new_history

            responder = result.get("responder", "engineer")
            border = "architect" if responder == "architect" else "engineer"
            title = "[architect][bold]Network Architect[/bold][/architect]" if responder == "architect" else "[engineer][bold]Network Engineer[/bold][/engineer]"

            if not result.get("streamed"):
                response_text = result.get("response", "")
                if response_text:
                    mdprint(Panel(Markdown(response_text), title=title, border_style=border, expand=False))

            if "usage" in result:
                u = result["usage"]
                console.print(f"[debug]Tokens: {u['total']} (Input: {u['input']}, Output: {u['output']})[/debug]")
        except (KeyboardInterrupt, EOFError):
            console.print("\n[dim]Session closed.[/dim]")
            break</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.ai_handler.AIHandler.single_question"><code class="name flex">
<span>def <span class="ident">single_question</span></span>(<span>self, args, session_id)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def single_question(self, args, session_id):
    query = " ".join(args.ask)
    with console.status("[ai_status]Agent is thinking and analyzing...") as status:
        result = self.app.myai.ask(query, status=status, debug=args.debug, session_id=session_id, trust=args.trust, **self.ai_overrides)

    responder = result.get("responder", "engineer")
    border = "architect" if responder == "architect" else "engineer"
    title = "[architect][bold]Network Architect[/bold][/architect]" if responder == "architect" else "[engineer][bold]Network Engineer[/bold][/engineer]"

    if not result.get("streamed"):
        mdprint(Panel(Markdown(result["response"]), title=title, border_style=border, expand=False))

    if "usage" in result:
        u = result["usage"]
        console.print(f"[debug]Tokens: {u['total']} (Input: {u['input']}, Output: {u['output']})[/debug]")</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.ai_handler.AIHandler" href="#connpy.cli.ai_handler.AIHandler">AIHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.ai_handler.AIHandler.configure_mcp" href="#connpy.cli.ai_handler.AIHandler.configure_mcp">configure_mcp</a></code></li>
<li><code><a title="connpy.cli.ai_handler.AIHandler.dispatch" href="#connpy.cli.ai_handler.AIHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.ai_handler.AIHandler.interactive_chat" href="#connpy.cli.ai_handler.AIHandler.interactive_chat">interactive_chat</a></code></li>
<li><code><a title="connpy.cli.ai_handler.AIHandler.single_question" href="#connpy.cli.ai_handler.AIHandler.single_question">single_question</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,199 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.api_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
    [...document.querySelectorAll('.hljs.language-python > .hljs-string')]
        .filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
        .forEach(el => {
            let d = document.createElement('details');
            d.classList.add('hljs-string');
            d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
            el.replaceWith(d);
        });
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.api_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.api_handler.APIHandler"><code class="flex name class">
<span>class <span class="ident">APIHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class APIHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        try:
            status = self.app.services.system.get_api_status()

            if args.command == "stop":
                if not status["running"]:
                    printer.warning("API does not seem to be running.")
                else:
                    stopped = self.app.services.system.stop_api()
                    if stopped:
                        printer.success("API stopped successfully.")

            elif args.command == "restart":
                port = args.data if args.data and isinstance(args.data, int) else None
                if status["running"]:
                    printer.info(f"Stopping server with process ID {status['pid']}...")

                # Service handles port preservation if port is None
                self.app.services.system.restart_api(port=port)

                if status["running"]:
                    printer.info(f"Server with process ID {status['pid']} stopped.")

                # Re-fetch status to show the actual port used
                new_status = self.app.services.system.get_api_status()
                printer.success(f"API restarted on port {new_status.get('port', 'unknown')}.")

            elif args.command == "start":
                if status["running"]:
                    msg = f"Connpy server is already running (PID: {status['pid']}"
                    if status.get("port"):
                        msg += f", Port: {status['port']}"
                    msg += ")."
                    printer.warning(msg)
                else:
                    port = args.data if args.data and isinstance(args.data, int) else 8048
                    self.app.services.system.start_api(port=port)
                    printer.success(f"API started on port {port}.")

            elif args.command == "debug":
                port = args.data if args.data and isinstance(args.data, int) else 8048
                self.app.services.system.debug_api(port=port)
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.api_handler.APIHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    try:
        status = self.app.services.system.get_api_status()

        if args.command == "stop":
            if not status["running"]:
                printer.warning("API does not seem to be running.")
            else:
                stopped = self.app.services.system.stop_api()
                if stopped:
                    printer.success("API stopped successfully.")

        elif args.command == "restart":
            port = args.data if args.data and isinstance(args.data, int) else None
            if status["running"]:
                printer.info(f"Stopping server with process ID {status['pid']}...")

            # Service handles port preservation if port is None
            self.app.services.system.restart_api(port=port)

            if status["running"]:
                printer.info(f"Server with process ID {status['pid']} stopped.")

            # Re-fetch status to show the actual port used
            new_status = self.app.services.system.get_api_status()
            printer.success(f"API restarted on port {new_status.get('port', 'unknown')}.")

        elif args.command == "start":
            if status["running"]:
                msg = f"Connpy server is already running (PID: {status['pid']}"
                if status.get("port"):
                    msg += f", Port: {status['port']}"
                msg += ")."
                printer.warning(msg)
            else:
                port = args.data if args.data and isinstance(args.data, int) else 8048
                self.app.services.system.start_api(port=port)
                printer.success(f"API started on port {port}.")

        elif args.command == "debug":
            port = args.data if args.data and isinstance(args.data, int) else 8048
            self.app.services.system.debug_api(port=port)
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.api_handler.APIHandler" href="#connpy.cli.api_handler.APIHandler">APIHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.api_handler.APIHandler.dispatch" href="#connpy.cli.api_handler.APIHandler.dispatch">dispatch</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,488 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.config_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
    [...document.querySelectorAll('.hljs.language-python > .hljs-string')]
        .filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
        .forEach(el => {
            let d = document.createElement('details');
            d.classList.add('hljs-string');
            d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
            el.replaceWith(d);
        });
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.config_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.config_handler.ConfigHandler"><code class="flex name class">
<span>class <span class="ident">ConfigHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class ConfigHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        actions = {
            "completion": self.show_completion,
            "fzf_wrapper": self.show_fzf_wrapper,
            "case": self.set_case,
            "fzf": self.set_fzf,
            "idletime": self.set_idletime,
            "configfolder": self.set_configfolder,
            "theme": self.set_theme,
            "engineer_model": self.set_ai_config,
            "engineer_api_key": self.set_ai_config,
            "architect_model": self.set_ai_config,
            "architect_api_key": self.set_ai_config,
            "trusted_commands": self.set_ai_config,
            "service_mode": self.set_service_mode,
            "remote_host": self.set_remote_host,
            "sync_remote": self.set_sync_remote
        }
        handler = actions.get(getattr(args, "command", None))
        if handler:
            return handler(args)

        # If no specific command was triggered, show current configuration
        return self.show_config(args)

    def show_config(self, args):
        settings = self.app.services.config_svc.get_settings()
        yaml_str = yaml.dump(settings, sort_keys=False, default_flow_style=False)
        printer.data("Current Configuration", yaml_str)

    def set_service_mode(self, args):
        new_mode = args.data[0]
        if new_mode == "remote":
            settings = self.app.services.config_svc.get_settings()
            if not settings.get("remote_host"):
                printer.error("Remote host must be configured before switching to remote mode")
                return

        self.app.services.config_svc.update_setting("service_mode", new_mode)

        # Immediate sync of fzf/text cache files for the new mode
        try:
            # 1. Clear old cache files to avoid discrepancies if fetch fails
            self.app.config._generate_nodes_cache(nodes=[], folders=[], profiles=[])

            # 2. Re-initialize services for the new mode
            from ..services.provider import ServiceProvider
            settings = self.app.services.config_svc.get_settings()
            new_services = ServiceProvider(self.app.config, mode=new_mode, remote_host=settings.get("remote_host"))

            # 3. Fetch data from new mode and generate cache
            nodes = new_services.nodes.list_nodes()
            folders = new_services.nodes.list_folders()
            profiles = new_services.profiles.list_profiles()
            new_services.nodes.generate_cache(nodes=nodes, folders=folders, profiles=profiles)

            printer.success("Config saved")
        except Exception as e:
            printer.success("Config saved")
            printer.warning(f"Note: Could not synchronize fzf cache: {e}")


    def set_remote_host(self, args):
        self.app.services.config_svc.update_setting("remote_host", args.data[0])
        printer.success("Config saved")

    def set_theme(self, args):
        try:
            valid_styles = self.app.services.config_svc.apply_theme_from_file(args.data[0])
            # Apply immediately to current session
            printer.apply_theme(valid_styles)
            printer.success(f"Theme '{args.data[0]}' applied and saved")
        except (ConnpyError, InvalidConfigurationError) as e:
            printer.error(str(e))

    def show_fzf_wrapper(self, args):
        print(get_instructions("fzf_wrapper_" + args.data[0]))

    def show_completion(self, args):
        print(get_instructions(args.data[0] + "completion"))

    def set_case(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("case", val)
        self.app.case = val
        printer.success("Config saved")

    def set_fzf(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("fzf", val)
        self.app.fzf = val
        printer.success("Config saved")

    def set_idletime(self, args):
        try:
            val = max(0, int(args.data[0]))
            self.app.services.config_svc.update_setting("idletime", val)
            printer.success("Config saved")
        except ValueError:
            printer.error("Keepalive must be an integer.")

    def set_configfolder(self, args):
        try:
            self.app.services.config_svc.set_config_folder(args.data[0])
            printer.success("Config saved")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def set_sync_remote(self, args):
        val = (args.data[0].lower() == "true")
        self.app.services.config_svc.update_setting("sync_remote", val)
        self.app.services.sync.sync_remote = val
        printer.success("Config saved")

    def set_ai_config(self, args):
        try:
            settings = self.app.services.config_svc.get_settings()
            aiconfig = settings.get("ai", {})
            aiconfig[args.command] = args.data[0]
            self.app.services.config_svc.update_setting("ai", aiconfig)
            printer.success("Config saved")
        except ConnpyError as e:
            printer.error(str(e))</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.config_handler.ConfigHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    actions = {
        "completion": self.show_completion,
        "fzf_wrapper": self.show_fzf_wrapper,
        "case": self.set_case,
        "fzf": self.set_fzf,
        "idletime": self.set_idletime,
        "configfolder": self.set_configfolder,
        "theme": self.set_theme,
        "engineer_model": self.set_ai_config,
        "engineer_api_key": self.set_ai_config,
        "architect_model": self.set_ai_config,
        "architect_api_key": self.set_ai_config,
        "trusted_commands": self.set_ai_config,
        "service_mode": self.set_service_mode,
        "remote_host": self.set_remote_host,
        "sync_remote": self.set_sync_remote
    }
    handler = actions.get(getattr(args, "command", None))
    if handler:
        return handler(args)

    # If no specific command was triggered, show current configuration
    return self.show_config(args)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_ai_config"><code class="name flex">
<span>def <span class="ident">set_ai_config</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_ai_config(self, args):
    try:
        settings = self.app.services.config_svc.get_settings()
        aiconfig = settings.get("ai", {})
        aiconfig[args.command] = args.data[0]
        self.app.services.config_svc.update_setting("ai", aiconfig)
        printer.success("Config saved")
    except ConnpyError as e:
        printer.error(str(e))</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_case"><code class="name flex">
<span>def <span class="ident">set_case</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_case(self, args):
    val = (args.data[0].lower() == "true")
    self.app.services.config_svc.update_setting("case", val)
    self.app.case = val
    printer.success("Config saved")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_configfolder"><code class="name flex">
<span>def <span class="ident">set_configfolder</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_configfolder(self, args):
    try:
        self.app.services.config_svc.set_config_folder(args.data[0])
        printer.success("Config saved")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_fzf"><code class="name flex">
<span>def <span class="ident">set_fzf</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_fzf(self, args):
    val = (args.data[0].lower() == "true")
    self.app.services.config_svc.update_setting("fzf", val)
    self.app.fzf = val
    printer.success("Config saved")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_idletime"><code class="name flex">
<span>def <span class="ident">set_idletime</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_idletime(self, args):
    try:
        val = max(0, int(args.data[0]))
        self.app.services.config_svc.update_setting("idletime", val)
        printer.success("Config saved")
    except ValueError:
        printer.error("Keepalive must be an integer.")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_remote_host"><code class="name flex">
<span>def <span class="ident">set_remote_host</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_remote_host(self, args):
    self.app.services.config_svc.update_setting("remote_host", args.data[0])
    printer.success("Config saved")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_service_mode"><code class="name flex">
<span>def <span class="ident">set_service_mode</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_service_mode(self, args):
    new_mode = args.data[0]
    if new_mode == "remote":
        settings = self.app.services.config_svc.get_settings()
        if not settings.get("remote_host"):
            printer.error("Remote host must be configured before switching to remote mode")
            return

    self.app.services.config_svc.update_setting("service_mode", new_mode)

    # Immediate sync of fzf/text cache files for the new mode
    try:
        # 1. Clear old cache files to avoid discrepancies if fetch fails
        self.app.config._generate_nodes_cache(nodes=[], folders=[], profiles=[])

        # 2. Re-initialize services for the new mode
        from ..services.provider import ServiceProvider
        settings = self.app.services.config_svc.get_settings()
        new_services = ServiceProvider(self.app.config, mode=new_mode, remote_host=settings.get("remote_host"))

        # 3. Fetch data from new mode and generate cache
        nodes = new_services.nodes.list_nodes()
        folders = new_services.nodes.list_folders()
        profiles = new_services.profiles.list_profiles()
        new_services.nodes.generate_cache(nodes=nodes, folders=folders, profiles=profiles)

        printer.success("Config saved")
    except Exception as e:
        printer.success("Config saved")
        printer.warning(f"Note: Could not synchronize fzf cache: {e}")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_sync_remote"><code class="name flex">
<span>def <span class="ident">set_sync_remote</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_sync_remote(self, args):
    val = (args.data[0].lower() == "true")
    self.app.services.config_svc.update_setting("sync_remote", val)
    self.app.services.sync.sync_remote = val
    printer.success("Config saved")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.set_theme"><code class="name flex">
<span>def <span class="ident">set_theme</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def set_theme(self, args):
    try:
        valid_styles = self.app.services.config_svc.apply_theme_from_file(args.data[0])
        # Apply immediately to current session
        printer.apply_theme(valid_styles)
        printer.success(f"Theme '{args.data[0]}' applied and saved")
    except (ConnpyError, InvalidConfigurationError) as e:
        printer.error(str(e))</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.show_completion"><code class="name flex">
<span>def <span class="ident">show_completion</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def show_completion(self, args):
    print(get_instructions(args.data[0] + "completion"))</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.show_config"><code class="name flex">
<span>def <span class="ident">show_config</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def show_config(self, args):
    settings = self.app.services.config_svc.get_settings()
    yaml_str = yaml.dump(settings, sort_keys=False, default_flow_style=False)
    printer.data("Current Configuration", yaml_str)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.config_handler.ConfigHandler.show_fzf_wrapper"><code class="name flex">
<span>def <span class="ident">show_fzf_wrapper</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def show_fzf_wrapper(self, args):
    print(get_instructions("fzf_wrapper_" + args.data[0]))</code></pre>
</details>
|
||||
<div class="desc"></div>
|
||||
</dd>
|
||||
</dl>
|
||||
</dd>
|
||||
</dl>
|
||||
</section>
|
||||
</article>
|
||||
<nav id="sidebar">
|
||||
<div class="toc">
|
||||
<ul></ul>
|
||||
</div>
|
||||
<ul id="index">
|
||||
<li><h3>Super-module</h3>
|
||||
<ul>
|
||||
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
|
||||
</ul>
|
||||
</li>
|
||||
<li><h3><a href="#header-classes">Classes</a></h3>
|
||||
<ul>
|
||||
<li>
|
||||
<h4><code><a title="connpy.cli.config_handler.ConfigHandler" href="#connpy.cli.config_handler.ConfigHandler">ConfigHandler</a></code></h4>
|
||||
<ul class="two-column">
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.dispatch" href="#connpy.cli.config_handler.ConfigHandler.dispatch">dispatch</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_ai_config" href="#connpy.cli.config_handler.ConfigHandler.set_ai_config">set_ai_config</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_case" href="#connpy.cli.config_handler.ConfigHandler.set_case">set_case</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_configfolder" href="#connpy.cli.config_handler.ConfigHandler.set_configfolder">set_configfolder</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_fzf" href="#connpy.cli.config_handler.ConfigHandler.set_fzf">set_fzf</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_idletime" href="#connpy.cli.config_handler.ConfigHandler.set_idletime">set_idletime</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_remote_host" href="#connpy.cli.config_handler.ConfigHandler.set_remote_host">set_remote_host</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_service_mode" href="#connpy.cli.config_handler.ConfigHandler.set_service_mode">set_service_mode</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_sync_remote" href="#connpy.cli.config_handler.ConfigHandler.set_sync_remote">set_sync_remote</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.set_theme" href="#connpy.cli.config_handler.ConfigHandler.set_theme">set_theme</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.show_completion" href="#connpy.cli.config_handler.ConfigHandler.show_completion">show_completion</a></code></li>
|
||||
<li><code><a title="connpy.cli.config_handler.ConfigHandler.show_config" href="#connpy.cli.config_handler.ConfigHandler.show_config">show_config</a></code></li>
<li><code><a title="connpy.cli.config_handler.ConfigHandler.show_fzf_wrapper" href="#connpy.cli.config_handler.ConfigHandler.show_fzf_wrapper">show_fzf_wrapper</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,255 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.context_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.context_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.context_handler.ContextHandler"><code class="flex name class">
<span>class <span class="ident">ContextHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class ContextHandler:
    def __init__(self, app):
        self.app = app
        self.service = self.app.services.context

    def dispatch(self, args):
        try:
            if args.add:
                if len(args.add) < 2:
                    printer.error("--add requires name and at least one regex")
                    return
                self.service.add_context(args.add[0], args.add[1:])
                printer.success(f"Context '{args.add[0]}' added successfully.")

            elif args.rm:
                if not args.context_name:
                    printer.error("--rm requires a context name")
                    return
                self.service.delete_context(args.context_name)
                printer.success(f"Context '{args.context_name}' deleted successfully.")

            elif args.ls:
                contexts = self.service.list_contexts()
                for ctx in contexts:
                    if ctx["active"]:
                        printer.success(f"{ctx['name']} (active)")
                    else:
                        printer.custom(" ", ctx["name"])

            elif args.set:
                if not args.context_name:
                    printer.error("--set requires a context name")
                    return
                self.service.set_active_context(args.context_name)
                printer.success(f"Context set to: {args.context_name}")

            elif args.show:
                if not args.context_name:
                    printer.error("--show requires a context name")
                    return
                contexts = self.service.contexts
                if args.context_name not in contexts:
                    printer.error(f"Context '{args.context_name}' does not exist")
                    return
                yaml_output = yaml.dump(contexts[args.context_name], sort_keys=False, default_flow_style=False)
                printer.custom(args.context_name, "")
                print(yaml_output)

            elif args.edit:
                if len(args.edit) < 2:
                    printer.error("--edit requires name and at least one regex")
                    return
                self.service.update_context(args.edit[0], args.edit[1:])
                printer.success(f"Context '{args.edit[0]}' modified successfully.")

            else:
                # Default behavior if no flags: show list
                self.dispatch_ls(args)

        except ValueError as e:
            printer.error(str(e))
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def dispatch_ls(self, args):
        contexts = self.service.list_contexts()
        for ctx in contexts:
            if ctx["active"]:
                printer.success(f"{ctx['name']} (active)")
            else:
                printer.custom(" ", ctx["name"])</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.context_handler.ContextHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    try:
        if args.add:
            if len(args.add) < 2:
                printer.error("--add requires name and at least one regex")
                return
            self.service.add_context(args.add[0], args.add[1:])
            printer.success(f"Context '{args.add[0]}' added successfully.")

        elif args.rm:
            if not args.context_name:
                printer.error("--rm requires a context name")
                return
            self.service.delete_context(args.context_name)
            printer.success(f"Context '{args.context_name}' deleted successfully.")

        elif args.ls:
            contexts = self.service.list_contexts()
            for ctx in contexts:
                if ctx["active"]:
                    printer.success(f"{ctx['name']} (active)")
                else:
                    printer.custom(" ", ctx["name"])

        elif args.set:
            if not args.context_name:
                printer.error("--set requires a context name")
                return
            self.service.set_active_context(args.context_name)
            printer.success(f"Context set to: {args.context_name}")

        elif args.show:
            if not args.context_name:
                printer.error("--show requires a context name")
                return
            contexts = self.service.contexts
            if args.context_name not in contexts:
                printer.error(f"Context '{args.context_name}' does not exist")
                return
            yaml_output = yaml.dump(contexts[args.context_name], sort_keys=False, default_flow_style=False)
            printer.custom(args.context_name, "")
            print(yaml_output)

        elif args.edit:
            if len(args.edit) < 2:
                printer.error("--edit requires name and at least one regex")
                return
            self.service.update_context(args.edit[0], args.edit[1:])
            printer.success(f"Context '{args.edit[0]}' modified successfully.")

        else:
            # Default behavior if no flags: show list
            self.dispatch_ls(args)

    except ValueError as e:
        printer.error(str(e))
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.context_handler.ContextHandler.dispatch_ls"><code class="name flex">
<span>def <span class="ident">dispatch_ls</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch_ls(self, args):
    contexts = self.service.list_contexts()
    for ctx in contexts:
        if ctx["active"]:
            printer.success(f"{ctx['name']} (active)")
        else:
            printer.custom(" ", ctx["name"])</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.context_handler.ContextHandler" href="#connpy.cli.context_handler.ContextHandler">ContextHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.context_handler.ContextHandler.dispatch" href="#connpy.cli.context_handler.ContextHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.context_handler.ContextHandler.dispatch_ls" href="#connpy.cli.context_handler.ContextHandler.dispatch_ls">dispatch_ls</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,696 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.forms API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.forms</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.forms.Forms"><code class="flex name class">
<span>class <span class="ident">Forms</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class Forms:
|
||||
def __init__(self, app):
|
||||
self.app = app
|
||||
self.validators = Validators(app)
|
||||
|
||||
def questions_edit(self):
|
||||
questions = []
|
||||
questions.append(inquirer.Confirm("host", message="Edit Hostname/IP?"))
|
||||
questions.append(inquirer.Confirm("protocol", message="Edit Protocol/app?"))
|
||||
questions.append(inquirer.Confirm("port", message="Edit Port?"))
|
||||
questions.append(inquirer.Confirm("options", message="Edit Options?"))
|
||||
questions.append(inquirer.Confirm("logs", message="Edit logging path/file?"))
|
||||
questions.append(inquirer.Confirm("tags", message="Edit tags?"))
|
||||
questions.append(inquirer.Confirm("jumphost", message="Edit jumphost?"))
|
||||
questions.append(inquirer.Confirm("user", message="Edit User?"))
|
||||
questions.append(inquirer.Confirm("password", message="Edit password?"))
|
||||
return inquirer.prompt(questions)
|
||||
|
||||
def questions_nodes(self, unique, uniques=None, edit=None):
|
||||
try:
|
||||
defaults = self.app.services.nodes.get_node_details(unique)
|
||||
if "tags" not in defaults:
|
||||
defaults["tags"] = ""
|
||||
if "jumphost" not in defaults:
|
||||
defaults["jumphost"] = ""
|
||||
except Exception:
|
||||
defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "password": "", "jumphost": ""}
|
||||
node = {}
|
||||
if edit is None:
|
||||
edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
|
||||
questions = []
|
||||
if edit["host"]:
|
||||
questions.append(inquirer.Text("host", message="Add Hostname or IP", validate=self.validators.host_validation, default=defaults["host"]))
|
||||
else:
|
||||
node["host"] = defaults["host"]
|
||||
if edit["protocol"]:
|
||||
questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation, default=defaults["protocol"]))
|
||||
else:
|
||||
node["protocol"] = defaults["protocol"]
|
||||
if edit["port"]:
|
||||
questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation, default=defaults["port"]))
|
||||
else:
|
||||
node["port"] = defaults["port"]
|
||||
if edit["options"]:
|
||||
questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation, default=defaults["options"]))
|
||||
else:
|
||||
node["options"] = defaults["options"]
|
||||
if edit["logs"]:
|
||||
questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation, default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
|
||||
else:
|
||||
node["logs"] = defaults["logs"]
|
||||
if edit["tags"]:
|
||||
questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
|
||||
else:
|
||||
node["tags"] = defaults["tags"]
|
||||
if edit["jumphost"]:
|
||||
questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
|
||||
else:
|
||||
node["jumphost"] = defaults["jumphost"]
|
||||
if edit["user"]:
|
||||
questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation, default=defaults["user"]))
|
||||
else:
|
||||
node["user"] = defaults["user"]
|
||||
if edit["password"]:
|
||||
questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))
|
||||
else:
|
||||
node["password"] = defaults["password"]
|
||||
|
||||
answer = inquirer.prompt(questions)
|
||||
if answer is None:
|
||||
return False
|
||||
|
||||
if "password" in answer:
|
||||
if answer["password"] == "Local Password":
|
||||
passq = [inquirer.Password("password", message="Set Password")]
|
||||
passa = inquirer.prompt(passq)
|
||||
if passa is None:
|
||||
return False
|
||||
answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
|
||||
elif answer["password"] == "Profiles":
|
||||
passq = [(inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation))]
|
||||
passa = inquirer.prompt(passq)
|
||||
if passa is None:
|
||||
return False
|
||||
answer["password"] = passa["password"].split(",")
|
||||
elif answer["password"] == "No Password":
|
||||
answer["password"] = ""
|
||||
|
||||
if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
|
||||
answer["tags"] = ast.literal_eval(answer["tags"])
|
||||
|
||||
result = {**uniques, **answer, **node}
|
||||
result["type"] = "connection"
|
||||
return result
|
||||
|
||||
def questions_profiles(self, unique, edit=None):
|
||||
try:
|
||||
defaults = self.app.services.profiles.get_profile(unique, resolve=False)
|
||||
if "tags" not in defaults:
|
||||
defaults["tags"] = ""
|
||||
if "jumphost" not in defaults:
|
||||
defaults["jumphost"] = ""
|
||||
except Exception:
|
||||
defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "jumphost": ""}
|
||||
profile = {}
|
||||
if edit is None:
|
||||
edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
|
||||
questions = []
|
||||
if edit["host"]:
|
||||
questions.append(inquirer.Text("host", message="Add Hostname or IP", default=defaults["host"]))
|
||||
else:
|
||||
profile["host"] = defaults["host"]
|
||||
if edit["protocol"]:
|
||||
questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.profile_protocol_validation, default=defaults["protocol"]))
|
||||
else:
|
||||
profile["protocol"] = defaults["protocol"]
|
||||
if edit["port"]:
|
||||
questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.profile_port_validation, default=defaults["port"]))
|
||||
else:
|
||||
profile["port"] = defaults["port"]
|
||||
if edit["options"]:
|
||||
questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", default=defaults["options"]))
|
||||
else:
|
||||
profile["options"] = defaults["options"]
|
||||
if edit["logs"]:
|
||||
questions.append(inquirer.Text("logs", message="Pick logging path/file ", default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
|
||||
else:
|
||||
profile["logs"] = defaults["logs"]
|
||||
if edit["tags"]:
|
||||
questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.profile_tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
|
||||
else:
|
||||
profile["tags"] = defaults["tags"]
|
||||
if edit["jumphost"]:
|
||||
questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.profile_jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
|
||||
else:
|
||||
profile["jumphost"] = defaults["jumphost"]
|
||||
if edit["user"]:
|
||||
questions.append(inquirer.Text("user", message="Pick username", default=defaults["user"]))
|
||||
else:
|
||||
profile["user"] = defaults["user"]
|
||||
if edit["password"]:
|
||||
questions.append(inquirer.Password("password", message="Set Password"))
|
||||
else:
|
||||
profile["password"] = defaults["password"]
|
||||
|
||||
answer = inquirer.prompt(questions)
|
||||
if answer is None:
|
||||
return False
|
||||
|
||||
if "password" in answer:
|
||||
if answer["password"] != "":
|
||||
answer["password"] = self.app.services.config_svc.encrypt_password(answer["password"])
|
||||
|
||||
if "tags" in answer and answer["tags"]:
|
||||
answer["tags"] = ast.literal_eval(answer["tags"])
|
||||
|
||||
result = {**answer, **profile}
|
||||
result["id"] = unique
|
||||
return result
|
||||
|
||||
def questions_bulk(self, nodes="", hosts=""):
|
||||
questions = []
|
||||
        questions.append(inquirer.Text("ids", message="add a comma separated list of nodes to add", default=nodes, validate=self.validators.bulk_node_validation))
        questions.append(inquirer.Text("location", message="Add a @folder, @subfolder@folder or leave empty", validate=self.validators.bulk_folder_validation))
        questions.append(inquirer.Text("host", message="Add comma separated list of Hostnames or IPs", default=hosts, validate=self.validators.bulk_host_validation))
        questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation))
        questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation))
        questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation))
        questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation))
        questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation))
        questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation))
        questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation))
        questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))

        answer = inquirer.prompt(questions)
        if answer is None:
            return False

        if "password" in answer:
            if answer["password"] == "Local Password":
                passq = [inquirer.Password("password", message="Set Password")]
                passa = inquirer.prompt(passq)
                answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
            elif answer["password"] == "Profiles":
                passq = [(inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation))]
                passa = inquirer.prompt(passq)
                answer["password"] = passa["password"].split(",")
            elif answer["password"] == "No Password":
                answer["password"] = ""

        answer["type"] = "connection"
        if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
            answer["tags"] = ast.literal_eval(answer["tags"])

        return answer

    def mcp_wizard(self, mcp_servers):
        """Interactive wizard to manage MCP servers."""
        from .helpers import theme

        while True:
            options = [
                ("List Configured Servers", "list"),
                ("Add/Update Server", "add"),
                ("Enable/Disable Server", "toggle"),
                ("Remove Server", "remove"),
                ("Back", "exit")
            ]

            questions = [
                inquirer.List("action", message="MCP Configuration", choices=options)
            ]

            answers = inquirer.prompt(questions, theme=theme)
            if not answers or answers["action"] == "exit":
                return None

            action = answers["action"]

            if action == "list":
                if not mcp_servers:
                    print("\nNo MCP servers configured.\n")
                else:
                    return {"action": "list"}

            elif action == "add":
                questions = [
                    inquirer.Text("name", message="Server Name (identifier)"),
                    inquirer.Text("url", message="SSE URL (e.g., http://localhost:8000/sse)"),
                    inquirer.Confirm("enabled", message="Enabled?", default=True),
                    inquirer.Text("auto_load_os", message="Auto-load on specific OS (blank for any)")
                ]
                answers = inquirer.prompt(questions, theme=theme)
                if answers:
                    return {
                        "action": "add",
                        "name": answers["name"],
                        "url": answers["url"],
                        "enabled": answers["enabled"],
                        "os": answers["auto_load_os"]
                    }

            elif action == "toggle":
                if not mcp_servers:
                    print("\nNo servers to toggle.\n")
                    continue

                choices = []
                for name, cfg in mcp_servers.items():
                    status = "[Enabled]" if cfg.get("enabled", True) else "[Disabled]"
                    choices.append((f"{name} {status}", name))

                questions = [
                    inquirer.List("name", message="Select server to toggle", choices=choices + [("Cancel", None)])
                ]
                answers = inquirer.prompt(questions, theme=theme)
                if answers and answers["name"]:
                    current = mcp_servers[answers["name"]].get("enabled", True)
                    return {
                        "action": "update",
                        "name": answers["name"],
                        "enabled": not current
                    }

            elif action == "remove":
                if not mcp_servers:
                    print("\nNo servers to remove.\n")
                    continue

                questions = [
                    inquirer.List("name", message="Select server to remove", choices=list(mcp_servers.keys()) + ["Cancel"])
                ]
                answers = inquirer.prompt(questions, theme=theme)
                if answers and answers["name"] != "Cancel":
                    return {"action": "remove", "name": answers["name"]}
        return None</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.forms.Forms.mcp_wizard"><code class="name flex">
<span>def <span class="ident">mcp_wizard</span></span>(<span>self, mcp_servers)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def mcp_wizard(self, mcp_servers):
    """Interactive wizard to manage MCP servers."""
    from .helpers import theme

    while True:
        options = [
            ("List Configured Servers", "list"),
            ("Add/Update Server", "add"),
            ("Enable/Disable Server", "toggle"),
            ("Remove Server", "remove"),
            ("Back", "exit")
        ]

        questions = [
            inquirer.List("action", message="MCP Configuration", choices=options)
        ]

        answers = inquirer.prompt(questions, theme=theme)
        if not answers or answers["action"] == "exit":
            return None

        action = answers["action"]

        if action == "list":
            if not mcp_servers:
                print("\nNo MCP servers configured.\n")
            else:
                return {"action": "list"}

        elif action == "add":
            questions = [
                inquirer.Text("name", message="Server Name (identifier)"),
                inquirer.Text("url", message="SSE URL (e.g., http://localhost:8000/sse)"),
                inquirer.Confirm("enabled", message="Enabled?", default=True),
                inquirer.Text("auto_load_os", message="Auto-load on specific OS (blank for any)")
            ]
            answers = inquirer.prompt(questions, theme=theme)
            if answers:
                return {
                    "action": "add",
                    "name": answers["name"],
                    "url": answers["url"],
                    "enabled": answers["enabled"],
                    "os": answers["auto_load_os"]
                }

        elif action == "toggle":
            if not mcp_servers:
                print("\nNo servers to toggle.\n")
                continue

            choices = []
            for name, cfg in mcp_servers.items():
                status = "[Enabled]" if cfg.get("enabled", True) else "[Disabled]"
                choices.append((f"{name} {status}", name))

            questions = [
                inquirer.List("name", message="Select server to toggle", choices=choices + [("Cancel", None)])
            ]
            answers = inquirer.prompt(questions, theme=theme)
            if answers and answers["name"]:
                current = mcp_servers[answers["name"]].get("enabled", True)
                return {
                    "action": "update",
                    "name": answers["name"],
                    "enabled": not current
                }

        elif action == "remove":
            if not mcp_servers:
                print("\nNo servers to remove.\n")
                continue

            questions = [
                inquirer.List("name", message="Select server to remove", choices=list(mcp_servers.keys()) + ["Cancel"])
            ]
            answers = inquirer.prompt(questions, theme=theme)
            if answers and answers["name"] != "Cancel":
                return {"action": "remove", "name": answers["name"]}
    return None</code></pre>
</details>
<div class="desc"><p>Interactive wizard to manage MCP servers.</p></div>
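The wizard only describes the requested change; the caller is expected to apply the returned dict. A minimal sketch of such a caller, assuming a flat `mcp_servers` dict (the `apply_mcp_action` helper is hypothetical and not part of connpy's API; only the returned dict shapes come from the listing above):

```python
# Hypothetical consumer of mcp_wizard()'s return value; only the
# {"action": ..., "name": ...} dict shapes come from the wizard above.
def apply_mcp_action(result, mcp_servers):
    if result is None:  # "Back" or a cancelled prompt
        return mcp_servers
    if result["action"] == "add":
        mcp_servers[result["name"]] = {
            "url": result["url"],
            "enabled": result["enabled"],
            "os": result["os"],
        }
    elif result["action"] == "update":
        mcp_servers[result["name"]]["enabled"] = result["enabled"]
    elif result["action"] == "remove":
        mcp_servers.pop(result["name"], None)
    return mcp_servers

servers = apply_mcp_action(
    {"action": "add", "name": "local", "url": "http://localhost:8000/sse",
     "enabled": True, "os": ""}, {})
servers = apply_mcp_action({"action": "update", "name": "local", "enabled": False}, servers)
print(servers["local"]["enabled"])  # False
```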
</dd>
<dt id="connpy.cli.forms.Forms.questions_bulk"><code class="name flex">
<span>def <span class="ident">questions_bulk</span></span>(<span>self, nodes='', hosts='')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def questions_bulk(self, nodes="", hosts=""):
    questions = []
    questions.append(inquirer.Text("ids", message="add a comma separated list of nodes to add", default=nodes, validate=self.validators.bulk_node_validation))
    questions.append(inquirer.Text("location", message="Add a @folder, @subfolder@folder or leave empty", validate=self.validators.bulk_folder_validation))
    questions.append(inquirer.Text("host", message="Add comma separated list of Hostnames or IPs", default=hosts, validate=self.validators.bulk_host_validation))
    questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation))
    questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation))
    questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation))
    questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation))
    questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation))
    questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation))
    questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation))
    questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))

    answer = inquirer.prompt(questions)
    if answer is None:
        return False

    if "password" in answer:
        if answer["password"] == "Local Password":
            passq = [inquirer.Password("password", message="Set Password")]
            passa = inquirer.prompt(passq)
            answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
        elif answer["password"] == "Profiles":
            passq = [(inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation))]
            passa = inquirer.prompt(passq)
            answer["password"] = passa["password"].split(",")
        elif answer["password"] == "No Password":
            answer["password"] = ""

    answer["type"] = "connection"
    if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
        answer["tags"] = ast.literal_eval(answer["tags"])

    return answer</code></pre>
</details>
<div class="desc"></div>
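`questions_bulk` returns the raw answers, where "ids" and "host" are parallel comma separated strings. A sketch of pairing them up afterwards (`pair_bulk_nodes` is a hypothetical helper for illustration; only the field names come from the listing above):

```python
# Hypothetical helper: pair the comma separated "ids" and "host" answers
# returned by questions_bulk(). Not part of connpy itself.
def pair_bulk_nodes(answer):
    ids = [i.strip() for i in answer["ids"].split(",")]
    hosts = [h.strip() for h in answer["host"].split(",")]
    if len(hosts) == 1:  # a single host may serve every node id
        hosts = hosts * len(ids)
    return dict(zip(ids, hosts))

print(pair_bulk_nodes({"ids": "r1, r2", "host": "10.0.0.1, 10.0.0.2"}))
# {'r1': '10.0.0.1', 'r2': '10.0.0.2'}
```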
</dd>
<dt id="connpy.cli.forms.Forms.questions_edit"><code class="name flex">
<span>def <span class="ident">questions_edit</span></span>(<span>self</span>)
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def questions_edit(self):
    questions = []
    questions.append(inquirer.Confirm("host", message="Edit Hostname/IP?"))
    questions.append(inquirer.Confirm("protocol", message="Edit Protocol/app?"))
    questions.append(inquirer.Confirm("port", message="Edit Port?"))
    questions.append(inquirer.Confirm("options", message="Edit Options?"))
    questions.append(inquirer.Confirm("logs", message="Edit logging path/file?"))
    questions.append(inquirer.Confirm("tags", message="Edit tags?"))
    questions.append(inquirer.Confirm("jumphost", message="Edit jumphost?"))
    questions.append(inquirer.Confirm("user", message="Edit User?"))
    questions.append(inquirer.Confirm("password", message="Edit password?"))
    return inquirer.prompt(questions)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.forms.Forms.questions_nodes"><code class="name flex">
<span>def <span class="ident">questions_nodes</span></span>(<span>self, unique, uniques=None, edit=None)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def questions_nodes(self, unique, uniques=None, edit=None):
    try:
        defaults = self.app.services.nodes.get_node_details(unique)
        if "tags" not in defaults:
            defaults["tags"] = ""
        if "jumphost" not in defaults:
            defaults["jumphost"] = ""
    except Exception:
        defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "password": "", "jumphost": ""}
    node = {}
    if edit is None:
        edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
    questions = []
    if edit["host"]:
        questions.append(inquirer.Text("host", message="Add Hostname or IP", validate=self.validators.host_validation, default=defaults["host"]))
    else:
        node["host"] = defaults["host"]
    if edit["protocol"]:
        questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.protocol_validation, default=defaults["protocol"]))
    else:
        node["protocol"] = defaults["protocol"]
    if edit["port"]:
        questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.port_validation, default=defaults["port"]))
    else:
        node["port"] = defaults["port"]
    if edit["options"]:
        questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", validate=self.validators.default_validation, default=defaults["options"]))
    else:
        node["options"] = defaults["options"]
    if edit["logs"]:
        questions.append(inquirer.Text("logs", message="Pick logging path/file ", validate=self.validators.default_validation, default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
    else:
        node["logs"] = defaults["logs"]
    if edit["tags"]:
        questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
    else:
        node["tags"] = defaults["tags"]
    if edit["jumphost"]:
        questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
    else:
        node["jumphost"] = defaults["jumphost"]
    if edit["user"]:
        questions.append(inquirer.Text("user", message="Pick username", validate=self.validators.default_validation, default=defaults["user"]))
    else:
        node["user"] = defaults["user"]
    if edit["password"]:
        questions.append(inquirer.List("password", message="Password: Use a local password, no password or a list of profiles to reference?", choices=["Local Password", "Profiles", "No Password"]))
    else:
        node["password"] = defaults["password"]

    answer = inquirer.prompt(questions)
    if answer is None:
        return False

    if "password" in answer:
        if answer["password"] == "Local Password":
            passq = [inquirer.Password("password", message="Set Password")]
            passa = inquirer.prompt(passq)
            if passa is None:
                return False
            answer["password"] = self.app.services.config_svc.encrypt_password(passa["password"])
        elif answer["password"] == "Profiles":
            passq = [(inquirer.Text("password", message="Set a @profile or a comma separated list of @profiles", validate=self.validators.pass_validation))]
            passa = inquirer.prompt(passq)
            if passa is None:
                return False
            answer["password"] = passa["password"].split(",")
        elif answer["password"] == "No Password":
            answer["password"] = ""

    if "tags" in answer and not answer["tags"].startswith("@") and answer["tags"]:
        answer["tags"] = ast.literal_eval(answer["tags"])

    result = {**uniques, **answer, **node}
    result["type"] = "connection"
    return result</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.forms.Forms.questions_profiles"><code class="name flex">
<span>def <span class="ident">questions_profiles</span></span>(<span>self, unique, edit=None)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def questions_profiles(self, unique, edit=None):
    try:
        defaults = self.app.services.profiles.get_profile(unique, resolve=False)
        if "tags" not in defaults:
            defaults["tags"] = ""
        if "jumphost" not in defaults:
            defaults["jumphost"] = ""
    except Exception:
        defaults = {"host": "", "protocol": "", "port": "", "user": "", "options": "", "logs": "", "tags": "", "jumphost": ""}
    profile = {}
    if edit is None:
        edit = {"host": True, "protocol": True, "port": True, "user": True, "password": True, "options": True, "logs": True, "tags": True, "jumphost": True}
    questions = []
    if edit["host"]:
        questions.append(inquirer.Text("host", message="Add Hostname or IP", default=defaults["host"]))
    else:
        profile["host"] = defaults["host"]
    if edit["protocol"]:
        questions.append(inquirer.Text("protocol", message="Select Protocol/app", validate=self.validators.profile_protocol_validation, default=defaults["protocol"]))
    else:
        profile["protocol"] = defaults["protocol"]
    if edit["port"]:
        questions.append(inquirer.Text("port", message="Select Port Number", validate=self.validators.profile_port_validation, default=defaults["port"]))
    else:
        profile["port"] = defaults["port"]
    if edit["options"]:
        questions.append(inquirer.Text("options", message="Pass extra options to protocol/app", default=defaults["options"]))
    else:
        profile["options"] = defaults["options"]
    if edit["logs"]:
        questions.append(inquirer.Text("logs", message="Pick logging path/file ", default=defaults["logs"].replace("{", "{{").replace("}", "}}")))
    else:
        profile["logs"] = defaults["logs"]
    if edit["tags"]:
        questions.append(inquirer.Text("tags", message="Add tags dictionary", validate=self.validators.profile_tags_validation, default=str(defaults["tags"]).replace("{", "{{").replace("}", "}}")))
    else:
        profile["tags"] = defaults["tags"]
    if edit["jumphost"]:
        questions.append(inquirer.Text("jumphost", message="Add Jumphost node", validate=self.validators.profile_jumphost_validation, default=str(defaults["jumphost"]).replace("{", "{{").replace("}", "}}")))
    else:
        profile["jumphost"] = defaults["jumphost"]
    if edit["user"]:
        questions.append(inquirer.Text("user", message="Pick username", default=defaults["user"]))
    else:
        profile["user"] = defaults["user"]
    if edit["password"]:
        questions.append(inquirer.Password("password", message="Set Password"))
    else:
        profile["password"] = defaults["password"]

    answer = inquirer.prompt(questions)
    if answer is None:
        return False

    if "password" in answer:
        if answer["password"] != "":
            answer["password"] = self.app.services.config_svc.encrypt_password(answer["password"])

    if "tags" in answer and answer["tags"]:
        answer["tags"] = ast.literal_eval(answer["tags"])

    result = {**answer, **profile}
    result["id"] = unique
    return result</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.forms.Forms" href="#connpy.cli.forms.Forms">Forms</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.forms.Forms.mcp_wizard" href="#connpy.cli.forms.Forms.mcp_wizard">mcp_wizard</a></code></li>
<li><code><a title="connpy.cli.forms.Forms.questions_bulk" href="#connpy.cli.forms.Forms.questions_bulk">questions_bulk</a></code></li>
<li><code><a title="connpy.cli.forms.Forms.questions_edit" href="#connpy.cli.forms.Forms.questions_edit">questions_edit</a></code></li>
<li><code><a title="connpy.cli.forms.Forms.questions_nodes" href="#connpy.cli.forms.Forms.questions_nodes">questions_nodes</a></code></li>
<li><code><a title="connpy.cli.forms.Forms.questions_profiles" href="#connpy.cli.forms.Forms.questions_profiles">questions_profiles</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.help_text API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
    [...document.querySelectorAll('.hljs.language-python > .hljs-string')]
        .filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
        .forEach(el => {
            let d = document.createElement('details');
            d.classList.add('hljs-string');
            d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
            el.replaceWith(d);
        });
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.help_text</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-functions">Functions</h2>
<dl>
<dt id="connpy.cli.help_text.get_help"><code class="name flex">
<span>def <span class="ident">get_help</span></span>(<span>type, parsers=None)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def get_help(type, parsers=None):
    if type == "export":
        return "Export /path/to/file.yml \[@subfolder1]\[@folder1] \[@subfolderN]\[@folderN]"
    if type == "import":
        return "Import /path/to/file.yml"
    if type == "node":
        return "node\[@subfolder]\[@folder]\nConnect to specific node or show all matching nodes\n\[@subfolder]\[@folder]\nShow all available connections globally or in specified path"
    if type == "usage":
        commands = []
        for subcommand, subparser in parsers.choices.items():
            if subparser.description != None:
                commands.append(subcommand)
        commands = ",".join(commands)
        usage_help = f"connpy [-h] [--add | --del | --mod | --show | --debug] [node|folder] [--sftp]\n connpy {{{commands}}} ..."
        return usage_help
    return get_instructions(type)</code></pre>
</details>
<div class="desc"></div>
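The "usage" branch builds its subcommand list from `parsers.choices`, keeping only sub-parsers that were given a description. A standalone sketch of that mechanism (the subcommand names here are made up for the example):

```python
import argparse

# parsers.choices maps subcommand names to their sub-parsers; sub-parsers
# created without a description default to description=None and are skipped.
parser = argparse.ArgumentParser(prog="connpy")
subparsers = parser.add_subparsers()
subparsers.add_parser("profile", description="Manage profiles")
subparsers.add_parser("config", description="Manage configuration")
subparsers.add_parser("_hidden")  # no description, so it is filtered out

commands = [name for name, sp in subparsers.choices.items() if sp.description is not None]
print(",".join(commands))  # profile,config
```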
</dd>
<dt id="connpy.cli.help_text.get_instructions"><code class="name flex">
<span>def <span class="ident">get_instructions</span></span>(<span>type='add'</span>)
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def get_instructions(type="add"):
    if type == "add":
        return """
Welcome to Connpy node Addition Wizard!

Here are some important instructions and tips for configuring your new node:

1. **Profiles**:
   - You can use the configured settings in a profile using `@profilename`.

2. **Available Protocols and Apps**:
   - ssh
   - telnet
   - kubectl (`kubectl exec`)
   - docker (`docker exec`)
   - ssm (`aws ssm start-session`)

3. **Optional Values**:
   - You can leave any value empty except for the hostname/IP.

4. **Passwords**:
   - You can pass one or more passwords using comma-separated `@profiles`.

5. **Logging**:
   - You can use the following variables in the logging file name:
     - `${id}`
     - `${unique}`
     - `${host}`
     - `${port}`
     - `${user}`
     - `${protocol}`

6. **Well-Known Tags**:
   - `os`: Identified by AI to generate commands based on the operating system.
   - `screen_length_command`: Used by automation to avoid pagination on different devices (e.g., `terminal length 0` for Cisco devices).
   - `prompt`: Replaces default app prompt to identify the end of output or where the user can start inputting commands.
   - `kube_command`: Replaces the default command (`/bin/bash`) for `kubectl exec`.
   - `docker_command`: Replaces the default command for `docker exec`.
   - `region`: AWS Region used for `aws ssm start-session`.
   - `profile`: AWS Profile used for `aws ssm start-session`.
   - `ssh_options`: Additional SSH options injected when an SSM node is used as a jumphost (e.g., `-i ~/.ssh/key.pem`).
   - `nc_command`: Replaces the default `nc` command used when bridging connections through Docker or Kubernetes (e.g., `ip netns exec global-vrf nc`).
"""
    if type == "bashcompletion":
        return '''
# Bash completion for connpy
# Run: eval "$(connpy config --completion bash)"
# Or add it to your .bashrc

_connpy_autocomplete()
{
    local strings
    strings=$(python3 -m connpy.completion bash ${#COMP_WORDS[@]} "${COMP_WORDS[@]}")

    local IFS=$'\\t'
    COMPREPLY=( $(compgen -W "$strings" -- "${COMP_WORDS[$COMP_CWORD]}") )
}
complete -o nosort -F _connpy_autocomplete conn
complete -o nosort -F _connpy_autocomplete connpy
'''
    if type == "zshcompletion":
        return '''
# Zsh completion for connpy
# Run: eval "$(connpy config --completion zsh)"
# Or add it to your .zshrc
# Make sure compinit is loaded

autoload -U compinit && compinit
_connpy_autocomplete()
{
    local COMP_WORDS num strings
    COMP_WORDS=( $words )
    num=${#COMP_WORDS[@]}
    if [[ $words =~ '.* $' ]]; then
        num=$(($num + 1))
    fi
    strings=$(python3 -m connpy.completion zsh ${num} ${COMP_WORDS[@]})

    local IFS=$'\\t'
    compadd "$@" -- ${=strings}
}
compdef _connpy_autocomplete conn
compdef _connpy_autocomplete connpy
'''
if type == "fzf_wrapper_bash":
|
||||
return '''\n#Here starts bash 0ms fzf wrapper for connpy
|
||||
connpy() {
|
||||
if [ $# -eq 0 ]; then
|
||||
local selected
|
||||
local configdir=$(cat ~/.config/conn/.folder 2>/dev/null || echo ~/.config/conn)
|
||||
if [ -s "$configdir/.fzf_nodes_cache.txt" ]; then
|
||||
selected=$(cat "$configdir/.fzf_nodes_cache.txt" | fzf-tmux -i -d 25%)
|
||||
else
|
||||
command connpy
|
||||
return
|
||||
fi
|
||||
if [ -n "$selected" ]; then
|
||||
command connpy "$selected"
|
||||
fi
|
||||
else
|
||||
command connpy "$@"
|
||||
fi
|
||||
}
|
||||
alias c="connpy"
|
||||
#Here ends bash 0ms fzf wrapper for connpy
|
||||
'''
    if type == "fzf_wrapper_zsh":
        return '''\n#Here starts zsh 0ms fzf wrapper for connpy
connpy() {
    if [ $# -eq 0 ]; then
        local selected
        local configdir=$(cat ~/.config/conn/.folder 2>/dev/null || echo ~/.config/conn)
        if [ -s "$configdir/.fzf_nodes_cache.txt" ]; then
            selected=$(cat "$configdir/.fzf_nodes_cache.txt" | fzf-tmux -i -d 25%)
        else
            command connpy
            return
        fi
        if [ -n "$selected" ]; then
            command connpy "$selected"
        fi
    else
        command connpy "$@"
    fi
}
alias c="connpy"
#Here ends zsh 0ms fzf wrapper for connpy
'''
    if type == "run":
        return "node[@subfolder][@folder] command to run\nRun the specific command on the node and print output\n/path/to/file.yaml\nUse a yaml file to run an automation script"
    if type == "generate":
        return r'''---
tasks:
  - name: "Config"

    action: 'run' #Action can be test or run. Mandatory

    nodes: #List of nodes to work on. Mandatory
      - 'router1@office' #You can add specific nodes
      - '@aws' #entire folders or subfolders
      - 'router.*@office' #or use regex to filter inside a folder

    commands: #List of commands to send, use {name} to pass variables
      - 'term len 0'
      - 'conf t'
      - 'interface {if}'
      - 'ip address 10.100.100.{id} 255.255.255.255'
      - '{commit}'
      - 'end'

    variables: #Variables to use on commands and expected. Optional
      __global__: #Global variables to use on all nodes, fallback if missing in the node.
        commit: ''
        if: 'loopback100'
      router1@office:
        id: 1
      router2@office:
        id: 2
        commit: 'commit'
      router3@office:
        id: 3
      vrouter1@aws:
        id: 4
      vrouterN@aws:
        id: 5

    output: /home/user/logs #Type of output, if null you only get Connection and test result. Choices are: null,stdout,/path/to/folder. Folder path works on both 'run' and 'test' actions.

    options:
      prompt: r'>$|#$|\$$|>.$|#.$|\$.$' #Optional prompt to check on your devices, default should work on most devices.
      parallel: 10 #Optional number of nodes to run commands on in parallel. Default 10.
      timeout: 20 #Optional time to wait in seconds for prompt, expected or EOF. Default 20.

  - name: "TestConfig"
    action: 'test'
    nodes:
      - 'router1@office'
      - '@aws'
    commands:
      - 'ping 10.100.100.{id}'
    expected: '!' #Expected text to find when running test action. Mandatory for 'test'
    variables:
      router1@office:
        id: 1
      router2@office:
        id: 2
        commit: 'commit'
      router3@office:
        id: 3
      vrouter1@aws:
        id: 4
      vrouterN@aws:
        id: 5
    output: null
...'''
    return ""</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</section>
<section>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-functions">Functions</a></h3>
<ul class="">
<li><code><a title="connpy.cli.help_text.get_help" href="#connpy.cli.help_text.get_help">get_help</a></code></li>
<li><code><a title="connpy.cli.help_text.get_instructions" href="#connpy.cli.help_text.get_instructions">get_instructions</a></code></li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,213 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.helpers API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.helpers</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-functions">Functions</h2>
<dl>
<dt id="connpy.cli.helpers.choose"><code class="name flex">
<span>def <span class="ident">choose</span></span>(<span>app, list_, name, action)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def choose(app, list_, name, action):
    # Generates an inquirer list to pick
    # Safeguard: Never prompt if running in autocomplete shell
    if os.environ.get("_ARGCOMPLETE") or os.environ.get("COMP_LINE"):
        return None

    if FzfPrompt and app.fzf and os.environ.get("_ARGCOMPLETE") is None and os.environ.get("COMP_LINE") is None:
        fzf_prompt = FzfPrompt(executable_path="fzf-tmux")
        if not app.case:
            fzf_prompt = FzfPrompt(executable_path="fzf-tmux -i")
        answer = fzf_prompt.prompt(list_, fzf_options="-d 25%")
        if len(answer) == 0:
            return None
        else:
            return answer[0]
    else:
        questions = [inquirer.List(name, message="Pick {} to {}:".format(name, action), choices=list_, carousel=True)]
        answer = inquirer.prompt(questions)
        if answer == None:
            return None
        else:
            return answer[name]</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.helpers.folders_completer"><code class="name flex">
<span>def <span class="ident">folders_completer</span></span>(<span>prefix, parsed_args, **kwargs)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def folders_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.folders_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.helpers.get_config_dir"><code class="name flex">
<span>def <span class="ident">get_config_dir</span></span>(<span>)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def get_config_dir():
    home = os.path.expanduser("~")
    defaultdir = os.path.join(home, '.config/conn')
    pathfile = os.path.join(defaultdir, '.folder')
    try:
        with open(pathfile, "r") as f:
            return f.read().strip()
    except:
        return defaultdir</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.helpers.nodes_completer"><code class="name flex">
<span>def <span class="ident">nodes_completer</span></span>(<span>prefix, parsed_args, **kwargs)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def nodes_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.fzf_nodes_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.helpers.profiles_completer"><code class="name flex">
<span>def <span class="ident">profiles_completer</span></span>(<span>prefix, parsed_args, **kwargs)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def profiles_completer(prefix, parsed_args, **kwargs):
    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.profiles_cache.txt')
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            return [line.strip() for line in f if line.startswith(prefix)]
    return []</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.helpers.toplevel_completer"><code class="name flex">
<span>def <span class="ident">toplevel_completer</span></span>(<span>prefix, parsed_args, **kwargs)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def toplevel_completer(prefix, parsed_args, **kwargs):
    commands = ["node", "profile", "move", "mv", "copy", "cp", "list", "ls", "bulk", "export", "import", "ai", "run", "api", "context", "plugin", "config", "sync"]

    configdir = get_config_dir()
    cache_file = os.path.join(configdir, '.fzf_nodes_cache.txt')
    nodes = []
    if os.path.exists(cache_file):
        with open(cache_file, "r") as f:
            nodes = [line.strip() for line in f if line.startswith(prefix)]

    cache_folders = os.path.join(configdir, '.folders_cache.txt')
    if os.path.exists(cache_folders):
        with open(cache_folders, "r") as f:
            nodes += [line.strip() for line in f if line.startswith(prefix)]

    return [c for c in commands + nodes if c.startswith(prefix)]</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</section>
<section>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-functions">Functions</a></h3>
<ul class="two-column">
<li><code><a title="connpy.cli.helpers.choose" href="#connpy.cli.helpers.choose">choose</a></code></li>
<li><code><a title="connpy.cli.helpers.folders_completer" href="#connpy.cli.helpers.folders_completer">folders_completer</a></code></li>
<li><code><a title="connpy.cli.helpers.get_config_dir" href="#connpy.cli.helpers.get_config_dir">get_config_dir</a></code></li>
<li><code><a title="connpy.cli.helpers.nodes_completer" href="#connpy.cli.helpers.nodes_completer">nodes_completer</a></code></li>
<li><code><a title="connpy.cli.helpers.profiles_completer" href="#connpy.cli.helpers.profiles_completer">profiles_completer</a></code></li>
<li><code><a title="connpy.cli.helpers.toplevel_completer" href="#connpy.cli.helpers.toplevel_completer">toplevel_completer</a></code></li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,278 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.import_export_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.import_export_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.import_export_handler.ImportExportHandler"><code class="flex name class">
<span>class <span class="ident">ImportExportHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class ImportExportHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch_import(self, args):
        file_path = args.data[0]
        try:
            printer.warning("This could overwrite your current configuration!")
            question = [inquirer.Confirm("import", message=f"Are you sure you want to import {file_path}?")]
            confirm = inquirer.prompt(question)
            if confirm == None or not confirm["import"]:
                sys.exit(7)

            self.app.services.import_export.import_from_file(file_path)
            printer.success(f"File {file_path} imported successfully.")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def dispatch_export(self, args):
        file_path = args.data[0]
        folders = args.data[1:] if len(args.data) > 1 else None
        try:
            self.app.services.import_export.export_to_file(file_path, folders=folders)
            printer.success(f"File {file_path} generated successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)
        sys.exit()

    def bulk(self, args):
        if args.file and os.path.isfile(args.file[0]):
            with open(args.file[0], 'r') as f:
                lines = f.readlines()

            # Expecting at least 2 lines: nodes, then hosts
            if len(lines) < 2:
                printer.error("The file must contain at least two lines: one for nodes, one for hosts.")
                sys.exit(11)

            nodes = lines[0].strip()
            hosts = lines[1].strip()
            newnodes = self.forms.questions_bulk(nodes, hosts)
        else:
            newnodes = self.forms.questions_bulk()

        if newnodes == False:
            sys.exit(7)

        if not self.app.case:
            newnodes["location"] = newnodes["location"].lower()
            newnodes["ids"] = newnodes["ids"].lower()

        # Handle the case where location might be a file reference (e.g. from a prompt)
        location = newnodes["location"]
        if location.startswith("@") and "/" in location:
            # Extract the actual @folder part (e.g. @testall from @testall/.folders_cache.txt)
            location = location.split("/")[0]
            newnodes["location"] = location

        ids = newnodes["ids"].split(",")
        # Append location to each id for proper folder assignment
        location = newnodes["location"]
        if location:
            ids = [f"{i}{location}" for i in ids]

        hosts = newnodes["host"].split(",")

        try:
            count = self.app.services.nodes.bulk_add(ids, hosts, newnodes)
            if count > 0:
                printer.success(f"Successfully added {count} nodes.")
            else:
                printer.info("0 nodes added")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.import_export_handler.ImportExportHandler.bulk"><code class="name flex">
<span>def <span class="ident">bulk</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def bulk(self, args):
    if args.file and os.path.isfile(args.file[0]):
        with open(args.file[0], 'r') as f:
            lines = f.readlines()

        # Expecting at least 2 lines: nodes, then hosts
        if len(lines) < 2:
            printer.error("The file must contain at least two lines: one for nodes, one for hosts.")
            sys.exit(11)

        nodes = lines[0].strip()
        hosts = lines[1].strip()
        newnodes = self.forms.questions_bulk(nodes, hosts)
    else:
        newnodes = self.forms.questions_bulk()

    if newnodes == False:
        sys.exit(7)

    if not self.app.case:
        newnodes["location"] = newnodes["location"].lower()
        newnodes["ids"] = newnodes["ids"].lower()

    # Handle the case where location might be a file reference (e.g. from a prompt)
    location = newnodes["location"]
    if location.startswith("@") and "/" in location:
        # Extract the actual @folder part (e.g. @testall from @testall/.folders_cache.txt)
        location = location.split("/")[0]
        newnodes["location"] = location

    ids = newnodes["ids"].split(",")
    # Append location to each id for proper folder assignment
    location = newnodes["location"]
    if location:
        ids = [f"{i}{location}" for i in ids]

    hosts = newnodes["host"].split(",")

    try:
        count = self.app.services.nodes.bulk_add(ids, hosts, newnodes)
        if count > 0:
            printer.success(f"Successfully added {count} nodes.")
        else:
            printer.info("0 nodes added")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.import_export_handler.ImportExportHandler.dispatch_export"><code class="name flex">
<span>def <span class="ident">dispatch_export</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch_export(self, args):
    file_path = args.data[0]
    folders = args.data[1:] if len(args.data) > 1 else None
    try:
        self.app.services.import_export.export_to_file(file_path, folders=folders)
        printer.success(f"File {file_path} generated successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)
    sys.exit()</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.import_export_handler.ImportExportHandler.dispatch_import"><code class="name flex">
<span>def <span class="ident">dispatch_import</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch_import(self, args):
    file_path = args.data[0]
    try:
        printer.warning("This could overwrite your current configuration!")
        question = [inquirer.Confirm("import", message=f"Are you sure you want to import {file_path}?")]
        confirm = inquirer.prompt(question)
        if confirm == None or not confirm["import"]:
            sys.exit(7)

        self.app.services.import_export.import_from_file(file_path)
        printer.success(f"File {file_path} imported successfully.")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.import_export_handler.ImportExportHandler" href="#connpy.cli.import_export_handler.ImportExportHandler">ImportExportHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.import_export_handler.ImportExportHandler.bulk" href="#connpy.cli.import_export_handler.ImportExportHandler.bulk">bulk</a></code></li>
<li><code><a title="connpy.cli.import_export_handler.ImportExportHandler.dispatch_export" href="#connpy.cli.import_export_handler.ImportExportHandler.dispatch_export">dispatch_export</a></code></li>
<li><code><a title="connpy.cli.import_export_handler.ImportExportHandler.dispatch_import" href="#connpy.cli.import_export_handler.ImportExportHandler.dispatch_import">dispatch_import</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,148 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli</code></h1>
</header>
<section id="section-intro">
</section>
<section>
<h2 class="section-title" id="header-submodules">Sub-modules</h2>
<dl>
<dt><code class="name"><a title="connpy.cli.ai_handler" href="ai_handler.html">connpy.cli.ai_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.api_handler" href="api_handler.html">connpy.cli.api_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.config_handler" href="config_handler.html">connpy.cli.config_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.context_handler" href="context_handler.html">connpy.cli.context_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.forms" href="forms.html">connpy.cli.forms</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.help_text" href="help_text.html">connpy.cli.help_text</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.helpers" href="helpers.html">connpy.cli.helpers</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.import_export_handler" href="import_export_handler.html">connpy.cli.import_export_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.node_handler" href="node_handler.html">connpy.cli.node_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.plugin_handler" href="plugin_handler.html">connpy.cli.plugin_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.profile_handler" href="profile_handler.html">connpy.cli.profile_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.run_handler" href="run_handler.html">connpy.cli.run_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.sync_handler" href="sync_handler.html">connpy.cli.sync_handler</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.terminal_ui" href="terminal_ui.html">connpy.cli.terminal_ui</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
<dt><code class="name"><a title="connpy.cli.validators" href="validators.html">connpy.cli.validators</a></code></dt>
<dd>
<div class="desc"></div>
</dd>
</dl>
</section>
<section>
</section>
<section>
</section>
<section>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy" href="../index.html">connpy</a></code></li>
</ul>
</li>
<li><h3><a href="#header-submodules">Sub-modules</a></h3>
<ul>
<li><code><a title="connpy.cli.ai_handler" href="ai_handler.html">connpy.cli.ai_handler</a></code></li>
<li><code><a title="connpy.cli.api_handler" href="api_handler.html">connpy.cli.api_handler</a></code></li>
<li><code><a title="connpy.cli.config_handler" href="config_handler.html">connpy.cli.config_handler</a></code></li>
<li><code><a title="connpy.cli.context_handler" href="context_handler.html">connpy.cli.context_handler</a></code></li>
<li><code><a title="connpy.cli.forms" href="forms.html">connpy.cli.forms</a></code></li>
<li><code><a title="connpy.cli.help_text" href="help_text.html">connpy.cli.help_text</a></code></li>
<li><code><a title="connpy.cli.helpers" href="helpers.html">connpy.cli.helpers</a></code></li>
<li><code><a title="connpy.cli.import_export_handler" href="import_export_handler.html">connpy.cli.import_export_handler</a></code></li>
<li><code><a title="connpy.cli.node_handler" href="node_handler.html">connpy.cli.node_handler</a></code></li>
<li><code><a title="connpy.cli.plugin_handler" href="plugin_handler.html">connpy.cli.plugin_handler</a></code></li>
<li><code><a title="connpy.cli.profile_handler" href="profile_handler.html">connpy.cli.profile_handler</a></code></li>
<li><code><a title="connpy.cli.run_handler" href="run_handler.html">connpy.cli.run_handler</a></code></li>
<li><code><a title="connpy.cli.sync_handler" href="sync_handler.html">connpy.cli.sync_handler</a></code></li>
<li><code><a title="connpy.cli.terminal_ui" href="terminal_ui.html">connpy.cli.terminal_ui</a></code></li>
<li><code><a title="connpy.cli.validators" href="validators.html">connpy.cli.validators</a></code></li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,612 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.node_handler API documentation</title>
<meta name="description" content="">
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.node_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.node_handler.NodeHandler"><code class="flex name class">
<span>class <span class="ident">NodeHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class NodeHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch(self, args):
        if not self.app.case and args.data != None:
            args.data = args.data.lower()
        actions = {"version": self.version, "connect": self.connect, "add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
        return actions.get(args.action)(args)

    def version(self, args):
        from .._version import __version__
        printer.info(f"Connpy {__version__}")

    def connect(self, args):
        if args.data == None:
            try:
                matches = self.app.services.nodes.list_nodes()
            except Exception as e:
                printer.error(f"Failed to list nodes: {e}")
                sys.exit(1)

            if len(matches) == 0:
                printer.warning("There are no nodes created")
                printer.info("try: connpy --help")
                sys.exit(9)
        else:
            try:
                matches = self.app.services.nodes.list_nodes(args.data)
            except Exception:
                matches = []

            if len(matches) == 0:
                printer.error(f"{args.data} not found")
                sys.exit(2)
            elif len(matches) > 1:
                matches[0] = choose(self.app, matches, "node", "connect")

        if matches[0] == None:
            sys.exit(7)

        try:
            self.app.services.nodes.connect_node(
                matches[0],
                sftp=args.sftp,
                debug=args.debug,
                logger=self.app._service_logger
            )
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def delete(self, args):
        if args.data == None:
            printer.error("Missing argument node")
            sys.exit(3)

        is_folder = args.data.startswith("@")
        try:
            if is_folder:
                matches = self.app.services.nodes.list_folders(args.data)
            else:
                matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)

        printer.info(f"Removing: {matches}")
        question = [inquirer.Confirm("delete", message="Are you sure you want to continue?")]
        confirm = inquirer.prompt(question)
        if confirm == None or not confirm["delete"]:
            sys.exit(7)

        try:
            for item in matches:
                self.app.services.nodes.delete_node(item, is_folder=is_folder)

            if len(matches) == 1:
                printer.success(f"{matches[0]} deleted successfully")
            else:
                printer.success(f"{len(matches)} items deleted successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def add(self, args):
        try:
            args.data = self.app._type_node(args.data)
        except ValueError as e:
            printer.error(str(e))
            sys.exit(3)

        if args.data == None:
            printer.error("Missing argument node")
            sys.exit(3)

        is_folder = args.data.startswith("@")
        try:
            if is_folder:
                uniques = self.app.services.nodes.explode_unique(args.data)
                if not uniques:
                    raise InvalidConfigurationError(f"Invalid folder {args.data}")
                self.app.services.nodes.add_node(args.data, {}, is_folder=True)
                printer.success(f"{args.data} added successfully")
            else:
                if args.data in self.app.nodes_list:
                    printer.error(f"Node '{args.data}' already exists.")
                    sys.exit(1)
                uniques = self.app.services.nodes.explode_unique(args.data)

                # Fast fail if parent folder does not exist
                self.app.services.nodes.validate_parent_folder(args.data)

                printer.console.print(Markdown(get_instructions()))

                new_node_data = self.forms.questions_nodes(args.data, uniques)
                if not new_node_data:
                    sys.exit(7)
                self.app.services.nodes.add_node(args.data, new_node_data)
                printer.success(f"{args.data} added successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def show(self, args):
        if args.data == None:
            printer.error("Missing argument node")
            sys.exit(3)

        try:
            matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)
        elif len(matches) > 1:
            matches[0] = choose(self.app, matches, "node", "show")

        if matches[0] == None:
            sys.exit(7)

        try:
            node = self.app.services.nodes.get_node_details(matches[0])
            yaml_output = yaml.dump(node, sort_keys=False, default_flow_style=False)
            printer.data(matches[0], yaml_output)
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def modify(self, args):
        if args.data == None:
            printer.error("Missing argument node")
            sys.exit(3)

        try:
            matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"No connection found with filter: {args.data}")
            sys.exit(2)

        unique = matches[0] if len(matches) == 1 else None
        uniques = self.app.services.nodes.explode_unique(unique) if unique else {"id": None, "folder": None}

        printer.info(f"Editing: {matches}")
        node_details = {}
        for i in matches:
            node_details[i] = self.app.services.nodes.get_node_details(i)

        edits = self.forms.questions_edit()
        if edits == None:
            sys.exit(7)

        # Use first match as base for defaults if multiple matches exist
        base_unique = matches[0]
        base_uniques = self.app.services.nodes.explode_unique(base_unique)
        updatenode = self.forms.questions_nodes(base_unique, base_uniques, edit=edits)
        if not updatenode:
            sys.exit(7)

        try:
            if len(matches) == 1:
                # Comparison for "Nothing to do"
                current = node_details[matches[0]].copy()
                current.update(uniques)
                current["type"] = "connection"
                if sorted(updatenode.items()) == sorted(current.items()):
                    printer.info("Nothing to do here")
                    return
                self.app.services.nodes.update_node(matches[0], updatenode)
                printer.success(f"{args.data} edited successfully")
            else:
                editcount = 0
                for k in matches:
                    updated_item = self.app.services.nodes.explode_unique(k)
                    updated_item["type"] = "connection"
                    updated_item.update(node_details[k])

                    this_item_changed = False
                    for key, should_edit in edits.items():
                        if should_edit:
                            this_item_changed = True
                            updated_item[key] = updatenode[key]

                    if this_item_changed:
                        editcount += 1
                        self.app.services.nodes.update_node(k, updated_item)

                if editcount == 0:
                    printer.info("Nothing to do here")
                else:
                    printer.success(f"{matches} edited successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.node_handler.NodeHandler.add"><code class="name flex">
<span>def <span class="ident">add</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def add(self, args):
    try:
        args.data = self.app._type_node(args.data)
    except ValueError as e:
        printer.error(str(e))
        sys.exit(3)

    if args.data == None:
        printer.error("Missing argument node")
        sys.exit(3)

    is_folder = args.data.startswith("@")
    try:
        if is_folder:
            uniques = self.app.services.nodes.explode_unique(args.data)
            if not uniques:
                raise InvalidConfigurationError(f"Invalid folder {args.data}")
            self.app.services.nodes.add_node(args.data, {}, is_folder=True)
            printer.success(f"{args.data} added successfully")
        else:
            if args.data in self.app.nodes_list:
                printer.error(f"Node '{args.data}' already exists.")
                sys.exit(1)
            uniques = self.app.services.nodes.explode_unique(args.data)

            # Fast fail if parent folder does not exist
            self.app.services.nodes.validate_parent_folder(args.data)

            printer.console.print(Markdown(get_instructions()))

            new_node_data = self.forms.questions_nodes(args.data, uniques)
            if not new_node_data:
                sys.exit(7)
            self.app.services.nodes.add_node(args.data, new_node_data)
            printer.success(f"{args.data} added successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.connect"><code class="name flex">
<span>def <span class="ident">connect</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def connect(self, args):
    if args.data == None:
        try:
            matches = self.app.services.nodes.list_nodes()
        except Exception as e:
            printer.error(f"Failed to list nodes: {e}")
            sys.exit(1)

        if len(matches) == 0:
            printer.warning("There are no nodes created")
            printer.info("try: connpy --help")
            sys.exit(9)
    else:
        try:
            matches = self.app.services.nodes.list_nodes(args.data)
        except Exception:
            matches = []

        if len(matches) == 0:
            printer.error(f"{args.data} not found")
            sys.exit(2)
        elif len(matches) > 1:
            matches[0] = choose(self.app, matches, "node", "connect")

    if matches[0] == None:
        sys.exit(7)

    try:
        self.app.services.nodes.connect_node(
            matches[0],
            sftp=args.sftp,
            debug=args.debug,
            logger=self.app._service_logger
        )
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.delete"><code class="name flex">
<span>def <span class="ident">delete</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def delete(self, args):
    if args.data == None:
        printer.error("Missing argument node")
        sys.exit(3)

    is_folder = args.data.startswith("@")
    try:
        if is_folder:
            matches = self.app.services.nodes.list_folders(args.data)
        else:
            matches = self.app.services.nodes.list_nodes(args.data)
    except Exception:
        matches = []

    if len(matches) == 0:
        printer.error(f"{args.data} not found")
        sys.exit(2)

    printer.info(f"Removing: {matches}")
    question = [inquirer.Confirm("delete", message="Are you sure you want to continue?")]
    confirm = inquirer.prompt(question)
    if confirm == None or not confirm["delete"]:
        sys.exit(7)

    try:
        for item in matches:
            self.app.services.nodes.delete_node(item, is_folder=is_folder)

        if len(matches) == 1:
            printer.success(f"{matches[0]} deleted successfully")
        else:
            printer.success(f"{len(matches)} items deleted successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
|
||||
</details>
|
||||
<div class="desc"></div>
|
||||
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    if not self.app.case and args.data != None:
        args.data = args.data.lower()
    actions = {"version": self.version, "connect": self.connect, "add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
    return actions.get(args.action)(args)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.modify"><code class="name flex">
<span>def <span class="ident">modify</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def modify(self, args):
    if args.data == None:
        printer.error("Missing argument node")
        sys.exit(3)

    try:
        matches = self.app.services.nodes.list_nodes(args.data)
    except Exception:
        matches = []

    if len(matches) == 0:
        printer.error(f"No connection found with filter: {args.data}")
        sys.exit(2)

    unique = matches[0] if len(matches) == 1 else None
    uniques = self.app.services.nodes.explode_unique(unique) if unique else {"id": None, "folder": None}

    printer.info(f"Editing: {matches}")
    node_details = {}
    for i in matches:
        node_details[i] = self.app.services.nodes.get_node_details(i)

    edits = self.forms.questions_edit()
    if edits == None:
        sys.exit(7)

    # Use first match as base for defaults if multiple matches exist
    base_unique = matches[0]
    base_uniques = self.app.services.nodes.explode_unique(base_unique)
    updatenode = self.forms.questions_nodes(base_unique, base_uniques, edit=edits)
    if not updatenode:
        sys.exit(7)

    try:
        if len(matches) == 1:
            # Comparison for "Nothing to do"
            current = node_details[matches[0]].copy()
            current.update(uniques)
            current["type"] = "connection"
            if sorted(updatenode.items()) == sorted(current.items()):
                printer.info("Nothing to do here")
                return
            self.app.services.nodes.update_node(matches[0], updatenode)
            printer.success(f"{args.data} edited successfully")
        else:
            editcount = 0
            for k in matches:
                updated_item = self.app.services.nodes.explode_unique(k)
                updated_item["type"] = "connection"
                updated_item.update(node_details[k])

                this_item_changed = False
                for key, should_edit in edits.items():
                    if should_edit:
                        this_item_changed = True
                        updated_item[key] = updatenode[key]

                if this_item_changed:
                    editcount += 1
                    self.app.services.nodes.update_node(k, updated_item)

            if editcount == 0:
                printer.info("Nothing to do here")
            else:
                printer.success(f"{matches} edited successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.show"><code class="name flex">
<span>def <span class="ident">show</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def show(self, args):
    if args.data == None:
        printer.error("Missing argument node")
        sys.exit(3)

    try:
        matches = self.app.services.nodes.list_nodes(args.data)
    except Exception:
        matches = []

    if len(matches) == 0:
        printer.error(f"{args.data} not found")
        sys.exit(2)
    elif len(matches) > 1:
        matches[0] = choose(self.app, matches, "node", "show")

    if matches[0] == None:
        sys.exit(7)

    try:
        node = self.app.services.nodes.get_node_details(matches[0])
        yaml_output = yaml.dump(node, sort_keys=False, default_flow_style=False)
        printer.data(matches[0], yaml_output)
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.node_handler.NodeHandler.version"><code class="name flex">
<span>def <span class="ident">version</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def version(self, args):
    from .._version import __version__
    printer.info(f"Connpy {__version__}")</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.node_handler.NodeHandler" href="#connpy.cli.node_handler.NodeHandler">NodeHandler</a></code></h4>
<ul class="two-column">
<li><code><a title="connpy.cli.node_handler.NodeHandler.add" href="#connpy.cli.node_handler.NodeHandler.add">add</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.connect" href="#connpy.cli.node_handler.NodeHandler.connect">connect</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.delete" href="#connpy.cli.node_handler.NodeHandler.delete">delete</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.dispatch" href="#connpy.cli.node_handler.NodeHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.modify" href="#connpy.cli.node_handler.NodeHandler.modify">modify</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.show" href="#connpy.cli.node_handler.NodeHandler.show">show</a></code></li>
<li><code><a title="connpy.cli.node_handler.NodeHandler.version" href="#connpy.cli.node_handler.NodeHandler.version">version</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,391 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.plugin_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.plugin_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.plugin_handler.PluginHandler"><code class="flex name class">
<span>class <span class="ident">PluginHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class PluginHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        try:
            # We determine the target PluginService/PluginStub based on standard 'mode'
            # But wait, local plugins should go to app.services._init_local version
            # Or we can just use the provided app.services.plugins and pass the appropriate grpc calls if needed.

            is_remote = getattr(args, "remote", False)
            if is_remote and self.app.services.mode != "remote":
                printer.error("Cannot use --remote flag when not running in remote mode.")
                return

            if args.add:
                self.app.services.plugins.add_plugin(args.add[0], args.add[1])
                printer.success(f"Plugin {args.add[0]} added successfully{' remotely' if is_remote else ''}.")
            elif args.update:
                self.app.services.plugins.add_plugin(args.update[0], args.update[1], update=True)
                printer.success(f"Plugin {args.update[0]} updated successfully{' remotely' if is_remote else ''}.")
            elif args.delete:
                self.app.services.plugins.delete_plugin(args.delete[0])
                printer.success(f"Plugin {args.delete[0]} deleted successfully{' remotely' if is_remote else ''}.")
            elif args.enable:
                name = args.enable[0]
                if is_remote:
                    self.app.plugins.preferences[name] = "remote"
                else:
                    if name in self.app.plugins.preferences:
                        del self.app.plugins.preferences[name]

                self.app.plugins._save_preferences(self.app.services.config_svc.get_default_dir())

                # Always try to enable it locally (remove .bkp) if it exists
                # regardless of mode, to keep files consistent with "enabled" state
                try:
                    # We use a local service instance to ensure we touch local files
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    local_svc.enable_plugin(name)
                except Exception:
                    pass # Ignore if not found locally or already enabled

                if is_remote and self.app.services.mode == "remote":
                    self.app.services.plugins.enable_plugin(name)

                printer.success(f"Plugin {name} enabled successfully{' remotely' if is_remote else ' locally'}.")
            elif args.disable:
                name = args.disable[0]
                success = False
                if is_remote:
                    if self.app.services.mode == "remote":
                        self.app.services.plugins.disable_plugin(name)
                        success = True
                else:
                    # Disable locally
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    try:
                        if local_svc.disable_plugin(name):
                            success = True
                    except Exception as e:
                        printer.warning(f"Could not disable local plugin: {e}")

                if success:
                    printer.success(f"Plugin {name} disabled successfully{' remotely' if is_remote else ' locally'}.")

                # If any remote operation was performed, trigger a sync to update local cache immediately
                if is_remote and self.app.services.mode == "remote":
                    try:
                        import os
                        cache_dir = os.path.join(self.app.services.config_svc.get_default_dir(), "remote_plugins")
                        # We use a dummy subparser choice check bypass by passing force_sync=True
                        # or just letting the hasher handle it.
                        self.app.plugins._import_remote_plugins_to_argparse(
                            self.app.services.plugins,
                            self.app.subparsers, # We'll need to make sure this is available
                            cache_dir,
                            force_sync=True
                        )
                    except Exception:
                        pass

            elif getattr(args, "sync", False):
                # The actual sync logic is performed in connapp.py during init
                # if the --sync flag is detected in sys.argv
                printer.success("Remote plugins synchronized successfully.")
            elif args.list:
                # We need to fetch both local and remote if in remote mode
                local_plugins = {}
                remote_plugins = {}

                # Fetch depending on mode
                if self.app.services.mode == "remote":
                    # For local we need to instantiate a local plugin service bypassing stub
                    from ..services.plugin_service import PluginService
                    local_svc = PluginService(self.app.services.config)
                    local_plugins = local_svc.list_plugins()
                    remote_plugins = self.app.services.plugins.list_plugins()
                else:
                    local_plugins = self.app.services.plugins.list_plugins()

                from rich.table import Table

                table = Table(title="Available Plugins", show_header=True, header_style="bold cyan")
                table.add_column("Plugin", style="cyan")
                table.add_column("State", style="bold")
                table.add_column("Origin", style="magenta")

                # Populate local plugins
                for name, details in local_plugins.items():
                    state = "Disabled" if not details.get("enabled", True) else "Active"
                    color = "red" if state == "Disabled" else "green"

                    if self.app.services.mode == "remote" and state == "Active":
                        if self.app.plugins.preferences.get(name) == "remote":
                            state = "Shadowed (Override by Remote)"
                            color = "yellow"

                    table.add_row(name, f"[{color}]{state}[/{color}]", "Local")

                # Populate remote plugins
                if self.app.services.mode == "remote":
                    for name, details in remote_plugins.items():
                        state = "Disabled" if not details.get("enabled", True) else "Active"
                        color = "red" if state == "Disabled" else "green"

                        if state == "Active":
                            pref = self.app.plugins.preferences.get(name, "local")
                            # If preference isn't remote and the plugin exists locally, local takes priority
                            if pref != "remote" and name in local_plugins:
                                state = "Shadowed (Override by Local)"
                                color = "yellow"

                        table.add_row(name, f"[{color}]{state}[/{color}]", "Remote")

                if not local_plugins and not remote_plugins:
                    printer.console.print(" No plugins found.")
                else:
                    printer.console.print(table)

        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.plugin_handler.PluginHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    try:
        # We determine the target PluginService/PluginStub based on standard 'mode'
        # But wait, local plugins should go to app.services._init_local version
        # Or we can just use the provided app.services.plugins and pass the appropriate grpc calls if needed.

        is_remote = getattr(args, "remote", False)
        if is_remote and self.app.services.mode != "remote":
            printer.error("Cannot use --remote flag when not running in remote mode.")
            return

        if args.add:
            self.app.services.plugins.add_plugin(args.add[0], args.add[1])
            printer.success(f"Plugin {args.add[0]} added successfully{' remotely' if is_remote else ''}.")
        elif args.update:
            self.app.services.plugins.add_plugin(args.update[0], args.update[1], update=True)
            printer.success(f"Plugin {args.update[0]} updated successfully{' remotely' if is_remote else ''}.")
        elif args.delete:
            self.app.services.plugins.delete_plugin(args.delete[0])
            printer.success(f"Plugin {args.delete[0]} deleted successfully{' remotely' if is_remote else ''}.")
        elif args.enable:
            name = args.enable[0]
            if is_remote:
                self.app.plugins.preferences[name] = "remote"
            else:
                if name in self.app.plugins.preferences:
                    del self.app.plugins.preferences[name]

            self.app.plugins._save_preferences(self.app.services.config_svc.get_default_dir())

            # Always try to enable it locally (remove .bkp) if it exists
            # regardless of mode, to keep files consistent with "enabled" state
            try:
                # We use a local service instance to ensure we touch local files
                from ..services.plugin_service import PluginService
                local_svc = PluginService(self.app.services.config)
                local_svc.enable_plugin(name)
            except Exception:
                pass # Ignore if not found locally or already enabled

            if is_remote and self.app.services.mode == "remote":
                self.app.services.plugins.enable_plugin(name)

            printer.success(f"Plugin {name} enabled successfully{' remotely' if is_remote else ' locally'}.")
        elif args.disable:
            name = args.disable[0]
            success = False
            if is_remote:
                if self.app.services.mode == "remote":
                    self.app.services.plugins.disable_plugin(name)
                    success = True
            else:
                # Disable locally
                from ..services.plugin_service import PluginService
                local_svc = PluginService(self.app.services.config)
                try:
                    if local_svc.disable_plugin(name):
                        success = True
                except Exception as e:
                    printer.warning(f"Could not disable local plugin: {e}")

            if success:
                printer.success(f"Plugin {name} disabled successfully{' remotely' if is_remote else ' locally'}.")

            # If any remote operation was performed, trigger a sync to update local cache immediately
            if is_remote and self.app.services.mode == "remote":
                try:
                    import os
                    cache_dir = os.path.join(self.app.services.config_svc.get_default_dir(), "remote_plugins")
                    # We use a dummy subparser choice check bypass by passing force_sync=True
                    # or just letting the hasher handle it.
                    self.app.plugins._import_remote_plugins_to_argparse(
                        self.app.services.plugins,
                        self.app.subparsers, # We'll need to make sure this is available
                        cache_dir,
                        force_sync=True
                    )
                except Exception:
                    pass

        elif getattr(args, "sync", False):
            # The actual sync logic is performed in connapp.py during init
            # if the --sync flag is detected in sys.argv
            printer.success("Remote plugins synchronized successfully.")
        elif args.list:
            # We need to fetch both local and remote if in remote mode
            local_plugins = {}
            remote_plugins = {}

            # Fetch depending on mode
            if self.app.services.mode == "remote":
                # For local we need to instantiate a local plugin service bypassing stub
                from ..services.plugin_service import PluginService
                local_svc = PluginService(self.app.services.config)
                local_plugins = local_svc.list_plugins()
                remote_plugins = self.app.services.plugins.list_plugins()
            else:
                local_plugins = self.app.services.plugins.list_plugins()

            from rich.table import Table

            table = Table(title="Available Plugins", show_header=True, header_style="bold cyan")
            table.add_column("Plugin", style="cyan")
            table.add_column("State", style="bold")
            table.add_column("Origin", style="magenta")

            # Populate local plugins
            for name, details in local_plugins.items():
                state = "Disabled" if not details.get("enabled", True) else "Active"
                color = "red" if state == "Disabled" else "green"

                if self.app.services.mode == "remote" and state == "Active":
                    if self.app.plugins.preferences.get(name) == "remote":
                        state = "Shadowed (Override by Remote)"
                        color = "yellow"

                table.add_row(name, f"[{color}]{state}[/{color}]", "Local")

            # Populate remote plugins
            if self.app.services.mode == "remote":
                for name, details in remote_plugins.items():
                    state = "Disabled" if not details.get("enabled", True) else "Active"
                    color = "red" if state == "Disabled" else "green"

                    if state == "Active":
                        pref = self.app.plugins.preferences.get(name, "local")
                        # If preference isn't remote and the plugin exists locally, local takes priority
                        if pref != "remote" and name in local_plugins:
                            state = "Shadowed (Override by Local)"
                            color = "yellow"

                    table.add_row(name, f"[{color}]{state}[/{color}]", "Remote")

            if not local_plugins and not remote_plugins:
                printer.console.print(" No plugins found.")
            else:
                printer.console.print(table)

    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.plugin_handler.PluginHandler" href="#connpy.cli.plugin_handler.PluginHandler">PluginHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.plugin_handler.PluginHandler.dispatch" href="#connpy.cli.plugin_handler.PluginHandler.dispatch">dispatch</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,320 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.profile_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.profile_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.profile_handler.ProfileHandler"><code class="flex name class">
<span>class <span class="ident">ProfileHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class ProfileHandler:
    def __init__(self, app):
        self.app = app
        self.forms = Forms(app)

    def dispatch(self, args):
        if not self.app.case:
            args.data[0] = args.data[0].lower()
        actions = {"add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
        return actions.get(args.action)(args)

    def delete(self, args):
        name = args.data[0]
        try:
            self.app.services.profiles.get_profile(name)
        except ProfileNotFoundError:
            printer.error(f"{name} not found")
            sys.exit(2)

        if name == "default":
            printer.error("Can't delete default profile")
            sys.exit(6)

        question = [inquirer.Confirm("delete", message=f"Are you sure you want to delete {name}?")]
        confirm = inquirer.prompt(question)
        if confirm == None or not confirm["delete"]:
            sys.exit(7)

        try:
            self.app.services.profiles.delete_profile(name)
            printer.success(f"{name} deleted successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(8)

    def show(self, args):
        try:
            profile = self.app.services.profiles.get_profile(args.data[0])
            yaml_output = yaml.dump(profile, sort_keys=False, default_flow_style=False)
            printer.data(args.data[0], yaml_output)
        except ProfileNotFoundError:
            printer.error(f"{args.data[0]} not found")
            sys.exit(2)

    def add(self, args):
        name = args.data[0]
        if name in self.app.services.profiles.list_profiles():
            printer.error(f"Profile '{name}' already exists.")
            sys.exit(4)

        new_profile_data = self.forms.questions_profiles(name)
        if not new_profile_data:
            sys.exit(7)

        try:
            self.app.services.profiles.add_profile(name, new_profile_data)
            printer.success(f"{name} added successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def modify(self, args):
        name = args.data[0]
        try:
            profile = self.app.services.profiles.get_profile(name, resolve=False)
        except ProfileNotFoundError:
            printer.error(f"Profile '{name}' not found")
            sys.exit(2)

        old_profile = {"id": name, **profile}
        edits = self.forms.questions_edit()
        if edits == None:
            sys.exit(7)

        update_profile_data = self.forms.questions_profiles(name, edit=edits)
        if not update_profile_data:
            sys.exit(7)

        if sorted(update_profile_data.items()) == sorted(old_profile.items()):
            printer.info("Nothing to do here")
            return

        try:
            self.app.services.profiles.update_profile(name, update_profile_data)
            printer.success(f"{name} edited successfully")
        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.profile_handler.ProfileHandler.add"><code class="name flex">
<span>def <span class="ident">add</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def add(self, args):
    name = args.data[0]
    if name in self.app.services.profiles.list_profiles():
        printer.error(f"Profile '{name}' already exists.")
        sys.exit(4)

    new_profile_data = self.forms.questions_profiles(name)
    if not new_profile_data:
        sys.exit(7)

    try:
        self.app.services.profiles.add_profile(name, new_profile_data)
        printer.success(f"{name} added successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.profile_handler.ProfileHandler.delete"><code class="name flex">
<span>def <span class="ident">delete</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def delete(self, args):
    name = args.data[0]
    try:
        self.app.services.profiles.get_profile(name)
    except ProfileNotFoundError:
        printer.error(f"{name} not found")
        sys.exit(2)

    if name == "default":
        printer.error("Can't delete default profile")
        sys.exit(6)

    question = [inquirer.Confirm("delete", message=f"Are you sure you want to delete {name}?")]
    confirm = inquirer.prompt(question)
    if confirm == None or not confirm["delete"]:
        sys.exit(7)

    try:
        self.app.services.profiles.delete_profile(name)
        printer.success(f"{name} deleted successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(8)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.profile_handler.ProfileHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    if not self.app.case:
        args.data[0] = args.data[0].lower()
    actions = {"add": self.add, "del": self.delete, "mod": self.modify, "show": self.show}
    return actions.get(args.action)(args)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.profile_handler.ProfileHandler.modify"><code class="name flex">
<span>def <span class="ident">modify</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def modify(self, args):
    name = args.data[0]
    try:
        profile = self.app.services.profiles.get_profile(name, resolve=False)
    except ProfileNotFoundError:
        printer.error(f"Profile '{name}' not found")
        sys.exit(2)

    old_profile = {"id": name, **profile}
    edits = self.forms.questions_edit()
    if edits == None:
        sys.exit(7)

    update_profile_data = self.forms.questions_profiles(name, edit=edits)
    if not update_profile_data:
        sys.exit(7)

    if sorted(update_profile_data.items()) == sorted(old_profile.items()):
        printer.info("Nothing to do here")
        return

    try:
        self.app.services.profiles.update_profile(name, update_profile_data)
        printer.success(f"{name} edited successfully")
    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.profile_handler.ProfileHandler.show"><code class="name flex">
<span>def <span class="ident">show</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def show(self, args):
    try:
        profile = self.app.services.profiles.get_profile(args.data[0])
        yaml_output = yaml.dump(profile, sort_keys=False, default_flow_style=False)
        printer.data(args.data[0], yaml_output)
    except ProfileNotFoundError:
        printer.error(f"{args.data[0]} not found")
        sys.exit(2)</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.profile_handler.ProfileHandler" href="#connpy.cli.profile_handler.ProfileHandler">ProfileHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.profile_handler.ProfileHandler.add" href="#connpy.cli.profile_handler.ProfileHandler.add">add</a></code></li>
<li><code><a title="connpy.cli.profile_handler.ProfileHandler.delete" href="#connpy.cli.profile_handler.ProfileHandler.delete">delete</a></code></li>
<li><code><a title="connpy.cli.profile_handler.ProfileHandler.dispatch" href="#connpy.cli.profile_handler.ProfileHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.profile_handler.ProfileHandler.modify" href="#connpy.cli.profile_handler.ProfileHandler.modify">modify</a></code></li>
<li><code><a title="connpy.cli.profile_handler.ProfileHandler.show" href="#connpy.cli.profile_handler.ProfileHandler.show">show</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,460 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.run_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.run_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.run_handler.RunHandler"><code class="flex name class">
<span>class <span class="ident">RunHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class RunHandler:
    def __init__(self, app):
        self.app = app
        self.print_lock = threading.Lock()

    def dispatch(self, args):
        if len(args.data) > 1:
            args.action = "noderun"
        actions = {"noderun": self.node_run, "generate": self.yaml_generate, "run": self.yaml_run}
        return actions.get(args.action)(args)

    def node_run(self, args):
        nodes_filter = args.data[0]
        commands = [" ".join(args.data[1:])]

        try:
            header_printed = False

            if hasattr(args, 'test_expected') and args.test_expected:
                # Mode: Test
                def _on_node_complete(unique, node_output, node_status, node_result):
                    nonlocal header_printed
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule("OUTPUT", style="header"))
                            header_printed = True
                        printer.test_panel(unique, node_output, node_status, node_result)

                results = self.app.services.execution.test_commands(
                    nodes_filter=nodes_filter,
                    commands=commands,
                    expected=args.test_expected,
                    on_node_complete=_on_node_complete
                )
                printer.test_summary(results)
            else:
                # Mode: Normal Run
                def _on_node_complete(unique, node_output, node_status):
                    nonlocal header_printed
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule("OUTPUT", style="header"))
                            header_printed = True
                        printer.node_panel(unique, node_output, node_status)

                results = self.app.services.execution.run_commands(
                    nodes_filter=nodes_filter,
                    commands=commands,
                    on_node_complete=_on_node_complete
                )
                printer.run_summary(results)

        except ConnpyError as e:
            printer.error(str(e))
            sys.exit(1)

    def yaml_generate(self, args):
        if os.path.exists(args.data[0]):
            printer.error(f"File '{args.data[0]}' already exists.")
            sys.exit(14)
        else:
            with open(args.data[0], "w") as file:
                file.write(get_instructions("generate"))
            printer.success(f"File {args.data[0]} generated successfully")
            sys.exit()

    def yaml_run(self, args):
        path = args.data[0]
        try:
            with open(path, "r") as f:
                playbook = yaml.load(f, Loader=yaml.FullLoader)

            for task in playbook.get("tasks", []):
                self.cli_run(task)

        except Exception as e:
            printer.error(f"Failed to run playbook {path}: {e}")
            sys.exit(10)

    def cli_run(self, script):
        name = script.get("name", "Task")
        try:
            action = script["action"]
            nodelist = script["nodes"]
            commands = script["commands"]
            variables = script.get("variables")
            output_cfg = script["output"]
            options = script.get("options", {})
        except KeyError as e:
            printer.error(f"[{name}] '{e.args[0]}' is mandatory in script")
            sys.exit(11)

        stdout = (output_cfg == "stdout")
        folder = output_cfg if output_cfg not in [None, "stdout"] else None
        prompt = options.get("prompt")

        try:
            header_printed = False
            if action == "run":
                # If stdout is true, we stream results as they arrive
                def _on_run_complete(unique, node_output, node_status):
                    nonlocal header_printed
                    if stdout:
                        with self.print_lock:
                            if not header_printed:
                                printer.console.print(Rule(name.upper(), style="header"))
                                header_printed = True
                            printer.node_panel(unique, node_output, node_status)

                results = self.app.services.execution.run_commands(
                    nodes_filter=nodelist,
                    commands=commands,
                    variables=variables,
                    parallel=options.get("parallel", 10),
                    timeout=options.get("timeout", 20),
                    folder=folder,
                    prompt=prompt,
                    on_node_complete=_on_run_complete
                )
                # Final Summary
                if not stdout and not folder:
                    with self.print_lock:
                        printer.console.print(Rule(name.upper(), style="header"))
                        for unique, data in results.items():
                            output = data["output"] if isinstance(data, dict) else data
                            printer.node_panel(unique, output, 0)

                # ALWAYS show the aggregate execution summary at the end
                printer.run_summary(results)

            elif action == "test":
                expected = script.get("expected", [])
                # Show test_panel per node ONLY if stdout is True
                def _on_test_complete(unique, node_output, node_status, node_result):
                    nonlocal header_printed
                    if stdout:
                        with self.print_lock:
                            if not header_printed:
                                printer.console.print(Rule(name.upper(), style="header"))
                                header_printed = True
                            printer.test_panel(unique, node_output, node_status, node_result)

                results = self.app.services.execution.test_commands(
                    nodes_filter=nodelist,
                    commands=commands,
                    expected=expected,
                    variables=variables,
                    parallel=options.get("parallel", 10),
                    timeout=options.get("timeout", 20),
                    folder=folder,
                    prompt=prompt,
                    on_node_complete=_on_test_complete
                )
                # ALWAYS show the aggregate summary at the end
                printer.test_summary(results)

        except ConnpyError as e:
            printer.error(str(e))</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.run_handler.RunHandler.cli_run"><code class="name flex">
<span>def <span class="ident">cli_run</span></span>(<span>self, script)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def cli_run(self, script):
    name = script.get("name", "Task")
    try:
        action = script["action"]
        nodelist = script["nodes"]
        commands = script["commands"]
        variables = script.get("variables")
        output_cfg = script["output"]
        options = script.get("options", {})
    except KeyError as e:
        printer.error(f"[{name}] '{e.args[0]}' is mandatory in script")
        sys.exit(11)

    stdout = (output_cfg == "stdout")
    folder = output_cfg if output_cfg not in [None, "stdout"] else None
    prompt = options.get("prompt")

    try:
        header_printed = False
        if action == "run":
            # If stdout is true, we stream results as they arrive
            def _on_run_complete(unique, node_output, node_status):
                nonlocal header_printed
                if stdout:
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule(name.upper(), style="header"))
                            header_printed = True
                        printer.node_panel(unique, node_output, node_status)

            results = self.app.services.execution.run_commands(
                nodes_filter=nodelist,
                commands=commands,
                variables=variables,
                parallel=options.get("parallel", 10),
                timeout=options.get("timeout", 20),
                folder=folder,
                prompt=prompt,
                on_node_complete=_on_run_complete
            )
            # Final Summary
            if not stdout and not folder:
                with self.print_lock:
                    printer.console.print(Rule(name.upper(), style="header"))
                    for unique, data in results.items():
                        output = data["output"] if isinstance(data, dict) else data
                        printer.node_panel(unique, output, 0)

            # ALWAYS show the aggregate execution summary at the end
            printer.run_summary(results)

        elif action == "test":
            expected = script.get("expected", [])
            # Show test_panel per node ONLY if stdout is True
            def _on_test_complete(unique, node_output, node_status, node_result):
                nonlocal header_printed
                if stdout:
                    with self.print_lock:
                        if not header_printed:
                            printer.console.print(Rule(name.upper(), style="header"))
                            header_printed = True
                        printer.test_panel(unique, node_output, node_status, node_result)

            results = self.app.services.execution.test_commands(
                nodes_filter=nodelist,
                commands=commands,
                expected=expected,
                variables=variables,
                parallel=options.get("parallel", 10),
                timeout=options.get("timeout", 20),
                folder=folder,
                prompt=prompt,
                on_node_complete=_on_test_complete
            )
            # ALWAYS show the aggregate summary at the end
            printer.test_summary(results)

    except ConnpyError as e:
        printer.error(str(e))</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.run_handler.RunHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    if len(args.data) > 1:
        args.action = "noderun"
    actions = {"noderun": self.node_run, "generate": self.yaml_generate, "run": self.yaml_run}
    return actions.get(args.action)(args)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.run_handler.RunHandler.node_run"><code class="name flex">
<span>def <span class="ident">node_run</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def node_run(self, args):
    nodes_filter = args.data[0]
    commands = [" ".join(args.data[1:])]

    try:
        header_printed = False

        if hasattr(args, 'test_expected') and args.test_expected:
            # Mode: Test
            def _on_node_complete(unique, node_output, node_status, node_result):
                nonlocal header_printed
                with self.print_lock:
                    if not header_printed:
                        printer.console.print(Rule("OUTPUT", style="header"))
                        header_printed = True
                    printer.test_panel(unique, node_output, node_status, node_result)

            results = self.app.services.execution.test_commands(
                nodes_filter=nodes_filter,
                commands=commands,
                expected=args.test_expected,
                on_node_complete=_on_node_complete
            )
            printer.test_summary(results)
        else:
            # Mode: Normal Run
            def _on_node_complete(unique, node_output, node_status):
                nonlocal header_printed
                with self.print_lock:
                    if not header_printed:
                        printer.console.print(Rule("OUTPUT", style="header"))
                        header_printed = True
                    printer.node_panel(unique, node_output, node_status)

            results = self.app.services.execution.run_commands(
                nodes_filter=nodes_filter,
                commands=commands,
                on_node_complete=_on_node_complete
            )
            printer.run_summary(results)

    except ConnpyError as e:
        printer.error(str(e))
        sys.exit(1)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.run_handler.RunHandler.yaml_generate"><code class="name flex">
<span>def <span class="ident">yaml_generate</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def yaml_generate(self, args):
    if os.path.exists(args.data[0]):
        printer.error(f"File '{args.data[0]}' already exists.")
        sys.exit(14)
    else:
        with open(args.data[0], "w") as file:
            file.write(get_instructions("generate"))
        printer.success(f"File {args.data[0]} generated successfully")
        sys.exit()</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.run_handler.RunHandler.yaml_run"><code class="name flex">
<span>def <span class="ident">yaml_run</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def yaml_run(self, args):
    path = args.data[0]
    try:
        with open(path, "r") as f:
            playbook = yaml.load(f, Loader=yaml.FullLoader)

        for task in playbook.get("tasks", []):
            self.cli_run(task)

    except Exception as e:
        printer.error(f"Failed to run playbook {path}: {e}")
        sys.exit(10)</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.run_handler.RunHandler" href="#connpy.cli.run_handler.RunHandler">RunHandler</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.run_handler.RunHandler.cli_run" href="#connpy.cli.run_handler.RunHandler.cli_run">cli_run</a></code></li>
<li><code><a title="connpy.cli.run_handler.RunHandler.dispatch" href="#connpy.cli.run_handler.RunHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.run_handler.RunHandler.node_run" href="#connpy.cli.run_handler.RunHandler.node_run">node_run</a></code></li>
<li><code><a title="connpy.cli.run_handler.RunHandler.yaml_generate" href="#connpy.cli.run_handler.RunHandler.yaml_generate">yaml_generate</a></code></li>
<li><code><a title="connpy.cli.run_handler.RunHandler.yaml_run" href="#connpy.cli.run_handler.RunHandler.yaml_run">yaml_run</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,433 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.sync_handler API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.sync_handler</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.sync_handler.SyncHandler"><code class="flex name class">
<span>class <span class="ident">SyncHandler</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class SyncHandler:
    def __init__(self, app):
        self.app = app

    def dispatch(self, args):
        action = getattr(args, "action", None)
        actions = {
            "login": self.login,
            "logout": self.logout,
            "status": self.status,
            "list": self.list_backups,
            "once": self.once,
            "restore": self.restore,
            "start": self.start,
            "stop": self.stop
        }
        handler = actions.get(action)
        if handler:
            return handler(args)

        return self.status(args)

    def login(self, args):
        self.app.services.sync.login()

    def logout(self, args):
        self.app.services.sync.logout()

    def status(self, args):
        status = self.app.services.sync.check_login_status()
        enabled = self.app.services.sync.sync_enabled
        remote = self.app.services.sync.sync_remote

        printer.info(f"Login Status: {status}")
        printer.info(f"Auto-Sync: {'Enabled' if enabled else 'Disabled'}")
        printer.info(f"Sync Remote Nodes: {'Yes' if remote else 'No'}")

    def list_backups(self, args):
        backups = self.app.services.sync.list_backups()
        if backups:
            yaml_output = yaml.dump(backups, sort_keys=False, default_flow_style=False)
            printer.custom("backups", "")
            print(yaml_output)
        else:
            printer.info("No backups found or not logged in.")

    def once(self, args):
        # Manual backup. We check if we should include remote nodes
        remote_data = None
        if self.app.services.sync.sync_remote and self.app.services.mode == "remote":
            inventory = self.app.services.nodes.get_inventory()
            # Merge with local settings
            local_settings = self.app.services.config_svc.get_settings()
            local_settings.pop("configfolder", None)

            # Maintain proper config structure: {config: {}, connections: {}, profiles: {}}
            remote_data = {
                "config": local_settings,
                "connections": inventory.get("connections", {}),
                "profiles": inventory.get("profiles", {})
            }

        if self.app.services.sync.compress_and_upload(remote_data):
            printer.success("Manual backup completed.")

    def restore(self, args):
        import inquirer
        file_id = getattr(args, "id", None)

        # Segmented flags
        restore_config = getattr(args, "restore_config", False)
        restore_nodes = getattr(args, "restore_nodes", False)

        # If neither is specified, we restore ALL (backwards compatibility)
        if not restore_config and not restore_nodes:
            restore_config = True
            restore_nodes = True

        # 1. Analyze what we are about to restore
        info = self.app.services.sync.analyze_backup_content(file_id)
        if not info:
            printer.error("Could not analyze backup content.")
            return

        # 2. Show detailed info
        printer.info("Restoration Details:")
        if restore_config:
            print(f" - Local Settings: Yes")
            print(f" - RSA Key (.osk): {'Yes' if info['has_key'] else 'No'}")
        if restore_nodes:
            target = "REMOTE" if self.app.services.mode == "remote" else "LOCAL"
            print(f" - Nodes: {info['nodes']}")
            print(f" - Folders: {info['folders']}")
            print(f" - Profiles: {info['profiles']}")
            print(f" - Destination: {target}")
        print("")

        questions = [inquirer.Confirm("confirm", message="Do you want to proceed with the restoration?", default=False)]
        answers = inquirer.prompt(questions)

        if not answers or not answers["confirm"]:
            printer.info("Restore cancelled.")
            return

        # 3. Perform the actual restore
        if self.app.services.sync.restore_backup(
            file_id=file_id,
            restore_config=restore_config,
            restore_nodes=restore_nodes,
            app_instance=self.app
        ):
            printer.success("Restore completed successfully.")

    def start(self, args):
        self.app.services.config_svc.update_setting("sync", True)
        self.app.services.sync.sync_enabled = True
        printer.success("Auto-sync enabled.")

    def stop(self, args):
        self.app.services.config_svc.update_setting("sync", False)
        self.app.services.sync.sync_enabled = False
        printer.success("Auto-sync disabled.")</code></pre>
</details>
<div class="desc"></div>
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.sync_handler.SyncHandler.dispatch"><code class="name flex">
<span>def <span class="ident">dispatch</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def dispatch(self, args):
    action = getattr(args, "action", None)
    actions = {
        "login": self.login,
        "logout": self.logout,
        "status": self.status,
        "list": self.list_backups,
        "once": self.once,
        "restore": self.restore,
        "start": self.start,
        "stop": self.stop
    }
    handler = actions.get(action)
    if handler:
        return handler(args)

    return self.status(args)</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.list_backups"><code class="name flex">
<span>def <span class="ident">list_backups</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def list_backups(self, args):
    backups = self.app.services.sync.list_backups()
    if backups:
        yaml_output = yaml.dump(backups, sort_keys=False, default_flow_style=False)
        printer.custom("backups", "")
        print(yaml_output)
    else:
        printer.info("No backups found or not logged in.")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.login"><code class="name flex">
<span>def <span class="ident">login</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def login(self, args):
    self.app.services.sync.login()</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.logout"><code class="name flex">
<span>def <span class="ident">logout</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def logout(self, args):
    self.app.services.sync.logout()</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.once"><code class="name flex">
<span>def <span class="ident">once</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def once(self, args):
    # Manual backup. We check if we should include remote nodes
    remote_data = None
    if self.app.services.sync.sync_remote and self.app.services.mode == "remote":
        inventory = self.app.services.nodes.get_inventory()
        # Merge with local settings
        local_settings = self.app.services.config_svc.get_settings()
        local_settings.pop("configfolder", None)

        # Maintain proper config structure: {config: {}, connections: {}, profiles: {}}
        remote_data = {
            "config": local_settings,
            "connections": inventory.get("connections", {}),
            "profiles": inventory.get("profiles", {})
        }

    if self.app.services.sync.compress_and_upload(remote_data):
        printer.success("Manual backup completed.")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.restore"><code class="name flex">
<span>def <span class="ident">restore</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def restore(self, args):
    import inquirer
    file_id = getattr(args, "id", None)

    # Segmented flags
    restore_config = getattr(args, "restore_config", False)
    restore_nodes = getattr(args, "restore_nodes", False)

    # If neither is specified, we restore ALL (backwards compatibility)
    if not restore_config and not restore_nodes:
        restore_config = True
        restore_nodes = True

    # 1. Analyze what we are about to restore
    info = self.app.services.sync.analyze_backup_content(file_id)
    if not info:
        printer.error("Could not analyze backup content.")
        return

    # 2. Show detailed info
    printer.info("Restoration Details:")
    if restore_config:
        print(f" - Local Settings: Yes")
        print(f" - RSA Key (.osk): {'Yes' if info['has_key'] else 'No'}")
    if restore_nodes:
        target = "REMOTE" if self.app.services.mode == "remote" else "LOCAL"
        print(f" - Nodes: {info['nodes']}")
        print(f" - Folders: {info['folders']}")
        print(f" - Profiles: {info['profiles']}")
        print(f" - Destination: {target}")
    print("")

    questions = [inquirer.Confirm("confirm", message="Do you want to proceed with the restoration?", default=False)]
    answers = inquirer.prompt(questions)

    if not answers or not answers["confirm"]:
        printer.info("Restore cancelled.")
        return

    # 3. Perform the actual restore
    if self.app.services.sync.restore_backup(
        file_id=file_id,
        restore_config=restore_config,
        restore_nodes=restore_nodes,
        app_instance=self.app
    ):
        printer.success("Restore completed successfully.")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.start"><code class="name flex">
<span>def <span class="ident">start</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def start(self, args):
    self.app.services.config_svc.update_setting("sync", True)
    self.app.services.sync.sync_enabled = True
    printer.success("Auto-sync enabled.")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.status"><code class="name flex">
<span>def <span class="ident">status</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def status(self, args):
    status = self.app.services.sync.check_login_status()
    enabled = self.app.services.sync.sync_enabled
    remote = self.app.services.sync.sync_remote

    printer.info(f"Login Status: {status}")
    printer.info(f"Auto-Sync: {'Enabled' if enabled else 'Disabled'}")
    printer.info(f"Sync Remote Nodes: {'Yes' if remote else 'No'}")</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.sync_handler.SyncHandler.stop"><code class="name flex">
<span>def <span class="ident">stop</span></span>(<span>self, args)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def stop(self, args):
    self.app.services.config_svc.update_setting("sync", False)
    self.app.services.sync.sync_enabled = False
    printer.success("Auto-sync disabled.")</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.sync_handler.SyncHandler" href="#connpy.cli.sync_handler.SyncHandler">SyncHandler</a></code></h4>
<ul class="two-column">
<li><code><a title="connpy.cli.sync_handler.SyncHandler.dispatch" href="#connpy.cli.sync_handler.SyncHandler.dispatch">dispatch</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.list_backups" href="#connpy.cli.sync_handler.SyncHandler.list_backups">list_backups</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.login" href="#connpy.cli.sync_handler.SyncHandler.login">login</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.logout" href="#connpy.cli.sync_handler.SyncHandler.logout">logout</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.once" href="#connpy.cli.sync_handler.SyncHandler.once">once</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.restore" href="#connpy.cli.sync_handler.SyncHandler.restore">restore</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.start" href="#connpy.cli.sync_handler.SyncHandler.start">start</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.status" href="#connpy.cli.sync_handler.SyncHandler.status">status</a></code></li>
<li><code><a title="connpy.cli.sync_handler.SyncHandler.stop" href="#connpy.cli.sync_handler.SyncHandler.stop">stop</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,899 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.terminal_ui API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target 
.name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.terminal_ui</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.terminal_ui.CopilotInterface"><code class="flex name class">
<span>class <span class="ident">CopilotInterface</span></span>
<span>(</span><span>config,<br>history=None,<br>pt_input=None,<br>pt_output=None,<br>rich_file=None,<br>session_state=None)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class CopilotInterface:
    def __init__(self, config, history=None, pt_input=None, pt_output=None, rich_file=None, session_state=None):
        self.config = config
        self.history = history or InMemoryHistory()
        self.pt_input = pt_input
        self.pt_output = pt_output
        self.ai_service = AIService(config)
        self.session_state = session_state if session_state is not None else {
            'persona': 'engineer',
            'trust_mode': False,
            'memories': [],
            'os': None,
            'prompt': None
        }

        if rich_file:
            self.console = Console(theme=connpy_theme, force_terminal=True, file=rich_file)
        else:
            self.console = Console(theme=connpy_theme)

        self.mode_range, self.mode_single, self.mode_lines = 0, 1, 2

    def _get_theme_color(self, style_name: str, fallback: str = "white") -> str:
        """Extract hex or ANSI color name from the active rich theme."""
        try:
            style = connpy_theme.styles.get(style_name)
            if style and style.color:
                # If it's a standard color like 'green', Rich might return its hex triplet
                if style.color.is_default: return fallback
                return style.color.triplet.hex if style.color.triplet else style.color.name
        except Exception: pass
        return fallback

    async def run_session(self,
                          raw_bytes: bytes,
                          cmd_byte_positions: List[tuple],
                          node_info: dict,
                          on_ai_call: Callable):
        """
        Runs the interactive Copilot session.
        on_ai_call: async function(active_buffer, question) -> result_dict
        """
        from rich.rule import Rule

        try:
            # Prepare UI state
            buffer = log_cleaner(raw_bytes.decode(errors='replace'))
            blocks = self.ai_service.build_context_blocks(raw_bytes, cmd_byte_positions, node_info)
            last_line = buffer.split('\n')[-1].strip() if buffer.strip() else "(prompt)"
            blocks.append((len(raw_bytes), last_line[:80]))

            state = {
                'context_cmd': 1,
                'total_cmds': len(blocks),
                'total_lines': len(buffer.split('\n')),
                'context_lines': min(50, len(buffer.split('\n'))),
                'context_mode': self.mode_range,
                'cancelled': False,
                'toolbar_msg': '',
                'msg_expiry': 0
            }

            # 1. Visual separation
            self.console.print("")  # Actual line break
            self.console.print(Rule(title="[bold cyan] AI TERMINAL COPILOT [/bold cyan]", style="cyan"))
            self.console.print(Panel(
                "[dim]Type your question. Enter to send, Escape/Ctrl+C to cancel.\n"
                "Tab to change context mode. Ctrl+\u2191/\u2193 to adjust context. \u2191\u2193 for question history.[/dim]",
                border_style="cyan"
            ))
            self.console.print("\n")  # Small gap before the copilot prompt

            bindings = KeyBindings()
            @bindings.add('c-up')
            def _(event):
                if state['context_mode'] == self.mode_lines:
                    state['context_lines'] = min(state['context_lines'] + 50, state['total_lines'])
                else:
                    state['context_cmd'] = min(state['context_cmd'] + 1, state['total_cmds'])
                event.app.invalidate()
            @bindings.add('c-down')
            def _(event):
                if state['context_mode'] == self.mode_lines:
                    state['context_lines'] = max(state['context_lines'] - 50, min(50, state['total_lines']))
                else:
                    state['context_cmd'] = max(state['context_cmd'] - 1, 1)
                event.app.invalidate()
            @bindings.add('tab')
            def _(event):
                buf = event.current_buffer
                # If typing a slash command (no spaces yet), use tab to autocomplete inline
                if buf.text.startswith('/') and ' ' not in buf.text:
                    buf.complete_next()
                else:
                    state['context_mode'] = (state['context_mode'] + 1) % 3
                event.app.invalidate()
            @bindings.add('escape', eager=True)
            @bindings.add('c-c')
            def _(event):
                state['cancelled'] = True
                event.app.exit(result='')

            def get_active_buffer():
                if state['context_mode'] == self.mode_lines:
                    return '\n'.join(buffer.split('\n')[-state['context_lines']:])
                idx = max(0, state['total_cmds'] - state['context_cmd'])
                start, preview = blocks[idx]
                if state['context_mode'] == self.mode_single and idx + 1 < state['total_cmds']:
                    end = blocks[idx + 1][0]
                    active_raw = raw_bytes[start:end]
                else:
                    active_raw = raw_bytes[start:]
                return preview + "\n" + log_cleaner(active_raw.decode(errors='replace'))

            def get_prompt_text():
                import html
                # Always use user_prompt color for the Ask prompt
                color = self._get_theme_color("user_prompt", "cyan")

                if state['context_mode'] == self.mode_lines:
                    text = html.escape(f"Ask [Ctx: {state['context_lines']}/{state['total_lines']}L]: ")
                    return HTML(f'<style fg="{color}">{text}</style>')
                active = get_active_buffer()
                lines_count = len(active.split('\n'))
                mode_str = {self.mode_range: "Range", self.mode_single: "Cmd"}[state['context_mode']]
                text = html.escape(f"Ask [{mode_str} {state['context_cmd']} ~{lines_count}L]: ")
                return HTML(f'<style fg="{color}">{text}</style>')

            from prompt_toolkit.application.current import get_app

            def get_toolbar():
                import html
                app = get_app()
                c_warning = self._get_theme_color("warning", "yellow")

                if app and app.current_buffer:
                    text = app.current_buffer.text
                    # Only show command help while typing the first word and there are no spaces yet
                    if text.startswith('/') and ' ' not in text:
                        commands = ['/os', '/prompt', '/architect', '/engineer', '/trust', '/untrust', '/memorize', '/clear']
                        matches = [c for c in commands if c.startswith(text.lower())]
                        if matches:
                            m_text = html.escape(f"Available: {' '.join(matches)}")
                            return HTML(f'<style fg="{c_warning}">{m_text}</style>' + " " * 20)

                m_label = {self.mode_range: "RANGE", self.mode_single: "SINGLE", self.mode_lines: "LINES"}[state['context_mode']]
                if state['context_mode'] == self.mode_lines:
                    base_str = f'\u25b6 Ctrl+\u2191/\u2193 adjusts by 50 lines [Tab: {m_label}]'
                else:
                    idx = max(0, state['total_cmds'] - state['context_cmd'])
                    desc = blocks[idx][1]
                    base_str = f'\u25b6 {desc} [Tab: {m_label}]'

                # Wrap base_str in a style to maintain consistency and avoid glitches
                # The fg color will be inherited from bottom-toolbar global style if not specified here
                base_html = f'<span>{html.escape(base_str)}</span>'

                res_html = base_html
                if state.get('toolbar_msg'):
                    if time.time() < state.get('msg_expiry', 0):
                        msg = html.escape(state['toolbar_msg'])
                        res_html = f'<style fg="{c_warning}">⚙️ {msg}</style> | ' + base_html
                    else:
                        state['toolbar_msg'] = ''

                # Pad with spaces to ensure the line is cleared when the message disappears
                return HTML(res_html + " " * 20)

            from prompt_toolkit.completion import Completer, Completion
            class SlashCommandCompleter(Completer):
                def get_completions(self, document, complete_event):
                    text = document.text_before_cursor
                    if text.startswith('/'):
                        parts = text.split()
                        # Only autocomplete the first word
                        if len(parts) <= 1 or (len(parts) == 1 and not text.endswith(' ')):
                            cmd_part = parts[0] if parts else text
                            commands = [
                                ('/os', 'Set device OS (e.g. cisco_ios)'),
                                ('/prompt', 'Override prompt regex'),
                                ('/architect', 'Switch to Architect persona'),
                                ('/engineer', 'Switch to Engineer persona'),
                                ('/trust', 'Enable auto-execute'),
                                ('/untrust', 'Disable auto-execute'),
                                ('/memorize', 'Add fact to memory'),
                                ('/clear', 'Clear memory')
                            ]
                            for cmd, desc in commands:
                                if cmd.startswith(cmd_part.lower()):
                                    yield Completion(cmd, start_position=-len(cmd_part), display_meta=desc)

            copilot_completer = SlashCommandCompleter()

            while True:
                # 2. Ask question
                from prompt_toolkit.styles import Style
                c_contrast = self._get_theme_color("contrast", "gray")
                ui_style = Style.from_dict({
                    'bottom-toolbar': f'fg:{c_contrast}',
                })

                session = PromptSession(
                    history=self.history,
                    input=self.pt_input,
                    output=self.pt_output,
                    completer=copilot_completer,
                    reserve_space_for_menu=0,
                    style=ui_style
                )
                try:
                    # Use an inner try/except to ensure that if something fails in prompt_async,
                    # we are not left with the terminal in a strange state.
                    question = await session.prompt_async(
                        get_prompt_text,
                        key_bindings=bindings,
                        bottom_toolbar=get_toolbar
                    )
                except (KeyboardInterrupt, EOFError):
                    state['cancelled'] = True
                    question = ""

                if state['cancelled'] or not question.strip() or question.strip().lower() in ['cancel', 'exit', 'quit']:
                    return "cancel", None, None

                # 3. Process input via AIService
                directive = self.ai_service.process_copilot_input(question, self.session_state)

                if directive["action"] == "state_update":
                    state['toolbar_msg'] = directive['message']
                    state['msg_expiry'] = time.time() + 3  # 3 seconds timeout

                    async def delayed_refresh():
                        await asyncio.sleep(3.1)
                        # Only invalidate if the message hasn't been replaced by a newer one
                        if state.get('toolbar_msg') == directive['message']:
                            state['toolbar_msg'] = ''  # Explicitly clear
                            try:
                                from prompt_toolkit.application.current import get_app
                                app = get_app()
                                if app: app.invalidate()
                            except Exception: pass
                    asyncio.create_task(delayed_refresh())

                    # Move the cursor up and clear the line so the new prompt replaces the previous one
                    sys.stdout.write('\x1b[1A\x1b[2K')
                    sys.stdout.flush()
                    continue
                else:
                    # Clear the toolbar message when a real question is asked
                    state['toolbar_msg'] = ''

                clean_question = directive.get("clean_prompt", question)
                overrides = directive.get("overrides", {})

                # Merge node_info with session_state and overrides
                merged_node_info = node_info.copy()
                if self.session_state['os']: merged_node_info['os'] = self.session_state['os']
                if self.session_state['prompt']: merged_node_info['prompt'] = self.session_state['prompt']
                merged_node_info['persona'] = self.session_state['persona']
                merged_node_info['trust'] = self.session_state['trust_mode']
                merged_node_info['memories'] = list(self.session_state['memories'])

                for k, v in overrides.items():
                    merged_node_info[k] = v

                # Enrich question
                past = self.history.get_strings()
                if len(past) > 1:
                    clean_past = [q for q in past[-6:-1] if not q.startswith('/')]
                    if clean_past:
                        history_text = "\n".join(f"- {q}" for q in clean_past)
                        clean_question = f"Previous questions:\n{history_text}\n\nCurrent Question:\n{clean_question}"

                # 4. AI execution
                # Use persona from overrides (one-shot) or from session state
                active_persona = merged_node_info.get('persona', self.session_state.get('persona', 'engineer'))
                persona_color = self._get_theme_color(active_persona, fallback="cyan")

                active_buffer = get_active_buffer()
                live_text = "Thinking..."
                panel = Panel(live_text, title=f"[bold {persona_color}]Copilot Guide[/bold {persona_color}]", border_style=persona_color)

                def on_chunk(text):
                    nonlocal live_text
                    if live_text == "Thinking...": live_text = ""
                    live_text += text

                with Live(panel, console=self.console, refresh_per_second=10) as live:
                    def update_live(t):
                        live.update(Panel(Markdown(t), title=f"[bold {persona_color}]Copilot Guide[/bold {persona_color}]", border_style=persona_color))

                    wrapped_chunk = lambda t: (on_chunk(t), update_live(live_text))

                    # Check for interruption during AI call
                    ai_task = asyncio.create_task(on_ai_call(active_buffer, clean_question, wrapped_chunk, merged_node_info))

                    try:
                        while not ai_task.done():
                            await asyncio.sleep(0.05)
                        result = await ai_task
                    except asyncio.CancelledError:
                        return "cancel", None, None

                if not result or result.get("error"):
                    if result and result.get("error"): self.console.print(f"[red]Error: {result['error']}[/red]")
                    return "cancel", None, None

                # 5. Handle result
                if live_text == "Thinking..." and result.get("guide"):
                    self.console.print(Panel(Markdown(result["guide"]), title=f"[bold {persona_color}]Copilot Guide[/bold {persona_color}]", border_style=persona_color))

                commands = result.get("commands", [])
                if not commands:
                    self.console.print("")
                    return "continue", None, None

                risk = result.get("risk_level", "low")
                risk_style = {"low": "success", "high": "warning", "destructive": "error"}.get(risk, "success")
                style_color = self._get_theme_color(risk_style, fallback="green")

                cmd_text = "\n".join(f"  {i+1}. {c}" for i, c in enumerate(commands))
                # Explicitly use 'bold style_color' for both TITLE and BORDER to ensure maximum consistency
                self.console.print(Panel(cmd_text, title=f"[bold {style_color}]Suggested Commands [{risk.upper()}][/bold {style_color}]", border_style=f"bold {style_color}"))

                if merged_node_info.get('trust', False) and risk != "destructive":
                    self.console.print("[dim]⚙️ Auto-executing (Trust Mode)[/dim]")
                    return "send_all", commands, None

                confirm_session = PromptSession(input=self.pt_input, output=self.pt_output)
                c_bindings = KeyBindings()
                @c_bindings.add('escape', eager=True)
                @c_bindings.add('c-c')
                def _(ev): ev.app.exit(result='n')

                import html
                try:
                    p_text = html.escape("Send? (y/n/e/range) [n]: ")
                    # Use the EXACT same style_color and force bold="true" for prompt_toolkit
                    action = await confirm_session.prompt_async(HTML(f'<style fg="{style_color}" bold="true">{p_text}</style>'), key_bindings=c_bindings)
                except (KeyboardInterrupt, EOFError):
                    self.console.print("")
                    return "continue", None, None

                def parse_indices(text, max_len):
                    """Helper to parse '1-3, 5, 7' into [0, 1, 2, 4, 6]."""
                    indices = []
                    # Replace commas with spaces and split
                    parts = text.replace(',', ' ').split()
                    for part in parts:
                        if '-' in part:
                            try:
                                start, end = map(int, part.split('-'))
                                # Ensure inclusive and 0-indexed
                                indices.extend(range(start-1, end))
                            except ValueError: continue
                        elif part.isdigit():
                            indices.append(int(part)-1)
                    # Filter valid indices and remove duplicates
                    return [i for i in sorted(set(indices)) if 0 <= i < max_len]

                action_l = (action or "n").lower().strip()
                if action_l in ('y', 'yes', 'all'):
                    return "send_all", commands, None

                # Check for numeric selection (e.g., "1, 2-4")
                if re.match(r'^[0-9,\-\s]+$', action_l):
                    selected_idxs = parse_indices(action_l, len(commands))
                    if selected_idxs:
                        return "send_all", [commands[i] for i in selected_idxs], None

                elif action_l.startswith('e'):
                    # Check if it's a selective edit like 'e1-2'
                    selection_str = action_l[1:].strip()
                    if selection_str:
                        idxs = parse_indices(selection_str, len(commands))
                        cmds_to_edit = [commands[i] for i in idxs] if idxs else commands
                    else:
                        cmds_to_edit = commands

                    target = "\n".join(cmds_to_edit)
                    e_bindings = KeyBindings()
                    @e_bindings.add('c-j')
                    def _(ev): ev.app.exit(result=ev.app.current_buffer.text)
                    @e_bindings.add('escape', 'enter')
                    def _(ev): ev.app.exit(result=ev.app.current_buffer.text)
                    @e_bindings.add('escape')
                    def _(ev): ev.app.exit(result='')

                    c_edit = self._get_theme_color("user_prompt", "cyan")
                    import html
                    e_text = html.escape("Edit (Ctrl+Enter or Esc+Enter to submit):\n")
                    try:
                        edited = await confirm_session.prompt_async(
                            HTML(f'<style fg="{c_edit}">{e_text}</style>'),
                            default=target, multiline=True, key_bindings=e_bindings
                        )
                    except (KeyboardInterrupt, EOFError):
                        self.console.print("")
                        return "continue", None, None

                    if edited and edited.strip():
                        # Split by lines to ensure core.py applies delay between each command
                        lines = [l.strip() for l in edited.split('\n') if l.strip()]
                        return "custom", None, lines

                self.console.print("")
                return "continue", None, None

            return "cancel", None, None

        finally:
            state['cancelled'] = True
            self.console.print("[dim]Returning to session...[/dim]")</code></pre>
</details>
<div class="desc"></div>
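<p>The confirmation prompt in <code>run_session</code> accepts <code>y</code>, <code>e</code>, or a numeric selection such as <code>1, 2-4</code>. The selection syntax handled by the inner <code>parse_indices</code> helper can be exercised as a standalone sketch:</p>

```python
# Standalone sketch of the command-selection parsing used inside
# run_session's confirmation prompt: "1-3, 5" -> zero-based indices.
def parse_indices(text: str, max_len: int) -> list:
    """Parse '1-3, 5, 7' into [0, 1, 2, 4, 6]."""
    indices = []
    # Treat commas and whitespace interchangeably as separators
    for part in text.replace(',', ' ').split():
        if '-' in part:
            try:
                start, end = map(int, part.split('-'))
                indices.extend(range(start - 1, end))  # inclusive range, 0-indexed
            except ValueError:
                continue  # skip malformed ranges like 'a-b'
        elif part.isdigit():
            indices.append(int(part) - 1)
    # Drop duplicates and out-of-range entries
    return [i for i in sorted(set(indices)) if 0 <= i < max_len]

print(parse_indices("1-3, 5", 5))  # → [0, 1, 2, 4]
```

<p>Indices past <code>max_len</code> are silently dropped, so a selection like <code>7</code> against three suggested commands simply yields an empty list and the prompt falls through to "continue".</p>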
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.terminal_ui.CopilotInterface.run_session"><code class="name flex">
<span>async def <span class="ident">run_session</span></span>(<span>self,<br>raw_bytes: bytes,<br>cmd_byte_positions: List[tuple],<br>node_info: dict,<br>on_ai_call: Callable)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">async def run_session(self,
                      raw_bytes: bytes,
                      cmd_byte_positions: List[tuple],
                      node_info: dict,
                      on_ai_call: Callable):
    """
    Runs the interactive Copilot session.
    on_ai_call: async function(active_buffer, question) -> result_dict
    """
    from rich.rule import Rule

    try:
        # Prepare UI state
        buffer = log_cleaner(raw_bytes.decode(errors='replace'))
        blocks = self.ai_service.build_context_blocks(raw_bytes, cmd_byte_positions, node_info)
        last_line = buffer.split('\n')[-1].strip() if buffer.strip() else "(prompt)"
        blocks.append((len(raw_bytes), last_line[:80]))

        state = {
            'context_cmd': 1,
            'total_cmds': len(blocks),
            'total_lines': len(buffer.split('\n')),
            'context_lines': min(50, len(buffer.split('\n'))),
            'context_mode': self.mode_range,
            'cancelled': False,
            'toolbar_msg': '',
            'msg_expiry': 0
        }

        # 1. Visual separation
        self.console.print("")  # Actual line break
        self.console.print(Rule(title="[bold cyan] AI TERMINAL COPILOT [/bold cyan]", style="cyan"))
        self.console.print(Panel(
            "[dim]Type your question. Enter to send, Escape/Ctrl+C to cancel.\n"
            "Tab to change context mode. Ctrl+\u2191/\u2193 to adjust context. \u2191\u2193 for question history.[/dim]",
            border_style="cyan"
        ))
        self.console.print("\n")  # Small gap before the copilot prompt

        bindings = KeyBindings()
        @bindings.add('c-up')
        def _(event):
            if state['context_mode'] == self.mode_lines:
                state['context_lines'] = min(state['context_lines'] + 50, state['total_lines'])
            else:
                state['context_cmd'] = min(state['context_cmd'] + 1, state['total_cmds'])
            event.app.invalidate()
        @bindings.add('c-down')
        def _(event):
            if state['context_mode'] == self.mode_lines:
                state['context_lines'] = max(state['context_lines'] - 50, min(50, state['total_lines']))
            else:
                state['context_cmd'] = max(state['context_cmd'] - 1, 1)
            event.app.invalidate()
        @bindings.add('tab')
        def _(event):
            buf = event.current_buffer
            # If typing a slash command (no spaces yet), use tab to autocomplete inline
            if buf.text.startswith('/') and ' ' not in buf.text:
                buf.complete_next()
            else:
                state['context_mode'] = (state['context_mode'] + 1) % 3
            event.app.invalidate()
        @bindings.add('escape', eager=True)
        @bindings.add('c-c')
        def _(event):
            state['cancelled'] = True
            event.app.exit(result='')

        def get_active_buffer():
            if state['context_mode'] == self.mode_lines:
                return '\n'.join(buffer.split('\n')[-state['context_lines']:])
            idx = max(0, state['total_cmds'] - state['context_cmd'])
            start, preview = blocks[idx]
            if state['context_mode'] == self.mode_single and idx + 1 < state['total_cmds']:
                end = blocks[idx + 1][0]
                active_raw = raw_bytes[start:end]
            else:
                active_raw = raw_bytes[start:]
            return preview + "\n" + log_cleaner(active_raw.decode(errors='replace'))

        def get_prompt_text():
            import html
            # Always use user_prompt color for the Ask prompt
            color = self._get_theme_color("user_prompt", "cyan")

            if state['context_mode'] == self.mode_lines:
                text = html.escape(f"Ask [Ctx: {state['context_lines']}/{state['total_lines']}L]: ")
                return HTML(f'<style fg="{color}">{text}</style>')
            active = get_active_buffer()
            lines_count = len(active.split('\n'))
            mode_str = {self.mode_range: "Range", self.mode_single: "Cmd"}[state['context_mode']]
            text = html.escape(f"Ask [{mode_str} {state['context_cmd']} ~{lines_count}L]: ")
            return HTML(f'<style fg="{color}">{text}</style>')

        from prompt_toolkit.application.current import get_app

        def get_toolbar():
            import html
            app = get_app()
            c_warning = self._get_theme_color("warning", "yellow")

            if app and app.current_buffer:
                text = app.current_buffer.text
                # Only show command help while typing the first word and there are no spaces yet
                if text.startswith('/') and ' ' not in text:
                    commands = ['/os', '/prompt', '/architect', '/engineer', '/trust', '/untrust', '/memorize', '/clear']
                    matches = [c for c in commands if c.startswith(text.lower())]
                    if matches:
                        m_text = html.escape(f"Available: {' '.join(matches)}")
                        return HTML(f'<style fg="{c_warning}">{m_text}</style>' + " " * 20)

            m_label = {self.mode_range: "RANGE", self.mode_single: "SINGLE", self.mode_lines: "LINES"}[state['context_mode']]
            if state['context_mode'] == self.mode_lines:
                base_str = f'\u25b6 Ctrl+\u2191/\u2193 adjusts by 50 lines [Tab: {m_label}]'
            else:
                idx = max(0, state['total_cmds'] - state['context_cmd'])
                desc = blocks[idx][1]
                base_str = f'\u25b6 {desc} [Tab: {m_label}]'

            # Wrap base_str in a style to maintain consistency and avoid glitches
            # The fg color will be inherited from bottom-toolbar global style if not specified here
            base_html = f'<span>{html.escape(base_str)}</span>'

            res_html = base_html
            if state.get('toolbar_msg'):
                if time.time() < state.get('msg_expiry', 0):
                    msg = html.escape(state['toolbar_msg'])
                    res_html = f'<style fg="{c_warning}">⚙️ {msg}</style> | ' + base_html
                else:
                    state['toolbar_msg'] = ''

            # Pad with spaces to ensure the line is cleared when the message disappears
            return HTML(res_html + " " * 20)

        from prompt_toolkit.completion import Completer, Completion
        class SlashCommandCompleter(Completer):
            def get_completions(self, document, complete_event):
                text = document.text_before_cursor
                if text.startswith('/'):
                    parts = text.split()
                    # Only autocomplete the first word
                    if len(parts) <= 1 or (len(parts) == 1 and not text.endswith(' ')):
                        cmd_part = parts[0] if parts else text
                        commands = [
                            ('/os', 'Set device OS (e.g. cisco_ios)'),
                            ('/prompt', 'Override prompt regex'),
                            ('/architect', 'Switch to Architect persona'),
                            ('/engineer', 'Switch to Engineer persona'),
                            ('/trust', 'Enable auto-execute'),
                            ('/untrust', 'Disable auto-execute'),
                            ('/memorize', 'Add fact to memory'),
                            ('/clear', 'Clear memory')
                        ]
                        for cmd, desc in commands:
                            if cmd.startswith(cmd_part.lower()):
                                yield Completion(cmd, start_position=-len(cmd_part), display_meta=desc)

        copilot_completer = SlashCommandCompleter()

        while True:
            # 2. Ask question
            from prompt_toolkit.styles import Style
            c_contrast = self._get_theme_color("contrast", "gray")
            ui_style = Style.from_dict({
                'bottom-toolbar': f'fg:{c_contrast}',
            })

            session = PromptSession(
                history=self.history,
                input=self.pt_input,
                output=self.pt_output,
                completer=copilot_completer,
                reserve_space_for_menu=0,
                style=ui_style
            )
            try:
                # Use an inner try/except to ensure that if something fails in prompt_async,
                # we are not left with the terminal in a strange state.
                question = await session.prompt_async(
                    get_prompt_text,
                    key_bindings=bindings,
                    bottom_toolbar=get_toolbar
                )
            except (KeyboardInterrupt, EOFError):
                state['cancelled'] = True
                question = ""

            if state['cancelled'] or not question.strip() or question.strip().lower() in ['cancel', 'exit', 'quit']:
                return "cancel", None, None

            # 3. Process input via AIService
            directive = self.ai_service.process_copilot_input(question, self.session_state)

            if directive["action"] == "state_update":
                state['toolbar_msg'] = directive['message']
                state['msg_expiry'] = time.time() + 3  # 3 seconds timeout

                async def delayed_refresh():
                    await asyncio.sleep(3.1)
                    # Only invalidate if the message hasn't been replaced by a newer one
                    if state.get('toolbar_msg') == directive['message']:
                        state['toolbar_msg'] = ''  # Explicitly clear
                        try:
                            from prompt_toolkit.application.current import get_app
                            app = get_app()
                            if app: app.invalidate()
                        except Exception: pass
                asyncio.create_task(delayed_refresh())

                # Move the cursor up and clear the line so the new prompt replaces the previous one
                sys.stdout.write('\x1b[1A\x1b[2K')
                sys.stdout.flush()
                continue
            else:
                # Clear the toolbar message when a real question is asked
                state['toolbar_msg'] = ''

            clean_question = directive.get("clean_prompt", question)
            overrides = directive.get("overrides", {})

            # Merge node_info with session_state and overrides
            merged_node_info = node_info.copy()
            if self.session_state['os']: merged_node_info['os'] = self.session_state['os']
            if self.session_state['prompt']: merged_node_info['prompt'] = self.session_state['prompt']
            merged_node_info['persona'] = self.session_state['persona']
            merged_node_info['trust'] = self.session_state['trust_mode']
            merged_node_info['memories'] = list(self.session_state['memories'])

            for k, v in overrides.items():
                merged_node_info[k] = v

            # Enrich question
            past = self.history.get_strings()
            if len(past) > 1:
                clean_past = [q for q in past[-6:-1] if not q.startswith('/')]
                if clean_past:
                    history_text = "\n".join(f"- {q}" for q in clean_past)
                    clean_question = f"Previous questions:\n{history_text}\n\nCurrent Question:\n{clean_question}"

            # 4. AI execution
            # Use persona from overrides (one-shot) or from session state
            active_persona = merged_node_info.get('persona', self.session_state.get('persona', 'engineer'))
            persona_color = self._get_theme_color(active_persona, fallback="cyan")

            active_buffer = get_active_buffer()
            live_text = "Thinking..."
            panel = Panel(live_text, title=f"[bold {persona_color}]Copilot Guide[/bold {persona_color}]", border_style=persona_color)

            def on_chunk(text):
                nonlocal live_text
                if live_text == "Thinking...": live_text = ""
                live_text += text

            with Live(panel, console=self.console, refresh_per_second=10) as live:
                def update_live(t):
                    live.update(Panel(Markdown(t), title=f"[bold {persona_color}]Copilot Guide[/bold {persona_color}]", border_style=persona_color))

                wrapped_chunk = lambda t: (on_chunk(t), update_live(live_text))

                # Check for interruption during AI call
                ai_task = asyncio.create_task(on_ai_call(active_buffer, clean_question, wrapped_chunk, merged_node_info))

                try:
                    while not ai_task.done():
                        await asyncio.sleep(0.05)
                    result = await ai_task
                except asyncio.CancelledError:
                    return "cancel", None, None

            if not result or result.get("error"):
                if result and result.get("error"): self.console.print(f"[red]Error: {result['error']}[/red]")
                return "cancel", None, None

            # 5. Handle result
            if live_text == "Thinking..." and result.get("guide"):
                self.console.print(Panel(Markdown(result["guide"]), title=f"[bold {persona_color}]Copilot Guide[/bold {persona_color}]", border_style=persona_color))

            commands = result.get("commands", [])
            if not commands:
                self.console.print("")
                return "continue", None, None

            risk = result.get("risk_level", "low")
            risk_style = {"low": "success", "high": "warning", "destructive": "error"}.get(risk, "success")
            style_color = self._get_theme_color(risk_style, fallback="green")

            cmd_text = "\n".join(f"  {i+1}. {c}" for i, c in enumerate(commands))
            # Explicitly use 'bold style_color' for both TITLE and BORDER to ensure maximum consistency
            self.console.print(Panel(cmd_text, title=f"[bold {style_color}]Suggested Commands [{risk.upper()}][/bold {style_color}]", border_style=f"bold {style_color}"))

            if merged_node_info.get('trust', False) and risk != "destructive":
                self.console.print("[dim]⚙️ Auto-executing (Trust Mode)[/dim]")
                return "send_all", commands, None

            confirm_session = PromptSession(input=self.pt_input, output=self.pt_output)
            c_bindings = KeyBindings()
            @c_bindings.add('escape', eager=True)
            @c_bindings.add('c-c')
            def _(ev): ev.app.exit(result='n')

            import html
            try:
                p_text = html.escape("Send? (y/n/e/range) [n]: ")
                # Use the EXACT same style_color and force bold="true" for prompt_toolkit
                action = await confirm_session.prompt_async(HTML(f'<style fg="{style_color}" bold="true">{p_text}</style>'), key_bindings=c_bindings)
            except (KeyboardInterrupt, EOFError):
                self.console.print("")
                return "continue", None, None

            def parse_indices(text, max_len):
                """Helper to parse '1-3, 5, 7' into [0, 1, 2, 4, 6]."""
                indices = []
                # Replace commas with spaces and split
                parts = text.replace(',', ' ').split()
                for part in parts:
                    if '-' in part:
                        try:
                            start, end = map(int, part.split('-'))
                            # Ensure inclusive and 0-indexed
                            indices.extend(range(start-1, end))
                        except ValueError: continue
                    elif part.isdigit():
                        indices.append(int(part)-1)
                # Filter valid indices and remove duplicates
                return [i for i in sorted(set(indices)) if 0 <= i < max_len]

            action_l = (action or "n").lower().strip()
            if action_l in ('y', 'yes', 'all'):
                return "send_all", commands, None

            # Check for numeric selection (e.g., "1, 2-4")
            if re.match(r'^[0-9,\-\s]+$', action_l):
                selected_idxs = parse_indices(action_l, len(commands))
                if selected_idxs:
                    return "send_all", [commands[i] for i in selected_idxs], None

            elif action_l.startswith('e'):
                # Check if it's a selective edit like 'e1-2'
                selection_str = action_l[1:].strip()
                if selection_str:
                    idxs = parse_indices(selection_str, len(commands))
                    cmds_to_edit = [commands[i] for i in idxs] if idxs else commands
                else:
                    cmds_to_edit = commands

                target = "\n".join(cmds_to_edit)
                e_bindings = KeyBindings()
                @e_bindings.add('c-j')
                def _(ev): ev.app.exit(result=ev.app.current_buffer.text)
                @e_bindings.add('escape', 'enter')
                def _(ev): ev.app.exit(result=ev.app.current_buffer.text)
                @e_bindings.add('escape')
                def _(ev): ev.app.exit(result='')

                c_edit = self._get_theme_color("user_prompt", "cyan")
                import html
                e_text = html.escape("Edit (Ctrl+Enter or Esc+Enter to submit):\n")
                try:
                    edited = await confirm_session.prompt_async(
                        HTML(f'<style fg="{c_edit}">{e_text}</style>'),
                        default=target, multiline=True, key_bindings=e_bindings
                    )
                except (KeyboardInterrupt, EOFError):
                    self.console.print("")
                    return "continue", None, None

                if edited and edited.strip():
                    # Split by lines to ensure core.py applies delay between each command
                    lines = [l.strip() for l in edited.split('\n') if l.strip()]
|
||||
return "custom", None, lines
|
||||
|
||||
self.console.print("")
|
||||
return "continue", None, None
|
||||
|
||||
return "cancel", None, None
|
||||
|
||||
finally:
|
||||
state['cancelled'] = True
|
||||
self.console.print("[dim]Returning to session...[/dim]")</code></pre>
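The `parse_indices` helper in the source above accepts the same range grammar a user may type at the confirmation prompt ("1-3, 5, 7" and the like). A standalone sketch of the same logic, restated here only for experimentation outside the session:

```python
def parse_indices(text, max_len):
    """Parse a selection such as '1-3, 5, 7' into 0-based indices [0, 1, 2, 4, 6]."""
    indices = []
    # Commas and whitespace are interchangeable separators
    for part in text.replace(',', ' ').split():
        if '-' in part:
            try:
                start, end = map(int, part.split('-'))
            except ValueError:
                continue  # malformed ranges such as '3-' are skipped
            indices.extend(range(start - 1, end))  # inclusive, 0-indexed
        elif part.isdigit():
            indices.append(int(part) - 1)
    # Drop duplicates and out-of-range entries
    return [i for i in sorted(set(indices)) if 0 <= i < max_len]
```

Note that `max_len` silently clips out-of-range picks: `parse_indices("1-3, 5", 4)` returns `[0, 1, 2]` because index 4 is filtered out.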
</details>
<div class="desc"><p>Runs the interactive Copilot session.
on_ai_call: async function(active_buffer, question) -> result_dict</p></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.terminal_ui.CopilotInterface" href="#connpy.cli.terminal_ui.CopilotInterface">CopilotInterface</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.terminal_ui.CopilotInterface.run_session" href="#connpy.cli.terminal_ui.CopilotInterface.run_session">run_session</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
@@ -1,514 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1">
<meta name="generator" content="pdoc3 0.11.5">
<title>connpy.cli.validators API documentation</title>
<meta name="description" content="">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/sanitize.min.css" integrity="sha512-y1dtMcuvtTMJc1yPgEqF0ZjQbhnc/bFhyvIyVNb9Zk5mIGtqVaAB1Ttl28su8AvFMOY0EwRbAe+HCLqj6W7/KA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/10up-sanitize.css/13.0.0/typography.min.css" integrity="sha512-Y1DYSb995BAfxobCkKepB1BqJJTPrOp3zPL74AWFugHHmmdcvO+C48WLrUOlhGMc0QG7AE3f7gmvvcrmX2fDoA==" crossorigin>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css" crossorigin>
<style>:root{--highlight-color:#fe9}.flex{display:flex !important}body{line-height:1.5em}#content{padding:20px}#sidebar{padding:1.5em;overflow:hidden}#sidebar > *:last-child{margin-bottom:2cm}.http-server-breadcrumbs{font-size:130%;margin:0 0 15px 0}#footer{font-size:.75em;padding:5px 30px;border-top:1px solid #ddd;text-align:right}#footer p{margin:0 0 0 1em;display:inline-block}#footer p:last-child{margin-right:30px}h1,h2,h3,h4,h5{font-weight:300}h1{font-size:2.5em;line-height:1.1em}h2{font-size:1.75em;margin:2em 0 .50em 0}h3{font-size:1.4em;margin:1.6em 0 .7em 0}h4{margin:0;font-size:105%}h1:target,h2:target,h3:target,h4:target,h5:target,h6:target{background:var(--highlight-color);padding:.2em 0}a{color:#058;text-decoration:none;transition:color .2s ease-in-out}a:visited{color:#503}a:hover{color:#b62}.title code{font-weight:bold}h2[id^="header-"]{margin-top:2em}.ident{color:#900;font-weight:bold}pre code{font-size:.8em;line-height:1.4em;padding:1em;display:block}code{background:#f3f3f3;font-family:"DejaVu Sans Mono",monospace;padding:1px 4px;overflow-wrap:break-word}h1 code{background:transparent}pre{border-top:1px solid #ccc;border-bottom:1px solid #ccc;margin:1em 0}#http-server-module-list{display:flex;flex-flow:column}#http-server-module-list div{display:flex}#http-server-module-list dt{min-width:10%}#http-server-module-list p{margin-top:0}.toc ul,#index{list-style-type:none;margin:0;padding:0}#index code{background:transparent}#index h3{border-bottom:1px solid #ddd}#index ul{padding:0}#index h4{margin-top:.6em;font-weight:bold}@media (min-width:200ex){#index .two-column{column-count:2}}@media (min-width:300ex){#index .two-column{column-count:3}}dl{margin-bottom:2em}dl dl:last-child{margin-bottom:4em}dd{margin:0 0 1em 3em}#header-classes + dl > dd{margin-bottom:3em}dd dd{margin-left:2em}dd p{margin:10px 0}.name{background:#eee;font-size:.85em;padding:5px 10px;display:inline-block;min-width:40%}.name:hover{background:#e0e0e0}dt:target .name{background:var(--highlight-color)}.name > span:first-child{white-space:nowrap}.name.class > span:nth-child(2){margin-left:.4em}.inherited{color:#999;border-left:5px solid #eee;padding-left:1em}.inheritance em{font-style:normal;font-weight:bold}.desc h2{font-weight:400;font-size:1.25em}.desc h3{font-size:1em}.desc dt code{background:inherit}.source > summary,.git-link-div{color:#666;text-align:right;font-weight:400;font-size:.8em;text-transform:uppercase}.source summary > *{white-space:nowrap;cursor:pointer}.git-link{color:inherit;margin-left:1em}.source pre{max-height:500px;overflow:auto;margin:0}.source pre code{font-size:12px;overflow:visible;min-width:max-content}.hlist{list-style:none}.hlist li{display:inline}.hlist li:after{content:',\2002'}.hlist li:last-child:after{content:none}.hlist .hlist{display:inline;padding-left:1em}img{max-width:100%}td{padding:0 .5em}.admonition{padding:.1em 1em;margin:1em 0}.admonition-title{font-weight:bold}.admonition.note,.admonition.info,.admonition.important{background:#aef}.admonition.todo,.admonition.versionadded,.admonition.tip,.admonition.hint{background:#dfd}.admonition.warning,.admonition.versionchanged,.admonition.deprecated{background:#fd4}.admonition.error,.admonition.danger,.admonition.caution{background:lightpink}</style>
<style media="screen and (min-width: 700px)">@media screen and (min-width:700px){#sidebar{width:30%;height:100vh;overflow:auto;position:sticky;top:0}#content{width:70%;max-width:100ch;padding:3em 4em;border-left:1px solid #ddd}pre code{font-size:1em}.name{font-size:1em}main{display:flex;flex-direction:row-reverse;justify-content:flex-end}.toc ul ul,#index ul ul{padding-left:1em}.toc > ul > li{margin-top:.5em}}</style>
<style media="print">@media print{#sidebar h1{page-break-before:always}.source{display:none}}@media print{*{background:transparent !important;color:#000 !important;box-shadow:none !important;text-shadow:none !important}a[href]:after{content:" (" attr(href) ")";font-size:90%}a[href][title]:after{content:none}abbr[title]:after{content:" (" attr(title) ")"}.ir a:after,a[href^="javascript:"]:after,a[href^="#"]:after{content:""}pre,blockquote{border:1px solid #999;page-break-inside:avoid}thead{display:table-header-group}tr,img{page-break-inside:avoid}img{max-width:100% !important}@page{margin:0.5cm}p,h2,h3{orphans:3;widows:3}h1,h2,h3,h4,h5,h6{page-break-after:avoid}}</style>
<script defer src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js" integrity="sha512-D9gUyxqja7hBtkWpPWGt9wfbfaMGVt9gnyCvYa+jojwwPHLCzUm5i8rpk7vD7wNee9bA35eYIjobYPaQuKS1MQ==" crossorigin></script>
<script>window.addEventListener('DOMContentLoaded', () => {
hljs.configure({languages: ['bash', 'css', 'diff', 'graphql', 'ini', 'javascript', 'json', 'plaintext', 'python', 'python-repl', 'rust', 'shell', 'sql', 'typescript', 'xml', 'yaml']});
hljs.highlightAll();
/* Collapse source docstrings */
setTimeout(() => {
[...document.querySelectorAll('.hljs.language-python > .hljs-string')]
.filter(el => el.innerHTML.length > 200 && ['"""', "'''"].includes(el.innerHTML.substring(0, 3)))
.forEach(el => {
let d = document.createElement('details');
d.classList.add('hljs-string');
d.innerHTML = '<summary>"""</summary>' + el.innerHTML.substring(3);
el.replaceWith(d);
});
}, 100);
})</script>
</head>
<body>
<main>
<article id="content">
<header>
<h1 class="title">Module <code>connpy.cli.validators</code></h1>
</header>
<section id="section-intro">
</section>
<section>
</section>
<section>
</section>
<section>
</section>
<section>
<h2 class="section-title" id="header-classes">Classes</h2>
<dl>
<dt id="connpy.cli.validators.Validators"><code class="flex name class">
<span>class <span class="ident">Validators</span></span>
<span>(</span><span>app)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">class Validators:
    def __init__(self, app):
        self.app = app

    def host_validation(self, answers, current, regex="^.+$"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def profile_protocol_validation(self, answers, current, regex="(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm or leave empty")
        return True

    def protocol_validation(self, answers, current, regex="(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$|^@.+$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm, leave empty or @profile")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def profile_port_validation(self, answers, current, regex="(^[0-9]*$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
        try:
            port = int(current)
        except ValueError:
            port = 0
        if current != "" and not 1 <= int(port) <= 65535:
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
        return True

    def port_validation(self, answers, current, regex="(^[0-9]*$|^@.+$)"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
        try:
            port = int(current)
        except ValueError:
            port = 0
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        elif current != "" and not 1 <= int(port) <= 65535:
            raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
        return True

    def pass_validation(self, answers, current, regex="(^@.+$)"):
        profiles = current.split(",")
        for i in profiles:
            if not re.match(regex, i) or i[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(i))
        return True

    def tags_validation(self, answers, current):
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        elif current != "":
            isdict = False
            try:
                isdict = ast.literal_eval(current)
            except Exception:
                pass
            if not isinstance(isdict, dict):
                raise inquirer.errors.ValidationError("", reason="Tags should be a Python dictionary.")
        return True

    def profile_tags_validation(self, answers, current):
        if current != "":
            isdict = False
            try:
                isdict = ast.literal_eval(current)
            except Exception:
                pass
            if not isinstance(isdict, dict):
                raise inquirer.errors.ValidationError("", reason="Tags should be a Python dictionary.")
        return True

    def jumphost_validation(self, answers, current):
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        elif current != "":
            if current not in self.app.nodes_list:
                raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
        return True

    def profile_jumphost_validation(self, answers, current):
        if current != "":
            if current not in self.app.nodes_list:
                raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
        return True

    def default_validation(self, answers, current):
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def bulk_node_validation(self, answers, current, regex="^[0-9a-zA-Z_.,$#-]+$"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        return True

    def bulk_folder_validation(self, answers, current):
        if not self.app.case:
            current = current.lower()

        candidate = current
        if "/" in current:
            candidate = current.split("/")[0]

        matches = list(filter(lambda k: k == candidate, self.app.folders))
        if current != "" and len(matches) == 0:
            raise inquirer.errors.ValidationError("", reason="Location {} doesn't exist".format(current))
        return True

    def bulk_host_validation(self, answers, current, regex="^.+$"):
        if not re.match(regex, current):
            raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
        if current.startswith("@"):
            if current[1:] not in self.app.profiles:
                raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
        hosts = current.split(",")
        nodes = answers["ids"].split(",")
        if len(hosts) > 1 and len(hosts) != len(nodes):
            raise inquirer.errors.ValidationError("", reason="Hosts list should be the same length as the nodes list")
        return True</code></pre>
</details>
<div class="desc"></div>
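As a rough standalone illustration of the pattern these validators share (the function name and the plain `ValueError` are substitutions for this sketch; the real methods raise `inquirer.errors.ValidationError` and take an `answers` dict), the port rule can be exercised without an inquirer prompt:

```python
def validate_port(current, profiles):
    """Sketch mirroring Validators.port_validation: accepts '', '@profile',
    or a numeric port in 1-65535. Raises ValueError on anything else."""
    if current.startswith("@"):
        if current[1:] not in profiles:
            raise ValueError("Profile {} doesn't exist".format(current))
        return True
    if current != "" and not (current.isdigit() and 1 <= int(current) <= 65535):
        raise ValueError("Pick a port between 1-65535, @profile or leave empty")
    return True
```

Like every method below, it returns `True` on success and raises with a human-readable `reason` otherwise, which is the contract inquirer expects from a `validate` callback.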
<h3>Methods</h3>
<dl>
<dt id="connpy.cli.validators.Validators.bulk_folder_validation"><code class="name flex">
<span>def <span class="ident">bulk_folder_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def bulk_folder_validation(self, answers, current):
    if not self.app.case:
        current = current.lower()

    candidate = current
    if "/" in current:
        candidate = current.split("/")[0]

    matches = list(filter(lambda k: k == candidate, self.app.folders))
    if current != "" and len(matches) == 0:
        raise inquirer.errors.ValidationError("", reason="Location {} doesn't exist".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.bulk_host_validation"><code class="name flex">
<span>def <span class="ident">bulk_host_validation</span></span>(<span>self, answers, current, regex='^.+$')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def bulk_host_validation(self, answers, current, regex="^.+$"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    hosts = current.split(",")
    nodes = answers["ids"].split(",")
    if len(hosts) > 1 and len(hosts) != len(nodes):
        raise inquirer.errors.ValidationError("", reason="Hosts list should be the same length as the nodes list")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.bulk_node_validation"><code class="name flex">
<span>def <span class="ident">bulk_node_validation</span></span>(<span>self, answers, current, regex='^[0-9a-zA-Z_.,$#-]+$')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def bulk_node_validation(self, answers, current, regex="^[0-9a-zA-Z_.,$#-]+$"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.default_validation"><code class="name flex">
<span>def <span class="ident">default_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def default_validation(self, answers, current):
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.host_validation"><code class="name flex">
<span>def <span class="ident">host_validation</span></span>(<span>self, answers, current, regex='^.+$')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def host_validation(self, answers, current, regex="^.+$"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Host cannot be empty")
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.jumphost_validation"><code class="name flex">
<span>def <span class="ident">jumphost_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def jumphost_validation(self, answers, current):
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    elif current != "":
        if current not in self.app.nodes_list:
            raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.pass_validation"><code class="name flex">
<span>def <span class="ident">pass_validation</span></span>(<span>self, answers, current, regex='(^@.+$)')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def pass_validation(self, answers, current, regex="(^@.+$)"):
    profiles = current.split(",")
    for i in profiles:
        if not re.match(regex, i) or i[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(i))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.port_validation"><code class="name flex">
<span>def <span class="ident">port_validation</span></span>(<span>self, answers, current, regex='(^[0-9]*$|^@.+$)')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def port_validation(self, answers, current, regex="(^[0-9]*$|^@.+$)"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
    try:
        port = int(current)
    except ValueError:
        port = 0
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    elif current != "" and not 1 <= int(port) <= 65535:
        raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535, @profile or leave empty")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.profile_jumphost_validation"><code class="name flex">
<span>def <span class="ident">profile_jumphost_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def profile_jumphost_validation(self, answers, current):
    if current != "":
        if current not in self.app.nodes_list:
            raise inquirer.errors.ValidationError("", reason="Node {} doesn't exist.".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.profile_port_validation"><code class="name flex">
<span>def <span class="ident">profile_port_validation</span></span>(<span>self, answers, current, regex='(^[0-9]*$)')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def profile_port_validation(self, answers, current, regex="(^[0-9]*$)"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
    try:
        port = int(current)
    except ValueError:
        port = 0
    if current != "" and not 1 <= int(port) <= 65535:
        raise inquirer.errors.ValidationError("", reason="Pick a port between 1-65535 or leave empty")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.profile_protocol_validation"><code class="name flex">
<span>def <span class="ident">profile_protocol_validation</span></span>(<span>self, answers, current, regex='(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$)')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def profile_protocol_validation(self, answers, current, regex="(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$)"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm or leave empty")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.profile_tags_validation"><code class="name flex">
<span>def <span class="ident">profile_tags_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def profile_tags_validation(self, answers, current):
    if current != "":
        isdict = False
        try:
            isdict = ast.literal_eval(current)
        except Exception:
            pass
        if not isinstance(isdict, dict):
            raise inquirer.errors.ValidationError("", reason="Tags should be a Python dictionary.")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.protocol_validation"><code class="name flex">
<span>def <span class="ident">protocol_validation</span></span>(<span>self,<br>answers,<br>current,<br>regex='(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$|^@.+$)')</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def protocol_validation(self, answers, current, regex="(^ssh$|^telnet$|^kubectl$|^docker$|^ssm$|^$|^@.+$)"):
    if not re.match(regex, current):
        raise inquirer.errors.ValidationError("", reason="Pick between ssh, telnet, kubectl, docker, ssm, leave empty or @profile")
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
<dt id="connpy.cli.validators.Validators.tags_validation"><code class="name flex">
<span>def <span class="ident">tags_validation</span></span>(<span>self, answers, current)</span>
</code></dt>
<dd>
<details class="source">
<summary>
<span>Expand source code</span>
</summary>
<pre><code class="python">def tags_validation(self, answers, current):
    if current.startswith("@"):
        if current[1:] not in self.app.profiles:
            raise inquirer.errors.ValidationError("", reason="Profile {} doesn't exist".format(current))
    elif current != "":
        isdict = False
        try:
            isdict = ast.literal_eval(current)
        except Exception:
            pass
        if not isinstance(isdict, dict):
            raise inquirer.errors.ValidationError("", reason="Tags should be a Python dictionary.")
    return True</code></pre>
</details>
<div class="desc"></div>
</dd>
</dl>
</dd>
</dl>
</section>
</article>
<nav id="sidebar">
<div class="toc">
<ul></ul>
</div>
<ul id="index">
<li><h3>Super-module</h3>
<ul>
<li><code><a title="connpy.cli" href="index.html">connpy.cli</a></code></li>
</ul>
</li>
<li><h3><a href="#header-classes">Classes</a></h3>
<ul>
<li>
<h4><code><a title="connpy.cli.validators.Validators" href="#connpy.cli.validators.Validators">Validators</a></code></h4>
<ul class="">
<li><code><a title="connpy.cli.validators.Validators.bulk_folder_validation" href="#connpy.cli.validators.Validators.bulk_folder_validation">bulk_folder_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.bulk_host_validation" href="#connpy.cli.validators.Validators.bulk_host_validation">bulk_host_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.bulk_node_validation" href="#connpy.cli.validators.Validators.bulk_node_validation">bulk_node_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.default_validation" href="#connpy.cli.validators.Validators.default_validation">default_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.host_validation" href="#connpy.cli.validators.Validators.host_validation">host_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.jumphost_validation" href="#connpy.cli.validators.Validators.jumphost_validation">jumphost_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.pass_validation" href="#connpy.cli.validators.Validators.pass_validation">pass_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.port_validation" href="#connpy.cli.validators.Validators.port_validation">port_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.profile_jumphost_validation" href="#connpy.cli.validators.Validators.profile_jumphost_validation">profile_jumphost_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.profile_port_validation" href="#connpy.cli.validators.Validators.profile_port_validation">profile_port_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.profile_protocol_validation" href="#connpy.cli.validators.Validators.profile_protocol_validation">profile_protocol_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.profile_tags_validation" href="#connpy.cli.validators.Validators.profile_tags_validation">profile_tags_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.protocol_validation" href="#connpy.cli.validators.Validators.protocol_validation">protocol_validation</a></code></li>
<li><code><a title="connpy.cli.validators.Validators.tags_validation" href="#connpy.cli.validators.Validators.tags_validation">tags_validation</a></code></li>
</ul>
</li>
</ul>
</li>
</ul>
</nav>
</main>
<footer id="footer">
<p>Generated by <a href="https://pdoc3.github.io/pdoc" title="pdoc: Python API documentation generator"><cite>pdoc</cite> 0.11.5</a>.</p>
</footer>
</body>
</html>
Some files were not shown because too many files have changed in this diff